I recently attended a very thought-provoking talk by Janco Wolmarans (@jancowol) at the Agile Scrum Gathering Conference (@SUGZA, #SGZA). It really got me thinking.

Janco Wolmarans

I have captured my notes from the talk so that I can internalise what was presented and also to share it with you. My hope is that it can shed new light on how we perceive coding and it can make you look at the activities surrounding software development differently next time.

Since I am an electrical engineer, the idea of going back to this rather technical view of software development as the transfer of information across a channel really appealed to me. Although aspects of the talk appear to be very theoretical, the magic of Janco’s presentation is that it leaves you thinking… “Hmmm, that actually does apply to software development”.

At the end of the blog, I also explore some of my own ideas surrounding code comments and the benefit of double-encoding.


Janco started his talk with a reflection on whether there is something more fundamental to software development. When we don’t understand something, we use a metaphor in its place. “Something is like something else”.

“Metaphors reign where mysteries reside” – Chuck Missler

The problem with metaphors is when they don’t apply.

What is Programming?

  1. A set of instructions for a computer to execute (insert favourite dictionary definition here).
  2. A description of a solution to another programmer disconnected in time. This could be another physical person but it could also be yourself later in time.

What is Information? Is Code Information?

Janco ponders the thought of whether code is information. Information is a polymorphic concept, and its definition is specific to its context.

The general definition might look like this:

  • X consists of one or more data.
  • The data in X is well formed.
  • The data in X is meaningful.

Therefore, we can answer the question: “Yes, code is information”.

  • The data is made up of the characters of the source code.
  • The code is well formed and complete because of the syntax that defines the language.
  • The code has meaning, not only to the machine, but more importantly to other programmers.

Information Theory – Claude E. Shannon (1948)

Claude Elwood Shannon – Father of Information Theory

Claude Elwood Shannon is known as the father of Information Theory. Just about every form of digital communication we use today is based somehow on the seminal work that Shannon did. Janco gave a brief overview of Shannon but I was most intrigued to hear it again because of my recent interest in data compression (watch this space). Janco presented the following illustration which describes communication in just about any information system. Notice the important distinction between the signal and noise. This means that a message “X” might come out slightly altered “X’ ” on the other end.

Generalised communication in an information system.

Janco presents an interesting idea, which is “What if we drew software development in this fashion.” Below is my rendition of his picture, with my own ideas on what could be considered the signal and noise.

Coding represented as a Signal and Noise

Signal vs. Noise

  • The signal is the solution to a problem.
    • I added my own ideas on what would belong in the signal in the illustration above.
  • The noise is all the other things that are around the code.
    • Fluff
    • Accidental (but necessary) complexities.
    • I added my own ideas on what would belong in the noise in the illustration above.

Information Entropy

In physics, entropy is a measure of disorder, and systems tend to move towards higher entropy.

Shannon’s definition of entropy in an information system is different. Information entropy is the log-base-2 of the number of equally likely outcomes [wiki].

With two coins there are four outcomes, and the entropy (N) is two bits.
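The coin example can be made concrete with a minimal sketch (my own addition, in Python) of the equally-likely-outcomes case:

```python
import math

def entropy_uniform(outcomes: int) -> float:
    """Entropy in bits when all outcomes are equally likely: log2(outcomes)."""
    return math.log2(outcomes)

print(entropy_uniform(2))  # one fair coin, two outcomes: 1.0 bit
print(entropy_uniform(4))  # two fair coins, four outcomes: 2.0 bits
```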

Janco makes the following remarks about entropy:

  • It is based on the probability.
  • Entropy is equivalent to uncertainty.
  • Uncertainty of the code doing what I expect.

The way that I like to explain entropy is with the following example…

Try to guess the next number in the sequence: “1, 2, 3, 4, …?” If you answered “5” you would be right. Now try to guess the next number in another sequence: “1, 9, 0, 6, …?” If you answered “8” then, somehow miraculously, you would also have been right. The second time it was a bit harder to guess what the next number might be; therefore, the second sequence had higher entropy. I base this example on my understanding of what is written on the wiki:

If one of the events is more probable than others, observation of that event is less informative. Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy (thought of as average information) received from non-uniformly distributed data is less than log2(n). Entropy is zero when one outcome is certain.
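The quoted passage can be checked numerically. Here is a small sketch (my own, in Python) of Shannon’s entropy formula, H = −Σ p·log2(p), showing that a non-uniform source carries less than log2(n) bits and a certain outcome carries zero:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)); outcomes with p == 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit (= log2(2))
print(shannon_entropy([0.9, 0.1]))  # biased coin: ~0.47 bits, less than 1
print(shannon_entropy([1.0]))       # certain outcome: 0.0 bits
```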

4 Rules of Simple Design – Kent Beck

Janco then presented some important design concepts made popular by Kent Beck, the creator of Extreme Programming.

  1. All tests must pass.
  2. DRY: Don’t repeat yourself. Refactor the code to avoid duplication of logic. Call the common code.
  3. Clearly express intent.
  4. No superfluous parts.

Janco makes the clear connection then to an information system:

  1. Signal
  2. Entropy
  3. Entropy
  4. Noise

XP Values – Kent Beck

  1. Communication
  2. Feedback
  3. Simplicity
  4. Respect
  5. Courage

Janco points out that 1-3 relate to effective communication. He then asks: “What would a model for Dev-to-Dev communication look like through the medium of code?”

Dev to Dev communication in TDD


Refactoring is a way to achieve “DRY” and to “Clearly Express Intent”. To refactor is to express the solution in a different way but to keep the same outcome. Refactoring increases the clarity of the solution by raising the level of abstraction. This is done by encapsulating many low-level concepts into one higher-level concept (by adding more context). This leads to a reduced message space, thus lowering the level of uncertainty throughout the system.
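As an illustration of that last point, here is a hypothetical before-and-after sketch (my own, in Python, not from the talk): the refactored version encapsulates the low-level steps behind named higher-level concepts while keeping the same outcome.

```python
# Before: low-level steps inline; the reader must infer the intent.
def total_due_before(items):
    total = 0.0
    for price, qty in items:
        total += price * qty
    if total > 100.0:
        total *= 0.9
    return total

# After: the same logic, with each low-level concept named.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def apply_bulk_discount(amount, threshold=100.0, rate=0.1):
    return amount * (1 - rate) if amount > threshold else amount

def total_due(items):
    return apply_bulk_discount(subtotal(items))

# Refactoring changed the expression of the solution, not the outcome.
order = [(20.0, 3), (25.0, 2)]
assert total_due_before(order) == total_due(order)
```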

“Being abstract is different from being vague” – Dijkstra

Abstraction = New Level where you can still be precise

Good programmers write code that other humans can understand.

Things to Explore

These are some ideas that came out from the audience or were raised by Janco as additional ideas to explore. If you have some thoughts on these or additional ones to add, please leave a comment below.

  1. We need to better articulate “Noise”. What are concrete examples to look out for and can we mitigate it?
  2. We want to have a Ubiquitous Language. See Domain Driven Design (DDD).
    1. Strive for one mental model across Client – BA – Dev.
    2. Problems arise when people hold different perspectives of the same problem.
  3. Mob Programming
    1. Reduces noise because we all share a common signal.

That was the end of my notes from Janco’s talk. Below are my own ideas to augment what he presented.

Comments are Noise! Really? I say no! It’s a redundant Channel.

Something that really struck me as odd during the talk was Janco’s negative attitude towards comments in code. This sentiment seemed to be shared by several members of the audience. This didn’t sit well with me at all. It seemed like Janco’s take was that code should be self-describing. The problem with code comments is that they tend to be superfluous and can easily get out of date. Maybe I was missing a critical piece of context that they were privy to and I wasn’t.

I have also heard other people saying “Only write comments where the logic of the code is not obvious”. My personal commenting style obviously annoys the heck out of those people, because I comment everything. Sorry.

My take on it is simple (you will see why it makes sense later):

Write comments everywhere so you can tell the whole story of the code only by reading the green lines (comments). Ignore the actual code until you need to dig into the detail.

This seems crazy at first. Why on earth would you write the perfect solution in code and then write it again (imperfectly) in English? Let me reveal another important concept that was NOT addressed in Janco’s talk, but that I learnt the hard way during my excellent time at Cyest. It was once again confirmed at Synthesis.

The Value of Double-Encoding

Imagine if I could tell you that we could write code in such a way that a positive outcome is guaranteed when a bug is discovered. You would probably say I was crazy. The answer is simple…

Encode the solution more than once. If one encoding has a bug then the other encoding can be used to see where the logic differs from the first. If the bug is a logical error then neither encoding will make sense in the context of the situation (thus highlighting the logical error). This is a win-win situation.

We have been doing this with unit testing for ages. Unit tests are an additional encoding of expected outputs given known inputs. They check that the code (encoding 1) behaves exactly as expected when compared to the unit test (encoding 2). If the outputs differ then the system reports a failure and a bug is picked up. What is interesting is that at the extreme end of this argument it even makes sense to simply copy-and-paste the actual implementation into a unit test and compare the outputs against each other. The value of this only becomes clear down the line (later in time), because unit tests tend to change more slowly than code. If a bug is introduced into the code, the previous implementation in the unit test will still capture the old behaviour exactly. Obviously this approach assumes good sets of known inputs.
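A minimal sketch (my own, in Python, with made-up names) of the unit test as a second encoding: the pricing rule is stated once in code and once, independently, as expected outputs for known inputs.

```python
def shipping_cost(weight_kg):
    """Encoding 1: flat fee of 5 plus 2.5 per kg, capped at 50."""
    return min(5.0 + 2.5 * weight_kg, 50.0)

def test_shipping_cost():
    # Encoding 2: the same rule restated as expected outputs for known inputs.
    # If the two encodings ever disagree, one of them holds a bug.
    for weight, expected in [(0.0, 5.0), (2.0, 10.0), (100.0, 50.0)]:
        assert shipping_cost(weight) == expected

test_shipping_cost()  # passes silently while the two encodings agree
```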

My idea is that the code comments act as a second encoding to the code but it’s not in a completely separate [disconnected] place in the code (like unit tests are), rather it’s inline with the code. The wording then becomes the following:

Encode the solution more than once by commenting everything (even the obvious code). You can find encoding bugs by looking for places where the code differs from what is described in the comment (or vice-versa). You can find logical errors by tracing through the expected logic and seeing where either the code or the comments tell a different story. This is a win-win situation.
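Here is a small sketch (mine, with hypothetical numbers) of what that style looks like in practice: reading only the comments should tell the whole story, and any mismatch between a comment and the line below it flags a suspect spot.

```python
def monthly_payment(principal, annual_rate, months):
    # Convert the yearly rate into a per-month rate.
    monthly_rate = annual_rate / 12
    # With no interest, the payment is just the principal split evenly.
    if monthly_rate == 0:
        return principal / months
    # Otherwise use the standard annuity formula: P*r / (1 - (1+r)^-n).
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)
```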

3 Possible Outcomes with 2 Encodings

For the first time, Janco’s idea of relating code to an information system gives me a language in which my argument makes some sense. The idea is commonly used on very noisy channels. By adding a redundant channel to transmit the signal, you are able to compare both results at the receiving end and more easily pick the signal out of the noise. The chance of both encodings having an error is much smaller than the chance of only one encoding having an error. With one channel, you really can’t tell.
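Back-of-the-envelope numbers (my own, using an assumed bug rate) make the point: if each encoding independently contains a bug with probability p, a comparison between two encodings flags most bugs, and only the rare double error slips through undetected.

```python
p = 0.05  # assumed probability that any single encoding contains a bug

one_channel = p                      # a lone encoding: wrong 5% of the time, and you can't tell
both_wrong = p * p                   # both encodings wrong at once: undetectable, ~0.25%
exactly_one_wrong = 2 * p * (1 - p)  # exactly one wrong: the comparison exposes it, ~9.5%

print(round(both_wrong, 6), round(exactly_one_wrong, 6))  # → 0.0025 0.095
```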

If you were to extend this idea to 3 channels then something even more useful pops out:

Best of 3 wins.

If we use inline (or in-band) redundancy in the form of code comments AND out-of-band redundancy in the form of unit tests, then we are able to use a “best of 3” strategy to decide which encoding has the bug. The chance of all 3 being wrong is even smaller, making bugs much easier to find.
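The “best of 3” decision can be sketched as a majority vote across the three encodings (my own illustration; the names and values are purely hypothetical):

```python
from collections import Counter

def best_of_three(code_says, comment_says, test_says):
    """Majority vote across three independent encodings of the same logic."""
    votes = Counter([code_says, comment_says, test_says])
    value, count = votes.most_common(1)[0]
    return value if count >= 2 else None  # None: all three disagree, no verdict

assert best_of_three(42, 42, 42) == 42   # all encodings agree
assert best_of_three(42, 42, 41) == 42   # the outvoted encoding holds the bug
assert best_of_three(1, 2, 3) is None    # all three differ: investigate everything
```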

I have found this technique SO useful when writing and maintaining rather complex code (the calculation engine for a financial modelling tool, for example) that I have now decided that I will write all my code like this. As with all developers, writing the unit tests is a “Best-effort” endeavour. I always have great intentions but there is usually some external factor hindering me. However, inline code comments are something I can ALWAYS do, and therefore I have chosen to do exactly that.

Therefore, if you ever come across my code and it’s commented EVERYWHERE (even in the obvious bits), please don’t get upset with me… there is some method to my madness. I never know where the bug will be in my code, but the tiny bit of extra effort it takes to add the multiple encodings pays itself back many times given the full life span of the code.

“It’s not magic… it’s statistics” – Luke

Comments from my Colleagues

The following is a very interesting set of replies that I got from people I work with. I humbly thank them for their insight because I have a lot of respect for their ideas. I feel it’s very important that I post this because it captures the polarisation of thinking around the topic above. It also provides a very seasoned perspective on how my argument around comments in the code should be considered carefully.

I have only provided first names here to keep the conversation flow simple.


I have to say that I agree with Janco that comments are mostly noise and really don’t buy your idea of commenting everything.

I like Robert C. Martin’s reasoning that your code should be human readable and therefore comments add no additional value.  The emphasis should therefore be on writing “Clean Code”, and not cluttering your code with comments.

Check out Robert C. Martin’s “Clean Code: A Handbook of Agile Software Craftsmanship“.  I found this book to be incredibly inspiring and good at showing what “clean code” is and what “clean code” should be.  The book is in the “library”, so check it out.


I believe that comments can be noise; however, have a look at the following code:

using System;
using System.Numerics;

namespace PiCalc {
    internal class Program {
        private readonly BigInteger FOUR = new BigInteger(4);
        private readonly BigInteger SEVEN = new BigInteger(7);
        private readonly BigInteger TEN = new BigInteger(10);
        private readonly BigInteger THREE = new BigInteger(3);
        private readonly BigInteger TWO = new BigInteger(2);
        private BigInteger k = BigInteger.One;
        private BigInteger l = new BigInteger(3);
        private BigInteger n = new BigInteger(3);
        private BigInteger q = BigInteger.One;
        private BigInteger r = BigInteger.Zero;
        private BigInteger t = BigInteger.One;

        public void CalcPiDigits() {
            BigInteger nn, nr;
            bool first = true;
            while (true) {
                if ((FOUR*q + r - t).CompareTo(n*t) == -1) {
                    Console.Write(n);
                    if (first) {
                        Console.Write(".");
                        first = false;
                    }
                    nr = TEN*(r - (n*t));
                    n = TEN*(THREE*q + r)/t - (TEN*n);
                    q *= TEN;
                    r = nr;
                } else {
                    nr = (TWO*q + r)*l;
                    nn = (q*(SEVEN*k) + TWO + r*l)/(t*l);
                    q *= k;
                    t *= l;
                    l += TWO;
                    k += BigInteger.One;
                    n = nn;
                    r = nr;
                }
            }
        }

        private static void Main(string[] args) {
            new Program().CalcPiDigits();
        }
    }
}
From the function names it is clear that we are calculating Pi. However, I believe that comments or a link to the algorithm IS needed. Just renaming the variables will most probably not work. Further, I have had many cases where clients who are not technical look at the code, understand what it does via the comments, and have been able to give valuable input. So even though comments do tend to ‘decay’ over time, if the ‘signal’ component in the comments is strong enough they will continue to provide pointers in the right direction.


Hi Johan. My argument would be that the code example you have given is NOT an example of clean code and hence it is confusing to read.

The main problems with the code are that:

  • Variable names are not descriptive
  • The method is way too long

A clean code example would probably be more something like:

while (true) {
    if (DigitIsReady()) {        // illustrative method names
        EmitNextDigit();
    } else {
        AdvanceSpigotState();
    }
}

The names of the methods then lend themselves to describing the logic, and the code becomes more “human readable”.


Both comments really add a lot more depth to this discussion. Both sides highlight important aspects that need to be considered in context.

I appreciate you taking the time to shed light on the parts that matter in code.

I might not have the perfect commenting strategy but it is interesting to relate the idea to Information Theory as Janco presents it. It makes you think…

What Garth presents is clearly a good method to enhance the fidelity of the signal (in one band). I would imagine we would all agree that if this is possible then it’s always preferable. However, Johan highlights a case where (for argument’s sake) it’s not. When I look at the Pi digit calculator, I also know that such code is usually used in a performance-critical section (e.g. I want to calculate Pi to the billionth digit), and there might be good reasons (that I am not aware of from the little context we have) why the code needs to be written in that fashion. If that was the reason for the style of code as written, then I would personally prefer to have the inline comments to help me along without the performance knock of the extra method calls (and yes, I do realise that I might be opening a can of worms now around the debate on micro-optimisations done by hand vs the runtime jitter).


My turn!

While I agree with Garth on better descriptive names, I think we must use some common sense and judgement and not normalise to the n-th degree. In languages with first-class functions it’s mostly cool, but in crummy Java and C# you tend to end up with a proliferation of rubbish functions that are only useful in a single context.

E.g. too many functions with batty names like:

if (StringIsNotNullAndEqualToSynthesis(myString)) {
    ... blah
}

are actually not as readable as:

if ("Synthesis".equals(myString)) {
    ... blah
}

Also – I find it really helps to try to make your code as immutable (google it) as possible,

e.g. Garth’s previous example:

while (true) { ... }

clearly shows that mutation is happening, and therefore it is not clear what the code is actually doing (because it all happens outside of this snippet). This makes the code difficult to lift out without diving into the rabbit hole of each function. The rule of thumb is that a function should take all its inputs as parameters and return all its outputs in its return value. It should not mutate anything else (as far as possible).
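Tom’s rule of thumb can be sketched like this (my own Python illustration, not his code): the pure version declares every input and output in its signature, while the mutating version hides an output in shared state.

```python
# Mutating style: the function's real output lives outside its signature.
running_total = 0

def add_mutating(x):
    global running_total
    running_total += x  # hidden effect: callers must know about running_total

# Pure style: all inputs are parameters, all outputs are in the return value.
def add_pure(total, x):
    return total + x

total = 0
for x in [1, 2, 3]:
    total = add_pure(total, x)
assert total == 6  # easy to test: no setup, no shared state to reset
```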

Soooooo many benefits to doing this:

  • Truly testable code (no injecting weird dependencies into seemingly unrelated objects during test setup).
  • Super easy to debug (maybe no debugger required even! Stuff doesn’t ever change!)
  • Performance benefits! You don’t need to copy data around for various usages; because you can guarantee that the data doesn’t change, you can simply pass a reference to each caller without the danger of them modifying it! (flamebait achtung!)

Hey I’m not saying this is a firm rule, it’s simply a nice starting point – obviously you are going to need to mutate stuff. I’m just stating that I bet you the “mutant” area of your code will contain the bulk of your bugs 🙂


Hi Tom,

I fully get your point about immutability and functions taking all their inputs as parameters and spitting out one output.  What I find is that this is a natural by-product of test driven development.  So for me this is one of the reasons why test driven development is a must and not an option.


Hi Luke, great that you took the time to write this down. I like the direction, especially the double verification and the in-line value of comments. The problem is that comments are not machine-verifiable/runnable. Perhaps if we wrote our comments in a language like Gherkin, we’d achieve in-line unit tests. Something like (excuse inaccurate syntax):

// Scenario: Sums 2 numbers
// Given 15 and 12
// Expect 27
int sum(int i, int j) {}

Another point is that the trend seems to be towards higher-level unit tests, so the double verification is not on the base utilities – these will get covered through broader, more efficient sweeps.

Of course code should be as expressive as possible.