Reading the interesting post Should You Ship This Code Before Reducing Technical Debt?!, I came across a graph that caught my attention. It comes from another interesting post, McCabe Cyclomatic Complexity: the proof in the pudding.

It’s based on a historical analysis of tens of thousands of source code files and plots Cyclomatic Complexity (CC) values at the file level (x-axis) against the probability of faults being found in those files (y-axis).

According to this graph, with a CC of 11 you have the lowest probability of faults (well, of faults being discovered) in your file. Above a CC of 38, the probability of having one rises to 50%.
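As a point of reference, CC is commonly approximated as the number of independent decision points plus one (equivalently, E − N + 2P over the control-flow graph). The snippet below is only an illustrative sketch of that counting rule for a Python function, using the standard ast module; it is not the tool behind the analysis in the cited posts, and the node list and example function are my own assumptions.

    import ast

    # Hypothetical sketch: approximate CC as (number of decision points) + 1.
    # The set of "decision" node types below is an assumption for illustration.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        """Count decision points in the given source and add 1."""
        tree = ast.parse(source)
        decisions = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
        return decisions + 1

    example = """
    def classify(n):
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"
        for d in (2, 3, 5):
            if n % d == 0:
                return "divisible"
        return "other"
    """

    # if + elif + for + inner if = 4 decision points, so CC = 5
    print(cyclomatic_complexity(example))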

Now some considerations / questions:

  1. I need a better metric: I want something that tells me that if I stay under a given threshold (say 11 for CC), the probability of faults is less than 1%.
  2. I was taught that CC < 4 is OK, CC between 4 and 10 is bad, and CC > 10 is unacceptable … so what’s up with that?
  3. With a CC between 1 and 35 the probability of a fault is pretty constant (and incredibly HIGH!), so should we target a CC of 35 in our apps, or 11?
  4. Is Cyclomatic Complexity useful for predicting anything?

PierG