
Code Complexity: Cyclomatic vs Cognitive vs Change

Vaibhav Verma
7 min read
code complexity · metrics · refactoring · code quality · static analysis


I need to tell you something about cyclomatic complexity: it's lying to you. Not maliciously. It's answering a question you didn't ask.

Cyclomatic complexity was designed in 1976 by Thomas McCabe to estimate the number of test cases needed to cover a function. It counts decision points (if, case, while, for, &&, ||) and adds 1; a plain else adds nothing, because the if already accounts for both branches. That's it. It's a testing metric masquerading as a readability metric, and the entire industry has been treating the two as the same thing for decades.

I've seen functions with a cyclomatic complexity of 4 that took me 20 minutes to understand, and functions with a cyclomatic complexity of 15 that I grasped in 30 seconds. The number wasn't wrong. It was measuring the wrong thing.

Let me show you three different complexity metrics, what each one actually measures, and when to use which.

Cyclomatic Complexity: The OG

Cyclomatic complexity (CC) counts the number of linearly independent paths through a function. In practice, it's the number of decision points plus 1.

typescript
// Cyclomatic complexity: 5
function processOrder(order: Order): Result {
  if (!order.items.length) return { status: 'empty' };        // +1
  if (!order.customer) return { status: 'no_customer' };      // +1

  let total = 0;
  for (const item of order.items) {                           // +1
    if (item.discount) {                                       // +1
      total += item.price * (1 - item.discount);
    } else {                                                  // no +1: the if above already counts this branch
      total += item.price;
    }
  }

  return { status: 'ok', total };
}

CC tells you: this function has 5 linearly independent paths, so budget roughly 5 test cases to cover its branches.

CC doesn't tell you: how hard this function is to understand.

When CC is useful:

  • Estimating test effort
  • Setting a ceiling for function complexity (I use 15 as a hard limit, 10 as a warning)
  • Identifying functions that need refactoring based on testability

When CC misleads:

  • A switch statement with 20 simple cases has CC of 21, but it's trivially readable
  • Nested conditionals and flat conditionals score the same, despite wildly different cognitive loads
  • CC treats all decision points as equal, but an if (x != null) is not as complex as an if (a && (b || c) && !d)

Cognitive Complexity: The Modern Alternative

Cognitive complexity was introduced by SonarSource (the company behind SonarQube) in 2017 specifically to measure how hard code is to understand. It uses three rules:

  1. Increment for breaks in linear flow: if, else, for, while, catch, switch, sequences of logical operators
  2. Increment for nesting: Each level of nesting adds a penalty
  3. Ignore structures that don't impact readability: Null-coalescing, early returns, method extraction (see the short example after this list)
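
Rule 3 is the one people overlook, so here is a tiny made-up example. The counts follow my reading of the SonarSource rules; treat them as approximate.

typescript
// Cognitive complexity: 0. Optional chaining and ?? are shorthands and add nothing.
function displayNameA(user?: { profile?: { name?: string } }): string {
  return user?.profile?.name ?? 'anonymous';
}

// Cognitive complexity: 3. The same logic spelled out with branches:
// +1 for the if, +1 for the && sequence, +1 for the else.
function displayNameB(user?: { profile?: { name?: string } }): string {
  if (user != null && user.profile != null && user.profile.name != null) {
    return user.profile.name;
  } else {
    return 'anonymous';
  }
}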

This nesting penalty is the key insight. Compare these two functions:

typescript
// Cyclomatic complexity: 5
// Cognitive complexity: 6
function validateA(user: User): string[] {
  const errors: string[] = [];
  if (user.name) {                              // +1
    if (user.name.length < 2) {                 // +2 (nesting penalty)
      errors.push('Name too short');
    }
  }
  if (user.email) {                             // +1
    if (!user.email.includes('@')) {            // +2 (nesting penalty)
      errors.push('Invalid email');
    }
  }
  return errors;
}

// Cyclomatic complexity: 5 (same!)
// Cognitive complexity: 4
function validateB(user: User): string[] {
  const errors: string[] = [];
  if (!user.name) return errors;                // +1
  if (user.name.length < 2) {                   // +1
    errors.push('Name too short');
  }
  if (!user.email) return errors;               // +1
  if (!user.email.includes('@')) {              // +1
    errors.push('Invalid email');
  }
  return errors;
}

Both functions have the same cyclomatic complexity. But validateB uses early returns to avoid nesting, and cognitive complexity correctly identifies it as simpler to understand. If you've ever refactored deeply nested code into early returns and felt the readability improvement, cognitive complexity is the metric that captures that difference.

When cognitive complexity is useful:

  • Measuring readability and understandability
  • Identifying functions that are hard for new team members to grasp
  • Guiding refactoring decisions toward genuinely simpler code, not just fewer test paths

When cognitive complexity falls short:

  • It doesn't account for domain complexity. A function that implements a complex business rule will have high cognitive complexity even if the code is the clearest possible expression of that rule.
  • It's language-specific. The SonarSource implementation handles Java and TypeScript well but may not account for language-specific idioms in other ecosystems.

Change Complexity: The Missing Metric

Here's the one nobody talks about, and it's the most useful in practice.

Change complexity measures how hard it is to modify a piece of code safely. It's not about reading the code. It's about what happens when you change it.

Change complexity isn't a formal metric with a formula. It's a composite score derived from:

  1. Coupling breadth: How many other files/modules are affected when this code changes?
  2. Test coverage: What percentage of this code's behavior is verified by tests?
  3. Historical defect rate: How often do changes to this file introduce bugs?
  4. Author diversity: How many different developers have successfully modified this code?

I calculate change complexity with a formula like this:

Change Complexity = (Coupling Breadth * 0.3) +
                    ((100 - Test Coverage) * 0.3) +
                    (Defect Rate * 0.25) +
                    ((1 / Author Count) * 0.15)

The weights are debatable, but the principle is clear: a function might be easy to read (low cognitive complexity) and easy to test (low cyclomatic complexity) but extremely dangerous to change because it's coupled to 15 other modules, has 40% test coverage, and only one person has ever modified it successfully.
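
As a rough sketch, here is that score in TypeScript. The input shape, field names, and the zero-author guard are my assumptions; feed it whatever your own tooling produces.

typescript
// Hypothetical inputs, gathered from your dependency graph, coverage report,
// incident tracker, and git history respectively.
interface ChangeInputs {
  couplingBreadth: number; // files/modules affected when this code changes
  testCoverage: number;    // 0-100, percent of behavior verified by tests
  defectRate: number;      // 0-100, percent of recent changes that introduced a bug
  authorCount: number;     // distinct developers who have modified this code
}

// Weighted composite mirroring the formula above; the weights are debatable.
function changeComplexity(m: ChangeInputs): number {
  return (
    m.couplingBreadth * 0.3 +
    (100 - m.testCoverage) * 0.3 +
    m.defectRate * 0.25 +
    (1 / Math.max(m.authorCount, 1)) * 0.15 // guard against zero recorded authors
  );
}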

A real example

We had a billing calculation function at a previous company. Cyclomatic complexity: 8. Cognitive complexity: 6. By any standard metric, it was fine. Clean code.

But its change complexity was off the charts. It was imported by 23 other modules. It had no integration tests (only unit tests with mocked dependencies). And in the last 12 months, 4 out of 6 changes to it had caused production incidents.

The function was easy to read and easy to test in isolation. It was nearly impossible to change safely. Only change complexity captured that reality.

The Framework: CCC Assessment

When evaluating a module's complexity, I run all three analyses:

Cyclomatic (Can I test it?): If CC > 15, the function has too many branches. Extract some paths into separate functions.

Cognitive (Can I read it?): If cognitive complexity > 10, the function is hard to understand. Reduce nesting, use early returns, extract sub-expressions into named variables.

Change (Can I modify it safely?): If change complexity is high, the function is risky regardless of how clean it looks. Increase test coverage, reduce coupling, spread knowledge across the team.
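
If you want that checklist as an automated gate, a minimal sketch could look like this. The thresholds come from the guidance above; the input shape, the change-complexity cutoff of 50, and the messages are my assumptions.

typescript
interface CccMetrics {
  cyclomatic: number;
  cognitive: number;
  change: number; // composite score from the previous section
}

// Flags a function against the three questions:
// can I test it, can I read it, can I modify it safely?
function assessCcc(m: CccMetrics, changeThreshold = 50): string[] {
  const findings: string[] = [];
  if (m.cyclomatic > 15) {
    findings.push('Too many branches: extract some paths into separate functions');
  }
  if (m.cognitive > 10) {
    findings.push('Hard to understand: reduce nesting, use early returns');
  }
  if (m.change > changeThreshold) {
    findings.push('Risky to change: add coverage, reduce coupling, spread knowledge');
  }
  return findings;
}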

The three metrics together give you a complete picture. Any one alone is misleading.

The Contrarian Take: Complex Code Isn't the Problem. Changing Complex Code Is.

Teams spend enormous effort reducing complexity scores. They refactor, they extract, they simplify. And that's often the right call. But I've seen teams burn weeks refactoring code that doesn't change.

If a function has a cyclomatic complexity of 35 but hasn't been modified in 2 years and has full test coverage, leave it alone. Your refactoring effort is better spent on the function with a complexity of 12 that changes every sprint and keeps breaking things.

Complexity itself doesn't cause bugs. Changing complexity causes bugs. Focus your refactoring budget on the intersection of complexity and churn. That's where the bugs live.
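
One way to find that intersection: count recent changes per file from git history and cross them with your complexity report. A rough Node/TypeScript sketch, assuming you run it at the repo root and already have per-file complexity numbers from another tool:

typescript
import { execSync } from 'node:child_process';

// Count commits touching each file in the last 12 months.
// `git log --since=... --name-only --pretty=format:` prints only the changed
// file paths, one per line, for each commit.
function churnByFile(since = '12 months ago'): Map<string, number> {
  const out = execSync(`git log --since="${since}" --name-only --pretty=format:`, {
    encoding: 'utf8',
  });
  const counts = new Map<string, number>();
  for (const line of out.split('\n')) {
    const file = line.trim();
    if (!file) continue;
    counts.set(file, (counts.get(file) ?? 0) + 1);
  }
  return counts;
}

// Cross-reference with a complexity report (shape assumed) and rank by
// churn x complexity: the hotspots worth refactoring first.
function hotspots(complexityByFile: Map<string, number>): [string, number][] {
  const churn = churnByFile();
  return [...complexityByFile.entries()]
    .map(([file, complexity]): [string, number] => [file, complexity * (churn.get(file) ?? 0)])
    .filter(([, score]) => score > 0)
    .sort((a, b) => b[1] - a[1]);
}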

Tooling in 2026

  • ESLint complexity rule: Cyclomatic complexity for JavaScript/TypeScript (example config after this list)
  • SonarQube / SonarCloud: Both cyclomatic and cognitive complexity with trend tracking
  • CodeClimate: Cognitive complexity with maintainability ratings
  • Custom scripts + git log: Change complexity (you'll need to build this yourself, but it's worth it)
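
For the ESLint side of that list, the wiring could look something like this. The rule names are real; the flat-config shape and the assumption that a recent eslint-plugin-sonarjs is installed are mine, so check your versions.

typescript
// eslint.config.js
import sonarjs from 'eslint-plugin-sonarjs';

export default [
  {
    plugins: { sonarjs },
    rules: {
      // Cyclomatic ceiling: hard limit of 15, per the framework above
      complexity: ['error', 15],
      // Cognitive ceiling of 10 via eslint-plugin-sonarjs
      'sonarjs/cognitive-complexity': ['error', 10],
    },
  },
];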

Measure all three. Act on the combination. That's how you spend your refactoring budget where it actually matters.
