Static Analysis in 2026: Beyond Linting
Somewhere around 2018, static analysis became synonymous with linting. ESLint, Prettier, RuboCop. Teams would set them up, check the "we do static analysis" box, and move on. In 2026, that's like saying you do security because you have a password field on your login form.
I'm not knocking linters. I run ESLint and Prettier on every project. But linting is the lowest rung of what static analysis can do for you, and if that's where you stop, you're leaving the most valuable insights on the table.
What Static Analysis Actually Means
Static analysis is any examination of source code that happens without executing it. Linting is a subset. Type checking is a subset. But the category extends far beyond style enforcement and syntax validation.
Here's the spectrum, from least to most sophisticated:
1. Formatting - Prettier, gofmt. Purely cosmetic. Zero semantic value.
2. Linting - ESLint, Pylint. Catches common mistakes and enforces conventions.
3. Type checking - TypeScript, mypy. Proves properties about data flow at compile time.
4. Semantic analysis - Tools that understand what code does, not just how it looks.
5. Architectural analysis - Tools that enforce structural rules across the entire codebase.
6. AI-augmented analysis - LLM-powered tools that can reason about intent and context.
Most teams operate at levels 1-3. The teams that ship the most reliable software operate at levels 4-6.
Semantic Analysis: Where Things Get Interesting
Semantic analysis tools go beyond pattern matching. They build a model of your program's behavior and use it to find bugs that no linter could catch.
Take taint analysis as an example. A taint analysis tool tracks data from user input ("sources") through your program to sensitive operations ("sinks") like database queries or file system calls. If untrusted data reaches a sink without being sanitized, the tool flags it.
// A linter sees nothing wrong here
app.get('/user', (req, res) => {
const userId = req.query.id;
const user = db.query(`SELECT * FROM users WHERE id = ${userId}`);
res.json(user);
});
// A semantic analysis tool sees SQL injection
// because req.query.id (source) flows to db.query (sink) unsanitized

Tools doing this well in 2026:
- Semgrep: Open source, supports custom rules written in a pattern-matching DSL. I've written over 200 custom Semgrep rules for various projects. It's the most flexible option.
- CodeQL: GitHub's query language for code. Treats code as data you can query with SQL-like syntax. Incredible power, steeper learning curve.
- SonarQube: The enterprise standard. Good out-of-the-box rules, less flexible for custom analysis.
- Snyk Code: Real-time analysis in your IDE with AI-powered suggestions.
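For contrast, here's the version of the earlier endpoint that a taint analyzer would pass. This is a minimal sketch: the `db` object and `getUser` helper are stand-ins for a node-postgres-style driver whose `query(text, values)` binds parameters server-side, shown here only to make the taint-breaking pattern concrete.

```typescript
// The safe pattern: the query text stays constant and user input
// travels as a bound value, so tainted data never reaches the SQL
// string itself -- the source-to-sink path the analyzer tracks is broken.

// Stand-in for a real driver; it echoes its arguments back purely
// so this sketch is runnable without a database.
const db = {
  query(text: string, values: unknown[]) {
    return { text, values };
  },
};

function getUser(untrustedId: string) {
  // $1 is a placeholder; untrustedId is sent out-of-band as data
  return db.query("SELECT * FROM users WHERE id = $1", [untrustedId]);
}

const result = getUser("1; DROP TABLE users");
// The injection attempt stays inert inside the values array:
console.log(result.text); // SELECT * FROM users WHERE id = $1
```

A taint analyzer accepts this because the string reaching the sink is a constant; the untrusted value only ever appears in the parameter list.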
Architectural Analysis: Enforcing Boundaries
This is the area I'm most excited about and where I see the biggest gap in most teams' toolchains.
Architectural analysis tools let you define rules about how your code should be structured, then enforce those rules automatically. Think of it as linting for architecture.
// Example: ArchUnit-style rules (Java ecosystem)
// "Controllers should not directly access repositories"
noClasses()
.that().resideInAPackage("..controller..")
.should().dependOnClassesThat().resideInAPackage("..repository..");
// "Domain models should not depend on infrastructure"
classes()
.that().resideInAPackage("..domain..")
.should().onlyDependOnClassesThat()
.resideInAnyPackage("..domain..", "java..", "kotlin..");

In the TypeScript world, tools like Dependency Cruiser and Sheriff let you define similar rules:
// .dependency-cruiser.cjs
module.exports = {
forbidden: [
{
name: 'no-circular',
severity: 'error',
from: {},
to: { circular: true }
},
{
name: 'no-ui-in-domain',
severity: 'error',
from: { path: '^src/domain' },
to: { path: '^src/ui' }
}
]
};

Run this in CI, and you'll catch architectural drift the moment it happens, not six months later during a painful refactor.
The Framework: SCALE Your Static Analysis
Here's a framework I use when setting up static analysis for a new team or project:
S - Style (Week 1): Set up formatting and basic linting. Prettier + ESLint with a reasonable config. Don't bikeshed. Pick a config and move on. This eliminates noise in code reviews.
C - Correctness (Week 2): Enable TypeScript strict mode. Turn on noUncheckedIndexedAccess, exactOptionalPropertyTypes, and noImplicitReturns. Add Semgrep with the default security rules.
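To make the correctness step concrete, here's the class of bug noUncheckedIndexedAccess surfaces. The data is hypothetical; the point is that indexed access becomes `T | undefined`, so the compiler forces you to handle missing keys.

```typescript
// With noUncheckedIndexedAccess enabled, roles["editor"] is typed
// string[] | undefined rather than string[].
const roles: Record<string, string[]> = { admin: ["read", "write"] };

// Without the flag this compiles cleanly and throws at runtime:
//   const first = roles["editor"][0];   // TypeError: undefined

// With the flag it's a compile error until the undefined case is handled:
const perms: string[] = roles["editor"] ?? [];
const first: string = perms[0] ?? "none";
console.log(first); // "none"
```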
A - Architecture (Month 1): Define your architectural boundaries. Set up Dependency Cruiser or equivalent. Start with the rules you care most about: no circular dependencies, layer separation, module boundaries.
L - Logic (Month 2): Write custom Semgrep or CodeQL rules for your business-specific patterns. "Every API endpoint must validate input with Zod." "Every database call must be inside a transaction boundary." These rules encode your team's knowledge.
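As an illustration, this is the shape of code that an "every endpoint validates its input" rule would look for. It's a sketch: `CreateUserInput` and `createUserHandler` are hypothetical names, and the schema object is a hand-rolled stand-in for a Zod-style `parse()` so the example runs without the dependency.

```typescript
// Stand-in for a Zod-style schema: parse() throws on invalid input
// and returns a well-typed value on success.
const CreateUserInput = {
  parse(body: unknown): { email: string; age: number } {
    if (typeof body !== "object" || body === null) {
      throw new Error("invalid payload");
    }
    const b = body as Record<string, unknown>;
    const email = b.email;
    const age = b.age;
    if (typeof email !== "string" || !email.includes("@")) {
      throw new Error("invalid email");
    }
    if (typeof age !== "number" || age < 0) {
      throw new Error("invalid age");
    }
    return { email, age };
  },
};

function createUserHandler(body: unknown) {
  // The convention a custom rule can enforce: the handler's first act
  // is parse(), so downstream code only ever sees validated data.
  const input = CreateUserInput.parse(body);
  return input;
}

console.log(createUserHandler({ email: "a@b.com", age: 30 }));
```

A Semgrep rule can then match any exported handler whose body does not start with a `parse()` call, turning the team convention into a CI failure.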
E - Evolution (Ongoing): Review your static analysis findings monthly. Which rules generate the most noise? Remove them. Which bug categories keep slipping through? Write rules for them. Your static analysis config should evolve with your codebase.
The Contrarian Take: More Rules Isn't Better
I've seen teams with 500+ ESLint rules enabled. Their CI takes 4 minutes just for linting. Developers routinely add // eslint-disable-next-line comments. The signal-to-noise ratio is terrible.
Here's my rule of thumb: if a static analysis rule has more than a 5% false positive rate, disable it. A rule that cries wolf erodes trust in the entire system. I'd rather have 50 rules that developers trust and fix immediately than 500 rules they've learned to ignore.
The best static analysis setup isn't the most strict one. It's the one your team actually listens to.
What 2026 Brings: LLM-Augmented Analysis
The newest frontier is using LLMs to augment traditional static analysis. Tools like GitHub Copilot's code review features can catch issues that are hard to express as rules. "This error message exposes internal implementation details to users." "This retry logic doesn't have exponential backoff, which will amplify cascading failures."
These aren't things you can write a regex for. They require understanding intent, not just syntax.
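For reference, the pattern that review comment is asking for looks roughly like this. It's a sketch, not a library recommendation; the retry count, base delay, and jitter range are arbitrary.

```typescript
// Retry with exponential backoff: each failure doubles the wait, so a
// struggling downstream service gets breathing room instead of a
// synchronized stampede of retries.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // 100ms, 200ms, 400ms, ... plus jitter to desynchronize clients
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 50;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

A fixed-interval retry loop is the version that amplifies cascading failures; recognizing that the doubling and jitter are missing is exactly the kind of intent-level judgment an LLM reviewer can make and a regex cannot.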
We're at the beginning of this shift, and the tools are imperfect. But the direction is clear: static analysis in 2027 will look as different from today as today looks from 2018's "just run ESLint" era.
Start climbing the spectrum now, and you'll be ready.