Amy Johnston · 23rd January 2026
Static code analysis vs dynamic code analysis: what’s the difference?
In this blog post, we’ll set out the practical differences between static code analysis (also known as static application security testing — SAST) and dynamic analysis (also known as dynamic application security testing — DAST) in Salesforce. We’ll dive into where SAST, DAST, software composition analysis, and vulnerability scanning fit, as well as why deterministic code reviews have become a critical safety layer in an AI era.
Salesforce teams are constantly under pressure to increase deployment speed and automation, while also ensuring security and code quality — moving fast without breaking anything in production. This is especially challenging in enterprise organizations, where multiple teams (and partners) build in parallel and different projects and work streams all flow through the same CI/CD pipeline.
AI tools enable larger volumes of code to be written faster than ever, with exciting potential for productivity gains. But the result is often that teams have more to review, more to test, and more chances for risky changes to slip through. Guardrails need to be in place throughout your Salesforce DevOps lifecycle. That’s why static code analysis and dynamic code analysis matter. They both help reduce risk, but in different ways and at different stages of the DevOps lifecycle. If you treat them in the same way, you end up with gaps in your process — and those gaps are usually discovered too late, when they’re hardest to fix and highly likely to impact your end users.
What is static code analysis / static application security testing (SAST)?
Static code analysis inspects code without running it. It’s a shift-left tactic designed to surface risky patterns early, when fixes are cheapest and more contained. In Salesforce environments, static analysis can scan Apex, Lightning Web Components (LWCs), Visualforce, and supporting metadata to flag security vulnerabilities, maintainability issues, and standards violations before a change reaches runtime.
This is where repeatable Salesforce risks should get caught: missing CRUD/FLS checks, unsafe sharing behavior, injection vulnerabilities, non-bulkified logic, accidental secrets in source, and architectural patterns that won’t scale as your org grows (also known as technical debt).
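To make that concrete, here’s a minimal, hypothetical Apex sketch of two of these risks (a missing CRUD/FLS check and a SOQL injection risk). The class and method names are illustrative, not from any real codebase.

```apex
// Illustrative only: a pattern static analysis flags, and a safer alternative.
public with sharing class AccountLookupController {

    // Risky: user input is concatenated into dynamic SOQL (injection risk),
    // and nothing verifies the running user can read Account records or fields.
    public static List<Account> searchUnsafe(String searchTerm) {
        String soql = 'SELECT Id, Name FROM Account WHERE Name LIKE \'%' + searchTerm + '%\'';
        return Database.query(soql);
    }

    // Safer: a bind variable removes the injection risk, and WITH USER_MODE
    // (available on recent API versions) enforces the user's CRUD/FLS access.
    public static List<Account> searchSafe(String searchTerm) {
        String pattern = '%' + searchTerm + '%';
        return [SELECT Id, Name FROM Account WHERE Name LIKE :pattern WITH USER_MODE];
    }
}
```

A static analyzer would typically flag the first method and pass the second, though exactly which constructs a given rule recognizes varies by tool and version.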
A clear way to think about static analysis is that it answers a focused question: does this change introduce known risk or violate standards before it ever executes? Those standards can include platform best practices, security rules, and team-specific expectations such as custom PMD rules, formatting conventions, or architectural patterns. Static analysis doesn’t replace engineering judgment, but it removes a large amount of avoidable risk and inconsistency before changes ever reach runtime.
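As an illustration, those team-specific expectations often live in a small PMD ruleset committed alongside the code. The sketch below enables a few of the Apex rules relevant to the risks above; rule names reflect recent PMD releases and may differ in older versions, so treat it as a starting point rather than a definitive configuration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ruleset name="Team Apex standards"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>Security and bulkification checks enforced on every change.</description>

    <!-- DML/SOQL without CRUD or FLS checks -->
    <rule ref="category/apex/security.xml/ApexCRUDViolation"/>
    <!-- SOQL built from unescaped, user-controlled input -->
    <rule ref="category/apex/security.xml/ApexSOQLInjection"/>
    <!-- SOQL, DML and other governor-limited operations inside loops -->
    <rule ref="category/apex/performance.xml/OperationWithLimitsInLoop"/>
</ruleset>
```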
What is dynamic code analysis / dynamic application security testing (DAST)?
Dynamic code analysis tests an application while it’s running, by interacting with it from the outside in. Instead of inspecting source files, dynamic analysis exercises a deployed system through its APIs, user interfaces, and integration endpoints — typically in a staging or pre-production environment — to identify vulnerabilities and failures that only appear at runtime.
This distinction matters in Salesforce because many risks don’t emerge from a single file in isolation. They show up when real users, permissions, integrations, and data flows interact with the system as a whole. Dynamic analysis is designed to uncover issues that only become visible once code is deployed and behaving like a live application, such as unintended data exposure, broken access controls, or unsafe integration paths.
In other words, dynamic analysis validates behavior after changes have been built and deployed to an environment. It works later in the lifecycle, complementing static analysis rather than replacing it, and focuses on how the system responds to real-world use rather than how the code is structured.
Static vs dynamic code analysis: what's the practical difference?
Static and dynamic analysis are often described as two ways to “scan for security issues,” but they close different gaps. Static analysis is your pre-flight check. It stops risky patterns from entering your pipeline and enforces standards at the source. Dynamic analysis is your test flight. It reveals vulnerabilities that only emerge when the system runs under real conditions. Static analysis can run locally in your IDE during development, on a branch as it’s pushed, or in a pull request, whereas dynamic analysis runs later in the cycle, typically against a shared sandbox such as UAT or staging.
That’s why top Salesforce DevOps teams don’t pick just one. Static tools can’t see how the system actually behaves at runtime, and dynamic tools can’t enforce standards early or trace a finding back to the code structure that caused it. If you want reliable coverage across the lifecycle, you need both layers.
What is software composition analysis in Salesforce?
Static and dynamic analysis focus on what you build. Software composition analysis focuses on what you depend on: the dependencies declared in your package files (such as package.json) and the third-party packages they pull in (for example, everything inside node_modules). It's the process of scanning third-party components for known vulnerabilities, typically tracked as common vulnerabilities and exposures (CVEs). Salesforce teams can underestimate dependency risk because it's less visible than in some other ecosystems, but it's still a meaningful surface area. LWCs often rely on npm packages. Enterprises increasingly standardize on managed packages. Integrations may pull in third-party libraries. Build pipelines may include open-source tooling. Any of these components can introduce vulnerability risk independent of the quality of your proprietary code.
Software composition analysis exists to answer one thing clearly: are any of the components we rely on already known to be vulnerable? It doesn’t replace static analysis. It protects a different category of risk, and it belongs alongside your other shift-left tactics. In Salesforce this surface can be harder to see, but static resources are a common place for bundled libraries to sit out of date and unpatched.
How vulnerability scanning and code reviews fit together
It helps to think about vulnerability scanning and code reviews in two layers: what you scan and how you govern what gets shipped.
On the scanning side, vulnerability scanning is the umbrella category. Its job is to find known security weaknesses wherever they might exist. Under that umbrella sit a few different approaches, each covering a different surface area. Software composition analysis is one of them, focused specifically on third-party components and dependencies. As we’ve seen, it answers a clear question: are any of the libraries, packages, or external components we rely on already known to be vulnerable? That’s why software composition analysis is often grouped with “vulnerability scanning” more broadly — it’s a specialized branch of the same goal.
On the governance side, code reviews are the umbrella category. Reviews are where teams decide whether changes are safe, correct, and ready to ship. They act as the control point that turns detection into prevention. Static analysis sits closest to this review layer. It supports reviews by checking the change itself before it runs — asking whether the code introduces known risky patterns or violates agreed standards. These checks happen early, at the point of change, when issues are easiest to fix.
Dynamic analysis operates later in the lifecycle. Rather than gating individual pull requests, it validates behavior in a running environment by exercising the system through real user paths, APIs, and integrations. The results of dynamic testing inform release readiness and risk decisions further downstream, once changes have already been deployed to a sandbox or staging org.
Seen this way, static and dynamic analysis aren’t competing tactics. They’re inputs into the review layer, each providing a different kind of evidence about risk. And that review layer is what ultimately determines whether scanning actually prevents problems, rather than just reporting them.
This framing matters because the challenge most enterprise Salesforce teams face now isn’t a lack of scanners. It’s the gap between detection and delivery. The more change your pipeline absorbs — especially with AI accelerating output — the more important it becomes that scanning results are enforced through a deterministic code review layer inside CI/CD.
With that in mind, the real question for Salesforce teams isn’t whether you’re scanning for risk — it’s whether your code review layer is strong enough to turn those signals into consistent, scalable prevention across the DevOps lifecycle.
Why code reviews sit above everything else (especially now)
Once you see scanning and reviews as two distinct layers, it becomes clear why code reviews are the control point that determines whether risk is actually prevented or just detected.
Static analysis, dynamic analysis, and software composition analysis can all surface valuable signals. They tell you when something might be wrong. But scanning is not the same as governance. A scanner can only alert you. It can’t decide whether a change should ship, and it can’t enforce the standards your organization is accountable for.
That enforcement happens in the review layer. Code reviews are where teams validate intent, confirm security assumptions, and ensure changes align to the way the org is meant to operate. In Salesforce, that context is everything. Real-world risk is shaped by how code, metadata, permissions, sharing models, and integrations combine in your specific environment. A static tool can flag a missing CRUD check. A dynamic test can reveal a runtime gap. But only a review process can connect those findings to the broader question of whether this change is safe for your org.
This is why reviews need to be built into your DevOps lifecycle. If the review layer sits outside your CI/CD workflow — if it’s a report to read later, or a best-effort manual step that can be bypassed — prevention becomes inconsistent. When reviews are integrated into CI/CD, quality gates automatically stop changes from reaching production and can autofix unambiguous issues. In high-velocity enterprise delivery, inconsistency and manual steps are where risk gets through. Reviews need to be the gate that turns detection into a reliable stop-or-ship decision.
And that matters more now than ever. As change volume increases, and as AI starts contributing meaningful parts of the codebase, the review layer isn’t just another quality step. It’s the only scalable way to keep speed and safety moving together.
Why AI makes deterministic guardrails non-optional
AI-assisted development is changing the shape of Salesforce delivery, even in teams that are adopting it thoughtfully. The most obvious shift is output. AI removes friction from building and iterating, which means more changes are created between releases. But it also means the review surface expands dramatically.
In practice, teams start seeing the same patterns. More (and sometimes larger) pull requests are produced in a shorter window, so review time per PR shrinks, and the temptation to approve based on surface-level checks goes up. Not because anyone wants to loosen standards, but because the system can outpace human capacity.
There is a deeper risk here as well: AI can produce code that looks plausible, while still introducing security gaps, ignoring subtle Salesforce best practices, or missing the context of how your org is configured. Relying on non-deterministic AI review checks to validate non-deterministic AI output isn’t a stable foundation. If your “guardrail” can hallucinate or vary from run to run, teams quickly lose trust in what it flags and what it misses.
That’s why the AI era increases the value of deterministic controls. The answer isn’t to slow delivery down. It’s to enforce standards early, consistently, and in a Salesforce context, so teams can increase throughput without increasing risk.
What static and dynamic tools do well — and where they fall short
Static and dynamic analysis are both essential parts of a mature pipeline. Static analysis catches common risk patterns early, which keeps problems cheap to fix and reduces the burden on manual reviewers. Dynamic analysis validates behavior in a running org, which is crucial when risk only appears through real interactions between code, metadata, permissions, and integrations.
But scanning tools alone don’t guarantee safe delivery. Static analysis is inherently limited to what can be inferred from source code. Most tools can’t see runtime-only failure modes, and can’t reliably interpret the intent behind a change. Teams also need to manage noise: without careful tuning and prioritization, most static tools generate a high volume of findings. If the signal-to-noise ratio drops too far, scans get ignored, and the guardrail becomes notional.
Coverage is another practical gap. Many static analysis tools are strongest on Apex and rely heavily on file extensions to decide which rules or engines to apply — for example, treating .cls files as Apex and .js files as JavaScript. In Salesforce, that model can break down. Visualforce pages, for instance, can embed JavaScript and external libraries inside markup, and those embedded risks are often invisible to scanners that only look at the outer file type.
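As a hypothetical example (the page, controller, and static resource names are invented), the Visualforce markup below pulls in an external library and runs inline JavaScript. A scanner that only applies JavaScript rules to .js files never evaluates either.

```html
<!-- invoiceSearch.page: illustrative only -->
<apex:page controller="InvoiceSearchController">
    <!-- Third-party library served from a static resource: its version, and any
         known CVEs against it, are invisible to extension-based scanning -->
    <apex:includeScript value="{!URLFOR($Resource.jquery_1_11, 'jquery.min.js')}"/>

    <script>
        // Inline JavaScript living inside .page markup, writing a
        // user-controlled URL parameter straight into the DOM
        var recordId = '{!$CurrentPage.parameters.recordId}';
        document.getElementById('detail').innerHTML = recordId;
    </script>

    <div id="detail"></div>
</apex:page>
```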
Declarative and metadata-driven changes present a similar challenge. Flows, permissions, and configuration frequently drive real production incidents, yet they sit outside the scope of many traditional static analyzers. Dynamic tools have their own blind spots as well. They can reveal that something behaves unsafely once deployed, but not always where or why the risk exists within a specific pull request or set of changes — and by the time dynamic testing runs, fixes are usually more disruptive and expensive.
AI amplifies these limitations. When change volume rises, the cost of noisy scans rises with it. When code is generated faster than before, context-blind checks miss more subtle issues. Scanning remains necessary. But without a deterministic review layer to enforce what scans find, prevention becomes inconsistent.
The tools Salesforce teams typically use
Most Salesforce teams begin with PMD for Apex because it's open source, widely adopted, and effective for standard rule enforcement when configured to suit the org. Salesforce Code Analyzer has grown in popularity as teams standardize on Salesforce CLI and VS Code workflows, offering a consolidated way to run multiple static engines. Teams looking for deeper Salesforce-specific rules often use platforms like CodeScan (AutoRABIT), while others adopt broader analysis environments. Larger enterprises may also layer in heavyweight static analysis tools to meet organization-wide security program standards.
Each of these tools can add real value. Their shared limitation is not capability, but placement. They surface issues. They don’t, on their own, ensure those issues are acted on consistently. That consistency only comes when scanning is tied into a deterministic review gate inside CI/CD.
Why Gearset Code Reviews fit the full Salesforce DevOps lifecycle
Most static analysis tools exist adjacent to delivery. They generate findings and rely on humans to interpret and enforce them. That model can work at low volume. It doesn’t scale cleanly in enterprise environments, and it breaks down further when AI accelerates output.
Gearset’s Code Reviews is designed as a deterministic governance layer built into the Salesforce DevOps lifecycle — not just to prevent issues, but to help teams continuously improve. It understands Salesforce code and metadata together, which is where risk actually manifests in real orgs, and enforces standards in context rather than through generic rules alone. Guardrails stay up to date with every Salesforce release, so teams don’t need to manually track evolving platform patterns.
This is the practical difference between “having scans” and “having guardrails.” A scan tells you what might be wrong. A deterministic review layer prevents what is wrong from shipping. When AI is contributing code, that prevention layer is the only scalable way to keep governance reliable. You get the productivity gains of generative development, without compounding risk across the lifecycle.
Just as importantly, Code Reviews supports the “observe” phase of the DevOps lifecycle. It separates newly introduced issues from existing technical debt, allowing teams to focus reviews on what’s changed rather than being overwhelmed by legacy violations. Full branch scans provide visibility into the overall health of the codebase, making it possible to track technical debt trends over time and measure whether quality is genuinely improving. Reporting and metrics also give teams insight into contribution patterns and review outcomes, helping leaders understand how standards are being applied across contributors and where additional guidance or automation might be needed. Together, this turns code reviews from a one-time gate into a continuous feedback loop that supports safer delivery at scale.
Final takeaway
Static and dynamic analysis solve different problems, software composition analysis protects a different surface area again, and mature Salesforce DevOps teams use all three — because each closes a different gap in the lifecycle. But none of those practices prevent unsafe releases on their own. Prevention happens in the code review layer. If scanning isn’t enforced through deterministic reviews embedded in CI/CD, it remains advisory rather than preventive.
As Salesforce delivery scales — through parallel development, expanding contributor bases, or AI-generated change — the teams that stay fast and safe are the ones that layer static analysis, dynamic testing, and deterministic code reviews throughout their DevOps lifecycle, so risk is caught early, validated later, and governed consistently.
Next steps
If you want to strengthen the way you run static analysis in Salesforce, start with our free Static Code Analysis and PMD course on DevOps Launchpad.
If you need deterministic, Salesforce-specific guardrails that keep AI-generated code safe across your full CI/CD lifecycle, explore Gearset’s Code Reviews.