
Sam Crossland · 27th February 2025

How to use Salesforce Code Analyzer for better code quality & security

In spring 2024, Salesforce released their official GitHub action on the GitHub marketplace, which made integrating your version control system (VCS) with the Salesforce Code Analyzer (SFCA) tool simple — but what if you're running on a different VCS and still want to leverage the scanner? In this blog post, we’ll cover everything from the basics of Static Code Analysis, to understanding the benefits of using Salesforce’s Code Analyzer (SFCA), and how to integrate it into your Salesforce DevOps pipeline if your team uses Azure DevOps (ADO).

What is Static Code Analysis and why does it matter for Salesforce?

In any software development process, ensuring code quality is essential — but catching issues before they reach production can be challenging. That’s where Static Code Analysis (SCA) comes in. SCA is a method of checking your source code without actually executing it, allowing teams to identify potential quality issues, errors, and security vulnerabilities early in the DevOps lifecycle. By running a code analyzer scan on your Salesforce code, you can detect issues at various stages of the development lifecycle, helping to prevent bugs and improve the overall health of your codebase.

Static analysis tools use features like custom rules and rulesets that enable teams to set their own standards for acceptable code quality. Teams can adjust settings like severity thresholds (where a range is used instead of a boolean “good” or “bad” verdict), choose which files to scan, and determine how scans run at different points in the development cycle. Whether you only scan selected files or an entire managed package, these SCA tools produce lists of potential problems, enabling teams to act before issues reach production.

For Salesforce development, where multiple languages and metadata types are involved, implementing both full and delta scans of your Salesforce code is crucial. Regular scans not only identify existing technical debt but they also help establish consistent coding standards across your team. By integrating SCA into your DevSecOps strategy, you ensure that code and security reviews become a natural part of your CI/CD pipeline, strengthening your overall DevOps process.

What are the benefits of SCA for Salesforce teams?

As we’ve already touched on, using SCA tools in your Salesforce development lifecycle provides a range of benefits that enable you to:

  • Detect bugs, errors, and vulnerabilities early — before they reach production
  • Improve code consistency by enforcing team-wide code standards
  • Improve code quality by proactively identifying quality issues
  • Improve and pass security reviews by integrating DevSecOps best practices
  • Meet regulatory compliance requirements with custom rules
  • Reduce technical debt by addressing historic and future code issues
  • Pass the AppExchange Security Review Wizard and get managed packages approved for listing on the Salesforce AppExchange

With tools like code explorers and code editors, especially those offering a command palette, developers can seamlessly integrate SCA into their daily workflows — easily initiating scans of selected files or entire repositories. By ensuring that every code submission meets your configured standards, you improve team collaboration, deliver higher-quality features, and enhance your Salesforce code’s long-term maintainability.

What is Salesforce Code Analyzer and why use it?

Salesforce Code Analyzer (SFCA) was originally released in 2020 as ‘Salesforce CLI Scanner’. It’s been developed over the years as an open source code quality scanner for various programming languages used across the Salesforce ecosystem, rebranding when v3.x was released in 2022. One of the main aims of the unified tool is to help teams write better, more performant, secure code, ensuring they “shift left” in their development cycle to find code quality issues earlier and tackle them quickly.

SFCA’s architecture uses various “engines” that focus on key programming languages and concepts relevant to Salesforce development. These engines check for quality and security issues, then bring the results together in an easy-to-read format, like a CSV file or HTML report. Examples of the engines include PMD for scanning Apex code, ESLint/RetireJS for analyzing JavaScript (such as LWCs or custom scripts), and Salesforce Graph Engine, which performs deeper data flow analysis of code structure to find potential security vulnerabilities.

Version 5 of SFCA is in Beta and offers key improvements for Salesforce teams, including:

  • Two new engines for analyzing Regex and Flows
  • A new configuration YAML file for easier global customization of SFCA
  • Greater flexibility in selecting specific rules from individual engines, including a fork of AppExchange 2GP rules
  • Significantly improved HTML reports with enhanced filtering and grouping options
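As a rough sketch of that new configuration file, a minimal `code-analyzer.yml` could look something like the following. The key names here follow the v5 Beta documentation and may change before GA, so treat this as illustrative only:

```yaml
# code-analyzer.yml (repo root): illustrative v5 Beta configuration.
# Key names follow the Beta docs and may change before GA.
rules:
  pmd:
    ApexDoc:
      severity: 4          # downgrade a rule your team treats as informational
engines:
  retire-js:
    disable_engine: true   # skip an engine you don't need in PR scans
```

Placing this file in the repository root lets every scan (local or CI) pick up the same global customizations.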

There are two other key changes to be aware of with the Version 5 release of SFCA:

  • Salesforce teams will need to update how they use command-line interface (CLI) arguments and selections, and ensure additional packages, like Python, are installed for smooth integration with your CI/CD pipeline.

  • The Graph Engine, which provides in-depth code analysis, isn’t available in Version 5 yet. But since it’s usually used to scan entire branches rather than small code changes in pull requests (PRs), it generally fits into a different stage of your development workflow. The Code Analyzer extension for Visual Studio Code also still uses Version 4, so developers running scans in their IDEs should be aware of the difference.
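To illustrate the command-line change, here is a hedged sketch of a pipeline step contrasting the two invocations. The v5 flag names shown follow the Beta docs and may differ by the time you read this:

```yaml
# Illustrative only: flag names follow the v5 Beta docs and may change.
steps:
  - script: |
      # v4-style invocation (sfdx-scanner):
      #   sf scanner run --target "force-app" --format csv
      # v5-style invocation (code-analyzer):
      sf code-analyzer run --workspace "force-app" --view detail
    displayName: 'Run Salesforce Code Analyzer (v5 Beta)'
```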

Using SFCA with Azure DevOps: what you need to know

Now that we’ve covered what SFCA is, let’s look at how to integrate it with Azure DevOps (ADO). Unlike GitHub, ADO doesn’t offer an official integration with SFCA, so we’ll need to set up a custom pipeline and a corresponding YAML file. This configuration will tell ADO when to run Code Analyzer (like triggering on a PR to a specific branch) and where to output the results for review. These results will help you assess whether your code meets the necessary quality and security thresholds before you merge your PR.

There are several important steps to follow — plus a few nuances in ADO that differ from other version control providers. Before we get started, make sure you have the appropriate permissions to make changes across your repository and set up pipelines.

In this section, we’ll walk you through:

  • Key prerequisites for setting up the integration
  • A link to a ready-to-use YAML file that simplifies running scans on pull requests and collecting results
  • Important considerations to keep in mind when working with ADO

With Version 5 set to completely replace Version 4 soon, this blog post focuses on the latest version of SFCA. However, keep in mind that Version 5 is still in Beta, so some features may change before the final release.

So let’s dive into the setup process to get SFCA working seamlessly in your ADO environment.

Initial setup

CI trigger

Before setting up your pipeline, you’ll need to adjust a project-wide setting to prevent the pipeline from running on every commit to any branch. This ensures that the pipeline only triggers for pull requests (PRs) on your specified branches.

By default, Azure DevOps (ADO) triggers YAML pipelines for any code changes if no trigger section is defined. According to Microsoft:

“Today, if your YAML pipeline doesn’t specify a trigger section, it runs for any changes pushed to its repository. This can create confusion as to why a pipeline ran and lead to many unintended runs.”

To avoid unintended pipeline runs, you’ll need to enable a specific setting at the project level. Keep in mind that this is a project-wide change and may impact other pipelines within your ADO project.

How to enable the setting:

  1. Navigate to Project Settings in Azure DevOps.
  2. Go to Pipelines > Settings.
  3. Find and enable Disable Implied YAML CI Trigger.
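You can also make the intent explicit inside the pipeline YAML itself by declaring an empty CI trigger, so the pipeline is only ever invoked by a PR Build Validation policy:

```yaml
# Explicitly disable CI runs for this pipeline; it will only be
# invoked by the Build Validation policy on pull requests.
trigger: none
```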

Code coverage check

The next step is to turn off a feature that can cause never-completing status checks to appear on pull requests (PRs) in Gearset, or in other tools watching the PR for checks.

Status checks in Gearset

Azure DevOps (ADO) includes a Code Coverage feature for certain programming languages (like .NET), which provides detailed scan reports during builds. This feature is enabled by default across your repository. When it encounters unsupported code (such as Salesforce metadata), it may briefly create a status check on PRs that disappears afterward but remains visible as an incomplete check.

To stop these unnecessary checks from appearing:

  1. Add a YAML configuration file to disable code coverage triggers.
  2. Place the file in the root of your repository (at the same level as sfdx-project.json or README.md).
  3. Ensure the YAML file is present on:
    • The main/master branch (essential for consistent behavior)
    • Any long-standing branches to prevent PR-related issues

Follow the detailed steps here to disable these checks from running unnecessarily, using the example YAML file provided here. Remember, this will disable code coverage across the board for the repo, so assess any other ADO pipelines that may be affected before changing these settings.
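For reference, Microsoft documents this settings file under the name `azurepipelines-coverage.yml`, placed in the repository root. A sketch of the content that switches off the PR coverage status is below; check Microsoft's docs for the current key names:

```yaml
# Code coverage settings file in the repository root.
# Turns off coverage comments and diff coverage status on PRs.
coverage:
  status:
    comments: off
    diff: off
```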

Get SFCA running in ADO

Configuring the ADO Pipeline

Now let's talk about the steps you need to take to activate SFCA scanning in ADO.

Step 1: Agents

Ensure you have available Agents to conduct the scanning, either in the Cloud or self-hosted. You can verify this in Project Settings > Agent Pools to see if any rows appear in the Agents tab. You’ll need to have at least one agent flagged as “online” for the pipeline to run.

Agent pools in Azure DevOps
Azure Pipelines in Azure DevOps

If you don't have dedicated or paid Agents configured, you may receive this error message as you get further along the setup process:

“No hosted parallelism has been purchased or granted.”

This is likely due to a change that Microsoft made in disabling free grants of runners, as explained here. You’ll need to submit a request and wait for the runner to become available (which could take up to 2-3 business days) before continuing to set up SFCA for ADO. This will only give you one runner, so consider whether you have any parallel needs that require more.

You’ll also need to ensure that, at an organization level, you have an actual “tier” available for Microsoft-hosted runners, or a self-hosted runner available before continuing.

Organization Settings in Azure DevOps

Step 2: Setup a new pipeline

Create a new pipeline in ADO by heading to your project. Select Pipelines > New Pipeline and connect it to your Salesforce repository (Azure Repos Git).

Create a new pipeline in Azure DevOps
Link your Salesforce repository to Azure DevOps (Azure Repos Git)
Select the repository for your new pipeline

Make sure you select a Starter Pipeline to build a new one up and follow the steps below. Don’t click Save & Run until you’ve completed the next step.

Configure your pipeline

Step 3: Review your pipeline YAML

Set this up as a fresh YAML file. This file is the “engine” behind SFCA running and posting its status back to the PR.

Review your pipeline YAML

The most up-to-date YAML file configuration can be found in this repository. Copy the contents of the SFCAPipeline-v5.yml file and adjust it using the guidance below.

PR trigger configuration

At the top of the YAML file, the PR section defines the branches that trigger the pipeline. In ADO, PR triggers rely on Build Validation rules in Branch Policies, not just the YAML. Updating the PR section keeps things consistent across systems, but without matching branch policies the pipeline won’t trigger, and referencing a non-existent branch may cause an error when saving.
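As a sketch, with the branch names as placeholders for your own, the PR section might look like this:

```yaml
# Branches to watch for PRs. In ADO this only takes effect when a
# matching Build Validation policy exists on the target branch.
pr:
  branches:
    include:
      - uat
      - main
```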

Key variables to configure

Several default variables are defined at the top of the YAML file, but two are particularly important for controlling how violations are handled:

  • STOP_ON_VIOLATIONS: Determines whether the pipeline should fail if the violation threshold is exceeded.
  • MAXIMUM_VIOLATIONS: Sets the number of allowable violations before the pipeline marks the build as failed. The default value is 10.

Adjust these variables to align with your team’s quality and security standards.
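In the YAML, these two variables sit alongside the other defaults. A minimal sketch, using the default threshold mentioned above:

```yaml
variables:
  STOP_ON_VIOLATIONS: 'true'  # fail the build when the threshold is exceeded
  MAXIMUM_VIOLATIONS: 10      # allowable violations before the build fails
```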

File scanning configuration

The local variable $RelevantFilesForScanning specifies which file types the scanner will check (e.g. Apex, Visualforce Pages, LWC files). You can expand the file extensions to scan additional file types or use more engines. Be aware that adding more file types may increase the number of files scanned in each PR and impact your pipeline’s performance.
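A hypothetical sketch of how a step can build such a list, diffing the PR source against the target branch and keeping only matching extensions (the extension list here is an example; adjust it to match your own file):

```yaml
- script: |
    # Strip the refs/heads/ prefix to get the plain target branch name.
    TARGET=$(echo "$(System.PullRequest.TargetBranch)" | sed 's#refs/heads/##')
    # Requires full git history (see the shallow fetch step later in this guide).
    CHANGED=$(git diff --name-only "origin/$TARGET"...HEAD)
    # Keep only file types the scanner should check.
    RelevantFilesForScanning=$(echo "$CHANGED" | grep -E '\.(cls|trigger|js|html|page)$' || true)
    if [ -z "$RelevantFilesForScanning" ]; then
      echo "No scannable files in this PR, skipping analysis."
    fi
  displayName: 'Identify changed files for scanning'
```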

Installing required tools

The line sf plugins install @salesforce/sfdx-scanner@latest installs Salesforce Code Analyzer (v5.0.0-beta.2 at the time of writing) as per the repository here. There are multiple prerequisite installs included, such as Node.js, the Salesforce CLI, and Python, to ensure the virtual machine (VM) has all required packages.
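A self-contained sketch of those prerequisite steps, consistent with the versions mentioned in this post (the task names are standard Azure Pipelines tasks, but pin versions to match your own YAML):

```yaml
steps:
  - task: UsePythonVersion@0  # Python is needed by the v5 Flowtest engine
    inputs:
      versionSpec: '3.x'
  - task: NodeTool@0          # Node.js is a prerequisite for the Salesforce CLI
    inputs:
      versionSpec: '22.x'
  - script: |
      npm install --global @salesforce/cli
      sf plugins install @salesforce/sfdx-scanner@latest
    displayName: 'Install Salesforce CLI and Code Analyzer'
```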

Version compatibility and future updates

It’s worth bearing in mind that new SFCA releases may introduce compatibility issues or new features. As a result, it’s important to regularly monitor updates, especially as SFCA approaches general availability (GA). Similarly, make sure to review your YAML setup periodically to avoid any unexpected pipeline behavior.

Step 4: Save the pipeline YAML file to a new branch

In Azure DevOps, click Save & Run to save your new pipeline YAML file. When prompted, create a new branch for this change. When naming your branch, use a name that reflects the addition of a pipeline automation file (rather than Salesforce metadata) — e.g. “feature/SFCAv5-ADO-Pipeline-Setup”. For now, untick the pre-selected PR option during save. This YAML file will eventually be needed in all relevant branches, but we'll complete other setup steps first.

Save your new pipeline YAML file

Step 5: Configure branch policies and build validation

To ensure your pipeline runs as intended, you’ll need to set up a Branch Policy on the branch where you want the pipeline to trigger — for example, your user acceptance testing (UAT) branch. This can apply to one or multiple branches, depending on your version control structure and testing requirements.

While the PR trigger in the YAML file tells it which branches to watch, Azure DevOps requires a Build Validation Policy for the pipeline to trigger on pull requests. This differs from GitHub, where the YAML trigger alone is usually enough.

To set up your Build Validation Policy, navigate to your desired branch (e.g. UAT) in Azure DevOps. Go to Branch Policies and locate the Build Validation section where you can add a new Build Validation Policy by following these steps:

  • Select your newly created pipeline.
  • Set it as Required to ensure the pipeline must pass before the PR can be merged.
  • Enter a clear display name (e.g. SFCA Code Analysis check) for easy identification.
  • Ensure the trigger is set to run automatically for every PR targeting this branch.

By default, PR build results are retained for 10 days. Make sure to review your retention policies to adjust this period if longer-term build history is required.

Edit your build policy

Step 6: Verify your file structure

In the parent folder of your branch, make sure the file structure includes the necessary YAML files to enable SFCA scanning and manage code coverage. These files should sit alongside your Salesforce metadata.

Verify your file structure

You’ll need to make sure you have the following files:

  • SFCAPipeline-v5.yml — which runs the SFCA scanning process
  • Codecoverage.yml — which prevents unnecessary code coverage checks from running
  • Your existing Salesforce metadata files (e.g. classes, objects, triggers, etc.)

Ensuring these files are placed correctly allows the pipeline to function as intended, with SFCA scanning and code coverage configurations working seamlessly.

Step 7: Verify your pipeline settings

To avoid errors like “configuring the trigger failed, edit and save the pipeline again”, ensure your ADO pipeline settings are pointing to the correct YAML file (e.g. SFCAPipeline-v5.yml) rather than the default configuration. You can check and update the YAML file path with the following steps:

  • In Azure DevOps, navigate to Pipelines.
  • Click on your created pipeline to open it.
  • In the top-right corner, click the three dots (⋮) and select Settings.
  • In the YAML file path field, ensure it points to the correct file (e.g. SFCAPipeline-v5.yml).
Verify your pipeline settings

Step 8: Final configuration adjustments

With your ADO pipeline set up, there are two final adjustments to ensure it runs successfully.

Update permissions for status check pushback

To allow the pipeline to post the final status check back to the PR, the Build Service User needs appropriate permissions. To update the permissions follow these steps:

  • Navigate to Project Settings in Azure DevOps.
  • Go to Repositories and select the repository where the pipeline exists.
  • Click the Security tab.
  • Scroll down and find the Build Service User (listed near the bottom).
  • Locate the Contribute to pull requests permission and set it to Allow.
Update your permissions for status check pushback

Without this permission, the pipeline won’t be able to push its status back into the PR, which could cause incomplete checks or workflow interruptions.

Adjust shallow fetch settings

By default, Azure DevOps uses shallow fetch with a depth of one, which retrieves only the most recent commit. But SFCA’s git diff between branches requires the full commit history to function properly. You’ll need to disable shallow fetch to pull all commits, using the following steps:

  • Navigate to Pipelines > Recent pipelines in Azure DevOps.
  • Find your pipeline, click the three dots (⋮) on the right, and select Edit.
  • In the pipeline editor, click the three dots (⋮) in the top-right corner and select Triggers.
  • Go to the YAML tab, then click Get sources.
  • Scroll down and untick the Shallow fetch checkbox, setting the fetch depth to 0.

Setting the fetch depth to 0 ensures the pipeline retrieves the full history, which aligns with best practices used in community repositories like Mitch Spano’s SFCA examples.
Adjust shallow fetch settings
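If you would rather keep this in the YAML than in the classic Triggers UI, the equivalent is an explicit checkout step with full depth:

```yaml
steps:
  - checkout: self
    fetchDepth: 0  # 0 fetches the full history, required for the git diff
```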

Step 9: Test the pipeline with a pull request

Create a PR from your feature branch (containing the new YAML file) into your target branch (e.g. UAT). This allows you to test the pipeline setup against branches containing legitimate Salesforce code. Once you’re happy with the result, merge the PR into your target environment branch. Then create and test at least two additional PRs against the updated target branch:

  • Test PR 1: Contains valid files for scanning (e.g. .cls, .js, .html) — expect the pipeline to run and return scan results
  • Test PR 2: Contains non-scannable files (e.g. .xml) — the pipeline should recognize there’s nothing to scan and skip unnecessary steps, meaning no report will be generated

The goal of these tests is to confirm that the Azure DevOps pipeline behaves as expected before rolling out the PR to other branches.

Step 10: Roll out to all relevant branches

Once you’re satisfied with the pipeline’s performance, sync the target branch forward to other branches in your development workflow — continue syncing until you reach your main/master branch. You should also propagate changes to all remaining branches in your codebase to ensure consistency.

Seeing the results

Viewing results in Azure DevOps

Once your ADO pipeline is configured, you’ll be able to see the status checks directly within the PR view in ADO by navigating to the Checks section. Here, you’ll see two types of checks related to the pipeline:

  • Build Validation: the main pipeline trigger that remains visible throughout the PR lifecycle, playing a key role in determining if the PR can be merged.
  • Code Analysis Completed: appears if valid file extensions (e.g. .cls, .js, .html) were scanned, and only after the pipeline completes and posts results back via the API.
Viewing your checks and results in Azure DevOps

If you’re using Gearset, you can see extra information about the Branch Policies you have in place, like minimum reviewers or comment requirements. It will also split out the “Build Validation” rule from the extra PR status check (with a “Details” hyperlink to the side) separately so you can click straight through to the build results.

Viewing your checks and results in Gearset

Exploring pipeline run details

Clicking on a status check takes you to the specific pipeline run in the Pipelines section of ADO. There, you’ll find metrics on build duration and completion time, details on the linked repository and associated PR, and any artifacts generated after the run — displayed as “2 published” in the Related section.

Exploring pipeline run details in Azure DevOps

Accessing code analysis results

You can then click into Published Artifacts to see the folder/file breakdown of all files changed in the PR, as well as the Code Analyzer results file in HTML format. This can be downloaded and viewed in your browser for filtering, investigation, and further analysis.

Published Artifacts in Azure DevOps
Salesforce Code Analyzer report

What if no valid file types were found to scan?

The ADO pipeline will run for every PR raised against your target branch. However, filtering logic built into the pipeline ensures that SFCA only runs if it detects relevant file types such as .cls, .js, or .html. If no valid files are found, the pipeline will still execute and produce a result in ADO, but no extra “Code Analysis” status check will appear on the PR.

No Code Analysis status check on a PR

If you look into that specific pipeline run, you'll see that no artifacts were generated, and the execution time taken will likely be just a few seconds rather than several minutes. This happens because the first part of the YAML file checks for relevant file extensions and then skips the remaining steps if no valid files are found.

No artifacts generated in the pipeline run

Caveats and evolutions

Caveats

This is an unofficial YAML file that I’ve written as a member of the community and hosted in a public repository as a helpful aid. Please review the logic carefully and use it at your own risk, as per the relevant licence, and direct any bugs/issues to the GitHub repository.

As mentioned above, the variables near the top of the file (primarily STOP_ON_VIOLATIONS and MAXIMUM_VIOLATIONS) will need changing in accordance with your tolerance for failures, as they determine whether the pipeline passes or fails and, in turn, whether you can merge the PR.

There is currently a mix of static and dynamic VM/package versions in use (ubuntu-latest, the latest Salesforce CLI/Code Analyzer, and Node.js v22.x). This could cause issues as new package versions are released, and will need regression testing.

Evolutions

This pipeline is currently designed to work exclusively with PRs. It calculates the difference between the source and target branches, scanning only the files changed in the PR. As a result, if you try to run the pipeline manually, it will “pass” successfully but won’t produce meaningful results since no file differences are detected.

Future versions could enable full codebase scans, which would be useful for periodic health checks. These runs could leverage more resource-intensive engines, like the Graph Engine, to provide comprehensive analysis beyond PR changes.

With SFCA v5, you can now use the severity-threshold flag. This allows the pipeline to fail if violations at or above a certain severity level are detected. For example, setting the threshold to “2” would fail the run if any high or critical violations occur. This feature could replace the current maximum violations system, providing a more targeted approach to quality enforcement.
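As an illustration of that approach, a future pipeline step could invoke the scanner with the flag directly. The flag names follow the v5 Beta docs and may change before GA:

```yaml
# Illustrative v5 Beta invocation; fails the step when violations of
# severity 2 (high) or 1 (critical) are found.
- script: |
    sf code-analyzer run \
      --workspace "force-app" \
      --severity-threshold 2 \
      --output-file "sfca-results.html"
  displayName: 'Fail on high or critical violations'
```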

To reduce pipeline execution times, you could pre-create a virtual machine (VM) with essential tools like Node.js, the Salesforce CLI, and SFCA already installed. This would speed up the setup process during runs. However, managing and keeping the pre-built VM up to date introduces additional maintenance overhead, which is beyond the scope of this guide.

Considerations

We’ve seen how SFCA can be integrated into your CI/CD processes inside ADO, but what are some of the key considerations when it comes to using SFCA in comparison to some others in the marketplace?

Managing delta file scans

One of SFCA’s current limitations is the lack of native support for scanning only the delta files in a pull request (PR). While you can specify target files to scan through CLI arguments, determining which files to include requires additional logic — like the filtering steps outlined earlier. This can be challenging when your priority is identifying newly introduced issues rather than addressing existing technical debt across the codebase.

Strategically placing SFCA in your development workflow

Adopting a “shift left” approach, integrating security and quality checks earlier in the development cycle, is highly recommended. But the timing and placement of SFCA scans are crucial to maximizing their value without disrupting workflows.

Running scans locally in developers’ IDEs can be highly effective for catching issues early during development. But local scans may lack centralized reporting, making it harder to track issues across teams. Incorporating multiple quality gates is a best practice:

  • PR-based scans (as covered in this guide) help prevent issues from reaching shared branches
  • One-off or scheduled scans across entire codebases can catch broader issues and support periodic code health assessments

Selecting the appropriate engines for each scan type — based on resource usage and scan focus — ensures you strike the right balance between thoroughness and efficiency.

Interpreting and acting on scan results

Collecting scan results is only the first step — knowing what to do with the data is equally important. You should consider:

  • Who is responsible for reviewing and analyzing scan outputs?
  • Who should be notified of identified issues?
  • Should the issues be incorporated into existing user stories or handled as separate tasks?

With SFCA, each run produces individual reports that require manual storage, interpretation, and tracking. Over time, this manual process can become burdensome, especially for teams managing multiple scans across large codebases.

Planning for ongoing maintenance and upgrades

SFCA, like any evolving tool, requires regular maintenance to stay up to date with new versions and features. For instance, the release of v5 introduced the Flowtest engine, which added Python as a prerequisite. It’s likely that future versions will include additional dependencies. Teams should plan for:

  • Version upgrades and compatibility checks
  • Regression testing to ensure new features don’t disrupt existing pipelines
  • Monitoring release notes for new prerequisites or changes to scanning capabilities

Proactively managing these updates helps prevent unexpected disruptions in your development process.

What next?

Now that you’ve seen how to set up SFCA in Azure DevOps — along with the prerequisites, quirks, and key considerations — the next step is to try it out for yourself! Hands-on experimentation is the best way to understand how SFCA fits into your development workflow.

Key resources to help you get started

  • This blog — treat this as your setup guide

  • Repository link — where you can find the YAML file to bring everything together

  • Salesforce Code Analyzer extension v5 documentation and CLI commands

This is an open-source community project, so your feedback is invaluable. If you have suggestions, improvements, or tweaks to enhance the configuration, I’d love to hear from you at [email protected]. To learn even more about Static Code Analysis for Salesforce, you may also find the DevOps Launchpad Static Code Analysis and PMD course useful. Good luck!