
Mohammed Nadeem · 10th October 2025
From afterthought to safety net: my journey implementing Salesforce backup
In this post, Mohammed Nadeem (Gearset DevOps Leader) shares how his team redefined backup as a cornerstone of their release process. He reflects on the misconceptions they left behind, the lessons learned, and how this shift now empowers their developers and release managers with lasting confidence in every deployment.
For even more insights, watch Nadeem’s full “DevOps success in practice” webinar: How to bake backup into your Salesforce releases.
Why backup matters in Salesforce
One of the questions I hear most often is, “Salesforce already keeps my data safe, so why do I need a backup?” It’s a fair question. Salesforce is secure and reliable — but it’s also built on a shared responsibility model. In other words, Salesforce keeps the platform running; you’re responsible for what happens inside it. And that’s where the real risks live.
Think large, complex deployments that touch core objects and permissions. Hotfixes that skip a testing cycle. Misconfigurations or mass updates that don’t go to plan. Or the all-too-human mistakes — like the time 900-plus Apex classes vanished from a shared sandbox.
The faster you ship, the more operational risk you build up. The best decision we made was to make backups part of every release. That way, if something goes wrong, we have a time machine ready to bring our data and metadata back to where they belong — fast.
Misconceptions we had to unlearn
Even experienced teams fall into the same traps when it comes to backup. In Salesforce DevOps, the most dangerous risks often come from false confidence — assuming that because something sounds safe, it is safe. Over time, we learned that a few of the “obvious truths” about backup weren’t true at all. The good news? Once you spot these misconceptions, they’re easy to correct — and doing so makes your entire delivery process stronger.
1. “We can always just refresh a sandbox.”
On paper, that sounds simple. In reality, it’s one of the most operationally heavy processes you can take on during an incident. A refresh means masking PII (scrubbing out your customers’ real data), reconfiguring endpoints, aligning preview and non-preview orgs, and coordinating multiple teams — all while your users are waiting for a fix. It’s rarely acceptable in a live situation. A sandbox refresh might give you a clean start, but it won’t bring back what was lost in production. Backups, on the other hand, give you the ability to restore what actually matters — the data and metadata your business runs on — without the downtime.
2. “Data and metadata are separate concerns.”
In Salesforce, they’re two sides of the same coin. A seemingly small metadata change — a new picklist value, record type, or validation rule — can instantly block a data restore if your org’s structure no longer matches the snapshot. Treating them as separate systems creates hidden dependencies that only surface when something breaks. The real shift came when we started thinking of Salesforce configuration and business data as one living ecosystem. Once we backed them up together, we stopped treating restores as isolated technical exercises and started seeing them as part of maintaining org integrity.
3. “Backing up your orgs is enough.”
It’s not. Backups are only as good as your ability to restore them. If you never test that process, you’re building confidence on an assumption. The first time you attempt a restore shouldn’t be when production is on fire. We now run restore drills as part of our DevOps routine — the same way teams rehearse deployment rollbacks or disaster recovery. It’s a small investment that pays for itself in peace of mind.
Where our journey began
Our team was growing fast: headcount more than doubled within a few months, and our release cadence grew just as quickly. That kind of scale is exciting, but it also exposes the gaps you didn't know were there. For us, it was the lack of a clear disaster recovery plan. We'd built strong processes for deploying and testing, but when something went wrong, we were still relying on goodwill and manual fixes. It worked — until it didn't. So we went back to basics.
First, we focused on building shared understanding. You can't build a culture of security and resilience if only a few people understand what DevOps really means in a Salesforce context. We leaned on the accessible, vendor-neutral training here on DevOps Launchpad, which grounded our team in the fundamentals and gave everyone a common backup and DevOps language. It's much easier to talk about recovery, risk, and velocity when everyone's speaking the same technical and cultural dialect.
Next, we socialized the risk narrative. The idea was simple: In Salesforce DevOps, you can’t innovate at speed unless you can recover at speed. For us, this reframed backup and recovery from a compliance checkbox to a business enabler — something that protects the freedom to ship fast, experiment safely, and learn quickly.
Finally, we defined what “good” looks like. For us, that meant pre- and post-deployment snapshots, tested restores, and clear, auditable evidence that we could recover both data and metadata. The result wasn’t just stronger technical safeguards — it was a mindset shift. Backup stopped being a contingency plan and became part of how we build confidence into every release.
Evaluating solutions: what mattered in practice
When it came to choosing a backup solution, we didn’t start with a feature checklist — we started with reality. Our team ran a hands-on proof of concept across some of the ecosystem’s best Salesforce backup tools, focusing less on what sounded good in demos and more on what actually held up during live releases. A few criteria quickly stood out once we got into the weeds.
1. Data and metadata parity
Salesforce doesn’t separate configuration from context, so our backup tool couldn’t either. We needed one place to back up and restore both data and metadata, with a consistent operator experience. That parity meant we could handle complex recoveries without juggling separate tools or processes — especially under pressure.
2. Automation control
Restores are rarely straightforward. Validations, Flows, and triggers can all block progress when you’re trying to fix a broken org. We needed full control — the ability to disable automations once and keep them off across iterative restore attempts, then re-enable them deliberately. That single feature removed an enormous amount of friction and stopped us from fighting the platform mid-incident.
3. High-frequency backups on critical objects
Not all data is equal. Core objects like invoices, payments, and key customer records deserve a tighter recovery point objective (RPO) than ancillary ones. Hourly backups for a select set of business-critical objects struck the right balance between safety and efficiency.
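To make that tiering idea concrete, here's a minimal Python sketch of how you might express different RPO targets per object and work out which ones are overdue for another backup. This isn't Gearset configuration — the object names, intervals, and function are illustrative assumptions only.

```python
from datetime import datetime, timedelta, timezone

# Illustrative RPO tiers: business-critical objects get an hourly target,
# everything else falls back to the daily org-wide backup.
# Object names and intervals are examples, not a recommendation.
RPO_TARGETS = {
    "Invoice__c": timedelta(hours=1),
    "Payment__c": timedelta(hours=1),
    "Account":    timedelta(hours=1),
    "default":    timedelta(days=1),
}

def objects_due_for_backup(last_backup_times: dict[str, datetime]) -> list[str]:
    """Return the objects whose last backup is older than their RPO target."""
    now = datetime.now(timezone.utc)
    due = []
    for obj, last_run in last_backup_times.items():
        target = RPO_TARGETS.get(obj, RPO_TARGETS["default"])
        if now - last_run > target:
            due.append(obj)
    return due

# Example: Invoice__c was last backed up two hours ago, so it's overdue.
last_runs = {"Invoice__c": datetime.now(timezone.utc) - timedelta(hours=2)}
print(objects_due_for_backup(last_runs))  # ['Invoice__c']
```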
4. Operational ergonomics
Finally, we looked for something often overlooked in technical evaluations: usability under pressure. Clear comparisons between pre- and post-deployment snapshots, readable logs, and an intuitive UI make a huge difference when release engineers are diagnosing issues in real time. Ergonomics like these turn stressful recoveries into repeatable, controlled operations.
When we put these criteria through a proper proof of concept, Gearset stood out — not because of one killer feature, but because the operating experience matched the realities of Salesforce delivery.
Implementation approach: start small, learn fast
We treated implementation like any change to critical infrastructure: tight scope, short feedback loops, measurable progress. Here are a few tips I learned along the way.
1. Start where risk is highest: production
We began in prod to deliver early, visible wins and build trust. For any high-risk change — especially Big Bang releases — we took a fresh pre-deployment snapshot so we could restore quickly if needed, then captured a post-deployment snapshot to validate what actually changed.
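As an illustration of what "validate what actually changed" can look like, here's a minimal sketch that diffs two local metadata snapshot folders, for example retrieved source from before and after a deployment. It uses only the Python standard library and isn't how Gearset performs its comparisons; the folder names are assumptions.

```python
import filecmp

def report_snapshot_diff(pre_dir: str, post_dir: str, prefix: str = "") -> None:
    """Recursively list files added, removed, or changed between a
    pre-deployment and a post-deployment metadata snapshot folder."""
    cmp = filecmp.dircmp(pre_dir, post_dir)
    for name in cmp.left_only:
        print(f"removed:  {prefix}{name}")
    for name in cmp.right_only:
        print(f"added:    {prefix}{name}")
    for name in cmp.diff_files:
        print(f"changed:  {prefix}{name}")
    for sub, sub_cmp in cmp.subdirs.items():
        report_snapshot_diff(sub_cmp.left, sub_cmp.right, prefix=f"{prefix}{sub}/")

# Hypothetical local folders holding metadata retrieved before and after a release.
report_snapshot_diff("snapshots/pre-deploy", "snapshots/post-deploy")
```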
2. Practice restores until they’re boring
Backups only matter if you can restore them. We scheduled simulated restore drills with clear owners and timeboxes. Every drill produced a small runbook update (what we learned, what we'd change next time) and a measured RTO (recovery time objective), so we could track improvement over time.
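Here's a hedged sketch of what a drill runner can look like: time the restore steps, treat the elapsed time as the measured RTO, and append the result to a simple runbook log. The restore step is a placeholder you'd swap for your actual sandbox restore procedure, and the log format is just an assumption for the example.

```python
import json
import time
from datetime import datetime, timezone
from pathlib import Path

def run_restore_drill(restore_steps, drill_name: str, owner: str,
                      log_path: str = "restore-drills.jsonl") -> float:
    """Execute the drill, measure elapsed time (our proxy for RTO),
    and append the result to a simple runbook log."""
    start = time.monotonic()
    restore_steps()  # placeholder: your actual restore procedure goes here
    rto_minutes = (time.monotonic() - start) / 60
    entry = {
        "drill": drill_name,
        "owner": owner,
        "date": datetime.now(timezone.utc).isoformat(),
        "measured_rto_minutes": round(rto_minutes, 1),
    }
    with Path(log_path).open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return rto_minutes

# Example drill against a sandbox, with a stand-in restore step.
run_restore_drill(lambda: time.sleep(2), "Sandbox restore drill", "release-team")
```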
3. Expand the safety net to where work really happens
A real incident in lower environments made it obvious: creativity needs a safety net. We extended backups to every sandbox — shared and per-developer — so experimentation, hotfixes, and spikes didn’t carry unnecessary risk. Not all orgs need the same cadence, but all orgs need coverage.
4. Codify readiness so it survives busy weeks
Good habits shouldn’t rely on memory. We embedded backup checks and pre/post snapshot steps into our release readiness checklist, alongside gates for testing and approvals. When the pressure is on, the checklist does the remembering for you.
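For illustration, here's a minimal sketch of the kind of backup gate we mean: before a release proceeds, check that a recent pre-deployment snapshot exists. The snapshot folder layout and the age threshold are assumptions for the example, not a Gearset feature or a prescribed convention.

```python
import sys
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumed convention: each pre-deployment snapshot is a timestamped folder
# under snapshots/, e.g. snapshots/pre-deploy-2025-10-10T09-30/.
SNAPSHOT_ROOT = Path("snapshots")
MAX_SNAPSHOT_AGE = timedelta(hours=4)

def latest_snapshot_age() -> timedelta | None:
    """Return how old the newest pre-deployment snapshot is, or None if there isn't one."""
    if not SNAPSHOT_ROOT.exists():
        return None
    candidates = sorted(SNAPSHOT_ROOT.glob("pre-deploy-*"), key=lambda p: p.stat().st_mtime)
    if not candidates:
        return None
    newest = datetime.fromtimestamp(candidates[-1].stat().st_mtime, tz=timezone.utc)
    return datetime.now(timezone.utc) - newest

# Release readiness gate: fail fast if the safety net isn't in place.
age = latest_snapshot_age()
if age is None or age > MAX_SNAPSHOT_AGE:
    print("Release gate failed: take a fresh pre-deployment snapshot first.")
    sys.exit(1)
print(f"Release gate passed: snapshot taken {age} ago.")
```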
The result: a rollout that built confidence in stages, created measurable operational gains, and made recovery a normal part of delivery, avoiding any last-minute scrambling.
Cultural change we observed
The biggest win wasn't just technical; it was cultural. When the team finally had a real safety net, everything else shifted.
1. Developers became bolder
With reliable backups and proven restores, the fear of breaking something faded. That freedom to experiment led to faster problem-solving, cleaner releases, and more creative solutions. Innovation thrives when the cost of failure isn’t catastrophic.
2. Release managers found breathing room
Instead of firefighting or rolling back manually, they focused on planning, verification, and continuous improvement. Pre- and post-deployment snapshots gave them clarity. Restores became predictable, not panic-inducing.
3. Stakeholders started trusting the process
Demonstrating that we could not only back up, but also restore quickly and reliably, built credibility. Auditors stopped asking for theoretical assurances — they saw the evidence for themselves.
This shift lined up perfectly with our wider DevOps goals: security, resilience, and continuous improvement. As one DORA insight puts it: don’t aim to be an “elite team” — aim to be a team that’s getting better at getting better.
Measurable impact
When we talk about cultural change, it’s easy to overlook the measurable wins that make it stick. In our case, the results were clear — and quantifiable.
1. Rollback time dropped from days to hours
Before implementing Gearset Backup, recovery meant digging through repositories, rebuilding environments locally, and manually reloading data with Data Loader — often taking entire days. It was draining, error-prone, and deeply dependent on whoever remembered the last fix.
Once we had tested, documented restore paths, that chaos turned into choreography. Restores became rehearsed, repeatable, and predictable. If an incident happened, the team didn’t scramble — they executed. What used to consume a weekend now took an afternoon.
2. Automation reduced toil and stress
We moved from reactive backups to proactive, automated schedules. Daily and weekly org-wide backups ran silently in the background, with hourly backups for critical business objects like invoices and payment records. The result wasn’t just better coverage — it was peace of mind. The team didn’t have to wonder whether backups were running; Gearset took care of it.
3. Audit readiness went from reactive to effortless
When auditors or stakeholders asked for evidence, we no longer had to piece together logs or recreate timelines. We could produce concrete proof on demand — snapshots, restore test results, and documented RPO/RTO assumptions. That transparency built trust both inside and outside the team. Compliance conversations shifted from “how do we prove this?” to “how can we improve this?”.
Together, these outcomes signaled a maturity milestone for the team. Backup wasn’t just insurance anymore — it was infrastructure for confidence. The systems ran smoother, the engineers slept better, and every new release carried the quiet assurance that, no matter what happened, recovery was just a few clicks away.
Best-practice checklist
Over time, our process settled into a rhythm — a few principles that made backup second nature, not a side project.
1. Start in production, start small
Protect what matters most first. Begin with your production org to prove value early, earn leadership confidence, and refine your playbook before scaling. Once the process feels routine, expand coverage to sandboxes and developer environments. It’s easier to scale reliability than to retrofit it.
2. Test restores regularly
Don’t wait for a real incident to test your plan. Schedule restore drills regularly, measure your recovery time objective (RTO), and record what you learn. Every exercise should leave behind a slightly sharper runbook, with clearer ownership and fewer surprises. If backups build confidence, testing keeps it honest.
3. Build backup gates into your releases
Treat backup as part of the release itself. Add pre- and post-deployment snapshot steps to your release readiness checklist, alongside your testing and approval gates, so the safety net is in place before every high-risk change rather than relying on memory when the pressure is on.
Nadeem's team learned that embedding backup into their DevOps practice changed more than process; it changed culture. Developers took bolder bets, release managers gained breathing room, and the organization built lasting trust in its systems. Because in Salesforce DevOps, one truth holds steady: you can't innovate at speed unless you can recover at speed.