What ‘good’ looks like for backup in 2026, and how to protect your last line of defence.
Ransomware attackers know that backups are your escape route. That is why modern attacks specifically target backup systems, encrypting or deleting them before deploying ransomware on production infrastructure. The pattern is consistent: compromise credentials, locate backup systems, destroy recovery options, then encrypt everything else.
A backup that can be reached and destroyed by an attacker who has gained access to your network is not really a backup at all. It is a copy sitting in the same blast radius as everything else. Genuine ransomware resilience requires backups that are architecturally separate from the systems they protect.
This guide covers the framework, the technology options, and the practical steps needed to build backup infrastructure that survives a ransomware attack. Whether you are a 20-person firm or a 500-person organisation, the principles are the same. The implementation differs, but the thinking does not.

Why traditional backups fail
Most backup strategies were designed for hardware failure, accidental deletion, and natural disasters. They were never designed to withstand a deliberate, targeted attack by someone with admin-level access to your systems. The assumptions they were built on no longer hold.
Same credentials
Backup systems accessible with the same admin credentials attackers already have. Once they compromise your domain admin account, they delete your backups first and deploy ransomware second. The backup was never separate from the thing it was supposed to protect.
Network accessible
Backup servers sitting on the same network as everything else. If attackers can reach production systems, they can reach your backups. Network-attached storage, mapped drives, and backup shares are all within the blast radius of a typical ransomware deployment.
No immutability
Backups that can be modified or deleted by anyone with admin access. Attackers simply overwrite them with encrypted versions, or delete them entirely. If your backup software can write to the target, so can the attacker who has compromised the backup software.
Never tested
Backups that have never been restored. You discover they are corrupted, incomplete, or configured incorrectly only when you need them most. An untested backup is not a backup. It is a hypothesis, and in a ransomware incident, hypotheses are worthless.
“The question is not whether your backups run successfully. The question is whether an attacker who has compromised your domain admin account can delete them. If the answer is yes, you do not have ransomware-resistant backups. You have a copy.”


The modern backup framework
The classic 3-2-1 backup rule has evolved. The modern version adds two critical requirements that address the ransomware threat directly: immutability and verification. Together, these five numbers define what a resilient backup strategy looks like.
Three copies of your data at all times
Two different media or storage types
One copy stored offsite
One copy that is offline or immutable
Zero errors after restore verification
The first three numbers are familiar. Three copies of your data, across two different media types, with one copy stored offsite. This has been standard advice for decades and it addresses hardware failure, site-level disasters, and basic redundancy. Most organisations get this far.
The fourth number is what changes everything for ransomware. One copy must be either offline or immutable. This means it cannot be modified, encrypted, or deleted by an attacker who has full access to your network and admin credentials. Without this, your backups are inside the same perimeter as everything else.
The final number is about verification. Zero errors after automated restore testing. A backup that completes without error is not the same as a backup that restores without error. Data can be corrupted, permissions can be wrong, application configurations can be missing. The only way to know is to test.
Together, these five requirements form a framework that addresses the full range of threats: hardware failure, natural disaster, accidental deletion, and deliberate attack. Meeting all five consistently is the standard your organisation should be working towards.
Achieving immutability
Immutability means that once data is written, it cannot be altered or deleted for a defined period. There are several ways to achieve this, and the right approach depends on your size, budget, and recovery requirements. Most organisations benefit from combining two or more of these methods.
Cloud object lock
Services like AWS S3 Object Lock, Azure Immutable Blob Storage, or Wasabi Object Lock allow you to write backups that cannot be deleted or modified for a defined retention period. The immutability is enforced at the storage layer, meaning even someone with admin credentials cannot alter or remove the data before the lock expires. This is the most accessible option for organisations already using cloud backup infrastructure.
Best for organisations already using cloud backup or looking for offsite immutability.
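As a concrete sketch of how storage-layer immutability is applied per object, the snippet below builds the parameters for an S3 PutObject call with a compliance-mode Object Lock retention date. `ObjectLockMode` and `ObjectLockRetainUntilDate` are real S3 API parameters; the bucket name, key layout, and helper function are illustrative, and the target bucket must have been created with Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone

def object_lock_kwargs(bucket: str, key: str, body: bytes, retention_days: int) -> dict:
    """Build PutObject parameters that lock the backup object in
    COMPLIANCE mode: no credential, including the account root,
    can delete or overwrite the version before retain-until passes."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # COMPLIANCE cannot be lifted early; GOVERNANCE can be, by
        # privileged users, which defeats the point for ransomware.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Usage (hypothetical bucket and key, Object Lock enabled at creation):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**object_lock_kwargs("backup-vault", "daily/2026-01-15.tar.zst",
#                                      backup_bytes, retention_days=90))
```

The design choice that matters is COMPLIANCE mode: GOVERNANCE-mode locks can be removed by anyone holding the right permission, which is exactly what a compromised admin account would hold.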
Air-gapped backups
Physical media such as tape or removable drives that are disconnected from any network after the backup completes. If storage cannot be reached over the network, it cannot be encrypted by ransomware. This is the oldest and still one of the most reliable methods of achieving true isolation. The trade-off is that recovery takes longer because someone physically needs to connect the media.
Best for highly sensitive data, compliance requirements, or maximum security posture.
Isolated recovery environment
Backup infrastructure deployed in a completely separate network segment with different credentials, authentication systems, and access controls. The production environment cannot communicate with the recovery environment under normal circumstances. In a ransomware scenario, this gives you a clean environment to restore into without risk of reinfection.
Best for larger organisations with complex recovery requirements.
Vendor-managed immutability
Backup-as-a-service providers like Datto, Veeam Cloud Connect, or Acronis maintain immutable copies on your behalf. The backup data lives in infrastructure you do not control, which is precisely the point. Your compromised credentials cannot reach it. This removes the burden of maintaining separate infrastructure while still providing genuine separation from your production environment.
Best for SMEs without dedicated backup infrastructure expertise.
What to back up by priority
Not everything needs the same level of protection. Applying immutability and offsite storage to all data is expensive and usually unnecessary. Instead, classify your data by business impact and assign protection levels accordingly.
Critical: immutable and offsite
Business-critical data that would halt operations if lost. This includes accounting and financial records, customer databases, contracts and legal documents, intellectual property, email archives, and core line-of-business application data. These assets need the highest level of protection: immutable copies stored offsite with tested recovery procedures.
Important: offsite with regular snapshots
Data that would cause significant disruption but could be partially reconstructed given time. Project files, internal documentation, configuration data, templates, and working documents fall into this category. Offsite backup with regular snapshots provides adequate protection without the cost of full immutability.
Operational: local backup with short retention
Data that supports daily operations but could be rebuilt from other sources. Application installers, non-critical archives, temporary project files, and cached data. Local backup with short retention periods is sufficient. The priority here is speed of recovery rather than long-term preservation.
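The three tiers above can be captured as a small policy table that backup scripts or tooling consult, so the classification is enforced rather than remembered. The tier names mirror this section; the retention figures and the lookup function are an illustrative sketch, not recommended values.

```python
# Illustrative policy table mirroring the three tiers above.
# Retention figures are placeholders; set them from your own
# recovery requirements and dwell-time assumptions.
PROTECTION_TIERS = {
    "critical":    {"immutable": True,  "offsite": True,  "retention_days": 365},
    "important":   {"immutable": False, "offsite": True,  "retention_days": 90},
    "operational": {"immutable": False, "offsite": False, "retention_days": 14},
}

def protection_for(data_class: str) -> dict:
    """Look up backup requirements for a data class. Unknown or
    unclassified data defaults to the strictest tier, not the weakest,
    so a labelling gap fails safe."""
    return PROTECTION_TIERS.get(data_class, PROTECTION_TIERS["critical"])
```

Defaulting unknown classes to the critical tier is the point of the sketch: unclassified data should cost you storage, not recovery.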
[Statistics panel: average dwell time of ransomware before activation in 2025 · share of ransomware attacks that deliberately target backup repositories · share of organisations that pay the ransom and recover all of their data]
Testing your backups
A backup you have not tested is a backup you cannot trust. Regular testing is the only way to confirm that your recovery capability matches your assumptions. Schedule tests at multiple levels and document the results.
Monthly: file-level restores
Restore individual files and folders from your backup system. Verify they are intact, accessible, and contain the data you expect. Document the process, note any issues, and measure how long recovery takes. This catches corruption, configuration drift, and permission problems before they become critical.
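Part of the monthly file-level test can be automated. The sketch below compares every restored file against its original by SHA-256 checksum, which catches silent corruption and missing files; the directory layout is assumed, and sampling rather than a full walk may be more practical on large datasets.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large backups do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list[str]:
    """Compare every file under source_dir with its counterpart in
    restored_dir; return the relative paths that are missing or differ."""
    failures = []
    src_root, dst_root = Path(source_dir), Path(restored_dir)
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        if not dst.is_file() or sha256(src) != sha256(dst):
            failures.append(str(src.relative_to(src_root)))
    return failures
```

An empty list means every checked file restored byte-for-byte. Logging the run time alongside the result gives you the recovery-duration measurement the monthly test calls for.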
Quarterly: full system restores
Restore a complete server or system image to a test environment. Confirm that applications start correctly, data is complete and consistent, and the system is functional. This validates that your backup captures everything needed to rebuild a working system, not just the data files.
Annually: full disaster recovery exercise
Simulate a complete site failure and execute your disaster recovery plan end to end. Measure actual recovery time against your targets. Identify bottlenecks, dependencies you had not documented, and assumptions that no longer hold. This is the only way to know whether your recovery plan survives contact with reality.
After significant changes
Any major infrastructure change should trigger backup verification. New servers, new applications, changed configurations, cloud migrations, and office moves all have the potential to break backup coverage. Verify that the change is captured by your backup system and that restore procedures still work as expected.
Common mistakes
These are the issues we see repeatedly across organisations of all sizes. Every one of them is avoidable, and every one of them has contributed to ransomware recovery failures in real incidents.
Using production credentials for backup systems
If your backup infrastructure uses the same Active Directory or Microsoft 365 admin accounts as your production environment, an attacker who compromises one compromises both. Backup systems need separate, dedicated credentials that are not accessible through your primary directory.
Treating Microsoft 365 retention as backup
Retention policies in Microsoft 365 are compliance tools, not backup systems. They have limitations on what they cover, how long data is kept, and how easily it can be restored. A deleted mailbox, a corrupted SharePoint library, or a wiped OneDrive requires proper third-party backup to recover fully.
Keeping only recent backup copies
Ransomware can sit dormant for weeks or months before activating. If your oldest backup is only seven days old and the infection happened three weeks ago, every copy you have is already compromised. Retention policies need to extend well beyond the average dwell time of ransomware in your industry.
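One way to make the dwell-time point operational is to check, on every backup run, whether your oldest retained recovery point predates an assumed dwell window. The 45-day default below is a stand-in to illustrate the check, not a quoted statistic; set it from current threat reporting for your sector.

```python
from datetime import date, timedelta

def retention_covers_dwell(oldest_backup: date, today: date,
                           assumed_dwell_days: int = 45) -> bool:
    """True if at least one retained recovery point is old enough to
    predate an intrusion that dwelled for assumed_dwell_days before
    activating. 45 days is an illustrative assumption."""
    return (today - oldest_backup) >= timedelta(days=assumed_dwell_days)
```

For example, `retention_covers_dwell(date(2026, 1, 1), date(2026, 1, 8))` is False: a seven-day-old oldest copy cannot predate a three-week-old infection, which is exactly the failure mode described above.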
No recovery documentation
In a crisis, you will not remember the recovery process. Which systems restore first? What are the dependencies? Where are the credentials stored? Who authorises the restore? If the answers to these questions exist only in someone's head, your recovery plan has a single point of failure.
“Ransomware recovery is not a technology problem. It is an architecture problem. The organisations that recover quickly are the ones that designed their backup systems to be unreachable by the very threat they are protecting against.”
Need help with backup strategy?
We help UK businesses design and implement backup infrastructure that genuinely withstands ransomware attacks. That includes assessing your current backup architecture, identifying gaps in immutability and isolation, and building recovery procedures that have been tested under realistic conditions.
If you are not sure where you stand, a backup resilience review takes around an hour and will give you a clear picture of what needs to change and what the priority order should be.



