Virtualization Technology News and Information
VMblog Expert Interview: Zerto Talks Cybersecurity Challenges and Continuous Data Protection


Ransomware attacks have been steadily escalating for years. They are not only growing more frequent, but more costly as well.

Cyber Resilience is the ability to prepare for, respond to, and recover from a cyber-attack once it occurs. It requires a shift in the way you think about ransomware: from preventing attacks to being prepared for the eventuality of an attack.

To dive into this subject a bit more, VMblog recently reached out to and spoke with Deepak Verma, Director of the Advanced Technology Group at Zerto.

VMblog:  What's the biggest challenge with cybersecurity today?

Deepak Verma:  As cyber criminals continue to gain sophistication, many organizations attempt to combat the increasing threat intensity and frequency with prevention investments alone. Businesses can no longer afford this mindset. Prevention systems are critical: they serve as an organization's first line of defense. But trends in 2019 demonstrated that prevention alone is no longer enough; something is always going to get through. Instead, the industry needs to place emphasis on an organization's comprehensive cyber resilience strategy, which includes recovery.

According to a recent IDC survey, 84 percent of organizations have experienced a malicious attack in the past 12 months, with 89 percent of those attacks succeeding. Out of those successful attacks, 93 percent resulted in data corruption or loss. Today, unfortunately, it's inevitable that you're going to be attacked, even if you have the most sophisticated security technology in place.

VMblog:  What's a recent example we can all learn from?

Verma:  Just last year, Arizona Beverage Company endured a significantly damaging ransomware attack. The company suffered revenue losses equaling millions of dollars a day in sales. Moreover, it took two weeks to recover the compromised data; a recovery that slow is, in practical terms, close to losing some of that data forever. If you understand the impact of downtime, just a few hours is already too much, let alone days.

Attacks like these continue to be devastating, even to some of the world's largest organizations, because traditional recovery methods are proving to be inherently flawed. Traditional methods do not allow for frequent, consistent disaster recovery and backup testing to ensure recovery works as intended. It's not just about slapping a daily backup onto your infrastructure; it's about validating recovery through rapid, isolated, and randomized testing with no impact to production.

VMblog:  Has Zerto now entered the cybersecurity market?

Verma:  Our IT Resilience platform addresses the cybersecurity threats posed once an attack makes it past an organization's perimeter. In other words, Zerto is a crucial component of our customers' larger cyber resilience strategies. We're leading the industry in cybercrime recovery and testing, two imperative pieces of the puzzle when it comes to being cyber resilient, and this is where the industry is shifting. For the reasons I've already mentioned, IT professionals are realizing the need to move from a pure prevention-centric cybersecurity mindset to one of full cyber resilience, where prevention is coupled with effortless testing and instantaneous recovery to any prior point in time.

VMblog:  Can you elaborate more specifically on the challenges IT groups are experiencing with traditional recovery methods?

Verma:  Here's what happens in a traditional recovery process. Backup software reads the data, or changed data, from the production systems, typically daily, and writes this to backup storage on dedicated infrastructure. This is based on streaming or snapshots of the data, and for those requiring long-term retention there's usually some sort of off-site storage such as tape or a cloud archive involved. 

This approach does not meet the recovery and availability needs of today's business applications. Most IT teams utilize their backups as their means to recovery, specifically snapshot-based backup solutions. Years ago, that would have been enough, but in today's world, it doesn't cut it. There are a few reasons for this.

First, there's the issue of backup windows. Organizations usually take backups every 24 hours, typically after hours and on weekends. Think of all the data your organization produces and stores within a day: all of those critical transactions, communications, and more. Companies would love to take more snapshots during the day, but it's not possible because of the impact this would have on production. Backup simply was never designed with low RPOs in mind; low RPOs may be achievable with backup, but usually at great expense.
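As a back-of-the-envelope illustration (not specific to any product), the worst-case data loss after an attack is bounded by the recovery point objective (RPO), i.e. the age of the newest recoverable copy:

```python
# Illustrative arithmetic only: everything written inside the RPO window
# may be lost if an attack forces a restore from the last good copy.

def worst_case_loss_seconds(rpo_seconds: int) -> int:
    """Worst-case data loss equals the RPO window itself."""
    return rpo_seconds

nightly_backup_rpo = 24 * 60 * 60   # one backup per day
cdp_rpo = 5                         # continuous replication, seconds behind

print(worst_case_loss_seconds(nightly_backup_rpo) // 3600,
      "hours of data at risk with nightly backup")
print(worst_case_loss_seconds(cdp_rpo),
      "seconds of data at risk with CDP")
```

The gap between 24 hours and a few seconds of exposure is the core argument for moving beyond backup windows.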

Secondly, there's the dedicated hardware issue. Backup infrastructure often consists of dedicated hardware sized for backup workloads, storing data as compactly as possible for long periods, not for a full disaster recovery scenario, which is what ransomware and other cyber-attacks would fall under. While some backup products offer mechanisms to peek into backups without restoring, or to mount VMs straight from secondary storage, these are incapable of supporting frequent isolated tests, let alone running at production caliber.

Third, there's the complexity of recovering enterprise applications. Think of your environment today: all of the applications you have running and how they interact with each other. These applications can run on multiple VMs, with different operating systems and completely different storage requirements. Recovering a multi-VM application in these situations is incredibly complex. Since most backups are taken per VM, the VMs are recovered to different points in time, and all of that data has to be copied back to production and then synced up before operations can resume. Moreover, if you recover from a backup and then realize the restore is still infected, you have to repeat the entire process until you get every application component right.

VMblog:  So those are the challenges.  What's the answer?  How does Zerto fit in?

Verma:  Dealing with what I've just described is what's considered normal for most organizations. It's been the status quo and what's considered "available." That doesn't mean it's acceptable. 

Organizations need to be always-on; the availability expectations of organizations and the market are only getting stricter. Today the norm needs to shift to maximum protection, where organizations can rely on being completely covered rather than resigning themselves to giving up an entire day's worth or more of critical data.

The answer is Continuous Data Protection (CDP). CDP is not new and has been around for many years, but now it's become a critical piece of the puzzle in solving for the rising ransomware threat and achieving lightning speed recovery.

There are five core capabilities of CDP that help solve today's recovery challenges. At Zerto, we're the only ones offering these combined capabilities in a single platform to IT organizations in a way that makes consistent testing and recovery seamless. Coupled with multi-cloud capabilities, an affordable price point, and a hardware-agnostic approach, this makes for a very appealing proposition. The five capabilities are:

1.  Continuous Data Replication - think of this as the engine that powers CDP, so that data is continually captured without overhead of snapshots on production.

2.  Journal-Based Recovery - this is what allows you to recover to any granular point in time in the CDP stream, from 5 seconds ago to 30 days ago, rather than only at 24-hour intervals.

3.  Application Consistency - Being able to manage and recover your entire application in atomic groups to the same point in time, regardless of the number of VMs and storage systems it spans.

4.  Long-Term Retention - Expanding your protection scope for long-term use cases, such as compliance, beyond 30 days. Want to keep month-end data from the journal for 7 years? We can move it to low-cost storage of your choice.

5.  Test Recovery - with the click of a button, test any application consistency group in an isolated network bubble without having to create an entire copy of the application. This is great for penetration testing and forensics, and the test environment can be destroyed as easily as it was created.

Having these five pillars in place gives application teams the ability to test and recover applications as easily and as often as needed, which is the only way to build confidence in an organization's ability to be comprehensively protected against new threats. That is something regulators, auditors, insurers, and stakeholders will undoubtedly appreciate.
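As a hypothetical toy model (not Zerto's actual implementation or API), journal-based, point-in-time recovery can be pictured as an append-only log of timestamped write deltas that is replayed over a base image up to a chosen instant:

```python
from dataclasses import dataclass, field
from bisect import bisect_right

# Toy model of a CDP journal: an append-only, time-ordered log of block
# writes. Real products keep one such journal per consistency group.

@dataclass
class JournalEntry:
    timestamp: float   # seconds since epoch
    block: int         # block address written
    data: bytes        # new contents of that block

@dataclass
class Journal:
    entries: list[JournalEntry] = field(default_factory=list)

    def record(self, entry: JournalEntry) -> None:
        # Writes arrive in time order as they are replicated.
        self.entries.append(entry)

    def materialize(self, point_in_time: float,
                    base: dict[int, bytes]) -> dict[int, bytes]:
        """Rebuild the disk image as of `point_in_time` by replaying the
        journal over a base image, stopping at the chosen instant
        (e.g. seconds before ransomware began encrypting)."""
        image = dict(base)
        times = [e.timestamp for e in self.entries]
        cutoff = bisect_right(times, point_in_time)
        for entry in self.entries[:cutoff]:
            image[entry.block] = entry.data
        return image

j = Journal()
j.record(JournalEntry(100.0, block=1, data=b"payroll-v1"))
j.record(JournalEntry(105.0, block=1, data=b"payroll-v2"))
j.record(JournalEntry(110.0, block=1, data=b"ENCRYPTED"))  # attack begins

clean = j.materialize(point_in_time=109.0, base={})
print(clean[1])  # b'payroll-v2' -- the last clean write
```

Picking `point_in_time` just before the first malicious write is the "rewind" operation the journal makes cheap: nothing has to be copied back to production until the chosen image is committed.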

VMblog:  To sum things up, what are the top five things you suggest to organizations looking to up their cyber resilience game?


Verma:  1.  Shift from just securing network perimeters to safeguarding data across systems, devices, and in the cloud.

2.  Remember that your prevention investment means nothing once an attack makes its way through, and your recovery measures become outdated and insufficient as soon as a new type of attack is discovered. You must have the ability to prepare for, respond to, and recover from a cyberattack quickly to be truly cyber resilient.

3.  Cultivate a culture shift toward all-encompassing cyber resilience, complete with modern solutions and procedures in place, like CDP, to roll back your data. Remember, the traditional recovery process is riddled with flaws and costs, and there's no reason for it to remain the status quo.

4.  Establish best practices centered around the NIST Cybersecurity Framework: Identify, Protect, Detect, Respond, Recover.

5.  Test, test and then test again when it comes to your recovery process. And, make sure you have a solution in place to help you do that easily and seamlessly without the test itself causing an unwanted disruption to business operations.


Published Monday, February 03, 2020 7:36 AM by David Marshall