Why the Capacity to Destroy Data is a Good Thing


An Intellyx BrainBlog by Jason English, for Verity ES

It’s much more difficult to successfully build and grow something like a business or a career than to destroy it.

Destruction usually carries a negative connotation. The exception is destroying, or more accurately eradicating, obsolete data that can only come back to harm your business, and by extension your career, if it is intercepted or shared.

Data needs to be eradicated all the way down to the disk surface layer to meet regulatory or privacy requirements, before any disks are discarded or donated for possible reuse. The destruction of data with a thorough eradication program is the CDO and CISO’s best countermeasure to serious risk and failure.

However, achieving a compliant and safe level of data eradication while reducing unnecessary waste is not as simple as reformatting the drives or throwing them in the trash, as my colleague Jason Bloomberg describes in the first post of this series:

“Optimizing the data eradication process is a classic flow problem: end-of-life drives are the inputs and cleaned drives are the output. Ensure inputs don’t exceed outputs or the work will pile up, leading to slowdowns.”

Establishing a data eradication process that is verifiably complete is important, but what happens when capacity bottlenecks prevent eradication at scale? IT teams can quickly find themselves struggling to keep up with demand, like Lucy and Ethel in the famous I Love Lucy candy factory scene.
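To put rough numbers on that flow problem, a few lines of Python show how quickly work piles up when drives arrive faster than the eradication line can verifiably clean them. The figures here are illustrative assumptions, not benchmarks from any real program:

```python
def backlog_over_time(arrivals_per_week: int, cleaned_per_week: int, weeks: int) -> list[int]:
    """Track how many uncleaned drives pile up when inflow outpaces throughput."""
    backlog, history = 0, []
    for _ in range(weeks):
        backlog = max(0, backlog + arrivals_per_week - cleaned_per_week)
        history.append(backlog)
    return history

# Hypothetical numbers: 500 end-of-life drives arrive weekly, 400 get verifiably cleaned.
print(backlog_over_time(500, 400, weeks=12))  # the backlog grows by 100 drives every week
```

Any week where inputs exceed outputs adds permanently to the pile; the only way to drain it is sustained throughput above the arrival rate.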

The high cost of low capacity for eradication

Companies experiencing a data eradication backlog are forced to choose between two unappetizing options.

They could abandon equipment reuse goals—and instead, pay fees and potential fines for shredding media while generating electronic waste and missing out on any residual resale value or recycling incentives.

Or, even worse, allow drives with uncertain data contents to pile up in a storeroom somewhere until they eventually get picked through, moved, or shipped out, with the hope they never get exposed to the world and become a criminal or legal liability.

Failing to eradicate data at scale isn’t an uncommon problem, though it is seldom reported. Even companies with extensive IT resources fail at it, because they also generate massive quantities of end-of-life media.

For instance, leading investment firm Morgan Stanley trusted a moving vendor to take its obsolete or failed drives to a data services vendor for destruction, but the mover instead passed them to a reseller. The data exposure led to a $35 million fine in 2022 from the US Securities and Exchange Commission (SEC), and nearly double that cost to settle contractual penalties with customers.

Backlogs drive poor decisions

Three reasons eradication programs fail to scale

1.   Not a general data management problem

There are constellations of software and hardware vendors in the enterprise data management and storage spaces offering tools that provide different forms of visibility and control over the data estate.

These solutions may be really good at optimizing ‘speeds and feeds’ performance, or providing secure permissions across distributed volumes, or reducing the cost footprint for storage across a wide variety of on-prem and cloud resources. But when it comes to process recommendations to avoid hoarding risky media, or how to create a production line that fully eradicates data at the rate needed, they aren’t specialized enough to offer an answer.

2.   Solutions aren’t universal

As a corollary to the generalist vendor dilemma, the data deletion utilities offered by major hyperscalers and IT platform vendors may claim to scale eradication, but only for assets within their own technology ecosystems.

That may sound fine if the company is solely a VMware/Tanzu shop, or a fully remote firm that claims to run 100% on AWS, but that’s not a realistic picture of most firms, whose estates include assets from acquired companies, business partners, and employees.

Companies performing data eradication face many unique scenarios and wide variation in the devices they target. Since most enterprises run heterogeneous hybrid IT environments, they need data destruction techniques that scale universally across them.

3.   Glossing over complete erasure

A brute force “copypasta” method of writing random data over the disk file system multiple times takes time, and if done inefficiently, simply won’t keep pace with enterprise demand. Further, since each disk can have several different types of software running on it or writing to it, and different reasons for failing, many disk sectors and blocks will hide unerased data that a routine file scan will never surface.
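For a sense of why this approach struggles, here is a minimal sketch in Python of that kind of brute-force multi-pass overwrite, using a hypothetical device path. It is illustrative only, not any vendor’s method: three full passes over a multi-terabyte drive take many hours, and the loop only touches the address space the operating system exposes, so remapped sectors, spare blocks, and host-protected areas stay unerased.

```python
import os

CHUNK = 1024 * 1024  # overwrite in 1 MiB chunks

def overwrite_passes(path: str, passes: int = 3) -> None:
    """Naive multi-pass random overwrite of a file or raw block device.

    Illustrative only: it covers the addressable space the OS exposes,
    not remapped sectors, spare blocks, or host-protected areas -- which
    is exactly the hidden data a routine pass like this never reaches.
    """
    fd = os.open(path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)  # total addressable bytes
        for _ in range(passes):
            os.lseek(fd, 0, os.SEEK_SET)     # rewind for each pass
            remaining = size
            while remaining > 0:
                n = min(CHUNK, remaining)
                written = os.write(fd, os.urandom(n))  # fill with random bytes
                remaining -= written
            os.fsync(fd)                     # force the pass onto physical media
    finally:
        os.close(fd)

# e.g. overwrite_passes("/dev/sdX", passes=3)  # hypothetical device path, requires root
```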

Given these constraints, many disk and data management tools gloss over the hidden areas they can’t erase, and therefore only verifiably eradicate all data on a disk two-thirds of the time.

The Verity ES eradication solution provides designed intelligence that adapts the data detection and deletion processes based on what a particular disk allows, with the goal of increasing the efficiency and effectiveness of the eradication process.

When paired with the lessons data center and IT support operations have learned about how and when to pull failing or obsolete drives, the success rate for this method gets closer to 95 percent.

The Intellyx Take

A disk whose data has been successfully and verifiably eradicated can never leak what it once held, so there’s no longer any need to destroy it physically.

There will always be some edge cases of disks that are too damaged to verifiably erase, even via a fully operationalized method, and have to be sent to an industrial shredder. But the reduction of backlog risk and the regained yield of potentially thousands of disks that can be reused, resold or safely recycled creates ROI that powers the data eradication program.

© 2023 Intellyx LLC. At the time of writing, Verity ES is an Intellyx customer. No AI was used to write this content. Intellyx retains final editorial control of this article.