Research firm IDC predicts that, over the next five years, more than 80% of the data collected by organizations will be unstructured, and that this data will continue to grow at 40% to 50% per year for most enterprises.
With the sheer volume of
unstructured data yet to be created and used in the years ahead, it's safe to
say that the way organizations manage their data will need to evolve.
Eric Burgener, Research VP of IDC's Infrastructure Systems, Platforms and Technologies Group, authored an IDC Analyst Brief, sponsored by Datadobi, titled "The Data Mobility Engine as the Foundation for an Efficient Data Management Strategy."
In the analyst brief,
Burgener urges organizations to implement a comprehensive data management
strategy to confront this increasing influx of data, noting that a data
mobility engine provides the foundation for an effective data management
strategy and can drive significant benefits for the hybrid multicloud
enterprise.
"A good [data
management] strategy takes into account not only the heterogeneity of storage
in most enterprises, but also a number of other areas, including on- and
off-premises deployment models, application availability, data integrity,
security, compliance and regulatory needs, efficient resource utilization and
the fact that more than 80% of the data created over the next five years will be
unstructured (i.e., file and object-based)," writes Burgener.
In his analysis, Burgener outlines the five main components of an effective data mobility engine:
1) Vendor-neutral interoperability
The data mobility engine
must focus on data, not systems, and be able to move data between different
types of systems as well as cloud targets. Both file (NFS, SMB) and object (S3)
access methods should be supported, preferably in a multiprotocol manner to
support efficient capacity utilization when data must be shared across
different types of applications.
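As a rough illustration of what vendor-neutral movement looks like in practice, the sketch below walks a file-protocol source (say, an NFS or SMB mount) and copies it to an S3 object target, turning file paths into object keys. The mount point and bucket name are placeholder assumptions, not anything product-specific.

# Minimal sketch: move data from a file-protocol source (e.g., an NFS/SMB
# mount) to an S3 object target, preserving the directory layout as object
# keys. The mount path and bucket are hypothetical placeholders.
import pathlib

import boto3  # pip install boto3; credentials come from the environment

s3 = boto3.client("s3")
SOURCE = pathlib.Path("/mnt/nfs_share")   # file-access source (assumption)
BUCKET = "example-object-target"          # S3 target (assumption)

for path in SOURCE.rglob("*"):
    if path.is_file():
        key = path.relative_to(SOURCE).as_posix()  # file path -> object key
        s3.upload_file(str(path), BUCKET, key)
        print(f"copied {path} -> s3://{BUCKET}/{key}")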
2) Insights and intelligence
The data mobility engine
must provide visibility into data metrics, access patterns, and usage
activities that can provide the basis for classification, and this visibility
must be comprehensive. It should also include AI-driven intelligence that can
analyze these metrics to make policy recommendations that drive an effective
data management strategy around storage location, data protection, security,
compliance, migration and, ultimately, storage cost reduction. With more
complete metrics, data residency can be managed to ensure that data is kept in
the "best" location (given corporate objectives) and obsolete data is
identified and deleted.
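To make the idea concrete, here is a toy version of the kind of metadata scan that feeds classification: stat every file in a tree and bucket it as "hot," "warm," or "cold" by last-access age. The 30- and 180-day thresholds and the scan root are illustrative assumptions; a real engine would collect far richer metrics.

# Toy metadata scan: walk a tree, read per-file access times, and bucket
# files into "hot" / "warm" / "cold" tiers with aggregate sizes.
import os
import time
from collections import Counter

ROOT = "/mnt/nfs_share"  # hypothetical scan root
now = time.time()
tier_counts = Counter()
tier_bytes = Counter()

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        st = os.stat(os.path.join(dirpath, name))
        age_days = (now - st.st_atime) / 86400  # days since last access
        tier = "hot" if age_days < 30 else "warm" if age_days < 180 else "cold"
        tier_counts[tier] += 1
        tier_bytes[tier] += st.st_size

for tier in ("hot", "warm", "cold"):
    print(f"{tier}: {tier_counts[tier]} files, {tier_bytes[tier] / 1e9:.1f} GB")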
3) Orchestration and automation
With the complexity of
today's multilocation IT infrastructures, monitoring data usage and compliance
and managing classification and data migration manually are risky propositions.
Automation improves the speed and reliability of operations and raises administrative productivity, increasing the span of administrative control and lowering costs.
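In spirit, automation replaces per-file manual decisions with declarative rules that are applied on a schedule. The minimal sketch below pairs each classification tier with an action; the tiers, actions, and inventory are illustrative assumptions only.

# Illustrative policy loop: declarative rules map a classification tier to
# an action, so migration and retention decisions are applied automatically
# rather than by hand. Tiers, actions, and inventory are assumptions.
POLICIES = {
    "hot":  "keep on primary storage",
    "warm": "migrate to lower-cost NAS tier",
    "cold": "archive to object storage",
}

# A metadata scan (see the earlier sketch) would produce an inventory like this.
inventory = [
    {"path": "/mnt/nfs_share/q3_report.xlsx", "tier": "hot"},
    {"path": "/mnt/nfs_share/2019_logs.tar", "tier": "cold"},
]

for item in inventory:
    # In a real engine this would enqueue a migration or archive job.
    print(f"{item['path']}: {POLICIES[item['tier']]}")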
4) Scan-optimize-copy capabilities
All of these operations must be comprehensive. Scan provides the visibility around data and
its usage, collecting the metrics needed to search and intelligently manage
data. Delta differentials, compression, and other storage efficiency
technologies maximize resource utilization when moving and storing data,
allowing operations to be completed as quickly as possible. Copy includes
replication capabilities, which can further assist in optimizing data migration
to create the most effective data placement strategy.
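A file-level sketch of the delta-differential idea: re-copy only entries whose size or modification time differs from the destination, so repeat passes move just the changes. The source and destination paths are hypothetical, and real engines compare far more than two attributes.

# File-level "delta differential" sketch: skip files that already match the
# destination by size and mtime; copy only what changed since the last pass.
import os
import shutil

SRC, DST = "/mnt/source_share", "/mnt/dest_share"  # hypothetical paths

for dirpath, _dirs, files in os.walk(SRC):
    rel = os.path.relpath(dirpath, SRC)
    os.makedirs(os.path.join(DST, rel), exist_ok=True)
    for name in files:
        src = os.path.join(dirpath, name)
        dst = os.path.join(DST, rel, name)
        s = os.stat(src)
        try:
            d = os.stat(dst)
            unchanged = (d.st_size == s.st_size
                         and int(d.st_mtime) == int(s.st_mtime))
        except FileNotFoundError:
            unchanged = False
        if not unchanged:
            shutil.copy2(src, dst)  # copy2 preserves mtime for the next pass
            print(f"copied {src}")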
5) Integrity enforcement
The data mobility engine
must support data integrity during all operations. File and object-level
verification must go beyond just TCP checksums to be able to catch and correct
silent data corruption using "before" and "after" hash
digest comparisons, chain of custody, and advanced integrity protection
(regular inspection of destination content to detect possible changes
conflicting with the source).
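A minimal version of the "before" and "after" digest comparison looks like this: hash the source, copy, hash the destination, and flag any mismatch rather than trusting transport checksums alone. The file paths are illustrative.

# Minimal before/after hash verification: digest the source, copy, digest
# the destination, and flag any mismatch as possible silent corruption.
import hashlib
import shutil

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

src, dst = "/mnt/source_share/data.bin", "/mnt/dest_share/data.bin"  # assumed
before = sha256_of(src)   # "before" digest on the source
shutil.copy2(src, dst)
after = sha256_of(dst)    # "after" digest on the destination

if before != after:
    raise RuntimeError(f"silent corruption detected copying {src}")
print(f"verified {dst}: sha256={after}")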
While more companies are reassessing their unstructured data storage needs to keep up with growing volumes of human- and machine-generated data, Burgener's report reveals that most enterprises are still searching for a viable way to execute a comprehensive data management strategy.
Over the last several
years at Datadobi, we've had more and more IT leaders come to us with concerns
around data classification, data visibility, and organization-wide data
accessibility, as well as how to handle aging data and the high costs that
result from a fragmented data management strategy.
These IT decision-makers understand that the future of their data is changing (many of them already store petabytes that will continue to grow in the years ahead), but they're hesitant to change how that data is stored and protected. Their on-prem solution is a "comfort zone," so to speak, and it's intimidating to think about moving all of those assets to a new platform.
As data mobility experts, we know that enterprise data mobility, circulation, and protection are not only achievable but also a necessity for today's businesses. Traditional IT infrastructure and manually oriented data management practices won't be able to handle this amount of data going forward.
In short, Datadobi's solutions give organizations significantly greater visibility into their data so that they can better manage their digital assets, place data in the right location based on tangible metrics, and understand critical aspects of their data, such as its age and level of protection (which Burgener relates to "hot," "warm," or "cold" data), in order to optimize cost and security policies.
Finally, Burgener states
in his report that "the benefits of an effective data management strategy
include reduced IT costs, easier data sharing, better security, less legal
exposure, and an improved ability to demonstrate governance and regulatory
compliance."
You can read the IDC Analyst Brief, "The Data
Mobility Engine as the Foundation for an Efficient Data Management Strategy," in
full here:
https://idcdocserv.com/US48881822.