DataCore Software, the authority on software-defined storage, just announced that it has acquired Caringo, Inc., a pioneer in the object storage market. Caringo was founded in 2005 to change the economics of storage by designing software to solve the issues associated with relentless data growth.
To better understand that acquisition and what's happening with object storage, VMblog spoke with industry expert and DataCore Chief Marketing Officer, Gerardo Dada.
VMblog: Why did DataCore decide to acquire Caringo? What will the acquisition bring to DataCore? And can you discuss how the acquisition of Caringo further accelerates the DataCore ONE vision?
Gerardo Dada: In the past few years, the need to store large amounts of data has increased more than ever: everyone is doing more backups, privacy and compliance are raising the bar on how to store business documents, and every company has more images, videos, and archives. Object storage is clearly the most efficient way to store petabytes of data. The acquisition of Caringo is ultimately about giving customers the power to enjoy the benefits of software-defined technologies across block, file, and object - from one vendor.
Strategically, it is about furthering the DataCore ONE vision: to break silos and hardware dependencies and unify the storage industry - enabling IT to make storage smarter, more effective, and easier to manage. We already had best-of-breed solutions for block and file with our SANsymphony and vFilO products, but object was the missing piece. We did a thorough evaluation of the available technologies and found that Swarm was the most mature and complete product out there.
VMblog: The lines between object and file are blurring. Why launch an object store now when many file systems have object interfaces?
Dada: There is a saying that if you sell hammers, everything looks like a nail to you. Something like that happens in this industry. Vendors who have one product will try to solve all problems by stretching the capabilities of that one technology.
We believe every technology has strengths and weaknesses. It's like being an athlete: some can run a marathon and some can run a 100m sprint, but no one is a world champion at both. You can train for endurance or you can train for burst speed.
File systems are designed for transactional speed, local network access, and optimizing performance for the number of files that can be delivered in one second. Object storage is designed to manage geo-independent objects that can be very large and accessed via a web protocol - performance in object is about throughput.
Many file systems have object gateways or access interfaces, and vice versa. This does not turn a file system into an object system; rather, it is often done so that an application coded against one interface can work with a particular storage technology - and it assumes that the inherent characteristics of such a storage system are adequate for the workload in question.
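The point about access interfaces can be sketched in a few lines: a gateway can expose an object-style API (buckets, keys, whole-object put/get) on top of a plain file system, but the durability, latency, and scalability underneath remain those of the file system. A minimal illustration in Python - the class and method names are hypothetical, not any vendor's actual API:

```python
import hashlib
from pathlib import Path


class S3StyleGateway:
    """Hypothetical gateway: exposes an object-style interface while
    actually storing data on a local file system. The interface changes;
    the characteristics of the underlying storage do not."""

    def __init__(self, root: str):
        self.root = Path(root)

    def put_object(self, bucket: str, key: str, body: bytes) -> str:
        path = self.root / bucket / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(body)                # still an ordinary local file write
        return hashlib.md5(body).hexdigest()  # S3-style ETag for the caller

    def get_object(self, bucket: str, key: str) -> bytes:
        return (self.root / bucket / key).read_bytes()
```

The caller sees buckets and keys, but the workload still runs against a file system - which is exactly why a gateway only makes sense when that file system's characteristics suit the workload.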
To make things even more interesting, we have vFilO, which acts as an orchestrator or traffic cop: it can move data from file to object and act as a virtualization layer above both, providing a unified, hybrid namespace and optimizing data placement.
VMblog: With public cloud vendors offering inexpensive object storage, why should companies consider on-premises object stores?
Dada: There are three reasons. The first one is economic: analysts and industry experts agree that on-premises object storage can be one third of the cost of object storage in the public cloud. In addition, leading services like AWS S3 charge a number of additional fees for data egress, tiering, and other functions - all of which adds cost, complexity, and difficulty in forecasting budgets.
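The forecasting difficulty is easy to illustrate: once egress and request fees are in the bill, the same stored capacity can cost very different amounts depending on access patterns. A back-of-the-envelope sketch in Python - all rates here are hypothetical placeholders, not actual AWS pricing:

```python
def monthly_cloud_object_cost(stored_gb: float, egress_gb: float, requests: int,
                              storage_rate: float = 0.023,      # $/GB-month, hypothetical
                              egress_rate: float = 0.09,        # $/GB egressed, hypothetical
                              request_rate: float = 0.0000004): # $/request, hypothetical
    """Toy cost model: capacity is only one term; egress and request
    volume make the total hard to forecast from stored bytes alone."""
    return (stored_gb * storage_rate
            + egress_gb * egress_rate
            + requests * request_rate)


# The same 100 TB costs very different amounts per month depending on usage:
cold_archive = monthly_cloud_object_cost(100_000, egress_gb=100, requests=10_000)
active_media = monthly_cloud_object_cost(100_000, egress_gb=50_000, requests=5_000_000)
```

With these placeholder rates, the actively served copy of the data costs roughly three times the archived one - the capacity line item alone tells you very little about the bill.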
The second reason is that the public cloud is not the answer for everything. Many companies have found that their own datacenters continue to be better value, especially when considering governance (compliance, data sovereignty, etc.), security, and control.
The third reason is that a best-in-class on-prem object system may offer capabilities that do not exist in the public cloud: for example, the ability to deliver data with very high throughput, utilization of existing resources, content services (classification, indexing, and search), or the ability to efficiently modify a small part of a large object without having to read and re-write the entire object.
It is important to note that these benefits are especially true for companies that need the scale that object storage was designed for - usually above 100 TB of data. Organizations that need less than that may be better served by public cloud services.
VMblog: How will Caringo's product line exist within DataCore's current portfolio? And what benefit will this provide to DataCore reseller partners?
Dada: Caringo's product line is very complementary to our existing portfolio. Most medium and large enterprises have a need for file and object, and will benefit from buying both systems from one company under a common license and a single support contract.
Swarm is well aligned with the rest of our product portfolio. It delivers on the same software-defined value: hardware independence, flexibility, high availability, storage efficiency, and high performance. It simply does the same, but for object storage.
Adding Swarm has particular synergies with vFilO™, a next-generation distributed file and object storage virtualization technology that lets users define policies determining when a file should be moved from an NFS or SMB system to Swarm object storage. The user namespace is maintained, so the file remains available to users. In addition, most customers who need vFilO and/or Swarm are likely also running business-critical primary applications on block storage, and are perfect candidates to get more performance, availability, and flexibility from SANsymphony™ software.
The Swarm object storage product will exist alongside these two proven solutions and round out the DataCore SDS portfolio. This will enable our resellers to offer a complete solution for block, file, and object from a single supplier.
VMblog: What are the most common use cases for object storage?
Dada: As a cloud technology, object storage has been optimal for media, large files, and low-cost archive. However, those use cases have evolved. Here are a few where we are likely to focus:
- Content storage and streaming, especially media files
- Media post-production and editing
- Compliant and secure storage of medical, financial, and other records
- High-performance computing, especially when dealing with massive data sets
- Storing backup images, snapshots, VDI images, and other data replicas
- Archival of files that are not likely to be accessed soon
- Primary storage for large files that must be available globally
VMblog: The press release notes that the acquisition news comes on the heels of a strong 2020 close for DataCore. Can you please provide more details on this?
Dada: Yes, we are extremely grateful and proud that DataCore completed 2020 on a high note. The company is consistently adding well over 100 new customers per quarter, and saw double-digit year-over-year growth in capacity sold and customer expansion in Q4 of 2020. We have been profitable for over a decade, which is something most of our competitors cannot say.
We also added multiple key executives in the last year, including Kevin Thimble, CFO, and Geoff Danheiser, Chief People Officer.
We also opened a new, modern office for our HQ in Ft. Lauderdale, FL, and a new Research & Development center in Bangalore, India, which already has about 30 people and complements our R&D centers in Bulgaria and Florida. We are also combining our Austin offices, where we have room to grow. The upgraded executive team, expanded geographical resources, and expanded product portfolio position DataCore to accelerate adoption of software-defined technologies in 2021.
VMblog: As we wrap this up, is there anything else that you would like to add?
Dada: These are exciting times. The IT industry has realized that the future of storage is software-defined, and analysts expect most storage systems deployed in the near future to be software-defined.
We are in a really good position to capitalize on this trend, as the authority on software-defined storage and as the only company that has a best-in-class portfolio for block, file, and object. This is good news for the channel - VARs and xSPs can get the best of these technologies from one vendor that is 100% committed to the channel. It is also good news for customers, because it simplifies and accelerates their path to a more modern, more flexible datacenter.
##