Modern businesses aim for excellence and search for robust and agile next-generation technologies that optimize their infrastructure and solve the storage challenges they face.
As we dive into the evolving world of storage, who better to answer our questions than Boyan Ivanov, CEO and founder of StorPool Storage?
StorPool Storage develops one of the most reliable and fastest primary storage platforms, which public and private cloud builders use as the foundation for their clouds. Their team has experience working with a variety of clients - Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises, and SaaS vendors.
VMblog: Let's talk about cloud implementations. What's the most common
mistake organizations make when building a public or private cloud?
Boyan Ivanov: The first step when building a new public or private cloud... is actually to analyze the state - usage, build-out, and other metrics - of the existing cloud. Since very few cloud projects are greenfield, companies should spend the time to understand their current users, needs, and goals, and then use this information to right-size the new platform.
If one is building a greenfield environment, there should be a sprint to assess the key needs and the metrics that tie back to them - and only then a search for the right technologies to deliver on the target KPIs. This ensures that the new project is geared towards what matters, rather than just building a cool new cloud.
All too often I see companies focusing on the technologies or cool new vendors that they want to utilize, while giving little thought to the workloads, applications, and users of the new cloud. This is just wrong. The user and the application should come first.
Last, but not least - in greenfield projects, look for solutions that are best in class and can start small and grow big seamlessly. Many vendors and solutions claim they can do this, but when you dig deeper into the actual technology, it often turns out that most of the promises were marketing slogans rather than real capabilities.
VMblog: What about MSPs? If an MSP is creating a cloud infrastructure
for their end customers, what should their goals be?
Ivanov: MSPs face several specific issues - the shift from on-prem to cloud services, the need to improve operational efficiency and profit margins, changing customer demands, and staffing shortages are some of the more acute ones.
This means that they have to provide increasingly sophisticated services in order to compete with the hyperscalers, or at least offer a mix of on-prem, hybrid, and public cloud solutions. This makes their business model even more complex and operationally expensive.
Thus MSPs should be especially focused on working with vendors that have deep expertise working with MSPs and deliver solid, scalable, yet versatile solutions.
In the case of storage, for example, one should look for vendors who can deliver a versatile data storage platform that supports multiple IT stacks (think VMware/KVM/Microsoft/containers) and covers several performance tiers and use cases. It should be able to start small, yet scale both horizontally and vertically, and even adapt to different deployment scenarios in different locations across the world.
MSPs should also seek to work with partners who provide high-quality managed services as part of the offering - this both improves their operating margins and addresses their staffing issues at the same time.
Finally, since the business and technological landscape is changing so fast, MSPs have to select technologies that are future-proof. Using products that can support multiple technology stacks gives them the flexibility to change direction depending on how the market shifts. On the storage side, this means a product that can support most software stacks - say VMware, Microsoft, KVM, bare metal, and containers. And if need be - due to market shifts, acquisitions, or any other reason - MSPs can swiftly move workloads and customers from one IT stack to another in order to adapt to the new realities.
VMblog: IT companies and DevOps teams are another category where
performance is critical. What should these environments be concerned
about?
Ivanov: At the core of performance is latency. It is perhaps the most misunderstood performance metric, and I have spoken about this a number of times. So focus on reducing your platform latency.
It is especially important for IT and DevOps teams, because speed
has a very direct time and monetary impact on their operations.
If a legal system at a law firm is slow, this is painful. But if
you have a development team of 100+ programmers who compile code and it takes
them 3 hours a day, instead of 45 minutes - that is a huge waste of both time
and money. Worse, it demotivates the key resource for that company's success -
their people.
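To put rough numbers behind that point, here is a small, purely illustrative Python sketch - the operation count and latency figures are assumptions, not measurements of any real build - showing how per-operation storage latency compounds when a job issues many small I/O operations one after another:

```python
# Purely illustrative: how per-operation storage latency compounds for a
# serial workload, e.g. a build that touches many small files.
# The operation count and latency values are assumptions, not measurements.

ops = 2_000_000  # small, serial I/O operations issued during one job

for latency_ms in (0.1, 1.0, 5.0):
    total_minutes = ops * latency_ms / 1000 / 60
    print(f"{latency_ms:4.1f} ms per op -> ~{total_minutes:,.0f} minutes of pure I/O wait")
```

The aggregate bandwidth can look identical in all three cases; only the per-operation latency differs, which is exactly why it is such an easy metric to overlook.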
VMblog: Similarly in container deployments, storage needs to be fast and
available. Any tips for architecting storage to support containers?
Ivanov: My experience with container storage is limited to larger-scale
deployments with Kubernetes, which need to run on bare metal because of their
performance requirements or the size of the deployment. So take my view with a
grain of salt here.
However, what we see on the market for these larger, typically hybrid-cloud deployments is that users run traditional virtualized or bare-metal applications alongside the containerized ones. This boils down to two alternatives:
- either get a couple of storage systems - one for the containerized stack and one (or sometimes more) for the traditional stack, each tuned to best serve its use case; or
- get a potent, best-in-class SDS that can cover the traditional workloads and also has good integration with the container platform of choice. At StorPool we represent this second group of storage software products, with a CSI integration for Kubernetes, which has become the de facto standard for containerized environments (see the sketch below for how such a volume is typically requested).
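As a rough illustration of that second approach, the following minimal sketch uses the official Kubernetes Python client to request a persistent volume from a CSI-backed StorageClass. The StorageClass name ("fast-block"), namespace, and size are hypothetical placeholders rather than StorPool-specific values, and the same request is usually made declaratively with a YAML manifest:

```python
# Minimal sketch: requesting a block volume from a CSI-backed StorageClass
# via the official Kubernetes Python client. The StorageClass name, namespace,
# and size are hypothetical placeholders, not StorPool-specific values.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-block",  # a class served by the CSI driver
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Once the claim is bound, pods simply mount it by name; the CSI driver takes care of provisioning and attaching the underlying block volume.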
VMblog: Why is the storage part of the stack so consistently
challenging?
Ivanov: Storage and networking are the two most problematic pieces of the IT stack. This is due to their complexity and the intricate interdependencies between the three main components of IT infrastructure - compute, network, and storage.
Compute (servers) is the most straightforward piece. Complex in itself, it is still the best understood, as historically applications ran on a single server. The largest body of knowledge exists in that area, and it is largely self-contained.
When it comes to networking, things become more complex, as a network connects many servers and has myriad features and settings. And when remote shared storage such as a SAN (Storage Area Network) or an SDS (Software-Defined Storage) system is added, it compounds an already complex IT stack. As a result, troubleshooting storage issues is hard, especially in the cloud era, where server counts and data volumes are skyrocketing and the networks connecting them keep growing in sophistication.
VMblog: In StorPool's evangelism it seems you're primarily competing
with expensive storage hardware like SANs or all-flash arrays. What's that
conversation like, explaining to an organization that they can get the
performance or resources they need without dedicated storage?
Ivanov: It is a generational change. StorPool is the latest generation of storage software, able to replace million-dollar all-flash storage arrays with software running on standard servers, alongside applications.
"Software is eating the World" is the cliche, but it's actually
true. Ten to fifteen years ago storage was unavoidably a hardware appliance.
Today people want cloud-native applications, run by APIs, end-to-end
automation, utilizing as little hardware as possible. Most hardware is already
virtualized, so people think of hardware as software anyway.
In this world storage and to a large extent networks are
software-centric constructs, which are part of the magic application software
stack - fluid, programmable, and seamlessly scalable. This is core to the
Infrastructure as a Service (IaaS) business model.
So shifting from hardware-centricity to software-centricity is the
master trend, driving the adoption of so-called "Software-Defined Storage". And
StorPool is a leading implementation of this concept, tailored for large and
demanding IT users, needing block-level storage in particular.
VMblog: Certainly the scale of enterprise data has grown. What's the
best approach to managing scale from the storage and performance
perspective?
Ivanov: There are several ways to handle this. In the old world of SANs, users had to accept a trade-off between scale, cost, and performance. There was considerable complexity in designing a cloud or large IT infrastructure setup. In many cases, cloud builders had to choose between speed and scalability, or speed and data management features. So they ended up with two or three different product families from the same - or, in many cases, different - vendors, each delivering a different price/scalability/performance/data management combination.
Today these trade-offs are a thing of the past. The best-in-class SDS solutions can offer practically unlimited performance and match it to the scale, data management, and feature-set requirements of most users - all in a single data storage platform.
VMblog: Ten years ago we were just beginning to talk about the
software-defined data center, and now it's matured and iterated additional
categories like infrastructure as a service. Yet storage remains a foundational
component of all of these iterations. How has StorPool managed to stay right at
the head of this evolution for these past ten years?
Ivanov: It's paramount to focus on what is really important. What drives this evolution? The changing nature of business demands and use cases.
Twenty years ago IT was one of the many functions of a company. Today IT permeates everything a company does. Let's take a bank as an example - twenty years ago, its branch network and top-notch staff and service were what counted the most. Today it's the mobile banking app and online banking portal. In other words, even banks have become IT companies.
I'm using banks as an example, but the same applies to pretty much any other traditional business. And for newer, digitally native businesses such as SaaS companies, this is 100% the case.
All this is also driven by technological progress and changing user behavior. Today's users want digital, self-service, always-on IT platforms, because these are more convenient and open 24/7.
Businesses then aim to provide exactly that. And to do so, one cannot rely on technologies developed for a different era - like mainframe computing or, on the storage side, Fibre Channel SANs.
So the infrastructure has to be fluid, programmable, scalable, and self-healing. It just happens to work best when it relies on a minimal set of standard hardware - compute servers and network devices - with all the intelligence in the software layer.
This is why we see an accelerated switch from traditional, now legacy, IT infrastructure designs to Software-Defined Data Center (SDDC) and IaaS/PaaS/SaaS designs. What we do at StorPool, together with our storage and networking peers, is deliver the tools - SDS (Software-Defined Storage) and SDN (Software-Defined Networking) products - to fulfill that need. And we strive to be best-in-class at the SDS layer and to serve the needs of businesses and end users in the best possible way.
##