Virtualization Technology News and Information
DataCore Software: 2012 Predictions - Storage Hypervisors, Virtualization, SSDs, "Big Data," Clouds and Their Real World Impacts


What do Virtualization and Cloud executives think about 2012? Find out in this series exclusive.


Contributed Article by George Teixeira, President, CEO and Co-founder of DataCore Software

Storage has been slow to adopt change and remains one of the most static elements of today's IT infrastructures. I believe this is due to a history of storage being driven from a hardware mindset. The storage industry, for the most part, has been controlled by a few major vendors who have been resistant to disruptive technologies, especially those that can impact their high profit margins. However, the time is ripe for a major shift and some disruption. An understanding of how software liberates resources from devices, together with a perfect storm of forces - the success of server virtualization, new Cloud models, accelerating data growth, greater complexity and unsustainable buying practices - is driving a mind-shift and forcing real changes to happen at a much faster pace. Therefore, it is a great time to be at the helm of a storage software company, and I am pleased to share my personal observations and six predictions for the New Year.

1) SSD cost trade-offs will drive software advances. Auto-tiering software that spans all storage including flash memory devices and SSDs will become a "must-have" in 2012.

The drop in the cost of SSDs (solid state drives) and flash memory has already had a sizable impact on IT organizations, and this will continue in 2012. The latest storage systems incorporate these innovations and tout high performance and high availability, but they remain beyond an acceptable price point for most companies. In addition to price, the useful lifetimes of SSDs and flash memory are limited by the amount of write traffic they absorb, so they need to be monitored and protected to avoid potential data loss.

However, the big driver for SSDs is the need for greater performance, yet performance needs do not apply equally to all data. In fact, the majority of data - on average 90% - can reside on low-cost archives or mid-tier storage. Meanwhile, the major storage vendors continue to implore us to throw new hardware systems and more SSDs at the problem because they want to sell higher-priced systems. These innovations are great, but they must be applied wisely. To get the most value out of these expensive devices, software that can protect and optimally manage the utilization of these costly resources, as well as minimize write traffic, is now a "must-have."
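To make the write-minimization point concrete, here is a purely illustrative Python sketch of write coalescing - one common way caching and tiering software reduces flash wear by merging repeated writes to the same block in RAM before flushing. The class name and mechanics are invented for illustration; this is a sketch of the general technique, not any vendor's implementation.

```python
class WriteCoalescingCache:
    """Illustrative only: buffer writes in RAM and merge rewrites of the
    same block, so each block hits flash at most once per flush cycle."""

    def __init__(self):
        self.pending = {}        # block_id -> latest data for that block
        self.flash_writes = 0    # writes that actually reached flash

    def write(self, block_id, data):
        # A rewrite of a buffered block replaces it in place, so the
        # superseded version never costs a flash write.
        self.pending[block_id] = data

    def flush(self):
        # Each distinct buffered block costs exactly one flash write.
        self.flash_writes += len(self.pending)
        self.pending.clear()

cache = WriteCoalescingCache()
for i in range(10):
    cache.write(7, b"version %d" % i)   # ten rewrites of one hot block
cache.flush()
print(cache.flash_writes)               # 1 flash write instead of 10
```

The point of the sketch is the ratio: ten logical writes to a hot block cost a single physical write to flash, which is exactly the kind of wear reduction that makes expensive devices last longer.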

In 2012, businesses will gain a better understanding of why software is needed to expand the range of practical use cases for SSDs and to automate when, where and how best to deploy and manage these devices. Auto-tiering and storage virtualization software is critical to cost-effectively optimizing flash memory and SSD-based technologies as one element within the larger spectrum of storage devices that must be fully integrated and managed within today's dynamic storage infrastructures.

2) Hybrid data storage environments will need storage virtualization and auto-tiering to combine Cloud storage with existing storage. 

Cloud storage has already become a viable option for businesses that don't have the room to add storage devices or are looking for "pay-as-you-go" storage for less critical storage needs, backups and archiving. Industry analysts have already proclaimed that Cloud gateways and heterogeneous storage virtualization solutions combined with auto-tiering functionality can provide a seamless transition path to the Cloud that preserves existing storage investments.  

For most companies, the notion of moving all of their data to the Cloud is inconceivable. However, continuously expanding data storage requirements are fueling a need for more capacity. One way to address this growth is to include Cloud storage in the mix. The benefits of Cloud storage are numerous. Cloud storage can provide virtually limitless access to storage capacity and obviate the need for device upgrades or equipment replacements. Cloud storage can also reduce capital expenses. Look for continued advances in auto-tiering and storage virtualization technologies to seamlessly combine hybrid Cloud and on-premise environments in a way that operates with existing applications.  

3) The real world is not 100% virtual. Going virtual is a "hot topic," but the real world requires software that transcends both the virtual and physical worlds. 

Even VMware, the leading virtualization company in the world, along with the most ardent virtualization supporters in the analyst community, predicts that in 2012 over 50% of x86 architecture server workloads will be virtual. Other reports point out that only 25% of small businesses have virtualized, and still others highlight the many challenges of virtualizing mission-critical applications, new special-purpose devices and legacy systems. The key point is that the world is not 100% virtual, and the physical world cannot be ignored.

I find it interesting to note the large number of new vendors that have jumped squarely on the virtualization trend and have designed their solutions solely for the virtual world. Most, therefore, do not manage physical devices, support migrating from one device type to another, or support moving back and forth between physical and virtual environments. Some nouveau virtual vendors go further and make simplifying assumptions akin to theoretical physicists - disregarding real-world capabilities like Fibre Channel and assuming the world is tidy because all IT infrastructures operate virtually using virtual IP traffic. These virtualization-only vendors tend to speak of an IT nirvana in which everyone and everything connected to this world is virtual, open and tidy - devoid of the messy details of the physical world. Does this sound anything like your IT shop?

Most IT organizations have, and will have for many years to come, a major share of their storage, desktops and a good portion of their server infrastructure running on physical systems or on applications that are not virtualized. This new "virtual is all you need" breed of vendors clearly does not want you to think about your existing base of systems or those strange Fibre Channel-connected, UNIX or NetWare systems running in the shadows. All the virtual upstarts have a simple solution - buy all new and go totally virtual. But this is not the real world most of us live in.

Virtualization solutions must work and deliver a unified user experience across both virtual and physical environments. Solutions that can't deal with the physical device world do not work in the real world where flexibility, constant change, and migrations are the norm. While those solutions that "do" virtual will be "hot," I predict those that can encompass the broad range of physical and virtual worlds will be even "hotter."

4) Software will take center stage for storage in 2012, empowering users with a new level of hardware interchangeability and commodity-based "buying power."

"Throw more hardware at the problem" is still the storage vendor mantra; however, the growth rate and the complexity of managing storage are changing the model. The economics of "more hardware" no longer work. Virtualization and Clouds are all about software. With the high growth rates in storage, virtualization and Cloud computing, it is becoming increasingly clear that a hardware-bound, scale-up model of storage is impractical. The "hardware mindset" restrains efficiency and productivity, while software that enables hardware interchangeability advances these critical characteristics. The hardware model goes against the IT trends of commoditization, openness and resource pooling, which have driven the IT industry over the last decade. Software is the key to automating and increasing management productivity, while adding the flexibility and intelligence to harness, pool and leverage the full use of hardware investments.

As the world moves to Cloud-based, virtualization-based and "Big Data" environments, software models that allow for hardware interchangeability, open-market purchasing and better resource management for storage will be the big winners.

5) "Big Data" will get the Hype, but "Small Data" will continue to be where the action is in 2012. 

Yes, Big Data is getting all the attention, and yes, Big Data needs "Big Storage," so every storage vendor will make it the buzz for 2012. Big Data has many definitions, but what is obvious is that the amount of data being stored, and its rate of growth, continue to climb, and this data requires better solutions if it is to be cost-effectively managed. Analyst firm IDC believes the world's information is doubling every two years. By the end of 2011, according to IDC, the world will create a staggering 1.8 zettabytes of data. By 2020, the world will generate 50 times that amount of data, and IT departments will have to manage 75 times the number of "information containers" housing this data. Clearly, the largest companies managing petabytes, exabytes and beyond are the main focus of the talk, but the small and mid-size businesses that deal in terabytes of data comprise the vast majority of the real world. And it is these small-to-midsize companies that need practical data storage and management solutions TODAY. These business consumers can't afford to wait until tomorrow, nor can they afford to throw out their existing storage investments. Rather, they need solutions that build on the devices they currently have installed and make them more efficient, highly available and easier to manage.

Software technologies, such as thin provisioning, auto-tiering, storage virtualization and storage hypervisors that empower users to easily manage all the storage assets they require - whether located on-premise, at a remote site or in the Cloud - will be key enablers in 2012. Big Data will be the buzz and will drive many new innovations. Big Data will also benefit greatly from these same software-based enablers, but the Small Data opportunity is extremely large and that is where I'm betting the real action will be in 2012.
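As a rough illustration of one of those enablers, the sketch below shows the idea behind thin provisioning: a volume advertises far more capacity than it consumes, and physical blocks are allocated only when first written. The class, block size and capacities here are hypothetical, chosen only to show the allocate-on-write behavior.

```python
class ThinVolume:
    """Illustrative thin-provisioned volume: presents a large virtual
    capacity but allocates physical blocks only on first write."""

    BLOCK = 1 << 20  # assume 1 MiB allocation granularity

    def __init__(self, virtual_gib):
        self.virtual_bytes = virtual_gib << 30   # advertised capacity
        self.blocks = {}                         # block index -> data

    def write(self, offset, data):
        assert offset + len(data) <= self.virtual_bytes
        # Physical space is consumed only when a block is first touched.
        self.blocks[offset // self.BLOCK] = data

    def allocated_bytes(self):
        return len(self.blocks) * self.BLOCK

vol = ThinVolume(virtual_gib=1024)   # advertises 1 TiB to applications
vol.write(0, b"boot record")
vol.write(500 << 30, b"db page")     # sparse write deep into the volume
print(vol.allocated_bytes())         # 2097152: only 2 MiB actually used
```

Two writes into a nominal terabyte consume just two blocks of real capacity, which is why thin provisioning lets small and mid-size shops defer hardware purchases until data actually arrives.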

6) Storage Hypervisors will make storage virtualization and Cloud storage practical. 

Enterprise Strategy Group (ESG) recently authored a Market Report that I believe addresses the industry focus for the year ahead. In "The Relevance and Value of a 'Storage Hypervisor'," it states "...buying and deploying servers is a pretty easy process, while buying and deploying storage is not. It's a mismatch of virtual capabilities on the server side and primarily physical capabilities on the storage side. Storage can be a ball and chain keeping IT shops in the 20th century instead of accommodating the 21st century."

While enterprises strive to get all they can from their hardware investments in servers, desktops and storage devices, a major problem persists - a data storage bottleneck. Ironically, even as vendors promote and sell software-based, end-to-end virtualization and Cloud solutions, too often the reaction to handling storage is to throw another costly hunk of hardware at the problem in the form of a new storage array or device. 

The time has come to resolve the storage crisis, to remove the last bastion of hardware dependency and to allow the final piece of the virtualization puzzle to fall into place. Server hypervisors like VMware and Hyper-V have gone beyond the basics of creating virtual machines and have created an entire platform and management layer to make virtualization practical for servers, desktops and Clouds.

Likewise, it's time to become familiar with a component quickly gaining traction and proving itself in the field: the storage hypervisor.  

A storage hypervisor is unique in its ability to provide an architecture that manages, optimizes and spans all the different price points and performance levels of storage. Only a storage hypervisor enables full hardware interchangeability. It provides important, advanced features such as automated tiering, which relocates disk blocks of data among pools of different storage devices (even into the Cloud) - thereby keeping demanding workloads operating cost-efficiently and at peak speeds. In this way, applications requiring speed and business-critical data protection can get what they need, while less critical, infrequently accessed data blocks gravitate towards lower-cost disks or are transparently pushed to the Cloud for "pay as you go" storage.
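The tiering behavior described above can be sketched in a few lines: sample how often each block is accessed over a window, then promote hot blocks toward faster tiers and demote cold ones toward cheaper tiers or the Cloud. The tier names and thresholds below are illustrative assumptions, not any vendor's actual policy.

```python
from collections import Counter

# Assumed tier ladder, fastest/most expensive first - illustrative only.
TIERS = ["ssd", "sas", "cloud"]

class AutoTierer:
    """Illustrative auto-tiering policy: periodically relocate blocks
    between tiers based on access counts from the last sampling window."""

    def __init__(self, hot=100, cold=5):
        self.hot, self.cold = hot, cold
        self.placement = {}          # block_id -> current tier name
        self.hits = Counter()        # accesses seen this window

    def access(self, block_id):
        self.placement.setdefault(block_id, "sas")   # new blocks land mid-tier
        self.hits[block_id] += 1

    def rebalance(self):
        for block_id, tier in self.placement.items():
            idx = TIERS.index(tier)
            if self.hits[block_id] >= self.hot and idx > 0:
                self.placement[block_id] = TIERS[idx - 1]   # promote one tier up
            elif self.hits[block_id] <= self.cold and idx < len(TIERS) - 1:
                self.placement[block_id] = TIERS[idx + 1]   # demote one tier down
        self.hits.clear()            # start a fresh sampling window

tiers = AutoTierer()
for _ in range(150):
    tiers.access("db-index")         # a hot, frequently read block
tiers.access("old-backup")           # touched once in the whole window
tiers.rebalance()
print(tiers.placement)               # {'db-index': 'ssd', 'old-backup': 'cloud'}
```

Real implementations work on much finer statistics and move data in the background, but the core loop - measure, promote, demote - is the same idea.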

I think ESG's Market Report stated it well: 

"The concept of a storage hypervisor is not just semantics. It is not just another way to market something that already exists or to ride the wave of a currently trendy IT term...Organizations have now experienced a good taste of the benefits of server virtualization with its hypervisor-based architecture and, in many cases, the results have been truly impressive: dramatic savings in both CAPEX and OPEX, vastly improved flexibility and mobility, faster provisioning of resources and ultimately of services delivered to the business, and advances in data protection. 

"The storage hypervisor is a natural next step and it can provide a similar leap forward."  

DataCore announced the world's first storage hypervisor in 2011. We built it with feedback gained in the real world over the last decade from thousands of customers. We saw this advance as a natural, but necessary, step forward for an industry that has been fixated on storage hardware solutions for far too long. 2012 will be the year that true hardware interchangeability and auto-tiering move from "wish list" to "to-do list" for many companies ready to break the grip that storage hardware vendors have long held over them.


About the Author

Mr. Teixeira creates and executes the overall strategic direction and vision for DataCore Software.

Mr. Teixeira co-founded the company and has served as CEO and President of DataCore Software since 1998. Prior to that time, Mr. Teixeira served in a number of executive management positions including Worldwide VP of Marketing and GM of the Product Business Group at Encore Computer Corporation, where he also played a major role as the team leader of OEM marketing and sales to Amdahl, IBM, and DEC. His work culminated in the $185 million sale of Encore's storage control business to Sun Microsystems in 1997. He also held a number of senior management positions at the Computer Systems Division of Gould Electronics.

Published Thursday, December 29, 2011 4:39 PM by David Marshall