What do Virtualization and Cloud executives think about 2012? Find out in this VMblog.com series exclusive.
Solutions Will Arise Addressing the Contradiction Between Cloud Infrastructure and Platforms Required for Big Data Analytics
Contributed article by Shai Fultheim, founder and CEO, ScaleMP
This year we watched as organizations across all industries and of all sizes struggled to keep up with the growth of data and the emergence of a new trend: big data. According to a 2010 IBM survey, sensors, mobile devices, online transactions, and social networks create the equivalent of 2.5 quintillion bytes of data daily. As a result, 90 percent of the world's data has been generated in just the past two years. Every month, people send four billion tweets and post 30 billion messages on Facebook, and more than five billion mobile devices are in use today.
The computer systems most commonly used to store and process this information are concentrated in high-density, virtualized data centers, which account for a growing share of an organization's computing cycles and storage capacity. Yet the servers in these data centers lack the compute power and storage capacity to process the large amounts of data being generated. Customers looking to use these new data centers face a growing gap between the characteristics of their workflows and applications and the capabilities of the data center.
As a result, we see two paradigms being developed to address big data: software-based, distributed big data platforms and vertically scaled systems. These are contradictory approaches to the same big data analysis problem. On one side, cloud-focused, distributed software platforms such as Hadoop have recently entered the ring; on the other, companies such as IBM and Oracle champion vertically scaled systems, which pose significant challenges for customers, including monolithic architectures, over-provisioning, and high acquisition costs.
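To make the first paradigm concrete, here is a minimal sketch of the map/reduce pattern that Hadoop popularized, written in plain Python rather than against Hadoop's actual API. The toy corpus, the in-process "blocks," and the function names are illustrative assumptions standing in for data partitioned across cluster nodes.

    from collections import defaultdict
    from itertools import chain

    # Toy corpus split into blocks, standing in for data spread across nodes.
    blocks = [
        "big data in the cloud",
        "big data analytics",
        "data in the data center",
    ]

    def map_phase(block):
        # Each node emits (key, 1) pairs for the records it holds locally.
        return [(word, 1) for word in block.split()]

    def reduce_phase(pairs):
        # Pairs are aggregated by key, as Hadoop's shuffle-and-reduce step does.
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    word_counts = reduce_phase(chain.from_iterable(map_phase(b) for b in blocks))
    print(word_counts)  # e.g. {'big': 2, 'data': 4, 'in': 2, ...}

The appeal of this model is that the map phase runs where the data lives, so the cluster scales out by adding commodity servers instead of buying a larger machine.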
In 2012, the growth rate and pace of big data will only increase as organizations continue to search for better, more effective ways to improve products and services. The demand for big data solutions will multiply, and even more vendors will enter the arena to help solve the big data dilemma, further addressing the contradiction between big data in the cloud and big data analytics.
A recent installation at the San Diego Supercomputer Center (SDSC) investigates this challenge. SDSC has designed data-intensive computers that address a broad range of applications, coupling high-density data center economics with big data problems through a virtual SMP approach that provides virtual scale-up systems for the most demanding problems.
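For contrast with the distributed model above, the sketch below shows the shared-memory programming model that a scale-up (or virtual SMP) system presents to software: the whole dataset sits in one address space and every worker reads it directly, with no partitioning or network shuffle. The dataset, worker count, and aggregation here are illustrative assumptions, not SDSC's actual configuration.

    from concurrent.futures import ThreadPoolExecutor

    # One large in-memory dataset, visible to every thread: the single
    # address space that a scale-up (or virtual SMP) system presents.
    data = list(range(10_000_000))
    n_workers = 4
    chunk = len(data) // n_workers

    def partial_sum(i):
        # Each worker reads its slice of the same shared memory; nothing
        # is partitioned across machines or shipped over a network.
        return sum(data[i * chunk:(i + 1) * chunk])

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        total = sum(pool.map(partial_sum, range(n_workers)))
    print(total)  # 49999995000000

The point is the programming model rather than raw speed: applications written for a single large memory image can run unchanged, which is what a virtual SMP layer aims to preserve while still using commodity data center hardware.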
###
About the Author
Shai Fultheim is the founder and CEO of ScaleMP. He designed and architected the core technology behind ScaleMP and is responsible for the company's strategy and direction. Shai has more than 15 years of experience in technology and business roles at IT and venture-backed firms. Before founding ScaleMP, he was CTO of BRM Capital, a first-tier Israeli venture-capital fund, where he defined the technology roadmap that formed the foundation of BRM's investment strategy. He has also served in the Israel Defense Forces' prestigious central intelligence unit, where he led a large IT organization of hundreds of engineers and programmers.