Virtualization and Cloud executives share their predictions for 2016. Read them in this 8th Annual VMblog.com series exclusive.
Contributed by Roger Levy, VP of Product, MariaDB
What’s new for the database in 2016?
Some say 'data is the new oil', but while oil companies have a handle on the processes for turning oil into gasoline, the same can't be said for organizations trying to corral their data. How best to store and manage data is on the minds of most CIOs as they head into the New Year. It's exciting to see that databases, which underlie every app and enterprise on the planet, are now back in the spotlight. So what's new?
Securing your data at multiple layers
2015 saw every type of organization, from global retailers to the Catholic Church, experience financial losses and reputation damage from data breaches. Security has long been a concern of CIOs, but the growing frequency of high-profile attacks and new regulations make data protection a critical 2016 priority for businesses, governments, and non-profit organizations.
The days of relying on a firewall to protect your data are long gone. Amid a myriad of threats, a robust security regimen requires multiple levels of protection, including network access controls, firewalls, disk-level encryption, identity management, anti-phishing education, and so forth. Ultimately, hackers want access to the contents of an enterprise's database, so securing the database itself must be a core component of every organization's IT strategy.
Prudent software development teams will use database technology with native encryption to protect data at rest in the database, and SSL encryption to protect data as it moves between applications. They will also control access to the database with passwords and user validation, and grant authorization levels based on each user's role. Of course, organizations can't kick back and rely on software alone; they still have to hold themselves accountable via regular audits and testing.
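To make that concrete, here is a minimal sketch of the role-based access pattern described above, written in Python against a MariaDB server. It assumes MariaDB 10.0 or later (which introduced roles), a TLS-enabled server, and the pymysql client library; all hostnames, credentials, and role names are illustrative.

```python
# A sketch of role-based database access over an encrypted connection.
# Assumes a TLS-enabled MariaDB 10.0+ server and the pymysql library;
# hostnames, credentials, and role names below are illustrative.
import pymysql

# Connect over SSL so credentials and data are encrypted in transit.
conn = pymysql.connect(
    host="db.example.com",
    user="dba",
    password="dba-secret",
    ssl={"ca": "/etc/ssl/certs/db-ca.pem"},  # server CA certificate
)

with conn.cursor() as cur:
    # Create a role that bundles the minimum privileges an analyst needs.
    cur.execute("CREATE ROLE report_reader")
    cur.execute("GRANT SELECT ON sales.* TO report_reader")
    # Create the analyst account, force it to connect over SSL,
    # and grant it only the role rather than direct privileges.
    cur.execute("CREATE USER 'analyst'@'%' IDENTIFIED BY 'analyst-secret'")
    cur.execute("GRANT USAGE ON *.* TO 'analyst'@'%' REQUIRE SSL")
    cur.execute("GRANT report_reader TO 'analyst'@'%'")
conn.commit()
```

After connecting, the analyst account activates its privileges with SET ROLE report_reader; in MariaDB, roles are not active by default, which keeps each session's access scoped to what it explicitly requests.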
Multi-model databases
The variety, velocity, and volume of data are exploding. Every minute we send over 200 million emails and over 300 thousand tweets. By 2013, 90% of the world's data had been created in the preceding two years alone. But size is not everything. Not only have the volume and velocity of data increased; organizations are also collecting, storing, and processing an ever-wider variety of data formats.
While different data models have different needs in terms of insert and read rates, query rates, and data set size, companies are getting tired of the complexity of juggling different databases. Next year will kick off an increased trend toward data platforms that offer "polyglot persistence" - the ability to handle multiple data models within a single database. The demand for multi-model databases is surging as Structured Query Language (SQL) relational data from existing applications and connected devices must be processed alongside JavaScript Object Notation (JSON) documents, unstructured data, graph data, geospatial data, and other formats generated in social media, customer interactions, and the many applications using text and voice recognition.
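As one concrete illustration, here is a minimal sketch of mixing relational and schema-less data in a single MariaDB table using dynamic columns, via Python and the pymysql library. It assumes MariaDB 10.0 or later; the table, column names, and credentials are hypothetical.

```python
# A sketch of "multi-model" storage in one table: fixed relational
# columns for structured fields, plus a dynamic-columns BLOB for
# schema-less, JSON-like attributes that vary from row to row.
import pymysql

conn = pymysql.connect(host="db.example.com", user="app",
                       password="app-secret", database="shop")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS products (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL,
            attrs BLOB
        )""")
    # Each product can carry different flexible attributes.
    cur.execute(
        "INSERT INTO products (name, attrs) "
        "VALUES (%s, COLUMN_CREATE('color', %s, 'weight_kg', %s))",
        ("kettle", "red", 1.2))
    # Query structured and semi-structured data side by side.
    cur.execute("SELECT name, COLUMN_GET(attrs, 'color' AS CHAR) FROM products")
    print(cur.fetchall())
conn.commit()
```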
Growth in applying machine learning
With the rapid growth in the type and volume of data being created and collected comes the opportunity for enterprises to mine that data for valuable information and insights into their business and customers. As IT recruiters know well, more and more companies are employing specialist "data scientists" to introduce and implement machine learning technologies. But the number of experts in this field simply isn't growing fast enough, and this scarcity makes hiring a data scientist cost-prohibitive for most companies. In fact, the US alone faces a shortage of 140,000 to 190,000 people with analytical expertise, and of 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data, according to McKinsey & Company. In response, organizations are turning to machine learning tools that enable all of their employees to derive insights without relying on specialists. Collecting the data is only half the battle: companies also need analytical processing of large data sets to understand what lies in their databases and how it can be turned into valuable insights.
Recently the major public cloud vendors have introduced a variety of offerings that provide machine learning services, including Azure ML Studio from Microsoft, the Google Prediction API, Amazon Machine Learning, and IBM's Watson Analytics. While these services do not fully eliminate the need for machine learning expertise, they are a step in the right direction toward making machine learning more widely accessible. Organizations will also need database management systems that integrate smoothly with machine learning systems for fast performance. Next year we'll see more companies adopt these types of solutions to fill the remaining gaps and to do more with their data than just store it.
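The database-to-machine-learning handoff can be as simple as the following sketch: pull labeled history out of a SQL database and fit a model with an off-the-shelf library (scikit-learn here, as one illustrative choice). The table, columns, and credentials are hypothetical.

```python
# A sketch of feeding database rows into an off-the-shelf ML library.
# The table, column names, and credentials are hypothetical.
import pymysql
from sklearn.linear_model import LogisticRegression

conn = pymysql.connect(host="db.example.com", user="app",
                       password="app-secret", database="crm")
with conn.cursor() as cur:
    # Pull labeled history straight out of the operational database.
    cur.execute("SELECT visits, avg_order_value, churned FROM customer_history")
    rows = cur.fetchall()

X = [[float(visits), float(aov)] for visits, aov, _ in rows]  # feature matrix
y = [int(churned) for _, _, churned in rows]                  # labels

# Fit a simple churn classifier, then score a hypothetical new customer.
model = LogisticRegression().fit(X, y)
print(model.predict([[12, 54.0]]))
```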
Hybrid cloud and on-premise application deployment
With the recent revenue announcements by public cloud providers such as Amazon Web Services and Microsoft Azure, it is clear that adoption of public cloud services is becoming mainstream. But they may never fully replace on-premise data storage. While the cloud offers greater scalability and flexibility, better business continuity, disaster recovery, and capital cost savings, most organizations won't move all their tech workloads to the public cloud, for both economic and security reasons. As a result, companies will optimize a mix of public and private cloud and traditional on-premise data management solutions, though operating across multiple environments will present challenges.
Enterprises are gradually learning when and how to leverage the public cloud and when it makes more sense to keep data in their own data centers or use a hybrid approach. In 2016, we expect to see a greater focus on creating solutions that ease the migration to hybrid cloud architectures. Examples include cloud bursting from private clouds to public clouds when demand spikes, and the use of hybrid cloud for disaster recovery by replicating databases to the cloud as backups. Furthermore, as more countries put different data privacy laws into effect, databases will need to live in different cloud and on-premise deployments depending on the region.
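That disaster-recovery pattern can be sketched in a few statements: point a cloud-hosted MariaDB replica at an on-premise primary so the cloud copy stays current as a backup. This assumes MariaDB 10.0+ with binary logging and a replication user already set up on the primary; hostnames and credentials are illustrative.

```python
# A sketch of hybrid-cloud disaster recovery: a cloud replica trails an
# on-premise primary. Assumes MariaDB 10.0+ with binary logging and a
# replication user already set up on the primary; names are illustrative.
import pymysql

# Connect to the replica running in the public cloud.
replica = pymysql.connect(host="replica.cloud.example.com",
                          user="admin", password="admin-secret")
with replica.cursor() as cur:
    # Point the replica at the on-premise primary using GTID positioning,
    # so replication can resume cleanly after interruptions.
    cur.execute("""
        CHANGE MASTER TO
            MASTER_HOST = 'db1.onprem.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'repl-secret',
            MASTER_USE_GTID = slave_pos
    """)
    cur.execute("START SLAVE")        # begin applying the primary's changes
    cur.execute("SHOW SLAVE STATUS")  # verify the replication threads are running
    print(cur.fetchone())
```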
A database renaissance?
With the recent rise of the Chief Data Officer, the widespread adoption of
new database technologies, and the acute need for better IT security, the database
is back in the spotlight. One of the most foundational technologies is once
again one of the hottest. IT personnel
should pay close attention to the innovations 2016 will bring.
##
About the Author
Roger Levy brings extensive international,
engineering and business leadership experience to his role as VP, Products, at
MariaDB. He has a proven track record of growing
businesses, transforming organizations and fostering innovation in the
areas of data networking, security, enterprise software, cloud computing and
mobile communications solutions, resulting in on-time, high-quality and
cost-effective products and services. Previous roles include VP and GM of
HP Public Cloud at Hewlett-Packard and SVP of Products at Engine Yard; he also founded R.P. Levy Consulting LLC.