Does the move to cloud native spell the end of the DBA?

By Patrick McFadin, vice president, developer relations, DataStax

If your sole job responsibility is maintaining databases, you might be feeling some anxiety watching the move to cloud and cloud native. First it was the migration from bare metal to virtual machines, then it was containers. Now it's Kubernetes and cloud databases that require very little upkeep. Where does that leave you when there are fewer databases that need experts? Job fields in IT are changing, but they aren't shrinking. We still have shortages of workers everywhere, so let's dig into the details of what's happening here.

The end of specialization

First, let's get this out on the table for everyone. The number of jobs with the Database Administrator title has been declining for a long time. But that doesn't mean that databases have been left to run themselves. In fact, the skill of database administration has been a steady growth area. How do we reconcile these two facts? Operations and IT jobs have become more generalized and have moved away from specialization. No doubt this has been accelerated with the move to the cloud. LinkedIn data shows this trend very clearly.

 

[Chart: LinkedIn jobs and skills trend data. Source: LinkedIn]

The rise of generalization

If the trend is moving toward generalization, where do we go as database professionals? We move away from the specialized concept of "what" and toward the generalized idea of "how" we deploy cloud native applications. To that end, the job and skillsets of Site Reliability Engineer (SRE) have been tracking the explosive growth of cloud adoption. LinkedIn data clearly illustrates this trend; the difference between SRE and DBA job and skill growth is obvious.

 

[Chart: LinkedIn data comparing SRE and DBA job and skill growth. Source: LinkedIn]

Making the move to SRE

What is it that SREs do that is so important for modern deployments? Consider this definition of SRE from Wikipedia:

Site reliability engineering (SRE) is a set of principles and practices that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems. Site reliability engineering is closely related to DevOps, a set of practices that combine software development and IT operations, and SRE has also been described as a specific implementation of DevOps.

            Source: https://en.wikipedia.org/wiki/Site_reliability_engineering

Adopting an SRE mindset means going beyond what you are deploying and putting a greater focus on the entire stack and how it works as a package. How will all of the pieces work together to meet the goals of the application? A holistic view of a deployment considers how each piece will interact, the required access (including security), and the observability of every aspect to ensure service levels are met. Engineers who've learned the skills required to run critical database infrastructure have an essential baseline that translates into what's needed to manage cloud native data:

  • Maintaining availability
  • Monitoring latency
  • Change management
  • Emergency response
  • Capacity management

New skills need to be added to this list to take on the broader responsibility of the entire application. Some of these you may already have:

CI/CD pipelines

Embrace the big picture of taking code from repository to production; nothing accelerates application development in an organization more. Continuous Integration (CI) builds new code into the application stack and automates all testing to ensure quality. Continuous Deployment (CD) takes the fully tested and certified builds and automatically deploys them into production. Used together as a pipeline, CI and CD can drastically increase developer velocity and productivity.
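The flow described above can be sketched as a sequence of gating stages, where a failure at any stage stops the pipeline. This is a hypothetical toy model, not a real CI/CD tool's API; the stage names and functions are illustrative only.

```python
# Illustrative sketch of a CI/CD pipeline as ordered, gating stages.
# Stage names and bodies are hypothetical stand-ins for real tooling.

def run_pipeline(stages):
    """Run each (name, stage) pair in order; stop at the first failure."""
    for name, stage in stages:
        if not stage():
            print(f"pipeline stopped: {name} failed")
            return False
        print(f"{name}: ok")
    return True

pipeline = [
    ("build", lambda: True),   # CI: compile and package the application
    ("test", lambda: True),    # CI: automated tests gate the build
    ("deploy", lambda: True),  # CD: push the certified build to production
]

run_pipeline(pipeline)
```

In a real pipeline each stage would invoke build tools, test suites, and deployment tooling; the key property is the same: a change only reaches production after every earlier gate passes.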

Observability

Monitoring is something anyone with experience in infrastructure is familiar with: in the "what" part of DevOps, you know services are healthy and have the information needed to diagnose problems. Observability expands monitoring into the "how" of your application by considering everything as a whole. One example is tracing the source of latency in a highly distributed application by gaining insight into every hop that data takes.
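The tracing idea can be sketched with nothing but the standard library: wrap each hop in a timed span and record how long it took. This is a hypothetical toy helper, not a real tracing API; production systems use a dedicated library such as OpenTelemetry.

```python
import time
from contextlib import contextmanager

# Minimal sketch of tracing spans (hypothetical helper, not a real
# tracing library). Each span records its name and elapsed time.
spans = []

@contextmanager
def span(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

# Nested spans model the hops a request takes through the system.
with span("request"):
    with span("db_query"):
        time.sleep(0.01)   # simulate a slow downstream hop
    with span("render"):
        pass

# The recorded spans reveal which hop contributed the latency.
for name, elapsed in spans:
    print(f"{name}: {elapsed * 1000:.1f} ms")
```

Here the "db_query" span dominates the "request" total, which is exactly the kind of insight tracing gives you across a distributed call graph.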

Knowing the code

When things go wrong in a large distributed application, it's not always a process failure. In many cases, it could be a bug in the code or a subtle implementation detail. Because you're responsible for the entire health of the application, you'll need to understand the code that is executing in the environment you provide. Properly implemented observability will help you find problems, and that includes the software instrumentation. SREs and development teams need to have clear and regular communication, and code is common ground.

Principles for success

In practical terms, maintaining cloud native applications will probably mean deploying on Kubernetes. When deploying data infrastructure, there are core principles that the community has created and continues to refine. These are the practices that give you reliable results and keep your deployments portable and consistent.

Principle 1: Leverage compute, network, and storage as commodity APIs

One of the keys to the success of cloud computing is the commoditization of compute, networking, and storage as resources we can provision via simple APIs. These are the most basic resources and constitute the primary cost of infrastructure.
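As a concrete illustration of storage as a commodity API, here is a sketch that builds a Kubernetes PersistentVolumeClaim manifest. The manifest fields are standard Kubernetes PVC fields; the claim name and size are hypothetical, and actually submitting the request (via kubectl or a client library) is left out.

```python
import json

# Sketch: storage requested as a commodity API resource. This builds a
# Kubernetes PersistentVolumeClaim manifest as plain data; applying it
# to a cluster is out of scope here.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "cassandra-data"},   # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

The point is that storage is no longer a ticket to a storage team; it's a resource you declare and the platform provisions.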

Principle 2: Separate the control and data planes

Kubernetes promotes the separation of control and data planes. The Kubernetes API server is the key control plane interface used to request computing resources, while the control plane itself manages the details of mapping those requests onto an underlying IaaS platform.

Principle 3: Make observability easy

The three pillars of observable systems are logging, metrics, and tracing. Learn how to add them to every deployment and make it a basic part of everything you do. When the inevitable problems arise, you'll be happy you spent the time to do it right.
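Two of the three pillars can be sketched with the standard library alone: structured log lines and a simple latency metric. The handler and field names are hypothetical; tracing usually needs a dedicated library (such as OpenTelemetry) and is omitted here.

```python
import json
import logging
import time

# Sketch of two observability pillars using only the standard library:
# structured (JSON) logs and a latency metric. Names are illustrative.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app")

latencies_ms = []  # metric: per-request latency samples

def handle_request(path):
    start = time.perf_counter()
    # ... real work would happen here ...
    elapsed_ms = (time.perf_counter() - start) * 1000
    latencies_ms.append(elapsed_ms)  # record the metric sample
    log.info(json.dumps({"event": "request", "path": path,
                         "latency_ms": round(elapsed_ms, 2)}))

handle_request("/health")
print(f"samples recorded: {len(latencies_ms)}")
```

Making this instrumentation a default part of every deployment, rather than an afterthought, is the habit this principle is asking you to build.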

Principle 4: Make the default configuration secure

Security is often the last thing considered in application deployment. Reverse that trend and learn how Kubernetes deployments can use network access controls, TLS, and secure passwords, all of which can easily be added to every deployment.
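One small piece of "secure by default" is never shipping a hard-coded password. The sketch below generates a strong random password with Python's `secrets` module and wraps it as a Kubernetes Secret manifest; the manifest fields are standard, but the secret name is hypothetical and applying it to a cluster is left out.

```python
import base64
import json
import secrets

# Sketch: generate a strong random password by default instead of a
# hard-coded one, and package it as a Kubernetes Secret manifest.
password = secrets.token_urlsafe(24)  # cryptographically strong random value

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-superuser"},  # hypothetical secret name
    "type": "Opaque",
    # Kubernetes Secret data values are base64-encoded
    "data": {"password": base64.b64encode(password.encode()).decode()},
}

print(json.dumps(secret, indent=2))
```

A deployment that generates its credentials this way has no default password to forget to change.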

Principle 5: Prefer declarative configuration

In Kubernetes, a deployment configuration describes the desired end state; it's an example of declarative configuration that lets the system do the work. By contrast, DevOps deployment tools like Chef and Ansible are examples of imperative configuration, describing all the steps needed to make something happen. Declarative configuration allows for fast and consistent deployments.
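The heart of the declarative model is a reconcile loop: you state the desired end state, and the system converges the actual state toward it. This is a hypothetical toy model of that idea, not real Kubernetes controller code.

```python
# Sketch of the declarative idea: declare the desired end state and let
# a reconcile loop do the work, instead of scripting imperative steps.
# (Toy model; real Kubernetes controllers are far more involved.)

def reconcile(desired, actual):
    """Converge actual state toward the declared desired state."""
    while actual["replicas"] < desired["replicas"]:
        actual["replicas"] += 1   # e.g. start one more pod
    while actual["replicas"] > desired["replicas"]:
        actual["replicas"] -= 1   # e.g. stop one pod
    return actual

print(reconcile({"replicas": 3}, {"replicas": 1}))
```

Notice that the same declaration works whether the system is below or above the target; that idempotence is what makes declarative deployments fast and consistent.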

The future is waiting

The increase in cloud native application deployments isn't going to slow down, and seasoned professionals are needed. If you're working as a database administrator, you could make a significant impact on the future. This is something that Jeff Carpenter and I will cover in much more depth in our upcoming O'Reilly book, "Managing Cloud Native Data on Kubernetes." There are communities to join, such as the Data on Kubernetes Community or K8ssandra.io, where you can work with like-minded people who are on the same journey. We get that thinking about career changes and staying on top of new technology can be stressful. There are a lot of us making this move, and we hope you can join us.

When you are ready to make a move, we at DataStax have made a lot of free education available to get your skills updated. We offer online courses where you can learn more about building cloud native applications and infrastructure, and to show the world you are ready to go, we also offer certifications you can use to get that upgrade and find your future.

##

To hear more about cloud native topics, join the Cloud Native Computing Foundation and the cloud native community at KubeCon+CloudNativeCon North America 2021, October 11-15, 2021.

ABOUT THE AUTHOR

Patrick McFadin, vice president, developer relations, DataStax


Patrick McFadin started his professional career in the US Navy doing digital communications while touring the world on a destroyer. He joined the internet-all-the-things wave in the 1990s and was an Oracle DBA/developer/architect for over 15 years. In 2013 he became the chief evangelist for Apache Cassandra, and he is now the vice president of developer relations at DataStax and co-author of the upcoming O'Reilly book "Managing Cloud Native Data on Kubernetes."

Published Monday, October 04, 2021 7:32 AM by David Marshall