Hey, You, Get Off of That Cloud - Reconciling DevOps and Security in Virtualized Cloud Environments
Written by Gary Southwell, General Manager, Security Products, CSPi 

Let's face it: settling differences between DevOps and security teams while driving for better application functionality and reduced time-to-deployment is hard enough in a static, on-premises environment. But what happens when the development process moves to the cloud, when applications are intended to run in virtualized cloud environments, or both? Is it even harder for DevOps and security to get along? Will the changes and compromises be even greater than in physical environments? Will time-to-deployment grow even longer?

Strife between DevOps and security is common. Often an application is developed independent of security requirements and oversight, and the security team is called in at the end to approve it before it goes into production. Sometimes these applications are deployed without security knowing about them at all, and the team only gets involved when the applications are discovered by chance or, even worse, when a problem occurs. In either case, security comes in at the end of the process, and its review can be long and painful. Developers are often forced to compromise functionality for the sake of security to get their applications into production. Eleventh-hour changes also tend to be blunt, since there is little time left to make the more nuanced adjustments that would accommodate desired functionality alongside security imperatives.

Best-in-class organizations are integrating security into their DevOps approach so that, from the very beginning, they can jointly determine the proper balance of risk versus access and functionality. Security can evaluate each application as it is being developed rather than playing catch-up at the end and becoming the cause of lengthy delays. Security teams can also address compliance, particularly the treatment of data, as part of the development process. They can work with the operations side, too, to ensure the application complies with security rules and that the underlying cloud infrastructure can be properly monitored. The results are orders of magnitude better when the security team is integrated with DevOps and the process starts together from the beginning.

So what happens when you move to the public cloud? Moving development to the cloud offers many advantages, but it also requires thinking differently. In static, physical environments, developers can easily map applications to services, such as databases and message servers, with fixed IP addresses safely tucked inside the organization's perimeter, running on infrastructure, hypervisors and operating systems controlled by the operations team under the security team's watchful eye. In the cloud, application service dependencies can be built on the fly without oversight: it's easy to spin up OS instances and pull in application components from places like GitHub to rapidly build prototype applications without anyone's knowledge or guidance. Testing these applications usually means accessing production data, and this is where things can go off the rails.

Unsecured applications accessing critical data or running critical processes can put organizations at extreme risk. If you think this does not happen, just look closely at the leaders of the industrial control IoT market. It is hardly a well-kept secret now, but several applications were rolled into production without any form of security controls. How did this happen? Security architects knew nothing about these rapidly developed applications, built in shadow cloud environments, until after they were released, with thousands of instances deployed in the wild. Fixes were long in coming, often taking many months, in part because of the lack of information security awareness. The difficulty of patching IoT devices once deployed in the field made matters even worse. Needless to say, it would have been better if these applications had been properly secured and tested before release.

Most organizations want to develop and then operate their applications in the cloud, under their own controls. What challenges does that raise? First, getting full details of what a given application is communicating with or connected to, which resources it is leveraging and from where, can be hard to verify. Second, developers should be taking advantage of sanctioned, cloud-based application lifecycle management components, including test and quality management, source code verification, proper security controls and configuration management. The challenge is that these tools, for the most part, must be brought to the cloud; no cloud provider today offers a complete security-in-depth tool set as a purchasable option. Lastly, cloud environments vary, so there is no guarantee that the tools you bring will be compatible; they have to be verified ahead of time.

The benefit of using the cloud is powerful and alluring: developers can quickly and easily spin up new environments for application design and testing without the effort and delays that physical resources require. Yet, as we saw in the IoT market, without proper oversight insecure applications can be deployed without proper controls.

Up until now, a real but mostly overlooked issue is that it is difficult, if not impossible, to know the security state of the underlying infrastructure. Because there is no visibility, it is simply assumed to be totally secure at all times. Chances are your cloud provider can't tell you the exact server your cloud instance is running on, never mind its revision and patch level. That information matters, given the unending string of vulnerabilities being exposed in Intel processors and the resulting need to verify that the proper hardware and firmware patches have been applied.

More recently, Bloomberg has the technology industry buzzing over as-yet-unsubstantiated claims that some widely deployed Supermicro cloud motherboards produced in China were implanted with vulnerabilities that, for years, allowed passwords to be bypassed and gave third parties access to data. If this is true, data in the largest clouds may be exposed to completely unsecured access, no matter what identity and access management service you are running, unless you adequately secure the data through encryption before it ever reaches these devices; be aware that this technique only works well for cloud storage applications. It's best to get in writing from your cloud service provider whether they use such motherboards, and if so, it's best not to run critical processes in those clouds until the claims are verified or refuted.

What measures and decisions do companies need to make in order to secure applications, processes and the data they access or generate?

1) Deploy proper security agents into your applications as they are being developed. Ideally this is as easy as pulling an agent in from the sanctioned library and, where required, connecting it within the application to leverage its security features. In addition, the agent should automatically alert the InfoSec team of its presence so that the application can be verified as properly patched, properly secured, and configured with the right connectivity policies (a minimal sketch of this self-registration idea follows this list).

2) Determine and enforce proper data access policies for cloud-based applications. Which other applications and which data should be accessed, and from where? Should this application access critical data, and if so, how and under what levels of protection? Should the data be encrypted in motion? At rest? If so, should it be uniquely encrypted down to the application or transaction level? Should these applications have access to the Internet, and if so, to where and under what conditions? (The second sketch after this list shows these questions captured as policy-as-code.)

3) Should the data be encrypted with keys brought in securely from the premises? And/or should certain data not be processed in the cloud at all, but only stored there, with appropriate levels of encryption applied before it ever leaves the premises? (The final sketch after this list illustrates that approach.)

4) Apply appropriate controls, under InfoSec control, to enforce such policies. It's easy to assume that the network or virtual network infrastructure run by the networking teams will do the right thing, but it needs to be verifiable and under InfoSec control. Leverage virtual or inline probes that interrogate traffic and apply connectivity, threat scanning and encryption policy verification on the fly, either in the cloud or before data leaves the premises.
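To make item 1 concrete, here is a minimal sketch, in Python, of an application-embedded agent that announces itself to an InfoSec inventory service at startup. The registration endpoint, the payload fields and the register_agent helper are all hypothetical placeholders rather than any vendor's actual API; the point is simply that the agent reports enough detail (application name, version, host, declared connections) for the security team to verify patch level and connectivity policy.

import json
import platform
import socket
import urllib.request

# Hypothetical InfoSec inventory endpoint - replace with whatever check-in
# service your security team actually sanctions.
REGISTRY_URL = "https://infosec-registry.internal/api/v1/register"

def register_agent(app_name, app_version, declared_connections):
    """Announce this application's presence so InfoSec can verify it."""
    payload = {
        "app": app_name,
        "version": app_version,
        "host": socket.gethostname(),
        "os": platform.platform(),
        # Connections the application intends to make, so policy can be checked.
        "declared_connections": declared_connections,
    }
    req = urllib.request.Request(
        REGISTRY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200

if __name__ == "__main__":
    register_agent("billing-prototype", "0.3.1",
                   ["postgres://orders-db:5432", "https://api.github.com"])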
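For item 2, the answers to those questions can be captured as policy-as-code that InfoSec owns and the application consults before opening a connection. The structure below is only a sketch under assumed names; the policy fields and the is_connection_allowed helper are illustrative, not a standard API.

# Illustrative data-access policy for one cloud application, owned by InfoSec.
DATA_ACCESS_POLICY = {
    "app": "billing-prototype",
    "allowed_services": {
        "orders-db": {"encrypt_in_motion": True, "encrypt_at_rest": True},
        "message-queue": {"encrypt_in_motion": True, "encrypt_at_rest": False},
    },
    "critical_data_allowed": False,          # may this app touch critical data at all?
    "internet_access": ["api.github.com"],   # explicit allow-list; empty means none
}

def is_connection_allowed(policy, service, uses_tls):
    """Return True only if the target service is sanctioned and meets policy."""
    rules = policy["allowed_services"].get(service)
    if rules is None:
        return False                          # service not sanctioned for this app
    if rules["encrypt_in_motion"] and not uses_tls:
        return False                          # policy demands encryption in motion
    return True

# Example: a plaintext connection to the orders database is rejected.
print(is_connection_allowed(DATA_ACCESS_POLICY, "orders-db", uses_tls=False))  # False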
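Item 3, encrypting data on-premises before it is ever stored in the cloud, might look like the following sketch using the widely available Python cryptography package (Fernet symmetric encryption). Key management details, such as how the on-premises key is generated, stored and rotated, are deliberately left out, and the upload_to_cloud_storage call is a hypothetical stand-in for whatever storage API you use.

from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key lives in an on-premises key manager or HSM and is
# never sent to the cloud; generating it inline here is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer: ACME Corp, card ending 4242"

# Encrypt before the data leaves the premises...
ciphertext = cipher.encrypt(record)

# ...so only ciphertext is ever uploaded to cloud object storage.
# upload_to_cloud_storage(ciphertext)   # hypothetical upload call

# Decryption happens back on-premises with the locally held key.
assert cipher.decrypt(ciphertext) == record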

As organizations move to the public cloud, they should take the opportunity to avoid, or remove, an adversarial relationship between DevOps and security teams. With proper controls, the cloud offers an opportunity to get it right from the "restart."

##

About the Author


Gary Southwell, General Manager, Security Products, CSPi

With over 25 years of strategic business and security product planning, Gary brings a wealth of data privacy and compliance knowledge to the cybersecurity team at CSPi. Prior to joining CSPi, Gary co-founded Seceon, a threat detection and remediation company. Having previously served as CTO at BTI Systems and as a GM at Juniper Networks, Gary has developed a knack for crafting solutions for big data security and reliable content delivery architectures.

Published Monday, October 22, 2018 9:34 AM by David Marshall