VMblog's Expert Interviews: Pino de Candia, CTO of Midokura, Talks Network Virtualization and Security


As we begin to make our way into 2016, I wanted to follow up on a few of the themes that bubbled up from VMblog's recent 2016 Prediction Series.  One area that got a lot of play in this year's series is the future of the increasingly hot network virtualization space.  Another is how the IT industry plans to address the high-profile security breaches that continue to make news headlines.  Security remains a major concern for today's businesses, and it will prove to be a vital component of virtualization and cloud technologies.

To dig into these topics and to specifically find out how they will affect OpenStack's future success, I connected with Pino de Candia, CTO of Midokura, a global innovator in software network virtualization and a key player in the OpenStack community.

VMblog:  To jump start the conversation, can you describe the relationship between network virtualization and security?

Pino de Candia:  Traditionally, security professionals have relied on perimeter security to protect their data centers, whereby every host behind the firewall can be considered trusted. What is happening in security nowadays is that even a host protected behind a firewall cannot be trusted to be uncompromised.

A zero-trust approach to computing is rapidly gaining traction among security and networking professionals. This is especially true where servers and endpoints are actively kept secure while networks are treated as if they are continually open or breached. Since networking in a microservices architecture means more components to manage, and in turn more endpoints to secure, keeping configurations consistent and maintaining security policies can be challenging. Leveraging an automated, intelligent agent that can apply rules, or a chain of rules, on the host can help thwart malicious traffic on a port-by-port basis.
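
To make the idea concrete, here is a minimal sketch of such an agent applying an ordered chain of port-by-port rules on a host with iptables. The chain name, ports and rule order are illustrative assumptions, not details from the interview.

    # Hypothetical host agent: applies an ordered chain of rules, port by
    # port, with iptables. To take effect, the chain must still be hooked
    # into INPUT (e.g. iptables -A INPUT -j ZERO-TRUST).
    import subprocess

    RULES = [
        # (protocol, port, action) - evaluated top-down, first match wins
        ("tcp", 443, "ACCEPT"),    # allow HTTPS
        ("tcp", 22, "ACCEPT"),     # allow SSH
        ("tcp", None, "DROP"),     # default: drop all other TCP traffic
    ]

    def apply_chain(chain="ZERO-TRUST"):
        subprocess.run(["iptables", "-N", chain], check=False)  # create chain if absent
        subprocess.run(["iptables", "-F", chain], check=True)   # flush old rules
        for proto, port, action in RULES:
            cmd = ["iptables", "-A", chain, "-p", proto]
            if port is not None:
                cmd += ["--dport", str(port)]
            cmd += ["-j", action]
            subprocess.run(cmd, check=True)

    apply_chain()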

VMblog:  Talk about the security threats facing companies today as they move their applications into production and into a private cloud.  How is this different from a traditional data center?

Pino de Candia:  With floating IPs, virtual machines can be reached from the outside. Security threats coming from "North-South" traffic (in and out of the cloud) can be mitigated by the DMZ and the perimeter devices. However, security at the perimeter of the cloud is not enough given that 80% of traffic stays within the cloud.

There is a clear gap, as "East-West" traffic (within the cloud) does not have the same level of protection. One way of handling this is by implementing additional layers of security, like firewalls at each tier (say, in front of the web server or application tier).
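
One hedged sketch of such tiered, east-west filtering, using OpenStack Neutron security groups via the openstacksdk Python client. The tier names, cloud name and port are hypothetical; the key point is that the application tier accepts traffic only from members of the web tier's group, not from arbitrary endpoints inside the cloud.

    # A sketch of per-tier east-west filtering with Neutron security groups.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # assumes a configured clouds.yaml

    web = conn.network.create_security_group(name="web-tier")
    app = conn.network.create_security_group(name="app-tier")

    # App tier: accept TCP 8080 only from ports that belong to the web
    # tier's group; everything else east-west is dropped by default.
    conn.network.create_security_group_rule(
        security_group_id=app.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=8080,
        port_range_max=8080,
        remote_group_id=web.id)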

VMblog:  What do companies need to know about preventing security threats?

Pino de Candia:  Industry research shows that the vast majority of firewall-related incidents are caused by administrator misconfiguration. This is not due to a lack of training; it has more to do with manual operations. When someone has to log into each firewall to make changes, it is far too easy to introduce errors over time through repetition, especially when a typical data center contains thousands of firewalls. That is why automation makes sense: it can help eliminate human error.
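
A minimal sketch of that idea: one vetted, declarative rule set pushed identically to every firewall in a single pass, rather than an operator logging into each device. Here push_rules() stands in for whatever transport a real tool would use (SSH, a REST API) and is purely hypothetical.

    # Hypothetical reconciliation loop: every firewall ends up with exactly
    # the same vetted rule set, eliminating per-device manual repetition.
    DESIRED_RULES = [
        {"proto": "tcp", "port": 443, "action": "allow"},
        {"proto": "tcp", "port": 22, "action": "allow", "source": "10.0.0.0/8"},
    ]

    def reconcile(firewalls, push_rules):
        """Apply the same declarative rule set to every device."""
        for fw in firewalls:
            # Idempotent: the device converges to DESIRED_RULES regardless
            # of what an earlier manual change left behind.
            push_rules(fw, DESIRED_RULES)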

Also, automated tools are much better at translating requests communicated in different languages, for example from a networking language to a security language. Automation also reduces friction within teams when requests are denied. For example, when a request is denied by an automated system, the requester knows that the decision is based on fairness rather than personal preference.

VMblog:  How can network virtualization be used as an added security layer?  And can you describe service chaining of security functions (like firewalls, IDS...)?

Pino de Candia:  Traditionally, application deployments were designed to provide observation points for network security (this also, unfortunately, introduced network bottlenecks). The virtualization of workloads using VLANs maintained this approach to security - but at the cost of scalability limitations, vendor lock-in and an overall lack of agility.

The virtualization of workloads using "flat networking" (for example, all workloads in the same network domain with port-level L4 firewalls) succeeded in removing some network bottlenecks, but inherently reduced security. It has even left application tiers or entire applications (e.g., in the container space) open to L4 probing and scanning by any endpoint in the virtual network.

The point is that while workload virtualization and network virtualization enable flexibility, agility and scale, they also introduce new security challenges and opportunities as workloads migrate, spin up and down, and scale. Using network virtualization with port-level or tenant-level firewalls is one way to address these challenges and easily enable L4 security.

For example, MidoNet open source network virtualization can block unauthorized traffic at the source VM because it is aware of the policy for VMs on remote hypervisors. Achieving higher-layer (L7) security is harder, as it entails detecting or preventing attacks made over authorized L4 ports. L4 security mechanisms cannot prevent these attacks because doing so requires inspection of the L7 payload.

But trying to redirect virtualized workload traffic out to a physical network for inspection by traditional devices is: 1) too slow, and therefore only attempted for workloads requiring the highest security, and 2) so burdensome on the network design that it reduces the agility and flexibility benefits of network virtualization itself.

VMblog:  Can you describe micro-segmentation and why it helps with security?

Pino de Candia:  Traditionally, segmenting the network limits the intruder and helps keep a breach from affecting the entire network. By segmenting, a data breach can be contained, and with forensics tools it becomes easier to diagnose the post-breach impact because the extent of the damage is limited. This is a reactive approach.

What is happening today is more proactive: using software (such as Open vSwitch) to enforce security through policies. This is a game changer, as conceivably each and every VM can have its own security (which can be thought of as its own lightweight firewall). The result is fine-grained security that is managed at the VM level, rather than at the network level as in the past. This ultimately means an application can have its own individual security level - something impossible from a traditional network security perspective.
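
As a hedged illustration of per-VM micro-segmentation, here is how a dedicated lightweight firewall for one VM might look with Neutron security groups through the openstacksdk client. The port ID, group name, ports and subnet are all hypothetical.

    # A sketch of micro-segmentation: one VM's Neutron port gets its own
    # dedicated security group, i.e. its own lightweight firewall.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    sg = conn.network.create_security_group(name="vm-42-only")
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=5432,
        port_range_max=5432,
        remote_ip_prefix="10.0.1.0/24")  # only the app subnet may reach this DB VM

    # Bind the policy to this one VM's port; no other VM shares it.
    conn.network.update_port("PORT-ID-OF-VM-42", security_group_ids=[sg.id])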

The advice to companies is to enforce security by implementing policies. We're starting to see this happen in the OpenStack Neutron project already, through a Stadium project developing group-based policies. So far this includes an API framework that provides an intent-driven model for operators to describe application requirements in a way that remains independent of the underlying infrastructure. Of course, companies must remember that at the end of the day, preventing a data breach in the first place is more important than containing one.

VMblog:  And how does open source technology play into this?

Pino de Candia:  Open source technology today is driving infrastructure - for example, foundational technologies like OpenStack and Docker are becoming the standard for building infrastructure clouds and platform as a service (PaaS), respectively. Much like OpenStack, which leverages open source technology including libvirt, KVM and MySQL, MidoNet open source network virtualization is built on open source building blocks including Apache ZooKeeper, Cassandra and the ELK stack. MidoNet relies on Apache ZooKeeper and Cassandra to store the virtual network topology and network state, such as MAC and ARP tables, so it can run at massive scale.
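
For a sense of what that looks like in practice, here is a hedged sketch of reading shared topology state from a ZooKeeper ensemble with the kazoo Python client. The znode paths are illustrative, not MidoNet's actual schema.

    # A sketch of how a MidoNet-style controller reads shared network state
    # from ZooKeeper; hosts and paths below are assumptions.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    # Walk one (illustrative) subtree of the virtual-topology store.
    for child in zk.get_children("/midonet/topology"):
        data, stat = zk.get("/midonet/topology/" + child)
        print(child, "version:", stat.version)

    zk.stop()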

VMblog:  Can you describe Midokura's work in this area and its use cases?

Pino de Candia:  Awareness of MidoNet has expanded to more than 122 countries across the globe. Our initial use cases provided an SDN plug-in for OpenStack networking. As Docker gains broad enterprise adoption, operations teams are looking to OpenStack to consolidate infrastructure (compute, networking and storage) management. Project Kuryr was born to address the use case of bridging container networking with OpenStack networking, seamlessly mapping Docker APIs to Neutron APIs as containers are instantiated.
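
A hedged sketch of that flow from the Docker side, using the Docker SDK for Python: a network created with the Kuryr driver is backed by a Neutron network, so containers that join it receive Neutron ports. The names are illustrative, and the driver only works where a Kuryr agent is running against a Neutron endpoint.

    # Assumed setup: Docker daemon with Kuryr's libnetwork remote driver.
    import docker

    client = docker.from_env()

    # A Docker network whose segment and IPAM live in Neutron, via Kuryr.
    net = client.networks.create(
        "demo-net",
        driver="kuryr",
        ipam=docker.types.IPAMConfig(
            driver="kuryr",
            pool_configs=[docker.types.IPAMPool(subnet="10.10.0.0/24")]))

    # Containers attached to demo-net now appear as Neutron ports.
    client.containers.run("nginx", detach=True, network="demo-net")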

##

Once again, a special thank you to Pino de Candia, Midokura's CTO, for taking time out to speak with VMblog.com about network virtualization and security.

Published Tuesday, January 26, 2016 6:37 AM by David Marshall