NextIO recently announced the general availability of its new vNET I/O Maestro, a top-of-rack appliance that simplifies the deployment and management of complex server I/O in data centers and virtualized environments by consolidating and virtualizing I/O resources. To learn more, I met with NextIO's John Meadows to get a better understanding of their technology.
VMblog.com: Can you describe what you guys are hearing as some of the current problems in today's data centers?
NextIO: Sure. IT managers are being asked to design and support a wide variety of complex projects such as on-demand private and hybrid clouds, virtual desktop roll-outs, and virtualizing mission-critical applications. At the same time, IT budgets are under tremendous downward pressure. There is a big need to consolidate hardware and increase utilization of resources throughout the IT stack.
VMblog.com: Explain, if you would, how PCI Express-based I/O virtualization can alleviate the pain points they might be experiencing.
NextIO: PCIe-based I/O virtualization helps alleviate these pain points, and we believe it will be a key component of next-generation data centers. I/O virtualization separates compute from I/O resources so that each can be managed independently, providing much greater flexibility and utilization of both compute and I/O resources.
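To make that separation concrete, here is a short, purely illustrative Python sketch. The class and method names are hypothetical and are not NextIO's actual management interface; the point is simply that compute nodes and I/O devices live in independent inventories, and the mapping between them can change without touching either side.

```python
# Illustrative model of I/O virtualization (hypothetical names, not NextIO's API):
# I/O devices are tracked as a shared pool, and a device can be assigned or
# reassigned to any server without recabling.

class IOPool:
    def __init__(self):
        self.devices = {}      # device_id -> device type, e.g. "10GbE NIC"
        self.assignments = {}  # device_id -> server name, or None if unassigned

    def add_device(self, device_id, device_type):
        self.devices[device_id] = device_type
        self.assignments[device_id] = None

    def assign(self, device_id, server):
        # Map a shared device to a server; compute and I/O stay independent.
        self.assignments[device_id] = server

    def release(self, device_id):
        self.assignments[device_id] = None


pool = IOPool()
pool.add_device("nic0", "10GbE NIC")
pool.add_device("hba0", "8Gb FC HBA")

# The same physical device can serve whichever server needs it today.
pool.assign("nic0", "server-01")
pool.release("nic0")
pool.assign("nic0", "server-02")
print(pool.assignments)  # {'nic0': 'server-02', 'hba0': None}
```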
VMblog.com: So why use PCI Express at the top of the rack instead of something else like InfiniBand, 10GbE, or FCoE?
NextIO: That's a good question. Industry-standard PCI Express (PCIe) switching technology is built into every server and provides 40 Gb of I/O connectivity to every device at no additional cost. The vNET I/O Maestro can support any PCIe device out there, from legacy devices to the latest and greatest technology being offered.
VMblog.com: OK, that's a good segue. Explain to us: what exactly is the vNET I/O Maestro?
NextIO: The vNET I/O Maestro is a top-of-rack appliance that allows servers to share a variety of I/O resources, including Fibre Channel and Ethernet, through a standard PCI Express connection.
VMblog.com: Can you explain how using the vNET I/O Maestro can reduce the cost of running a data center?
NextIO: Well, a key to reducing cost is the "Wire Once" concept. The vNET I/O Maestro is connected to each server with a single PCIe cable, replacing numerous Ethernet and Fibre Channel connections. This simplifies the deployment and management of complex server I/O by eliminating the need for dedicated I/O cards in servers, and it significantly reduces the number of leaf switches and cables in data center racks. Because the vNET I/O Maestro uses PCIe standards, customers avoid the vendor lock-in created by other proprietary fabric interconnects.

Operationally, the vNET I/O Maestro future-proofs your devices. For example, as 16 Gb Fibre Channel or 40 Gb Ethernet becomes available, that change in I/O can be made once at the vNET I/O Maestro rather than through a wholesale upgrade of every server. I/O resources and flash storage can be deployed and reassigned remotely; no hands have to touch the hardware. Finally, customers are able to deploy thinner servers as compute nodes, 1U instead of 4U, shrinking the hardware footprint and generating savings on power, cooling, and colocation space rentals.
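The "wire once" and future-proofing points can be illustrated with another small, purely hypothetical Python sketch (again, not NextIO's actual interface). Each server keeps a single logical uplink to a shared top-of-rack I/O module, so upgrading that module once, say from 8 Gb to 16 Gb Fibre Channel, changes the I/O every attached server sees without touching any server.

```python
# Hypothetical sketch of the "wire once" idea: each server has one logical
# uplink to a shared top-of-rack I/O module. Upgrading the module once
# upgrades the I/O seen by every attached server.

class SharedIOModule:
    def __init__(self, kind, speed_gb):
        self.kind = kind          # e.g. "Fibre Channel" or "Ethernet"
        self.speed_gb = speed_gb  # module speed in Gb/s

class Server:
    def __init__(self, name, module):
        self.name = name
        self.module = module      # the single "wire once" connection

    def io_profile(self):
        return f"{self.name}: {self.module.speed_gb} Gb {self.module.kind}"

fc_module = SharedIOModule("Fibre Channel", 8)
rack = [Server(f"server-{i:02d}", fc_module) for i in range(1, 4)]
print([s.io_profile() for s in rack])   # every server sees 8 Gb FC

# One change at the top of the rack; no server hardware is touched.
fc_module.speed_gb = 16
print([s.io_profile() for s in rack])   # every server now sees 16 Gb FC
```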
VMblog.com: What types of environments would benefit from using a solution like the vNET I/O Maestro?
NextIO: Virtual environments benefit in a number of ways, including better virtual desktop performance, higher workload densities, and improved SLA performance due to dedicated bandwidth assignments. In certain high-performance situations, flash-based storage can be deployed in the vNET device to minimize latency and improve application processing. For example, an online advertising company was able to eliminate its I/O bottleneck using the vNET with flash-based storage and increased its server workloads by 5x.
VMblog.com: I've talked with other I/O companies. Can you explain to me and our readers how the vNET I/O Maestro is different from those competitive offerings out there?
NextIO: The vNET I/O Maestro provides all of the benefits of shared I/O without requiring risky proprietary drivers, changes to customers' practices, or vendor lock-in. And it is completely transparent to the server, operating system, applications, and the network. This means it can be dropped into existing data centers without requiring changes to governance policies. Your readers can also visit our website for more information, join our mailing list to stay up to date, or contact us to schedule a 1:1 demo.