Virtualization Technology News and Information
Lightbits Labs Discusses the Benefits of NVMe/TCP at Flash Memory Summit 2020

Executives from Lightbits Labs will discuss the growing demand for NVMe/TCP during multiple sessions at the virtual Flash Memory Summit 2020 (FMS) on Nov. 10-12, 2020. Lightbits is one of 13 storage startups featured at FMS 2020.


  • Session B-1: NVMe-oF: The Best Way to Network Enterprise Storage
    NVMe-oF Track
    Tues., Nov. 10, 8:35 am - 10:05 am

    Lightbits' Chief Technology Officer (CTO) Sagi Grimberg joins panelists from Kalray, Kioxia and StorOne to discuss advancements in the latest NVMe over Fabrics specification (Version 1.1), which is now available to accompany the NVMe 1.4 specification. Moderated by John F. Kim, Director of Storage Marketing, NVIDIA, the panel will discuss important new functionality such as the NVMe/TCP transport and Asymmetric Namespace Access (ANA) multipathing. NVMe-oF is quickly gaining traction as a network technology enabling new architectures and use cases for disaggregated and hyperconverged cloud deployments. The panel will present practical ways to raise performance levels for NVMe/TCP networks to meet the demands of a wide variety of applications.
  • Session B-2: NVMe/TCP Use Cases
    NVMe-oF Track
    Tues., Nov. 10, 10:45 am - 11:15 am

    Lightbits' CTO Sagi Grimberg leads a panel discussion about NVMe/TCP's growing popularity for implementing flash storage networks because of its ease of use and cost-effectiveness, all over standard Ethernet networks. The panel, including representatives from Marvell, Intel, and Packet, will highlight typical use cases including cloud storage, databases, and containerized applications.
  • Session B-12: Best Ways to Achieve AI Model Scalability
    AI/ML Track
    Thurs., Nov. 12, 3:30 pm - 4:00 pm

    Lightbits' CTO Sagi Grimberg joins panelists from EmBestor Technology, Facebook, and Western Digital in discussing how the explosive growth of AI applications will require well-designed storage systems to meet their needs. The panelists will explore how, during training, storage systems must be capable of handling large numbers of small data files. They will also address how, during model execution, the key challenge is maintaining a steady flow of data to expensive chips such as GPUs and AI co-processors.
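A key point behind the sessions above is that NVMe/TCP runs over standard Ethernet with no special adapters. As a rough illustration of that ease of use, the following sketch shows how a Linux host typically discovers and connects to an NVMe/TCP target using the standard nvme-cli tool; the IP address and subsystem NQN are placeholders, not values for any specific Lightbits product.

```shell
# Load the NVMe/TCP initiator module (part of mainline Linux)
modprobe nvme-tcp

# Query the target's discovery service; 4420 is the conventional
# NVMe/TCP port. The address below is a placeholder.
nvme discover -t tcp -a 192.168.1.100 -s 4420

# Connect to a subsystem reported by discovery (NQN is illustrative).
# Its namespaces then appear as ordinary local block devices,
# e.g. /dev/nvme1n1, usable by any filesystem or application.
nvme connect -t tcp -a 192.168.1.100 -s 4420 \
    -n nqn.2020-01.com.example:subsystem1
```

Because the transport is plain TCP/IP, no RDMA-capable NICs or lossless fabric configuration are required, which is much of what makes NVMe/TCP attractive for the cloud and database use cases the panels discuss. (No test is included since these commands require a live NVMe/TCP target on the network.)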
Published Wednesday, November 04, 2020 9:26 AM by David Marshall