You know what's fascinating about technology cycles? Just when
everyone thinks they understand where the industry is heading, a
collection of companies emerges with solutions that make you question
everything you thought you knew about enterprise infrastructure.
That's exactly what happened during the 62nd Edition of The IT Press
Tour in California, where VMblog met with nine companies that, on the
surface, seemed to be solving completely different problems. Lucidity
was automating cloud storage management. Hunch was building AI workflow
tools. DDN was powering massive GPU clusters. Yet after analyzing all
nine briefings, a clearer picture emerged: these companies represent
five major shifts that could fundamentally alter how enterprises operate
over the next two years.
Here's what we learned, and why it matters for anyone managing enterprise technology infrastructure.
The AI Infrastructure Arms Race Gets Serious
The most striking theme from this year's tour was the sheer scale of
AI infrastructure deployment. DDN's Paul Bloch shared numbers that sound
almost fictional: they're currently managing 700,000+ GPUs, with
expectations to quadruple that within two years. The largest single
deployment? Elon Musk's xAI facility in Memphis with 200,000 GPUs-what
Bloch described as one of the first true "AI factories."
But here's where it gets interesting for enterprise IT teams. Three
companies-DDN, Graid Technology, and Phison-are attacking the same
fundamental bottleneck from different angles: storage can't keep up with
GPU computing power.
DDN operates at hyperscale, powering the
infrastructure behind companies like Meta and Tesla. Their EXAScaler
platform delivers sustained performance across tens of thousands of GPUs
while maintaining enterprise-grade reliability. When Jensen Huang says
"NVIDIA is powered by DDN," that's not just marketing-it's validation of
their technical approach.
Graid Technology targets the middle market with
their SupremeRAID GPU-accelerated storage. Instead of treating storage
and compute as separate systems, they use existing GPU infrastructure to
accelerate storage operations. The result? Over 95% of raw NVMe
performance while providing enterprise RAID protection. More
importantly, they eliminate the complexity that traditionally required
vast networking infrastructure-replacing 1,400 cables with just 40 in
some deployments.
Phison takes the democratization approach with their
aiDAPTIV+ platform. Rather than requiring massive GPU investments, they
use flash storage as an extension of GPU memory, delivering 8-10x cost
reductions for AI training. Their CEO faced the classic enterprise
dilemma: wanting AI capabilities but balking at the $2 million price
tag. Their solution makes AI accessible to organizations that could
never afford traditional deployments.
The market implications are staggering. We're looking at a
trillion-dollar AI infrastructure market where storage architecture
becomes the difference between success and expensive failure.
Organizations that solve the storage bottleneck will see competitive
advantages; those that don't will watch their GPU investments sit idle.
When AI Automates the Automators
Two companies caught our attention with a different approach to AI:
using artificial intelligence to eliminate the busy work that consumes
most knowledge workers' time. Both Hunch and Lucidity are building
automation platforms, but for completely different domains.
Hunch's Overclock platform lets users describe complex workflows in
plain English rather than building intricate automation rules. Their CEO
David Wilson shared a telling insight: most knowledge work involves
"content transformation from one system to another, with very little
value add in that process." Instead of building better pipeline tools,
they're questioning why we need pipelines at all.
The technical approach involves multi-agent systems with built-in
oversight-what they call a "watchdog" agent that monitors execution to
ensure tasks follow instructions correctly. When integrated with
workplace tools like Slack and Teams, the system can handle failures
gracefully and escalate to humans when needed.
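To make the pattern concrete, here's a minimal sketch of the watchdog idea in Python. It isn't Hunch's implementation; the worker, checker, and escalation functions are hypothetical stand-ins for whatever Overclock does internally.

```python
# Minimal sketch of a "watchdog" pattern: a supervisor checks a worker agent's
# output against the original instruction and escalates to a human after
# repeated failures. All functions are hypothetical stand-ins, not Hunch's API.

from dataclasses import dataclass

@dataclass
class TaskResult:
    output: str
    ok: bool  # did the worker believe it succeeded?

def worker_agent(instruction: str) -> TaskResult:
    # Placeholder for an LLM-driven agent that actually performs the task.
    return TaskResult(output=f"draft for: {instruction}", ok=True)

def watchdog_check(instruction: str, result: TaskResult) -> bool:
    # Placeholder for a second agent that verifies the output really follows
    # the instruction (e.g. an LLM critique pass or rule-based checks).
    return result.ok and instruction.split()[0].lower() in result.output.lower()

def escalate_to_human(instruction: str, attempts: int) -> None:
    # In a real system this might post to Slack or Teams for review.
    print(f"Escalating '{instruction}' after {attempts} failed attempts")

def run_with_watchdog(instruction: str, max_attempts: int = 3) -> TaskResult | None:
    for _ in range(max_attempts):
        result = worker_agent(instruction)
        if watchdog_check(instruction, result):
            return result
    escalate_to_human(instruction, max_attempts)
    return None

print(run_with_watchdog("summarize the weekly sales report"))
```

The interesting design choice is the separation of duties: the agent that does the work never gets to declare its own success, which is what lets the system fail gracefully instead of silently.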
Lucidity attacks infrastructure automation with their "no-ops"
approach to cloud storage management. Their AutoScaler technology has
been eliminating manual disk provisioning for four years, but the new
Lumen platform tackles disk tiering-the complex decisions about which
storage tier different workloads should use.
One customer, a major US airline, automated over 7,500 provisioning
activities and saved $88,000 monthly while improving disk utilization
from 21% to 82%. That's not theoretical ROI-that's measurable
operational improvement from eliminating manual tasks.
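For a rough sense of what that utilization jump means in capacity terms, here's a quick back-of-envelope calculation. Only the two utilization percentages come from the briefing; the 100 TB of stored data is an illustrative assumption.

```python
# Back-of-envelope: what a 21% -> 82% utilization jump implies for provisioned
# capacity. The 100 TB figure is illustrative; only the percentages are sourced.

actual_data_tb = 100          # hypothetical amount of data actually stored
before_util, after_util = 0.21, 0.82

provisioned_before = actual_data_tb / before_util   # ~476 TB provisioned
provisioned_after = actual_data_tb / after_util     # ~122 TB provisioned

print(f"Provisioned before: {provisioned_before:.0f} TB")
print(f"Provisioned after:  {provisioned_after:.0f} TB")
print(f"Capacity reduction: {1 - provisioned_after / provisioned_before:.0%}")
```

At the same stored-data footprint, roughly three-quarters of the provisioned (and paid-for) capacity disappears, which is the kind of consolidation that can produce savings on the order of $88,000 a month.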
The broader implication? The "busy work tax" that research suggests
consumes 80% of knowledge worker time is finally addressable.
Organizations implementing these automation layers will see immediate
productivity gains while freeing their teams for higher-value
activities.
Data Architecture Gets a Complete Makeover
Some of the most intellectually compelling presentations came from
three companies fundamentally rethinking how organizations handle data:
Tabsdata, PuppyGraph, and Cohesity.
Tabsdata proposes something that sounds almost
radical: eliminating data pipelines entirely. Their "Pub/Sub for Tables"
approach lets domain experts-sales teams, finance departments, customer
success organizations-publish specific datasets directly. Data
consumers subscribe to these published tables and automatically receive
updates when new versions become available.
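To illustrate the shape of that model, here's a small, self-contained Python sketch of publish/subscribe over versioned tables. It's a conceptual illustration only, not Tabsdata's API; the class, table names, and data are made up.

```python
# Conceptual sketch of "Pub/Sub for Tables": publishers register new versions
# of a named table, subscribers are notified with each new version.
# Not Tabsdata's API - just the pattern.

from collections import defaultdict
from typing import Callable

Table = list[dict]  # a "table" here is simply rows of column -> value

class TableHub:
    def __init__(self) -> None:
        self.versions: dict[str, list[Table]] = defaultdict(list)
        self.subscribers: dict[str, list[Callable[[int, Table], None]]] = defaultdict(list)

    def publish(self, name: str, table: Table) -> int:
        """A domain team publishes a new version of its table."""
        self.versions[name].append(table)
        version = len(self.versions[name])
        for callback in self.subscribers[name]:
            callback(version, table)  # push the new version to consumers
        return version

    def subscribe(self, name: str, callback: Callable[[int, Table], None]) -> None:
        """A consumer asks to be notified whenever the table changes."""
        self.subscribers[name].append(callback)

hub = TableHub()
hub.subscribe("finance.invoices",
              lambda v, t: print(f"got invoices v{v}: {len(t)} rows"))
hub.publish("finance.invoices", [{"id": 1, "amount": 1200}, {"id": 2, "amount": 640}])
```

The point of the pattern is that the publishing team owns the table's meaning, and consumers get versioned updates without anyone writing a transformation pipeline in between.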
CEO Arvind Prabhakar's insight resonates: all those complex
transformations and joins that data engineers perform are essentially
trying to "recreate the reality that those systems are representing." If
source systems already represent business reality, why work so hard to
recreate it through pipeline transformations?
PuppyGraph eliminates another infrastructure
overhead: graph databases. Instead of forcing organizations to replicate
data into specialized graph systems, they provide graph analytics
capabilities directly on existing data warehouses. Their zero-ETL
approach means complex relationship analysis without the operational
complexity of maintaining another database.
The performance claims are compelling: 20-70x faster than Neo4j on
comparable queries, with the ability to handle 10-hop neighbor queries
across half a billion edges in under three seconds. But the operational
benefit might matter more-no additional infrastructure to manage, no
pipelines to maintain.
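As a rough illustration of what querying looks like without a separate graph database, here's a multi-hop traversal written with the Apache TinkerPop gremlinpython client, assuming a Gremlin-compatible endpoint. The connection URL, vertex label, and edge/property names are placeholders, not PuppyGraph specifics.

```python
# Rough illustration of a multi-hop neighbor query via the Apache TinkerPop
# gremlinpython client. Endpoint URL, labels, and property names are placeholders;
# consult PuppyGraph's own documentation for the exact connection details.

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().with_remote(conn)

# Count distinct accounts exactly 10 "transfers_to" hops from one account.
reachable = (
    g.V().has("account", "account_id", "A-1001")
     .repeat(__.out("transfers_to"))
     .times(10)
     .dedup()
     .count()
     .next()
)
print(f"accounts reachable at 10 hops: {reachable}")

conn.close()
```

The query itself is ordinary graph traversal syntax; the difference is that the vertices and edges are mapped onto tables already sitting in the warehouse rather than copied into a separate graph store.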
Cohesity transforms backup data from passive
disaster recovery into active business intelligence. Their Gaia platform
applies AI to data that's already being protected, turning backup
repositories into queryable knowledge bases. One customer reduced
support ticket resolution times by 40% by indexing their IT
documentation in backup storage and making it searchable through natural
language queries.
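The underlying idea is straightforward to sketch: index documents you already hold, then retrieve against free-text questions. The snippet below is not Gaia's implementation, just a toy retrieval example built on scikit-learn's TF-IDF with made-up documents and a made-up query.

```python
# Toy illustration of natural-language search over an existing document store.
# Not Cohesity Gaia's implementation - documents and query are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "vpn_reset.md": "Steps to reset the corporate VPN client after a password change.",
    "printer_setup.md": "How to add the third-floor printer on macOS and Windows.",
    "backup_restore.md": "Restoring a single mailbox from last night's backup snapshot.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs.values())

def search(question: str, top_k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the question and return the best.
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(zip(docs.keys(), scores), key=lambda x: x[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(search("how do I restore one user's mailbox from backup?"))
```

Production systems would use embeddings and a generative layer on top, but the value proposition is the same: the documents were already there in backup; the only new work is making them answerable.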
These approaches share a common theme: dramatically reducing the time
data analysts spend on preparation rather than analysis. Organizations
implementing these solutions could see faster time to insight while
simplifying their infrastructure.
Industry Fights Back Against Vendor Lock-In
The UALink Consortium represents something you don't see often in
technology: major competitors uniting around a common standard. With
founding members including AMD, Intel, Microsoft, AWS, Apple, and
Google, the consortium aims to standardize AI accelerator
interconnects-providing an alternative to Nvidia's proprietary NVLink.
The technical specifications are impressive: 800Gbps per port
bandwidth, support for up to 1,024 accelerators in a pod, and power
consumption that's one-third to one-half of comparable Ethernet
interfaces. More importantly, the standard leverages existing Ethernet
infrastructure, reducing implementation costs and simplifying adoption.
What makes this particularly interesting is the participation of
companies like AWS and Apple, which rarely join industry consortiums.
Their involvement signals the strategic importance they place on breaking Nvidia's
current 99% market share in AI accelerators.
The market implications extend beyond just competitive alternatives.
Standardized interconnects enable data centers to deploy a single
switching infrastructure that works with any UALink-compatible
accelerator. This separates accelerator choice from interconnect
infrastructure, creating competitive pressure that could drive
innovation and reduce costs across the entire stack.
With silicon expected by mid-2026, this initiative could strike the first real blow against Nvidia's current market dominance.
The Great Democratization Wave
A theme that cuts across multiple companies is making advanced
capabilities accessible beyond just hyperscalers and large enterprises.
This isn't just about reducing costs-it's about fundamentally changing
who can compete with advanced technology.
Phison's aiDAPTIV+ makes AI training affordable for universities and
small businesses that could never justify traditional GPU deployments.
PuppyGraph eliminates the infrastructure overhead that kept graph
analytics locked away from most organizations. Hunch makes sophisticated
automation accessible through natural language interfaces rather than
complex programming.
Even within infrastructure companies, this democratization trend
appears. Lucidity brings enterprise-grade storage management to smaller
cloud deployments. Cohesity transforms backup data into business
intelligence without requiring separate analytics platforms.
The historical pattern suggests this democratization could accelerate
innovation in unexpected ways. When advanced capabilities become
accessible to broader audiences, new use cases emerge that larger
organizations might never consider. Organizations that couldn't
previously afford these capabilities become new sources of competitive
pressure.
What This Means for Enterprise IT
Looking across these nine companies, several important shifts emerge
for enterprise technology leaders planning their 2025-2026 strategies.
Storage architecture becomes strategic rather than commodity
infrastructure. The traditional approach of treating storage as an
afterthought will fail in AI-heavy environments. Organizations need to
evaluate whether their current storage can actually support the GPU
investments they're planning.
Automation reaches knowledge work in measurable ways. Beyond factory
automation, AI will eliminate routine cognitive tasks across
white-collar roles. The organizations that implement these solutions
first will gain productivity advantages that compound over time.
Data pipeline complexity starts becoming optional. The 80% of time
data analysts spend on preparation could drop dramatically through
zero-ETL approaches and direct data publishing models. Faster time to
insight becomes a competitive advantage.
Open standards gain momentum as business risk mitigation. Proprietary
lock-in becomes a larger concern as alternatives mature. The UALink
Consortium specifically demonstrates how major technology buyers can
coordinate to create competitive alternatives.
The companies featured at this 62nd edition of The IT Press Tour aren't offering
incremental improvements-they're enabling fundamental changes in how
enterprises operate. The question for technology leaders isn't whether
these trends will impact their organizations, but whether they'll adapt
early enough to gain competitive advantages.
Sometimes the most important technology shifts happen not in grand
pronouncements, but in the quiet conversations at industry events where
practitioners share what's actually working. This year's IT Press Tour
revealed that quiet revolution is already underway.