Industry executives and experts share their predictions for 2024. Read them in this 16th annual VMblog.com series exclusive.
Trust Takes Root in Artificial Intelligence (and Beyond!)
By Gavin Ferris, CEO of lowRISC
2023 was clearly the year of AI, with Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly labeling it at once "the most powerful capability of our time" and the "most powerful weapon of our time."
While I feel some of the more extreme concerns about 'killer AI' are overblown, I am nevertheless in no doubt that the ever-deeper adoption of deep learning technology is creating genuine issues that must be addressed. As a result, in 2024 AI regulations will begin to come into sharper focus worldwide.
Indeed, this has already started to happen. For example, the Biden administration has published its executive order on AI, which calls for the National Institute of Standards and Technology (NIST) to develop a framework for ethical AI development and testing, and the US and the UK are working with more than a dozen countries to make AI secure by design.
However, delivering genuine security in AI, and earning public trust, requires more than strong design and development guidelines, important as both are. Specifically, secure-by-design AI platforms need to be anchored by a hardware root of trust (RoT) to protect and authenticate their models and data at runtime.
To see why that is, consider a parallel problem of trust: the increasingly popular concept of a software bill of materials (SBOM) to mitigate supply chain risks more generally. It's a great idea, but one that, if deployed at the application software level, relies heavily upon the integrity of the operating system (OS) it runs on to be effective - yet that OS remains a huge attack surface in itself. As a result, vendors are starting to anchor SBOM checks in a secure execution environment provided by a silicon root of trust (SiRoT) - a tiny 'computer within your computer', if you will, where things are much more tightly locked down and whose system measurements and controls can consequently be trusted to a much greater degree.
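To make the pattern concrete, here is a minimal sketch of what anchoring an SBOM check in a root of trust might look like. The names (RootOfTrust, attest_sbom, the device secret) are hypothetical and chosen for illustration, and a keyed HMAC stands in for the hardware-protected asymmetric attestation keys a real SiRoT would use - this is a sketch of the idea, not any vendor's API.

# Minimal sketch: SBOM measurement and attestation anchored in a RoT.
# All names here are hypothetical; a real SiRoT would use hardware-held
# asymmetric keys and a measured-boot chain, not this in-process HMAC.
import hashlib
import hmac
import json

class RootOfTrust:
    """Stand-in for the isolated 'computer within your computer'."""
    def __init__(self, device_secret: bytes):
        # In real silicon this key never leaves the hardware boundary.
        self._key = device_secret

    def measure(self, blob: bytes) -> bytes:
        # Hash the artifact inside the trusted boundary.
        return hashlib.sha256(blob).digest()

    def attest(self, measurement: bytes) -> bytes:
        # Sign the measurement so a remote verifier can trust it even if
        # the host OS is compromised.
        return hmac.new(self._key, measurement, hashlib.sha256).digest()

def attest_sbom(rot: RootOfTrust, sbom: dict) -> tuple:
    """Measure and attest a software bill of materials."""
    canonical = json.dumps(sbom, sort_keys=True).encode()
    measurement = rot.measure(canonical)
    signature = rot.attest(measurement)
    return measurement, signature

if __name__ == "__main__":
    rot = RootOfTrust(device_secret=b"example-device-unique-secret")
    sbom = {"component": "inference-runtime", "version": "1.2.3",
            "dependencies": ["libfoo 2.0", "libbar 0.9"]}
    digest, sig = attest_sbom(rot, sbom)
    print("SBOM digest:", digest.hex())
    print("Attestation:", sig.hex())

The point of the design is simply that the measurement and the key live inside the locked-down RoT environment, so the check no longer depends on the integrity of the much larger OS attack surface.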
In a similar manner, creating AI that is secure by design will, I believe, require an onboard, trusted hardware basis at runtime. This SiRoT can be leveraged to attest to the structure and weights of the models deployed (and potentially, to validate that these are appropriately licensed), to monitor the levels of compute resources used, to grant actuator and sensor access, and to ensure that data is handled safely and with appropriate confidentiality. This need will only become more pressing (and obvious) as 'edge compute' for AI becomes an increasingly prevalent mode of deployment, another trend I think will strongly accelerate in 2024.
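As a rough illustration of that runtime flow, the sketch below measures a model's architecture and weights against a provisioned manifest and gates actuator access on the result. The manifest format and function names are assumptions made for this example only, not the OpenTitan interface or any production design.

# Illustrative sketch of runtime model attestation by a silicon RoT.
# Manifest format, names, and gating logic are hypothetical assumptions.
import hashlib

def measure(blob: bytes) -> str:
    """Digest an artifact inside the trusted boundary."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def build_manifest(architecture: bytes, weights: bytes) -> dict:
    """Provisioning step: record known-good digests of the published model."""
    return {"architecture": measure(architecture), "weights": measure(weights)}

def attest_model(architecture: bytes, weights: bytes, manifest: dict) -> bool:
    """Runtime step: pass only if the deployed model matches the manifest."""
    return (measure(architecture) == manifest["architecture"]
            and measure(weights) == manifest["weights"])

def grant_actuator_access(attested: bool) -> None:
    """Gate sensor/actuator access on a successful attestation."""
    if not attested:
        raise PermissionError("model failed attestation; access denied")
    print("actuator access granted")

if __name__ == "__main__":
    arch, weights = b"layers: [conv, relu, dense]", b"\x00\x01\x02\x03"
    manifest = build_manifest(arch, weights)
    grant_actuator_access(attest_model(arch, weights, manifest))  # passes
    tampered = weights + b"\xff"
    print(attest_model(arch, tampered, manifest))                 # False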
Summing up, I forecast that these considerations, taken together with the increasingly tough cyber-liability regulations worldwide (e.g. NIS2 and CRA in Europe, the Biden-Harris Cybersecurity Strategy in the US, etc.), will find AI service providers and hardware vendors getting serious about integrating silicon root of trust technology into their offerings in 2024 - which can only be a good thing for everyone.
##
ABOUT THE AUTHOR
Dr. Ferris is a technologist and serial entrepreneur. Following his early career at DreamWorks SKG, he co-founded a number of startups, including the DSP/digital radio company RadioScape and the fintech business Crescent Technology. When the latter was acquired by Aspect Capital, a multi-billion-dollar systematic hedge fund, Dr. Ferris became its chief architect and, ultimately, its chief investment officer (CIO). After leaving Aspect, Gavin co-founded lowRISC CIC, the non-profit host of the OpenTitan project, where he currently serves as pro bono CEO. Gavin holds a computer science degree and a Ph.D. in AI from Cambridge University.