Industry executives and experts share their predictions for 2025. Read them in this 17th annual VMblog.com series exclusive.
As we head into 2025, O'Reilly's VP of Emerging Tech, Mike Loukides, shares his top
predictions on what technology professionals and IT leaders can expect in the
coming year:
Almost every application will incorporate AI, and it will be
essential for developers to acquire skills that will allow them to evaluate AI
API performance, navigate regulations, and safeguard against new and emerging
vulnerabilities.
Simon
Willison recently wrote that using LLMs effectively is all about controlling
context-particularly now that vendors are adding features like OpenAI's
"memory." We understand RAG; but Simon is saying that it's all RAG, even when
you're typing a prompt at the keyboard. Thinking about context will certainly
become a necessary skill.
The next
year will be less about the big LLMs (GPT and friends) and more about custom
applications for specific use cases (one example is Claire Vo's ChatPRD for
product managers). People will need to find the tools that are appropriate to
their jobs and learn how to use them-and that includes understanding how those
tools treat context.
As far as
technology skills: almost every application will incorporate AI. We are seeing
this already. So, every application developer will need to learn how to work
with AI-not as a researcher, but as a user of an API. They will need to learn
how to evaluate the AI's performance: is it giving correct responses? Is it
giving biased responses? Is latency a problem for users? Will operations be too
expensive? When do you try a different model, and how do you compare the
results obtained from different models? This is all relatively new territory;
everyone involved with the software will need to develop evaluation skills.
Regulation
is clearly coming to AI. Developers of AI products will need to understand
which regulations they are subject to and how to test whether their products
comply. It's likely that many companies will hand this to a specialized AI
compliance group, but almost everyone will need some training in regulatory
requirements.
Finally,
developers working with AI will need to understand how to build more secure
systems. AI security is unlike security
for more traditional applications; there are many new vulnerabilities,
including prompt injection, data poisoning, and many others. Security isn't
just for specialists; everyone will need skills to defend their AI applications
against attackers.
I don't
think MLOps or LLMOps will become a new specialty, but I do think that everyone
involved with operations will need to understand operations for systems that
incorporate AI. LLMs will affect every aspect of software operations. And AI
applications are significantly different from traditional
applications-primarily because they're probabilistic, not deterministic, and
the data is much more important than the source code. Operations staff will
need to acquire the skills needed to work with these new kinds of applications.
AI capabilities will transform the way learning platforms
currently personalize and deliver content for users, acting more as counselors
that build on pre-existing knowledge and gaps in knowledge to share true new
insights.
I've
always been disappointed by "personalization," whether in learning platforms or
elsewhere. We've all seen this. You've just bought a new Camera. You go to
Amazon to buy, I don't know, toilet paper, and you see a dozen recommendations
for cameras-the one thing you're not likely to buy. Recommendation systems for
learning platforms are no different; they tend to recommend what you already
know, not what you don't know but need to know.
What I'd
like to see is a platform that can ask, "What do you want to do?" and say, "To
do that, you need this...," taking into account what you already know-and perhaps
even taking into account what you think you know, but where your knowledge is
weak. "What do you want to do?" may be a current project, or it may be a career
goal. I'd like to see learning platforms become counselors rather than
pattern-matches that tell us to learn what we already know. AI can get us
there-if not all the way there, much closer.
The growing demand and increased use of AI will intensify the
demand for fact-checkers and data scientists.
Evaluation
will become a new specialty within software development.
We will
need more fact-checkers. The errors that AI makes are often very subtle and
hard to notice, especially since AI is very good at sounding convincing-and
since the errors AI is likely to make are unlike the mistakes that humans make.
Simon Willison recently did an experiment when he asked an AI to describe two
photos and posted both the photos and descriptions on his blog. The
descriptions were very detailed and really quite good-but there were
mistakes. And they weren't easy to
find-the only way to find them was to look very carefully at every detail. That is not a skill most people have. I certainly don't.
There are
already many data scientists, but we will need more. AI requires huge amounts
of data, and data scientists know how to collect, clean, test, and evaluate
that data. It's not surprising that we see increased use of content about data
engineering on our platform.
As organizations and individuals prepare for the future of work,
prompt engineering, retrieval-augmented generation (RAG), and implementing
intelligent agents will be essential skills to master.
We've
seen a huge increase in interest in topics like prompt engineering,
retrieval-augmented generation (RAG), and intelligent agents. Prompt
engineering and RAG are skills that can be mastered now-although what they mean
is constantly shifting. (Understanding context is something we wouldn't have
talked about a year ago.) In many ways, agents are still a research topic-but
people want to build agentic systems, and they will learn what they need to do
that.
More
generally, we've seen a decline in interest in content specifically about
ChatGPT and GPT, but a significant increase in more general content about
artificial intelligence, generative AI, and language models. This is healthy.
GPT is amazing, but our users are realizing that learning about GPT isn't
really the issue; it's taking a step back and understanding this new, weird
(Ethan Mollick's word) world of AI.
##
ABOUT THE AUTHOR
Mike Loukides, VP of Emerging Tech Content at O’Reilly Media
Mike Loukides is the vice president of emerging tech content at O’Reilly Media. He’s particularly interested in programming languages, Unix, AI, and system and network administration. Mike is the author of System Performance Tuning and a coauthor of Unix Power Tools and Ethics and Data Science. Most recently, he’s been writing about data and artificial intelligence, ethics, and the future of programming.