LexisNexis 2024 Predictions: AI in 2024 - Less hype, more focus on outcomes


Industry executives and experts share their predictions for 2024.  Read them in this 16th annual VMblog.com series exclusive.

AI in 2024: Less hype, more focus on outcomes

By Emili Budell-Rhodes, Director, Engineering Culture - LexisNexis Legal & Professional

AI, and in particular generative AI, dominated headlines in 2023 and it isn't going anywhere in 2024. The initial hype is giving way to a more nuanced conversation aimed at tangible outcomes, as awareness and understanding of those new capabilities - and their opportunities and challenges - have grown.

AI is here to stay, and it means business.

Trend #1: Transformation beyond adoption

The focus of 2023 was mostly on adopting AI, with organizations looking for ways to apply AI within their existing operations. The main motivation has been to learn about the capabilities AI solutions offer in a variety of contexts, and to upskill. There has been a real sense of urgency behind AI adoption in order to stay competitive.

While this steep learning and adoption curve is likely to continue in 2024, it will evolve into transformation for some players across markets. After an initial phase of experimentation, data-driven value stream creation will take the front seat: enter AI ROI.

There is already a significant amount of activity and feedback from an internal productivity standpoint, with people using AI in their workflows to automate repetitive tasks and relying on AI assistants to get a head start on completing work. 2024 will see some organizations take this a step further and embrace AI as a central element of their digital transformation. An "AI first" mindset will impact both the future of work and commercial product offerings.

Trend #2: Problem solving over task completion

Related to the point on transformation is a major shift in how AI capabilities are utilized, and how these solutions will be increasingly leveraged to enhance decision-making. So far, most of the emphasis has been on narrowly defined task assistance. What we're seeing with the evolution of prompt engineering is a shift towards more intentionality, with users spelling out intended outcomes.

The shift to a more deliberate, outcome-focused use of AI will create interesting dilemmas around the notion of appropriate use. Using AI capabilities to enhance existing work patterns according to widely accepted standards is very different from coming up with "AI-first" use cases that are novel, and where the benefits and possible unintended consequences are yet to be understood. This is where AI impact assessments and responsible AI guidelines will become invaluable tools in formulating the value strategy and risk profile of those use cases.

Trend #3: Federated AI ecosystems

The benefits of AI depend on the context within which the technology is used, the wider (in most cases societal) system it is impacting, and what specific problems it is set to solve. This is why thorough and iterative use case definition and refinement is so key to creating value.  

The real power of AI for organizations looking to transform will depend on their ability to create their own federated AI ecosystems where a multitude of AI agents (leveraging one or more models) are tuned to focus on specific use cases in a way that is interoperable. These agents will produce outputs in a variety of ways depending on what the user is looking to achieve - either executing specific, individual use cases (tasks), or complementing each other through a combination of outputs. For example, within a single prompt a user could have one AI agent create a summary of several documents, then have another agent send it by email to a recipient with a cover note, and have another agent monitor for a response to that email, sending a notification to the user when it arrives. These ecosystems will evolve over time, learning from past prompts and generating new output combinations. We're already starting to see examples of this being offered - the differentiator will happen when organizations embrace that ecosystem approach for themselves.  
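To make that kind of agent chaining more concrete, here is a minimal orchestration sketch in Python. Everything in it is a hypothetical placeholder - the agent names and the stubbed summarization, email and monitoring logic are illustrative assumptions rather than any particular product's API. The point is only to show how single-purpose agents could be composed so that one prompt triggers a summary, an email with a cover note, and a reply notification.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentResult:
    agent: str
    output: str

def summarize_agent(documents: List[str]) -> AgentResult:
    # Placeholder "summary": a real agent would call one or more tuned models.
    summary = " | ".join(doc[:40] for doc in documents)
    return AgentResult("summarizer", summary)

def email_agent(summary: str, recipient: str) -> AgentResult:
    # Placeholder: a real agent would draft a cover note and call an email service.
    draft = f"To: {recipient}\nCover note: please find the summary below.\n{summary}"
    return AgentResult("emailer", draft)

def monitor_agent(on_reply: Callable[[str], None]) -> AgentResult:
    # Placeholder: a real agent would poll or subscribe to the mailbox;
    # here we simulate an incoming reply immediately.
    on_reply("Reply received: thanks, looks good.")
    return AgentResult("monitor", "watching inbox")

def run_pipeline(documents: List[str], recipient: str) -> List[AgentResult]:
    """Chain the agents to satisfy a single user prompt end to end."""
    results = [summarize_agent(documents)]
    results.append(email_agent(results[-1].output, recipient))
    results.append(monitor_agent(lambda msg: print(f"[notification] {msg}")))
    return results

if __name__ == "__main__":
    outputs = run_pipeline(["Contract draft v3 ...", "Counsel notes ..."], "partner@example.com")
    for r in outputs:
        print(f"{r.agent}: {r.output[:60]!r}")
```

In a real federated ecosystem, each stub would wrap its own model or service and expose an interoperable interface, so new agents and new output combinations can be added without rewriting the pipeline.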

The more complex these ecosystems become, the more important transparency will be for assessing accuracy, performance and other quality metrics, so as to ensure the system remains fit for purpose. Security and privacy measures will need to take center stage, as the risks of threat propagation and unintended personal data disclosure will increase exponentially in these interconnected, increasingly autonomous systems.

Trend #4: Regulation creating a more even playing field

Discussions around AI regulation have been going on for some time - whether they are about new laws and policies, or whether they're focused on applying existing laws such as equality legislation to new technology.

2024 will almost certainly see new AI legislation come into force. The most significant aspect of this development is the creation of common benchmarks and standards, which will create more clarity and certainty around what's expected in both the creation and use of AI technologies. This in turn should have a positive effect on the market, because it will even the playing field with the same set of rules for everyone and create an opportunity for more informed choices at both the procurement and consumer level. Now is a great time for technology providers to look into their own governance standards, particularly around accountability and transparency, and to some extent auditability, of their processes, systems and tools.

Trend #5: ESG catching up to AI  

Before the world turned its attention to generative AI with the advent of ChatGPT in late 2022, another global trend was commanding significant attention from businesses across global markets: ESG. The acronym stands for "environmental, social and governance" investing, and builds on decades of practitioner experience, accounting frameworks and reporting data from the corporate responsibility discipline.

ESG and the future of AI are inextricably linked, and 2024 is likely to see the emergence of a structured approach to accounting for AI creation and use in light of established ESG frameworks. One of the most salient areas revolves around carbon reporting, which is a mature practice in the corporate governance field and a major concern in the AI space.

The environmental footprint of training and using generative AI is significant and has already generated debate in light of the climate crisis. There is also a significant cost implication tied to the energy required. However, there is no established, recognized methodology to date by which AI providers and users can assess their energy usage and resulting environmental impact, and find effective ways to reduce it.

While other ESG topics, like responsible supply chain management, are highly relevant and important, the carbon piece is the most likely to see progress in 2024.

Less hype, more focus on outcomes

These five trends indicate that intentionality and impact are going to be two key themes in the AI journey for many organizations as they build on this year's experimentation and learnings. Shifting our attention to strategic value creation opens up an immense opportunity to harness the power of AI for solving challenging problems and creating better outcomes, for everyone.

##

ABOUT THE AUTHOR


Emili is Director for Engineering Culture at LexisNexis Legal & Professional (LNLP), and the main author of RELX's Responsible AI Principles. With a background in the social sciences and years of experience in the non-profit and corporate responsibility sectors, she is a purpose-driven innovator passionate about advancing engineering excellence at LNLP while creating inclusive, responsible technology. Emili advocates for and contributes to shaping the enterprise-level technology strategy, standards and practices that help teams raise the bar, every day. As part of this wider role, Emili navigates the competitive advantages of responsible AI in LNLP's business strategy and risk-based governance framework, and creates and implements guidelines for utilizing generative AI within the company's ecosystem. She also helps employees guide their actions beyond the products they develop, since those actions define LNLP as a company.

Published Tuesday, November 28, 2023 7:31 AM by David Marshall