Industry executives and experts share their predictions for 2022. Read them in this 14th annual VMblog.com series exclusive.
Responsible AI for a Better Future
By Alicia Frame, Director of Product Management for Data Science at Neo4j
As technology continues to evolve, so does our responsibility to use it ethically. At my job, I'm constantly thinking about ways we can leverage our innovations to leap forward into a better future. Although daunting, societal improvement is possible when we weigh both the benefits and the consequences of how we use technology, and this is particularly true for AI.
AI's rapid growth has propelled us into an exciting and still unknown future. As creators and users of AI, we're responsible for guiding the development and application of the technology in ways that fit our social values: in particular, increasing accountability, fairness, and public trust. Heading into the new year, here's what I expect to see as tech leaders begin to consider how to use AI more responsibly.
Industry/technology predictions
Behavioral predictions will only continue to advance as more models are scored in real time, using live data, to interpret images, speech, and text. Even though growth in real-time machine learning has paved the way for functionally autonomous vehicles and voice assistants that can deliver intelligent responses, the complicated behavioral context behind these models is lagging. Next year, we'll start to see organizations notice these diminishing returns and seek to add back a human element of review and responsibility. From governmental inquiries to new solutions from startups, ethical AI is on everyone's minds. Even as we see greater investment in more explainable AI and ML, it's still an open question whether this is where big tech wants to invest time and resources.
While we're seeing increased demand for responsible AI, we're also seeing explosive growth in low-code and no-code solutions for machine learning. Driven by a skills crunch, these tools help domain experts build best-in-class solutions without the need for deep data science knowledge. While this trend is particularly exciting for the democratization of data science, removing data science experts from the loop will only increase the need for ethical guardrails that are easy to access and implement.
Societal/use case predictions
Saying that responsible AI is important is easy; actually doing AI responsibly is hard. We'll see widespread adoption of more explainable machine learning techniques, where end users can better see the exact data that went into drawing conclusions. Generating predictions fast enough to use in production is already possible and commonplace, but the framework for understanding those predictions is still missing. Admitting the problem is only half the battle when it comes to technology ethics; the next step is blueprinting the solutions and implementing them.
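To make that concrete, here is a minimal sketch of one common way to expose the data behind a model's conclusions. It's my own illustration rather than any particular product's approach: it uses scikit-learn's permutation importance on a toy dataset, and every modeling choice in it is an assumption made for demonstration purposes.

```python
# Minimal sketch: surface which inputs drove a model's predictions so a human
# reviewer can sanity-check them. Dataset and model choices are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? Large drops point to the data the model relied on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: accuracy drops by {drop:.3f} when shuffled")
```

Reporting explanations like these alongside each model's output is one practical way to give end users, and reviewers, a view into what the model actually used.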
One facet of responsible AI that's often forgotten is the energy that goes into training all those predictive models. As complicated deep learning models have gone mainstream, the billions of kilowatt-hours they consume, and how unsustainable that is, are often overlooked. Rethinking our current performance gains to be more ecologically friendly might mean using slightly less accurate models that are nevertheless cheaper to train, relocating server farms to places that run on renewable energy, or even just to places that require less air conditioning. Regardless, as the climate crisis worsens, we're going to need to address the fact that training a single large NLP transformer model can produce more CO2 emissions than 20 people going about their daily lives for a year (Strubell et al., 2019).
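For a rough sense of how these training costs are usually reasoned about, the sketch below walks through the standard back-of-envelope estimate: power draw, times training time, times datacenter overhead, times grid carbon intensity. Every number in it is an assumption chosen for illustration, not a measurement from Strubell et al.

```python
# Back-of-envelope estimate of training emissions (all figures below are
# illustrative assumptions, not measurements):
#   energy    = accelerators x power draw x hours x datacenter overhead (PUE)
#   emissions = energy x grid carbon intensity
gpu_count = 8               # hypothetical training cluster size
gpu_power_kw = 0.3          # ~300 W per accelerator (assumed)
training_hours = 24 * 14    # two weeks of training (assumed)
pue = 1.5                   # datacenter power usage effectiveness (assumed)
carbon_kg_per_kwh = 0.4     # rough grid average; varies widely by region

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2_kg = energy_kwh * carbon_kg_per_kwh
print(f"~{energy_kwh:,.0f} kWh consumed, ~{co2_kg:,.0f} kg CO2e emitted")

# The same job on a low-carbon grid (say 0.05 kg/kWh) emits roughly 8x less,
# which is why where and when we train matters as much as what we train.
```

Even this crude arithmetic makes the trade-offs visible: smaller models, shorter training runs, and greener grids all show up directly in the final number.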
Against all odds, the developer community is equipped to overcome the challenges that lie ahead of us in the new year. The incredible work being done by developers in open source lays the groundwork for solving critical real-world problems like tackling climate change and expanding technology ethically.
##
ABOUT THE AUTHOR
Alicia Frame is the Director of Product Management for Data Science at Neo4j, where she works on building the world’s first enterprise-ready data science platform for graph. She earned her Ph.D. in Computational Biology from the University of North Carolina at Chapel Hill and a B.S. in Biology and Mathematics from the College of William and Mary in Virginia, and she has over 10 years of experience in enterprise data science at BenevolentAI, Dow AgroSciences, and the EPA.