By Ed Watal,
Founder & Principal - Intellibus
It's clear that people have come to appreciate
the power of artificial intelligence. Consider personalized recommendations - a
service driven by AI's analysis of shopping history: 71 percent of online
shoppers expect it to be offered, and 76 percent say they get frustrated
with businesses that don't provide it.
Yet people are also concerned about how
companies are handling the data used to produce those recommendations. A recent
survey shows that 70 percent of people who have heard of AI have
"little to no trust" that companies are making responsible decisions about how
they use personal data.
This tension between wanting the advantages of
AI and fearing its disadvantages is central to the ongoing public debate over
the expanding use of the technology. As a result, the business world faces the
challenge of developing effective protocols that preserve data privacy while
still giving AI access to the data it needs to function.
The new era of data collection
Collecting data unique to a customer
is common in the business world. Processes that gather and store a buyer's
personal information - including identifying details, payment information,
and buying history - streamline transactions, and most consumers willingly
accept them.
AI, however, has increased the stakes. By
giving businesses the power to analyze a greater volume of data, it has
inspired the collection of broader categories of data. Businesses that know
what potential customers have searched for and the behavior they displayed
while searching for it, for example, can fine-tune their efforts to more
effectively draw people through their sales funnels.
Another part of the drive to collect more data
flows from the need to train AI models, since bringing AI to
life and nurturing its development requires enormous amounts of data. Businesses can
repurpose the customer and activity data they already collect for AI training.
The key concerns surrounding data privacy
Data security is the primary concern that has
surfaced as AI has prompted increased data collection. As the volume of data a
company holds grows, so does its attack surface, raising the
security risk. More volume also typically means more complexity, which demands
security measures that are harder to establish and maintain.
A lack of transparency and explainability is another key
concern related to data privacy. AI suffers from what has been described as a
"black box" problem: there is little visibility into
how a model makes the connections that translate data inputs
into data outputs.
This lack of transparency makes it difficult
to determine exactly how personal data is being used by AI platforms.
Consequently, it is difficult to establish parameters for data usage and to
hold organizations accountable for operating within those parameters.
The emergence of new security protocols
As the use of AI has expanded, organizations
have begun to experiment with several new approaches to the unique
security issues they face. These include data minimization,
which limits the data collected and stored for AI development
to only what is strictly necessary, and federated learning, which trains
models where the data already resides - on local devices or servers - so that
only model updates, rather than raw data, leave those machines, reducing the
potential for data breaches. Homomorphic encryption, which allows analysis
to be performed directly on encrypted data without ever decrypting it, is
another approach with the potential to increase the security of personal data.
The brief sketches below illustrate each approach.
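
Data minimization is the simplest to picture. In the minimal Python sketch below - the field names and records are hypothetical - only the fields a recommendation model actually needs are stored, and the customer identifier is replaced with a one-way hash:

import hashlib

# Hypothetical raw record as it might arrive from a checkout flow.
raw_record = {
    "customer_id": "cust-48213",
    "email": "shopper@example.com",
    "credit_card": "4111-1111-1111-1111",
    "items_viewed": ["running shoes", "insoles"],
    "purchase_total": 89.97,
}

# The only fields the recommendation model actually needs.
FIELDS_NEEDED = {"items_viewed", "purchase_total"}

def minimize(record):
    """Keep only needed fields and pseudonymize the customer ID."""
    kept = {k: v for k, v in record.items() if k in FIELDS_NEEDED}
    # One-way hash lets records be linked without storing the raw ID.
    kept["customer_ref"] = hashlib.sha256(
        record["customer_id"].encode()
    ).hexdigest()[:16]
    return kept

print(minimize(raw_record))
# Email and card number never reach the training store.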
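Federated learning can be sketched just as briefly. The toy version of federated averaging below fits a one-parameter model to made-up client data; production systems use frameworks such as TensorFlow Federated, but the core loop is the same - each client trains locally, and only the resulting parameters, never the raw data, are sent back to be averaged:

import random

# Each client's data stays local: (x, y) pairs with y ~ 3.0 * x plus noise.
random.seed(0)
clients = [
    [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
    for _ in range(4)
]

def local_train(w, data, lr=0.01, epochs=20):
    """Run gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(10):
    # Clients train locally; only updated weights leave each device.
    local_weights = [local_train(w_global, data) for data in clients]
    w_global = sum(local_weights) / len(local_weights)

print(f"Global weight after federated averaging: {w_global:.3f}")  # ~3.0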
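Homomorphic encryption is the most mathematically involved of the three, but its defining property - arithmetic performed on ciphertexts carries through to the hidden plaintexts - can be demonstrated with a toy version of the additively homomorphic Paillier cryptosystem. The primes below are far too small to be secure and exist only to keep the arithmetic visible; real systems rely on vetted libraries such as Microsoft SEAL or python-paillier:

import math
import random

# Toy Paillier keypair (Python 3.9+). Real keys use primes of 1024+ bits.
p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse

def encrypt(m):
    """Encrypt an integer m < n with a fresh random blinding factor."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(42), encrypt(17)
# Multiplying ciphertexts adds the hidden plaintexts: no decryption needed.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 59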
For consumers, it is more important than ever to understand what types of data
are being collected and for what purposes they will be used. The "terms of use" agreements that many people
scroll through without truly digesting are the most likely place for
organizations to explain how they will use data, as a recent article reporting on updates to Zoom's data
policy revealed. Agreeing to a platform's terms could mean agreeing
to an elevated risk of unauthorized data exposure.
Currently, both the potential value of AI and
the risks it brings to the realm of personal data security are being explored.
The challenge for consumers, organizations, and regulators is finding a balance
that allows for greater convenience with limited potential for abuse.
##
ABOUT THE AUTHOR
Ed Watal is an AI thought leader and technology
investor. One of his key projects is BigParser (an ethical AI platform
and data commons for the world). He is also the founder of Intellibus, an Inc. 5000 "Top 100 Fastest Growing Software Firm" in the USA, and
the lead faculty of AI Masterclass - a joint operation between NYU SPS and
Intellibus. Forbes Books is collaborating with Ed on a seminal book on our AI
future. Board members and C-level executives at the world's largest financial
institutions rely on him for strategic transformational advice. Ed has been
featured on Fox News, QR Calgary Radio, and Medical Device
News.