AI has become mainstream; how companies can build transparent AI ethics
By Sam Babic, chief innovation officer at Hyland

Tech behemoths have repeatedly called the race: Artificial intelligence (AI) has officially entered the mainstream, and it's done so in resounding fashion. AI is, quite literally, everywhere.

A PwC survey found that more than half of respondents accelerated AI efforts due to COVID-19, with nearly 90 percent indicating they view AI as a mainstream technology. Similarly, an IDC report projects that AI system spending will grow by 140 percent by 2025 - on top of the massive growth the technology has already experienced.

Mainstream indeed, and of course when your favorite technology - like your favorite band - goes mainstream, there's good and bad that comes along with it.

The good news? Among other benefits, developing comfort with, and embracing, AI can lead to new products and services while allowing employees to focus on more strategic projects and clear their daily schedules of mundane, repetitive tasks.

The bad news? AI introduces many potential risks to an enterprise, from security issues to bias.

There's more good news, though: Companies absolutely can build their AI with transparent ethics, eliminating many of the risks that have given so many organizations pause when implementing AI. Here's how:

Ensure transparency and proper privacy measures are in place: Data security is a topic that's on everyone's minds right now - from the C-suite and cybersecurity teams to marketing and sales teams. And when capturing data that informs AI models, security and privacy should be at the forefront, too.

But how? Start with designing and deploying solutions that have encryption and access control features, then move on to enabling consumers to choose how personal data is collected, stored and used through settings that are clear and accessible.
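As a minimal sketch of what those two measures can look like in code, the snippet below encrypts a personal record at rest with the open-source cryptography package and refuses to release the data for any purpose the consumer hasn't consented to. The record fields, the consent flags and the key handling are illustrative assumptions, not a reference to any particular product.

    # Minimal sketch: encrypt personal data at rest and honor consent flags.
    # Requires the open-source "cryptography" package (pip install cryptography).
    # Field names and the consent model are illustrative assumptions.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in production, load this from a key vault
    cipher = Fernet(key)

    record = {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "consent": {"analytics": False, "support": True},
    }

    # Encrypt before writing anywhere: storage only ever sees ciphertext.
    token = cipher.encrypt(json.dumps(record).encode("utf-8"))

    def read_for_purpose(token, purpose):
        """Decrypt only if the consumer consented to this use of the data."""
        data = json.loads(cipher.decrypt(token))
        return data if data["consent"].get(purpose, False) else None

    print(read_for_purpose(token, "analytics"))  # None: no consent given
    print(read_for_purpose(token, "support"))    # full record: consented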

For transparency, companies can adopt and communicate policies that are clear about who is training and accessing models, how data will be used and for what purpose.

Avoid data bias: AI models often rely on enormous volumes of information, and humans play a critical role in training those models by setting parameters and by filtering and curating data. No matter how neutral humans attempt to be, biases can come into play.

It's therefore important to assess those parameters to ensure the technologists building the AI algorithm are not introducing bias into the process. Humans create AI models, feed them, train them and ultimately interpret the resulting data - actions that may unwittingly be influenced by their beliefs, backgrounds or other environmental factors.

There are several strategies to avoid bias in models. For supervised models - where humans have a strong influence on the data - ensure the group of stakeholders preparing the dataset is diverse and has received bias training. It's also important to use the right training dataset: Machine learning is only as good as its training data, which should replicate real-world scenarios with proper demographic distributions and not contain human predispositions. Finally, monitor the models against real-world performance so they can be tweaked if biases are detected.
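One lightweight way to put the demographic-distribution point into practice is a pre-training audit that compares the makeup of the training set against the population the model will serve. In the sketch below, the group column, the reference shares and the 5-point tolerance are all hypothetical placeholders.

    # Sketch: flag demographic skew in training data before a model sees it.
    # The "group" column, reference shares and tolerance are placeholders.
    import pandas as pd

    train = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})

    # Share each group holds in the real-world population the model serves.
    reference = {"A": 0.60, "B": 0.30, "C": 0.10}

    observed = train["group"].value_counts(normalize=True)
    for group, expected in reference.items():
        gap = observed.get(group, 0.0) - expected
        if abs(gap) > 0.05:  # arbitrary 5-point tolerance for illustration
            print(f"Group {group}: {observed.get(group, 0.0):.0%} of training "
                  f"data vs. {expected:.0%} expected ({gap:+.0%} gap)")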

Remember: Models can be complex and have hundreds or even thousands of variables - each of which may introduce bias in imperceptible ways. Checks and balances are vital to avoid bias when designing your models.
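Those checks shouldn't stop at design time. As one illustration of post-deployment monitoring, the sketch below computes a simple demographic-parity gap - the difference in positive-outcome rates between groups - over a batch of predictions. The field names and alert threshold are assumptions, and real deployments would draw on richer fairness metrics.

    # Sketch: watch for positive-outcome rates drifting apart across groups
    # (demographic parity). Field names and the threshold are assumptions.
    import pandas as pd

    preds = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })

    rates = preds.groupby("group")["approved"].mean()
    gap = rates.max() - rates.min()
    print(rates.to_string())
    if gap > 0.20:  # illustrative alert threshold
        print(f"Parity gap of {gap:.0%} exceeds threshold; review the model")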

Check yourself (and your data) frequently: Increasingly, AI use cases are dealing with more than just marketing intelligence. It's great that Facebook and other social media platforms can use AI-based learning to target you with ads for shoes. But healthcare providers and government officials are developing machine learning models that impact daily life - sometimes in literal life-and-death situations.

Testing your model is key: Doing so can help you avoid costly mistakes and life-changing impacts. Consider a 2020 example from the UK, where thousands of COVID-19 case records were accidentally excluded from modeling data - or Zillow's widely publicized algorithm error, which led to miscalculations in home purchase prices and subsequent layoffs at the housing giant.
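Both failures lend themselves to unglamorous, cheap tests: a row-count sanity check catches silently dropped records, and a holdout error threshold catches a price model whose predictions have drifted. The sketch below shows both patterns; the row counts, tolerance and sample values are placeholder assumptions, not a real pipeline.

    # Sketch: two cheap pre-release tests for a data pipeline and a
    # price-prediction model. All numbers here are placeholder assumptions.
    EXPECTED_ROWS = 10_000

    def check_ingest(rows):
        # A silent drop (e.g., a file-format row limit) shows up as a gap
        # between how many records arrived and how many were expected.
        if len(rows) != EXPECTED_ROWS:
            raise RuntimeError(f"Expected {EXPECTED_ROWS} rows, got {len(rows)}")

    def check_holdout_error(y_true, y_pred):
        # Mean absolute percentage error on a holdout set vs. a tolerance.
        mape = sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)
        if mape > 0.10:  # illustrative 10 percent ceiling
            raise RuntimeError(f"Holdout MAPE {mape:.1%} exceeds tolerance")

    check_ingest(list(range(10_000)))                           # passes
    check_holdout_error([300_000, 410_000], [285_000, 402_000]) # passes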

Relevant data also strongly impacts ethical AI practices and the correctness of your models. The world is changing fast, and what was true and relevant a year ago - or even a week ago - may no longer be relevant to your AI model. If you're building a post-pandemic model to identify warning signs for mental health concerns, for instance, you'd be ill-advised to create and train the model on data from 2019. The world is a different place, and so are the factors currently impacting mental health. The data you use must be timely to be relevant and useful.
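A recency window is one simple, if blunt, way to enforce timeliness. The sketch below keeps only records newer than 18 months at a fixed as-of date; the cutoff, column names and dates are arbitrary illustrations, and the right window depends entirely on the domain.

    # Sketch: drop stale records before training. The 18-month window and
    # the column names are illustrative assumptions, not a recommendation.
    import pandas as pd

    data = pd.DataFrame({
        "collected_at": pd.to_datetime(["2019-06-01", "2022-01-15", "2022-08-30"]),
        "signal": [0.2, 0.7, 0.9],
    })

    as_of = pd.Timestamp("2022-09-01")          # evaluation date
    cutoff = as_of - pd.DateOffset(months=18)   # keep the last 18 months
    fresh = data[data["collected_at"] >= cutoff]
    print(f"Kept {len(fresh)} of {len(data)} records newer than {cutoff.date()}")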

If the data are old or irrelevant, the model won't achieve the desired outcome - its results will be ineffective at best and life-altering at worst. Keeping this guidance in mind will improve AI relevance, increase transparency, bolster trust and ultimately lead you down the path to more ethical AI.

##

ABOUT THE AUTHOR


Sam Babic is Chief Innovation Officer at Hyland. In that role, he is responsible for driving enterprise innovation by exploring business opportunities and emerging technologies to expand the company's product portfolio and accelerate delivery of differentiated solutions to its global customers.

Published Tuesday, September 27, 2022 7:31 AM by David Marshall