Virtualization Technology News and Information
Back to Basics: Artificial Intelligence and Inference vs. Prediction

Written by Shahrokh Shahidzadeh, CEO at Acceptto

It's estimated that artificial intelligence (AI) will be a $47 billion industry by next year, so it's no surprise that AI is a hot topic among CISOs and IT security professionals. But for many, "AI" is a term thrown around without a complete understanding of what it means. There is still a lot to learn about artificial intelligence, and even the basic concepts still cause confusion.

A helpful way of grasping the concept is to describe AI as the capability of a machine to imitate intelligent human behavior. It is both a branch of computer science and a kind of computer system. What makes such a system special is its ability to perform tasks that usually require human intelligence, such as decision-making, translation, and visual perception.

Many people don't realize that the concept of AI has been around for a long time. Forms of AI appear as far back as Greek mythology, in stories of 'mechanical men' described as mimicking human behavior. Similarly, some of the earliest engineers understood their job, in part, as an attempt to create mechanical brains.

Our understanding of technology and neuroscience has advanced by leaps and bounds since those early days, and the concept of what actually constitutes AI has changed drastically with it. Today, AI enables genuinely human-like machine interaction: machines are moving toward the ability to connect data points, understand requests, and even draw conclusions.

Inference vs. Prediction

One basic way of staying ahead of the curve in understanding AI is to grasp the difference between inference and prediction, two terms people often confuse. While the differences are subtle, they matter enormously when deciding who should be accessing your information infrastructure, when, and how. CISOs understand that it takes only one bad actor gaining access to their information assets to cause millions in damage. How an identity authentication solution decides whether a user is truly who they claim to be, rather than a bad actor impersonating a valid credential, can therefore mean the difference between safety and costly remediation of cyber damage.

What Is Inference?

Inference is simply a way of asking yourself questions in order to reach a conclusion, but without always being able to confirm the results by the end of the situation.

According to a video from the Johns Hopkins University course on "Managing Data Analysis," the goals of inferential questions include:

  • Identify an association between an outcome and a key predictor while adjusting for confounders
  • Focus on a single key predictor, or a small number of them
  • Run sensitivity analyses to check that the associations hold

It may be easier to understand inference by looking at why we need to make inferences in the first place:

  • Inferences help the source algorithm understand things the target wants them to know but that do not directly relate to the situation. As it relates to artificial intelligence and machine learning (AIML) authentication, this is the identification of cyber applications and associated hardware.
  • Inferences help the source algorithm to understand behaviors, what they may have done in the past and what they may do next. As it relates to AIML authentication, this is the cataloging of behavior patterns with those cyber applications and associated hardware.
  • Inferences help the source algorithm draw logical conclusions about what is happening. As it relates to AIML authentication, this is the determination of whether or not the credential being used is actually the one intended for use based on previous behaviors.
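As a rough illustration of the third point, an authentication engine might infer whether a login attempt is consistent with the behavior patterns previously cataloged for a credential. The following sketch is purely hypothetical (the profile fields, names, and scoring are illustrative, not Acceptto's algorithm):

```python
# Hypothetical sketch: inferring whether a login attempt matches a credential's
# cataloged behavior patterns. All names and fields here are illustrative.

# Behavior patterns previously cataloged for a credential.
known_profile = {
    "devices": {"laptop-7f3a", "phone-91c2"},
    "locations": {"Portland", "Seattle"},
    "login_hours": range(7, 19),  # typically logs in between 07:00 and 18:59
}

def infer_consistency(attempt: dict, profile: dict) -> float:
    """Return a 0..1 score estimating how well the attempt fits the profile."""
    signals = [
        attempt["device"] in profile["devices"],
        attempt["location"] in profile["locations"],
        attempt["hour"] in profile["login_hours"],
    ]
    return sum(signals) / len(signals)

# A familiar device and location during normal hours scores high...
normal = {"device": "laptop-7f3a", "location": "Portland", "hour": 9}
# ...while an unknown device from a new location at 3 AM scores low.
suspicious = {"device": "desktop-0000", "location": "Kyiv", "hour": 3}

print(infer_consistency(normal, known_profile))      # 1.0
print(infer_consistency(suspicious, known_profile))  # 0.0
```

Note that the score is only a logical conclusion drawn from past behavior; nothing in it is confirmed against the eventual outcome, which is exactly the limitation the next section addresses.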

Inferences alone aren't an adequate method of determining immutable identity for cybersecurity authentication. An effective solution will also understand how to make predictions.

What Is Prediction?

Predictions are simply a way of asking yourself what will happen next and then confirming your thoughts by the end of the situation.

Going back to the Johns Hopkins University video, the goals of predictive questions include:

  • Develop a model that best predicts the outcome
  • Use all available information
  • No predictions favored over the others
  • Little focus on mechanism

Ultimately, we make and confirm predictions to better understand the complexity and entirety of the situation. As it relates to identity authentication, this means building up a knowledge base of accurate inferences, and learning from inaccurate ones, to better predict immutable identity in the future with less drag (need for further authentication).
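The confirmation loop described above can be sketched in a few lines. This is a hypothetical illustration, not the vendor's actual method: predictions are checked against real outcomes, and extra authentication "drag" is only dropped once the model has earned trust.

```python
# Hypothetical sketch: confirming predictions against outcomes to reduce
# authentication "drag". All names, thresholds, and logic are illustrative.

class PredictiveAuthenticator:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def predict_legitimate(self, consistency_score: float) -> bool:
        """Predict legitimacy from an inference score (e.g. behavior match)."""
        return consistency_score >= 0.5

    def confirm(self, prediction: bool, actual_legitimate: bool) -> None:
        """Unlike inference alone, a prediction is checked against the outcome."""
        self.total += 1
        if prediction == actual_legitimate:
            self.correct += 1

    def needs_step_up(self) -> bool:
        """Require extra authentication (drag) until predictions prove reliable."""
        if self.total < 5:
            return True  # too little confirmed evidence yet
        return self.correct / self.total < 0.9

auth = PredictiveAuthenticator()
# Each tuple: (inference score for the attempt, whether it was truly legitimate).
for score, actual in [(0.9, True), (0.8, True), (0.1, False), (0.95, True), (0.7, True)]:
    auth.confirm(auth.predict_legitimate(score), actual)

print(auth.needs_step_up())  # False: all 5 predictions confirmed, so less drag
```

The design choice worth noticing is that the knowledge base grows from confirmed outcomes: inaccurate predictions lower the accuracy ratio and reinstate step-up authentication, while accurate ones reduce friction for the legitimate user.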

Though our understanding of AI still has gaps, it has become such an impressive and widely used part of today's technology that it's best to start learning now. AI is the future, and it only makes sense to build skill in this arena. Its implementations open up many possibilities, and its benefits span industry after industry.


About the Author

Shahrokh Shahidzadeh 

Shahrokh Shahidzadeh leads a team of technologists, driving a paradigm shift in cybersecurity through Acceptto's Cognitive Continuous Authentication™. Shahrokh is a seasoned technologist and leader with 27 years of contribution to modern computer architecture, device identity, platform trust elevation, large IoT initiatives and ambient intelligence research, with more than 20 issued and pending patents. Prior to Acceptto, Shahrokh was a senior principal technologist at Intel Corporation for 25 years in a variety of leadership positions, where he architected and led multiple billion-dollar product initiatives.

Published Thursday, May 16, 2019 7:33 AM by David Marshall