Virtualization Technology News and Information
MixMode 2020 Predictions: Top Predictions for AI in 2020

VMblog Predictions 2020 

Industry executives and experts share their predictions for 2020.  Read them in this 12th annual series exclusive.

By Dr. Igor Mezic, CTO and Chief Scientific Officer, MixMode

Top Four Predictions for AI in 2020

The year 2020 will be another landmark for artificial intelligence and machine learning as a whole, particularly so in the cybersecurity space.

As we develop tools to defend ourselves, hackers are developing better methods of breaching our networks and stealing our valuable data. With the advent of laws like the CCPA, enterprises will now be held financially responsible for ensuring their consumers' data is not stolen.

Because of these factors, I predict we will see great strides in both the development and application of AI technology in 2020. Over the past year, the buzzword "AI" has been muddled by marketers who claim just about any product has AI, when the "AI" in question actually does very basic processing.

As we move forward, it is vital that consumers be able to tell the difference between marketing hype and true AI.


To do so, one must first distinguish between AI as a process and the mathematical components of AI. The components can be deep learning, support vector machines, or Koopman operator theory, and these methods can be supervised or unsupervised.

When I think of AI, I think of a system rather than any of those single components, and I believe the trend will be a new understanding of AI as a system rather than as any one of its underlying components. In recent years there has been too much conflating of the two. The mathematics of deep learning, for example, has been conflated with AI because it was modeled on the operation of certain parts of the brain, but not all of it. The system-level processing components are not modeled precisely by deep learning. I think this will become clearer, because right now too many things are labeled AI that are not.

Machines can do very precise computations in a small amount of time using a very large amount of data. That is their advantage, and that is what we want to use in the application of AI; otherwise, we humans would be able to do it ourselves. There is a lower tier of AI that does very low-level processing jobs that humans can do, but here we're talking about higher functions whereby a machine can take a very large amount of data, process it, and summarize it in a very short amount of time -- and either enable decision making or actually make some decisions. That's true AI.


Machines are capable of processing enormous amounts of data, but humans are uniquely capable of making sense of data that might not be entirely precise, yet is close enough. What's important, particularly in the cybersecurity industry, is to merge those two capabilities. In cybersecurity, current sensing is completely different from human senses: we have sight and smell, but in cybersecurity we have logs. So true intelligence in that context is a notion that might be beyond even human intelligence, not just AI, with connections made that humans wouldn't necessarily make. A lot of what we do is based on logic, and machines are better than we are at performing logical operations; they are capable of making much longer sequences of logical deductions than humans are. That's why it's important to use true AI.

Let's say you have a massive amount of data on a network. As a human, you have all this traffic available to you. What are you going to base your decision on? You cannot follow the traffic of each IP address because there are far too many, so you're probably going to code some piece of software that summarizes the traffic data for you, and then make decisions based on that information.

The process is first to take a massive amount of data and summarize it so the human brain can actually understand the deviations. For example, there is a typical 24-hour cycle, and sometimes during the night something pops up; a human operator is perfectly capable of figuring out what just happened if they have it in a summary of their data. But they cannot scan through all the raw data themselves. A little ping in the middle of the night is buried somewhere in that mass of data, and a human needs a summary to start with.
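That summarize-then-flag workflow can be sketched in a few lines of Python. This is a minimal illustration, not MixMode's actual method: the per-hour baseline, the (hour, byte_count) data shape, and the z-score threshold are all assumptions made for the example.

```python
from statistics import mean, stdev

def hourly_baseline(samples):
    """Summarize history into a per-hour baseline: {hour: (mean, stdev)}
    from (hour, byte_count) observations collected over many days."""
    by_hour = {h: [] for h in range(24)}
    for hour, count in samples:
        by_hour[hour].append(count)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def flag_deviations(baseline, observations, z=3.0):
    """Flag observations more than z standard deviations from the hourly
    mean -- the 'little ping in the middle of the night'."""
    flagged = []
    for hour, count in observations:
        if hour not in baseline:
            continue
        mu, sigma = baseline[hour]
        if sigma > 0 and abs(count - mu) / sigma > z:
            flagged.append((hour, count))
    return flagged

# Quiet 3 a.m. traffic over ten days, then one night with a huge spike.
history = [(3, 100 + d) for d in range(10)]
baseline = hourly_baseline(history)
print(flag_deviations(baseline, [(3, 104), (3, 5000)]))  # only the spike
```

The summary (the baseline) is small enough for a human to reason about, while the raw observations need not be.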

True AI, however, would reason: "Well, I've summarized all of this and I've detected something really unusual, so I'm going to inform the human that something happened, or I am going to identify precisely what it was myself if I have it in my memory."

That's true intelligence.


What I described above is an unsupervised learning process, because I want to process a bunch of data without anyone telling me anything about it and without any human labeling it. From the data I've processed, I can then recognize that certain things happened, or that something happening in the future is unusual. That's completely unsupervised learning.

Today, most companies are focused solely on using supervised learning to train their AI, but the method is quickly becoming outdated. In this new decade, we should instead continue pushing forward and experimenting with active learning -- combining unsupervised and supervised learning. We are going to see more players entering the market with these new ideas because this is clearly where AI is heading.
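The combination described above can be illustrated with a toy active-learning round: an unsupervised score surfaces the most anomalous event, a human labels it, and the label feeds the supervised side for next time. Everything here is a hypothetical sketch under simplifying assumptions (a one-number "event", a (mean, stdev) baseline, and `ask_analyst` standing in for a real analyst workflow).

```python
def anomaly_score(event, baseline):
    """Unsupervised piece: how far an event sits from the learned baseline."""
    mu, sigma = baseline
    return abs(event - mu) / sigma if sigma else 0.0

def ask_analyst(event):
    # Stand-in for an analyst UI: here, anything wildly large is 'malicious'.
    return "malicious" if event > 1000 else "benign"

def active_learning_round(events, baseline, labeled, threshold=3.0):
    """One round: score unlabeled events, ask the human about the single
    most anomalous one, and record the label for supervised training."""
    candidates = [(anomaly_score(e, baseline), e) for e in events
                  if e not in labeled]
    candidates.sort(reverse=True)          # most anomalous first
    score, event = candidates[0]
    if score > threshold:
        labeled[event] = ask_analyst(event)  # human in the loop
    return labeled

labels = active_learning_round([104, 5000, 98], (100.0, 5.0), {})
print(labels)
```

The point of the loop is economy: the human labels only the few events the unsupervised model cannot explain, rather than labeling everything up front.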


Right now, hackers are getting better and better at designing AI-like systems that generate natural, believable text, generate logos, and even use adversarial AI methods like GANs to attack. This is definitely going to continue, which makes defensive AI an absolute necessity rather than a luxury. With new breaches and new ways of attacking a network showing up daily, there is no way to predict and label attacks as malicious if you've never seen them before. That is why it is becoming necessary to have an AI system capable of detecting malicious activity based on its own deductions, not because an instance has previously been labeled by a human as potentially harmful.

Clearly, a rule- and signature-based model gets tossed out the window here. We need something new. I believe that right now we need unsupervised learning, which will grow into the active learning model I described above. This is important because we are going to see the adversarial side developing the same types of approaches. They're already starting to use deep neural networks, GANs, and all the machinery of deep learning and AI to attack. The stakes are huge.
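The gap between the two models can be shown in a few lines. The signature set, signature names, and traffic numbers below are all invented for illustration; the point is only the structural difference: a lookup can never match what was never catalogued, while a deviation test can still fire.

```python
# Hypothetical catalogue of previously-seen attack signatures.
KNOWN_SIGNATURES = {"sig-malware-2019", "sig-worm-2018"}

def signature_match(event_sig):
    """Rule/signature model: only catches what has been seen before."""
    return event_sig in KNOWN_SIGNATURES

def anomaly_detect(byte_count, mu=100.0, sigma=5.0, z=3.0):
    """Unsupervised model: catches deviation from normal, signature or not."""
    return abs(byte_count - mu) / sigma > z

# A novel attack: no signature on file, but wildly abnormal traffic.
novel = {"sig": "sig-unknown-gan-attack", "bytes": 5000}
print(signature_match(novel["sig"]))   # False -- the signature model misses it
print(anomaly_detect(novel["bytes"]))  # True  -- the anomaly model flags it
```

A real system would of course use richer features than a single byte count, but the asymmetry survives: novelty defeats the lookup, not the statistics.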

Ideally, it is best to have an AI system that recognizes when it's being played. But that's hard even for humans, as we can be tricked too. I believe, for example, that AI will be much better than humans at detecting AI-generated phishing attacks.

Organizations in all industries are being affected by AI, and if you aren't using it, you should start now. The efficiency gained from AI algorithms is remarkable, so the technology more than pays for itself when enterprises or other organizations invest in it. On the network security side in particular, the losses can be horrendous if you don't employ the proper protections.


About the Author

Igor Mezic 

Dr. Igor Mezic has spent his career developing highly complex algorithms and artificial intelligence for data analytics. He earned his doctorate from Caltech, holds five patents, and is a professor of mechanical engineering at the University of California, Santa Barbara. The MixMode AI, which has been used in projects at DARPA and the DoD, is the first commercial use of true third-wave AI.

Published Friday, January 31, 2020 7:35 AM by David Marshall