Virtualization Technology News and Information
Imperva 2024 Predictions: Generative AI Will Bolster Bad Bot Activity in 2024, Putting APIs and Data at Risk


Industry executives and experts share their predictions for 2024.  Read them in this 16th annual series exclusive.

Generative AI Will Bolster Bad Bot Activity in 2024, Putting APIs and Data at Risk

By Karl Triebes, SVP, Product Management & General Manager, Application Security, Imperva

Nearly half of all internet traffic came from bots in 2022, and that share will likely exceed 50% by the end of this year. Malicious automated software applications engage in increasingly sophisticated attack activity: overloading servers with nonhuman traffic, attempting to exploit user accounts, and scraping data from websites and APIs. Not all bots are "bad," but even relatively benign bots - like search crawlers, feed fetchers, and monitoring bots - can cause significant problems for organizations when the volume of automated web traffic becomes overwhelming.

As businesses across every industry begin planning for 2024, understanding the reasons behind the rise in bot activity will be important. One of the most influential factors is the advent of generative AI tools and large language models (LLMs). Attackers use these to produce bots that can emulate human behavior effectively enough to evade simple detection tools and carry out API abuse, data exfiltration, account takeover, and other advanced attacks. Businesses looking to protect themselves against these multiplying threats will need the right tools for the job.

How generative AI will impact automated web traffic

Bot activity isn't going anywhere, and it's easy to imagine 70% or 80% of all internet traffic coming from bots within the next 10 years. The emergence of generative AI will be a driving factor. Generative AI leverages crawlers to scrape websites and collect information from across the internet, and as those systems become a normal and accepted element of business operations, the traffic generated by those crawlers will continue to increase. This isn't a particular concern on its own - web crawlers that collect and index data, such as Google's, serve an important function and are generally categorized as "good bots." Unfortunately, cybercriminals will also explore ways to leverage generative AI and bots (i.e., bad bots) to improve their own attack tactics - and therein lies the problem for security teams.

AI-generated programs are helping attackers write scripts capable of convincingly emulating human behavior and obfuscating themselves - which means bot detection solutions that rely on behavioral analytics to detect malicious activity will need to improve. As AI tools grow smarter, challenge-based defenses like CAPTCHA and reCAPTCHA will no longer be enough to consistently mitigate bot traffic. The result is that organizations will need to shift their approach to detection, adopting a "Zero Trust" approach that focuses more on limiting access privileges while identifying and remediating known exposures.
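To illustrate why purely behavioral signals grow less reliable, here is a minimal, hypothetical sketch (invented for this article, not any vendor's actual detection logic) of a timing-based heuristic. It flags naive scripted bots that fire at fixed intervals, but a bot that jitters its timing to mimic a human slips right past it:

```python
import statistics

def timing_bot_score(intervals: list[float]) -> float:
    """Score request-timing regularity in [0, 1]: near-constant gaps
    between requests (low coefficient of variation) suggest a naive bot."""
    if len(intervals) < 2:
        return 0.0
    mean = statistics.mean(intervals)
    if mean == 0:
        return 1.0
    cv = statistics.stdev(intervals) / mean  # coefficient of variation
    # Map low variability to a high bot score, floored at 0.
    return max(0.0, 1.0 - cv)

# A naive scripted bot fires at fixed 1-second intervals -> score 1.0.
naive_bot = [1.0, 1.0, 1.0, 1.0, 1.0]
# A bot that jitters its timing to mimic a human -> low score,
# evading this purely behavioral check.
mimicking_bot = [0.4, 2.1, 0.9, 3.5, 1.2]

print(timing_bot_score(naive_bot))       # 1.0
print(round(timing_bot_score(mimicking_bot), 2))
```

Real behavioral analytics combine many more signals, but the same evasion principle applies to each one an AI-driven bot can learn to imitate.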

This is particularly true as business logic attacks (BLAs) become more common. These attacks exploit flaws in the intended functionality and processes of an application, often bypassing traditional security measures to manipulate workflows and misuse legitimate features. APIs are particularly vulnerable to this type of attack: in 2022, 17% of all attacks on APIs originated with bad bots abusing business logic. Expect this figure to rise in the coming year as more attackers take advantage of expanding ecosystems of APIs and third-party dependencies. The updated OWASP Top 10 for API Security includes several exploits associated with business logic attacks, which remain among the most difficult to defend against.
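A toy example makes the pattern concrete. In this hypothetical sketch (all names and values invented), a checkout workflow validates that a coupon exists but never enforces its single-use business rule, so an automated client can simply replay it:

```python
VALID_COUPONS = {"SAVE10": 0.10}  # hypothetical 10%-off coupon

def apply_coupon_vulnerable(total: float, coupon: str) -> float:
    # The input is "valid," so traditional input validation passes --
    # but the intended one-use-per-order rule is never enforced.
    if coupon in VALID_COUPONS:
        return total * (1 - VALID_COUPONS[coupon])
    return total

def apply_coupon_fixed(total: float, coupon: str, used: set) -> float:
    # Enforce the business rule: a coupon may be applied only once.
    if coupon in VALID_COUPONS and coupon not in used:
        used.add(coupon)
        return total * (1 - VALID_COUPONS[coupon])
    return total

price = 100.0
for _ in range(3):  # automated replay of the same coupon
    price = apply_coupon_vulnerable(price, "SAVE10")
print(round(price, 2))  # 72.9 - stacked discounts the business never intended
```

Nothing here is "malformed input" a WAF would flag; the request stream is abusive only in the context of the application's intended workflow, which is exactly why BLAs slip past traditional defenses.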

Further, API security is a blind spot for many organizations, and a growing number of attackers are realizing that targeting applications is easier than targeting an organization's digital infrastructure directly. Protecting APIs should be a priority for businesses in the new year, and solutions capable of detecting changes to APIs and monitoring how they're accessed will become increasingly important. Many businesses will look to improve their API testing to ensure there are no inherent flaws in the business logic of an application before it goes live.

RASP technology will enjoy a renaissance

The increasing sophistication of AI will render traditional behavioral detection systems less effective. Runtime Application Self-Protection (RASP) will likely enjoy a resurgence, as the technology identifies violations at a more foundational level, making it far harder for attackers to evade with AI-driven behavioral mimicry.

Additionally, RASP can monitor systems at the transactional level, enabling detection when a user is attempting to run illegal commands, overflow memory, or access files they have no legitimate reason to access. These are all activities commonly associated with bad bot activity, making RASP a helpful tool to manage the influx of nonhuman traffic to websites and APIs. The ability to identify and mitigate suspicious activity early in the attack cycle improves the organization's odds of stopping the attack before the intruder has a chance to do significant damage.
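As a rough illustration of the transactional-level idea (a minimal sketch, not how any commercial RASP product is implemented; the root directory and function names are hypothetical), the check below intercepts the file path an application call is about to use and blocks it if it escapes the allowed root - regardless of how legitimate the originating request looked:

```python
import os
from functools import wraps

ALLOWED_ROOT = "/var/app/uploads"  # hypothetical application data root

def rasp_guard(func):
    """Sketch of a RASP-style runtime check: validate the *resolved*
    file path at the moment of access, inside the application."""
    @wraps(func)
    def wrapper(path: str):
        root = os.path.realpath(ALLOWED_ROOT)
        resolved = os.path.realpath(os.path.join(root, path))
        if not resolved.startswith(root + os.sep):
            # The operation itself is illegal, whatever the request said.
            raise PermissionError(f"blocked out-of-root access: {path}")
        return func(resolved)
    return wrapper

@rasp_guard
def read_upload(resolved_path: str) -> str:
    return f"would read {resolved_path}"

print(read_upload("report.pdf"))              # allowed
try:
    read_upload("../../../etc/passwd")        # traversal attempt is blocked
except PermissionError as e:
    print(e)
```

Because the decision is made on the resolved operation rather than on traffic patterns, a bot that perfectly imitates human browsing behavior gains nothing: the illegal file access is denied either way.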

RASP solutions have traditionally been difficult to scale because they need to be deployed on a per-server and per-application basis. Today, businesses can incorporate RASP as a "defense-in-depth" mechanism, used to assist in the detection of both bots and business logic exploits. Rather than relying on RASP as a top-line defense, it can be used to fortify bot management programs and better equip security teams to detect and stop incidents like Broken Object Level Authorization (BOLA) attacks, which remain among the most common (and dangerous) API vulnerabilities in today's threat landscape. As bot-based attacks target APIs in increasing numbers, we can expect to see a growing share of organizations use RASP to protect their web applications from malicious automated activity.  
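For readers unfamiliar with BOLA, here is a minimal, hypothetical sketch (all data and names invented) of the flaw: the handler authenticates the caller but never checks that the requested object actually belongs to them, so any logged-in user - or bot - can enumerate IDs and read other users' records:

```python
ORDERS = {
    101: {"owner": "alice", "total": 42.00},
    102: {"owner": "bob",   "total": 17.50},
}

def get_order_vulnerable(user: str, order_id: int) -> dict:
    # BOLA: caller is authenticated, but object ownership is never checked,
    # so any user can fetch any order by guessing IDs.
    return ORDERS[order_id]

def get_order_fixed(user: str, order_id: int) -> dict:
    order = ORDERS[order_id]
    if order["owner"] != user:  # the missing object-level authorization check
        raise PermissionError("not your order")
    return order

print(get_order_vulnerable("alice", 102))  # leaks bob's order
try:
    get_order_fixed("alice", 102)
except PermissionError as e:
    print(e)
```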

Organizations will prioritize timely detection as bad bot activity grows

With automated web traffic set to eclipse human traffic, and with bots growing more advanced by the day, organizations need to understand how to identify and contain malicious nonhuman activity. It's always a cat-and-mouse game between attackers and defenders, and while adversaries will leverage these new capabilities in unpredictable ways, organizations have access to tools that can help them mitigate complex threats. In 2024, we can expect security teams to move toward a Zero Trust approach that prioritizes remediating vulnerabilities likely to lead to business logic attacks and other, similar dangers.

Organizations will need solutions that are equipped to look for potential holes in business logic by learning and understanding the normal flow of data and how it's accessed. I expect RASP solutions to become more viable (and valuable) in a support role with the ability to address difficult-to-prevent incidents like BOLA attacks. Looking to 2024 and beyond, it's clear that stopping bad bots will be a priority for organizations.



Karl Triebes, SVP and GM, Application Security, Imperva


Karl Triebes is a technology leader who has helped some of the world's largest organizations conceive and build products, services, and businesses for networking, application software, storage, and cloud. As Senior Vice President and General Manager, Application Security, he oversees product roadmap and go-to-market strategy for the Imperva Application Security portfolio. Prior to Imperva, he was Executive Vice President of Product Development and CTO at F5. Triebes has held senior leadership positions with Amazon Web Services, Foundry Networks, and Alcatel.

Published Wednesday, November 15, 2023 7:36 AM by David Marshall