AI-based Bot Attacks are Here. WAPP is the Response.
Nearly 50 percent of all internet traffic now comes from bots. Not all of it is bad. Web crawlers that index the web and uptime monitors that check APIs, servers, and websites serve legitimate purposes. But that statistic highlights a new reality we must face: nearly half of what touches your applications is not human.
With the rise of AI-based bots, the challenge isn’t just blocking bots. It’s figuring out which automation is legitimate, which is malicious, and which is actively trying to look legitimate. For organizations that struggle to distinguish the difference, the cost can be brutal. In retail alone, every dollar of revenue lost to fraud brings nearly 300 percent more in prevention and recovery costs. That monetary cost is on top of operational headaches, erosion of customer trust, and irreparable brand damage. A solution that can properly protect applications must be able to work contextually and correlate information as a platform.
Why Bots are Harder Than Ever to Stop
If bot attacks were obvious, such as brute-force login attempts or flood attacks, this would be a much simpler problem. But modern bot attacks don’t behave that way.
Today’s bot attacks increasingly leverage AI-driven techniques designed to mimic legitimate users:
- Human-like navigation patterns: interacting with a webpage the way a human would
- Stealthy transaction rates: carrying out transactions at a rate that goes under the radar
- Falsified API workflows: AI finds business-logic vulnerabilities in an API and generates a valid-looking workflow that gives attackers access to sensitive information
The common thread across these examples is business logic abuse. These bots mask their malicious intent by behaving in ways that are technically within the rules.
As an example, I am a teacher trying to go through my lesson plan. I have students that are trying to delay me from finishing on time. If students are obviously making a ruckus, or asking completely unrelated questions, it will be obvious to me that I should ignore those students or just kick them out of class. Their attempts at distraction are clear. However, what if there is a student who purposefully asks what seem like insightful questions, but in reality, their goal is to hinder the progress of the lesson plan? How would I distinguish whether the student is full of curiosity or is just trying to distract me from finishing the lesson plan?
The answer in one word: context. Context can include the student’s behavior, the history of that behavior, the content of the question being asked, and so on. The same applies in security: we’re looking to distinguish legitimate automated traffic from traffic with malicious intent.
Many organizations attempt to solve this by buying more point solutions: a WAF for application-layer attacks, an API security tool for APIs, a bot manager for bots, and a WAF or L7 DDoS solution for L7 DDoS — all stitched together after the fact. The result is higher cost, more operational complexity, duplicated effort, and alerts that still need to be manually reviewed. Instead, a different approach is needed, using a unified system to protect the end target (applications).
ThreatX’s Approach: Protect the Application, Not Just the Bot Vector
The ThreatX decision engine doesn’t start by asking, “Is this traffic a bot?” It starts by asking, “What transactions and entities are targeting the application and for what purpose?”
ThreatX sits directly on the wire, observing the entities and transactions that interact with your application ecosystem. Regardless of the techniques attackers are using, we stop attacks and attackers in real time, including bots, whether they are AI-driven or not. Our decision engine continually evolves, learning to identify patterns of behavior that are deemed dangerous.
ThreatX uses an adaptive risk score generated by Hacker Mind, built of these components:
- Battle-tested machine learning algorithms
- Transaction-based tracking (in our classroom analogy, the questions being asked)
- Entity-based tracking (the student asking them)
- Cross-vector correlation through Hacker Mind
Early signals — subtle bot-like behaviors, anomalous API usage, strange authentication flows — raise an entity’s risk score. As behavior escalates or crosses vectors (for example, API abuse followed by L7 DDoS-like behavior), ThreatX correlates that activity instead of treating it as isolated events.
The result is generalized attacker and attack profiles that remain effective even when bots attempt to disguise themselves as legitimate users. The emphasis is on identifying dangerous entities or behavior and stopping them inline.
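ThreatX doesn’t publish the internals of its decision engine, but the idea described above can be sketched in a few lines. The following toy Python model (all names, weights, and thresholds are hypothetical, chosen only for illustration) shows an entity-based risk score that accumulates across transactions and escalates when the same entity’s activity crosses attack vectors, rather than treating each signal as an isolated event:

```python
from collections import defaultdict

# Hypothetical signal weights for illustration only; a real engine
# would learn and adapt these continuously.
SIGNAL_WEIGHTS = {
    "bot_like_navigation": 10,
    "anomalous_api_usage": 15,
    "strange_auth_flow": 20,
    "l7_ddos_pattern": 25,
}
CROSS_VECTOR_BONUS = 1.5   # correlated multi-vector activity scores higher
BLOCK_THRESHOLD = 60       # risk level at which the entity is blocked inline


class RiskTracker:
    """Toy per-entity risk tracker: transactions raise an entity's score."""

    def __init__(self):
        self.scores = defaultdict(float)        # entity -> cumulative risk
        self.vectors_seen = defaultdict(set)    # entity -> signals observed

    def observe(self, entity: str, signal: str) -> float:
        """Record one suspicious transaction and return the new risk score."""
        vectors = self.vectors_seen[entity]
        weight = SIGNAL_WEIGHTS.get(signal, 5)
        # Escalate when this entity crosses into a new attack vector
        # after prior suspicious activity (cross-vector correlation).
        if vectors and signal not in vectors:
            weight *= CROSS_VECTOR_BONUS
        vectors.add(signal)
        self.scores[entity] += weight
        return self.scores[entity]

    def should_block(self, entity: str) -> bool:
        return self.scores[entity] >= BLOCK_THRESHOLD


tracker = RiskTracker()
tracker.observe("203.0.113.7", "anomalous_api_usage")  # 15 -> total 15
tracker.observe("203.0.113.7", "strange_auth_flow")    # 20 * 1.5 -> total 45
tracker.observe("203.0.113.7", "l7_ddos_pattern")      # 25 * 1.5 -> total 82.5
assert tracker.should_block("203.0.113.7")
```

No single signal here crosses the threshold on its own; it is the correlated escalation across vectors that pushes the entity over the line, which is the point of scoring the attacker rather than the individual attack.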
Watch Jamison Utter explain Hacker Mind.
WAPP: The Real Differentiator
This is where the web application protection platform (WAPP) approach becomes the real differentiator.
Most vendors sell “best-of-breed” application solutions by vector:
- Bot protection
- API protection
- WAF
- L7 DDoS
But regardless of the vector, every one of these tools is ultimately trying to protect the same thing: the application.
ThreatX doesn’t sell bot protection as an add-on. Bot defense is a native capability of a unified platform designed to protect applications holistically. Because protections are integrated by design, ThreatX gains context that stitched-together solutions aren’t privy to.
This is the difference between a collection of talented individuals and a cohesive team. Integrated defenses amplify effectiveness. Siloed ones compete for attention and generate noise.
ThreatX’s Real-world Scenario
In one customer environment, there were already multiple products deployed: a CDN, a WAF, and API protection. Bots were still a major issue. The customer initiated a proof-of-concept for an add-on bot solution and, at the same time, evaluated ThreatX by A10 Networks. Architecturally, ThreatX was deployed behind the existing stack, upstream from the applications but downstream of the other security tools. In this position, ThreatX clearly identified and stopped malicious bot activity that had passed through four other products. That alone was enough for the customer to end the bot POC. The customer then removed the other point solutions and allowed ThreatX to protect multiple attack vectors on its own, holistically as a WAPP. As expected, ThreatX began seeing more traffic and more attack attempts; what was unexpected was the depth and sophistication of the attacks it uncovered, made visible by the holistic Hacker Mind decision engine.
The Human Layer: Why the ThreatX SOC Matters
ThreatX isn’t just software. Every deployment includes an expert-managed SOC team that continuously monitors, tunes, and responds on behalf of customers.
This creates a triple-check effect:
- ThreatX generates a list of alerts
- ThreatX double-checks its work
- The list of alerts is reviewed by the human ThreatX SOC team
- This finalized list of alerts is then sent to the customer’s SOC team
The result is fewer false positives, faster responses, and far less operational burden on already-stretched security teams.
ThreatX, a Complementary Solution that Protects Your Applications, Period.
ThreatX sits inline and secures the entire application ecosystem against attacks and attackers. Bots are becoming one of the most common and damaging attack vectors today, and ThreatX protects against them as a unified platform. In a world where bots are increasingly intelligent, adaptive, and hard to distinguish from legitimate users, protecting applications requires more than adding on another tool. It requires context, correlation, and a platform. That platform is WAPP, and bot protection is stronger when it’s built into the system, not bolted onto the side.