
2 minute read
The problem with agents.
from TBtech April Edition
by Launched
APIs now underpin a growing share of online services. A prime example from recent years is their introduction to banking, to facilitate Open Banking: it’s now possible to see a multitude of credit cards and savings accounts in a single app, and to make payments from that app without using a card. APIs are only going to become more common as services are embedded in websites, in apps, and beyond.
But the march of APIs means the agents often used to protect websites won’t help.
Agents And Website Protection
A great deal of website protection relies on software agents: autonomous pieces of software that perform tasks without user input. These agents are sometimes referred to as “bots”, which is ironic given that the threats they are often deployed against are bot attacks.
Bot attacks are performed by hackers for a number of nefarious purposes. In a credential stuffing attack, they use automation to check a list of breached passwords against another service in an attempt to take over accounts. Card cracking is similar, but checks the validity of stolen credit card details. Bots can also be used to buy and resell high-value items for a profit, most often airline seats, gig tickets and limited-edition sneakers. However, not all bots are bad.
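To make the credential stuffing example concrete, here is a minimal Node.js sketch of one common server-side heuristic for spotting it: counting failed logins per source IP in a sliding window. It is an illustration only; the thresholds are assumptions, not figures from any particular product.

```javascript
// Minimal sketch (illustrative assumptions): flag likely credential
// stuffing by counting failed logins per source IP in a sliding window.
const WINDOW_MS = 10 * 60 * 1000; // assumed 10-minute window
const MAX_FAILURES = 20;          // assumed failures allowed per window

const failures = new Map(); // ip -> timestamps of recent failed logins

function recordLoginAttempt(ip, succeeded) {
  const now = Date.now();
  const recent = (failures.get(ip) || []).filter(t => now - t < WINDOW_MS);
  if (!succeeded) recent.push(now);
  failures.set(ip, recent);
  // A bot working through a breached password list racks up failures
  // far faster than a human mistyping a password.
  return recent.length > MAX_FAILURES ? 'block' : 'allow';
}

console.log(recordLoginAttempt('203.0.113.7', false)); // 'allow' until threshold
```

The weakness, as discussed below, is that a rule like this is static: an attacker who spreads attempts across many IPs, or paces them slowly, slips straight under it.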
Sites that aggregate prices often use bots to fetch data from multiple sites, and search engines use them to crawl and index the web. Clearly, blocking bots en masse is not advisable: it can mean blocking valuable services, or even becoming invisible to search engines and damaging a website’s SEO. Telling the good bots from the bad is a must, and this is where software agents are typically employed: bots fighting bots.
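One well-documented way to tell a genuine search-engine crawler from a bot merely claiming to be one is a reverse-then-forward DNS check; Google, for example, publishes this method for verifying Googlebot. A minimal Node.js sketch:

```javascript
// Reverse-then-forward DNS check for a visitor claiming to be Googlebot.
// Google documents this verification method; the same idea works for
// other well-known crawlers with published hostnames.
const dns = require('dns').promises;

async function isVerifiedGooglebot(ip) {
  try {
    const hostnames = await dns.reverse(ip); // IP -> hostnames
    const host = hostnames.find(h =>
      h.endsWith('.googlebot.com') || h.endsWith('.google.com'));
    if (!host) return false;                 // not a Google hostname
    const addrs = await dns.resolve(host);   // hostname -> IPs
    return addrs.includes(ip);               // forward lookup must match
  } catch {
    return false; // unresolvable addresses are treated as unverified
  }
}

isVerifiedGooglebot('66.249.66.1').then(ok => console.log(ok));
```

Checks like this only cover crawlers that publish their identity; everything else still has to be classified by behaviour, which is the job the agents take on.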
While this may sound like the premise for a sci-fi thriller, there are problems with this approach. Agents are inserted into websites using snippets of JavaScript code, or integrated into apps using SDKs, which means some level of reverse engineering is always possible. Agents also need to be managed, and our research suggests that this task alone can require up to 40 employees in the biggest corporations.
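To see why reverse engineering is possible, consider the sort of snippet an agent ships. The signals and endpoint below are invented for illustration, not taken from any real product, but the structural point holds: the code runs in the visitor’s browser, so anyone can open the developer tools, read it, and probe it.

```javascript
// Illustrative sketch of an agent snippet (endpoint and field names are
// invented). Because it executes client-side, everything it collects and
// everywhere it reports to is visible to an attacker.
(function agentSnippet() {
  const signals = {
    webdriver: navigator.webdriver === true, // flag set by many automation tools
    languages: navigator.languages ? navigator.languages.length : 0,
    screen: `${screen.width}x${screen.height}`,
  };
  fetch('https://agent-vendor.example/collect', {
    method: 'POST',
    body: JSON.stringify(signals),
  });
})();
```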
Testing and deploying agents can take a great deal of time, as different versions have to be created for websites and for mobile apps, with often only a cut-down version, if any, for APIs. Agents require a front end to work: unless an API sits behind a website or mobile app, and many do not, there is no way to deploy an agent to protect it.
The Growth Of APIs
API traffic now exceeds web traffic by quite some distance; Cloudflare, for instance, estimates that more than 50% of the traffic it sees is calling an API. It’s easy to see why: these are often automated calls from other websites, from mobile apps, from IoT devices and more. That makes it even harder to tell what is a bot attack, and yet APIs often receive less protection from software agents than websites and mobile apps do.
The market needs to move away from software agents as the preferred method of protection against bot attacks, and instead look to agentless approaches that provide equal protection for websites, apps and APIs.
These approaches can be better equipped to fight bot attacks. Software agents are rule-based: if an attacker finds a way around a rule, for example by staying just below a rate limit, the agent has no way to adapt. Agentless approaches, by comparison, can evolve without an update having to be pushed out to every agent. The sketch below shows how easily a static rule is sidestepped.
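As a concrete illustration of that rate-limit example, here is a minimal sketch of a fixed rule and the trivial evasion; the threshold of 100 requests per minute is an assumed figure.

```javascript
// A fixed rate-limit rule and the trivial evasion described above:
// the bot simply paces itself just under the threshold. The limit of
// 100 requests/minute is an assumed illustration.
const LIMIT_PER_MINUTE = 100;

function rateLimitRule(requestsThisMinute) {
  return requestsThisMinute > LIMIT_PER_MINUTE ? 'block' : 'allow';
}

// An attacker who learns or guesses the threshold stays just below it:
console.log(rateLimitRule(LIMIT_PER_MINUTE + 1)); // 'block' (naive bot)
console.log(rateLimitRule(LIMIT_PER_MINUTE - 1)); // 'allow' (paced bot)

// The rule cannot adapt without a new agent version being shipped, whereas
// an agentless, server-side system can change its detection logic at will.
```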
As the web evolves, so should the technology that protects it. With increasing automation comes an increased risk of automated attacks. The limitations of software agents are more and more obvious in an API-led world, and the agents themselves need to be left behind.