I've been looking at this recently, and this isn't just for bots. eBPF fingerprinting is how Cloudflare quickly detects DDoS attacks.
https://blog.cloudflare.com/defending-the-internet-how-cloud...
What's the simplest way to implement eBPF filtering?
As in, something with NFTables/Fail2Ban-level usability.
https://bpfilter.io/ https://github.com/facebook/bpfilter https://lwn.net/Articles/1017705/
Thank you!
Something like https://github.com/renanqts/xdpdropper, Cilium's host firewall, https://github.com/boylegu/TyrShield, or https://github.com/ebpf-security/xdp-firewall exists today and implements eBPF-filter-based firewalling.
Of these, there is a sample integration of XDPDropper with fail2ban that never got merged (https://github.com/fail2ban/fail2ban/pull/3555/files) -- I don't think anyone else has really worked on that junction of functionality yet.
There's also Wazuh, which seems to package eBPF tooling up with a ton of detection and management components, but it's not as simple to deploy as fail2ban.
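For a sense of scale, the kernel-side eBPF of such a tool is tiny. A minimal sketch (assuming clang/libbpf, and some user-space loader - e.g. a fail2ban action like the one in that PR - that fills the ban map; the map and program names here are made up for illustration):

    // Minimal XDP drop-list sketch: drop any IPv4 packet whose source
    // address has been inserted into the "banned" map from user space.
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 65536);
        __type(key, __u32);   /* IPv4 source address, network byte order */
        __type(value, __u8);  /* presence flag, value unused */
    } banned SEC(".maps");

    SEC("xdp")
    int xdp_drop_banned(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        /* Banned source? Drop before the packet ever reaches the stack. */
        if (bpf_map_lookup_elem(&banned, &ip->saddr))
            return XDP_DROP;

        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";

The user-space half is just adding and removing map entries (via libbpf or bpftool), which is exactly the junction that PR was trying to wire into fail2ban.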
Thank you
More useless and harmful anti-bot nonsense, probably with many false detections, when a simple and neutral rate-limiting 429 does the job.
There are MANY cases for such an implementation. My service [1] implements exactly this, eBPF included, and my users rely on it for many valid reasons, such as:
- shopping cart fraud
- geo-restricted content (think content distribution laws)
- preventing abuse (think ticket scalpers)
- preventing cheating and multi-accounting (think gaming)
- preventing account takeovers (think 2FA trigger if fingerprint suddenly changed)
There is much more, but yeah, this tech has its place. We cannot just assume everyone runs a static website with free-for-all content.
[1] https://visitorquery.com/
Why do you need eBPF for it? Why are IP filtering and header/cookie analysis not enough? What is shopping cart fraud? What are your false positive and false negative rates?
Why is it useless and harmful? Many of us are struggling—without massive budgets or engineering teams—to keep services up due to incredible load from scrapers in recent years. We do use rate limiting, but scrapers circumvent it with residential proxies and brute force. I often see concurrent requests from hundreds or thousands of IPs in one data center. Who do these people think they are?
Residential proxy users are paying on the order of $5 per gigabyte, so send them really big files once detected. Or "click here to load the page properly" followed by a trickle of garbage data.
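To make the trickle idea concrete, here is a rough sketch (plain C; the function name, the pacing, and the fake 1 GiB Content-Length are all made up for illustration) of what you hand a connection to once your detection has flagged it:

    /* Tarpit sketch: send plausible HTTP headers, then drip tiny chunks
     * of junk very slowly, wasting the scraper's time and connection
     * slots. A real server would also ignore SIGPIPE (or use
     * send(..., MSG_NOSIGNAL)) so a disconnecting client can't kill it. */
    #include <string.h>
    #include <unistd.h>

    static void tarpit(int client_fd)
    {
        const char *hdr =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: application/octet-stream\r\n"
            "Content-Length: 1073741824\r\n"   /* promise 1 GiB */
            "\r\n";
        if (write(client_fd, hdr, strlen(hdr)) < 0)
            return;

        char junk[16];
        memset(junk, 'A', sizeof(junk));
        for (int i = 0; i < 3600; i++) {       /* dribble for up to an hour */
            if (write(client_fd, junk, sizeof(junk)) < 0)
                break;                          /* client gave up */
            sleep(1);                           /* one tiny chunk per second */
        }
        close(client_fd);
    }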
There is no real way to confidently tell if someone is using a residential proxy.
Once you spot a specific pattern, you can detect that pattern.
It is harmful because innocent users routinely get caught in your dragnet. And why even have a public website if the goal is not to serve it?
What is the actual problem with serving users? You mentioned incredible load. I would stop using inefficient PHP or JavaScript or Ruby for web servers. I would use Go or Rust or a comparable efficient server with native concurrency. Survival always requires adaptation.
How do you know that the alleged proxies belong to the same scrapers? I would look carefully at the IP chain in the X-Forwarded-For (XFF) header to work out which subnets to rate-limit.
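In code terms, that amounts to reducing each hop in the XFF chain to a subnet key you can count against. A small sketch (C, IPv4 only; assumes the header was appended by a proxy you trust, since clients can forge XFF, and the callback is a placeholder for whatever counter store you use):

    /* Reduce an X-Forwarded-For chain to /24 keys for per-subnet
     * rate-limit counters. */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    typedef void (*subnet_cb)(uint32_t subnet_be);   /* /24 key, network order */

    static void for_each_xff_subnet(const char *xff, subnet_cb cb)
    {
        char buf[1024];
        strncpy(buf, xff, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        for (char *tok = strtok(buf, ","); tok; tok = strtok(NULL, ",")) {
            while (*tok == ' ')
                tok++;                               /* trim leading space */
            struct in_addr a;
            if (inet_pton(AF_INET, tok, &a) == 1)
                cb(a.s_addr & htonl(0xFFFFFF00));    /* mask to /24 */
        }
    }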
Another way is to require authentication for expensive endpoints.
[flagged]
The truth is often unpleasant, and it owes you nothing. I ask you in return to be more open to it. By wanting to hide and suppress the truth, it is you who is not nice.
You are wrong and coming across entirely pompous. I see what I see on my servers, and you clearly have not seen the same.
I guess the blame is on me here for providing only very brief context on the topic, which makes it sound like these are just anti-scraping solutions.
This kind of fingerprinting solution is widely used everywhere, and it doesn't have the goal of directly detecting or blocking bots, especially harmless scrapers. It just provides an additional data point that can be used to track patterns in website traffic and eventually block fraud or automated attacks - that kind of bot.
If it's making a legitimate request, it's not an automated attack. If it's exceeding its usage quota, that's a simple problem that doesn't require eBPF.
What kind of websites do you have in mind when I talk about fraud patterns? Not everything is a static website, and I absolutely agree with you on that point: if your static website is struggling under the load of a scraper, there is something deeply wrong with your architecture. We live in wonderful times; Nginx on my 2015 laptop can gracefully handle 10k requests per second before I even activate rate limiting.
Unfortunately there are bad people out there, and they know how to write code. Take a look at popular websites like TikTok, Amazon, or Facebook. They are inundated with fraudulent requests whose goal is to use their services in ways that harm others or are straight-up illegal, from spam to money laundering. On social media, bots impersonate people in an attempt to influence public discourse and undermine democracies.
I run simple static sites from a (small) off-grid server at home. It has plenty of capacity for normal use, but cannot fully handle the huge traffic overshoots that bots and DoSes and poorly-written systems of household-name-multinationals inflict. I should not have to pay/scale to over-provision by an order of magnitude or more to stop the bullies and overbearing/idle from hurting genuine users. Luckily some relatively simple but carefully considered rules shut out much of the bad traffic while hurting almost no legitimate human visitor that I can find. Nuance and local circumstances are everything. But that took some engineering time on my part, that I also should not have had to spend. Particularly in fending off the nominally-nice multinationals.
This is an overly simplistic view that does not reflect reality in 2025.
The simple reality is that if you don't want to put something online, then don't put it online. If something should be behind locked doors, then put it behind locked doors. Don't do the dance of promising to have something online, then stop legitimate users when they request it. That's basically what a lot of "spam blockers" do -- they block a ton of legitimate use as well.
Sure, but it's a nice exploration of layer 4-style detection.
Almost nothing pays attention to 429s, at least not in a good way, including big-name sites. I've written a whole paper about it...
Who cares if they pay attention to 429s? Your load balancer is giving them the boot, and your expensive backend resources aren't being wasted. They can make requests until the cows come home; they're not getting anything until they slow down.
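For what it's worth, the reason that check is cheap is that a rate limiter is just a couple of arithmetic operations per request. A minimal token-bucket sketch (per-key storage and the actual 429 response are left out; the numbers you would pick are workload-specific):

    /* Token-bucket sketch for the "return 429 and move on" approach.
     * Keep one bucket per client key (IP, subnet, API key, ...). */
    #include <stdbool.h>
    #include <time.h>

    struct bucket {
        double tokens;   /* currently available requests */
        double rate;     /* refill rate, requests per second */
        double burst;    /* maximum bucket size */
        double last;     /* last refill time, seconds */
    };

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Returns true if the request is allowed, false if it should get a 429. */
    static bool bucket_allow(struct bucket *b)
    {
        double t = now_sec();
        b->tokens += (t - b->last) * b->rate;
        if (b->tokens > b->burst)
            b->tokens = b->burst;
        b->last = t;

        if (b->tokens < 1.0)
            return false;    /* over the limit: respond 429, spend nothing */
        b->tokens -= 1.0;
        return true;
    }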
If you're rate-limiting by IP, well... some entire countries have only a handful (or one) externally visible IP.
And some of the bad bots are snowshoeing across many many IPs (and fabricating UAs). How is that load balancer going to help?
For IPv4 sure, but have you heard of our Lord and Savior IPv6?
My local monopoly hasn't. Maybe in 20 years.
I downvoted you due to the way you're communicating in this thread. Be kind, rewind. Review the guidelines here perhaps since your account is only a little over a year old.
I found this article useful and insightful. I don't have a bot problem at present, but I have an adjacent problem, and this context was useful for an ongoing investigation.
As a rule, strong feelings about issues do not emerge from deep understanding. -Sloman and Fernbach
[flagged]
Please. Save your assumptions.
You can stop spam, but you will also stop regular users, and that is the problem. Your classifier is not as powerfully accurate as you think.
If you don't want to put something online, then don't put it online!
Regular users retry, that's the point of temp-fail on first attempt. Botnets never retry because they aren't real mailers with send queues. And the number of people who have ever intentionally operated a mailer from a Windows XP box on a residential ADSL line is very accurately approximated by zero.
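The mechanism being described (greylisting) is almost embarrassingly small. A rough sketch, assuming the caller has already hashed the (client IP, sender, recipient) triple; the table size and timeouts are illustrative, not recommendations:

    /* Greylisting sketch: temp-fail the first delivery attempt and accept
     * retries that come back after a short delay. Collisions and the
     * all-zero hash are ignored for brevity. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define SLOTS      65536
    #define MIN_RETRY  300          /* sender must wait at least 5 minutes */
    #define MAX_AGE    (36 * 3600)  /* forget entries after 36 hours */

    static struct { uint64_t key; time_t first_seen; } table[SLOTS];

    /* Returns true to accept, false to reply "451 try again later". */
    static bool greylist_check(uint64_t triple_hash, time_t now)
    {
        unsigned idx = triple_hash % SLOTS;

        if (table[idx].key != triple_hash ||
            now - table[idx].first_seen > MAX_AGE) {
            /* First time we see this triple (or a stale entry): record it
             * and defer. Real mailers retry; most botnets never do. */
            table[idx].key = triple_hash;
            table[idx].first_seen = now;
            return false;
        }
        /* Seen before: accept only if the sender actually waited and retried. */
        return now - table[idx].first_seen >= MIN_RETRY;
    }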
Are there canned OS images / browsers / libraries / tools for resisting such fingerprinting? Similar in concept to how some browsers try to make themselves look homogenous across different users?
E.g. Can the MTU / Maximum Segment Size (MSS) TCP option be influenced from the client end to be less unique, retransmission timing logic deliberately managed, etc?
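At least the MSS part is reachable from userspace on Linux. A small sketch (the option has to be set before connect() to affect the SYN's advertised MSS; 1460 is just an illustrative "looks like plain Ethernet" value, and the kernel may still clamp it):

    /* Sketch: nudge the MSS the client advertises in its SYN, assuming
     * Linux and glibc. */
    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int socket_with_common_mss(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return -1;
        }

        int mss = 1460;  /* typical Ethernet-path value */
        /* Must be set before connect() to influence the SYN's MSS option. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
            perror("setsockopt(TCP_MAXSEG)");

        return fd;  /* connect() as usual afterwards */
    }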
Why does fingerprinting always happen right at connection start? The TCP SYN usually gives clean metadata, but what about components like static proxies, load balancers, or mobile networks? All of these can shift stack behavior midstream, which could make the whole exercise obsolete.
One of the biggest use cases for fingerprinting is as a way to reject requests from bot traffic, as mentioned in the article. That accept/reject decision should be made as early in the session lifecycle as possible to minimize resource impact and prevent exfiltration of data. You're right that TCP flags don't provide as much signal, as the TCP stack is mostly handled by the OS and middleboxes. A better source of fingerprinting info is in the TLS handshake - it has a lot more configurability, and is strongly correlated with the user agent.
TCP fingerprinting often remains effective because transparent proxies and pass-through (L4) load balancers preserve the original TCP options and behaviors from the client, passing along the distinctive stack characteristics that make fingerprinting possible. (Terminating L7 proxies, by contrast, replace them with their own stack's.)
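To make that concrete, the signal in question is essentially "initial TTL, window size, and the presence/order of TCP options in the SYN". A userspace sketch (assumes you already have the raw IPv4 SYN bytes, e.g. from pcap or an eBPF perf buffer; tools like p0f do this far more thoroughly):

    /* Build a crude p0f-style signature string from a raw IPv4 SYN.
     * `pkt` points at the IP header, `len` is the captured length. */
    #include <stdint.h>
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>

    static void syn_signature(const uint8_t *pkt, size_t len,
                              char *out, size_t outsz)
    {
        if (len < sizeof(struct iphdr))
            return;
        const struct iphdr *ip = (const struct iphdr *)pkt;
        size_t ihl = (size_t)ip->ihl * 4;
        if (ihl < sizeof(struct iphdr) || len < ihl + sizeof(struct tcphdr))
            return;
        const struct tcphdr *tcp = (const struct tcphdr *)(pkt + ihl);
        if (!tcp->syn || tcp->ack)
            return;                               /* only bare SYNs */

        int n = snprintf(out, outsz, "ttl=%u,win=%u,opts=",
                         ip->ttl, ntohs(tcp->window));
        if (n < 0 || (size_t)n >= outsz)
            return;

        /* The presence and *order* of options (MSS, SACK-permitted,
         * timestamps, window scale, NOPs) differ between OS stacks:
         * that is the fingerprint. */
        const uint8_t *opt = (const uint8_t *)tcp + sizeof(struct tcphdr);
        const uint8_t *end = pkt + ihl + (size_t)tcp->doff * 4;
        if (pkt + len < end)
            return;                               /* options truncated */

        while (opt < end && (size_t)n + 4 < outsz) {
            uint8_t kind = *opt;
            n += snprintf(out + n, outsz - n, "%u,", kind);
            if (kind == 0)                        /* end-of-options list */
                break;
            if (kind == 1) {                      /* NOP has no length byte */
                opt++;
                continue;
            }
            if (opt + 1 >= end || opt[1] < 2)     /* malformed length */
                break;
            opt += opt[1];
        }
    }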
This is a good point. I guess that if you have the luxury of controlling the front-end side of the web application, you can implement a system that polls the server routinely; over time this gives you a clearer picture. Note that most real-world fingerprinting systems run in part on the JavaScript side, which enables all sorts of tricks.
I have work reasons for needing to learn a lot about kernel-level networking primitives (it turns out tcpdump and eBPF are compatible with almost anything, no "but boss, foobar is only compatible with bizbazz 7 or above!").
So when an LLM vendor that shall remain nameless had a model start misidentifying itself while the website was complaining about load... I decided to get to the bottom of it.
eBPF cuts through TLS obfuscation like a bunker-buster bomb through a ventilation shaft... or was it the other way around? Well, you know what I mean.