

Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I have recently been getting pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.

However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
  • 2.78 million requests - or 24.6% of all traffic - is coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
  • 1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)
  • 0.49m req - 4.3% - Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)
  • 0.25m req - 2.2% - Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36
  • 0.22m req - 2.2% - meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)
and the list goes on like this. Summing up the top UA groups, it looks like my server is doing 70% of all its work for these fucking LLM training bots that don't do anything except crawl the fucking internet over and over again.
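(For the curious: pulling numbers like these out of the access logs takes very little code. Here's a minimal sketch, assuming a standard combined-format nginx log and a made-up path - not the exact pipeline I use, just the general idea.)

```python
# Rough sketch: tally user agents from a combined-format access log.
# The path and the exact pipeline behind the numbers above are different;
# this is just to show how little magic is involved.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # made-up path

# nginx "combined" format ends with: "$http_referer" "$http_user_agent"
UA_PATTERN = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"$')

counts: Counter[str] = Counter()
total = 0
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_PATTERN.search(line.rstrip())
        if not match:
            continue
        total += 1
        counts[match.group("ua")] += 1

for ua, hits in counts.most_common(10):
    print(f"{hits:>10}  {hits / total:6.1%}  {ua}")
```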

Oh, and of course, they don't just crawl a page once and then move on. Oh, no, they come back every 6 hours because lol why not. They also don't give a single flying fuck about robots.txt, because why should they. And the best thing of all: they crawl the stupidest pages possible. Recently, both ChatGPT and Amazon were - at the same time - crawling the entire edit history of the wiki. And I mean that - they indexed every single diff on every page for every change ever made. Frequently with spikes of more than 10 req/s. Of course, this made MediaWiki and my database server very unhappy, causing load spikes, and effective downtime/slowness for the human users.
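(To underline how little effort respecting robots.txt would take: the sketch below checks request URLs against a robots.txt using nothing but the Python standard library. The rules and URLs in it are made up for illustration, not the wiki's actual ones.)

```python
# Sketch only: the rules and URLs below are illustrative, not the wiki's real ones.
# Checking whether a fetch is allowed takes a handful of lines with the stdlib.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /index.php?action=history
Disallow: /index.php?diff=

User-agent: Amazonbot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Request URLs as they might show up in an access log (made up).
requests = [
    ("GPTBot", "https://wiki.example.org/index.php?action=history&title=FAQ"),
    ("GPTBot", "https://wiki.example.org/wiki/FAQ"),
    ("Amazonbot", "https://wiki.example.org/index.php?diff=1234"),
]

for agent, url in requests:
    verdict = "allowed" if parser.can_fetch(agent, url) else "disallowed"
    print(f"{verdict:<10} {agent:<10} {url}")
```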

If you try to rate-limit them, they'll just switch to other IPs all the time. If you try to block them by User Agent string, they'll just switch to a non-bot UA string (no, really). This is literally a DDoS on the entire internet.

Just for context, here's how sane bots behave - or, in this case, classic search engine bots:
  • 16.6k requests - 0.14% - Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
  • 15.9k req - 0.14% - Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/116.0.1938.76 Safari/537.36
Because those bots realize that there's no point in crawling the same stupid shit over and over again.

I am so tired.
People giving unsolicited advice are not being helpful, just annoying.

They are a scourge.
@Andreas G > viral offtopic

It's the brainstorming mode of an interconnected internet social being species called mono sapiens.
Take it or leave it ..
[Image: a chimp looking over his glasses into the camera, reading a book called "human behavior"]



btw
At the end of this 14-year-old take, Kruse refers to semantic understanding - I guess that's exactly the LLM and big-brother moment we are in right now. And that's why people on our free web are going crazy, leading to the viral reaction Dennis described.
btw btw
Looks like Dennis went viral in the ActivityPub space thx to friendica ..
:)
Looks like Dennis went viral in the ActivityPub space thx to friendica …

no. it was primarily someone taking a screenshot and posting it. someone who took a screenshot of.. diaspora. while being logged into their geraspora account.

but of course it's a friendica user who also sees nothing wrong about posting unsolicited advice who is making wrong claims.
@Dennis Schubert /offtopic viral

@denschub
I stumbled over it in a post from a friendica account, on a mastodon account of mine.
👍
posting unsolicited advice

Do you refer to something I wrote in this post of yours?
If so, and if you point me to it, I could learn what you consider unsolicited advice and try to avoid doing that in the future.
@utopiArte Your grasp of human psychology, internet culture, and science in general, is weak.

Consider staying off the internet.

(How'd you like that unsolicited advice?)
@ Andreas Geisler - eat more chicken ;)
I am sated, honest.

But some people exist in a mode of constant omnidirectional condescension, like little almighties, looking down in all directions.

Mostly lost causes. Deflating their egos sometimes helps, but usually just makes them worse.
It was fun, but it's time to stop. This post is about LLM-bots being assholes, but that doesn't mean we have to go down to the same levels.
This is the reason why our FOSS project restricted viewing the diffs to logged-in accounts. For us, some Chinese bots have been the main problem - not Google or Bing.
Just started using this repo for exactly this reason.
@Dennis Schubert What can / should one do about this?
"don't host web properties for foss projects" seems to be a good advice.
Is it unique to wikis for foss projects?
The silly way they crawl it makes me think this is a general thing happening to every service on the web.
Is there a way to find out/compare whether the crawlers are trying to target specific kinds of things?
so, I should provide some more context to that, I guess. my web server setup isn't "small" by most casual hosters' definitions. the total traffic is usually above 5 req/s, and this is not an issue for me.

also, crawlers are everywhere. not just those that I mentioned, but also search engine crawlers, and others. a big chunk of my traffic is actually from "bytespider", which is the LLM training bot from the TikTok company. it wasn't mentioned in this post because although they generate a lot of traffic (in terms of bytes transferred), that's primarily because they also ingest images; their request volume is generally low.

some spiders are more aggressive than others. a long, long time ago, I saw a crawler try to enumerate diaspora*'s numeric post IDs to crawl everything, but cases like this are rare.

in this case, what made me angry was the fact that they were specifically crawling the edit history of the diaspora wiki. that's odd, because search engines generally don't care about old content. it was also odd because the request volume was so high that it caused actual issues. MediaWiki isn't the best performance-wise, and especially the history pages are really, really slow. and if you have a crawler firing multiple requests per second, this is bad - and noteworthy.

I've talked privately to others with affected web properties, and it indeed looks like some of those companies have "generic web crawlers", but also specific ones for certain types of software. MediaWiki is frequently affected, and so are phpBB/smf forums, apparently. those crawlers seem to be way more aggressive than their "generic web" counterparts - which might actually just be a bug, who knows.

a few people here, on mastodon, and in other places, have made "suggestions". I've ignored all of them, and I'll continue to ignore all of them. first, blocking user agent strings or IPs is not a feasible solution, which should be evident to everyone who read my initial post.

I'm also not a huge fan of the idea of feeding them trash content. while there are ways to make that work in a scalable and sustainable fashion, the majority of suggestions I got were along the lines of "use an LLM to generate trash content and feed it to them". this is, sorry for the phrase, quite stupid. I'm not going to engage in a pissing contest with LLM companies about who can waste more electricity and effort. ultimately, all you do by feeding them trash content is make stuff slightly more inconvenient - there are easy ways to detect that and get around it.

for people who post stuff to the internet and who are concerned that their content will be used to train LLMs, I only have one suggestion: use platforms that allow you to distribute content non-publicly, and carefully pick who you share content with. I got a lot of hate a few years ago for categorically rejecting a diaspora feature that would implement a "this post should be visible to every diaspora user, but not search engines" option, and while that post was written before the age of shitty LLMs, the core remains true: if your stuff is publicly on the internet, there's little you can do. the best thing you can do is be politically engaged and push for clear legislative regulation.

for people who host their own stuff, I can also only offer generic advice. set up rate limits - there's a rough sketch of the idea below - although be careful, rate limits can easily hurt real users, which is why the wiki had super relaxed rate limits previously. and the biggest advice: don't host things. you'll always be exposed to some kind of abuse - if it's not LLM training bots, it's some Chinese or Russian botnet trying to DDoS or crawl for vulnerabilities, or some spammer network that wants to post viagra ads on your services.
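(the sketch mentioned above: a minimal per-client sliding-window limiter in Python, just to show the shape of the idea. it's not the setup I actually run - in practice this lives in the web server or proxy layer - and the numbers in it are made up.)

```python
# Minimal sliding-window rate limiter sketch (per client key, e.g. IP).
# Illustration only: real deployments do this in the web server / proxy layer,
# and as noted above, aggressive limits will also hit real users.
import time
from collections import defaultdict, deque


class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits: dict[str, deque[float]] = defaultdict(deque)

    def allow(self, client_key: str) -> bool:
        now = time.monotonic()
        window = self.hits[client_key]
        # Drop timestamps that have fallen out of the window.
        while window and now - window[0] > self.window:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # over the limit: reject, e.g. serve a 429
        window.append(now)
        return True


# made-up threshold: at most 10 requests per 60 seconds per IP
limiter = RateLimiter(max_requests=10, window_seconds=60)
if not limiter.allow("203.0.113.7"):
    print("429 Too Many Requests")
```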
I have deleted one comment in this post because I will not be offering a platform to distribute legal hot takes. If you want legal advice, talk to a lawyer, don't just Google things.


Roger that for the advice: "for people who post stuff to the internet and who are concerned that their content will be used to train LLMs, I only have one suggestion: use platforms that allow you to distribute content non-publicly, and carefully pick who you share content with." And thanks, @Dennis Schubert
Maybe worth a try: Nepenthes

https://www.404media.co/email/7a39d947-4a4a-42bc-bbcf-3379f112c999/