The majority of traffic on the web is from bots. For the most part, these bots are used to discover new content. These are RSS feed readers, search engines crawling your content, or, nowadays, AI bots.
I doubt there are many bots scraping Wikipedia, considering it has offered compressed downloads of the entire site for years. There's even a page showing you how to do it. Anyone who wanted to download Wikipedia would almost certainly have stumbled across that already and saved themselves the time and effort.
On the other hand, there are lots of bots scraping Wikipedia even though it’s easy to download the entire website as a single archive.
So they’re not really that smart…
https://en.wikipedia.org/wiki/Wikipedia:Database_download
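The dumps themselves are served from dumps.wikimedia.org. As a minimal sketch, fetching the latest English-language articles dump could look something like the following; the exact filename (enwiki-latest-pages-articles.xml.bz2) is an assumption about the current dump layout, so check the page above for the authoritative list of files and mirrors.

    import shutil
    import urllib.request

    # Assumed dump location; see Wikipedia:Database_download for the
    # current file names and recommended mirrors.
    DUMP_URL = (
        "https://dumps.wikimedia.org/enwiki/latest/"
        "enwiki-latest-pages-articles.xml.bz2"
    )

    def download_dump(dest: str = "enwiki-latest-pages-articles.xml.bz2") -> None:
        """Stream the compressed dump to disk instead of scraping page by page."""
        with urllib.request.urlopen(DUMP_URL) as response, open(dest, "wb") as out:
            # Copy in chunks; the file is tens of gigabytes, so don't load it
            # into memory all at once.
            shutil.copyfileobj(response, out)

    if __name__ == "__main__":
        download_dump()

One archive fetch like this replaces millions of individual page requests, which is the whole point the comment above is making.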