I think there are two angles to look at this. Yes, there’s the attack on the weblog. But there’s also pressure on archive.today, e.g. an FBI investigation [1] and some entity using fictitious CSAM allegations [2].
Jani Patokallio who runs gyrovague.com published a blog post attempting to dox the owner of archive.today.
Jani justifies his doxing as follows: "I found it curious that we know so little about this widely-used service, so I dug into it" [1].
Archive.today, on the other hand, is a charitable archival project offered to the public for free; its operator takes on significant legal liability to keep it running.
It's weird to see people fixate on the DDoS, which is obviously far less nasty than actually attempting to dox someone. The only credible reason for Jani to publish something like this is a desire to cause physical harm to the operator of archive.today.
People were critical of the Banksy piece, but this is much nastier. At least Banksy is a huge business, archive.today does not make money.
Archive.today's attack on https://gyrovague.com is still ongoing, by the way. It started just over two months ago. Some IPs get through normally, but Finnish residential IPs, for example, get stuck on endless captchas. The JS snippet that starts spamming gyrovague appears after solving the first captcha.
I'm not a web developer, but I've picked up some bits of knowledge here and there, mostly from troubleshooting issues I encounter while using websites.
I know there are a number of headers used to control cross-site access to websites, and the linked blog post shows archive.today's denial-of-service script sending random queries to the site's search function. Shouldn't there be a way to prevent those from running when they're requested from within a third-party site?
You can't completely prevent the browser from sending the request; the cross-origin machinery mostly decides whether the requesting site may read the response, not whether the request goes out at all.
However, browsers will first send a preflight request for non-simple requests before sending the actual request. If the DDoS was effective because the search operation was expensive, then the blog could put search behind a non-simple request, or require a valid CSRF token before performing the search.
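To make the CSRF-token idea concrete, here's a minimal sketch in Python (names and the in-memory store are hypothetical, not the blog's actual code): the server hands out a per-session token, and the expensive search only runs if the client echoes back a token it was previously able to read. A cross-site attack page can still send the request, but it can never read the token, so its requests fail cheaply.

```python
import hmac
import secrets

# In-memory session store for illustration only; a real site would use
# signed cookies or a server-side session backend.
_sessions: dict = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate a random per-session token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    _sessions[session_id] = token
    return token

def is_search_allowed(session_id: str, submitted_token: str) -> bool:
    """Reject the request before doing any expensive work unless the
    client echoed back the token it was able to read earlier."""
    expected = _sessions.get(session_id)
    if expected is None:
        return False
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, submitted_token)
```

The point is that the cheap token check runs before the expensive search ever starts, so the cross-site spam costs the server almost nothing.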
> I know there are a number of headers used to control cross-site access to websites
Mostly these headers are designed to prevent reading content; sending a request generally does not require anything.
(As a kind of random tidbit, this is why CSRF tokens are a thing: you can't prevent sending, so websites check whether you were able to read the token in a previous response.)
This is partially historical. The rough rule is: if it was possible to make the request without JavaScript (e.g. via a plain form submission), then it doesn't need any special headers or a preflight.
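That rough rule can be sketched as a predicate. This is a simplification of the WHATWG Fetch spec's definition of a "simple" request; real browsers also consider custom request headers and a few other conditions:

```python
# Per the WHATWG Fetch spec, cross-origin requests that a plain HTML form
# could have made do not trigger a preflight.
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, content_type: str = "") -> bool:
    """Return True if a cross-origin request with this method and
    Content-Type would trigger an OPTIONS preflight in the browser.
    (Simplified: custom headers, which also force a preflight, are
    ignored here.)"""
    if method.upper() not in SIMPLE_METHODS:
        return True
    if content_type and content_type.lower() not in SIMPLE_CONTENT_TYPES:
        return True
    return False
```

So a `POST` with `application/json`, or any `PUT`/`DELETE`, gets a preflight the server can refuse, while a bare `GET` never does; that asymmetry is exactly why the attack script's plain requests sail through.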
One side publishes words, the other DDoSes. One side could just ignore the other and go about their business, the other cannot. One is using force, which naturally leads to resistance and additional attention, the other is not.
Both sides look like they have been bullied in the past and not found their way out of reproducing the pattern yet.
> The blog is still online and only exists as a part of a harassment campaign targeting archive.today
The blog has a lot more posts on random topics. Why do you imply that the owner of the blog is part of a harassment campaign, and that this is the "only" reason for this years-old blog to exist?
There are only two posts about archive.today on the blog, and one of them only exists because archive.today started DDoSing them. I fail to see how you could consider the entire blog to be a "harassment campaign", especially considering that the original blog post isn't even negative, it ends with a compliment towards archive.today's creator.
Okay, there's one filler post I missed. I'm sure it took a lot of time to write the 16739382nd post explaining what the various things on a boarding pass mean.
This is a weird way of saying that you wish gyrovague updated more frequently. You could just say “Big fan of his writing, I’d love it if he posted more” if your only complaint is that there aren’t enough recent blog posts on that website
While in hindsight I too would have preferred the names and details mentioned in the original article to be somewhat redacted, I hardly find any real defamation in it. I suppose the objection is to publishing unproven claims about someone being the target of various foreign law-enforcement agencies and commercial sites.
In the article they even call for donations to archive.today. As far as I can tell, the tone of the post is full of admiration. The funny thing is that, IMHO, the rather childish JavaScript attack lends credibility to the post after all.
In all this, I somehow hope we see a legal resolution to the major global copyright crisis that LLM training has reinforced. (If you want a conspiracy theory: that, I guess, would be an easy monetization path for the archive these days, selling its snapshots.)
A bit of context if you are confused about why a public DNS server is blocking websites: 1.1.1.2 is a malware-blocking DNS server, similar to ad-blocking DNS servers. It is not 1.1.1.1 or 1.0.0.1.
Breach of trust by a site whose unstated primary purpose is bypassing paywalls and ripping off content?
20 years ago during the P2P heyday this was assumed to come with the territory. Play with fire and you could get burned.
If you walk into a seedy brothel in the developing world, your first thought should be "I might get drugged and robbed here" and not what you're going to type in the Yelp review later about their lack of ethics.
Well, if we are going to use this analogy: 20 years ago virus scanners also flagged malicious stuff from P2P as viruses, and people still thought putting malicious content on P2P was a shitty thing to do (even if it was somewhat expected).
Nobody was shedding any tears 20 years ago for the virus makers who had their viruses flagged by virus scanners.
I always thought that mainstream media sites with paywalls were pretty far down the tier list of websites, though. Not sure this analogy lands unless irony was the goal.
There are many things people disapprove of that others will unilaterally visit upon them anyway. This is the world of 2026. It's not a normative claim but a descriptive one about the reality we live in today.
Cloudflare DNS has gone back and forth on whether it wants to resolve them since 2019. It has taken resolution away and restored it (intentionally? by mistake?) at least four times.
The C&C/botnet designation would seem to be new, though.
As far as I am aware, all previous issues with archive.today and Cloudflare were on account of archive.today taking measures to stop Cloudflare's DNS from correctly resolving their domains, not the other way around.
The current situation is due to Cloudflare flagging archive.today's domains for malicious activity. Cloudflare still resolves the domains on its normal 1.1.1.1 DNS, but 1.1.1.2 ("No Malware") now refuses. Exactly why they decided to flag the domains now, over a month after the denial-of-service accusations came out, is unclear; maybe someone here has more information.
Sounds a bit like when "Finland geoblocked archive.today". In actuality, no authorities or ISPs geoblocked the site in Finland; rather, the website owner blocked all Finnish IPs after some undisclosed dispute with Finnish border agents. When something bad happens, people seem a bit too willing to give archive.today the benefit of the doubt.
Have they? The thing I remember previously was archive.is, and it wasn’t a block, archive.is was serving intentionally wrong responses to queries from cloudflare’s resolvers.
This is notably not a change to how 1.1.1.1 works, it’s specifically their filtered resolution product.
Intentionally, I believe? IIRC, archive.today has explicitly blocked Cloudflare from resolving it at various times over the years, because Cloudflare DNS withholds requesting-user PII (the client IP address) in DNS lookups.
Looking forward to when Google Safe Browsing adds their domains as unsafe, as that ripples to Chrome and Firefox users.
Why? It’s accurate and if the owner has chosen to do this for months now, why should we ever trust they won’t again? Nobody should ever use that site and every optional filter should block them.
There's probably a worthwhile discussion to be had about what it takes for a site in this situation to be removed from blocklists. An apology? Surrender to authorities? Halting the malicious activity for a certain period of time?
Regardless, another user reports the attack is still ongoing[1], so this isn't a discussion that's going to happen about archive.today anytime soon.
I suppose "evidence that the site's leadership has permanently changed" would convince me. Whoever decided to ship the code that makes visitors DDoS someone should never be running a website again.
1.1.1.1 is simply a free DNS, 1.1.1.2 blocks malware, and 1.1.1.3 blocks both malware and adult content. It's a service that does exactly what it's supposed to do.
If I specifically choose a DNS server that promises to not resolve sites that will use my computer in a botnet, then it is that DNS resolver’s place to do that.
Because once the problematic content is removed it should no longer be blocked.
>It's accurate
It is neither a C&C server for a botnet, nor any other server related to a botnet. I would not call it accurate.
>Nobody should ever use that site
It has a good reputation for archiving sites, has stood the test of time, and doesn't censor pages the way archive.org does, letting you actually see the history of news articles instead of having them deleted, as archive.org does on occasion.
The site started doctoring archived versions as part of the petty feud. That is, what was supposed to be a historical record, suddenly had content manipulated so as to feed into this fight[0]. There is no redemption. You want to be an archive, you keep it sacrosanct. Put an obvious hosting-site banner overlay if you must, but manipulating the archive is a red-line that was crossed.
...On 20 February 2026, English Wikipedia banned links to archive.today, citing the DDoS attack and evidence that archived content was tampered with to insert Patokallio's name.[19] The decision was made despite concerns over maintaining content verifiability[19] while removing and replacing the second-largest archiving service used across the Wikimedia Foundation's projects.[20] The Wikimedia Foundation had stated its readiness to take action regardless of the community verdict.[19][20]
While I disagree with that action I still trust the site as a reliable source. Redemption is possible. Maybe not for Wikipedia, but I don't care about that site and consider it rotten.
That line of argument is rather misleading, as some kind of content manipulation is inherent to the service a paywall-bypassing archive has to provide. It needs to conceal the accounts it uses to access these websites, and their names and traces are often present on the pages it archives.
Did AT go beyond that and manipulate any relevant part? That's rather difficult to say now. AT is obviously tampering with evidence, but so is Wikipedia: its admins have heavily redacted their archived Talk pages out of fear that one of the pseudonyms might be an actual person, so even what exactly WP accuses AT of is not entirely clear.
If archive.today was known to be run by God himself, I would still describe what he is doing as a DDoS and breaching the trust of its users by abusing their browser and bandwidth to conduct his battles.
Are Hacker News users part of a botnet since they link to sites that when people click they go down due to all of the traffic? Am I part of a botnet if I have HN open as it means HN can execute javascript? I think it's stretching the definition.
It's not just problematic content, it's criminal behavior. And the site has a bad reputation for archival, given that the owner altered the content of archived articles.
[1]: https://arstechnica.com/tech-policy/2025/11/fbi-subpoena-tri...
[2]: https://adguard-dns.io/en/blog/archive-today-adguard-dns-blo...