We need archives built on decentralized storage. Don't get me wrong, I really like and support the work Internet Archive is doing, but preserving history is too important to entrust solely to singular entities, which means singular points of failure.
You could say the same thing about perpetual motion. Being realistic about why past efforts have failed is key to doing better in the future: for example, people won’t mirror content which could get them in trouble and most people want to feel some kind of benefit or thanks. People should be thinking about how to change dynamics like those rather than burning out volunteers trying more ideas which don’t change the underlying game.
There are certainly research questions and cost questions and questions of practicality and subsetting and whatnot, addressed by some ideas and not by others.
What there isn't is a currently maintained and advertised client and plan. That I can find. Clunky or not, incomplete or not.
There are other systems that have a rough plan for duplication and local copy and backup. You can easily contribute to them, run them, or make local copies. But not IA. (I mean you can try and cook up your own duplication method. And you can use a personal solution to mirror locally everything you visit and such.) No duplication or backup client or plan. No sister mirrored institution that you might fund. Nothing.
I want it to protect all sorts of random obscure documents, mostly kind of crappy, that I can't predict in advance, so I can pursue my hobby of answering random obscure questions. For instance:
* What is a "bird famine", and did one happen in 1880?
* Did any astrologer ever claim that the constellations "remember" the areas of the sky, and hence zodiac signs, that they belonged to in ancient times before precession shifted them around?
* Who first said "psychology is pulling habits out of rats", and in what context? (That one's on Wikiquote now, but only because I put it there after research on IA.)
Or consider the recently rediscovered Bram Stoker short story. That was found in an actual library, but only because the library kept copies of old Irish newspapers instead of lining cupboards with them.
The necessary documents to answer highly specific questions are very boring, and nobody has any reason to like them.
You could let users choose what to mirror, and one of those choices could be a big bucket of all the least available stuff, for pure preservationists who don't want to focus on particular segments of the data.
Sort of like the bittorrent algorithm that favors retrieving and sharing the least-available chunks if you haven't assigned any priority to certain parts.
Since the IA has a collection of emulators (some of them running online*), and old ROMs and floppies and such, it could probably help with that one too.
* Strictly speaking, running in-browser, but that sounded like "Bowser" so I wrote online instead.
Aren't torrents terrible at handling updates in general? If you want to make a change to the data, or even just add or remove data, you have to create a new torrent and somehow get people to update their torrent and data as well.
It doesn't really, you can host a server off a raw IP.
Downloading from example.com is just peer to peer with someone big. There's lots of hosting providers and DNS providers that are happy to host illegal-in-some-places content.
Torrents have a bad reputation due to malicious executables; I have never met someone who genuinely saw piracy as stealing, only as dangerous. In fact, "stealing" as a definition cannot cover digital piracy: stealing is to take something away, and to take is to possess something physically. The correct term is copying, because you are duplicating files. And that's not even getting into the cultural protection piracy affords in today's DRM- and license-filled world.
My understanding is that that court case did not show that operating a torrent tracker is illegal, but specifically operating a (any) service with the explicit intent of violating copyright... huge difference IMO.
To me that's not even related to it being a torrent tracker, just that they were "aiding and abetting" copyright infringement.
Ok. But what is the case law on hosting illegal content? Sure, you may operate a tracker, but if your client is distributing child porn, in my view, you bear responsibility.
Trackers generally do not host any content, just hashcodes and (sometimes) metadata descriptions of content.
If "your" (ie let's say _you_ TZubiri) client is distributing child pornography content because you have a partially downloaded CP file then that's on _you_ and not on the tracker.
The "tracker" has unique hashcode signatures of tens of millions of torrents - it literally just puts clients (such as the one that you might be running yourself on your machine in the example above) in touch with other clients who are "just asking" about the same unique hashcode signature.
Some tracker-affiliated websites (e.g. TPB) might host searchable indexes of metadata associated with specific torrents (and still not host the torrents themselves), but "pure" trackers can literally operate with zero knowledge of any content - just arrange handshakes between clients looking for matching hashes - whether that's UbuntuLatest or DonkeyNotKong.
We agree in that if my client distributes illegal content, I am responsible, at least in part.
On the other hand I also believe that a tracker that hosts hashes of illegal content, provides search facilities for and facilitates their download, is responsible, in a big way. That's my personal opinion and I think it's backed in cases like the pirate bay and sci hub.
That 0 knowledge tracker is interesting, my first reaction is that it's going to end up in very nasty places like Tor, onion, etc..
A tracker (bit of central software that handles 100+ thousand connections/second) is not a "torrent site" such as TPB, EZTV, etc.
A tracker handshakes torrent clients and introduces peers to each other, it has no idea nor needs an idea that "SomeName 1080p DSPN" maps to D23F5C5AAE3D5C361476108C97557F200327718A
All it needs is to store IP addresses that are interested in that hash and to pass handfuls of interested IP addresses to other interested parties (and some other bookkeeping).
From an actual tracker's PoV the content is irrelevant and there's no means of telling one thing from another other than size - it's how trackers have operated for 20+ years now.
Trackers can hand out .torrent files if asked (bencoded dictionaries that describe filenames, sizes, checksums, and directory structures of a torrent's contents) but they don't have to; mostly they hand out peer lists of other clients .. peers can also answer requests for .torrent files.
A .torrent file isn't enough to determine illegal content.
Pornography can be contained in files labelled "BeautifulSunset.mkv" and Rick Astley parody videos can frequently be found in files labelled "DirtyFilthyRepublicanFootTappingNudeAfrica.avi".
Given that, it's not clear how trackers could effectively filter by content that never actually traverses their servers.
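For the curious, the bencoded dictionaries mentioned above are a simple format: integers as `i<digits>e`, byte strings as `<length>:<bytes>`, lists as `l…e`, dicts as `d…e`. A minimal, non-validating decoder sketch:

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_index)."""
    c = data[i:i+1]
    if c == b"i":                          # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                          # list: l<items>e
        i, items = i + 1, []
        while data[i:i+1] != b"e":
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b"d":                          # dict: d<key><value>...e
        i, d = i + 1, {}
        while data[i:i+1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            d[key.decode()] = value
        return d, i + 1
    colon = data.index(b":", i)            # byte string: <length>:<bytes>
    n = int(data[i:colon])
    return data[colon+1:colon+1+n], colon + 1 + n

meta, _ = bdecode(b"d6:lengthi1024e4:name8:test.txte")
# meta == {"length": 1024, "name": b"test.txt"}
```

Note there's nothing in the structure itself that reveals what the described file actually contains, which is the point being made above.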
Oh ok, it seems to be a misconception of mine then.
Mathematically a tracker would offer a function that given a hash, it returns you a list of peers with that file.
While a "torrent site" like TPB or SH, would offer a search mechanism, whereby they would host an index, content hashes and english descriptors, along with a search engine.
A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?
Is that right?
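The two roles as I understand them can be sketched in a few lines of Python (the names, hash, and addresses are purely illustrative, not any real site's API):

```python
# The "torrent site": maps human-readable descriptions to content hashes.
site_index = {
    "ubuntu 24.04 iso": "d23f5c5aae3d5c361476108c97557f200327718a",
}

# The "tracker": maps content hashes to swarms of (ip, port) peers.
# It never sees descriptions or file contents.
tracker: dict[str, set[tuple[str, int]]] = {}

def announce(info_hash: str, peer: tuple[str, int]) -> list[tuple[str, int]]:
    """A peer announces itself and receives the other peers for that hash."""
    swarm = tracker.setdefault(info_hash, set())
    others = [p for p in swarm if p != peer]
    swarm.add(peer)
    return others

# A user searches the site to get the hash, then announces it to the tracker.
h = site_index["ubuntu 24.04 iso"]
announce(h, ("192.0.2.1", 6881))             # first peer: gets an empty list
peers = announce(h, ("198.51.100.7", 51413))  # second peer learns of the first
```

All actual file transfer then happens directly between the peers in the returned list.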
In any case, each party in the transaction shares liability. If we were analyzing a drug case or a people trafficking case, each distributor, wholesaler or retailer would bear liability and face criminal charges. A legal defense of the type "I just connected buyers with sellers, I never exchanged the drug" would not have much chance of succeeding, although it is a common method to obstruct justice by complicating evidence gathering. (One member collects the money, the other gives the drugs.)
> A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?
> Is that right?
More or less.
> In any case, each party in the transaction shares liability.
That's exactly right Bob. Just as a telephone exchange shares liability for connecting drug sellers to drug buyers when given a phone number.
Clearly the telephone exchange should know by the number that the parties intend to discuss sharing child pornography rather than public access to free to air documentaries.
How do you propose that a telephone exchange vet phone numbers to ensure drugs are not discussed?
Bear in mind that in the case of a tracker the 'call' is NOT routed through the exchange.
With a proper telephone exchange the call data (voices) pass through the exchange equipment; with a tracker no actual file content passes through the tracker's hardware.
The tracker, given a number, tells interested parties about each other .. they then talk directly to each other; be it about The Sky at Night -s2024e07- 2024-10-07 Question Time or about Debbie Does Donkeys.
Also keep in mind that trackers juggle a vast volume of connections of which a very small amount would be (say) child abuse related.
I don't think TPB ever hosted any copyrighted content, even indirectly by its users. Torrent peers do not ever send any file contents through the tracker.
This kind of talk is simply modern politik-speak. I can't stand it and the people who fall for their deception. Stretch the truth to disarm the constituents.
In what way? Torrents are used all over for content delivery. Battle.net uses a proprietary version of BitTorrent. It’s now owned by Microsoft. There’s many more legitimate uses as commented by many others.
Criminals using tools does not make the tools criminal.
That precedent was and still is legally used to federally regulate marijuana more harshly than fentanyl, a precedent I strongly disagree with, so you'll have to forgive me for believing that the degree to which something causes harm matters more than the amount of misuse.
This is a brilliant system relying on a randomised consensus protocol. I wanted to do my info sec dissertation on it, but its security model is extremely well thought out. There wasn't anything I felt I could add to it.
I wish IPFS wasn't so wasteful with respect to storage. I tried pinning a 200 MB PDF on IPFS, and doing so ended up taking almost a gigabyte of disk space altogether. It's also relatively slow. However, its implementation of global deduplication is super cool: it means that I can host 5 pages and you can host 50, and any overlap between them means we can both help one another keep them available even if we don't know about one another beforehand.
For a large-scale archival project, it might not be ideal. Maybe something based on erasure coding would be better. Do you know how LOCKSS compares?
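That chunk-level deduplication can be modeled in a toy way. I'm using fixed 4-byte chunks purely for illustration; real IPFS uses much larger (and optionally content-defined) chunks:

```python
import hashlib

store: dict[str, bytes] = {}   # shared chunk store: hash -> chunk bytes

def pin(data: bytes, size: int = 4) -> list[str]:
    """Split content into fixed-size chunks and store each under its hash.

    Chunks shared between different files land on the same key, so they
    are stored (and could be hosted) only once.
    """
    refs = []
    for i in range(0, len(data), size):
        piece = data[i:i+size]
        ref = hashlib.sha256(piece).hexdigest()
        store[ref] = piece
        refs.append(ref)
    return refs

a = pin(b"hello world!")   # chunks: "hell", "o wo", "rld!"
b = pin(b"hello there!")   # chunks: "hell", "o th", "ere!"
# Both files share the "hell" chunk, so only 5 unique chunks are stored,
# and two hosts pinning either file help keep the shared chunk alive.
```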
To make the web distributed-archive-friendly I think we need to start referencing things by hash and not by a path which some server has implied it will serve consistently but which actually shows you different data at different times for a million different reasons.
If different data always gets a different reference, it's easy to know if you have enough backups of it. If the same name gets you a pile of snapshots taken under different conditions, it's hard to be sure which of those are the thing that we'd want to back up for that particular name.
(this doc is 5-6 years old though, and I'm not sure what may have changed since then)
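The referencing-by-hash idea is trivial to sketch; the `sha256:` prefix is just an illustrative naming convention, not any particular standard:

```python
import hashlib

def ref(content: bytes) -> str:
    """A content address: the same bytes always yield the same reference,
    and any change to the bytes yields a different one."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

v1 = ref(b"<html>original page</html>")
v2 = ref(b"<html>silently edited page</html>")
```

With references like these, "do we have enough backups of X?" becomes a mechanical question: count the holders of that exact hash, rather than guessing which of many snapshots under one URL is the real thing.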
In my own (toy-scale) IPFS experiments a couple years ago it was rather usable, but the software was utterly insane for operators and users, and if I were IA I would only consider it if I budgeted for a from-scratch rewrite (of the stuff in use). Nearly uncontrollable and unintrospectable, with high resource use for no apparent reason.
IPFS has shown that the protocol is fundamentally broken at the level of growth they want to achieve, and it is already extremely slow as it is. It often takes several minutes to locate a single file.
The beauty is that IA could offer their own distribution of IPFS that uses their own DHT for example, and they could allow only public read access to it. This would solve the slow part of finding a file, for IA specifically. Then the actual transfers tend to be pretty quick with IPFS.
What's the point of using IPFS then? Others can still spread the file elsewhere and verify it's the correct one, by using the exact same ID of the file, although on two different networks. The beauty of content-addressing I guess.
That isn’t solving the problem, it’s just giving them more of it to work on. IA has enough material that I’d be surprised if they didn’t hit IPFS’s design limits on their own, and they’d likely need to change the design in ways which would be hard to get upstream.
There was a startup called Space Monkey that sold NAS drives where you got a portion of the space and the rest was used for copies of other people’s content (encrypted). The idea was you could lose your device, plug in a new one and restore from the cloud. They ended up folding before any of their resilience claims could be tested (at least by me).
Would people be willing to buy an IA box that hosted a shard of random content along with the things they wanted themselves?
Does anyone remember wua.la? It worked similarly, in that you offered local disk space in exchange for cloud storage. It was later bought by LaCie and killed off shortly after.
I designed a system where you could say "donate this spare 2 TB of my disk space to the Internet Archive" and the IA would push 2 TB of data to you. This system also has the property that it can be reconstructed if the IA (or whatever provider) goes away.
Unfortunately, when I talked to a few archival teams (including the IA) about whether they'd be interested in using it, I either got no response or a negative one.
Is anyone using ArchiveBox regularly? It's a self-hosted archiving solution. Not the ambitious decentralized system I think this comment is thinking of but a practical way for someone to run an archive for themselves. https://archivebox.io/
I am self-hosting ArchiveBox through yunohost, for the odd blog article I come across and like.
Not a heavy user per se, but it's doing its thing reliably.
I'd never heard of it, but their responses to questions and comments in that thread were really, really good (and I now have "install and configure archivebox on the media server" on my upcoming weekend projects list).
The legal side is a big issue, true. The simplest and best workaround that I'm aware of is how the Arweave network handles it. They leave it up to the individual what parts of the data they want to host, but they're financially incentivized to take on rare data that others aren't hosting, because the rarer it is the more they get rewarded. Since it's decentralized and globally distributed, if something is risky to host in one jurisdiction, people in another can take that job and vice versa. The data also can not be altered after it's uploaded, and that's verifiable through hashes and sampling. Main downside in its current form is that decentralized storage isn't as fast as having central servers. And the experience can vary of course, depending on the host you connect to.
As for technical attacks, I'm not an expert but I'd assume it's more difficult for bad actors to bring down decentralized networks. Has the BitTorrent network ever gone offline because it was hacked for example? That seems like it would be extremely hard to do, not even the movie industry managed to take them down.
> decentralized storage isn't as fast as having central servers.
With the 30-second "time to first byte" speed we all know and love from IA, I'm pretty sure it'd only get faster when you're the only person accessing an obscure document on a random person's shoebox in Korea as compared to trying to fetch it from a centralised server that has a few thousand other clients to attend to simultaneously
> decentralized storage isn't as fast as having central servers.
Depending on scale that’s not necessarily true. I find even today there are many services that cannot keep up with my residential fiber connection (3Gbps symmetrical), whereas torrents frequently can. IA in particular is notoriously slow when downloading from their servers, and even taking into account DHT time torrents can be much faster.
Now if all of their PBs of data were cached in a CDN, yeah that’s probably faster than any decentralized solution. But that will take a heck of a lot more money to maintain than I think is possible for IA.
I collect, archive, and host data. Haven't gotten any threats or attacks. Not one. The average r/selfhosted user hiding their personal OwnCloud behind the DDoS mafia seems more afraid than one needs to be, even for hosting all sorts of things publicly. I guess this fearmongering comes from tech news about breaches and DDoS attacks on organisations, similar to how regular news warps your worldview regardless of how things are actually going in the world or how they personally affect you.
It's not a problem until it suddenly is, and by the time it becomes a problem it's too late. It's not fearmongering, it's risk management: the laws are draconian and fail the fundamental basis for a "rule of law"; we have a "rule by law".
This has really shown that to be true. I am stuck in a situation right now where I have some lost media I want to upload, but they have been down for over a week. I plan to create a torrent in the meantime, but that means relying on my personal network connection for the vast majority of downloads up front. I looked into Cloudflare R2; not terrible, but not free either.
I was looking into using R2 as a web seed for the torrent but I don't _really_ want to spend much to upload content that is going to get "stolen" and reuploaded by content farms anyway you know?
Why not subscribe to a seedbox? They’re about $5/2TB/mo. It protects your IP, you can buy for only the month, and since seedboxes are hosted in DMCA-resistant data centers you can download riskier torrents lightning fast, meaning you’re not just spending money for others, you can get something out of it too.
I’ve only used r/Seedboxes on Reddit, and that’s yet to fail me. The specific one I mentioned is EvoSeedBox’s $5/mo tier with 130GB HDD + 2TB bandwidth which is all I’ve needed so far.
You say this as if the IA is not already deeply invested in the DWeb movement. If you go to a DWeb event in the Bay Area, there is a good chance it will be held at the IA.
The internet archive shepherded the early https://getdweb.net/ community, and works with groups like IPFS, so they're well aware and offering operational support to decentralized storage projects. This has been going on since at least 2016, when I was involved in some projects archiving environmental data during the Trump transition.
There's no real financial incentive for people to archive the data as a singular entity so even less for a distributed collection. Also it's probably easier to fund a single entity sufficiently so they can have security/code audits than a bunch of entities all trying to work together.
Yes, it's a good point. Though they could take that money and reward people for hosting the data as well, couldn't they? They don't have to be in charge of hosting.
Yes, they could, that's not much different than a single company distributing the archive to multiple storage centers though. My original comment was about it being more cost effective for a single company to do that than coordinating with a bunch of disjoint entities.
Our digital memory shouldn't be in the hands of a small number of organizations in my view. You're right about cost effectiveness. There are pros and cons to both but it's not just external threats that have to be considered.
History has always gotten rewritten. If you have a giant library, it's easier for bad actors to gain influence and alter certain books, or remove them. This isn't just theoretical: under external pressure, IA has already removed sites from its archive for copyright and political reasons.
There are also threats that are generally not even considered because they happen rarely, but when they do they're devastating. The Library of Alexandria was burned by Julius Caesar during a war. Likewise, if all your servers are in one country, that's a geographic risk: they can get destroyed in the event of a war or such. No one expects this to happen today in the US, but archives should be robust long term, for decades, ideally even centuries.
>Our digital memory shouldn't be in the hands of a small number of organizations in my view.
I would wager at least 95% of archived "digital memory" is just absolute garbage, from SEO spam to small websites holding no actual value.
The true digital memory of the world is almost entirely behind the walls of reddit, twitter, facebook, and very few other sites. The internet landscape has changed massively from the 90s and 2000s.
Yea so, who pays for the decentralized storage long term? What happens when someone storing decentralized data decides to exit? Will data be copied to multiple places? Who is going to pay for doubling or tripling (or more) the storage costs for backups?
Centralized entities emerge to absorb costs because nobody else can do it as efficiently alone.
At the moment, IA stores everything, and I imagine that most people are picturing a scenario where the decentralized data is in addition to IA's current servers. At least, that's the easiest bootstrapping path.
>What happens when someone storing decentralized data decides to exit?
They exit, and they no longer store decentralized data. At the very least, IA would still have its copies, and that data can be spread to other decentralized nodes once it has been determined (through timeouts, etc.) that the person has exited.
> Will data be copied to multiple places[...]?
Ideally, yes. It is fairly trivial to determine the reliability of each member (uptime + hash checks), and reliable members (a few nines of uptime and hash matches) can be trusted to store data with fewer copies while unreliable members can store data with more copies. Could also balance that idea with data that's in higher demand, by storing hot data lots of times on less reliable members while storing cold data on more reliable members.
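A toy version of that placement policy, with made-up weights purely for illustration:

```python
def replica_count(reliability: float, demand: float, base: int = 3) -> int:
    """How many copies of a chunk to keep, given how reliable the member
    holding it is (uptime + hash checks) and how hot the data is.

    Both inputs are in [0, 1]; the weights here are invented for the sketch.
    """
    extra_flaky = round(2 * (1 - reliability))  # flaky members: more copies
    extra_hot = round(2 * demand)               # hot data: more copies
    return max(2, base + extra_flaky + extra_hot)

replica_count(0.999, 0.1)   # reliable member, cold data: few copies
replica_count(0.6, 0.9)     # flaky member, hot data: many copies
```

The real policy would need to be tuned against observed churn, but the shape of the trade-off is just this: replication factor as a function of member reliability and data demand.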
> who pays for the decentralized storage long term? [...] who is going to pay for doubling, tripling or more the storage costs for backups?
This is unanswered for pretty much any decentralized storage project, and is probably the only important question left. There are people who would likely contribute to some degree without a financial incentive, but ideally there would be some sort of reward. This in theory could be a good use for crypto, but I'd be concerned about the possible perverse incentives and the general disdain the average person has for crypto these days. Funding in general could come from donations received by IA, whatever excess they have beyond their operating costs and reserve requirements. That would likely be nowhere near enough to make something like this "financially viable" (i.e. profitable), but it might be enough to convince people who were on the fence to chip in a few hundred GB and some bandwidth. This is an open question though, and probably the main reason no decentralized storage project has really taken off.
> "It's dispiriting to see that even after being made aware of the breach weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets," reads an email from the threat actor.
With everything that’s going on, it’s highly suspicious that this is happening right after they upset some very rich rent seekers.
> With everything that’s going on, it’s highly suspicious that this is happening right after they upset some very rich rent seekers.
Absolutely moronic and unbased implication. The “rent-seekers” won their case and have zero interest in being implicated in dumb palace-intrigue style hacking. I mean, fuck those guys, but to bring up allegations like that is big stupid.
> Absolutely moronic and unbased implication. The "rent-seekers" won their case and have zero interest in being implicated in dumb palace-intrigue style hacking. I mean, fuck those guys, but to bring up allegations like that is big stupid.
That makes no sense.
The fact that they won their case gives even greater cause in ensuring that what they want goes through. Doesn't mean they have to be classy about it, or that Internet-based means of sabotage are impossible implications (given that the IA literally is about putting things up on the Internet that some want to be taken down).
That's like saying movies aren't imaginary either because there's blu-rays. Even if we take that point at face value though, the vast majority of money is imaginary, only existing on ledgers. When the fed "prints money", it's just adjusting an entry on a database somewhere.
anything with tons of traffic going to it is a target. it has nothing to do with what the entity does, more with what potential reach it has. criminal behaviour is what it is. people pulling loads of visitors need to properly secure their shit, to prevent their customers becoming their victims.
Seems like the actor did it only for the street cred, and the second breach is only a reminder that IA didn't properly fix it after the first breach.
A different framing is: be grateful that it's these types of people breaching IA and being vocal about it & asking IA to fix their systems. Others might just nuke them, or subtly alter content, or do whatever else bad thing you can think of.
They're providing a public service by pointing out that a massive organization controlling a lot of PII doesn't care about security at all.
Not defending attacker, because I see IA as common good. That said one of the messages from this particular instance reads almost as if they were trying to help by pointing out issues that IA clearly missed:
"Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine your data is now in the hands of some random guy. If not me, it'd be someone else."
I am starting to wonder if the chorus of 'maybe one org should not be responsible for all this; it is genuinely too important' has a point.
I imagine they're referring to the fact that the leadership showed extremely bad judgement in deciding to pick a battle with the major publishing companies that everyone knew they would lose before it even began [0].
I don't think that justifies blaming the victim here, and from what I can see the attacker doesn't seem to be motivated by anything other than funsies, but I absolutely lost a lot of faith in their leadership when they pulled the NEL nonsense. The IA is too valuable for them to act like a young activist org—there's too much for us to lose at this point. They need to hold the ground they've won and leave the activism to other organizations.
"Us" means all of humankind for hopefully many generations to come. It's not about my personal entitlement, it's that the IA serves a vital role for humanity (one which they fought hard to make permissible).
I don't know what their funding model looks like but if they have some cash I'd say hiring a security team would be on top of the list of things to invest in.
Does anyone know who is targeting the Internet Archive, and why? I get the impression the attacks are too sophisticated for it to just be vandal punks.
I get the impression it's just pissing into the salt shaker. Internet Archive is obviously held together by duct tape (okay, okay, strong and durable duct tape) and personal willpower. Moreover, its main mission is spreading data, not hiding it from others to generate revenue.
Those who don't get the salt shaker bit, here's the original of the ancient wisdom:
> I get the impression the attacks are too sophisticated for it to just be vandal punks.
What gives that impression? Everything I've seen about the attacker's messaging says "vandal punk(s)" to me, and nothing in what I've seen of the IA's systems screams Fort Knox. It wouldn't surprise me if they actually had a pretty lax approach to security on the assumption that there's very little reason to target them.
The group that claimed to be responsible for the first hack was said to be Russian-based, anti-U.S., pro-Palestine, and their reasoning for the attack was because of IA's violation of copyright....
I think you should draw your own more informed conclusions, but it smells a lot like feds to me.
He's pointing out that it doesn't make any sense: why would someone pro-Russian and anti-US care about violating western IP? In reality, it's the opposite: Russia is happy to help with that because they think it helps weaken the west.
It strikes me as reasonable to assume (or at least strongly bet on; I'm not sure of the right phrase for it) something like a mercenary-type operation on behalf of some larger old-media company.
There's just too much "means, motive and opportunity" there.
I'd like to imagine a world where every lawyer, when their case is helped by a Wayback Machine snapshot of something, flips a few bucks to IA. They could afford a world-class admin team in no time flat.
That's a terrible solution. The Wayback Machine takes down their snapshots at the request of whoever controls the domain. That's not archival.
If the state of a webpage in the past matters to you, you need a record that won't cease to exist when your opposition asks it to. This is the concept behind perma.cc.
No, they don’t delete the archived content. When the domain’s robots.txt file bans spidering, then the Wayback Machine _hides_ the content archived at that domain. It is still stored and maintained, but it isn’t distributed via the website. The content will be unhidden if the robots.txt file stops banning spiders, or if an appropriate request is made.
In some cases they do appear to delete, on request.
edit: "Other types of removal requests may also be sent to info@archive.org. Please provide as clear an explanation as possible as to what you are requesting be removed for us to better understand your reason for making the request.", https://help.archive.org/help/how-do-i-request-to-remove-som...
Don't be asinine; of course there are exceptions. But the general rule is that nothing is deleted. Even if you have a fancy expensive lawyer send them a C&D letter asking them to delete something or else, they’ll just hide it. You can’t tell the difference from the outside. In fact there are monitoring alarms that are triggered if something _is_ deleted.
Claiming to have deleted something while just having hidden it from public view… that's basically begging content owners to sue and very easily win damages.
Copyright only regulates the distribution of copies of copyrighted works. Possessing copies and distributing copies to other people are two different things.
If you were photocopying a textbook and giving it to your classmates, the publisher could have their lawyer send you a Cease and Desist letter telling you to stop (or else). But if they told you to burn your copy of the textbook then they would be overreaching, and everyone would laugh at them when you took that story to the papers.
Legal reasoning from made‐up examples is generally a bad idea, but I think you can safely reason from that one.
I’m not privy to the actual communications in these cases, but I suspect that instead of replying back with “we deleted the content from the Archive”, they instead say something anodyne like “the content is no longer available via the Wayback Machine”. Smart lawyers will notice the difference, but then a smart lawyer wouldn’t have expected anything else.
What’s the reasoning behind hiding content upon request? Doesn’t that defeat the purpose of archival?
My intuition would say there are 3 cases when content ceases to be available at the original site:
- The host becomes unable to host the content for some reason (bankruptcy, death, etc.) in which case I assume the archive persists.
- The host is externally required to remove the content (copyright, etc.) in which case I assume IA would face the same external pressure? But I’m not sure on that.
- The host/owner has a change of heart about publishing the content. This borders more on IA acting as reputation management on the part of the original host/owner. Personally I think this is hardest to defend but also probably the least common case. In this case I’d think it’s most often to hide something the original host doesn’t want the public finding out later, but that also seems to make it more valuable to be publicly available in the archive. Plus, from a historian/journalist perspective, it’s valuable to be able to track how things change over time, and hiding this from the public prevents that. Though to be honest I’m kind of in two minds here because on the other hand I’m generally of the opinion that people can grow and change, and we shouldn’t hold people to account for opinions they published a decade ago, for example. I’m also generally in favor of the right to be forgotten.
It’s all about copyright. Copyright law in the US gives a monopoly on distribution of copies of things (hand‐waving because the definitions are hard, basically artistic works) to their author. Of course authors usually delegate that right to their publisher for practical and financial reasons. There are some fair use exceptions, but this basically makes it illegal for anyone else to make and distribute copies of the author’s work. Again, hand‐waving because I don't want to have to write a dissertation.
When IA shows you what a website looked like in the past, they are reproducing a copyrighted work and distributing it to you. In some cases, perhaps many, this is fair use. IA cannot really know ahead of time which viewers would be exercising their fair use rights and which would not. Instead, IA just makes everything available without trying to guess whether the access would fall under fair use or not. That means that many times, possibly most of the time, IA is technically breaking the law by illegally distributing copies of copyrighted works.
But _owning_ a copy of a copyrighted work is never prohibited by copyright. It doesn’t matter how you got the copy either.
Therefore, pretty much any time someone asks for something to be hidden or removed on copyright grounds, they go ahead and hide it. They don’t bother to delete it though, because copyright doesn’t require them to. If a copyright holder asks for it to be deleted then they are overreaching, and should know that any sane person would object. But as far as I am aware IA doesn’t actually bother to object in writing; they just hide the content and move on.
This means that researchers can visit the archive in person and request permission to see those copies. For example if you are studying the history of artistic techniques in video games using emulated software on IA, you might eventually notice that all the games from one major publisher are missing (except iirc the original Donkey Kong, because they don’t actually own the copyright on that one). You could then journey to the Archive in person to see the missing material and fill in the gaps in your history. Or you could just ignore them entirely out of spite. This is no different than viewing rare books held by any library, or viewing unexhibited artifacts held by a museum, etc.
Ooo, excellent. Yes, hiding items is imperfect, but I understood that it was legally required or something. (IANAL and IDFK, TBH) I wonder how perma.cc gets around that.
> The article concludes that Perma.cc's archival use is neither firmly grounded in existing fair use nor library exemptions; that Perma.cc, its "registrar" library, institutional affiliates, and its contributors have some (at least theoretical) exposure to risk
It seems that the article is about copyright, but of course there are several other reasons that might justify takedown of content stored on perma.cc:
- Right to be forgotten... perma.cc might be able to ignore it, but could this lead to perma.cc being blocked by European ISPs?
- ITAR stuff
- content published by entities recognized by $GOVERNMENT as terrorist organizations
That's correct, but only for present evidence - what about the past evidence that you didn't know you needed until it was too late? IA is broad enough to cover the past five times out of ten.
I sent them a resume almost a year ago, and got nothing back in response until yesterday. Looks like they are going through their backlog right now to find more hands.
The Library of Congress should be archiving the Internet and it should have the budget required to do so.
This is in line with its mission as the "Library of Congress". Being able to have an accurate record of what was on the Internet at a specific point in time would be helpful when discussing legislation or potential regulation involving the internet.
The Library of Congress does currently archive limited collections of the internet[0]. They have a blog post[1] breaking down the effort, currently it's 8 full time staff with a team of part time members. According to Wikipedia[2], it's built on Heritrix and Wayback which are both developed by the Internet Archive (blog post also mentions "Wayback software"). Current archives are available at: http://webarchive.loc.gov/
As awkwardpotato writes, they do. Many national libraries all over the world treat the internet as covered by their requirements of legal deposit, and crawl their respective TLDs.
Depends on the topology, my guess would be no though. Generally speaking, a compromise requires a lot of non-public work to be done in a very short time period. If they don't know how they were initially compromised (and you can't take attacker's word on things), simply throwing up another copy isn't going to fix the issue and often eggs them on to continue.
You basically have to re-perimeterize your topology with known good working security, and re-examine trusted relationships starting with a core group of servers and services, and then expanding outwards, ensuring proper segmentation along the way. It's a lot easier with validated zero-trust configurations, but even then it's a real pain (especially when there is a hidden flaw in your zero-trust config somewhere) and it's very heavy on labor. Servers and services also need to ensure they have not deviated from their initial known desired states.
Some bad guys set traps in the data/services as timebombs, that either cross-pollinate, or re-compromise later. There are quite a lot of malicious ****s out there.
The Internet Archive had legal gems such as the Jamendo Album Collection, a huge CC haven. Yes, most of it under NC licenses, but for non-commercial streaming radio with podcasts, these have been invaluable.
Do you know Nanowar? They began there.
Also, as commercial music has been deliberately dumbed down for the masses (on paper, not just cheap talk), discovering Jamendo and Magnatune in the late 00's was like crossing into a parallel universe.
> "It's dispiriting to see that even after being made aware of the breach weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets," reads an email from the threat actor.
This is quite embarrassing. One of the first things you do when breached at this level is to rotate your keys. I seriously hope that they make some systemic changes, it seems that there were a variety of different bad security practices.
IA is in bad need of a leadership change. The content of the archive is immensely valuable (largely thanks to volunteers) but the decisions and priorities of the org have been far off base for years.
Putting the organisation at risk by playing chicken with large publishing corporations. Trying to stretch fair use a little too far so they had to go to court.
I don't believe IA itself takes down pages that kiwifarms archives/links to. Rather they get a request to take it down and comply with it (correct me if I'm wrong here). I think IA is actually in a tough spot on this issue because they might be able to be sued eg. for defamation if they don't take down pages with personal info after a request to do so is made. Lastly, I doubt any new leadership would be less harsh on kiwifarms.
There was no illegal content on kiwi farms. Even then, I’d say taking down a single page by request is understandable. However, they surrendered to the mob and chose to stop archiving the entire site. This was to censor any criticism of the people involved, but as a result, we lost all of the other information on the rest of the site as well. It’s clear this organization cannot handle pressure, and is relying on people treating it kindly.
They chose to stop serving archives of a site that had started explicitly using them as a distribution mechanism to get around a much broader attempt to censor them.
I'm curious what other information on that site you think was valuable to have available to the general public? Nothing has been lost in terms of historical data, it's only the immediate dissemination that has been slowed.
I'm really trying to understand why I should disagree with the IA's choice here. The IA is an archival service, not a distribution platform and it is not their job to help you distribute content that other people find objectionable. Their job is to make and keep an archive of internet content so that we don't lose the historical record. Blocking unrestricted public access to some of that content doesn't harm that mission and can even support it.
the funny thing about the internet archive is that anyone else on this planet could do exactly what they are doing, but they consistently choose not to.
kiwifarms could spin up their own infrastructure, serve their own content for the world, but it turns out technology is a social problem more than a technical problem.
anyone that wants to stand up and be the digital backbone of “kiwi farms” can, but only the internet archive gets flack for not volunteering to be the literal kiwi farm.
for example, the pirate bay goes offline all the time, but it turns out the people that use it, care enough to keep it online themselves.
It's the least worst option. Remember when that happened with Mozilla? Now they're an ad company. Take the bad (some bad mis-steps re:multiple lending during the pandemic, not rotating keys immediately after a hack) with the good (staying true to the human centric mission and not the money flows).
I support archival of films, books, and music, but those items need to be write-only until copyright expires. The purpose of the Internet Archive is to achieve a wide-reaching, comprehensive archival, not provide easy and free read access to commercial works.
Website caches can be handled differently, but bulk collection of commercial works can't have this same public access treatment. It's crazy to think this wouldn't be a huge liability.
Battling for copyright changes is valiant, but orthogonal. And the IA by trying to do both puts its main charter--archival--at risk.
The IA should let some other entity fight for copyright changes.
I'd agree with you if you live in a country where you can walk into your local library and read these for "free." For people who live where there may not even be a library, your argument makes no sense except to make the publishers richer. They typically price some of these books at "library prices" so normal people won't be able to afford them, but libraries will.
Copyright is copyright. If you don't like the idea of a publisher owning the rights to content they published doesn't mean you have a right to their content. Let alone worldwide distribution of that content.
What makes you feel entitled to the content of the publisher before the copyright expires? Do you feel that you deserve access to everything because you've deemed the concept of ownership around book publishing immoral?
You can't just take a digital copy of a physical book and give it to everyone worldwide. That isn't your choice or decision to make nor is it ethical to ascribe malice to simply retaining distribution rights to content they own.
"Make publishers richer", it's actually just honoring the concept of ownership...
I don’t like the idea of infinite ownership, which is the current problem of copyright. The public may never be able to own these ideas and build off of them. Further, just because you own something in one country doesn’t mean you can own it in another country. For a physical example, you can’t own a gun in the US and take it to Australia.
If publishers didn’t engage in tactics like “library pricing” and preventing people from actually purchasing the books, I might feel differently. Right now, I see this archiving stuff as a Robin Hood story (which fwiw, every version of this story you may have seen/heard is probably still copyrighted) and I hope the publishers die or are replaced.
> I support archival of films, books, and music, but those items need to be write-only until copyright expires.
Which means no one alive today would ever be able to see them out of copyright. It also requires an unfounded belief that major copyright owning companies won't extend copyright lengths beyond current lengths which are effectively "forever".
The Internet Archive Lending Library did. And there are music, movie, and video game ROMs found throughout the user uploads.
IA should collect these materials, but they shouldn't be playing fast and loose by letting everyone have access to them. That's essentially providing the same services as the Pirate Bay under the guise of archivism.
This puts IA at extreme legal risk. Their mission is too important to play such games.
The words came from a message written by the people you are calling script kiddies, rather than being editorializing by bleepingcomputer, as you seem to believe.
I highly doubt they are script kiddies. More than likely they are state actors, or mercenaries of state actors, attempting to bring down the free transmittal of information between regular folks. IA evidently has not-so-good security, and Wikipedia must be doing pretty well, I guess? I can’t recall the last time one of these attacks worked on Wiki.
Why would they publicly call them out and lay open the way they breached them if they were "attempting to bring down the free transmittal of information between regular folks"?
They could have done much worse but they chose not to and instead made it public. Which state actor does that?
There are many "first things" you need to do if breached, and good luck identifying and doing them all in a timely fashion if you're a small organization, likely heavily relying on volunteers and without a formal security response team...
Restating my love for Internet Archive and my plea to put a grownup in charge of the thing.
Washington Post: The organization has “industry standard” security systems, Kahle said, but he added that, until this year, the group had largely stayed out of the crosshairs of cybercriminals. Kahle said he’d opted not to prioritize additional investments in cybersecurity out of the Internet Archive’s limited budget of around $20 million to $30 million a year.
Military grade has different meanings. I’ve worked in the electronics industry a long time and will say with confidence that the pcbs and chips we sent to the military were our best. Higher temperature ranges, much more thorough environmental testing, many more thermal and humidity cycles, lots more vibration testing. However we also sell them for 5-10x our regular prices but in much lower quantities. It’s a failed meme in many instances as the internet uses it though.
Hot take, this is the way it should be. If you want better security then you update the requirements to get your certification.
Security by its very nature has a problem of knowing when to stop. There's always better security for an ever increasing amount of money and companies don't sign off on budgets of infinity dollars and projects of indefinite length. If you want security at all you have bound the cost and have well-defined stopping points.
And since 5 security experts in a room will have 10 different opinions on what those stopping points should be— what constitutes "good-enough" they only become meaningful when there's industry wide agreement on them.
There never will be an adequate industry-wide certification. There is no universal “good enough” or “when to stop” for security. What constitutes “good enough” is entirely dependent on what you are protecting and who you are protecting it from, which changes from system to system and changes from day to day.
The budget that it takes to protect against a script kiddy is a tiny fraction of the budget it takes to protect from a professional hacker group, which is a fraction of what it takes to protect from nation state-funded trolls. You can correctly decide that your security is “good enough” one day, but all it takes is a single random news story or internet comment to put a target on your back from someone more powerful, and suddenly that “good enough” isn’t good enough anymore.
The Internet Archive might have been making the correct decision all this time to invest in things that further its mission rather than burning extra money on security, and it seems their security for a long time was “good enough”… until it wasn’t.
We can’t all have the latest EPYC processors with the latest bug fixes using Secure Enclaves and homomorphic encryption for processing user data while using remote attestation of code running within multiple layers of virtualization. With, of course, that code also being written in Rust, running on a certified microkernel, and only updatable when at least 4 of 6 programmers, 1 from each continent, unite their signing keys stored on HSMs to sign the next release. All of that code is open source, by the way, and has a ratio of 10 auditors per programmer with 100% code coverage and 0 external dependencies.
Then watch as a kid fakes a subpoena using a hacked police account and your lawyers, who receive dozens every day, fall for it.
No, it’s your demeanor that is unbecoming and not worth engaging with. Villainizing others when your poor behavior doesn’t successfully bait them into replying as you want is childish too. Take a breather.
A non-grownup analysis is to criticize a decision in hindsight. If Internet Archive shifted funds to security, it would mean cutting something from its mission. Given their history, it makes sense IMHO to spend on the mission and take the risk. As long as they have backups, a little downtime won't hurt them - it's not a bank or a hospital.
The Internet Archive has a management problem. They seem to be more comfortable disrupting libraries than managing an online, publicly accessible database of disputed, disorganized material.
Despite all of the positive self-talk, I don't know if they realize how important they are, or how easy it would be for them to find good help and advice if their management were transparent and everything was debated in public. That may have protected it to some extent; as a counterexample, Wikipedia has been extremely fragile due to its transparency and accessibility to everyone. With IA being driven by its creator's ideology, maybe that ideology should be formalized and set in stone as bylaws, and the torch passed to people openly debating how IA should be run, its operations, and what it should be taking on.
I don't mean they should be run by the random set of Confucian-style libertarian aphorisms that is running the credibility of Wikipedia into the ground, but Debian is a good model to follow. Or maybe do better than both?
While I have no idea how Debian is actually funded, I'd agree. One issue might be that The Internet Archive actually needs to have people on staff; not sure if Debian has that requirement. You're not going to get people to man scanners or VHS players 8 hours a day without pay, at least not at this scale.
The Internet Archive needs a better funding strategy than asking for money on their own site. People aren't visiting them frequently enough for that to work. They need a fundraising team, and a good one.
Finding managers is probably even harder. They can't get a normal CEO type person, because they aren't a company, and the type of people who apply to or are attracted to running a non-profit, serve-the-community, don't-be-evil organisation are frequently bat-shit crazy.
Don't forget the time Brewster tried to run a bank -- Internet Archive Federal Credit Union. Or that the physical archives are stored on an active fault line and unlikely to receive prompt support during an emergency. Or that, when someone told him that archives are often stored in salt mines he replied, "cool, where can I buy one?"
> Confucian-style libertarian aphorisms that is running the credibility of Wikipedia
Can you elaborate? I'm aware of Wikipedia having very particular rules and lots of very territorial editors, but I'm not sure how this runs their credibility into the ground aside from pissing off the far right when they come in with an agenda to push.
I appreciate their ethos and I've used the site many times (and donated!), but clearly it's at the point where Kahle et al just aren't equipped either personally (as a matter of technical expertise) or collectively (they are just a handful of people) to be dealing with what are probably in many cases nation-state attacks. Kahle's attitude towards (and misunderstanding of) copyright law is IMO proof that he shouldn't be running things, because his legal gambles (gambles that a first year law student could have predicted would fail spectacularly) have put IA at long term risk (see: Napster). And this information coming out over the past few weeks about their technical incompetence is arguably worse, because the tech side of things are what he and his team are actually supposed to be good at.
It's true that Google and Microsoft and others should be propping up the IA financially but that isn't going to solve the IA's lack of technical expertise or its delusional hippie ethos.
A genuine question to commenters asking to "put a grownup in charge of the thing" and saying that "Kahle shouldn't be running things": he built the thing, why exactly he can't run it the way he sees fit?
Speak for yourself, the internet archive successfully increased its scope and made creative contributions to case law (although it lost at the appeals court)
We need archives built on decentralized storage. Don't get me wrong, I really like and support the work Internet Archive is doing, but preserving history is too important to entrust it solely to singular entities, which means singular points of failure.
This seems to get brought up at least once in the comments for every one of these articles that pops up.
The IA has tried distributing their stores, but nowhere near enough people actually put their storage where their mouths are.
Nearly every entry in the library has a torrent file (BitTorrent being a distributed storage system of sorts), but with the index pages down, they're not accessible.
They're not using DHT?
And it's guaranteed not to happen if the efforts don't continue.
You could say the same thing about perpetual motion. Being realistic about why past efforts have failed is key to doing better in the future: for example, people won’t mirror content which could get them in trouble and most people want to feel some kind of benefit or thanks. People should be thinking about how to change dynamics like those rather than burning out volunteers trying more ideas which don’t change the underlying game.
There are certainly research questions and cost questions and practicality and subsetting and whatnot. Addressed by some ideas and not by others.
What there isn't is a currently maintained and advertised client and plan. That I can find. Clunky or not, incomplete or not.
There are other systems that have a rough plan for duplication and local copy and backup. You can easily contribute to them, run them, or make local copies. But not IA. (I mean you can try and cook up your own duplication method. And you can use a personal solution to mirror locally everything you visit and such.) No duplication or backup client or plan. No sister mirrored institution that you might fund. Nothing.
Perhaps one idea is to let people choose what they want to protect. This way people wanting to support it can have their mission.
I want it to protect all sorts of random obscure documents, mostly kind of crappy, that I can't predict in advance, so I can pursue my hobby of answering random obscure questions. For instance:
* What is a "bird famine", and did one happen in 1880?
* Did any astrologer ever claim that the constellations "remember" the areas of the sky, and hence zodiac signs, that they belonged to in ancient times before precession shifted them around?
* Who first said "psychology is pulling habits out of rats", and in what context? (That one's on Wikiquote now, but only because I put it there after research on IA.)
Or consider the recently rediscovered Bram Stoker short story. That was found in an actual library, but only because the library kept copies of old Irish newspapers instead of lining cupboards with them.
The necessary documents to answer highly specific questions are very boring, and nobody has any reason to like them.
You could let users choose what to mirror, and one of those choices could be a big bucket of all the least available stuff, for pure preservationists who don't want to focus on particular segments of the data.
Sort of like the bittorrent algorithm that favors retrieving and sharing the least-available chunks if you haven't assigned any priority to certain parts.
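That rarest-first behavior is easy to sketch. Below is an illustrative selection function (a hypothetical simplification, not BitTorrent's actual implementation, which also handles priorities, endgame mode, etc.):

```python
from collections import Counter

def rarest_first(needed_pieces, peer_bitfields):
    """Pick the piece we still need that the fewest peers have.

    needed_pieces: set of piece indices we haven't downloaded yet.
    peer_bitfields: one set per peer, of piece indices that peer holds.
    Returns the rarest needed piece, or None if no peer has anything we need.
    """
    availability = Counter()
    for bitfield in peer_bitfields:
        for piece in bitfield & needed_pieces:
            availability[piece] += 1
    if not availability:
        return None
    # min over (count, index) pairs gives a deterministic tie-break
    return min(availability, key=lambda p: (availability[p], p))

# Piece 2 is held by only one of three peers, so it gets fetched first.
peers = [{0, 1}, {0, 1, 2}, {0, 1}]
print(rarest_first({0, 1, 2}, peers))  # -> 2
```

The same idea scales up to a "least-available bucket" for preservationists: rank items by how many mirrors report holding them and serve the bottom of the list first.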
My favorite question is: whether or not Bowser took the princess to another castle.
Since the IA had a collection of emulators (some of them running online*), and old ROMs and floppies and such, it could probably help with that one too.
* Strictly speaking, running in-browser, but that sounded like "Bowser" so I wrote online instead.
You already can, they have torrents for everything.
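For reference, IA has historically exposed a per-item torrent at a predictable URL. The `<identifier>_archive.torrent` naming below is an assumption based on how archive.org item downloads have been laid out; verify against an actual item page before relying on it:

```python
def item_torrent_url(identifier: str) -> str:
    """Build the conventional archive.org torrent URL for an item.

    NOTE: the "<id>_archive.torrent" pattern is an observed convention,
    not a documented guarantee.
    """
    return f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"

print(item_torrent_url("example-item"))
# -> https://archive.org/download/example-item/example-item_archive.torrent
```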
> they have torrents for everything
Including the index itself? That would be awesome.
Their torrents suck and IME don’t update to changes in the archive.
Aren't torrents terrible at handling updates in general? If you want to make a change to the data, or even just add or remove data, you have to create a new torrent and somehow get people to update their torrent and data as well.
There's a mutable torrent extension (BEP-46) but unfortunately I don't think it's widely supported. I think IPFS/IPNS is the more likely direction.
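For the curious: BEP-46 identifies a mutable torrent by an ed25519 public key instead of an infohash, resolved through the DHT to whatever the current infohash is. A minimal sketch of building such a magnet link (the `urn:btpk:` form comes from the BEP-46 spec; the validation is just illustrative):

```python
def mutable_magnet(pubkey_hex: str) -> str:
    """Build a BEP-46 magnet URI from a 32-byte ed25519 public key (hex).

    Clients supporting BEP-46 look the key up in the DHT to find the
    current infohash, so the link keeps working as the content updates.
    """
    key = pubkey_hex.lower()
    if len(key) != 64 or any(c not in "0123456789abcdef" for c in key):
        raise ValueError("expected 64 hex chars (32-byte ed25519 public key)")
    return f"magnet:?xs=urn:btpk:{key}"

print(mutable_magnet("ab" * 32))  # -> magnet:?xs=urn:btpk: followed by 64 hex chars
```

IPFS/IPNS takes a similar approach: a stable key that points at mutable content.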
Which IA has moved into and hasn’t found much luck in, unfortunately.
How come?
This is accurate, their torrent-generating system is basically broken to the point of being useless.
> nowhere near enough people actually put their storage where their mouths are.
Typically because most people who have the upload bandwidth don't know that they can. And if they come to the notion on their own, they won't know how.
If they put the notion to a search engine, the keywords they come up with probably don't return the needed ELI5 page.
As in: How do I [?] for the Internet Archive?, most folks won't know what [?] needs to be.
This is literally torrents. Just give up
> This is literally torrents. Just give up
Most casual visitors to IA don't know that. Which is the point.
Giving up is for others.
The problem with torrents is they have a bad reputation since people use it to steal and redistribute other people’s content without their consent.
The problem with websites is they have a bad reputation since people use it to steal and redistribute other people’s content without their consent.
The problem with file transfer is they have a bad reputation since people use it to [insert illegal or immoral activity here].
Then rename it from "torrent" to something else.
I'm not sure what the argumentative line is here. But file uploading and downloading needs to have accountability for hosting, which p2p obscures.
The bad reputation is inherent to the tech, not a random quirk.
It doesn't really, you can host a server off a raw IP.
Downloading from example.com is just peer to peer with someone big. There's lots of hosting providers and DNS providers that are happy to host illegal-in-some-places content.
Is there any form of torrent where you can do a full text search? That, to me, is the more important problem with torrents.
But internet archive doesn't do this? It's a key based search (url keys)
Internet Archive allows full text search of books, newspapers, etc. Or anyway it did, before being breached.
Torrents have a bad reputation due to malicious executables; I have never met someone who genuinely saw piracy as stealing, only as dangerous. In fact, "stealing" by definition cannot cover digital piracy: stealing is taking something away, and taking means physically possessing it. The correct term is copying, because you are duplicating files. And that’s not even getting into the cultural protection piracy affords in today’s DRM- and license-filled world.
Give it a good reputation then.
What are some legal torrent trackers?
Humble Bundle. Various Linux ISOs.
archive.org to name one
That's debatable. Most of their torrents are for things under copyright, though any other decentralized archive would have the same problem.
That’s a copyright problem. 99% of things made in the last 100 years fall under copyright.
and a good number of things that were going to pass into copyright were further extended to 2053.
Except when their own employees publicly tell people not to worry about copyright and just upload stuff anyway, they make it their own problem.
What is your definition of a legal torrent tracker? I was not aware there were even any illegal ones.
> I was not aware there were even any illegal ones.
Depends on the jurisdiction. Remember what happened in the The Pirate Bay trial?
My understanding is that that court case did not show that operating a torrent tracker is illegal, but specifically operating a (any) service with the explicit intent of violating copyright... huge difference IMO.
To me that's not even related to it being a torrent tracker, just that they were "aiding and abetting" copyright infringement.
Ok. But what is the case law on hosting illegal content? Sure, you may operate a torrent tracker, but if your client is distributing child porn, in my view, you bear responsibility.
I'm backing ranger_danger here.
In Law the technicalities matter.
Trackers generally do not host any content, just hashcodes and (sometimes) metadata descriptions of content.
If "your" (ie let's say _you_ TZubiri) client is distributing child pornography content because you have a partially downloaded CP file then that's on _you_ and not on the tracker.
The "tracker" has unique hashcode signatures of tens of millions of torrents - it literaly just puts clients (such as the one that you might be running yourself on your machine in the example above) in touch with other clients who are "just asking" about the same unique hashcode signature.
Some tracker affiliated websites (eg: TPB) might host searchable indexes of metadata associated with specific torrents (and still not host the torrents themselves) but "pure" trackers can literally operate with zero knowledge of any content - just arrange handshakes between clients looking for matching hashes - whether that's UbuntuLatest or DonkeyNotKong
We agree in that if my client distributes illegal content, I am responsible, at least in part.
On the other hand I also believe that a tracker that hosts hashes of illegal content, provides search facilities for and facilitates their download, is responsible, in a big way. That's my personal opinion and I think it's backed in cases like the pirate bay and sci hub.
That 0 knowledge tracker is interesting; my first reaction is that it's going to end up in very nasty places like Tor, onion services, etc.
> That 0 knowledge tracker is interesting,
Most actual trackers are zero knowledge.
A tracker (bit of central software that handles 100+ thousand connections/second) is not a "torrent site" such as TPB, EZTV, etc.
A tracker handshakes torrent clients and introduces peers to each other; it has no idea, nor needs any idea, that "SomeName 1080p DSPN" maps to D23F5C5AAE3D5C361476108C97557F200327718A.
All it needs is to store IP addresses that are interested in that hash and to pass handfuls of interested IP addresses to other interested parties (and some other bookkeeping).
From an actual tracker PoV the content is irrelevant and there's no means of telling one thing from another other than size - it's how trackers have operated for 20+ years now.
Here are some actual tracker addresses and ports
Here's the bittorrent protocol: http://bittorrent.org/beps/bep_0052.html
Trackers can hand out .torrent files if asked (bencoded dictionaries that describe filenames, sizes, checksums, and directory structures of a torrent's contents), but they don't have to; mostly they hand out peer lists of other clients. Peers can also answer requests for .torrent files.
A .torrent file isn't enough to determine illegal content.
Pornography can be contained in files labelled "BeautifulSunset.mkv" and Rick Astley parody videos can frequently be found in files labelled "DirtyFilthyRepublicanFootTappingNudeAfrica.avi"
Given that, it's not clear how trackers could effectively filter content that never actually traverses their servers.
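A tracker of the kind described here can be sketched in a few lines (a toy sketch in Python; real trackers speak the HTTP/UDP announce protocol with compact peer encoding and timeouts, and the class and method names below are made up for illustration):

```python
from collections import defaultdict
import time

class Tracker:
    """Minimal sketch of a zero-knowledge tracker: it maps opaque
    info-hashes to the peers that announced them, nothing more."""

    def __init__(self, peer_ttl=1800):
        # info_hash -> {(ip, port): last_seen_timestamp}
        self.swarms = defaultdict(dict)
        self.peer_ttl = peer_ttl

    def announce(self, info_hash, ip, port, max_peers=50):
        now = time.time()
        swarm = self.swarms[info_hash]
        # Drop peers that haven't re-announced within the TTL.
        for peer, seen in list(swarm.items()):
            if now - seen > self.peer_ttl:
                del swarm[peer]
        swarm[(ip, port)] = now
        # Return other interested peers; no content ever passes through.
        return [p for p in swarm if p != (ip, port)][:max_peers]

tracker = Tracker()
tracker.announce("d23f5c5aae3d", "203.0.113.5", 6881)
peers = tracker.announce("d23f5c5aae3d", "198.51.100.7", 51413)
print(peers)  # [('203.0.113.5', 6881)]
```

Note that the tracker stores only hashes and IP/port pairs; it has no way of knowing what bytes those hashes correspond to.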
Oh ok, it seems to be a misconception of mine then.
Mathematically, a tracker would offer a function that, given a hash, returns a list of peers with that file.
A "torrent site" like TPB or Sci-Hub, on the other hand, would offer a search mechanism: it would host an index of content hashes and English descriptors, along with a search engine.
A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?
Is that right?
In any case, each party in the transaction shares liability. If we were analyzing a drug case or a people-trafficking case, each distributor, wholesaler, or retailer would bear liability and face criminal charges. A legal defense of the type "I just connected buyers with sellers, I never exchanged the drugs" would not have much chance of succeeding, although it is a common method of obstructing justice by complicating evidence gathering. (One member collects the money, the other hands over the drugs.)
> A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?
> Is that right?
More or less.
> In any case, each party in the transaction shares liability.
That's exactly right Bob. Just as a telephone exchange shares liability for connecting drug sellers to drug buyers when given a phone number.
Clearly the telephone exchange should know by the number that the parties intend to discuss sharing child pornography rather than public access to free to air documentaries.
How do you propose that a telephone exchange vet phone numbers to ensure drugs are not discussed?
Bear in mind that in the case of a tracker the 'call' is NOT routed through the exchange.
With a proper telephone exchange the call data (voices) pass through the exchange equipment; with a tracker, no actual file content passes through the tracker's hardware.
The tracker, given a number, tells interested parties about each other .. they then talk directly to each other; be it about The Sky at Night -s2024e07- 2024-10-07 Question Time or about Debbie Does Donkeys.
Also keep in mind that trackers juggle a vast volume of connections of which a very small amount would be (say) child abuse related.
I don't think TPB ever hosted any copyrighted content, even indirectly by its users. Torrent peers do not ever send any file contents through the tracker.
A tracker that only tracks legal torrents, e.g. free software, OCRemix content, etc.
https://linuxtracker.org/ http://www.publicdomaintorrents.info/ https://ocremix.org/torrents
How would you keep the definition of legality without a centralizing authority?
A tracker is a centralized authority.
I don't see how that would be enforceable. Policy perhaps, but it would be impossible to absolutely prevent it from being used for that purpose IMO.
To me this is like saying you shouldn't use a knife because they are also used by criminals.
This kind of talk is simply modern politik-speak. I can't stand it, or the people who fall for the deception: stretch the truth to disarm the constituents.
In what way? Torrents are used all over for content delivery. Battle.net uses a proprietary version of BitTorrent. It’s now owned by Microsoft. There’s many more legitimate uses as commented by many others.
Criminals using tools does not make the tools criminal.
It's a matter of numbers: if tens of thousands of criminals use tech X, and it has few genuine uses, it's going to be restricted.
This has precedent in illegal drug categorization, it's not just about the damage, but its ratio of noxious to helpful use.
That precedent was, and still is, legally used to federally regulate marijuana more harshly than fentanyl, a precedent I strongly disagree with, so you'll have to forgive me for believing that the degree to which something causes harm matters more than the amount of misuse.
Keep in mind the IA archives a lot of garbage. If it could be more focused it would be more likely to work.
The IA only works because it archives everything. You don't know what you need until you need it.
Archives generally purposefully don’t have a strong editorial streak. My trash is your treasure.
The attempts have actually been focused on specific types of content, such as historical videos.
personally I love all the random crap on IA!
Lots of Copies Keeps Stuff Safe
https://www.lockss.org/
This is a brilliant system relying on a randomised consensus protocol. I wanted to do my infosec dissertation on it, but its security model is so well thought out that there wasn't anything I felt I could add to it.
I wish IPFS wasn't so wasteful with respect to storage. I tried pinning a 200mb PDF on IPFS and doing so ended up taking almost a gigabyte of disk space altogether. It's also relatively slow. However its implementation of global deduplication is super cool – it means that I can host 5 pages and you can host 50, and any overlap between them means we can both help one another keep them available even if we don't know about one another beforehand.
For a large-scale archival project, it might not be ideal. Maybe something based on erasure coding would be better. Do you know how LOCKSS compares?
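The global deduplication being praised here works roughly like this (a toy sketch; IPFS actually builds a Merkle DAG of chunks addressed by multihash CIDs, but the core idea is content-addressed chunks, which any two hosts can share without coordinating):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # IPFS defaults to 256 KiB fixed-size chunks

def chunk_refs(data, store):
    """Split data into fixed-size chunks and store each under its hash.
    Identical chunks across different files are stored only once."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # dedup: known chunks are reused
        refs.append(h)
    return refs

store = {}
a = chunk_refs(b"A" * CHUNK_SIZE + b"B" * CHUNK_SIZE, store)
b = chunk_refs(b"A" * CHUNK_SIZE + b"C" * CHUNK_SIZE, store)
print(len(store))  # 3 chunks stored, not 4: the shared "A" chunk is deduped
```

Because the chunk hashes are globally valid names, two hosts who have never spoken can still discover that they hold overlapping chunks.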
> I tried pinning a 200mb PDF on IPFS and doing so ended up taking almost a gigabyte of disk space altogether
Was that any file in particular? I just tried it myself with a 257mb PDF (as reported by `ls -lrth`) and it doesn't seem to add that much overhead:
High Costs Make Lots of Copies Unfeasible
That was actually one of the key constraints in the LOCKSS system, since it was designed to be run by libraries that don't have big budgets.
The design is really very good.
Is there a high level explanation of the model?
To make the web distributed-archive-friendly I think we need to start referencing things by hash and not by a path which some server has implied it will serve consistently but which actually shows you different data at different times for a million different reasons.
If different data always gets a different reference, it's easy to know if you have enough backups of it. If the same name gets you a pile of snapshots taken under different conditions, it's hard to be sure which of those are the thing that we'd want to back up for that particular name.
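A minimal illustration of referencing by hash instead of by path (a sketch; systems like IPFS wrap this in multihash-encoded CIDs rather than bare SHA-256 hex, and the snapshots here are invented):

```python
import hashlib

def content_ref(data: bytes) -> str:
    """A content-derived name: the same bytes always get the same
    reference, and different bytes always get a different one."""
    return hashlib.sha256(data).hexdigest()

snapshot_a = b"<html>version served on Monday</html>"
snapshot_b = b"<html>version served on Tuesday</html>"

# Unlike a URL, the reference changes whenever the content changes,
# so any copy can be verified against the name alone.
print(content_ref(snapshot_a) == content_ref(snapshot_a))  # True
print(content_ref(snapshot_a) == content_ref(snapshot_b))  # False
```

That verifiability is what makes it easy to count how many intact backups of a given document exist.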
Done. It is called IPFS. The IA already supports it.
https://github.com/internetarchive/dweb-archive/blob/master/...
Which has a rather lengthy section explaining why it's currently a failed experiment: https://github.com/internetarchive/dweb-archive/blob/master/...
(this doc is 5-6 years old though, and I'm not sure what may have changed since then)
In my own (toy-scale) IPFS experiments a couple of years ago it was rather usable, but the software has also been utterly insane for operators and users, and if I were IA I would only consider it if I budgeted for a from-scratch rewrite (of the stuff in use). Nearly uncontrollable, unintrospectable, and with high resource use for no apparent reason.
IPFS has shown that the protocol is fundamentally broken at the level of growth they want to achieve and it is already extremely slow as it is. It often takes several minutes to locate a single file.
The beauty is that IA could offer their own distribution of IPFS that uses their own DHT for example, and they could allow only public read access to it. This would solve the slow part of finding a file, for IA specifically. Then the actual transfers tend to be pretty quick with IPFS.
What's the point of using IPFS then? Others can still spread the file elsewhere and verify it's the correct one, by using the exact same ID of the file, although on two different networks. The beauty of content-addressing I guess.
That isn’t solving the problem, it’s just giving them more of it to work on. IA has enough material that I’d be surprised if they didn’t hit IPFS’s design limits on their own, and they’d likely need to change the design in ways which would be hard to get upstream.
Several minutes sounds more than fine for this purpose?
Especially if it's about having an Internet Archive backup.
I think the point is that it's already slow at the current amount of data, let alone when you stuff dozens more PB into it
Right, what I'm saying is that now we need to get the rest of the web (or at least the parts we want to keep) on board.
There was a startup called Space Monkey that sold NAS drives where you got a portion of the space and the rest was used for copies of other people’s content (encrypted). The idea was you could lose your device, plug in a new one and restore from the cloud. They ended up folding before any of their resilience claims could be tested (at least by me).
Would people be willing to buy an IA box that hosted a shard of random content along with the things they wanted themselves?
Does anyone remember wua.la? It worked similarly, in that you offered local disk space in exchange for cloud storage. It was later bought by LaCie and killed off shortly after.
What happens when the user base explodes (e.g. due to this event), and a few months later they all get bored and drop out?
I designed a system where you could say "donate this spare 2 TB of my disk space to the Internet Archive" and the IA would push 2 TB of data to you. This system also has the property that it can be reconstructed if the IA (or whatever provider) goes away.
Unfortunately, when I talked to a few archival teams (including the IA) about whether they'd be interested in using it, I either got no response or a negative one.
Why reinvent the wheel?
There are so many proven distributed archiving systems, a lot of which are mentioned in these comments.
Is anyone using ArchiveBox regularly? It's a self-hosted archiving solution. Not the ambitious decentralized system I think this comment is thinking of but a practical way for someone to run an archive for themselves. https://archivebox.io/
I am self-hosting ArchiveBox through yunohost, for the odd blog article I come across and like. Not a heavy user per se, but it's doing its thing reliably.
@nikisweeting, the dev of ArchiveBox, was active in a thread about it here last week.
https://news.ycombinator.com/item?id=41860909
I'd never heard of it, but their responses to questions and comments in that thread were really, really good (and I now have "install and configure archivebox on the media server" on my upcoming weekend projects list).
We'll need to find even more people willing to expose themselves to legal threats and cyberattacks then.
The legal side is a big issue, true. The simplest and best workaround that I'm aware of is how the Arweave network handles it. They leave it up to the individual what parts of the data they want to host, but they're financially incentivized to take on rare data that others aren't hosting, because the rarer it is the more they get rewarded. Since it's decentralized and globally distributed, if something is risky to host in one jurisdiction, people in another can take that job and vice versa. The data also can not be altered after it's uploaded, and that's verifiable through hashes and sampling. Main downside in its current form is that decentralized storage isn't as fast as having central servers. And the experience can vary of course, depending on the host you connect to.
As for technical attacks, I'm not an expert but I'd assume it's more difficult for bad actors to bring down decentralized networks. Has the BitTorrent network ever gone offline because it was hacked for example? That seems like it would be extremely hard to do, not even the movie industry managed to take them down.
> decentralized storage isn't as fast as having central servers.
With the 30-second "time to first byte" speed we all know and love from IA, I'm pretty sure it'd only get faster when you're the only person accessing an obscure document on a random person's shoebox in Korea as compared to trying to fetch it from a centralised server that has a few thousand other clients to attend to simultaneously
> decentralized storage isn't as fast as having central servers.
Depending on scale that’s not necessarily true. I find even today there are many services that cannot keep up with my residential fiber connection (3Gbps symmetrical), whereas torrents frequently can. IA in particular is notoriously slow when downloading from their servers, and even taking into account DHT time torrents can be much faster.
Now if all of their PBs of data were cached in a CDN, yeah that’s probably faster than any decentralized solution. But that will take a heck of a lot more money to maintain than I think is possible for IA.
I collect, archive, and host data. Haven't gotten any threats or attacks. Not one. The average r/selfhosted user hiding their personal OwnCloud behind the DDoS mafia seems more afraid than one needs to be, even when hosting all sorts of things publicly. I guess this fearmongering comes from tech news about breaches and DDoS attacks on organisations, similar to how regular news shapes your worldview regardless of how things are actually going in the world or how they personally affect you.
It's not a problem until it suddenly is, and by the time it becomes a problem it's too late. It's not fearmongering, it's risk management, and the laws are draconian and fail the fundamental test of a "rule of law": we have a "rule by law".
This has really shown that to be true. I am stuck in a situation right now where I have some lost media I want to upload, but they have been down for over a week. I plan to create a torrent in the meantime, but that means relying on my personal network connection for the vast majority of downloads up front. I looked into Cloudflare R2; not terrible, but not free either.
I was looking into using R2 as a web seed for the torrent but I don't _really_ want to spend much to upload content that is going to get "stolen" and reuploaded by content farms anyway you know?
Why not subscribe to a seedbox? They’re about $5/2TB/mo. It protects your IP, you can buy for only the month, and since seedboxes are hosted in DMCA-resistant data centers you can download riskier torrents lightning fast, meaning you’re not just spending money for others, you can get something out of it too.
Any hints or recommendations on how to find a decent seedbox vendor? (working email in profile if you'd rather not name any in public)
I’ve only used r/Seedboxes on Reddit, and that’s yet to fail me. The specific one I mentioned is EvoSeedBox’s $5/mo tier with 130GB HDD + 2TB bandwidth which is all I’ve needed so far.
2TB of bandwidth or storage?
Bandwidth, though some provide multi-TB storage (I assume you pay out the nose however).
I watched a hbo series about this once, I think it was called Pied Piper.
You say this as if the IA is not already deeply invested in the DWeb movement. If you go to a DWeb event in the Bay Area, there is a good chance it will be held at the IA.
Yes, I was quite shocked when I found out that all their DCs are within driving distance.
[dead]
The internet archive shepherded the early https://getdweb.net/ community, and works with groups like IPFS, so they're well aware and offering operational support to decentralized storage projects. This has been going since at least 2016 when I was involved in some projects involving environmental data archiving during the Trump transition
There's no real financial incentive for people to archive the data as a singular entity so even less for a distributed collection. Also it's probably easier to fund a single entity sufficiently so they can have security/code audits than a bunch of entities all trying to work together.
Some people are motivated by more than just financial incentive.
That's true, but something like archiving the internet is very costly, IA has an annual budget in the tens of millions.
Yes, it's a good point. Though they could take that money and reward people for hosting the data as well, couldn't they? They don't have to be in charge of hosting.
Yes, they could, that's not much different than a single company distributing the archive to multiple storage centers though. My original comment was about it being more cost effective for a single company to do that than coordinating with a bunch of disjoint entities.
Our digital memory shouldn't be in the hands of a small number of organizations in my view. You're right about cost effectiveness. There are pros and cons to both but it's not just external threats that have to be considered.
History has always gotten rewritten throughout time. If you have a giant library it's easier for bad actors to gain influence and alter certain books, or remove them. This isn't just theoretical, under external pressure IA has already removed sites from its archive for copyright and political reasons.
There are also threats that are generally not even considered because they happen rarely, but when they do they're devastating. The library of Alexandria was burned by Julius Caesar during a war. Likewise, if all your servers are in one country, that's a geographic risk: they can get destroyed in the event of a war. No one expects this to happen today in the US, but archives should be robust long term, for decades, ideally even centuries.
>Our digital memory shouldn't be in the hands of a small number of organizations in my view.
I would wager at least 95% of "digital memory" archived is just absolute garbage from SEO spam to just some small websites holding no actual value.
The true digital memory of the world is almost entirely behind the walls of reddit, twitter, facebook, and very few other sites. The internet landscape has changed massively from the 90s and 2000s.
So, about $0.01 per person per year?
We are talking about an (almost) worldwide archive after all.
Yea so, who pays for the decentralized storage long term? What happens when someone storing decentralized data decides to exit? Will data be copied to multiple places, who is going to pay for doubling, tripling or more the storage costs for backups?
Centralized entities emerge to absorb costs because nobody else can do it as efficiently alone.
At the moment, IA stores everything, and I imagine that most people are picturing a scenario where the decentralized data is in addition to IA's current servers. At least, that's the easiest bootstrapping path.
>What happens when someone storing decentralized data decides to exit?
They exit, and they no longer store decentralized data. At the very least, IA would still have their copy(s), and that data can be spread to other decentralized nodes once it has been determined (through timeouts, etc) that the person has exited.
> Will data be copied to multiple places[...]?
Ideally, yes. It is fairly trivial to determine the reliability of each member (uptime + hash checks), and reliable members (a few nines of uptime and hash matches) can be trusted to store data with fewer copies while unreliable members can store data with more copies. Could also balance that idea with data that's in higher demand, by storing hot data lots of times on less reliable members while storing cold data on more reliable members.
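As a rough illustration of how member reliability could drive the number of copies (the availability figures and the durability target here are made up; a real system would also have to model correlated failures, which this ignores):

```python
import math

def copies_needed(availability, target=0.999999):
    """How many independent copies are needed so the probability that
    every copy is unavailable at once stays below 1 - target?
    Solves (1 - availability)^n <= 1 - target for n."""
    return math.ceil(math.log(1 - target) / math.log(1 - availability))

# A reliable member (three nines of uptime) can hold data with few
# copies; the same data on flaky members needs more replicas.
print(copies_needed(0.999))  # 2
print(copies_needed(0.90))   # 6
```

The same function could feed the hot/cold balancing idea above: popular data goes to many less reliable members, rare data to a few highly reliable ones.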
> who pays for the decentralized storage long term? [...] who is going to pay for doubling, tripling or more the storage costs for backups?
This is unanswered for pretty much any decentralized storage project, and is probably the only important question left. There are people who would likely contribute to some degree without a financial incentive, but ideally there would be some sort of reward. This in theory could be a good use for crypto, but I'd be concerned about the possible perverse incentives and the general disdain the average person has for crypto these days. Funding in general could come from donations received by IA, whatever excess they have beyond their operating costs and reserve requirements - likely would be nowhere near enough to make something like this "financially viable" (i.e. profitable) but it might be enough to convince people who were on the fence to chip in few hundred GB and some bandwidth. This is an open question though, and probably the main reason no decentralized storage project has really taken off.
Ipfs
[dead]
> "It's dispiriting to see that even after being made aware of the breach weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets," reads an email from the threat actor.
With everything that’s going on, it’s highly suspicious that this is happening right after they upset some very rich rent seekers.
> With everything that’s going on, it’s highly suspicious that this is happening right after they upset some very rich rent seekers.
Absolutely moronic and unbased implication. The “rent-seekers” won their case and have zero interest in being implicated in dumb palace-intrigue style hacking. I mean, fuck those guys, but to bring up allegations like that is big stupid.
> Absolutely moronic and unbased implication. The "rent-seekers" won their case and have zero interest in being implicated in dumb palace-intrigue style hacking. I mean, fuck those guys, but to bring up allegations like that is big stupid.
That makes no sense.
The fact that they won their case gives them even greater cause to ensure that what they want goes through. That doesn't mean they have to be classy about it, nor are Internet-based means of sabotage implausible (given that the IA is literally about putting things up on the Internet that some want taken down).
You don’t think you’re being a bit harsh here?
Conspiracy theorists exhaust many.
People with solid info sec knowledge: this is a good opportunity to offer your expertise pro-bono for a good cause!
[flagged]
They're buried in these offers right now.
I wonder how many offers are legitimate.
An org amidst an attack might not be the most open to giving credentials and access to strangers.
why not? it's already been given away
To one (or two) attackers, it can always be worse.
At this point they should consider a rewrite from scratch. I bet they are running a tech stack from 1992.
It’s incredibly sad to see threat actors attack something as altruistic as an internet library. Truly demoralizing to see such degeneracy.
When there are plenty of people who are steeped in the dogma of Imaginary Property, and whose lives depend on it, it's not too surprising.
FYI: "Money" is imaginary property. Not sure you want to call people supporting "imaginary property" dogmatic. It's what our society is built on.
Money is not imaginary. You can touch and interact with it.
That's like saying movies aren't imaginary either because there's blu-rays. Even if we take that point at face value though, the vast majority of money is imaginary, only existing on ledgers. When the fed "prints money", it's just adjusting an entry on a database somewhere.
Anything with tons of traffic going to it is a target. It has nothing to do with what the entity does, more with what potential reach it has. Criminal behaviour is what it is. People pulling loads of visitors need to properly secure their shit, to prevent their customers becoming their victims.
Seems like the actor did it only for the street cred, and the second breach is only a reminder that IA didn't properly fix it after the first breach.
Could be worse.
A different framing is: be grateful that it's these types of people breaching IA and being vocal about it & asking IA to fix their systems. Others might just nuke them, or subtly alter content, or do whatever else bad thing you can think of.
They're providing a public service by pointing out that a massive organization controlling a lot of PII doesn't care about security at all.
Not defending attacker, because I see IA as common good. That said one of the messages from this particular instance reads almost as if they were trying to help by pointing out issues that IA clearly missed:
"Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine your data is now in the hands of some random guy. If not me, it'd be someone else."
I am starting to wonder if the chorus of 'maybe one org should not be responsible for all this; it is genuinely too important' has a point.
There are many state actors that attack targets of opportunity just to cause chaos and asymmetric financial costs.
Blame bad leadership.
Is there a reason to blame the victim, rather than the attackers?
I’m asking seriously - did IA do shitty things that make them a worthy cause for politically/ideologically motivated hacking?
I imagine they're referring to the fact that the leadership showed extremely bad judgement in deciding to pick a battle with the major publishing companies that everyone knew they would lose before it even began [0].
I don't think that justifies blaming the victim here, and from what I can see the attacker doesn't seem to be motivated by anything other than funsies, but I absolutely lost a lot of faith in their leadership when they pulled the NEL nonsense. The IA is too valuable for them to act like a young activist org—there's too much for us to lose at this point. They need to hold the ground they've won and leave the activism to other organizations.
[0] https://www.wired.com/story/internet-archive-loses-hachette-...
> there's too much for us to lose at this point
Feeling entitled?
"Us" means all of humankind for hopefully many generations to come. It's not about my personal entitlement, it's that the IA serves a vital role for humanity (one which they fought hard to make permissible).
Only if you don't care about history
I don't know what their funding model looks like but if they have some cash I'd say hiring a security team would be on top of the list of things to invest in.
I believe that, at this point in time at least, IA's funding model consists of sweating profusely while awaiting a colossal legal judgement.
Does anyone know who is targeting the Internet Archive, and why? I get the impression the attacks are too sophisticated for it to just be vandal punks.
I get the impression it's just pissing into the salt shaker. Internet Archive is obviously held together by duct tape (okay, okay, strong and durable duct tape) and personal willpower. Moreover, its main mission is spreading data, not hiding it from others to generate revenue.
Those who don't get the salt shaker bit, here's the original of the ancient wisdom:
https://web.archive.org/web/20060619131835/http://xelios.liv...
Choose any translation:
https://malaya-zemlya.livejournal.com/697779.html
https://personal-view.com/talks/discussion/25915/humor-hacke...
https://www.linkedin.com/pulse/hacker-restaurant-alexander-s...
> I get the impression the attacks are too sophisticated for it to just be vandal punks.
What gives that impression? Everything I've seen about the attacker's messaging says "vandal punk(s)" to me, and nothing in what I've seen of the IA's systems screams Fort Knox. It wouldn't surprise me if they actually had a pretty lax approach to security on the assumption that there's very little reason to target them.
The group that claimed to be responsible for the first hack was said to be Russian-based, anti-U.S., pro-Palestine, and their reasoning for the attack was because of IA's violation of copyright....
I think you should draw your own more informed conclusions, but it smells a lot like feds to me.
What do Palestine, Russia, and the U.S. have to do with the Internet Archive? The Internet Archive is a supremely boring target politically.
That's the point they're making. It's such a seeming non-sequitur that people are suspicious and coming up with fun theories.
He's pointing out that it doesn't make any sense: why would someone pro-Russian and anti-US care about violating western IP? In reality, it's the opposite: Russia is happy to help with that because they think it helps weaken the west.
With the amount of comments calling for a leadership change my tinfoilhat theory is that this is a concerted effort to get a leadership change.
Is it sophisticated if IA leaves the door wide open? I blame shit leadership.
It strikes me as reasonable to assume (or at least strongly bet on) -- I'm not sure of the right phrase for it -- but like a mercenary type operation on behalf of some larger old media company?
There's just too much "means, motive and opportunity" there.
I'd like to imagine a world where every lawyer, when their case is helped by a Wayback Machine snapshot of something, flips a few bucks to IA. They could afford a world-class admin team in no time flat.
That's a terrible solution. The Wayback Machine takes down their snapshots at the request of whoever controls the domain. That's not archival.
If the state of a webpage in the past matters to you, you need a record that won't cease to exist when your opposition asks it to. This is the concept behind perma.cc.
No, they don’t delete the archived content. When the domain’s robots.txt file bans spidering, then the Wayback Machine _hides_ the content archived at that domain. It is still stored and maintained, but it isn’t distributed via the website. The content will be unhidden if the robots.txt file stops banning spiders, or if an appropriate request is made.
In some cases they do appear to delete, on request.
edit: "Other types of removal requests may also be sent to info@archive.org. Please provide as clear an explanation as possible as to what you are requesting be removed for us to better understand your reason for making the request.", https://help.archive.org/help/how-do-i-request-to-remove-som...
Nope. Nothing is deleted, just hidden.
How do you know?
I worked there for a short while.
So if the Internet Archive accidentally archived child porn, they wouldn’t delete it?
I suspect they DO delete some things.
Don't be asinine; of course there are exceptions. But the general rule is that nothing is deleted. Even if you have a fancy expensive lawyer send them a C&D letter asking them to delete something or else, they’ll just hide it. You can’t tell the difference from the outside. In fact there are monitoring alarms that are triggered if something _is_ deleted.
Claiming to have deleted something while just having hidden from public view… that’s basically begging content owners to sue and very easily win damages.
Copyright only regulates the distribution of copies of copyrighted works. Possessing copies and distributing copies to other people are two different things.
If you were photocopying a textbook and giving it to your classmates, the publisher could have their lawyer send you a Cease and Desist letter telling you to stop (or else). But if they told you to burn your copy of the textbook then they would be overreaching, and everyone would laugh at them when you took that story to the papers.
Legal reasoning from made‐up examples is generally a bad idea, but I think you can safely reason from that one.
I’m not privy to the actual communications in these cases, but I suspect that instead of replying back with “we deleted the content from the Archive”, they instead say something anodyne like “the content is no longer available via the Wayback Machine”. Smart lawyers will notice the difference, but then a smart lawyer wouldn’t have expected anything else.
What’s the reasoning behind hiding content upon request? Doesn’t that defeat the purpose of archival?
My intuition says there are 3 cases in which content ceases to be available at the original site:
- The host becomes unable to host the content for some reason (bankruptcy, death, etc.) in which case I assume the archive persists.
- The host is externally required to remove the content (copyright, etc.) in which case I assume IA would face the same external pressure? But I’m not sure on that.
- The host/owner has a change of heart about publishing the content. This borders more on IA acting as reputation management on the part of the original host/owner. Personally I think this is hardest to defend but also probably the least common case. In this case I’d think it’s most often to hide something the original host doesn’t want the public finding out later, but that also seems to make it more valuable to be publicly available in the archive. Plus, from a historian/journalist perspective, it’s valuable to be able to track how things change over time, and hiding this from the public prevents that. Though to be honest I’m kind of in two minds here because on the other hand I’m generally of the opinion that people can grow and change, and we shouldn’t hold people to account for opinions they published a decade ago, for example. I’m also generally in favor of the right to be forgotten.
Would appreciate your thoughts here.
It’s all about copyright. Copyright law in the US gives a monopoly on distribution of copies of things (hand‐waving because the definitions are hard, basically artistic works) to their author. Of course authors usually delegate that right to their publisher for practical and financial reasons. There are some fair use exceptions, but this basically makes it illegal for anyone else to make and distribute copies of the author’s work. Again, hand‐waving because I don't want to have to write a dissertation.
When IA shows you what a website looked like in the past, they are reproducing a copyrighted work and distributing it to you. In some cases, perhaps many, this is fair use. IA cannot really know ahead of time which viewers would be exercising their fair use rights and which would not. Instead, IA just makes everything available without trying to guess whether the access would fall under fair use or not. That means that many times, possibly most of the time, IA is technically breaking the law by illegally distributing copies of copyrighted works.
But _owning_ a copy of a copyrighted work is never prohibited by copyright. It doesn’t matter how you got the copy either.
Therefore, pretty much any time someone asks for something to be hidden or removed on copyright grounds, they go ahead and hide it. They don’t bother to delete it though, because copyright doesn’t require them to. If a copyright holder asks for it to be deleted then they are overreaching, and should know that any sane person would object. But as far as I am aware IA doesn’t actually bother to object in writing; they just hide the content and move on.
This means that researchers can visit the archive in person and request permission to see those copies. For example, if you are studying the history of artistic techniques in video games using emulated software on IA, you might eventually notice that all the games from one major publisher are missing (except, iirc, the original Donkey Kong, because they don’t actually own the copyright on that one). You could then journey to the Archive in person to see the missing material and fill in the gaps in your history. Or you could just ignore them entirely out of spite. This is no different from viewing rare books held by any library, or viewing unexhibited artifacts held by a museum, etc.
They do delete entire domains from the archive upon request & proof of ownership.
Again, no they don’t. They just hide them.
Ooo, excellent. Yes, hiding items is imperfect, but I understood that it was legally required or something. (IANAL and IDFK, TBH) I wonder how perma.cc gets around that.
I'm afraid that it just hasn't been tested in court yet.
I haven't read this paper yet, but...
https://www.tesble.com/10.1080/0270319x.2021.1886785
from the abstract:
> The article concludes that Perma.cc's archival use is neither firmly grounded in existing fair use nor library exemptions; that Perma.cc, its "registrar" library, institutional affiliates, and its contributors have some (at least theoretical) exposure to risk
It seems that the article is about copyright, but of course there are several other reasons that might justify takedown of content stored on perma.cc:
- Right to be forgotten... perma.cc might be able to ignore it, but could this lead to perma.cc being blocked by European ISPs?
- ITAR stuff
- content published by entities recognized by $GOVERNMENT as terrorist organizations
- revenge porn
- CSAM
Most likely by breaking the law.
That's correct, but only for present evidence; what about past evidence that you didn't know you needed until it was too late? IA is broad enough to cover the past five times out of ten.
It's Matt Mullenweg trying to erase the vast records of his deranged megalomania.
I sent them a resume almost a year ago, and got nothing back in response until yesterday. Looks like they are going through their backlog right now to find more hands.
Interesting, for a security position?
It was a while ago, I think it was for their general position option, though I did talk about sec experience in it
Ouch. Once can happen, twice in a row...
Once makes the second time more likely. Shows you are a soft target.
The Library of Congress should be archiving the Internet and it should have the budget required to do so.
This is in line with its mission as the "Library of Congress". Being able to have an accurate record of what was on the Internet at a specific point in time would be helpful when discussing legislation or potential regulation involving the internet.
The Library of Congress does currently archive limited collections of the internet[0]. They have a blog post[1] breaking down the effort; currently it's 8 full-time staff plus a team of part-time members. According to Wikipedia[2], it's built on Heritrix and Wayback, which are both developed by the Internet Archive (the blog post also mentions "Wayback software"). Current archives are available at: http://webarchive.loc.gov/
[0] https://www.loc.gov/programs/web-archiving/about-this-progra...
[1] https://blogs.loc.gov/thesignal/2023/08/the-web-archiving-te...
[2] https://en.m.wikipedia.org/wiki/List_of_Web_archiving_initia...
As awkwardpotato writes, they do. Many national libraries all over the world treat the internet as covered by their legal deposit requirements, and crawl their respective TLDs.
Is it the same email spoofing attack vector of zendesk which was disclosed last week?
Article says API token was stolen in original breach.
Waiting for trufflehog and gitguardian vendors to come up with articles and tweets on how their tools would have stopped this incident :sweatsmile:
Is there any way IA could be mirrored in read-only mode, while security concerns are addressed?
Depends on the topology, but my guess would be no. Generally speaking, recovering from a compromise requires a lot of non-public work to be done in a very short time period. If they don't know how they were initially compromised (and you can't take the attacker's word on things), simply throwing up another copy isn't going to fix the issue and often eggs them on to continue.
You basically have to re-perimeterize your topology with known-good working security, and re-examine trusted relationships starting with a core group of servers and services, then expanding outwards, ensuring proper segmentation along the way. It's a lot easier with validated zero-trust configurations, but even then it's a real pain (especially when there is a hidden flaw in your zero-trust config somewhere) and it's very heavy on labor. Servers and services also need to ensure they have not deviated from their initial known desired states.
Some bad guys set traps in the data/services as timebombs, that either cross-pollinate or re-compromise later. There are quite a lot of malicious ****s out there.
Do any organizations have a mirror of this?
Even if it's not publicly available...
The Internet Archive had legal gems such as the Jamendo Album Collection, a huge CC haven. Yes, most of it under NC licenses, but for non-commercial streaming radio with podcasts, these have been invaluable.
Do you know Nanowar? They began there.
Also, as commercial music has been deliberately dumbed down for the masses (on paper, not just cheap talk), discovering Jamendo and Magnatune in the late '00s was like crossing into a parallel universe.
> "It's dispiriting to see that even after being made aware of the breach weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets," reads an email from the threat actor.
This is quite embarrassing. One of the first things you do when breached at this level is to rotate your keys. I seriously hope that they make some systemic changes, it seems that there were a variety of different bad security practices.
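Rotation itself is conceptually simple, which is what makes the weeks-long delay surprising: mint a new credential, swap it in, then revoke the leaked one at the provider. A hedged sketch of that shape (the `store` dict, service names, and token values are hypothetical stand-ins, not IA's actual secret management):

```python
import secrets

def rotate_key(store: dict, service: str) -> tuple[str, str]:
    """Swap the credential for `service` with a fresh random token.
    `store` is a stand-in for whatever secret manager is in use; a real
    rotation must also revoke the old token at the provider, since an
    attacker holding a leaked copy can keep using it until then."""
    old = store.get(service)
    new = secrets.token_urlsafe(32)
    store[service] = new
    return old, new

# Hypothetical vault holding a token exposed in a leaked repo:
vault = {"zendesk_api": "token-from-leaked-gitlab-repo"}
old, new = rotate_key(vault, "zendesk_api")
assert vault["zendesk_api"] == new and new != old
```

Overwriting the stored value is the easy half; the step that actually shuts the attacker out is the server-side revocation, which is why merely redeploying without rotating changes nothing.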
IA is in bad need of a leadership change. The content of the archive is immensely valuable (largely thanks to volunteers) but the decisions and priorities of the org have been far off base for years.
Do you have any examples?
Putting the organisation at risk by playing chicken with large publishing corporations. Trying to stretch fair use a little too far so they had to go to court.
[flagged]
I don't believe IA itself takes down pages that kiwifarms archives/links to. Rather they get a request to take it down and comply with it (correct me if I'm wrong here). I think IA is actually in a tough spot on this issue because they might be able to be sued eg. for defamation if they don't take down pages with personal info after a request to do so is made. Lastly, I doubt any new leadership would be less harsh on kiwifarms.
There was no illegal content on kiwi farms. Even then, I’d say taking down a single page by request is understandable. However, they surrendered to the mob and chose to stop archiving the entire site. This was to censor any criticism of the people involved, but as a result, we lost all of the other information on the rest of the site as well. It’s clear this organization cannot handle pressure, and is relying on people treating it kindly.
They chose to stop serving archives of a site that had started explicitly using them as a distribution mechanism to get around a much broader attempt to censor it.
I'm curious what other information on that site you think was valuable to have available to the general public. Nothing has been lost in terms of historical data; it's only the immediate dissemination that has been slowed.
I'm really trying to understand why I should disagree with the IA's choice here. The IA is an archival service, not a distribution platform and it is not their job to help you distribute content that other people find objectionable. Their job is to make and keep an archive of internet content so that we don't lose the historical record. Blocking unrestricted public access to some of that content doesn't harm that mission and can even support it.
the funny thing about the internet archive is that anyone else on this planet could do exactly what they are doing, but they consistently choose not to.
kiwifarms could spin up their own infrastructure, serve their own content for the world, but it turns out technology is a social problem more than a technical problem.
anyone that wants to stand up and be the digital backbone of “kiwi farms” can, but only the internet archive gets flack for not volunteering to be the literal kiwi farm.
for example, the pirate bay goes offline all the time, but it turns out the people that use it, care enough to keep it online themselves.
That's something I completely support. There's a limit and that site crosses it.
It's the least-worst option. Remember when that happened with Mozilla? Now they're an ad company. Take the bad (some missteps re: multiple lending during the pandemic, not rotating keys immediately after a hack) with the good (staying true to the human-centric mission and not the money flows).
[flagged]
[dead]
I support archival of films, books, and music, but those items need to be write-only until copyright expires. The purpose of the Internet Archive is to achieve a wide-reaching, comprehensive archival, not provide easy and free read access to commercial works.
Website caches can be handled differently, but bulk collection of commercial works can't have this same public access treatment. It's crazy to think this wouldn't be a huge liability.
Battling for copyright changes is valiant, but orthogonal. And the IA by trying to do both puts its main charter--archival--at risk.
The IA should let some other entity fight for copyright changes.
I say this as an IA proponent and donor.
I'd agree with you if you live in a country where you can walk into your local library and read these for "free." For people who live where there may not even be a library, your argument makes no sense except to make the publishers richer. They typically price some of these books at "library prices" so normal people won't be able to afford them, but libraries will.
Copyright is copyright. Not liking the idea of a publisher owning the rights to content they published doesn't mean you have a right to their content, let alone worldwide distribution of that content.
What makes you feel entitled to the content of the publisher before the copyright expires? Do you feel that you deserve access to everything because you've deemed the concept of ownership around book publishing immoral?
You can't just take a digital copy of a physical book and give it to everyone worldwide. That isn't your choice or decision to make nor is it ethical to ascribe malice to simply retaining distribution rights to content they own.
"Make publishers richer", it's actually just honoring the concept of ownership...
I don’t like the idea of infinite ownership, which is the current problem of copyright. The public may never be able to own these ideas and build off of them. Further, just because you own something in one country doesn’t mean you can own it in another country. For a physical example, you can’t own a gun in the US and take it to Australia.
If publishers didn’t engage in tactics like “library pricing” and preventing people from actually purchasing the books, I might feel differently. Right now, I see this archiving stuff as a Robin Hood story (which fwiw, every version of this story you may have seen/heard is probably still copyrighted) and I hope the publishers die or are replaced.
> I support archival of films, books, and music, but those items need to be write-only until copyright expires.
Which means no one alive today would ever be able to see them out of copyright. It also requires an unfounded belief that major copyright owning companies won't extend copyright lengths beyond current lengths which are effectively "forever".
[dead]
> but bulk collection of commercial works can't have this same public access treatment
And it doesn't.
The Internet Archive Lending Library did. And there are music, movie, and video game ROMs found throughout the user uploads.
IA should collect these materials, but they shouldn't be playing fast and loose by letting everyone have access to them. That's essentially providing the same services as the Pirate Bay under the guise of archivism.
This puts IA at extreme legal risk. Their mission is too important to play such games.
>"It's dispiriting to see that even after being made aware of the breach weeks ago..."
These people are not dispirited whatsoever, if anything they are half-cocked that these script kiddies found an easy target.
The words came from a message written by the people you are calling script kiddies, rather than being editorializing by bleepingcomputer, as you seem to believe.
script kiddie or blackhat hacker is irrelevant. IA has shit security practices, and that's a fact regardless of who figures that out
I highly doubt they are script kiddies. More than likely they are state actors or mercenaries of state actors attempting to bring down the free transmittal of information between regular folks. IA evidently has not so good security and wikipedia must be doing pretty well I guess? I can’t recall the last time one of these attacks worked on Wiki.
Why would they publicly call them out and lay open the way they breached them if they were "attempting to bring down the free transmittal of information between regular folks"?
They could have done much worse but they chose not to and instead made it public. Which state actor does that?
Side note: "half-cocked" means not fully prepared.
There are many "first things" you need to do if breached, and good luck identifying and doing them all in a timely fashion if you're a small organization, likely heavily relying on volunteers and without a formal security response team...
[dead]
Restating my love for Internet Archive and my plea to put a grownup in charge of the thing.
Washington Post: The organization has “industry standard” security systems, Kahle said, but he added that, until this year, the group had largely stayed out of the crosshairs of cybercriminals. Kahle said he’d opted not to prioritize additional investments in cybersecurity out of the Internet Archive’s limited budget of around $20 million to $30 million a year.
https://archive.ph/XzmN2
In security, industry standard seems to be about the same as military grade: the cheapest possible option that still checks all the boxes for SOC.
Military grade has different meanings. I’ve worked in the electronics industry a long time and will say with confidence that the pcbs and chips we sent to the military were our best. Higher temperature ranges, much more thorough environmental testing, many more thermal and humidity cycles, lots more vibration testing. However we also sell them for 5-10x our regular prices but in much lower quantities. It’s a failed meme in many instances as the internet uses it though.
Basically, whatever the liability insurance wants for you to be in compliance, then that's the standard.
Hot take, this is the way it should be. If you want better security then you update the requirements to get your certification.
Security by its very nature has a problem of knowing when to stop. There's always better security for an ever increasing amount of money and companies don't sign off on budgets of infinity dollars and projects of indefinite length. If you want security at all you have bound the cost and have well-defined stopping points.
And since 5 security experts in a room will have 10 different opinions on what those stopping points should be— what constitutes "good-enough" they only become meaningful when there's industry wide agreement on them.
There never will be an adequate industry-wide certification. There is no universal “good enough” or “when to stop” for security. What constitutes “good enough” is entirely dependent on what you are protecting and who you are protecting it from, which changes from system to system and changes from day to day.
The budget that it takes to protect against a script kiddie is a tiny fraction of the budget it takes to protect from a professional hacker group, which is a fraction of what it takes to protect from nation state-funded trolls. You can correctly decide that your security is “good enough” one day, but all it takes is a single random news story or internet comment to put a target on your back from someone more powerful, and suddenly that “good enough” isn’t good enough anymore.
The Internet Archive might have been making the correct decision all this time to invest in things that further its mission rather than burning extra money on security, and it seems their security for a long time was “good enough”… until it wasn’t.
Yep. And worse, no matter how much you pay for security, it is still possible for someone to make a mistake and publish a credential somewhere public.
> since 5 security experts in a room will have 10 different opinions
If that happens you need to seriously rethink your hiring process.
This ^
We can’t all have the latest EPYC processors with the latest bug fixes using Secure Enclaves and homomorphic encryption for processing user data while using remote attestation of code running within multiple layers of virtualization. With, of course, that code also being written in Rust, running on a certified microkernel, and only updatable when at least 4 of 6 programmers, 1 from each continent, unite their signing keys stored on HSMs to sign the next release. All of that code is open source, by the way, and has a ratio of 10 auditors per programmer with 100% code coverage and 0 external dependencies.
Then watch as a kid fakes a subpoena using a hacked police account and your lawyers, who receive dozens every day, fall for it.
[flagged]
No, it’s your demeanor that is unbecoming and not worth engaging with. Villainizing people because your poor behavior didn’t successfully bait them into replying the way you wanted is childish too. Take a breather.
[dead]
A non-grownup analysis is to criticize a decision in hindsight. If Internet Archive shifted funds to security, it would mean cutting something from its mission. Given their history, it makes sense IMHO to spend on the mission and take the risk. As long as they have backups, a little downtime won't hurt them - it's not a bank or a hospital.
The Internet Archive has a management problem. They seem to be more comfortable disrupting libraries than managing an online, publicly accessible database of disputed, disorganized material.
Despite all of the positive self-talk, I don't know if they realize how important they are, or how easy it would be for them to find good help and advice if their management were transparent and everything was debated in public. That may have protected it to some extent; as a counterexample, Wikipedia has been extremely fragile due to its transparency and accessibility to everyone. With IA being driven by its creator's ideology, maybe that ideology should be formalized and set in stone as bylaws, and the torch passed to people openly debating how IA should be run, its operations, and what it should be taking on.
I don't mean they should be run by the random set of Confucian-style libertarian aphorisms that is running the credibility of Wikipedia into the ground, but Debian is a good model to follow. Or maybe do better than both?
> Debian is a good model to follow.
While I have no idea how Debian is actually funded, I'd agree. One issue might be that the Internet Archive actually needs to have people on staff; I'm not sure Debian has that requirement. You're not going to get people to man scanners or VHS players 8 hours a day without pay, at least not at this scale.
The Internet Archive needs a better funding strategy than asking for money on their own site. People aren't visiting them frequently enough for that to work. They need a fundraising team, and a good one.
Finding managers is probably even harder. They can't get a normal CEO type, because they aren't a company, and the kind of people who apply to or are attracted to running a non-profit, serve-the-community, don't-be-evil organisation are frequently bat-shit crazy.
Don't forget the time Brewster tried to run a bank -- Internet Archive Federal Credit Union. Or that the physical archives are stored on an active fault line and unlikely to receive prompt support during an emergency. Or that, when someone told him that archives are often stored in salt mines he replied, "cool, where can I buy one?"
> Confucian-style libertarian aphorisms that is running the credibility of Wikipedia
Can you elaborate? I'm aware of Wikipedia having very particular rules and lots of very territorial editors, but I'm not sure how this runs their credibility into the ground aside from pissing off the far right when they come in with an agenda to push.
https://www.wired.com/story/internet-archive-memory-wayback-...
I appreciate their ethos and I've used the site many times (and donated!), but clearly it's at the point where Kahle et al just aren't equipped either personally (as a matter of technical expertise) or collectively (they are just a handful of people) to be dealing with what are probably in many cases nation-state attacks. Kahle's attitude towards (and misunderstanding of) copyright law is IMO proof that he shouldn't be running things, because his legal gambles (gambles that a first year law student could have predicted would fail spectacularly) have put IA at long term risk (see: Napster). And this information coming out over the past few weeks about their technical incompetence is arguably worse, because the tech side of things are what he and his team are actually supposed to be good at.
It's true that Google and Microsoft and others should be propping up the IA financially but that isn't going to solve the IA's lack of technical expertise or its delusional hippie ethos.
A genuine question to commenters asking to "put a grownup in charge of the thing" and saying that "Kahle shouldn't be running things": he built the thing, why exactly he can't run it the way he sees fit?
He is. But at the cost of the greater good.
Most of us care mainly about the Wayback Machine and archiving webpages; not borrowing books still under copyright and fighting publishers.
Speak for yourself; the Internet Archive successfully increased its scope and made creative contributions to case law (although it lost at the appeals court).
> the greater good
(Hot Fuzz reference. https://www.youtube.com/watch?v=oQzrR6nOkYg )
A good place to direct that question might be in a reply to the person who made that comment.
[dead]
[dead]
[dead]