A nice social attack is to create a website that looks like the Internet Archive, call it archive.newtld, and use it to create social proof of things you didn't actually do. "Oh yeah, the Washington Post did a redesign, but here are my past 10 posts, which I saved in the archive: link"
In a post-truth internet, verifying archives is going to be tough, and unless there's some other form of verification they're going to become useless fast for "proving" purposes.
You can think bigger and use this to forge stories about anything you want on any website. Nobody checks the authenticity of archive URLs, and there are already several such sites. On top of that, many of these services rewrite URLs, so verification is hard unless there's some authoritative source.
Yes, pretty easily. In general, actively faking old data, and especially faking it to convince the public, is not what should be our main concern. Any given archive site can try that once, with a high risk of being caught quickly.
Keybase would have been a good solution: essentially a GUI for a PGP keyserver, but with social proofs like "the owner of this account also controls the domain washingtonpost.com (as evidenced by publishing the key fingerprint as a TXT record)". Too bad about the Zoom acquisition; development has stalled since then.
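The DNS-proof idea can be sketched in a few lines. This is a simplification, not Keybase's actual protocol (Keybase publishes a signed verification token in the TXT record, not a bare fingerprint), and the records here are hypothetical; in practice you'd fetch them out of band, e.g. with `dig +short TXT example.com`.

```python
import re

def domain_proves_fingerprint(txt_records, fingerprint):
    """Check whether any DNS TXT record for a domain publishes the
    claimed PGP key fingerprint. Simplified sketch: Keybase's real
    proof uses a signed verification token rather than the raw
    fingerprint. txt_records are fetched separately (e.g. via dig)."""
    want = re.sub(r"\s+", "", fingerprint).upper()
    for record in txt_records:
        if want in re.sub(r"\s+", "", record).upper():
            return True
    return False

# Hypothetical TXT records, purely for illustration:
records = [
    '"v=spf1 include:_spf.example.com ~all"',
    '"pgp-fingerprint=59A2 BA4A C9A5 2CBA 5F54 C90C 1B5A 3E2F 7D11 22C3"',
]
print(domain_proves_fingerprint(
    records, "59A2BA4AC9A52CBA5F54C90C1B5A3E2F7D1122C3"))  # True
```

The comparison is whitespace- and case-insensitive because fingerprints are conventionally printed in spaced groups of four hex digits.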
I also don't know exactly what bag of bytes you would check the signature against. The raw normalized text of a story? Maybe if news articles were distributed as PDFs this would be a solved problem, but news sites don't actually want their content to be portable; they want you to sign in to their domain to view anything.
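One plausible answer to the "which bag of bytes" question is to canonicalize the article text before hashing, so that trivial re-renderings (different whitespace, different Unicode encodings of the same characters) still produce the same digest. A minimal sketch, with a canonicalization scheme of my own choosing:

```python
import hashlib
import re
import unicodedata

def canonical_digest(text):
    """SHA-256 of a canonical form of an article's text:
    NFC-normalize the Unicode, collapse runs of whitespace,
    strip leading/trailing space. Cosmetic reflows of the same
    story then hash to the same value."""
    canonical = unicodedata.normalize("NFC", text)
    canonical = re.sub(r"\s+", " ", canonical).strip()
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Same story, different whitespace -> same digest:
a = canonical_digest("The Washington Post\ndid a redesign.")
b = canonical_digest("The  Washington Post did a redesign. ")
print(a == b)  # True
```

The hard part isn't the hashing, it's agreeing on the canonicalization: strip markup or keep it, include the headline and byline or not. Without a shared convention, two honest parties hash different bytes for the "same" article.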
If you know you might want to prove something's authenticity later, post its SHA-256 somewhere now: multiple social media sites, large and trusted web archives, or cryptocurrency blockchains. The last is the strongest proof, with lots of money making sure it stays immutable.
Or hash everything you produce and consume, then hash the list of hashes and post that.
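Hashing the list of hashes is essentially a batch commitment: you publish one digest now, and later reveal the per-item hashes to show a particular item was in the batch. A minimal sketch (a real system would use a proper Merkle tree so you don't have to reveal every leaf):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def commit(items):
    """Hash each item, then hash the sorted list of item hashes.
    Publishing only the final digest commits you to the whole batch."""
    leaf_hashes = sorted(sha256_hex(item) for item in items)
    return sha256_hex("\n".join(leaf_hashes).encode()), leaf_hashes

def verify(item, leaf_hashes, commitment):
    """Later: reveal the leaf list and show one item was committed."""
    return (sha256_hex(item) in leaf_hashes
            and sha256_hex("\n".join(sorted(leaf_hashes)).encode()) == commitment)

docs = [b"article one", b"article two", b"screenshot bytes"]
root, leaves = commit(docs)
print(verify(b"article two", leaves, root))    # True
print(verify(b"forged article", leaves, root)) # False
```

Sorting the leaves makes the commitment independent of item order, so you don't leak (or have to remember) the sequence in which things were hashed.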
Or, alternatively, counter forgeries by capturing more data. For the web, but for all data in general: sensor RAWs to prove an image isn't "AI", browser and network-stack RAM dumps to prove a website's authenticity, etc. There are what, a couple dozen accelerometers, GPSes, LIDARs, etc. on the latest iPhones?