I'm old enough to remember back in 2005 when terrorists in Iraq claimed to be holding a US soldier hostage, and it turned out the whole thing was staged using photos of a doll:
It's pretty easy to see that the photo is a hoax, but many news outlets didn't notice and ran the story anyway.
I have little faith in our ability to detect deepfakes using these recommendations. It seems we'll have to assume something is fake unless we have a way to cryptographically verify its provenance.
I always have to think of the Wu Ming Foundation/former Luther Blisset collective and their fake news/LARPing missions[1] that showed how easy it was to fool major news outlets back in the 90s. Too bad that now those same techniques have been weaponized with great success.
Interesting lecture on the connections between Q and QAnon by Wu Ming 1, also detailing some of their LARPing and hoaxing exploits: https://www.youtube.com/watch?v=VdcAT7pXYko
NPR is high in factual accuracy (they very rarely lie) while being extremely selective in the stories they tell and the facts they omit to push their bias. They seem to be worse in that respect than they used to be.
I think that's still much better than a news source that just outright lies to your face whenever it's convenient. As long as you remember that the bias is there, you can at least trust that what they do say is true. When a news org tells a mix of truths and lies, you don't even have a foundation of trustworthy fact to start from.
NPR dutifully ran the "Saddam Weapons of Mass Destruction" *fake* 100s of times with utter sincerity.
ps- also the "dumping babies on the ground" fake too, IIRC, though maybe only a few times on that one. Congressional testimony on camera by a daughter of a State Department official, IIRC, used to justify Bush I's Gulf War after the invasion of Kuwait.
Almost every issue on the 2A they discuss is untruthful. The Rittenhouse case comes to mind immediately, where they lied about all the encounters in ways that were proven wrong in court. They avoid anything that makes the left look bad. Lying by omission is still lying. They also continuously link stories to racism, LGBTQ issues, and "people of color" that have nothing to do with any of that.
Keep listening to them if you want. I used to be an avid listener, just realize that they've lost a TON of loyal listeners due to their heavily biased reporting in the last few years.
Here's a fun one for ya. They backtracked on it after not getting away with it but there are many many more where no one notices and they just get away with it.
not really possible, since the people who planned and staged the thing the story was about were pretending to actually do something: they hired people and rented things to fake an origin story, which the news then covered. meanwhile another team, which the first team wasn't aware of, went a bit overboard, brought in surface-to-air missiles, and shot down a passenger plane full of dying people.
you see, the issue is not NPR being untruthful. the issue is NPR not even having a chance to check. remember how "logic"/"rationality" works on faked premises? so smooth and yummy.
No, it implies that it's not possible to lie if you're left wing, and that if you tell the truth then it's most likely left wing. Mainstream media is almost all left-wing, and all of them lie or omit the truth.
Perhaps ban them from receiving advertising revenue. Force them to go subscription-only. Advertising revenue comes from clicks, not reads; we want people to actually read and contemplate. This is closer to what old-school newspapers did.
In theory, subscribers will leave outlets that lie to them for outlets that are more honest
and fact-based. Or, if I'm pessimistic, subscribers will give money to whoever does rage-bait the best...
Ban advertising-supported information sources. If people want information, they need to pay for it, so that the information serves them, not the people with a budget for manipulation.
You could, I suppose, allow adverts where the information is strictly for entertainment, but given that was Fox News' defence... I think it probably wouldn't work.
News operations live and die by eyeballs and clicks - they will say or report anything if it will give them more traffic, or if it appeals to their audience; don't kid yourself, getting you to click on a news story - true or not - IS the goal.
They ran the story as a claim that a US soldier was taken hostage, not as a fact that a US soldier was taken hostage. From the NYT article:
"The authenticity of the militants' claim Tuesday could not be immediately verified. Defense officials at the Pentagon in Washington said that the U.S. military was investigating the incident but that had no indication any of its soldiers were missing in Iraq."
If I take a photo with a cryptographically signed camera of a polaroid containing an image that I deepfaked, how does that ensure the image can be trusted?
Those depth estimation algorithms can't be used to distinguish a photo of a photo from just a photo. They will report false depth in a photograph of a flat photograph.
Yes, that's my point. You can't rely on the depth map in the image metadata to be the differentiator because it can easily be faked with depth estimation.
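To make the point concrete, here's a minimal sketch (assuming the MiDaS monocular depth model via torch.hub, and a hypothetical input file): a single-image depth estimator fed a photograph of a flat printed photo still predicts depth for the scene depicted, not for the flat surface that was actually in front of the lens.

    # Sketch only: any monocular depth estimator infers depth from image
    # content, so a photo *of a photo* still yields a 3D-looking depth map
    # rather than the flat plane that was actually photographed.
    import cv2
    import torch

    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("photo_of_a_photo.jpg"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        depth = midas(transform(img)).squeeze()

    # A genuinely flat subject would give a near-constant map; this won't.
    print("depth range:", depth.min().item(), "to", depth.max().item())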
It's dismissive to say 'simply'. In many cases yes they are, and in many cases they aren't. It's just as likely the journalist or agency is serving their own political or activist agenda.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
What you call 'unsubstantive' and 'flamebait' are sincerely held opinions. If you want to express how a comment is unsubstantive or wrong, feel free to respond to that comment - I always endeavour to respond - and if I have it wrong I will thank you for any corrections.
My position is that everyone is 'on the take', and as that appears to be the truth, I don't want to pretend otherwise. Let's just be honest about the state of the world. What is the value of pretending, as if our jobs depended on it (which for most, they do), that there is some great moral cause in play, or that the future will be wonderful? "We", on this site, code the dystopia. Fine, take the money, but then don't also try to tell me how this is a good thing and for the benefit of humanity.

With the current corporate and governance structure, we are dealing with the manipulation and manoeuvring of the masses into doing things that someone, somewhere has decided will be of personal benefit. The notion that they are 'there to help' and are 'fair authorities' on how we proceed needs to be disabused, if there is genuine human progress to be made.
Part of the issue is that there is a massive excess of trust in the institutions and corporations, despite it being clear that they are socially engineering the masses to want what they have to sell. The scale of what they have achieved is remarkable.
That these institutions are taken seriously - despite their obviously self-serving and nefarious nature - is problematic for everyone, even those who want nothing to do with it, like myself. Amazingly, most high-flying people who have been through the propagandising system seem incapable of recognising how much trust they are placing in these institutions, and are unaware of how little personal discernment they apply. I am addressing that.
WRT your point that I am making unsubstantive comments: take a moment to reflect on the other comments on the site. When they are cheerleading the military, or liberal causes (unsubstantively, like me on occasion), or whatever other media-promoted causes, is that ok? If you agree with the thrust of a lightweight comment, are you also capable of telling them they are insubstantial, or do you reserve your judgements (as above) only for those who express views against what you hold?
PS I genuinely appreciate the opportunity to discuss the way HN is run with you. I have previously been banned here (2 years ago?), my comments are frequently downvoted, or upvoted and then massively downvoted, my posts flagged, etc - ie it seems that I am quietly suppressed despite my making an effort to express a sincere view. I do not understand the value of the low-level de-platforming I have experienced - all that occurs is that you create a corporate echo chamber. I can only conclude that this is likely what is desired.
However, in the name of truth, as I believe I have valid, considered opinions that happen to challenge accepted convention, and as I also value the platform and the information submitted here, with respect, I will be entirely ignoring your request as it seems unreasonable to me. Thanks again.
From an HN guidelines point of view it's not so relevant whether your opinions are sincere or not, or even whether you're right or not. HN isn't an anything-goes website—there's a particular type of game we're trying to play, which can be summed up as curious conversation. Flamebait and unsubstantive comments destroy that; so do grandiose rhetoric, ideological battle, snark, fulmination, and various other things the site guidelines ask you not to do. If you want to post here, we need you to play by those rules.
I must admit I found that element of the story...surprising. Has realtime faking gotten that good yet? Presumably there was back and forth in this call so this person was either disguising their voice or typing responses to be generated on the fly.
I know all this can be done, I'm just surprised it's reached the maturity where an attacker would choose to impersonate someone the call recipient presumably knew vs just being a vague "Bob from IT".
Although to be fair the article does say the employee was suspicious so maybe there was a delay which (if you were looking for it) you would spot.
You could probably reduce the "delay" by using a soundboard of pre-generated filler material and playing it while you type the real response: "Let me find that bookmark", "So the thing about that is...", "ummm yeah. so...", "hmmm no not really"
You can also use text macros to type the response faster. Here they were trying to get MFA access, so you could map longer phrases that will come up often, like "Okta multi-factor authentication", to numpad 1. Company name to numpad 2. IT supervisor name to numpad 3.
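A toy sketch of that soundboard-plus-hotkeys setup, just to show how little effort it takes (pygame for playback; the clip file names and key bindings are made up):

    # Hypothetical filler soundboard: numpad keys play pre-generated clips
    # so there's no dead air while the real response is being typed.
    import pygame

    pygame.init()
    pygame.display.set_mode((200, 200))  # a window is needed to receive key events

    CLIPS = {
        pygame.K_KP1: pygame.mixer.Sound("let_me_find_that_bookmark.wav"),
        pygame.K_KP2: pygame.mixer.Sound("so_the_thing_about_that_is.wav"),
        pygame.K_KP3: pygame.mixer.Sound("hmm_no_not_really.wav"),
    }

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key in CLIPS:
                CLIPS[event.key].play()  # filler plays while you type the real reply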
If you know the target of the conversation you can tailor what you pre generate. I like to mess with scam callers when I get one, and I've noticed some are using some kind of soundboard with a woman's voice (I'm pretty positive it is real and not AI) and they have a planned flow / script. If you try to deviate from the script they have some options to bring you back into it. If you ask them to repeat something you can notice it's the exact same audio snippet as before. If you accuse them of being a bot they have a few samples of the woman being shocked and mildly embarrassed. "Oh my goodness, do I really sound like a bot? No it's just been a long work day for me. I'm sorry about that."
Why type or use a soundboard? You aren't thinking Mission Impossible enough.
Live transcribing in realtime has been a thing forever, so there's no reason for me to think this couldn't all be glued together into a "voice changer" like the typical super-deep "I have your son, give me a million dollars" boxes, except that instead of doing frequency modulation, it pipes the audio to a model trained on someone's voice and applies it. Transcribing to text probably isn't even needed, because why would it be for machine-to-machine modification? It only needs to go to text for human consumption.
Raw PCM bits from audio in -> AI model trained on the victim's voice -> line out to the phone or VoIP app.
We totally have the compute to do that. Probably with our phones.
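The plumbing for that really is trivial; a rough sketch with the sounddevice library, where convert_voice stands in for a trained voice-conversion model (the model is the hard part and is just assumed here):

    # Assumed pipeline: mic in -> voice-conversion model -> line out.
    import numpy as np
    import sounddevice as sd

    def convert_voice(pcm: np.ndarray) -> np.ndarray:
        # Placeholder: a real setup would run the block through a model
        # trained on the target's voice. Here it just passes audio through.
        return pcm

    def callback(indata, outdata, frames, time, status):
        outdata[:] = convert_voice(indata)

    # ~20 ms blocks at 16 kHz; a real system would buffer more to absorb
    # model latency, which is where a telltale delay would come from.
    with sd.Stream(samplerate=16000, blocksize=320, channels=1, callback=callback):
        sd.sleep(60_000)  # run for a minute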
I can't remember which election it was, but there was a 3D animated character that pushed the limits of real-time rendering for its day when he appeared on a morning talk show and answered questions live. So the live thing has been around for quite some time. The deepfake just allows the models to look believable. Once you have a model, you can make it do anything.
Faking a famous person would seem to me to be easier (for various reasons) than faking my colleague. It's not enough to fake the sound of their voice, it's also the manner in which they speak - word choice, attitude, responses, knowledge, sense of humour etc. But I'm guessing the target of this attack only knew the fake person they were speaking to marginally.
My point is that the approach seems unnecessarily risky vs. just phoning up pretending to be someone they didn't know.
We got lazy with the internet age. We’re going to have to go back to basic principles again. That means if you didn’t see it with your own eyes, it’s not real. If someone didn’t tell you in person, then they didn’t tell you. Maybe this won’t be a bad thing after all.
This is bad epistemology. I can't possibly verify all of the facts of physics, chemistry, biology, history, economics, medicine, law, politics, etc. through my own senses and through people that I know personally. It would be impossible to have any knowledge of the world that way.
The real solution is to carefully select a variety of experts who you trust as sources of information, and rely on the consensus of those experts to determine what is true, while maintaining a healthy level of skepticism.
I think the idea is a combination of what you say and what the parent said.
Before the internet and telephone, we still took things on faith from people we met, be it travellers or politicians or newspapers. But newspapers and politicians were more local, and people didn't buy into whatever they read as if it were scripture, the way many seem to do these days.
Before the 20th century, newspapers were explicitly partisan and were basically free to publish whatever suited their agenda. It is only in the 20th century that we see a more objective journalism emerge with the ideas of journalistic ethics, reliable sources, fact-checking, etc.
I really don't think the situation today is as bad as people say it is. You can still go to the news pages (not the opinion pages) of the New York Times, Washington Post, Wall Street Journal, Financial Times, and the Economist, and the vast majority of what they publish is reliable information. We can debate over the partisan nature of what stories they choose to emphasize and editorialized headlines, but it's not like they are regularly publishing lies and nonsense.
This is how the world worked before the internet. Your high school science teacher would directly teach you things they also learned in a lab in college.
It might bring down all the bridges the internet helped build. Also, it will make the world more centralized, having to trust more on big names and institutions. As a libertarian, this is the opposite direction I’d like the world to go in.
We need to start trusting people we know again. Not institutions or celebrities on pedestals. We need to trust our family and friends and community again.
It's interesting that you've gotten so many downvotes. The sad thing is that as things are progressing in the west, a large number of people won't have families or communities to lean on.
Fertility rates going down, divorce rates going up, and young people finding it harder to navigate dating and relationships. To the latter point, soon most new relationships will start through dating apps, and these have their share of problems too. These are some of the things that I think make finding companionship and building families harder.
On the other hand, dating apps have made it easier than ever to find compatible partners, and technology means that even if your parents divorce there's no problem keeping in touch with either parent, which applies just as well to extended family/friends. It's a lot easier to keep in touch with friends after graduation than it was when your only option was often "pen pals". It's not all doom and gloom. In many ways we're more connected than we've ever been, and we're just in an awkward state where we're adapting to all the changes.
I couldn't find the comment again, but somebody else said something like this here on HN a while ago:

We live in special times. Until 100 years ago we got news only from other people, by word of mouth, so we had to decide for ourselves what was right and what was just BS. For the last 100 years we had newspapers, radio, and TV. News was mostly true; we could trust what we saw in pictures and on screen. But now pictures can so easily be fake, so we have to decide again what is true and what is BS.
this timeline is way off.. intellectuals of the mid-1600s in certain places got news from all over the world. There were contests between language groups in all the major sciences and engineering. The printing press was beginning to mass-produce content.. that was all a solid 500+ years ago.
Courts already have processes in place to challenge evidence; I’m not worried about them.
The problem with deepfakes is the instant amplification they’ll get from media and well-intentioned responses from people of good faith.
So there will be the deepfake of (insert your favorite politician) saying something unsavory and it’ll be taken at face value and a bunch of people will cancel their social media and bank accounts and protest in front of their houses etc etc.
The solution is super simple but unpalatable in today’s culture - waiting periods.
What if corporations were required by law to wait 30 days after a public spectacle before taking action? This would provide time for the victim to defend themselves and the truth to come out.
Similarly, I have always thought that passing laws in the heat of the moment results in people's rights being trampled. What if legislators were required to wait one year before passing laws in response to a public event of some sort? This would give time for passions to cool, alternatives to be considered, and impacts to be weighed.
But this is all a pipe dream, let’s just cancel them immediately like we do now.
> What if legislators were required to wait one year before passing laws in response to a public event of some sort?
Someone discovers a way to bypass regulations and legislature and easily acquire the material needed to create a dirty bomb and sets one off. Now we wait 365 days to fix the loophole.
Pretty obviously an absurd extremity for that example, but I sure as shit don't want there to be that level of lag time on important things like that, and I imagine most people would also want to be able to move faster in situations where it is dangerous not to. How do we determine where that line is, what the exceptions are, etc.? What happens when we need an exception and didn't realize it?
What happens in real life is that in emergencies, the legislature passes some “sentencing enhancement” that adds on a decade or three of prison time in the circumstance. Or they create a new bureaucracy that in the end just makes life worse for everyone, like the DHS and all the airport suckage that happened after 9/11 - if you’re too young to remember, flying on airplanes did not always suck.
These are some of the very same government entities that were caught working with their surrogates at social media companies to censor speech that they declared misinformation. In their vernacular, misinformation may be 100% true information that goes against the narrative that they want to promote.
This again? They pointed out posts they believed violated places like Twitter's TOS, and in Twitter's case, most of the time Twitter left them up and took no action against the material. A pretty shoddy censorship campaign, in my opinion; it was blown up by Elon and the partisan actors he released the data to, to support a political point they've been griping about for ages.
You're talking about something that has been ruled by two courts as a massive first amendment violation, and is heading to the Supreme Court (if they even want it), as if it were a conspiracy theory.
And what political point are you even talking about, where do you learn to use this tone about serious issues, and why the fixation on celebrities like Musk?
“ A federal district court in California dismissed the claims, and the U.S. Court of Appeals for the 9th Circuit upheld that decision. The court of appeals ruled that although “it is possible to draw a causal line from the OEC’s flagging of the November 12th post to O’Handley’s suspension,” there was no “state action” for O’Handley to challenge under the First Amendment. California certainly exercised governmental authority when it flagged O’Handley’s tweet, the 9th Circuit reasoned, but it took no explicit action restricting his speech. And although Twitter did limit O’Handley’s speech, the court explained, it was following its own rules, rather than acting on the state’s behalf.”
Where did _you_ learn this tone of victimization when discussing issues with no references? I assume Musk was brought up because he has framed this issue the same way the GP commenter did, and he is the current owner of the company in question. You can dismiss that as celebrity fixation, but it only undermines your own comment.
> The 5th Circuit appeals court saw things differently, finding that Biden administration "officials made express threats and, at the very least, leaned into the inherent authority of the President's office. The officials made inflammatory accusations, such as saying that the platforms were 'poison[ing]' the public, and 'killing people.' The platforms were told they needed to take greater responsibility and action. Then, they followed their statements with threats of 'fundamental reforms' like regulatory changes and increased enforcement actions that would ensure the platforms were 'held accountable.'... Given all of the above, we are left only with the conclusion that the officials' statements were coercive."
Except most of the posts pointed out to places like Twitter stayed up with no moderation applied, and the worst thing that's happened to Twitter recently is its acquisition by Musk.
I haven't looked into this issue in detail and was surprised to see such a brazen threat, so I wanted to find information on your quote "comply or there will be consequences". I'm not able to find that in the context of the White House or Twitter. I haven't found anything close to it yet.
> President Biden, press secretary Jen Psaki and Surgeon General Vivek Murthy later publicly vowed to hold the platforms accountable if they didn’t heighten censorship.
The government even ASKING about accounts and when they're going to be removed or censored is a clear violation of the 1st Amendment. Whether it was done every single time they asked is irrelevant. One time is more than enough. Nothing is getting blown up; you just don't like the information you're getting because it goes against the team you're rooting for.
I don't want a government or DOJ in place that allows me to have my 1st amendment rights "most of the time" except the times they don't like me having it. Are you serious with this comment?
> President Biden, press secretary Jen Psaki and Surgeon General Vivek Murthy later publicly vowed to hold the platforms accountable if they didn’t heighten censorship.
> These emails establish a clear pattern: Mr. Flaherty, representing the White House, expresses anger at the companies’ failure to censor Covid-related content to his satisfaction. The companies change their policies to address his demands.
There are no deepfake threats. "Deepfakes" is just the hype word for digitally altering photos/videos. In the past, doing this well was restricted to governments, large corporations, and large institutions. And they did it regularly. Now individual people can do it themselves. That's good. That's not bad in any form or manner.
I really enjoy generative AI and think that we're better off with it being in the open than controlled by specific corporations.
I also think this is an absurd comment. People are already making deepfakes of classmates, coworkers, etc. and sharing them around. I'll ignore the ethics of generating them for your own personal entertainment, but once they start getting shared, I can't think of any argument you could make that this isn't "bad in any manner": obviously it can cause emotional distress and damage to someone.
The difference is that rather than altered, they’re generated. It’s a new seeming-photograph that can’t be discovered as an alteration by checking against other photos to play “spot the difference”. Nobody really had this capability before.
I find your distinction between altering and creating to be ... a sidetrack. But okay, let's pretend I said "Deepfakes is just the hype word for digitally altering or creating photos/videos".
Large institutions etc. have definitely been able to create novel "photos" from scratch for a long time now. It was just a lot of work, which is why it was restricted to large groups of people with lots of capital. Now even individuals can do it. And that's good. The idea that photos could not be created prior to deepfakes is not supported by my lived experience since the 1980s. Just look at any Hollywood movie.
I see your point. I'm not sure I agree with all of the statements in your original comment but this one that caught my attention is a pretty minor distinction. CGI in movies is essentially this same thing but done manually by humans, sometimes even without input from the actor: see Carrie Fisher in whatever Star Wars movie I forgot.
There's still something about the scale of effort that seems to change this, though. For example, even if a government could hire people to produce a fake photograph, and even do so in time to use the photograph for some political manipulation, I'm not so convinced they could do so with a video. Even considering that large organizations could already do this technically, the ease this allows is pretty worrying, though I do otherwise think it is a good thing that this technology is "out of the bag", so to speak.
Deepfake videos are quite terrible right now. Frame-to-frame consistency for novel generated images in series is difficult to achieve with Stable Diffusion, even with ControlNet, etc. Maybe in another handful of years novel deepfake videos will look real. But right now they definitely do not.
Unless you're reversing your prior point and now talking exclusively about only deepfakes that alter existing video (like face replacement) and not about novel created from scratch video. The face replacement stuff can almost look real.
I don't think photos are the real issue, it's video with voice.
Think of telephone scams, and now apply deepfake versions of a family member to it. Imagine your 70yo aunt having to deal with knowing if the whatsapp audio or zoom call that sounds/looks like you is really you.
I don't think this was possible before, feel free to prove me wrong with some evidence.
In the past, to find the right person to imitate someone else's voice you'd have to do a lot of casting trials etc. that would cost a lot of money. So it was restricted to large corps, governments, institutions, etc. But now any person can do it. And that's great.
Yes, the featured article makes it clear that the tools are not the threat; the "democratization" of the tools is the specific threat they are alerting people to. Hollywood and Big Brother have been doing propaganda and fake photos since forever; that's not the problem CISA is reporting.
I worry about this. It's already quite difficult getting to the truth of something - every political party essentially flinging shit at each other with accusations of lies.
At this point, giving up entirely on any form of media seems appealing.
> The Pentagon claimed at the time that there was no chance of an explosion and that two arming mechanisms had not activated.
> A United States Department of Defense spokesperson stated that the bomb was unarmed and could not explode.
> In 2013, information released as a result of a Freedom of Information Act request confirmed that a single switch out of four (not six) prevented detonation.
Nobody should ever trust a single thing these people say.
Figures like CFOs and political leaders will have to start cryptographically signing their statements. There is no practical way to detect fakes after the fact.
All official materials should primarily be posted on the original authors' websites and signed using asymmetric cryptography. Furthermore, new open standards should be established to enable the presentation of such signatures/verification on well-known platforms like YouTube, FB, etc. These platforms should always provide a clear reference to the original material along with its digital signature.
For example, when watching a video on YouTube containing a speech by the president (provided on an official channel like the White House's), there should be a clear indication that the video has a digital signature and the option to verify it on an independent government website.
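As a sketch of the mechanics only (using the Python cryptography library; the workflow and statement text are made up, and nothing here is an existing standard):

    # Minimal Ed25519 sign/verify: the private key stays with the issuing
    # office, the public key is published on the official website.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    statement = b"Q3 revenue guidance is unchanged."  # hypothetical statement

    private_key = Ed25519PrivateKey.generate()  # held by the CFO's office
    public_key = private_key.public_key()       # published for verification

    signature = private_key.sign(statement)

    try:
        public_key.verify(signature, statement)  # raises if tampered with
        print("statement verified")
    except InvalidSignature:
        print("forged or altered")

A platform like YouTube could run that same verify step against a public key fetched from the official site and show a verification badge on the video.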
It's currently difficult to display something that a modern smartphone camera will not be able to distinguish from real, right? (Pixel artifacts, lighting too consistent, etc.)
It might be doable with an 8K TV and a source video with a lens distortion applied to cancel out the expected lens distortion of the crypto camera, so that once it's recorded, the perspective does not look like a recording of a flat video. Depth sensors would help defeat that idea.
Or you could just smear some Vaseline on the lens and tell people the lens got dirty. It hurts the credibility for anyone who knows about these cameras but I doubt the public would think about it that much.
> Yes but the idea is that you trust the camera which unique and works as a physical private key.
You're pushing a (bad) technical solution to a social problem.
Cameras that cryptographically sign their output will not solve anything. The idea has more flaws than it's possible to list, but here's a big one: do you really think a technological gimmick like that would stand up to a nation state? Do you really think the CIA, NSA, FSB, Chinese Ministry of State Security, etc. will not be able to sign whatever the hell image they want with a camera's signature?
Is that good, though? If a hole in a system can be exploited only by "the top", it may be disregarded and "the top" will be able to inject anything there; but if it is exploitable by anyone from a wide group, then info from the system will be widely distrusted and communication may work around it.
Also, how do you protect a chip from reverse engineering by anyone except "top actors"? I remember the price for reverse engineering certain ICs was between 5 and 7 figures of USD. I don't know about modern IC processes, but that may be affordable for quite a few.
How would that work with video editing? Like if someone records something and then trims it for length or needs to combine multiple streams. Seems like hardware level verification only goes so far.
For editing it does not matter if you just remove or reorder frames. Video is just a series of frames and each of them is signed; each frame can be validated to confirm its content is unmodified. If the same root key is used for another stream, then frames can be combined easily.

I don't know audio well enough to say how it works there, but potentially it can be signed in chunks as well.

Of course, one needs to consider the risk that editing can make content appear different than originally intended, when the video as a "whole" is not signed.

But for that, a different entity can be used again.
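A minimal sketch of that per-frame idea (purely hypothetical; no shipping camera works this way): sign a hash of each raw frame and keep the signatures as sidecar metadata, so trimming or reordering doesn't invalidate the frames that remain.

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()  # would live in a secure element
    public_key = camera_key.public_key()       # published by the manufacturer

    def sign_frame(frame: bytes) -> bytes:
        return camera_key.sign(hashlib.sha256(frame).digest())

    def verify_frame(frame: bytes, sig: bytes) -> bool:
        try:
            public_key.verify(sig, hashlib.sha256(frame).digest())
            return True
        except InvalidSignature:
            return False

    frames = [b"raw pixels of frame %d" % i for i in range(4)]
    sidecar = [sign_frame(f) for f in frames]

    # Trim frame 0 and reorder the rest: each surviving frame still verifies,
    # though (as noted above) nothing attests to the edit as a whole.
    edited = [(frames[i], sidecar[i]) for i in (2, 1, 3)]
    print(all(verify_frame(f, s) for f, s in edited))  # True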
You do get into issues because video files aren't just raw frames, and haven't been for ages. Plus, any changes on top of the video wouldn't just pass the frames through: current video encoding would re-encode the embedded video when the larger video it was embedded in was exported. You'd have to add support for seamless passthrough of the original frames so the signatures could be validated, plus some additional layers if you wanted to enable graphics on top of the footage.

In short, it would require completely changing how software currently handles video editing.
Let's say the camera records at 60fps.

Maybe all the data on all the channels can be recorded in chunks of 1/60s and signed separately.

Then the camera combines it into a whole playable video, but there is separate metadata for each time/byte offset that has been signed.

At the beginning, camera manufacturers might need to provide their own editors to make editing possible. And how much can we trust the camera holders, if the editor software even allows using the key from the camera for "better" editing within certain limits?
Intraframe compression, where each frame is individually compressed, is barely used any more outside of movies and other professional non-streamed production, because it barely compresses the resulting video. Most streaming and consumer video cameras use interframe compression, where you get a full frame every few frames and the rest are moving pieces of that around. This video by Captain Disillusion [0] goes over it much better than I can, and any time the video is edited, it goes through that process again of creating I-frames, P-frames, and whatever new homunculus frames are invented to further compress video while maintaining quality.
If you just cut away to the original clip and didn't have any modifications like motion graphics over the top of it, you could in theory pass through the original video with the same compression and signing without too much drama. But any modifications over that, or presenting it as picture-in-picture, would be a big difference, as you now need both the original frames and the added graphics on top.
You could, but then that only really works for rehosting the exact same video. Most places would at least embed it in another video for commentary, which involves a re-encoding step that would wipe out the original video. See my other comments for more detail, but it would require changing how we handle videos to preserve the signing info.
Follow the discussion chain back up: jacobsimon posed the question: "How would that work with video editing?"
The answer is that it doesn't. Some might try to make it work by using proprietary video editing software that signs a ledger of what edit operations were performed, or something like that, but that doesn't work. The signing keys will eventually be extracted from the video editor or the camera, or the video editor or camera will be hacked to sign something it shouldn't. You might say that this at least stops low-skilled attackers, but the misinformation that is most dangerous to humanity, that created by governments to start wars, won't be impeded by any of these schemes. The whole cryptographic signature proposal is worse than useless.
You've nailed it. This whole thread is about actually using the footage in anything other than its raw original format. As soon as you start embedding it in other footage, to present and comment on it say, you run into all sorts of issues maintaining the signatures.
It could be used to hunt down reporters and whistleblowers if the cameras have to be purchased with an ID. So the very people who would benefit might be forced to strip this extra data to protect themselves.
I wonder if you could use the camera to record deepfaked video and in effect bless a lie. Even just filming a TV set might be enough for low grade blackmail and much more complicated methods are available.
Isn't this a problem? Someone can take an actual clip of a speech, but because it's not signed by the speaker, no matter how bad the speech, it could be declared inauthentic or a deepfake because it has no signature.
For example, the White House is known to revise the text of the president's speeches when he says the wrong thing. If we only have officially released videos where the gaffes and fables are left out, how is anyone to know what he actually said?
We don't have to live in a world where people are maximally naive (even if it seems so today). That also assumes there's no signed video of the event available; usually things are recorded by more than one person, especially a speech by the president.
The biggest risk IMO is that the key immediately becomes one of the most important secrets to keep, since it holds the promise of validating anything you want to lie about.
Now you have to securely deliver those keys to the cameras, and people have to keep them up to date. With smartphones it's a bit easier because updates can just be pushed to the phone automatically, but for news orgs and other professional outfits, their cameras aren't internet-connected. So then you have a weird mishmash of deciding whether an out-of-date key is being used because it's been cracked/stolen, or because the NBC stringer just didn't update their camera before heading to the event.
* Text: You have little (definitive) clue who wrote what. You essentially have to ask the (apparent) writer.
* Photo: You used to have high confidence that a picture shows who appears to be shown. Not 100%, sure, but it's high.
* Video & Audio: You used to have very high confidence that the video including its audio are genuine. It was very difficult to replace video and/or audio.
Nowadays, none is trustworthy by default anymore. You can say: Well, just trust the company or Reuters.
Sure, but I don't think anyone cares about this case. It's not controversial. But how will they be able to verify controversial sources?
If they get sent a video claiming to show Ukrainians killing civilians, with matching outfits and speech, how can Reuters be sure about anything now?
Trust can't be given to the source, nor to the video, nor to the audio, nor to the metadata.
> Photo: You used to have high confidence that a picture shows who appears to be shown. Not 100%, sure, but it's high.
I don’t agree. Many important photos don’t show what we think they do.
The Soviet flag on The Reichstag. When it was taken and what it showed are different to the impression you get looking at the photo. It was taken after the event and the signs of looting were removed.
https://en.m.wikipedia.org/wiki/Raising_a_Flag_over_the_Reic...
There are bound to be loads more, and the faking goes way back. The US Civil War has examples where bodies were dragged around and made more dramatic. Added cannon balls in Crimean War photos etc.
This has long been a solved problem out in the real world.
Think back to the Nixon Watergate scandal. When the reporters were going to press with that, they made damn sure it was 100% real first, by interviewing varying sources, human trust, etc.
All that really changes is they can't take video and audio evidence as fact anymore. So they have to, in essence, audit the video/audio trail, so they will want to talk to the person that filmed it, make sure the story holds up, etc.
Some technology changes can help with authenticity here, but it's not really a technical problem, it's a human trust problem.
There will be learning curves and maybe one or two of the currently well known and trusted news sources totally burn their brand because they didn't do their homework. Nothing really new though.
But that is out of scope of what you are replying to.
If a CFO makes a statement and that is on the company's website we can have reasonable confidence that the CFO made that statement and we can act on it.
Reporting on a video of unknown (possibly unknowable) provenance is a different kettle of fish.
> You used to have very high confidence that the video including its audio are genuine.
The physical artifacts yes, but not the narrative they were portraying. The “news” media has been spinning fictional narratives with physically authentic video and audio for a long time.
>>>> Nowadays, none is trustworthy by default anymore.
Perhaps that is a good thing. Maybe this is a good excuse to stop and consider multiple news outlets, even ones that conflict with our own opinions, for our news sources.
This assumes people are consuming news through official channels which I don't think is true in a lot of cases. For many people, news is whatever pops up in their facebook/instagram/twitter feed, and it's relatively easy to slip fake content in there.
You rarely need a perfect fake because you rarely need to convince everyone, you can often achieve the same goal by just convincing a large group of people.
It’s trying to solve the social issue of ‘omg react!’ videos and random reshare clips through technical means (proving the clip isn’t original).
Which it won’t. Eventually might be relevant when in a context where someone actually stops and spends time looking at evidence (civil and criminal court cases perhaps?) but those already use chain of custody for evidence because evidence has already been easy to fake for… well forever.
Still should be done IMO though, as it’s cheap and easy and will hopefully make it a little harder (or easier to detect) to do mass faking in the ‘middle’ - like fake IDs for online services, fake blackmail photos, etc.
What happens when the newspaper just makes stuff up because it needs a more clickbaity article? People click links in emails without verifying the sender; what makes you think readers will track back through the chain you describe to verify anything?
I think that's a really bad take. The difficulty of making many categories of lies is radically decreasing. That it has long been possible for a well-funded vfx team to do something doesn't mean nothing will change when it becomes possible for anyone with a cellphone and five minutes of free time to do the same thing.
> anyone with a cellphone and five minutes of free time
One could argue that this will be a good thing because deep fakes will be so prevalent (e.g. kids making videos of their parents saying and doing funny things) that the default assumption is that everything is fake until proven not fake.
> default assumption is that everything is fake until proven not fake.
This is what it's like living under an authoritarian government. "Of course the government is lying", "Of course the politician is lying", "Of course my neighbor is lying", "Of course the company is providing me with a fraudulent product"
This eventually turns into a kind of learned helplessness and is how you create a crapsack nation/world. "Everything is bad, so there is no reason I should do anything good"
I can promise you that you won't enjoy this world we're creating if you don't live in an authoritarian shithole already.
The vast majority of uses for deep fakes is not for content that would appear on an official site: surreptitious videos of CEO/Politician doing illegal or embarrassing behaviour, racist tweets and emails from when they were college students etc.
Time will tell. I think the answer is yes. I am inclined to avoid naming specific, relatively recent instances, but I think it would be fair to say that we're in a world with very high skepticism of the media, and politicians have been taking that into account by claiming that the news is simply lying, that what they're reading is fake, etc.
It cuts both ways. We don't trust the media, and we don't trust the politicians. So when a politician says that the media is lying, we tend to believe whatever we want to believe.
The loss of truth is a serious thing for a society. (Yes, back in the Walter Cronkite days we had less truth than we thought we did. We had more agreed-upon truth that matched reality than we do today, though, and I think the difference matters.)
I honestly find it surprising that photos and videos taken with a smartphone are not signed in some way to ensure they haven't been modified. I would love to see that become more mainstream, since editing is so easy.
Original story: https://www.nytimes.com/2005/02/02/world/africa/rebels-say-t... Confirmation of hoax: https://www.nbcnews.com/id/wbna6894934