There are no deepfake threats. "Deepfake" is just the hype word for digitally altering photos/videos. In the past, doing this well was restricted to governments, large corporations, and large institutions. And they did it regularly. Now individuals can do it themselves. That's good. That's not bad in any form or manner.
I really enjoy generative AI and think that we're better off with it being in the open than controlled by specific corporations.
I also think this is an absurd comment. People are already making deepfakes of classmates, coworkers, etc. and sharing them around. I'll set aside the ethics of generating them for your own personal entertainment, but I can't think of any argument that sharing them isn't "bad in any manner": it obviously can cause someone emotional distress and real damage.
The difference is that rather than altered, they're generated. It's a new seeming-photograph that can't be exposed as an alteration by checking it against other photos and playing "spot the difference". Nobody really had this capability before.
I find your distinction between altering and creating to be ... a sidetrack. But okay, let's pretend I said "'deepfake' is just the hype word for digitally altering or creating photos/videos".
Large institutions and the like have definitely been able to create novel "photos" from scratch for a long time now. It was just a lot of work, which is why it was restricted to large groups of people with lots of capital. Now even individuals can do it. And that's good. The idea that photos could not be created prior to deepfakes is not supported by my lived experience since the 1980s. Just look at any Hollywood movie.
I see your point. I'm not sure I agree with all of the statements in your original comment, but the one that caught my attention turns on a pretty minor distinction. CGI in movies is essentially the same thing done manually by humans, sometimes even without input from the actor: see Carrie Fisher in whatever Star Wars movie I forgot.
There's still something about the scale of effort that changes this, though. For example, even if a government could hire people to produce a fake photograph, and even do so in time to use it for some political manipulation, I'm not convinced they could do the same with a video. Even granting that large organizations could already do this technically, the ease this technology allows is pretty worrying, though I do otherwise think it's a good thing that the technology is "out of the bag", so to speak.
Deepfake videos are quite terrible right now. Frame-to-frame consistency for a series of novel generated images is difficult to achieve with Stable Diffusion, even with ControlNet and similar tools. Maybe in another handful of years novel deepfake videos will look real. But right now they definitely do not.
Unless you're reversing your prior point and now talking exclusively about deepfakes that alter existing video (like face replacement), not video created from scratch. The face replacement stuff can almost look real.
I don't think photos are the real issue, it's video with voice.
Think of telephone scams, and now apply deepfake versions of a family member to them. Imagine your 70yo aunt having to figure out whether the WhatsApp audio or Zoom call that sounds/looks like you is really you.
I don't think this was possible before, feel free to prove me wrong with some evidence.
In the past, to find the right person to imitate someone else's voice you'd have to run a lot of casting trials and the like, which would cost a lot of money. So it was restricted to large corps, governments, institutions, etc. But now anyone can do it. And that's great.