The New York Times ran a war story yesterday with a fascinating premise. It stood out because it tracked one of my long-standing predictions. The headline said: “A.I. Muddies Israel-Hamas War in Unexpected Way.” The sub-headline further explained, “Fakes related to the conflict have been limited and largely unconvincing, but their presence has people doubting real evidence.”
Haha, yet another reason to doubt what the Times says! But the paper had a point.
[clip]
But how does this affect the New York Times? The Gray Lady frets that people aren’t going to believe its photos anymore either:
People have accused political figures, media outlets and other users of brazenly trying to manipulate public opinion by creating A.I. content, even when the content is almost certainly genuine. Disinformation watchdogs fear that fakes created by A.I. tools, including the realistic renderings known as deepfakes, would confuse the public and bolster propaganda efforts.
Disinformation researchers have found relatively few (real) A.I. fakes, and even fewer that are convincing. Yet the mere possibility that A.I. content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic.
It’s not just photos. My teenagers were playing with tools last month that take an audio sample of someone speaking and then help the user create brand-new audio in the sampled voice saying virtually anything.
Mom, Dad said to play this for you: “Honey, I’m tied up. Please give Grant a hundred and twenty-six bucks and a ride to Best Buy.”
Emerging A.I.-based tools let users create photo-realistic videos from text prompts. Not to mention all the well-known PDF and image editors, which let people modify document images, photos, emails, and all sorts of digitized evidence. Think about all the arguments over Obama’s birth certificate. Document-related authenticity issues have been around for a while, but we’ve all been mostly ignoring their larger significance.
Here in the legal world, we are racing toward an apocalyptic endpoint where the argument over whether a particular bit of media evidence is authentic will require a whole trial of its own, with media experts, other hard evidence verifying the media, authenticating media (like photos taken of the same subject from other angles), and live eyewitness testimony.
And at that point, you might as well just avoid the media evidence altogether and go back to the basics.
The Times’ article included the first quote I’ve seen from someone accurately predicting where we are headed, which is that digital media will eventually be rejected as reliable evidence altogether:
“Proving what’s fake is going to be a pointless endeavor and we’re just going to boil the ocean trying to do it,” said Chester Wisniewski, an executive at the cybersecurity firm Sophos. “It’s never going to work, and we need to just double down on how we can start validating what’s real.”
This raises a profound spiritual or metaphysical question about what is real and what isn’t, and how we define “real,” but I don’t have enough time for that today. Maybe you guys can flesh it out in the comments.
There’s just no way to tell how all the pros and cons will shake out, but I think overall this development is good news. Accelerating technology is crushing the government’s omnipresent panopticon. That’s a plus for us regular citizens. And, be of good cheer: for the first 200 years of the Nation’s life, we didn’t even have all this digitized media evidence, and we still did just fine.
The legal system will have to revert to human evidence and exhibits you can hold in your hand. Same as 2,000 years ago.
War powers in Syria and the jabs; Ozzie covid commission and int'l disquiet; great Speaker news; Times finds AI fakes creating reader distrust; 2x SADS actor; DeSantis vs WHO; epic Disney fail; more.
www.coffeeandcovid.com