How “Deep-Fakes” Could Actually Protect Privacy
By Joe Jarvis - January 29, 2019

A deep fake is a doctored photo or video that cannot be distinguished from the genuine article.

It’s been easy to fake photos for some time. Basically, every magazine image of a model is fake, retouched to the point where it bears little resemblance to the original photo.

But video fakes have been easier to detect. Until recently, you needed expensive software and a Hollywood budget to really manipulate footage. Now amateurs take requests online.

What kind of requests? Usually putting someone’s head on a porn star’s body, so the viewer can pretend to be watching whoever the face belongs to.

Of course, any emerging technology will be used for sex if it possibly can be–just look at how quickly pornography proliferated on the internet.

But imagine all the other nefarious uses for this type of fake.

You could blackmail political opponents or tip an election with compromising videos.

You could hold judges hostage by threatening to destroy their careers.

You could start a war.

Or suppose it is just the same old-fashioned police corruption–framing suspects–but with a new technological twist. They simply doctor the surveillance footage or the interrogation video, and boom: easy convictions.

And just think about all the data companies like Google (and therefore agencies like the NSA) have on all of us. They could tailor a fake video to something that people would believe, and bolster it with real audio…

Or doctor surveillance footage based on where they know you were… try coming up with an alibi for that.

For all the sketchy crap the government has done before, I wouldn’t put it past them to easily dispatch some political dissidents or troublesome reporters.

So seeing it with your own eyes will no longer be enough. We will have to rely on the dreaded… experts.

Already our court system is inundated with experts who tell us what’s what. And these people are easily corrupted–and sometimes just plain wrong.

Take the case of a Massachusetts lab technician who falsified thousands of drug test results in order to advance her own career. (That was a different Massachusetts lab worker from the one who got high from the confiscated drugs at work while testing samples.)

Or the man facing conviction based on DNA tested by a machine and interpreted with a computer algorithm. And the court won’t allow his defense to examine the algorithm.

This is after his first conviction was thrown out because of a faulty algorithm in the DNA testing machine!

So with deep fakes, will it all come down to an expert sitting behind a keyboard, testifying to the authenticity of the video that depicts you cackling with glee while dumping toxic waste on an endangered sea turtle breeding ground?

It is another arms race scenario: every time a new method of detection is invented, the fakes get a little better at avoiding it. Already, deep fake technology is outpacing the security countermeasures.
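That arms race is not just a metaphor. Many deep fake generators are trained with an adversarial loop–a “faker” network and a “detector” network improving against each other. Here is a minimal toy sketch of that dynamic, assuming PyTorch, with one-dimensional numbers standing in for images; the network shapes and names are illustrative, not any particular deep fake system:

```python
# Toy sketch of the adversarial "arms race": a detector learns to spot fakes,
# then a faker learns to beat the improved detector, round after round.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # stand-in for "authentic" samples

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # the faker
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the detector

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1. Detection improves: learn to separate real samples from current fakes.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. The faker adapts: produce samples the improved detector labels "real".
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Each round the detector gets a little better, and the very next step trains the faker against that improved detector–which is why any published detection method tends to get absorbed and defeated.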

And it’s not just pictures and videos. People are already coming up with ways to fake biometric data like fingerprints. So tight security measures like fingerprint and iris scans might not be so secure after all.

Could faking DNA be next? Or bluffing the facial recognition systems the TSA is already starting to use for boarding flights?

Basically, when we have to rely on scientists to test, experts to interpret, and programmers to build the algorithms, we are in the same position we have always been in. Only now it might be easier to hide the fact that no one really knows what they are talking about.

All the benefits of the advanced technology can easily be undermined by a little corruption or basic human error.

But what if deep fakes truly outpace the technology to detect them? And suppose everyone accepts that this is the case.

Then it is basically like a reset.

Anyone could plausibly claim that a video, picture, or biometric match was faked. And we would be back to square one, doing the typical gumshoe detective work that can really pin a suspect to a time and place.

People could believe whatever narrative they want by simply assuming every video they don’t like is faked, and all the faked videos they want to believe are real.

So in that sense, it seems like nothing would change… people already believe whatever they want to believe.

Perhaps instead we would avoid character assassinations and trial by publicity. But we also couldn’t hold people accountable for their actual transgressions.

But would that be the worst thing in the world, if the reset button got hit?

Privacy could return. But criminals could get away with more.

Innocent people could no longer be framed. And guilty people could believably claim they were framed.

We would have to actually investigate claims of guilt or innocence. No more relying on experts or algorithms.

No more relying on our own eyes–which, even before deep fakes, weren’t super reliable. After all, we’ve all seen the videos of cops shooting people, and we still don’t agree on what happened.

But I suppose we are already entrenched in a culture of believing experts and algorithms without question.

So will these advances only hand more power to the corrupt? Or hit the reset button on how we decide what is true?

Tell me what you think.
