
Deepfake Destroys Trust: AI enters a world of illusion with dire political and personal implications


Photo by Christian Gertenbach on Unsplash

Children appear in pornographic videos, and so do celebrities and politicians. Senator Ted Cruz dancing seductively in an inappropriate outfit? No, it’s not a “lie,” but an illustration of what Deepfake software can do when its aim is mass distribution, money, political distortion, or the destruction of a journalist’s reputation.


Fake news is one of the prime targets of this capable software, and its output can be compelling to even the most discerning professionals. What can be done with a simple Saturday Night Live video? Here’s an example of it.


While the SNL videos may be funny, the seriousness of Deepfake cannot be dismissed. Ian Goodfellow initially developed the underlying concept in 2014. Goodfellow is also a co-author of the textbook “Deep Learning.”


Not initially created for pornographic videos, its use has spread rapidly as the underlying AI software has continuously improved its output. The machine learning system behind it, the GAN (Generative Adversarial Network), trains by repeatedly evaluating its own product and improving it on its own. No human intervention needed. Used to its full ability, the end product can be stunningly believable.


The idea is incredibly simple: two programs work against each other, with one acting as the “adversary.” The first program generates candidate photos or videos from the source material; the second evaluates how convincing they look and pushes the first to do better. After many rounds, the end product is a convincing new photo or video, with matching audio where needed, depicting something that never existed before the GAN went to work.
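To make that adversarial loop concrete, here is a minimal sketch in PyTorch. Everything in it is illustrative rather than taken from any actual Deepfake tool, and it learns a simple one-dimensional distribution instead of faces, but the generator-versus-discriminator training loop is the same shape that Deepfake software builds on.

```python
# Minimal GAN sketch: a "generator" invents samples, a "discriminator"
# (the adversary) judges them. Toy example on 1-D data, not images;
# all names here are illustrative, not from any specific Deepfake tool.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # "real" samples: N(4, 1.5)
noise = lambda n: torch.randn(n, 8)                  # random input for the generator

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # the forger
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # the critic

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # 1) Train the discriminator: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator answer "real" (1).
    loss_g = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # drifts toward 4.0 as G learns the target
```

Scaled up to convolutional networks and trained on hours of footage of a single face, this same loop is what produces a Deepfake.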


Who has the software?


The software isn’t expensive or challenging to use. It requires nothing more than a sufficiently hefty video card for a PC, such as gamers use, or access to a cloud server that does the work.

Tutorials are readily available on the internet. In 15 minutes, any user can acquire sufficient knowledge to create simple fake videos and then go on to improve their skills.


A bit more training is required if you don’t want to use the templates provided by the software, but it’s still within reach of anyone determined to make a Deepfake video. Or you can hire someone online who, for $10 to $30, will do the work for you.


The most likely images to be exploited are of people who appear in hundreds of hours of video; think online or family videos. Uploading family videos to Facebook or other platforms provides grist for the kiddie-porn mill, yet families are unaware of this nefarious activity. How many thousands of hours of innocent children’s footage will be put to use by the porn mills?


Politicians are also prime targets, as is anyone who appears regularly on TV or on the internet in webinars or presentations.


But what about what they say? How could you change that? Adobe had been working on software, shown off in 2016, that let you type anything and have it spoken in a target voice within a video. But the project has since disappeared from view. Where and why did it go? No one seems to know.

Stanford University likewise hasn’t released its voice-creation program, which can alter what a speaker says in a video.


As an article on the Stanford project indicated, “The research team behind this software makes some feeble attempts to deal with its potential misuse, proffering a solution in which anyone who uses the software can optionally watermark it as a fake and provide ‘a full ledger of edits.’ This is no barrier to misuse.”
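The “ledger of edits” the quote mentions is essentially a tamper-evident log: each entry is hashed together with the one before it, so the edit history can be verified after the fact. A minimal sketch of the idea (my own illustration, not Stanford’s actual implementation) also shows why an optional ledger is no barrier to misuse: a bad actor simply ships the video without one.

```python
# Sketch of an opt-in "ledger of edits" as a hash chain.
# My illustration of the concept, not Stanford's implementation.
import hashlib, json, time

def add_edit(ledger, description):
    """Append an edit record, chained to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    record = {"time": time.time(), "edit": description, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)

def verify(ledger):
    """Check that no entry was altered or removed after the fact."""
    prev = "genesis"
    for rec in ledger:
        body = {k: rec[k] for k in ("time", "edit", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

ledger = []
add_edit(ledger, "swapped face in frames 120-480")
print(verify(ledger))  # True -- but a bad actor simply omits the ledger entirely
```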


Video watermarks do not protect videos either; at least one site will remove a watermark from any video you upload to it. There is surely a fee, but how much Deepfake protection remains afterward is unknown.


The legal implications


Deepfakes are the latest weapon in the war against truth, and Congress is paying attention. The technology allows anyone to create convincing videos of events that never happened, stoking fear that a Deepfake could emerge that fuels political division, provokes violence, or targets individuals. Indeed, the technology has already been used to create nonconsensual pornography. But it is the fear of a Deepfake disrupting the 2020 presidential election that is propelling Congress into action.


“The first federal bill targeted at Deepfakes, the Malicious Deep Fake Prohibition Act, was introduced in December 2018, and the DEEPFAKES Accountability Act followed this June. Legislation targeting Deepfakes has also been introduced in several states, including California, New York, and Texas. During a House Intelligence Committee hearing on the subject in June, legislators signaled that more governance is coming, likely in the form of social media regulation.”


Several states have banned the use of this technology because it violates privacy and invites defamation. How long it will take for federal law to bar the use of Deepfakes in our political or corporate systems is questionable, given current legislative logjams.


How can Deepfakes be stopped?


Assuredly, Deepfake software can create laughter and increase the staying power of lessons in school programs. Why wouldn’t kids want a dinosaur to recite the Bill of Rights? They’d eagerly learn it, and that dinosaur image would embed the lesson deep within their brains.

A truly wonderful tool. It can also create “unreal” people. Want a woman voter to say why she’s voting for someone? Create that person, and there you have “her.”


Just as the guns that kept settlers fed on the move into the Western wilderness could also be put to illegal use, Deepfake videos have a disturbing side. That prompted a Google engineer, Supasorn Suwajanakorn, to begin devising a way to detect Deepfake videos: he is working on Reality Defender, an app to detect fake photos and videos.


“Reality Defender is intelligent software built to run alongside digital experiences (such as browsing the web) to detect potentially fake media. Similar to virus protection, it scans every image, video, and other media that a user encounters for known fakes, allows reporting of suspected fakes, and runs new media through various AI-driven analysis techniques to detect signs of alteration or artificial generation.”
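Reality Defender’s internals are not public, but the description above suggests a two-stage pipeline: first a lookup against known fakes, then model-based analysis of anything new. Here is a hedged sketch of that shape; every name and the scoring stub are placeholders of my own, not the actual product.

```python
# Illustrative two-stage scan: (1) match against a database of known fakes,
# (2) otherwise run model-based analysis for signs of manipulation.
# All names and the scoring stub are placeholders, not Reality Defender's code.
import hashlib

# Fingerprints of media already reported as fake (empty here; a real system
# would be fed by user reports and partner databases).
KNOWN_FAKE_HASHES: set[str] = set()

def fingerprint(media: bytes) -> str:
    """Exact-match fingerprint. A production system would use a perceptual
    hash so re-encoded copies of the same fake still match."""
    return hashlib.sha256(media).hexdigest()

def model_score(media: bytes) -> float:
    """Stand-in for the 'AI-driven analysis' stage -- in practice a trained
    classifier looking for blending seams, lighting mismatches, or GAN
    artifacts. Stubbed to 0.0 ('no evidence found') here."""
    return 0.0

def scan(media: bytes) -> str:
    if fingerprint(media) in KNOWN_FAKE_HASHES:
        return "known fake"
    return "suspected fake" if model_score(media) > 0.5 else "no alteration detected"

print(scan(b"example media bytes"))  # -> "no alteration detected"
```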

He is quoted as saying, “Video manipulation will be used in malicious ways unless counter-measures are in place.”


The future of Deepfake is not “future” at all: it is here and actively being used for purposes its developers never intended. Good or bad, the guys in the white hats will have to help us stay vigilant against the black hats who would manipulate our elections, defame us, and harm our children and our corporations.
