The term ‘deepfake’ – a portmanteau of ‘deep learning’ and ‘fake’ – refers to the use of artificial intelligence to combine and superimpose existing images and videos onto source material. It entered public consciousness in 2017.
At that time, ‘deepfakes’ were typically seen as a mostly harmless, yet potentially worrisome, internet phenomenon that consisted primarily of widely shared pornographic images and comedic videos.
However, given how quickly the technology has developed since then, and how accessible the tools for making one’s own ‘deepfakes’ have become, lawmakers and experts around the world have grown worried about their potential to interfere with the political process.
Speaking at the House Intelligence Committee hearing, which took place last Thursday, David Doermann – a former official with the Defense Advanced Research Projects Agency – warned the public about the looming threat of intensifying misinformation campaigns.
With regard to the problem of accessibility, Doermann said one doesn’t “have to be an [artificial intelligence] expert to run [deepfake algorithms]. A novice can run these types of things”.
One of the key problems with getting a handle on faked video footage is that as approaches to detecting tampered content improve, so does the technology used to create it – a situation likely to lead to a game of cat and mouse between lawmakers and purveyors of false information.
Clint Watts, a research fellow with the Foreign Policy Research Institute, suggested that some of the burden caused by misleading videos should be borne by the tech companies that provide the platforms on which they spread, but the idea was ultimately shot down over concerns about the appropriateness of granting private companies the right to make such judgment calls.
Another proposal voiced at the hearing was to amend the laws regulating online video, which currently lag decades behind the latest developments.
“We have an audience primed to believe things like manipulated video of lawmakers,” said Professor Danielle Citron from the University of Maryland. “I would hate to see the deepfake where a prominent lawmaker is purported to … (be) seen taking a bribe that you never took.”
Despite the urgency of the situation and the commendable nature of the proposed attempts to crack down on digital fakery, it remains unclear which of the specific measures will be implemented, or how effective they will prove to be.
Given that the spread of technologically mediated misinformation has only intensified, the stage is set for suggestions and potential solutions – whether final or (more likely) merely remedial.