How Will We Prevent AI-Based Forgery?

Executive Summary

Recent developments in artificial intelligence (AI) point to an age where forgery of documents, pictures, audio recordings, videos, and online identities will occur with unprecedented ease. Forgery is ancient, but AI will make high-fidelity forgery inexpensive and automated, leading to potentially disastrous consequences for democracy, security, and society. Historically, society has relied on signatures to ensure authenticity. The Sumerians used signatures over 5,000 years ago, endorsing their writings with intricate seals stamped into clay tablets. On the Internet, we rely on digital signatures. A digital signature is a computer method (based on cryptography) of ensuring that an item wasn't tampered with after it was signed. Automated messages between websites are also authenticated by digital signatures, but digital signatures are not widely used to certify the authorship of e-mails, social media posts, images, videos, etc. The computer methods to support reliable digital signatures exist, but they are not seamless enough for ubiquitous use. We need to jumpstart digitally signed emails, social-media posts, documents, images, videos, and even phone calls before it's too late.


Recent developments in artificial intelligence (AI) point to an age where forgery of documents, pictures, audio recordings, videos, and online identities will occur with unprecedented ease. AI is poised to make high-fidelity forgery inexpensive and automated, leading to potentially disastrous consequences for democracy, security, and society. As an AI researcher, I’m here to sound the alarm, and to suggest a partial solution.

In February, AI-based forgery reached a watershed moment: the OpenAI research company announced GPT-2, an AI text generator so seemingly authentic that they deemed it too dangerous to release publicly for fear of misuse. Sample paragraphs generated by GPT-2 are a chilling facsimile of human-authored text. Unfortunately, even more powerful tools are sure to follow and to be deployed by rogue actors.

Automated forgery is already prevalent on social media, as we witnessed during the 2016 U.S. elections. According to The Washington Post, Twitter uncovered tens of thousands of automated accounts linked to Russia in the months preceding the 2016 election. Facebook estimated that fake news spread by Russian-backed bots between January 2015 and August 2017 reached potentially half of the 250 million Americans who are eligible to vote.

I have called for regulations requiring bots to disclose that they are not human, and the state of California enacted a corresponding law that will take effect in July 2019. This is a valuable step, but in the international digital world, legislation has limited practical impact.

The problem extends far beyond bots. Doctored images are commonplace, and recent advances in image processing have enabled the creation of realistic fake video. Researchers demonstrated this new capability with AI-generated video of former President Barack Obama speaking phrases that had previously existed only as audio clips. Then came "deepfakes": AI-generated videos that create entirely new facial expressions for a target person by stitching together two faces in an eerily convincing way. This face-swapping technology is sufficiently available that it has started appearing in pornography, with several high-profile celebrities' faces added to pornographic videos. A viral video of Obama issuing a warning about deepfakes was, itself, a fake.

When attempting to decide whether an item is genuine, it's natural to consider its source. Yet it turns out that a website, an e-mail address, and even the origin of a phone call can be easily faked or "spoofed." I found this out the hard way when my phone rang and I looked at the caller ID, only to find that, seemingly, I was calling myself! The adage "on the Internet, nobody knows you're a dog" implies that you cannot be certain of the author or origin of most items you receive via email, through social media, or even by phone. This Internet blindness is the basis for "phishing": cyberattacks in which a communication purporting to be from a trusted source induces you to reveal private information such as a password or credit card number. Today, the text of automatically generated phishing e-mails is easy to spot as phony, but AI is about to change that.

Historically, society has relied on signatures to ensure authenticity. The Sumerians used signatures over 5,000 years ago, endorsing their writings with intricate seals stamped into clay tablets. Marks, stamps, and seals evolved into handwritten signatures as literacy became widespread, and references to signing documents appear throughout history.

On the Internet, we rely on digital signatures. A digital signature is a computer method (based on cryptography) of ensuring that an item wasn't tampered with after it was signed. Services like DocuSign certify contracts using digital signatures. Automated messages between websites can also be authenticated by digital signatures, but digital signatures are not widely used to certify the authorship of e-mails, social media posts, images, videos, etc.
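The sign-then-verify idea can be sketched with textbook RSA. The parameters below are tiny and deliberately insecure; real systems use keys of 2048 bits or more through a vetted cryptography library. The point is only to illustrate the asymmetry: signing requires the private key, while anyone holding the public key can verify.

```python
import hashlib

# Toy RSA parameters: tiny and deliberately insecure, for illustration only.
P, Q = 61, 53
N = P * Q        # public modulus (3233)
E = 17           # public exponent, part of the public key
D = 2753         # private exponent: E * D == 1 (mod lcm(P-1, Q-1))

def sign(message: bytes) -> int:
    """Hash the message, then apply the PRIVATE exponent to the digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding only the PUBLIC key (N, E) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

post = b"I never said that."
sig = sign(post)
assert verify(post, sig)                # genuine: verification succeeds
assert not verify(post, (sig + 1) % N)  # forged signature: rejected
```

Because changing even one character of the message changes its digest, a tampered item fails verification; and because the private exponent never leaves the signer's device, no one else can produce a valid signature in the signer's name.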

The specter of AI forgery means that we need to act to make digital signatures de rigueur as a means of authenticating digital content. First, we need to certify signatures, which can be done by central authorities or via more democratic computer methods such as encryption and blockchain. Second, we need to make the acts of signing and verifying signatures as seamless as possible. Signing should be enabled by default in our email software, word processors, smartphone cameras, and any other production of digital content. Our browsers, social-media applications, and other media-reading software should highlight whether content is signed, and by whom. Finally, and perhaps most challenging, we need to promulgate the norm that any item that isn't signed is potentially forged. We don't accept checks that aren't signed; the same should hold for digital content.
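The reader-facing half of that workflow amounts to a three-way status display. As a hypothetical sketch (the function name and labels are illustrative, not taken from any existing browser or mail client), the software would map the result of a cryptographic check onto a badge shown beside each item:

```python
from typing import Optional

def provenance_badge(author: Optional[str],
                     signature_valid: Optional[bool]) -> str:
    """What a browser or mail client might display next to an item.
    `signature_valid` would come from a real cryptographic verification;
    None means no signature was attached at all."""
    if signature_valid:
        return f"signed by {author}"
    if signature_valid is False:
        return "SIGNATURE INVALID: content altered after signing"
    return "unsigned: treat as potentially forged"
```

The key design choice is that "no signature" is rendered as a warning rather than as a neutral default, which is exactly the norm shift the paragraph above calls for.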

Of course, we want to preserve the option of anonymity so that digital signatures aren't used to suppress dissent or discourage whistleblowers. Moreover, we want to allow for pseudonyms, so that an author can choose to hide their identity but still be recognized as a particular individual or organization.
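One way to reconcile pseudonymity with signatures is to let the pen name be a fingerprint of the author's public key. This is a sketch under the assumption that each post is signed with the matching private key; the `anon-` prefix and the placeholder key bytes are invented for illustration:

```python
import hashlib

def pseudonym(public_key_bytes: bytes) -> str:
    """Derive a stable pen name from the fingerprint of a public key."""
    return "anon-" + hashlib.sha256(public_key_bytes).hexdigest()[:12]

# Two posts published under the same key carry the same pseudonym, so
# readers can link them to one consistent author without learning who
# that author is. (Placeholder bytes stand in for a real public key.)
key = bytes(range(64))
assert pseudonym(key) == pseudonym(key)
assert pseudonym(key) != pseudonym(b"some other key")
```

Anyone can then verify that a new post was signed by the key behind "anon-…" without the author ever revealing a legal identity.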

Digital signatures will not prevent a bot from masquerading as some person, but they will stop the bot from impersonating you, and from disseminating content that you didn't author in your name. The computer methods to support reliable digital signatures exist, but they are not seamless enough for ubiquitous use. We need to jumpstart "zero click" digitally signed emails, social-media posts, documents, images, videos, and even phone calls before it's too late.
