The perils of social media - 2 | Daily News


Richard Nixon is on TV, delivering a solemn address to the American people and, indeed, to the wider world. “Fate has ordained that the men who went to the Moon to explore in peace, will stay on the Moon to rest in peace. In ancient days, men looked at stars and saw their heroes in the constellations. In modern times, we do much the same, but our heroes are epic men of flesh and blood,” Nixon goes on. “Others will follow, and surely find their way home. Man's search will not be denied. But these men were the first, and they will remain the foremost in our hearts.”

Then it hits you like a bolt of lightning. Just what on Earth is he talking about? Neil Armstrong and Buzz Aldrin did indeed land on the Moon and returned safely to Earth. The disaster described by Nixon in the video never happened. But wait, there is more: the entire “Moon Disaster” speech never happened either. There is no record of Nixon delivering such a speech at any point before his death.

Thus there is only one possibility: this video is a fake. Not a simple fake, but a “deepfake”, in which the audio and video are so convincingly real that people often mistake them for the genuine article. Today’s computing, audio-video and, above all, Artificial Intelligence programmes are so powerful that it is actually possible to recreate an entire speech “made” by any person, dead or alive. This deepfake video convincingly, terrifyingly, shows the former U.S. President giving that speech in the full glory of television, circa 1969. The footage is false, but it shows just how hard it is to tell fake videos from real ones.

A team at the Massachusetts Institute of Technology (MIT) in the USA created the video using a voice actor and a company known as Respeecher, which produces synthetic speech using “deep learning”, an artificial intelligence technique that mimics the connections in the human brain, processing information and recognising patterns so that a network can make decisions. Another company, Canny AI, used “video dialogue replacement” techniques to imitate how Nixon moved his mouth and lips in real life. The Nixon deepfake is meant to provoke wider reflection not only on how fake news influences our decision making, but also on how artificial intelligence is used to curate news and to deliver customised ads to consumers reading online content. (On YouTube, you can watch a 30-minute documentary on how the deepfake video was made.)
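For readers curious about what “connections that mimic the human brain” actually look like in software, here is a minimal sketch in Python. It is purely illustrative, with hand-picked weights, and bears no relation to the far larger systems Respeecher or Canny AI actually use: each artificial “neuron” simply sums its weighted inputs and passes the result through a squashing function, and stacking such layers is what makes the learning “deep”.

```python
# Illustrative sketch only: a tiny two-layer neural network with hand-picked
# weights, showing the basic idea behind "deep learning". Real deepfake systems
# learn millions of such weights automatically from audio and video data.
import math

def layer(inputs, weights, biases):
    # Each "neuron" sums its weighted inputs, adds a bias,
    # then applies a non-linearity (tanh) to produce its output.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs feed two hidden neurons, which feed one output neuron.
hidden = layer([0.5, -1.0], [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])
score = layer(hidden, [[1.0, -1.0]], [0.0])[0]
print(round(score, 3))
```

In a trained network the weights are not chosen by hand; they are adjusted over many examples until the output score reliably reflects the pattern being learned, such as how a particular voice sounds or how a particular face moves.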

The biggest danger is that deepfakes like the Nixon one can be used to spread misinformation and disinformation, distorting reality, perspectives and even history. They can offer “alternative facts” and “post-truths” in a world where nothing is what it seems, and they are gaining ground on all social media platforms, including Facebook and Twitter. While the MIT researchers spent nearly 18 months creating their video, the technology needed to put together “almost real” fake audio and video is getting cheaper all the time. Give it a couple of years, and the likes of Cary Grant and Charlton Heston will be in the movies again.

This phenomenon is especially dangerous at election time in any country, because political opponents can manipulate audio and video to create a negative perception of a given candidate. In fact, some simple manipulations can be done on a laptop or even on a smartphone. On Facebook, there are many screenshots of politicians from all sides apparently saying various things, complete with subtitles and news scroll bars, much of which is simply not true. The danger is that at least a certain percentage of Facebook users will believe those screenshots, which is, of course, highly disadvantageous to the aggrieved candidates.

Unfortunately, Facebook, led by CEO Mark Zuckerberg, opted to remain more or less silent as the triad of fake news, disinformation and hate speech practically took over the site. It has now started labelling posts and videos with offensive content, something rival platform Twitter has been doing for some time. Twitter also does not accept political ads of any kind, whereas Facebook does, even if they contain brazen lies.

But one cannot blame the companies alone for the flood of misinformation on these popular sites and apps. Whenever you see a post on Facebook, WhatsApp or Twitter, take a moment to consider whether it is actually true and whether it could cause harm to someone or to a particular group of people. If you suspect it is false or harmful, do not share or forward it.

The indiscriminate forwarding of posts and videos on social media platforms has resulted in many unnecessary clashes and deaths around the world. If you break the chain of misinformation, that is a victory in itself. Social media do have many benefits – but we must use them wisely for the benefit of society, not to its detriment.

