
Spotting Fake News: A New Career Skill?

Published: Jul 31, 2017


Ask anyone involved in journalism, and they'll tell you that the role of media organizations in general—and journalists in particular—has become significantly more technical in recent years. Gone are the days when a journalist could make some calls, churn out some copy, and call it a day. Many newsrooms now screen candidates for their proficiency with different content management systems, their ability to wrangle large data sets, and their understanding of social media trends and platforms.

So where might we be headed next, in terms of the skills demanded of a person whose role is to report what's happening in the world around them?

According to a recent Radiolab episode, the answer to that question is one with frightening implications for all of us: the ability to detect digital manipulation and deliberate attempts to spread "fake news." And it's not just print stories with dubious origins, or badly Photoshopped Facebook memes: the core of the Radiolab report concerns the rise of voice and video simulation techniques that could be used to literally change the words that come out of someone's mouth.

For an example of what that looks like, consider this presentation of VoCo, a new technology from Adobe, which is described during the Radiolab episode. In it, the presenter demonstrates how the software can be used not only to alter the order of words in a digital audio file, but also to insert words that were never spoken, and have them sound as if they had been said by the same person as in the original clip.


How you react to that depends on how you see the world: people who do post-production for commercials, for example, will likely be more enthusiastic than those who worry about the potential for this kind of technology to be used to alter comments made by a sitting President.

It's the latter case that seems to worry the Radiolab hosts, who go on to examine some other technologies in the episode, including the ability to digitally manipulate video footage of someone's face to sync it with an altered sound file. Their conclusion: while the technology to make convincing fakes might not be there yet (see their own effort at http://futureoffakenews.com/), it is most certainly on the way.

The advent of these kinds of technologies will undoubtedly place more pressure on anyone involved in sourcing and verifying audio and video clips, just as image-editing tools have already done for photographs, only with significantly more at stake.

Which brings us back to the evolution of a journalist's skillset. As former CNN President Jon Klein tells the Radiolab hosts during the episode, the ability to identify skilled fakes is pretty much beyond the average person's grasp already, and is only likely to get harder as these technologies continue to develop. As such, Klein says, "I don't think journalists, English majors, are going to be the ones to solve this. You know, you may have been editor of your school paper, but this is beyond your capability. But if you're good at collaborating with engineers and scientists, you'll have a good chance of working together to figure it out. So we need technical expertise more than we ever have."

So there you have it: the ability to create fake artifacts to dupe reporters is evolving rapidly. Combating it is likely to become a full-time job for people who understand both the underlying technologies and the motivations of those who want to use them for their own ends. In the unlikely event that cash-strapped media organizations hire people solely to verify or debunk these kinds of sources, we could be looking at the rebirth of fact-checking as a profession. More likely, however, is that existing journalists face a choice: either pick up this kind of knowledge for themselves or be replaced by people who have it.

(Image credit: Kayla Velasquez)

***