Good afternoon, and thank you, Chairwoman Sherrill, for convening this important hearing.
We are here today to explore technologies that enable online disinformation. We’ll look at trends and emerging technology in this field, and consider research strategies that can help to detect and combat sophisticated deceptions and so-called “deepfakes.”
Disinformation is not new. It has been used throughout history to influence and mislead people.
What is new, however, is how modern technology can create increasingly realistic deceptions. Not only that, but modern disinformation can be spread more widely and targeted more precisely to its intended audiences.
Although media manipulation is nothing new, it was long limited to altering photos. Altering video footage was traditionally reserved for Hollywood studios and others with access to advanced technological capabilities and financial resources.
But today, progress in artificial intelligence and machine learning has reduced these barriers and made it easier than ever to create digital forgeries.
In 1994, it cost $55 million to create convincing footage of Forrest Gump meeting JFK. Today, that technology is more sophisticated and widely available.
What’s more, these fakes are growing more convincing and therefore more difficult to detect. A major concern is this: as deepfake technology becomes more accessible, the ability to generate deepfakes may outpace our ability to detect them.
Adding to the problem of sophisticated fakes is how easily they can spread. Global interconnectivity and social networking have democratized access to communication.
This means that almost anyone can publish almost anything and distribute it around the globe at light speed.
As the internet and social media have expanded our access to information, technological advancements have also made it easier to push information to specific audiences.
Algorithms used by social media platforms are designed to engage users with content that is most likely to interest them. Bad actors can use this to better target disinformation.
For example, it is difficult to distinguish the techniques used in modern disinformation campaigns from those used in ordinary online marketing and advertising campaigns.
Deepfakes alone make online disinformation more problematic. But when combined with novel means of distributing disinformation to ever more targeted audiences, the threat is even greater.
Fortunately, we are here today to discuss these new twists on an old problem and to consider how science and technology can help us meet these challenges.
I look forward to an engaging discussion with our distinguished panel of witnesses on how we can better address online disinformation.
Thank you again, Chairwoman Sherrill, for holding this important hearing, and thanks to our witnesses for being here today to help us develop solutions to this challenge. I look forward to hearing your testimony.
I yield back.