Microsoft announced new technology to combat disinformation, including deepfakes, ahead of the 2020 U.S. presidential election.
The technology can detect manipulated content with the goal of assuring people that the content they’re viewing is real. It is part of Microsoft’s Defending Democracy Program, targeted at keeping campaigns secure and protecting the voting process.
Microsoft cited research from Princeton University professor Jacob Shapiro that cataloged foreign influence campaigns using social media to defame people and “polarize debates.” Microsoft said 26% of these campaigns targeted the U.S. and 74% “distorted objectively verifiable facts.”
Manipulated content and deepfakes are getting special attention ahead of the election. Deepfakes are described by Microsoft as “photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways.”
Popular deepfakes – a portmanteau of "deep learning" and "fake" – replace the real person in a video with someone else's face. However, the technique can also be used in subtler ways, such as making it appear that someone said something they never did.
“They could appear to make people say things they didn’t or to be places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” Microsoft said.
Nevertheless, advanced detection technology can be a useful tool for identifying deepfakes in the upcoming U.S. presidential election, the company added.
Microsoft’s new tech includes a tool built into Microsoft Azure – a cloud service for deploying applications – that enables a content producer to add digital hashes and certificates to a piece of content.
“The hashes and certificates then live with the content as metadata wherever it travels online,” Microsoft added.
There is also a reader “that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it,” Microsoft explained.
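Microsoft has not published the internals of this scheme, but the hash-and-certificate flow it describes can be sketched in broad strokes: the producer hashes the content and signs the hash, the metadata travels with the content, and the reader recomputes the hash and checks the signature. The sketch below is purely illustrative and uses Python's standard library, with an HMAC standing in for a real certificate-backed signature; the key, producer name, and function names are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for the producer's private signing key; a real
# deployment would use certificate-based public-key signatures instead.
PRODUCER_KEY = b"producer-secret-key"

def sign_content(content: bytes) -> dict:
    """Producer side: hash the content, sign the hash, attach as metadata."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "signature": signature, "producer": "Example Newsroom"}

def verify_content(content: bytes, metadata: dict) -> bool:
    """Reader side: recompute the hash and check it against the signed metadata."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != metadata["hash"]:
        return False  # content was altered after it was signed
    expected = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata["signature"])

video = b"original video bytes"
meta = sign_content(video)
print(verify_content(video, meta))              # True: content matches metadata
print(verify_content(b"tampered bytes", meta))  # False: hash no longer matches
```

Because the hash is bound to the exact bytes of the file, even a single-frame edit changes the digest and the verification fails, which is what lets the reader tool flag altered content with high confidence.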
Other big tech companies like Facebook have also been fighting deepfakes. Facebook, for example, launched a deepfake challenge last year to come up with technology that can detect AI-manipulated video.
“The rise of deepfakes as another form of impersonation, deception and manipulation requires that we either have to train people to get really good at detecting threats, or we have to rely on a layer of technology to detect them,” Tim Sadler, CEO and co-founder of cybersecurity company Tessian, told Fox News. “I think the only realistic way to mitigate the risk of deepfakes at scale is through technology.”
The added danger of deepfakes is that they’re relatively easy to make.
“Deepfakes are a complex problem because they’re easy to produce and distribute, meaning that they have the power to influence a huge audience with minimal effort – even if only 10% of the people who see them actually believe they’re real,” Sadler added.
And the bad guys will do their best to beat any tech that companies like Microsoft throw at them, said Richard Bird, chief customer information officer at Ping Identity.
“The bad guys have AI too. If a company creates a more refined algorithm to detect a deepfake, the bad guys will adjust, adapt and refine on their end,” Bird told Fox News. “It is a perpetual game of cat and mouse.”