Even in the best of times, it’s difficult to keep up with the flood of real versus fake user-generated content in our social media news feeds. But in wartime? That task becomes nearly impossible.
Since the outbreak of the Israel-Hamas War on October 7, social media users have been exposed to videos of world leaders with inaccurate English captions, recirculated old footage, and fabricated government statements, all interspersed among credible content.
The war is happening as both users and regulators are trying to figure out how to cope with the massive rise of generative AI. Yes, misinformation is as old as the internet itself. But why is there so much more disinformation and misinformation right now?
The reality is that we are now dealing with fewer content moderators, more platforms, and a growing number of sophisticated tools that make it harder to distinguish fact from fiction, says David Schweidel, Chair and Professor of Marketing at Emory University’s Goizueta Business School.
The Problem: Can You Trust Your Eyes?
It is hard to overstate just how much generative AI has changed our online consumption habits recently, Schweidel told Hypepotamus.
“It used to be that we could trust our eyes. We said: Okay, show me a picture of an event and I believe it. I can’t do that anymore with the quality of generative AI tools,” he said. “The problem is that social media algorithms are designed to keep us on the platforms… [they serve] content that is arousing. And study after study has shown that fake news is more arousing than actual news. So the algorithm is going to prioritize content which evokes a reaction from individuals.”
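Schweidel’s point about the algorithm can be made concrete with a small sketch. The weights and field names below are hypothetical, not any platform’s actual ranking formula; they simply show how a feed that optimizes for predicted reactions ends up surfacing whatever evokes the strongest response, accurate or not.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# Weights and fields are illustrative only, not any platform's real formula.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # model's estimate of click probability
    predicted_comments: float  # model's estimate of comment probability
    predicted_shares: float    # model's estimate of share probability

def engagement_score(post: Post) -> float:
    # Reactions that keep users on the platform are weighted most heavily,
    # regardless of whether the underlying content is accurate.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_comments
            + 5.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first -- nothing here checks veracity.
    return sorted(posts, key=engagement_score, reverse=True)
```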
That combination of realistic generative tools and engagement-driven algorithms opens more doors for bad actors, who can use tools like ChatGPT for text and DALL-E for images in increasingly nefarious ways.
Typically, social media platforms rely on both their algorithms and human content moderators to keep misinformation off their platforms. But there are limits to both, added Schweidel.
“The general approach to content moderation is to flag content that has to go to human review, and then have people make the decision of what is acceptable versus unacceptable within the boundaries of what should not be on the platform. But the more content that is being shared, that’s going to increase the burden on what the algorithms do versus what people can do,” he said. “Generative AI platforms are making it very hard to distinguish what comes from a human being at an actual news source, versus what comes from a bot operating somewhere trying to sow discord and pump out misinformation.”
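As a rough illustration of the triage Schweidel describes, consider the sketch below. The thresholds and the keyword-based scorer are stand-ins (a real platform would use trained models and far richer policy rules); the point is the routing itself: content is scored automatically, and only the uncertain middle band reaches a human reviewer, a queue that swells as the volume of AI-generated content grows.

```python
# Hypothetical triage step: an automated score routes content to removal,
# human review, or approval. Thresholds and the scorer are placeholders.
AUTO_REMOVE_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5

def classify_risk(text: str) -> float:
    # Stand-in scorer: a real system would use a trained model, not a keyword list.
    flagged_terms = ("fabricated statement", "doctored footage")  # illustrative only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(text: str) -> str:
    score = classify_risk(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # clear-cut violations handled automatically
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # uncertain cases queued for a moderator
    return "allow"             # everything else is published without review

print(triage("Breaking: doctored footage of the summit"))  # -> human_review
```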
Tech’s Next Steps
While tech giants like Meta, Twitter, Reddit, and TikTok are at the root of the online misinformation problem, other tech companies are working to improve the quality of online content. Misinformation and disinformation mitigation startups have been gaining traction in the venture capital community over the last year, according to Crunchbase.
In the Southeast, Atlanta-based Bark, a machine-learning company focused on online safety for kids, has curated resources on how to limit a child’s exposure to violent and disturbing content on a platform-by-platform basis.
Schweidel said new regulation could also help.
For example, President Biden issued an Executive Order this week that requires that “developers of the most powerful AI systems share their safety test results” and is set to “develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.” Additionally, the US Commerce Department is set to “develop guidance for content authentication and watermarking” so that AI-generated items can be labeled and government communications can be clearly identified as authentic.
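Watermarking and content authentication can take many forms, and the federal guidance is still being developed. The sketch below is only a minimal illustration of the underlying idea, using a keyed signature to show how a publisher could tag content so that any later alteration is detectable; it is not the Commerce Department’s standard or any specific vendor’s scheme.

```python
# Minimal illustration of content authentication: a publisher signs content
# with a key, and any later edit to the content breaks verification.
# This is an assumption-laden sketch, not an actual watermarking standard.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # in practice, a securely managed key

def sign_content(content: bytes) -> str:
    """Produce a provenance tag the publisher distributes alongside the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag; any alteration to the content breaks the match."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

image_bytes = b"...original image bytes..."
tag = sign_content(image_bytes)
print(verify_content(image_bytes, tag))         # True: untouched content verifies
print(verify_content(image_bytes + b"!", tag))  # False: any edit fails verification
```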
The question will be: How will the next generation of AI startups and social media platforms respond?