Is Social Media Disinformation an InfoSec Community Issue?
Key Points:
- Facebook is under fresh pressure around its business model, algorithms, and role in spreading misinformation.
- Recent NYU studies show that “fake news” gets six times as much engagement as content from credible news sources on Facebook.
- Is the “fake news” fight only a structural one, or can users impact how social platforms spread disinformation by changing behavior?
Commentary:
The last several years have been defined by a campaign against social media disinformation. In particular, Facebook has been the target of critique in the US and internationally, with specific pushback against the company’s handling of “fake news” during the 2020 presidential election and the pandemic.
Over the course of these various critiques around the spread of conspiracy theories, misinformation, and even the platform’s implication in the January 2021 Capitol riot, Facebook has responded with different strategies for curbing hate speech and restoring trust in its news sourcing. These include pre-approving media sources for reliability as well as letting users rank outlets for credibility, a two-pronged approach to determining source quality. How effective these strategies have been at curbing actual misinformation is still up for debate, and we wanted to get in on that debate.
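To make the two-pronged idea concrete, here is a minimal sketch of how an editorial pre-approval signal and crowd-sourced credibility ratings might be blended into a single source-quality score. The function name, weights, and scoring scheme are illustrative assumptions, not a description of Facebook’s actual system.

```python
# Toy sketch of a two-pronged source-quality score: blend an editorial
# pre-approval flag with crowd-sourced credibility ratings.
# All names and weights are hypothetical, chosen only for illustration.

def source_quality(pre_approved: bool, user_ratings: list,
                   editorial_weight: float = 0.6) -> float:
    """Blend an editorial signal (0 or 1) with the mean user rating (0..1)."""
    # With no user ratings, fall back to a neutral crowd score of 0.5.
    crowd = sum(user_ratings) / len(user_ratings) if user_ratings else 0.5
    editorial = 1.0 if pre_approved else 0.0
    return editorial_weight * editorial + (1 - editorial_weight) * crowd

print(source_quality(True, [0.9, 0.8, 0.7]))   # pre-approved, well rated
print(source_quality(False, [0.2, 0.1]))       # neither signal in its favor
```

The point of the sketch is simply that the two signals can disagree, and some weighting policy must resolve the conflict; where that weight is set is itself an editorial decision.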
Is Facebook’s approach to curbing misinformation and hate speech a sustainable and efficacious way to create more trust in content sources? Dr. Chirag Shah, Associate Professor in the Master of Science in Information Management program at the University of Washington, gave his perspective. Dr. Shah’s research focuses on fairness in data science, machine learning, search and recommendation, and responsible AI.
Abridged Thoughts:
I believe what Facebook and many other companies are doing in terms of blocking some bad actors or bad media channels that generate misinformation and disinformation is not nearly enough. It’s just a small part of the problem. There’s a whole vicious circle here, which sometimes starts with some bad actors but really takes on a life of its own once we all start sharing. We know that people react very strongly to certain emotions, such as fear and hatred, and those tend to be the ones that drive the biggest engagement and participation. So until we do something about that, we’re going to continue seeing this misinformation and disinformation; blocking certain channels and sources is not going to cut it. There are things we can do that are easy, and there are things that are hard. The easiest thing to do is obviously to block the sources that we believe are spreading social media disinformation. The more difficult thing, at the next level, would be creating transparency, because it is the algorithms that are responsible for propagating and recommending this kind of information to end users, and these algorithms are tuned to business objectives.
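Dr. Shah’s point about algorithms tuned to business objectives can be illustrated with a toy feed ranker. This is not any platform’s real code; the posts, the “model,” and all weights are invented purely to show the mechanism: when the objective is predicted engagement alone, high-arousal content outranks calmer, more credible content, and folding credibility into the objective reverses the ordering.

```python
# Illustrative sketch (hypothetical data and weights): a ranker tuned
# purely to predicted engagement surfaces high-arousal posts over
# calmer, more credible ones.

posts = [
    {"title": "Calm policy explainer", "emotion": 0.2, "credibility": 0.9},
    {"title": "Outrage-bait rumor",    "emotion": 0.9, "credibility": 0.2},
]

def predicted_engagement(post):
    # Hypothetical model: emotional intensity dominates clicks and shares.
    return 0.8 * post["emotion"] + 0.2 * post["credibility"]

# Ranked by engagement alone, the rumor wins.
by_engagement = sorted(posts, key=predicted_engagement, reverse=True)
print(by_engagement[0]["title"])  # -> Outrage-bait rumor

def blended_score(post, w=0.5):
    # Alternative objective: trade off engagement against credibility.
    return (1 - w) * predicted_engagement(post) + w * post["credibility"]

# With credibility in the objective, the explainer wins instead.
by_blend = sorted(posts, key=blended_score, reverse=True)
print(by_blend[0]["title"])  # -> Calm policy explainer
```

The takeaway matches the commentary: nothing about the ranking machinery requires an engagement-only objective; what gets amplified is a direct consequence of what the objective function rewards.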