
An experimental platform that puts moderation in the hands of its users shows that people are actually effective at rating posts and sharing their ratings with others — ScienceDaily


In the fight against the spread of misinformation, social media platforms usually put most users in the passenger seat. Platforms often use machine learning algorithms or human fact-checkers to flag false or misleading content for users.

“Just because it’s the status quo doesn’t mean it’s the right or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

She and her colleagues conducted a study in which they put that power in the hands of social media users.

First, they surveyed people to find out how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that allows users to rate the accuracy of content, indicate which users they trust to rate the accuracy, and filter the messages that appear in their feed based on those ratings.

In a field study, they found that users could effectively evaluate potentially misleading posts without prior training. Moreover, users appreciated being able to rate posts and view ratings in a structured way. The researchers also saw that participants used the content filters in different ways – for example, some blocked all misinformation content, while others used the filters to seek such articles out.

This work shows that a decentralized approach to moderation can lead to increased trustworthiness of content on social networks, Jahanbakhsh says. The approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who distrust platforms, she adds.

“A lot of research on misinformation suggests that users can’t decide what’s true and what’s not, so we have to help them. We didn’t see that at all. We saw that people really care about the content and they also try to help each other. But these efforts are currently not supported by platforms,” she says.

Jahanbakhsh co-authored the paper with Amy Zhang, an assistant professor at the University of Washington’s Allen School of Computer Science and Engineering, and senior author David Karger, professor of computer science at CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.

Fight against misinformation

The spread of misinformation online is a widespread problem. However, the current methods that social media platforms use to flag or remove misinformation content also have downsides. For example, when platforms use algorithms or fact-checkers to evaluate posts, this can cause tension among users who interpret these efforts as a violation of free speech, among other things.

“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are being exposed to, so they know when and how to talk to them about it,” adds Jahanbakhsh.

Users often try to self-assess and flag misinformation, and they also try to help each other by asking friends and experts to help them make sense of what they’re reading. But these efforts can backfire because they are not supported by the platforms. A user may leave a comment on a misleading post or respond with an angry emoji, but most platforms count those actions as engagement. On Facebook, for example, that can mean the misleading content is shown to more people, including the user’s friends and followers – the exact opposite of what that user wanted.

To overcome these challenges and pitfalls, the researchers sought to create a platform that enables users to provide and review structured post accuracy ratings, identify others they trust to rate posts, and use filters to control the content displayed in their feed. Ultimately, the researchers’ goal is to make it easier for users to help each other evaluate misinformation on social media, which reduces the burden on everyone.

The researchers began by surveying 192 people recruited through Facebook and a mailing list to see if users would appreciate the features. The survey found that users are all too aware of misinformation and try to track and report it, but fear that their ratings could be misinterpreted. They are skeptical of platforms’ efforts to rate content for them. And while they would like filters that block untrustworthy content, they wouldn’t trust filters that the platform manages.

Using these insights, the researchers created a prototype Facebook-like platform called Trustnet. On Trustnet, users post and share full news articles and can follow each other to see content that others are posting. But before a user can post any content to Trustnet, they must rate that content as accurate or inaccurate, or ask others about its veracity, and that assessment is visible to other users.

“The reason people share misinformation is usually not because they don’t know what’s true and what’s false. Rather, their attention is misdirected to other things at the moment they share it. Asking them to rate content before sharing it helps them be more discerning,” she says.
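
To make the rate-before-post rule concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not code from the Trustnet paper: the Assessment labels, the Post class, and the Feed.share method are all hypothetical names.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Assessment(Enum):
    """Illustrative labels; in Trustnet, a post needs one before it goes up."""
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"
    QUESTION = "question"   # ask others about the article's veracity


@dataclass
class Post:
    author: str
    url: str
    assessment: Assessment      # shown to other users alongside the post
    note: Optional[str] = None


class Feed:
    """Hypothetical store that enforces the rate-before-post rule."""

    def __init__(self) -> None:
        self.posts: list[Post] = []

    def share(self, author: str, url: str,
              assessment: Optional[Assessment], note: str = "") -> Post:
        # Sharing without an assessment is rejected: the user has to pause
        # and judge the article before it can spread further.
        if assessment is None:
            raise ValueError("Rate the article (or ask about its accuracy) before sharing.")
        post = Post(author=author, url=url, assessment=assessment, note=note or None)
        self.posts.append(post)
        return post
```

In this toy version, a call to share() with no assessment simply fails, which is the mechanism the quote above describes: the small extra step nudges users to judge an article before passing it along.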

Users can also select trusted individuals whose content ratings they want to see. This choice is private, in case a user follows someone they’re socially connected to (perhaps a friend or family member) but doesn’t trust to rate content. The platform also offers filters that let users customize their feed based on how posts have been rated and by whom.
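
The way trusted raters and feed filters interact might look roughly like the sketch below. Again, this is an assumption-laden illustration rather than the platform’s actual code: the assessments mapping, the filter-mode names, and the filter_feed function are hypothetical.

```python
from typing import Dict, List, Set

# Hypothetical data shape: assessments[url][rater] holds a rater's verdict
# ("accurate" or "inaccurate") for an article. The article does not describe
# Trustnet's real data model at this level of detail.
Assessments = Dict[str, Dict[str, str]]


def filter_feed(feed_urls: List[str], assessments: Assessments,
                trusted: Set[str], mode: str = "hide_inaccurate") -> List[str]:
    """Filter a feed using only verdicts from raters the user trusts.

    "hide_inaccurate" drops articles a trusted rater called inaccurate,
    "only_inaccurate" keeps just those articles (some study participants
    wanted to seek such content out), and "show_all" leaves the feed alone.
    """
    def flagged(url: str) -> bool:
        verdicts = assessments.get(url, {})
        return any(verdicts.get(rater) == "inaccurate" for rater in trusted)

    if mode == "show_all":
        return list(feed_urls)
    if mode == "only_inaccurate":
        return [u for u in feed_urls if flagged(u)]
    return [u for u in feed_urls if not flagged(u)]
```

The key design point this sketch tries to capture is that only ratings from the user’s own trusted set affect their feed, so the same article can be hidden for one person, flagged for another, and untouched for a third.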

Testing Trustnet

After the prototype was completed, the researchers conducted a study in which 14 people used the platform for one week. They found that users could effectively evaluate content, often drawing on expertise, the content’s source, or the logic of an article, despite having no prior training. Users were also able to manage their feeds with the filters, although they used them in different ways.

“Even in such a small sample, it was interesting to see that not everyone wanted to read their news in the same way. Sometimes people wanted misinformation to appear in their feeds because they saw a benefit in doing so. That shows this kind of agency is missing from social media platforms right now and needs to be returned to users,” she says.

Users sometimes found it difficult to evaluate content when it contained multiple statements, some true and some false, or when the headline didn’t match the article. That points to the need to give users more rating options — perhaps letting them indicate that an article is true but misleading, or that it contains political bias, she says.

Because Trustnet users sometimes found it difficult to rate articles where the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that would allow users to change news headlines to better match the content of the article.

While these findings suggest that users can play a more active role in fighting misinformation, Jahanbakhsh cautions that empowering users is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured ratings could be designed to help mitigate that problem, she says.

In addition to exploring improvements to Trustnet, Jahanbakhsh wants to study methods that could encourage people to read content ratings from people with different perspectives, perhaps through gamification. And because social media platforms may be reluctant to make changes, she is also developing methods that let users post and view content ratings through a regular web browser rather than on a platform.

This work was supported in part by the National Science Foundation.
