Holding information technologies accountable and addressing misinformation on the web

Yu-Ru Lin, Associate Professor

Before the World Health Organization categorized COVID-19 as a pandemic, its leaders called the emerging threat an “infodemic.” The coinage came after weeks of virus misinformation flooding the internet, a flood that continues even as the pandemic evolves. Social media platforms have deployed technologies to limit the further spread of misinformation, but what about the Facebook posts and articles that have already circulated? This Pitt-based research team is building data science tools to understand what entices internet users to click on certain posts and to help them recognize misinformation about the coronavirus.

Yu-Ru Lin’s research separates the partisan from the practical

Yu-Ru Lin has a knack for digital communication. More precisely, she works as a computational social scientist, studying how humans connect with each other online and how those connections have changed. The last few years have seen a rise in misinformation on social media, especially misinformation that aligns with political agendas. Now, with the coronavirus pandemic, misinformation no longer stays within personal online bubbles; it can dramatically affect health outcomes on a large scale.

“It could lead people to have behavior that risks their life and increases their risk to get COVID,” Lin says. “Things like, ‘Don’t wear a mask, the mask is offensive,’ … is actually influencing many people and creating some kind of unnecessary resistance to wearing masks.”

Combating false or misleading information starts with understanding what’s already out there and what makes users want to engage, Lin says. With a RAPID Grant from the National Science Foundation, Lin and her team have begun employing machine learning and artificial intelligence to understand what makes certain information powerful on Twitter and Facebook. They then plan to develop a system to stop the spread. 

So far, the team has found patterns that the psychology and information science literature predicted. For example, people tend to engage with posts that resemble conspiracy theories, such as claims of secret labs or hidden agendas among figures in power, offered without any substantial evidence. The misleading information tends to come from highly partisan, for-profit websites that use sensational claims and attention-grabbing images to boost traffic. These websites exploit the natural draw of images, choosing pictures, such as a Chinese flag or a woman’s face, that play on partisan sentiment and can elicit fear.

Knowing what makes posts clickable, Lin and her co-researchers, Adriana Kovashka of SCI and Wen-Ting Chung of the School of Education, also want to understand why one person is more likely to click than another. Groups with low media literacy skills, meaning little experience critically engaging with content, often bear disproportionate effects of the misinformation, Lin says. The same goes for people with a strong group identity, such as a political party or religion.

Lin says they can’t change the information that’s already out there; at best, they can keep people from engaging with it. So instead, they plan to focus on a user’s social network, disseminating more trustworthy information to those who see the misleading posts. They envision a tool that identifies which misinformation is likely to circulate widely, and to whom, and then engages “citizen journalists” to counter its spread. This work will draw on their research on visualization and visual storytelling, and on a deeper understanding of what kinds of photos make someone want to read, like, or retweet.

They hope to complete their analysis of coronavirus misinformation by the end of the year, then begin circulating more trustworthy, reliable information. In the meantime, Lin says, they are designing media literacy interventions to curb the circulation of misinformation.

“If we want to redress the misconception,” Lin says, “[we want to know] what kinds of things we can do in order to help them be more resilient to this misinformation.”