October 31, 2025
How do social media algorithms shape discourse, for better or worse? Two SCI doctoral students are investigating these trends through their research project, recently published in the Misinformation Review.
The research article, “Toxic Politics and TikTok Engagement in the 2024 U.S. Election”, was written by PhD students Ahana Biswas and Alireza Javadian Sabet, in collaboration with Department of Informatics and Networked Systems (DINS) Associate Professor Yu-Ru Lin. A reputable journal for timely, policy-relevant research on media and democracy, Misinformation Review prioritizes work with real-world application, connecting academic scholarship with practitioners in fields such as public policy, journalism, and communication.
This work continues a collaboration that Biswas and Javadian Sabet conducted through the Pitt Computational Social Dynamics Lab (PICSO), led by Dr. Lin. According to Biswas, their study emerged naturally from the shared research environment PICSO promotes, which encourages students to explore questions at the intersection of computational social science, political communication, and responsible AI.
Biswas’s interest in studying toxic politics stems from a broader focus on the interactions and behaviors of individuals in digital spaces.
“I’ve long been interested in understanding how social media platforms shape political discourse—particularly how engagement-driven algorithms can amplify certain types of content over others,” Biswas said.
She added that TikTok presents a particularly compelling case because of its recommendation-based feed and younger audience, making it an important platform for examining how political toxicity and partisanship spread online.
The recently published article presents a study of 51,680 political TikTok videos from the 2024 U.S. election, which revealed several striking patterns in media consumption.
Through the study, the researchers found that partisan and toxic content, particularly content surrounding major political events and topics such as immigration, racism, and election fraud, drives more likes, shares, and comments.
“We found that 77% of videos were partisan, and these attracted nearly twice as much engagement as nonpartisan posts,” Biswas said. “Content containing toxic language received 2.3% more interactions, with partisan toxic posts performing especially well.”
According to Biswas, the finding that partisanship and toxicity directly boost online engagement raises concerns about the role of algorithms in shaping political discourse. The authors warn that these engagement dynamics risk amplifying divisive and harmful content through the algorithmic recommendation systems used by TikTok and other social media platforms, underscoring the need for more transparent and accountable platform governance.
Read the paper published in Misinformation Review.