SCI Researchers Receive Pitt Grant to Explore AI Misinformation

March 4, 2026

A team of three Pitt researchers recently earned a Pitt Cyber Accelerator Grant (PCAG) for their research into AI's ability to counter health misinformation.

Two of the researchers, Yu-Ru Lin, a professor, and Rr Nefriana, a PhD student, come from SCI's Department of Informatics and Networked Systems; they have joined forces with Jamie Zelazny, an assistant professor in the Pitt School of Nursing, for the study. The group was awarded a PCAG last fall, a grant that has provided initial funding to projects supporting Pitt Cyber's values for the past eight years.

The research project aims to investigate whether conversational AI systems built on large language models (LLMs), such as ChatGPT or Gemini, can counter health misinformation while also building greater trust with their diverse groups of users. The inspiration for the project comes from Nefriana's own experience.

“The main project was motivated by Nefriana's personal observation about how health misinformation has had negative impacts on her extended family members and her community,” Lin said. “This then becomes her PhD thesis research.”

Lin said the project's aim is not just to stop misinformation but to reconsider how everyday AI systems communicate with their users and how to build trust in AI-provided information in sensitive areas such as health.

“Health misinformation is not just a technical problem…Our goal is not just to debunk; we want to understand how meaningful conversations can lead to real understanding,” Lin said. “Therefore, in this project, we are not only asking if AI can correct misinformation, but how it can do so in a way that people feel trustworthy and supportive.”

The PCAG will support the researchers in the first phase of their work: participant recruitment and pilot data collection. Nefriana explained that this first step will allow the researchers to gather information from actual users.

“It will enable us to conduct a national survey to assess familiarity with health misinformation and follow up with qualitative interviews to better understand how people process health information and misinformation,” Nefriana said. “This foundation is essential for developing effective AI-based approaches to address misinformation.”

Lin also emphasized the importance of PCAG supporting the very beginnings of their research project.

“This early support is critical because it allows us to ground our AI development in real user perspectives rather than assumptions,” Lin said. “It also provides momentum for building an interdisciplinary team working on trustworthy AI and health communication.”

This interdisciplinary team includes assistant professor Zelazny, who supports the project with her health-field expertise, bridging technology and health. Nefriana said Zelazny assists the team by drawing on her prior experience.

“Collaborating with Pitt’s School of Nursing, through Dr. Jamie Zelazny, is important because their expertise in patient-provider communication and health education ensures that our AI-based interventions and the assessment are grounded in real-world healthcare practices,” Nefriana said.

Lin and Nefriana both highlighted the importance of receiving a grant like the PCAG and how it can help their research and that of others.

“The funding mechanisms like PCAG create meaningful opportunities for early-stage researchers to test bold ideas and build the foundation for larger or longer-term projects,” Lin said.

The PCAG will give this interdisciplinary team a strong start on its work toward making AI more accurate and helpful, with the potential to benefit real-world healthcare.

Sarah George (A&S '28)