Penn MEDIATED Research Grants

The 2025 Penn MEDIATED Research Grants have been awarded. Please check back for our next RFP in late summer or early fall 2026.

In the inaugural year of the Penn MEDIATED Research Grant Program, the Center funded 12 grants for a total of $160,000, including projects that will build essential datasets, develop new software tools, define novel taxonomies, apply emerging computational and AI methods, and experimentally test new interventions. 

These projects demonstrate our Center’s commitment to understanding and strengthening our information ecosystem at a moment of profound technological change and extraordinary democratic crisis. They make critical contributions across three dimensions of information and democracy research: unpacking how media ecosystems shape public understanding, examining AI's expanding role as an information intermediary, and investigating communication strategies that enable persuasion and common ground. The grant program has already been successful in its goal of promoting interdisciplinary collaboration at Penn: half of the grants are jointly led by two or more researchers at different Penn schools.

Unpacking the Media Ecosystem

These research projects seek to examine systematic patterns in media coverage, from what gets reported to who controls the outlets doing the reporting. One project, led by Center Director Duncan Watts and Knight Postdoctoral Fellow Amir Tohidi, investigates which crimes receive media attention and how coverage compares to actual crime statistics, helping explain the troubling disconnect between declining crime rates and persistently elevated public concern about crime. A second funded project looks into how media ownership structures and takeovers influence civic coverage and orientation toward incumbent governments, revealing when economic interests may be shaping democratic discourse.

We also funded projects centered on developing data infrastructure. For instance, a project led by Professors Marc Meredith and Matthew Levendusky will create a comprehensive, searchable dataset of digital political ads that have run in the United States since 2018, providing an important resource for researchers to track how platform policies, campaign rhetoric, and issue priorities evolve across election cycles. Similarly, the Election Administration Media Dashboard project will build a comprehensive repository of media coverage on election administration, offering scholars, policymakers, and citizens critical data to understand how media narratives affect public confidence in elections. The project aspires to publish analyses of media coverage of election administration during the 2026 election cycle.

Together, these projects create detailed datasets tracking media content and ownership, revealing what shapes the news we consume and providing resources for research on media's role in democracy.

When AI Mediates Information

Four of this year's funded projects contend with the rise of LLMs as information gatekeepers. Looking beyond the black box, these projects examine what LLMs refuse to answer, how they transform news content, and whether they reinforce users' existing beliefs. Assistant Professor and Center Advisor Danaé Metaxa's "AI Watchman" project provides an open-source monitoring system that tracks when and how AI systems refuse to answer questions on politically sensitive topics, revealing a subtle yet pernicious form of censorship that occurs without public disclosure. Another project, from Knight PhD Fellow Elliot Pickens, maps how users discuss political topics with chatbots, tracing conversations back to original news articles to document how information is fabricated, overgeneralized, or selectively emphasized during these interactions. A third project comprehensively examines which factors enable users to bypass protections designed to prevent AI systems from generating harmful political content at scale, testing the robustness of safeguards against anti-democratic campaign materials. Finally, researchers are investigating whether the conversational design of LLMs encourages confirmation bias, examining whether these systems generate responses aligned with users' political priors and create echo chamber risks.

These projects document and measure AI's influence on information access, providing transparency into systems that increasingly determine what information reaches the public.

Persuasion and Common Ground

Four of this year's funded projects focus on addressing conversational divides and analyzing the foundations of persuasive communication in the public sphere. One will test 21 widely recommended interventions for improving dialogue across political divides, measuring what actually improves understanding and willingness to engage in future conversations. By using the Deliberation Lab, an open-source platform developed by the Computational Social Science Lab, this project can test interventions under shared conditions and determine which ones work. Another project, led by Professor and Center Advisor Sandra González-Bailón, examines how interpersonal discussions about controversial topics, such as the assassination of Charlie Kirk, shape individuals' moral and political views and how these influences spread through repeated interactions within social networks.

Two projects under this grant grapple with how LLMs are reshaping information access and public trust. As LLMs rapidly become influential in shaping public attitudes, one project tests how the conversational style of AI chatbots, fact-dense and specific versus anecdote-driven storytelling, affects persuasion, trust, and information sharing. Recognizing that nearly all existing empirical evidence on this topic comes from Western contexts, this research examines these dynamics not just in the United States but also in India. Another project looks at the phenomenon of “Narrative License” in science communication, wherein verbal claims outrun the evidence, creating the “illusion of empirical support.” Using LLMs, this project seeks to detect Narrative License at scale in published work, test its effects on how readers interpret scientific findings, and evaluate when LLMs mitigate versus amplify this problem.

Taken together, these 12 projects contribute to a robust empirical foundation for understanding our information ecosystem and how it shapes democracy and political participation. While each project is unique, they all share our Center’s commitment to empirical rigor at a time when policymakers, platforms, and the public need evidence rather than speculation to guide decisions about our rapidly evolving information landscape.

All Center-funded Grants:

  • Interpersonal Discussions and Tipping Points in Social Networks (Sandra González-Bailón, Diego Reinero, James Houghton): This project examines how interpersonal discussions shape individuals' moral and political views and how these influences spread through repeated interactions within social networks.
  • Integrative experiment to explore the effect of conversation interventions on dialogue across disagreement (James Houghton, Dean Knox, Yphtach Lelkes, Matthew Levendusky, Erik Santoro, Erin Walk, Duncan Watts): This project will conduct a large-scale experiment testing 21 interventions designed to improve dialogue across lines of socio-political difference and evaluate which of these are most effective at encouraging future conversations. 
  • AI Watchman: Longitudinally Auditing Generative AI Content Moderation of Social Issues (Emma Lurie, Sorelle Friedler, Danaé Metaxa): This project introduces an open-source interactive monitoring system to improve transparency on how LLMs moderate content, especially for social issues that are politically contested.
  • Information Density and Narrative Persuasion in AI Chatbots: Cross-National Evidence from India and the United States (Neil Sehgal, Sharath Chandra Guntuku, Andy Tan): This project tests how the conversational styles of AI chatbots (fact-based vs. narrative-based) shape persuasion, trust, and information sharing across the United States and India.
  • Documenting how National News Media Depict Crime in the US (Baird Howland, Billy Pierce, Amir Tohidi, Duncan Watts): This project uses large language models to analyze national mainstream media coverage of crime, examining which crimes receive attention, how they are framed, and what solutions are promoted.
  • How AI Transforms News: Measuring Bias and Distortion During LLM Conversations (Elliot Pickens, Duncan Watts, Chris Callison-Burch): This project maps how users discuss political topics with chatbots, tracing conversations back to original news articles to document how information is fabricated, overgeneralized, or selectively emphasized during these interactions.
  • Narrative License in Science Communication in the Era of Large Language Models (Calvin Isch, Phil Tetlock, Duncan Watts): This project studies Narrative License (NL), which occurs when scientific claims outrun the evidence. The researchers leverage LLMs to detect NL in published work, test its effects on readers, and devise interventions to limit its spread in science communication.
  • Archiving Digital Political Advertising Content (Andrew Arenge, Marc Meredith, Matthew Levendusky): This project builds a comprehensive, searchable dataset of digital political ads run in the United States since 2018 to support research examining how platform policies, campaign rhetoric, and issue priorities evolve across election cycles.
  • Who Controls the Media? Measuring Media Orientation in Civic Coverage with Action-Based Sentiment Toward the Government (Ezgi Yilmaz, Zung-Ru Lin, Mina Rulis, Erik Wibbels): This project uses LLMs to track how news outlets frame coverage of national incumbents when reporting on civic matters.
  • Adversarial Testing of Misalignment in Frontier LLMs When Asked to Create Anti-Democratic Campaign Materials (Gayoung Jeon, Neil Fasching, Deen Freelon): This project will conduct adversarial testing on 19 frontier models to see how safety guardrails can be bypassed to generate anti-democratic campaign content.
  • Unpacking How Context (Conversation History) Shifts the Framing of LLM Outputs (Vishwanath Emani Venkata, Sandra Gonzalez-Bailon): This project examines whether LLM-powered search systems generate responses that align with users’ preexisting political beliefs, potentially reinforcing echo chambers.
  • Election Administration Media Dashboard (Liz Stark, Marc Meredith, Michael Morse): This project will build a comprehensive repository of media coverage on election administration, offering scholars, policymakers, and citizens critical data to understand how media narratives affect public confidence in elections.

The 2025 Penn MEDIATED Research Grants - Request for Proposals (CLOSED)

Proposals will be evaluated along the following three core criteria:

  1. Focus on the information ecosystem (such as traditional mass media, digital news, podcasters, influencers, social media, search engines, or generative AI), and its effects on democracy (such as through political speech, censorship, polarization, civic participation, voting and elections).
  2. Use of empirical research, using data and computational methods to advance the scientific understanding of the information ecosystem and its interactions with democracy. Application of new and emerging empirical methods, gathering and sharing new data sources, and the development of new open-source software, is encouraged but not required.
  3. Prioritization of impact, such that the research and other activities of the proposal have a direct and explicit relationship to improving the information ecosystem and supporting democracy.

We also encourage interdisciplinarity, dissemination activities, and community building, and welcome proposals that engage with the Center on Media, Technology and Democracy to advance these secondary goals:

  • Interdisciplinarity, especially applications involving collaboration among researchers from several affiliated schools at Penn. We strongly encourage including faculty, as researchers or mentors, from two different affiliated schools at Penn.
  • Dissemination Activities, including tool building and deployment, shorter articles, op-eds, blog posts, data visualizations, public events, workshops, and conference presentations.
  • Community Building, activities that bring together empirical scholars of the information ecosystem from across Penn and the broader academic community.

Guidelines - Please also consider the following guidelines:

    • Eligibility: Faculty, postdoctoral researchers, PhD students, research staff, and affiliated scholars from these Center-affiliated schools are eligible to apply (although not all collaborators must be affiliated with these schools). Please note that any non-faculty applicants, such as postdoctoral researchers and PhD students, must identify a faculty collaborator or mentor:
      • The School of Engineering and Applied Science
      • The Penn Carey Law School
      • The Annenberg School for Communication
      • The Annenberg Public Policy Center
      • The Wharton School
      • The School of Arts and Sciences
      • The School of Social Policy and Practice 
    • Grant Amount: The typical grant range is $7,500-$15,000. Larger amounts may be considered for projects involving multiple Penn-affiliated researchers. 
    • Grant Length: The typical grant length will be between 12 and 15 months, starting on January 1, 2026, although longer and shorter grants may be considered. Unspent funds remaining after the proposed grant period will be reclaimed by the Center, unless an extension is granted.
    • Limitations: Funding support will typically not cover researcher salaries, but may support data purchasing, data collection, data annotation, purchasing hardware and/or software, software engineering or deployment expenses, cloud computing, research assistants, behavioral experiments, and impact-oriented activities, such as events and dissemination.
    • Recognition: Projects that are selected for funding must acknowledge the Penn MEDIATED Research Grant program in all publications and conference presentations, and may be asked to promote the research in coordination with the Center on Media, Technology, and Democracy.

Important Dates

Applications Open: September 15, 2025 

Applications Close: October 31, 2025

Award Announcement: December 8, 2025

Start of Award Delivery: January 1, 2026

Grant Execution Range: January 1, 2026 – April 1, 2027

 

Proposal Requirements

Proposals for the 2025 Penn MEDIATED Research Grants should be submitted as a PDF document which includes the following:

  • A 200-300 word proposal abstract.
  • A 500-1,000 word description of the proposed research, including the research questions and methodology.
  • A 300-500 word statement on the impact of the proposed work on the information ecosystem and democracy, including any impact-oriented activities.
  • A budget detailing projected costs and the total funding requested. The proposal must also mention any other sources of financial support.
  • A timeline of key project milestones and completion.

Proposal Submission

Please submit your grant proposal using the Grant Application in InfoReady (note: the RFP is now closed).

Questions

Please reach out to Alex Engler (acengler@upenn.edu), Executive Director of the Center on Media, Technology and Democracy with any questions.