Exploring the news judgment of large language models

Abstract

Large Language Models (LLMs) increasingly assist news production, raising concerns about algorithmic bias. We investigate racial bias using a simulated editorial task where LLMs select missing persons cases for news coverage. Computational experiments reveal LLMs consistently prefer cases explicitly labeled “Black” or “Latino” over “white” or “Asian,” diverging from known human biases. This preference largely disappears when race is signaled only by names. Models also show idiosyncratic preferences for other aspects of the tested cases.

Date
Aug 8, 2025
Event
108th Annual Conference of the Association for Education in Journalism and Mass Communication
Location
San Francisco, CA
Jacob A. Long
Jacob A. Long
Assistant Professor of Mass Communications