Large Language Models (LLMs) increasingly assist news production, raising concerns about algorithmic bias. We investigate racial bias in a simulated editorial task in which LLMs select missing persons cases for news coverage. Computational experiments reveal that LLMs consistently prefer cases explicitly labeled “Black” or “Latino” over those labeled “white” or “Asian,” diverging from known human biases. This preference largely disappears when race is signaled only by names. Models also show idiosyncratic preferences for other attributes of the tested cases.