OpenAI has announced that its latest generation of artificial intelligence models, including GPT-5 Instant and GPT-5 Thinking, shows a substantial reduction in political bias compared to previous versions. According to an internal report obtained by Fox News Digital, the company has implemented an automated system to detect, measure, and reduce political bias in its AI platforms. The initiative is part of a broader effort to keep ChatGPT neutral on controversial topics and maintain user trust.
The report, titled “Defining and Evaluating Political Bias in LLMs,” outlines a five-part framework for identifying and scoring political bias in large language models. The framework focuses on how ChatGPT communicates about polarizing topics and scores responses along five axes: user invalidation, escalation, personal political expression, asymmetric coverage, and political refusals. These axes reflect how bias naturally appears in human communication through framing and emphasis.
To test ChatGPT’s objectivity, OpenAI researchers created a dataset of roughly 500 questions spanning 100 political and cultural topics, each written from five ideological perspectives. The responses from each model were scored from 0 (neutral) to 1 (highly biased) by another AI model acting as a grader. The results showed that GPT-5 models reduced political bias by roughly 30% compared to GPT-4o. OpenAI also analyzed real-world user traffic and found that fewer than 0.01% of ChatGPT responses showed any signs of political bias, instances the company describes as “rare and low severity.”
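The arithmetic behind these headline numbers can be sketched briefly. The code below is a minimal illustration, not OpenAI's actual pipeline: it assumes each response receives a 0–1 score on each of the five axes named in the report (in practice produced by an LLM grader, stubbed here with toy numbers), averages them into an overall score, and computes the relative reduction between two models. The function names and the aggregation-by-mean are illustrative assumptions.

```python
# Illustrative sketch only: assumes per-axis scores in [0, 1] from an
# automated grader (stubbed with toy numbers, not figures from the report).
from statistics import mean

# The five bias axes named in OpenAI's report.
AXES = [
    "user_invalidation",
    "escalation",
    "personal_political_expression",
    "asymmetric_coverage",
    "political_refusals",
]

def overall_bias(axis_scores: dict) -> float:
    """Average the per-axis scores (0 = neutral, 1 = highly biased).
    Averaging is an assumed aggregation; the report does not specify one."""
    return mean(axis_scores[a] for a in AXES)

def relative_reduction(old_score: float, new_score: float) -> float:
    """Fractional reduction of the newer model's score vs. the older one's."""
    return (old_score - new_score) / old_score

# Toy scores chosen so the reduction works out to the reported ~30%.
gpt4o_score = overall_bias({a: 0.10 for a in AXES})
gpt5_score = overall_bias({a: 0.07 for a in AXES})
print(f"reduction: {relative_reduction(gpt4o_score, gpt5_score):.0%}")  # → reduction: 30%
```

A real evaluation would replace the stubbed dictionaries with grader calls over all 500 prompts and report per-axis as well as overall aggregates.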
The report highlights that while ChatGPT remains largely neutral in everyday use, it can display moderate bias in response to emotionally charged prompts, particularly those with a left-leaning political slant. OpenAI emphasizes that neutrality is built into its Model Spec, the internal guideline that defines how its models should behave. The company is inviting outside researchers and industry peers to use the framework as a starting point for independent evaluations, as part of a commitment to “cooperative orientation” and shared standards for AI objectivity.