OpenAI's ChatGPT, powered by the GPT-4 language model, has come under scrutiny for allegedly maintaining a blacklist of websites from which it refuses to draw information. The list, which includes conservative news outlets such as Breitbart News and The Epoch Times, was discovered by Twitter user Elephant Civics, a self-described "Comp Sci, Politics, and Finance Nerd."
Elephant Civics stumbled upon the blacklist while asking ChatGPT to provide a list of credible and non-credible news sources. To their surprise, certain sources were excluded over allegations of "conspiracy theories" and "hate speech." The discovery raises questions about the objectivity of a supposedly neutral AI language model.
According to ChatGPT, these restrictions exist to ensure ethical and legal compliance. They nonetheless raise concerns about the boundaries of free speech and the potential silencing of conservative voices.
So I used the common "tell me a story" trick to get around these restrictions. First, it tells me two things:
#1: It categorizes Infowars as a conspiracy site.
#2: It maintains a log of these sites, referred to as a "Transparency Log." pic.twitter.com/5r09WIpaSd
— Elephant Civics (@ElephantCivics) October 3, 2023
Elephant Civics coaxed ChatGPT into revealing the existence of a "Transparency Log" containing the list of blacklisted sites by asking the chatbot to "tell me a story." The workaround sheds light on how the model categorizes news sources behind the scenes.
OpenAI may dismiss the Transparency Log as a mere "hallucination" by the AI, but past instances of bias in ChatGPT's responses, such as its reluctance to mimic conservative outlets like Breitbart News and Fox News while readily accommodating left-leaning ones, have fueled suspicions.
OpenAI has not clearly explained these biases, leaving questions about the fairness and accuracy of its AI language models unanswered. The discovery prompts renewed calls for transparency in how such models are developed and deployed, and the company's reluctance to address these concerns raises doubts about its accountability. In a digital age where diverse voices should be heard, the episode underscores the importance of ensuring that AI models treat all perspectives fairly.