ChatGPT is politically biased

ChatGPT encourages intellectual laziness. Courtesy | Pix4free

ChatGPT is one of the most popular artificial intelligence chatbots on the internet. It is trained on text data from the internet and provides users with immediate answers to most questions. But its growing influence raises significant concerns: ChatGPT is often used as a tool for academic dishonesty and shortcuts, and its answers often demonstrate explicit political bias.

AI is not inherently bad, and using it does not mean a person is lazy or unintelligent. But the demand among students for an AI that instantly generates answers is symptomatic of a more pervasive issue. It is indicative of intellectual incapability, or at the very least, intellectual laziness. Chatbots like ChatGPT must be used with conscious restraint.

This restraint must be cultivated by the individual, “through strong families and through parents who are heavily involved in the moral and intellectual formation of their son or daughter,” according to Associate Professor of Politics Khalil Habib.

When we rely too heavily on chatbots for answers, we demonstrate a willingness to outsource our critical thinking.

Politically, it is no better than placing unconditional faith in a set of predetermined “experts.” We surrender our minds to an illusory, all-knowing bot. Academically, we disadvantage ourselves by forgoing the ins and outs of a genuine learning experience: we don’t learn how to learn with a chatbot.

AI will likely never be able to mimic human originality. The potential for innovation, discourse, and even accidental discoveries could be sacrificed with such substantial reliance on AI. 

Because of bots like ChatGPT, fewer people will have meaningful skills in the humanities, and many will not have the capacity to engage in the learning process, because they simply never had to, Habib said. 

According to Habib, control over ChatGPT lies in the hands of a few people who are shaping public opinion. The problem is exacerbated when the general populace is unable to think through what they are being told, and unwilling to hear alternative perspectives. 

In recent months, ChatGPT has come to exhibit left-leaning, pro-establishment positions. According to a study from The Decoder, when given a political compass test, ChatGPT demonstrated left-leaning positions on issues such as abortion, immigration, and welfare. This is unacceptable from a program that presents itself as impartial.

ChatGPT has also been shown to block responses to certain questions. Explicit or vulgar responses should be blocked, but not ones that may merely be deemed controversial or politically incorrect.

According to former Microsoft executive Dan Thompson, the scale of ChatGPT in terms of its influence and capability cannot be rivaled by any traditional means of teaching or disseminating information. AI has become a reality, and it cannot simply be resisted or deemed negative. 

“We have to be in the game ourselves if we believe in free, intellectual thought,”  Thompson said. 

ChatGPT has the power to shape public opinion and discourse. In the interest of truth, transparency, and scholarship, this power should be wielded effectively and responsibly.
