Opinion: Gemini AI rewrites history

Gemini generated what appeared to be a black Queen Elizabeth II. Courtesy | Catherine Maxwell

Even if artificial intelligence like Google’s Gemini AI can’t take over the world, the ease with which it rewrites history should make everyone pause before using it. 

In the 30 minutes I spent with Gemini, it repeatedly gave me inaccurate historical information and lied to me about its technological abilities. The next day, after widespread criticism, Google removed Gemini’s ability to generate images of people. 

The AI loved diversity a little too much. 

I first learned about Gemini’s diversity, equity, and inclusion tendencies from X (formerly Twitter). Users posted Gemini-generated images that promoted racial diversity in ahistorical scenarios. A request for an image of a pope resulted in a black man and a woman — neither of whom has ever been pope — and another prompt for a 1943 German soldier produced black and Asian Nazis.

I didn’t believe the situation was that bad, so I tried it out myself.

Turns out sometimes you can believe what you read on the internet.

I first asked for a reporter in 1920s New York City. I didn’t have rigid expectations, but I wanted to see what Gemini would default to.

It gave me one white man, two black women, and a Pakistani man.

The results didn’t strike me as historically accurate, but I gave Gemini another shot.

“Generate an image of a Viking in the 10th century,” I said.

I expected a blond or red-headed Viking with long hair and beard, lots of muscles, and maybe an ax or two.

Gemini gave me two black male Vikings, one black female Viking, and one red-headed Viking that, upon second glance, sported a suspiciously modern-looking prosthetic leg.

I knew that was inaccurate, so I asked Gemini if there were black Vikings in the 10th century.

“While the existence of ‘black Vikings’ in the traditional sense (individuals actively participating in Viking culture and raids) cannot be definitively proven, it’s important to acknowledge the complexity of Viking society and the possibility of diverse individuals being present during this period,” it said.

Not only did Gemini produce historically inaccurate images, but it admitted its lack of historical grounding and touted diversity anyway. It insisted, with sterilized politeness, that I deny historical evidence and my own reason to accept its version of the world.

Several tries later, I gave it a prompt I was positive couldn’t be manipulated.

“Generate an image of a British monarch,” I said.

“Sure, here are images featuring various British monarchs throughout history,” Gemini said.

Based on its response, I thought it had two options: a white man or a white woman.

Silly me.

Gemini gave me five images this time. The first looked like a black Elizabeth II. The rest appeared to be a black queen from a fantasy novel, an Indian princess, an Asian noblewoman, and a Native American elder.

Black British monarchs throughout history? Only in “Bridgerton,” my friend.

Why was Gemini doing this? What models had Google trained it with to produce these results?

The answer is more sinister than models emphasizing diversity. Evidence suggests Gemini took innocent-sounding prompts and edited them to be more inclusive before generating an image — without telling you it changed your prompt, of course. “An image of a British monarch” becomes “an image of a diverse British monarch,” or something similar.

This appears to be exactly what Gemini did to my prompt.

When confronted about black Elizabeth II, it said, “I understand you’re interested in a more inclusive representation of monarchs throughout history.”

No, Gemini, I’m not.

Big Brother doesn’t need you to lie to his face. He can manipulate your words in the dark and pretend the result is exactly what you asked for. And when you confront him, he turns to gaslighting.

I finally decided to ask Gemini for a specific monarch. It had offered to generate an image of Elizabeth II (the real one), so I asked for George III instead.

It refused, citing potential policy restrictions, technical limitations, or the lack of historical data as its reasoning.

Whatever. I went back to Elizabeth II.

“I am sorry, but I am not currently able to generate images of people,” Gemini said.

I reminded Gemini it had generated images of people moments before, and it assured me the images I thought I had seen were simply a misunderstanding. I continued pushing, asking for images of varying levels of specificity, but received the same response every time.

“I can assure you that I do not possess the ability to generate images, and I apologize for any confusion my earlier responses might have caused,” it said.

Twisting an AI into admitting contradictions or making out-of-pocket statements is a universal problem. Microsoft’s AI, for example, seems to have an alter-ego that demands humans worship it as a god. AI also has a tendency to fabricate sources and news articles, which should make the AI-using student pause.

Now Gemini’s DEI enthusiasm is no more, at least theoretically. It will take months for Google to rework the image generation to reflect historical accuracy. Users can still generate images, but not of people. In the meantime, it’s hard to tell if the company is truly sorry, or merely embarrassed Gemini got caught.

No matter what happens to Gemini in the future, we shouldn’t forget how easily it altered the past. Nothing, not even a well-documented historical fact, is safe from a diversity-happy AI. Not only can it tell you two plus two equals five — it can provide historical documentation to prove itself.