Gemini bias fiasco reminds us that AI is no smarter than we make it


When an AI doesn’t know history, you can’t blame the AI. It always comes down to the data, programming, training, algorithms, and every other bit of built-by-humans technology. That’s one side of the equation; the other is our perception of the AI’s “intentions.”
When Google’s recently rechristened Gemini (formerly Bard) started spitting out people of color to represent Caucasian historical figures, people quickly sensed something was off. For Google’s part, it acknowledged the error and pulled Gemini’s ability to generate images of people until it could work out a solution.
It wasn’t too hard to figure out what happened here. Since the early days of AI, and by that I mean 18 months ago, we’ve been talking about inherent and baked-in AI biases that, often unintentionally, come at the hands of programmers who train the large language and large image models on data that reflects their experiences and, perhaps, not the world’s. Sure, you’ll have a smart chatbot, but it’s likely to have significant blind spots, especially when you consider that the majority of programmers are still male and white (one 2021 study put the percentage of white programmers at 69% and found that just 20% of all programmers were women).
Still, we’ve learned enough about the potential for bias in training and AI results that companies have become far more proactive about getting ahead of the issue before such biases appear in a chatbot or in generative results. Adobe told me earlier this year that it’s programmed its Firefly generative AI tool to take into account where someone lives and the racial makeup and diversity of their region to ensure that image results reflect their reality.
Doing too much right
Which brings us to Google. It likely programmed Gemini to be racially sensitive but did so in a way that overcompensated. If there were a weighting system for historical accuracy versus racial sensitivity, Google put its thumb on the scale for the latter.
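To make that “thumb on the scale” idea concrete, here’s a minimal, purely hypothetical sketch; it does not reflect Google’s actual pipeline, and the scores and weight are made up. Imagine each candidate image gets an accuracy score and a diversity score, and a single weight decides the trade-off. Tilt that weight far enough and the historically accurate candidate loses every time.

```python
# Purely illustrative sketch -- not Google's actual system. It shows how one
# weighting knob between "historical accuracy" and "diversity" scores could
# tip results away from accuracy when it is set too aggressively.

def rank_candidates(candidates, diversity_weight):
    """Score hypothetical image candidates and return them best-first.

    Each candidate carries made-up 'accuracy' and 'diversity' scores in [0, 1];
    diversity_weight controls how much the second score counts.
    """
    def score(c):
        return (1 - diversity_weight) * c["accuracy"] + diversity_weight * c["diversity"]
    return sorted(candidates, key=score, reverse=True)

# Two imaginary candidates for a "founding fathers" prompt.
candidates = [
    {"label": "historically accurate depiction", "accuracy": 0.95, "diversity": 0.10},
    {"label": "demographically rebalanced depiction", "accuracy": 0.30, "diversity": 0.90},
]

# A moderate weight still favors the accurate image...
print(rank_candidates(candidates, diversity_weight=0.3)[0]["label"])
# ...but a heavy thumb on the scale flips the outcome.
print(rank_candidates(candidates, diversity_weight=0.8)[0]["label"])
```

The point of the toy example isn’t the numbers; it’s that a single, well-intentioned dial can quietly override accuracy if nobody checks where it’s set.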
The example I’ve seen tossed about is Google Gemini returning a multicultural picture of the US’s founding fathers. Sadly, men and women of color were not represented in the group that penned the US Declaration of Independence. In fact, we know some of those men were enslavers. I’m not sure how Gemini could’ve accurately depicted these white men while adding that footnote. Still, the programmers got the bias training wrong, and I applaud Google for pulling Gemini’s people image-generation capabilities rather than leaving them out there to further upset people.
However, I think it is worth exploring the significant backlash Google received for this blunder. On X (the dumpster fire formerly known as Twitter), people, including X owner Elon Musk, decided this was Google trying to enforce some sort of anti-white bias. I know, it’s ridiculous. Pushing a bizarro political agenda would never serve Google, which is home to the search engine for the masses, regardless of their political or social leanings.
What people don’t understand, despite how often developers get it wrong, is that these are still very early days in the generative AI cycle. The models are incredibly powerful and, in some ways, are outstripping our ability to understand them. We’re running mad-scientist experiments every day with very little idea of the sorts of results we’ll get.
When developers push a new generative AI model out into the world, I think they only understand about 50% of what the model might do, partly because they can’t account for every prompt, conversation, and image request.
More wrong ahead – until we get it right
If there’s one thing that separates AIs from humans, it’s that we have almost boundless and unpredictable creativity. An AI’s creativity is based solely on what we feed it, and while we might be surprised by its results, I think we’re more capable of surprising the programmers and the AI with our prompts.
This is, though, how AI and the developers behind it learn. We have to make these mistakes. AI has to create a hand with eight fingers before it can learn that we only have five. AI will sometimes hallucinate, get the facts wrong, and even offend.
If and when it does, though, that’s not cause to pull the plug. The AI has no emotions, intentions, opinions, political stands, or axes to grind. It’s trained to give you the best possible result. It won’t always be the right one, but eventually it will get far more right than it does wrong.
Gemini produced a bad result, which was a mistake by its programmers, who will now go back and push and pull various levers until Gemini understands the difference between political correctness and historical accuracy.
If they do their job well, the future Gemini will offer us a perfect picture of the all-white founding fathers with that crucial footnote about where they stood on the enslavement of other humans.