In last week’s post, I reviewed John Lennox’s book 2084 about AI. This week saw a controversy erupt over bias in AI. When Gemini, Google’s AI engine, was asked to generate pictures in response to specific prompts, it produced wildly inaccurate results. American founding father? Black. German soldiers in 1943? Black men and Asian women. British women? All black. Popes? A black man and a black woman. American or Swedish women? All black. German women? One black, one Asian, and two probably white. And so on. A good sampling of these can be found here. Gemini would also produce images of blacks and Asians on request but refused to do so for whites.
Google responded with an admission that there were “some historical inaccuracies” in the generated images, but also insisted that depicting racial and ethnic diversity was a good thing, especially since it is a global business. The algorithms for creating images were specifically designed to produce “diverse” images and to downplay whites. In any event, the controversy caused Google to shut down image generation in Gemini.
Other parts of Gemini remain decidedly Woke. Its coverage of Gaza is pro-Palestinian to the point of denying that there is evidence for what happened on October 7. It also refuses to say that it is wrong for adults to prey sexually on children.
All of this points to a significant danger in AI that has not received the attention it deserves: the programmers who design AI engines can inject their biases into the AI, steering results in preferred directions while shutting the door to others. This is all the more dangerous because younger people typically do not read books or long-form articles. They rely on images and Googling for their information, impressions, and opinions. By cooking the results, Google is self-consciously attempting to brainwash people into adopting its ideas and values.
The problem actually goes deeper than that. To put it bluntly, Google makes you stupid. In order to learn anything new, you have to associate it with something you already know. In other words, what we already know provides the mental hooks on which to hang new ideas and information. With smartphones and Google, we no longer feel the need to remember anything, depriving us of the mental hooks we need to learn. Further, without the information already in our minds, we do not have the mental resources to develop informed opinions on new issues that arise. We are left at the mercy of whatever our preferred sources on the internet decide to feed us. As we have just seen, that can be highly and intentionally deceptive. Search engines are bad enough in this regard; AI makes the situation even worse.
So what should we do about all of this?
Be aware of the biases that are embedded in search engine results and AI. They are not neutral, though some are better than others. This applies to news as well. We should get in the habit of asking questions like: Why are they telling me this? What do they want me to think or think about? What aren’t they telling me or telling me about?
Use smartphones sparingly. They are a great tool for some things, but they are also a source of propaganda and misinformation.
For parents, think carefully about your children’s internet use. Social media has been shown to be addictive and a source of depression and other mental health issues as well as social contagions like transgenderism and Queer sexuality. Also remember that when you give a child a smartphone, you are giving them easy access to pornography.
Read. A lot. C.S. Lewis makes a case for old books. Every era has its biases, which are typically invisible to us. The advantage of old books is that their biases are different from our own. We will recognize theirs readily enough, but they can also expose our own blind spots. He suggests one old book for every three new books.
Look particularly for books that will help you understand the biblical worldview and its competitors. This will help you identify assumptions in the things you read and alert you to the biases of the author or programmers.
Long form articles from trusted sources on current events and issues can give you perspective. It’s also helpful to read articles from multiple perspectives. Podcasts can also be helpful for this, but again, choose wisely.
Glenn,
This is really good and actionable advice. Thanks for taking the time to do this. Especially love your relaying Lewis’ advice about reading old books.
You are spot-on about it being a big mistake for anyone to approach AI as unquestionably trustworthy. But there is something about these large language models that people find riveting. In some of my own scribblings, I have suggested some reasons why that might be a common reaction.
https://open.substack.com/pub/keithlowery/p/jordan-peterson-and-the-demons-of
A good piece, Keith. Thanks for sharing it.
Thanks, Dr. Sunshine. Appreciate and am grateful for your ministry.