AI chatbots are bullshit… academically speaking. Should we care?

I’m very concerned with the truth. Since I am human, I am occasionally wrong, but when I am, I always want to correct that wrong and replace it with what I now know to be true. This is not how AI chatbots work.

Claiming that AI chatbots like OpenAI’s ChatGPT or Anthropic’s Claude are bullshit might seem like nothing more than an inflammatory opinion. But there is a real academic foundation for it, presented in the paper ChatGPT is bullshit by Michael Townsen Hicks, James Humphries, and Joe Slater. In the paper they review the academically established definition of “bullshit” and build their case for why AI chatbots fit that definition.

What bullshit is, and what it isn’t

The authors of ChatGPT is bullshit first establish what the term bullshit really means. In his book On Bullshit (Princeton University Press, 2005), Harry G. Frankfurt characterizes bullshit as speech that is “unconcerned with the truth.”

In this case, bullshit isn’t the same as lying. A lie, or rather the liar, is concerned with the truth in that they want you to believe something other than the truth. And to do this, they must know and be concerned with the truth.

Bullshit is also not the same as misinformation. That’s something said by someone who thinks what they are communicating is true, but isn’t. This could include anything from incorrect information in a serious news story to merely repeating gossip about someone that is believed to be true, but isn’t. The communicators in both those cases are also concerned with the truth.

Instead, bullshitting is intended to convince the audience that the speaker is concerned with the truth while they, in fact, are not. This is the essence of bullshit. The communicator must both be unconcerned with the truth yet want the audience to believe they are. 

The examples they give range from a student presenting a report on a subject they actually know little about (but want you to believe they know a lot) to a politician waxing on about a topic they aren’t informed on (but want you to think they are). 

In cases of authentic bullshitting, the authors further state that many true statements may be uttered by the bullshitter along with false statements, but they are unconcerned with the truth of either. They just want you to believe that it’s true and that the truth is important to them when it isn’t. 

Why are AI chatbots bullshit?

The authors then make their case for why AI chatbots are bullshit and produce bullshit. 

First we get a review of what AI chatbots are. They are Large Language Models, or LLMs. We’ve all heard this term a lot lately, but let’s review it just to be clear. LLMs are built to deliver text in a format that convincingly mimics what a person would create. They do this using a complex model trained on vast amounts of data (much of the Internet). The LLM calculates which word is likely to come next in the sequence, producing a set of weighted candidate choices based on the initial prompt and the words that have come before, and then selects a word that fits. This word may or may not be the “best” choice (AI chatbots are built to make a choice at every step, but they are intentionally not built to always pick the single most likely word). It is, however, intended to be a convincing choice.
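To make that “calculate, then pick a plausible word” step concrete, here is a minimal, illustrative sketch in Python. The vocabulary, scores, and temperature value are all invented for this example; a real LLM works over tens of thousands of tokens with billions of learned weights, but the sampling step looks roughly like this.

```python
import math
import random

def sample_next_word(candidate_scores, temperature=0.8):
    """Pick the next word from the model's raw scores.

    candidate_scores: dict mapping candidate words to raw scores
    temperature: > 0; higher values make less likely words more probable
    """
    words = list(candidate_scores)
    # Convert raw scores into a probability distribution (softmax).
    scaled = [candidate_scores[w] / temperature for w in words]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample: usually the likeliest word, sometimes a less likely one.
    return random.choices(words, weights=probs, k=1)[0]

# Toy scores for the prompt "The best pizza topping is ..."
scores = {"cheese": 3.1, "mushrooms": 2.4, "pineapple": 1.0, "glue": 0.2}
print(sample_next_word(scores))
```

Nothing in that loop checks whether the chosen word is true. It only checks whether the word is statistically plausible given what came before.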

To be clear, AI chatbots are not “thinking” about what to say or how to say it. They are only programmed to determine which word is likely to come next in a sequence. That sequence is intended to be indistinguishable from human-created text. AI chatbots are not capable of knowing, and are not programmed to know, what is true and what is false. And regardless of what the AI industry wants us to think, these chatbots are also not “intelligent,” meaning they don’t have independent reasoning capabilities. They don’t think. They calculate.

Because of this, AI chatbots are not capable of being concerned with the truth. Their creators merely want you to perceive the output as matching what a person would create and as trustworthy (the truth). This solidly fits the previously established definition of bullshit. AI chatbots may sometimes, or even often, produce text that is true. But since truth is not their aim, the output cannot be trusted to be true or useful. This is why they will recommend rocks or glue as good pizza toppings right alongside cheese and mushrooms.

The authors further refute the use of the term “hallucinations” when describing untruthful information that AI chatbots produce. This is an anthropomorphic view of the technology, in that it assumes human-like behavior. A person who hallucinates sees or hears something they believe to be real that isn’t. In other words, they think it is true. And, therefore, they are concerned with the truth. But the bots are not concerned with the truth, so they can’t hallucinate. The authors even contend this is dangerous as it suggests the bots are reasoning about the truth but are mistaken, a state they can’t achieve. They sum it up well when they say,

“The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” 

- Hicks et al.

According to the authors, AI chatbots are bullshit and so is what they produce. Sometimes it’s true. Sometimes it’s not. Though, it’s always bullshit… academically speaking. 

Are generative AI image creators also bullshit?

The authors of the paper only concern themselves with AI chatbots, but could this model apply to other AI outputs and tools as well? Let’s consider generative AI image creators like Midjourney or DALL·E.

These GenAI image creators work much the same way that AI chatbots do. They make calculations of likely pixels based on the data fed into their models (i.e. virtually any image on the Internet). There are two popular methods in the market right now: diffusion models and Generative Adversarial Networks (GANs).

Diffusion model
Diffusion models are trained on found images by progressively adding noise until each image is reduced to a field of random pixels, or “static”; the model learns how to reverse that process. When given a prompt, it uses its natural-language capabilities to reverse the diffusion, turning fresh static into a recognizable image intended to match the prompt. The more detail in the prompt, the more specific the image will be.
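As a rough illustration of that reverse-diffusion loop, here is a toy sketch in Python. The “denoiser” is just a stand-in function that nudges pixels toward a single guidance value; in a real diffusion model it is a large neural network conditioned on an embedding of the prompt text.

```python
import random

def toy_denoise_step(pixels, prompt_guidance, step, total_steps):
    """Stand-in for the learned denoiser: remove an equal share of the
    remaining "noise" each step, pulling pixels toward the prompt guidance."""
    strength = 1.0 / (total_steps - step)
    return [p + strength * (prompt_guidance - p) for p in pixels]

def generate(prompt_guidance=0.8, size=16, total_steps=50, seed=0):
    random.seed(seed)
    # Start from pure "static": random pixel values in [0, 1].
    pixels = [random.random() for _ in range(size)]
    # Reverse the diffusion one step at a time, guided by the prompt.
    for step in range(total_steps):
        pixels = toy_denoise_step(pixels, prompt_guidance, step, total_steps)
    return pixels

print([round(p, 2) for p in generate()])
```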

GANs
These use a model in which an image generator produces an image that is then “reviewed” by another part of the system, the discriminator. The discriminator judges whether the produced image could pass for a real image from the reference data. If it can’t (early attempts won’t), the generator adjusts and tries again, and again, and again, until the discriminator considers the result passable against real reference images.
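That produce-review-retry loop can be sketched in Python like this. Both functions are toy stand-ins invented for illustration: in a real GAN, the generator and discriminator are neural networks trained against each other, not a loop that nudges inputs toward a fixed target.

```python
import random

def toy_generator(noise):
    """Stand-in for the generator network: turn random noise into an
    "image" (here just a list of pixel brightness values)."""
    return [min(1.0, max(0.0, n)) for n in noise]

def toy_discriminator(image):
    """Stand-in for the discriminator: score how "real" the image looks.
    Here realness is simply closeness to a reference brightness of 0.5."""
    avg = sum(image) / len(image)
    return 1.0 - abs(avg - 0.5)

def generate_until_passable(threshold=0.99, size=16, seed=42):
    random.seed(seed)
    noise = [random.random() for _ in range(size)]
    attempts = 0
    while True:
        attempts += 1
        image = toy_generator(noise)
        score = toy_discriminator(image)
        if score >= threshold:  # the discriminator is satisfied
            return image, attempts
        # "Adjust" the generator: nudge its inputs toward what scores better.
        noise = [n + 0.1 * (0.5 - n) for n in noise]

image, attempts = generate_until_passable()
print(f"passable image after {attempts} attempts")
```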

(Note: these are extremely simplified explanations of these AI image generators).

So, image generators are similar to AI chatbots in that the output comes from calculations based on probabilities learned from the training data.

Are these images bullshit?

Whether or not generative AI image creators are bullshit is a bit harder to parse. One big difference is that AI image generators aren’t intended to produce something that could be “true” in the way that an AI chatbot’s output often is. Human artists may also produce images that aren’t intended to be “real,” in that they depict something outside of reality, e.g. a painting. (And, yes, fiction writers do this too, but they do intend for you to believe they wrote it.) But GenAI image creators generally do intend to create images that look as though they could have been created by a human. So, there is some potential for bullshit here.

I do, however, take offense at human prompters who use AI to generate images and then refer to themselves as “AI artists.” They are no more artists than a client who requests an image from a human artist. Just because I ask a person to create a logo using my company’s initials and a particular shade of blue doesn’t make me an artist who created that image. I’m just the client. Writing a prompt for any generative AI tool effectively makes us clients of the AI. When using these tools we are not writers or artists.

If AI produces bullshit, does it matter?

This really depends on how much each of us cares about the truth when using AI to produce something. Using ChatGPT to write a business email seems pretty low stakes. But if it produces something untrue and you send it along without edits, you should have cared about that untruth. If you didn’t, that’s bullshit, academically speaking.

Much has been said, and more will be said, about the nature of creating text, images, and even video with generative AI as compared with human creators. So I’m not going to work through those arguments here. But let’s consider what it is like to value something as real and true only to be confronted with the fact that it isn’t; to believe that authentic emotion and intention were behind it when they never actually were.

In the 2017 movie Blade Runner 2049, Ryan Gosling’s character, K, is himself a replicant, a type of artificial, human-like lifeform. He is made rather than born and has false memories implanted in his mind so that his unlived life has a sense of history. He also experiences emotions and desires, much like a born human. K has a virtual AI companion named Joi, played by Ana de Armas. This AI companion has no physical form; she is just a projected hologram. But she relates to him with convincing emotions that mimic his own feelings and desires.

Through the events of the movie, K loses Joi as her technology becomes inaccessible to him. He grieves her loss as though she were a true companion who cared for him as he cared for her. Later in the movie, after he has been through hell and back, he is walking through the dystopian city and comes upon an advertisement for the AI companion product that Joi was an instance of, except this version is a hologram several stories tall. The advertisement looks like Joi and promises the same type of companionship he felt with his Joi. And he then realizes the lack of truth in what he thought he had. Joi wasn’t real (even less real than he is). Her promises of love and devotion weren’t real. They weren’t lies, per se, because she, or rather her programming, can’t be concerned with the truth. It was just intended to seem as though it were concerned with, and offering, the truth. But it wasn’t. It was bullshit.

AI may produce a lot of things we value and care about. But a lot of it will be bullshit, academically speaking. And we should probably care about that, because it certainly doesn’t.

Overview

The Strategic Web is an independent consultancy focused on innovation strategy. I help businesses and organizations develop strategies to differentiate themselves in the marketplace and progress out of static practices.

Let’s Talk
Scott Hutcheson
615-275-9998

scott@thestrategicweb.com