AI chatbots like ChatGPT generate answers by analyzing large amounts of text. But that doesn’t mean they’re always right.
ChatGPT and other AI chatbots can speak fluently, forming grammatically correct sentences that even have a natural speech rhythm. But don't mistake this polished output of words for thoughts, emotions or intentions, experts say.
A chatbot is essentially a machine that performs mathematical calculations and statistical analysis to find the right words and sentences. Chatbots like ChatGPT are trained on large amounts of text, which allows them to interact naturally with human users.
OpenAI, the company behind ChatGPT, states on its website that its models are based on information from various sources, including data coming from the user or from licensed content.
This is how AI chatbots work
AI chatbots like OpenAI’s ChatGPT are based on large language models (LLMs), which are trained on huge amounts of text data. This data comes from public texts and other sources and is usually produced by people.
The systems are trained on strings of words and learn the meanings of words within those strings, experts say. This not only exposes large language models to factual information, but also helps them recognize language patterns and the typical usage and grouping of words.
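A toy sketch can illustrate what "learning the typical grouping of words" means in its simplest form. The tiny corpus and the bigram-counting approach below are illustrative assumptions, not how production LLMs are actually trained, but they show the core idea: statistics over word sequences reveal which words tend to follow which.

```python
# A minimal illustration (not real LLM training) of how counting word
# pairs in text captures "typical usage and grouping of words".
from collections import defaultdict

corpus = "the cat sat on the mat . the cat ate ."  # toy training text
words = corpus.split()

# Count how often each word follows another.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

# After "training", the statistics show that "cat" most often follows "the".
most_likely = max(follows["the"], key=follows["the"].get)
print(most_likely)  # → "cat"
```

Real models learn far richer patterns over much longer contexts, but the principle is the same: frequency statistics over word sequences, not understanding.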
Chatbots are also trained by humans to provide appropriate responses and limit the number of malicious messages.
These data trainers work for companies commissioned by OpenAI and others to refine their models; one example is Invisible Technologies. The trainers flag factual inaccuracies, spelling and grammatical errors, and harassing responses from the bot in specific entries.
“You can say, ‘That’s toxic, that’s too political, that’s an opinion,’ and phrase it in a way that the bot stops generating those things the same way,” says Kristian Hammond, a professor of computer science at Northwestern University and director of the Center for Advancing Safety of Machine Intelligence.
When you ask a chatbot a simple question, it can work like this: it uses a series of algorithms to select the most likely words and sentences to answer the question being asked. Within milliseconds the bot scores the best possible answers and presents one of them, chosen with an element of randomness. This is why the AI can generate slightly different answers when you ask the same question repeatedly.
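The random element in that selection step can be sketched in a few lines. The candidate words and their scores below are invented for illustration; real models score tens of thousands of candidate tokens at each step.

```python
# A toy sketch of the sampling step described above: the model scores
# candidate next words, then picks one at random, weighted by score.
# The candidates and probabilities here are hypothetical.
import random

# Hypothetical scores for completing "The capital of France is ..."
candidates = {"Paris": 0.90, "Lyon": 0.06, "Marseille": 0.04}

word = random.choices(list(candidates),
                      weights=list(candidates.values()), k=1)[0]
print(word)  # usually "Paris", occasionally another city
```

Running this repeatedly produces mostly the same word but occasionally a different one, which is the same reason a chatbot's answers vary when you repeat a question.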
Chatbots can also split a question into several parts and answer each part in turn. Suppose you ask the chatbot to name a US president who shares a first name with the male lead in the movie “Camelot”. The bot would first determine that the actor’s name is Richard Harris, then use that answer to arrive at Richard Nixon as the answer to the original question, Hammond said.
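The two-step decomposition Hammond describes can be sketched as a chain of lookups. The dictionaries below are hypothetical stand-ins for the model's learned knowledge; the point is only that the answer to the sub-question feeds into the main question.

```python
# A hypothetical sketch of question decomposition: answer the
# sub-question first, then use that answer for the original question.
# The lookup tables stand in for the model's learned knowledge.

lead_actor = {"Camelot": "Richard Harris"}           # film -> male lead
president_by_first_name = {"Richard": "Richard Nixon"}

def answer(film: str) -> str:
    actor = lead_actor[film]                # sub-question: who starred in the film?
    first_name = actor.split()[0]           # extract the shared first name
    return president_by_first_name[first_name]  # main question: which president?

print(answer("Camelot"))  # → "Richard Nixon"
```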
Chatbots are not perfect – they can make mistakes
AI chatbots run into the most trouble when they’re asked questions they don’t have an answer to. They don’t know what they don’t know, so they guess based on what they do know.
The problem is that the chatbots won’t tell you if an answer is just a guess. When a chatbot makes up information and presents it to the user as fact, it is called a “hallucination.”
These hallucinations are one of the reasons some technology experts warn against relying on chatbots. A recent study from Boston Consulting Group found that people who use ChatGPT at work can actually perform worse on certain tasks if they take the chatbot’s output at face value and don’t check it for errors.
“This is what we call knowledge of knowledge – or metacognition,” says William Wang, associate professor of computer science at the University of California, Santa Barbara, and co-director of the university’s Natural Language Processing Group. “The model doesn’t understand the known unknowns very well,” he says.