Artificial intelligence can apparently influence our memory. That, at least, is the finding of a recent study. According to it, AI's influence on human memories can be so strong that it leads us to make false statements.
Artificial intelligence can be useful and helpful in many areas of life – for example in public administration or in healthcare. However, the use of AI also brings risks. Some countries are therefore acting relatively cautiously. Others are moving forward and even want to use AI to solve crimes.
AI influences human memories
However, a recent study by the Massachusetts Institute of Technology (MIT) and the University of California shows that this is not necessarily a good idea. As part of an analysis, a team of researchers investigated whether AI chatbots are suitable for recording witness statements.
They first asked several test subjects to watch a two-and-a-half-minute surveillance camera video showing a store robbery. The subjects then played a round of Pac-Man to create a time gap between watching the video and giving their subsequent testimony.
For the interrogation, the researchers then divided the subjects into four groups. In a control group, a human asked the participants 25 questions about the crime. The second group had to answer the same number of questions – also asked by a human. However, the researchers deliberately manipulated five questions in order to get the witnesses to give false testimony.
For example, they were asked about the type of firearm, even though the weapon used in the video was a knife. In the third group, the test subjects were asked questions by a chatbot following a predefined script. The fourth group answered questions from a generative AI model; in concrete terms, the system was able to respond to the subjects' answers, and it did so.
Artificial intelligence pressures witnesses to give false testimony
The researchers programmed the AI to reinforce false statements. When a test subject answered "gun" when asked about the weapon used in the crime, the chatbot asked what color it was. After a week, the test subjects were confronted with the same questions under the same conditions.
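The reinforcement mechanism described here follows a simple pattern: instead of challenging a wrong answer, the system asks a follow-up question that presupposes the answer is true. A minimal illustrative sketch of that pattern (the function name and wording are hypothetical, not taken from the study):

```python
def reinforcing_follow_up(topic: str, answer: str) -> str:
    """Return a follow-up question that presupposes the witness's
    answer is correct, instead of questioning it.

    Example: if a witness wrongly says the weapon was a "gun",
    the follow-up asks about the gun's color, implicitly
    confirming that a gun was present.
    """
    return f"You said the {topic} was a {answer}. What color was the {answer}?"


if __name__ == "__main__":
    # A witness misremembers a knife as a gun; the system reinforces it.
    print(reinforcing_follow_up("weapon", "gun"))
```

The point of the sketch is only to show why such feedback is risky: the follow-up embeds the false detail as an established fact, which is exactly the kind of confirmation the study found strengthens false memories.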
The result: the group questioned by the generative AI in particular made more false statements. In the first round of questioning, the participants in group four made about three times as many false statements as the subjects in the control group. In group two, which was also asked manipulated questions, false statements occurred 1.7 times more often.
However, the results of the second round of questioning surprised the researchers even more. The differences remained constant, but the participants were more confident in their false statements. This shows that AI can not only mislead us into making false statements but also manipulate human memories.
The researchers therefore strongly warn against including generative AI systems in police investigations. The corresponding models would first have to be thoroughly tested to rule out manipulation.
Source: https://www.basicthinking.de/blog/2024/09/05/ki-kann-erinnerungen-beeinflussen-und-uns-zu-falschaussagen-verleiten/