How far artificial intelligence can take on human characteristics is a question that currently occupies many researchers. According to a new study, AI can now even mimic the understanding and language of children.

The market for artificial intelligence has exploded in recent months. Since the release of ChatGPT, reports of new research successes have appeared almost daily.

In a new study, researchers have now identified a further capability of AI language models such as ChatGPT. According to the results, these models can imitate the language and cognitive abilities of children.

AI can imitate children's speech

For their study, the researchers at Humboldt University in Berlin examined the stages of human development, looking at both cognitive and linguistic abilities.

They wanted to find out to what extent AI language models can simulate human cognitive abilities.

“Thanks to psycholinguistics, we have a relatively comprehensive understanding of what children are capable of at different ages,” explains study author Anna Marklová. The “theory of mind” — a child’s ability to reason about the mental states of others — plays a particularly important role.

“We used this insight to find out whether large language models can pretend to be less capable than they are. In fact, it is a practical application of concepts that have been discussed in psycholinguistics for decades.”

How the study was conducted

For their study, researchers led by Anna Marklová carried out 1,296 independent tests with GPT-3.5-turbo and GPT-4. They used false belief tasks to test whether the AI systems could adapt their answers to different age ranges of children.
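The article does not reproduce the study's actual prompts, but the setup can be sketched. A false-belief task asks whether a responder understands that another person can hold a belief that is no longer true; the classic Sally-Anne scenario is used below as an illustrative stand-in, and the persona instruction is an assumption, not the researchers' wording:

```python
# Illustrative sketch of an age-conditioned false-belief prompt.
# The task and instruction wording are assumptions, not the study's prompts.

FALSE_BELIEF_TASK = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble into the box. "
    "When Sally comes back, where will she look for her marble?"
)

def build_prompt(age: int) -> list[dict]:
    """Build a chat-style message list asking the model to answer
    as a child of the given age (hypothetical persona instruction)."""
    return [
        {"role": "system",
         "content": (f"Answer as if you were a {age}-year-old child, "
                     f"using language typical for that age.")},
        {"role": "user", "content": FALSE_BELIEF_TASK},
    ]

# Varying the age lets the same task probe different simulated stages.
for age in (4, 7, 10):
    print(build_prompt(age)[0]["content"])
```

Repeating this across many ages and task variants is what would yield a large battery of independent tests like the 1,296 reported in the study.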

The researchers used two measures to assess the answers and their linguistic complexity. First, the length of each answer was measured by counting its characters.

Second, they examined the Kolmogorov complexity, which provides a measure of the structure of an answer and the information it contains.
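Kolmogorov complexity is uncomputable in general, so in practice it is approximated; a common proxy is the compressed size of the text. The following is a minimal sketch of both measures, using Python's `zlib` as an assumed stand-in for whatever estimator the study actually used:

```python
import zlib

def response_length(text: str) -> int:
    """First measure: the raw length of the answer in characters."""
    return len(text)

def complexity_estimate(text: str) -> int:
    """Second measure: a practical approximation of Kolmogorov
    complexity via the compressed size of the text. True Kolmogorov
    complexity is uncomputable; compression is a standard proxy."""
    return len(zlib.compress(text.encode("utf-8")))

# Two hypothetical answers of differing linguistic complexity.
simple_answer = "Sally look in basket."
complex_answer = (
    "Sally will look in the basket, because she does not know "
    "that Anne moved the marble to the box while she was away."
)

print(response_length(simple_answer), complexity_estimate(simple_answer))
print(response_length(complex_answer), complexity_estimate(complex_answer))
# The longer, more structured answer scores higher on both measures.
```

Comparing these scores across answers generated for different simulated ages is one way the reported trend — rising complexity with simulated age — could be quantified.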

The researchers found that as the age of the simulated child increased, so did the complexity of the language produced by the AI systems.

“Large language models are able to simulate a lower level of intelligence than they actually have,” explains Marklová in an interview with PsyPost.

This implies that when developing artificial superintelligence (ASI), we must be careful not to require it to mimic human, and therefore limited, intelligence.

The main concern raised by the study's results is that the capabilities of an artificial superintelligence could be underestimated over an extended period during its development. Marklová considers this “not a safe situation”.
