OpenAI recently introduced a new AI model called CriticGPT, designed to identify errors in ChatGPT's output. Studies show that the tool outperforms human reviewers in 63 percent of cases and could therefore help improve AI.

Since the introduction of ChatGPT, artificial intelligence has become part of everyday life for many people. However, the system is not error-free and can sometimes develop certain biases. The company behind the tool, OpenAI, therefore recently introduced a new model called CriticGPT, developed specifically to detect errors in code generated by ChatGPT.

The development aims to improve the process of aligning AI systems with human requirements by supporting human reviewers and increasing the accuracy of the outputs of large language models (LLMs). CriticGPT is based on the GPT-4 family of models. It analyzes code and points out potential errors, making it easier for human reviewers to catch mistakes that might otherwise be overlooked.

CriticGPT: Error detection 63 percent better than humans

In a research paper titled “LLM Critics Help Catch LLM Bugs,” OpenAI researchers showed that CriticGPT outperformed human reviewers 63 percent of the time. This was due in part to the tool generating fewer unhelpful nitpicks and fewer false alarms.

OpenAI trained the model to detect a variety of coding errors. To do this, the team trained the algorithm on a database of code examples that contained intentionally inserted errors.

This method allows CriticGPT to detect both injected and naturally occurring errors in ChatGPT's output. However, the tool was able to find errors not only in the actual code, but also in other tasks.
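The training setup described above can be sketched roughly as follows. This is a minimal, illustrative assumption of what a "code example with an intentionally inserted error" paired with a critique might look like; the data structure, field names, and bug type are hypothetical and not taken from OpenAI's paper.

```python
# Hypothetical sketch of one training pair for a critic model:
# correct code, a deliberately injected bug, and the target critique.

correct_code = "def average(xs):\n    return sum(xs) / len(xs)\n"

# Inject a deliberate bug: floor division truncates fractional results.
buggy_code = correct_code.replace("/", "//")

training_example = {
    "code": buggy_code,
    "critique": (
        "Uses floor division `//`, so `average([1, 2])` returns 1 "
        "instead of 1.5. Replace `//` with `/`."
    ),
}

# Demonstrate that the injected bug actually changes behavior.
namespace = {}
exec(training_example["code"], namespace)
print(namespace["average"]([1, 2]))  # floor division yields 1, not 1.5
```

A critic model trained on many such pairs learns to map buggy code to a human-readable critique, which is why it can also flag naturally occurring errors it was never explicitly shown.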

In experiments, the model identified errors in 24 percent of ChatGPT training data that human reviewers had previously classified as error-free. A team later confirmed these errors, highlighting CriticGPT's potential for reviewing non-code tasks as well.

Effectiveness for more complex inputs not yet proven

Despite the promising results, CriticGPT, like all AI models, has its limitations. The team at OpenAI trained it on relatively short responses from ChatGPT, which may not be enough to evaluate longer, more complex tasks. Additionally, CriticGPT is not completely immune to incorrect outputs.

OpenAI plans to integrate CriticGPT-like models into its own processes to provide AI-powered support to trainers. This is a step towards better tools for evaluating LLM outputs that are difficult for humans to assess without additional support.

Source: https://www.basicthinking.de/blog/2024/07/05/criticgpt-fehlererkennung/