OpenAI has made huge progress in the field of artificial intelligence with ChatGPT. Now the company has launched its latest model, GPT-4o Mini. But what exactly is it?
It was only in May 2024 that OpenAI introduced GPT-4o, which can process audio, image, and text input in real time. In this version, ChatGPT can also hold real conversations with people.
Just a few months later, the US company has introduced its new model, GPT-4o Mini. We explain what distinguishes the sister model from GPT-4o.
What is GPT-4o Mini?
GPT-4o Mini is – as the name suggests – a slimmed-down version of its big sister GPT-4o. OpenAI itself describes the new version as its “most cost-efficient small model,” one that makes AI “much more affordable.”
The new model is intended to offer a cost-effective alternative, especially for developers. OpenAI wants to make AI “accessible to a wider audience,” as Olivier Godement, who heads the company’s API platform product, told The Verge:
“If we want AI to benefit every corner of the world, every industry, and every application, we need to make AI much more affordable.”
The new model is available today to users on the Free, Plus, and Team plans, where it replaces GPT-3.5 Turbo. Enterprise users are expected to get access in the coming week.
However, developers who do not want to switch to the new version can still access GPT-3.5 via the API. According to Godement, though, GPT-3.5 will eventually be removed from the API.
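For developers, switching models amounts to changing a single identifier in the request. As a minimal sketch, here is the shape of a Chat Completions request body (the field names follow OpenAI’s public API; the prompt text and helper function are purely illustrative):

```python
import json

def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the JSON body for a chat-completion request.

    Moving from GPT-3.5 Turbo to GPT-4o Mini only means changing
    the "model" string; the rest of the payload stays the same.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The old and new requests differ only in the model field.
old = build_request("Hello!", model="gpt-3.5-turbo")
new = build_request("Hello!")
print(json.dumps(new, indent=2))
```

Because the message format is unchanged, existing GPT-3.5 Turbo integrations can usually be migrated by updating this one string.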
How much does OpenAI’s new model cost?
According to OpenAI, GPT-4o Mini is “an order of magnitude cheaper than previous frontier models.” Compared to GPT-3.5 Turbo, the new model is more than 60 percent cheaper.
Input tokens cost 15 cents per million, output tokens 60 cents per million. Up to 16,000 output tokens are supported per request.
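At these rates, the cost of a request is simple arithmetic. A minimal sketch, with the prices hard-coded from the figures above (the example token counts are hypothetical):

```python
# GPT-4o Mini pricing from the article:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request at these rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token answer.
print(f"${cost_usd(2_000, 500):.6f}")  # → $0.000600
```

Even a fairly long prompt and answer together cost well under a tenth of a cent, which is the “much more affordable” point OpenAI is making.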
How powerful is GPT-4o Mini?
In the MMLU test, in which AI models have to answer around 16,000 multiple-choice questions from 57 academic subjects, GPT-4o Mini achieved a score of 82 percent, according to OpenAI. GPT-3.5 achieved 70 percent on this test, GPT-4o 88.7 percent.
However, benchmark tests such as the MMLU should be treated with caution, as the New York Times notes, particularly because the way these tests are carried out can vary between companies.
It is also possible that the AI systems already have the answers to these tests in their training data. In that case they could effectively cheat, which would go unnoticed because no external evaluators are involved in the process.
Source: https://www.basicthinking.de/blog/2024/07/19/gpt-4o-mini-openai-ki-moodell/