ChatGPT maker OpenAI recently introduced a new AI model. However, the company wants to proceed “cautiously” when rolling out OpenAI o1, as the model could be misused to produce bioweapons.

With the introduction of ChatGPT, OpenAI revolutionized the world of artificial intelligence. Now the company has introduced a new AI model: OpenAI o1.

According to the company, OpenAI o1 requires “more time to think.” In return, the AI is able to reason through complex tasks and solve harder problems in science, programming, and mathematics.

But this is precisely what could make the new AI model dangerous. That's why the company has assigned OpenAI o1 the highest risk level it has ever given one of its models.

Could OpenAI o1 be misused to produce bioweapons?

OpenAI has classified its new AI model as “medium risk” in relation to the production of chemical, biological, radiological, and nuclear weapons. According to the Financial Times, the company has stated that OpenAI o1 has “significantly improved” experts' ability to develop bioweapons.

AI models capable of step-by-step reasoning could pose an increased risk of misuse in the wrong hands. Mira Murati, CTO of OpenAI, told the Financial Times that the company will be particularly “cautious” when introducing these new capabilities to the public. Nevertheless, the AI will be generally available to ChatGPT subscribers and programmers.

Red teamers and experts from various scientific fields tested the model to push it to its limits. According to Murati, the AI performed far better than its predecessors on general safety criteria.

Experts call for laws to restrict AI models

The new capabilities of the o1 model “reinforce the importance and urgency” of laws regulating artificial intelligence, Yoshua Bengio, professor of computer science at the University of Montreal, told the Financial Times.

Such a law is currently under discussion in California. Among other things, it would require makers of AI models to take measures to minimize the risk of their models being misused to develop biological weapons.

Bengio, one of the world's leading AI researchers, sees danger in the constant advancement of AI models. “The risks will continue to increase if the right guardrails are missing,” says the researcher. “Improving AI's ability to reason and using this ability to deceive is particularly dangerous.”

Source: https://www.basicthinking.de/blog/2024/09/16/openai-o1-herstellung-biowaffen/
