ChatGPT wants to offer its users more choices when it comes to their data. That’s why you can now deactivate your chat history in ChatGPT.
The success of the AI software ChatGPT seems unstoppable. But as the number of users grows, so does the responsibility of its parent company, OpenAI.
For this reason, users of the AI software will in future be able to decide for themselves whether they want to save their chat history. This follows an earlier data breach involving exactly this feature.
Disable ChatGPT chat history
In future, you can decide for yourself whether or not ChatGPT should save your chat history with the bot. Parent company OpenAI has introduced a new option that lets you easily deactivate your chat history in ChatGPT.
Conversely, this also means that conversations you have with the chatbot are not used to train the AI model. This applies to any conversation you start while chat history is turned off.
If you turn off your ChatGPT chat history, those conversations will no longer appear in your sidebar history either. You can make the change in your settings and, according to OpenAI, reverse it at any time.
ChatGPT breach exposes chat histories
Chat histories had previously been at the center of a data breach in the AI software. At the end of March, it emerged that some users were seeing other people’s conversations in their own chat history.
OpenAI CEO Sam Altman confirmed the data breach on Twitter. A “small percentage” of users were able to see the chat histories of others.
The cause of this “significant problem” was a bug in an open-source library. It had already been fixed by the time Altman tweeted.
ChatGPT coming soon for business customers
But OpenAI has not only announced the option to deactivate its users’ chat histories. New features for business customers are also on the way.
OpenAI plans to introduce a business subscription for its AI software. Among other things, the company is targeting “professionals who need more control over their data”.
To offer greater data security, business users’ data will not be used to train the AI models. According to OpenAI, the new subscription will be introduced “in the coming months”.