OpenAI seeks to reassure businesses about ChatGPT's privacy policy

“It was clear that our customers did not want us to train our models on their data, so we changed our plans,” Sam Altman said on Friday, May 5, on the American television network CNBC. The CEO of OpenAI, the California-based startup behind the popular ChatGPT chatbot, announced that this important change had in fact taken effect “a little while ago”: since March 1, apparently.

Capable of generating relevant answers to almost any question after training on large amounts of textual data, ChatGPT has sparked both interest and concern since its launch in November. The security of its users' personal data, in particular, is not guaranteed by this tool, which remains particularly opaque. The company refuses, for example, to disclose the data sets used for its training.


Since March 1, however, the information provided by users during conversations with the chatbot is no longer retained to improve the tool. One caveat: this protection applies only to organizations with a paid subscription, which allows them to use OpenAI's technology to develop their own software solutions.

The conversations of web users on the free public version of ChatGPT continue to be used to improve the chatbot.

Samsung and Amazon

This announcement from OpenAI's CEO is intended above all to reassure companies. In recent weeks, some have seen internal documents “leak” through queries submitted to the tool. On May 2, Samsung Electronics announced that it had banned some of its employees from using such tools after noting the “misuse” of an internal document. Goldman Sachs had previously made a similar announcement.


In early January, Amazon instructed its employees not to share confidential information with ChatGPT, for fear it might reappear in the chatbot's responses. Some engineers at the company had begun using ChatGPT to write computer code.

“This involves personal data, but also raises issues of copyright and of commercial or medical confidentiality,” explains Arnaud Lathill, law lecturer at Sorbonne University. But according to him, the question of ChatGPT's legality arises above all for its training data. The billions of web pages the tool has ingested to “learn” to generate text undoubtedly contain a great deal of personal information.

Authorized again in Italy

The Italian personal data protection authority blocked ChatGPT throughout April, accusing it of failing to respect the General Data Protection Regulation (GDPR) and of lacking a system to verify the age of minor users.

On April 28, OpenAI announced that it was reopening its service in Italy. The company promised that its site now explains how it “collects” and “uses data related to training,” and that its privacy policy is displayed with “high visibility” on the ChatGPT and OpenAI home pages.

When users connect to ChatGPT for the first time, they are informed that the data provided in their conversations will be exploited, but they have no way to refuse. Consent is one of the legal bases the GDPR provides to authorize such exploitation.
