BRUSSELS (Reuters) - OpenAI’s efforts to produce less factually false results from its ChatGPT chatbot are not enough to ensure full compliance with European Union data rules, an EU privacy watchdog task force said.
“While the measures taken to comply with the principle of transparency are beneficial to avoid misinterpretations of ChatGPT results, they are not sufficient to comply with the principle of data accuracy,” the task force said in a report posted on its website on Friday.
The body that unites Europe’s national privacy watchdogs set up the task force on ChatGPT last year after national regulators, led by the Italian authority, raised concerns about the widely used artificial intelligence service.
OpenAI did not immediately respond to a Reuters request for comment.
The various investigations launched by national privacy watchdogs in some member states are still ongoing, the report said, adding that it was therefore not yet possible to provide a full description of the results. Its conclusions should be understood as a “common denominator” among the national authorities.
Data accuracy is one of the guiding principles of the EU’s data protection rules.
“In fact, due to the probabilistic nature of the system, the current training approach leads to a model that can also produce biased or invented results,” the report states.
“Furthermore, results provided by ChatGPT are likely to be considered factually accurate by end users, including information relating to individuals, regardless of their actual accuracy.”
(Reporting by Tassilo Hummel, additional reporting by Harshita Varghese; Editing by Benoit Van Overstraeten and Emelia Sithole-Matarise)