OpenAI’s efforts to produce less factually false output from its ChatGPT chatbot are not enough to ensure full compliance with European Union data rules, a task force at the EU’s privacy watchdog said.
“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle,” the task force said in a report released on its website on Friday.
The body that unites Europe’s national privacy watchdogs set up the task force on ChatGPT last year after national regulators led by Italy’s authority raised concerns about the widely used artificial intelligence service.
OpenAI did not immediately respond to a Reuters request for comment.
The various investigations launched by national privacy watchdogs in some member states are still ongoing, the report said, adding that it was therefore not yet possible to provide a full description of the results. The findings were to be understood as a "common denominator" among national authorities.
Data accuracy is one of the guiding principles of the EU’s set of data protection rules.
“As a matter of fact, due to the probabilistic nature of the system, the current training approach leads to a model which may also produce biased or made-up outputs,” the report said.
“In addition, the outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy.”
© Thomson Reuters 2024