Italy gives OpenAI initial to-do list for lifting ChatGPT suspension order
Italy’s data protection watchdog has laid out what OpenAI needs to do for it to lift an order against ChatGPT issued at the end of last month — when it said it suspected the AI chatbot service was in breach of the EU’s General Data Protection Regulation (GDPR) and ordered the U.S.-based company to stop processing locals’ data.
The EU’s GDPR applies whenever personal data is processed, and there’s no doubt large language models such as OpenAI’s GPT have hoovered up vast amounts of the stuff off the public internet in order to train their generative AI models to be able to respond in a human-like way to natural language prompts.
OpenAI responded to the Italian data protection authority’s order by swiftly geoblocking access to ChatGPT. In a brief public statement, OpenAI CEO Sam Altman also tweeted confirmation that the company had ceased offering the service in Italy, alongside the usual Big Tech boilerplate caveat that it “think[s] we are following all privacy laws.”
Italy’s Garante evidently takes a different view.
The short version of the regulator’s new compliance demands is this: OpenAI will have to
- get transparent and publish an information notice detailing its data processing;
- immediately adopt age gating to prevent minors from accessing the tech, and move to more robust age verification measures;
- clarify the legal basis it is claiming for processing people’s data to train its AI, and it cannot rely on performance of a contract, meaning it has to choose between consent and legitimate interests;
- provide ways for users (and non-users) to exercise rights over their personal data, including asking for corrections of disinformation ChatGPT generates about them, or else having that data deleted;
- give users the ability to object to OpenAI’s processing of their data for training its algorithms; and
- run a local awareness campaign to inform Italians that it is processing their information to train its AIs.
The DPA has given OpenAI a deadline of April 30 to get most of that done. (The local radio, TV and internet awareness campaign has a slightly more generous deadline of May 15.)
There’s also a little more time for the additional requirement to migrate from the immediately required (but weak) age gating child safety tech to a harder-to-circumvent age verification system. OpenAI has been given until May 31 to submit a plan for implementing age verification technology that filters out users under 13 (and users aged 13 to 18 who lack parental consent), with the deadline for having that more robust system in place set at September 30.
In a press release detailing what OpenAI must do for it to lift the temporary suspension of ChatGPT, which was ordered two weeks ago when the regulator announced it was opening a formal investigation into suspected GDPR breaches, the Garante writes:
OpenAI will have to comply by 30 April with the measures set out by the Italian SA [supervisory authority] concerning transparency, the right of data subjects — including users and non-users — and the legal basis of the processing for algorithmic training relying on users’ data. Only in that case will the Italian SA lift its order that placed a temporary limitation on the processing of Italian users’ data, there being no longer the urgency underpinning the order, so that ChatGPT will be available once again from Italy.
Going into more detail on each of the required “concrete measures,” the DPA stipulates that the mandated information notice must describe “the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects (users and non-users),” adding that it “will have to be easily accessible and placed in such a way as to be read before signing up to the service.”
Users in Italy must be presented with this notice prior to signing up and must also confirm they are over 18, the regulator further requires. Users who registered before the DPA’s stop-data-processing order will have to be shown the notice when they access the reactivated service and must also pass through an age gate that filters out underage users.
On the legal basis issue attached to OpenAI’s processing of people’s data for training its algorithms, the Garante has narrowed the available options down to two: consent or legitimate interests. It stipulates that OpenAI must immediately remove all references to performance of a contract, “in line with the [GDPR’s] accountability principle.” (OpenAI’s privacy policy currently cites all three grounds but appears to lean most heavily on performance of a contract for providing services like ChatGPT.)
“This will be without prejudice to the exercise of the SA’s investigation and enforcement powers in this respect,” it adds, confirming it is withholding judgment on whether the two remaining grounds can lawfully be used for OpenAI’s purposes.
Additionally, the GDPR provides data subjects with a suite of access rights, including a right to correction or deletion of their personal data. That is why the Italian regulator has also demanded that OpenAI implement tools so that data subjects, meaning both users and non-users, can exercise those rights and get falsities the chatbot generates about them rectified. Or, if correcting AI-generated lies about named individuals proves “technically unfeasible,” the DPA stipulates the company must provide a way for their personal data to be deleted.
“OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms. The same right will have to be afforded to users if legitimate interest is chosen as the legal basis for processing their data,” it adds, referring to another of the rights GDPR affords data subjects when legitimate interest is relied upon as the legal basis for processing personal data.
All of the measures the Garante has announced are provisional, based on its preliminary concerns. Its press release notes that its formal inquiries “to establish possible infringements of the legislation” are continuing and could lead it to take “additional or different measures if this proves necessary upon completion of the fact-finding exercise under way.”
We reached out to OpenAI for a response but the company had not replied to our email at press time.