Italy Bans OpenAI’s ChatGPT Chatbot Over Privacy Concerns and Lack of Age Verification

Italy has banned access to OpenAI’s ChatGPT chatbot due to privacy concerns after a data breach on March 20 that exposed users’ conversations and payment information.

The country’s data protection watchdog criticized the chatbot for not providing information notices to users and the “lack of a legal basis” to justify the collection and mass storage of personal data used to “train” the platform’s algorithms, according to Euronews.

Furthermore, the watchdog expressed concerns about the chatbot’s lack of age verification measures, potentially exposing minors to inappropriate responses.

The chatbot is intended for users aged 13 and over.

OpenAI has 20 days to provide Italian regulators with additional information and possible remedies.

Failure to comply with the demand can result in a fine of up to 4% of OpenAI’s annual worldwide turnover under the EU’s data protection rules.

The immediate effect of the order is to limit OpenAI’s processing of Italian users’ data.

The Italian ban is the first instance of a Western country blocking ChatGPT, highlighting policy challenges for developers of advanced AI.

The chatbot’s impressive capabilities, such as writing essays and engaging in human-like conversations, have raised concerns about the spread of misinformation, the impact on employment, and broader risks to society.

Over 1,000 technology leaders and researchers called for a six-month moratorium on the development of advanced AI systems so that safety policies can be put in place, The New York Times notes.

The Center for A.I. and Digital Policy, an advocacy group that promotes the ethical use of technology, urged the U.S. Federal Trade Commission to block OpenAI from releasing new commercial versions of ChatGPT.
