A political storm is brewing in Brussels after European Commission (EC) plans to loosen digital privacy laws were leaked.
On November 19, the Commission is expected to unveil a Digital Omnibus package of reforms aimed at simplifying and streamlining Europe’s digital privacy regulations. However, critics have responded strongly to the proposed reforms, which were leaked last week in draft form by Politico.
Among the elements that have raised concern is a proposal to give companies breaching the rules on the highest-risk AI uses a grace period of one year. Providers of generative AI systems placed on the market before the implementation date would be shielded from fines for violations of AI transparency rules until August 2027, in order to “provide sufficient time for adaptation of providers and deployers of AI systems” to implement the obligations.
For custom-made data processing services, where a higher level of adaptation is needed for the service to be usable, a lighter regime would apply. Small and medium-sized enterprises, as well as small mid-cap data processing service providers, would also get an out. However, in each case, this would only apply to contracts already in place before 12 September 2025.
Draft changes would allow companies to legally process special categories of data to train and operate AI technology. The Commission is also planning to reframe the definition of such special category data, which is afforded extra protections under the privacy rules. There are also plans to redefine what constitutes personal data: pseudonymised data may not always be subject to GDPR requirements, a change that reflects a recent ruling from the EU’s top court.
Data would only qualify as special category data if it “directly revealed” information about an individual’s racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sex life or sexual orientation.
There would be two new derogations from the prohibition on processing special category data. The first would allow residual processing of special category data for the development and operation of an AI system or model, provided attempts are made to identify and remove the data and, where this would be disproportionate, methods are used to prevent disclosure of the special category data to others as output data.
The threshold for reporting personal data breaches to data protection authorities would also be raised: only incidents posing a high risk to data subjects would need to be reported. There is a proposal for a single reporting point for all data breach incidents, covering the requirements under the GDPR, the Network and Information Security Directive, the Digital Operational Resilience Act and the Critical Entities Resilience Directive. The reporting obligation for communications service providers under the ePrivacy Directive would be repealed on the basis that it would be superfluous.
Additionally, the requirement for data intermediaries to be licensed and legally separate from other entities would be replaced by a voluntary licensing scheme and a requirement for functional separation of the intermediary services.
Where a controller collects data directly from a data subject, a privacy notice may not be required if there are reasonable grounds to believe that the individual already knows the controller’s identity, the purpose of processing the personal data, and how to contact any data protection officer. This exemption would not apply if the data are shared with third parties, transferred to third countries, used for automated individual decision-making, or otherwise used for high-risk processing.
The Commission also proposes significant changes to the rules on cookies, suggesting that consent should not be needed when cookies and similar technologies are used for aggregated audience measurement or security purposes. Where tracking technology is used to process personal data, the GDPR would trump the ePrivacy Directive, and the controller could rely on any lawful basis under the GDPR, not just consent.
Many have accused the EC of bowing to pressure from Big Tech and the Trump administration. A number of companies, including Facebook and Instagram owner Meta, have warned that the EU’s approach to regulating AI risks cutting the continent off from cutting-edge services.
Meta, X and LinkedIn have previously delayed rollouts of AI applications in Europe after interventions by the Irish Data Protection Commission. Google is facing an inquiry by the same regulator and was previously forced to pause the release of its Bard chatbot. Italy’s regulator has previously imposed temporary blocks on OpenAI’s ChatGPT and China’s DeepSeek over privacy concerns.
According to the Financial Times, EU officials are wary of any moves that could provoke the White House into retaliatory measures, whether by cutting off intelligence or weapons supplies to Ukraine or by starting a transatlantic trade war. A senior EU official told the newspaper that the EU had been “engaging” with the Trump administration on adjustments to the AI Act and other digital regulations as part of its wider simplification process.
Czech lawmaker Markéta Gregorová said she is “surprised and concerned” that the GDPR is being reopened. She warned that Europeans’ fundamental rights “must carry more weight than financial interests.”
Max Schrems, founder of Austrian privacy group Noyb, said the Commission is “secretly trying to overrun everyone else in Brussels”. Schrems, who has previously initiated legal action to block major data transfer deals with the United States, said: “This disregards every rule on good lawmaking, with terrible results.”
