Proposals by the European Commission to “simplify” digital regulations have sparked concerns among rights groups, who warn that the changes could weaken key protections and expand access to personal data for artificial intelligence systems.
Unveiled in late 2025, the so-called Digital Omnibus package aims to revise major frameworks such as the General Data Protection Regulation and the Artificial Intelligence Act. While the Commission argues that these reforms will boost competitiveness and reduce regulatory burdens, critics say they risk undermining rules that safeguard privacy, prevent discrimination, and ensure accountability in digital systems.
At the heart of the debate is the balance between innovation and regulation. Advocacy groups argue that the proposed “simplification” is effectively a form of deregulation that primarily benefits large technology companies, including firms like Amazon, which have significantly increased lobbying efforts in Brussels. They warn that loosening restrictions could allow companies to expand data collection practices and strengthen surveillance-based business models.
One major concern involves proposed changes to the GDPR, which currently governs how personal data is collected, used, and protected. Suggested reforms include redefining what qualifies as personal data and allowing companies to avoid removing data from AI systems if doing so requires “disproportionate effort.” Critics say such provisions could weaken individuals’ rights to control their data and make it harder to understand how personal information is being used.
The proposals would also limit individuals’ ability to access their own data, a core right under the GDPR. By allowing data controllers to reject certain requests, the changes could reduce transparency and accountability in how personal data is handled.
Similarly, planned revisions to the AI Act have raised alarms. The law, considered one of the world’s most ambitious attempts to regulate AI, is still being implemented. However, proposed amendments could delay enforcement and reduce oversight, particularly for high-risk AI systems. Companies may no longer be required to publicly disclose their own risk assessments, making it more difficult to challenge potentially harmful technologies.
Beyond these two laws, further “simplification” measures may affect other key regulations, including the Digital Services Act and the Digital Markets Act. These frameworks aim to regulate online platforms, curb monopolistic practices, and address harmful content, but could face weakening under broader regulatory reviews.
Human rights organizations stress that strong digital regulations are essential in an era where AI systems increasingly shape everyday life. From facial recognition technologies used in public spaces to automated decision-making systems in welfare and migration, poorly regulated AI can reinforce bias, enable surveillance, and disproportionately impact vulnerable groups.
Data protection laws like the GDPR are also seen as critical tools for preventing misuse of personal information, including profiling, discrimination, and unauthorized data sharing with governments or private actors. Weakening these protections could expose individuals to greater risks, including loss of privacy and unfair treatment.
The European Commission’s proposals are still under negotiation, with the European Parliament and the Council of the European Union reviewing the measures. While some provisions have already faced pushback, the final outcome remains uncertain.
As debates continue, rights advocates argue that rather than scaling back protections, the EU should focus on strengthening and enforcing existing laws to ensure that technological innovation benefits society as a whole, without compromising fundamental rights.