CyberThreat Report
Digital Dinosaurs and the Deepfake Asteroid

The Regulation of Deepfake and Artificially Generated Content in the European Union and Member States

The development of artificial intelligence (AI), which enables the creation of hyper-realistic, manipulated audiovisual content, carries both innovation potential and significant societal risks, from disinformation and election interference to financial fraud and severe violations of personal rights. The technology is now in widespread use, mostly for harmless jokes or for more ambitious animations and short films, but, like any tool, it can cause harm in malicious hands. Deepfake forgeries have become routine in organized crime, in smear campaigns between political parties, and in information warfare operations. Used for such purposes, the technology undermines the foundations of child protection and privacy, turning falsified content into a digital instrument of crime that is difficult to control or contain.

The European Union has responded to this challenge with a complex, multi-pillar regulatory strategy, which has not created a single, technology-specific “deepfake law,” but rather addresses the phenomenon along its entire lifecycle.

The backbone of the regulation is provided by three complementary EU legislative acts. First, the General Data Protection Regulation (GDPR) forms the basis, regulating the processing of personal, especially biometric, data used to train deepfake models, setting strict legal bases and conditions for developers. Second, the Artificial Intelligence Act (AI Act) directly targets the developers and deployers of AI systems, imposing default transparency obligations to label synthetic content. Third, the Digital Services Act (DSA) establishes the responsibility of online platforms to prevent the dissemination of harmful and illegal deepfake content, particularly through strict risk assessment and mitigation obligations imposed on Very Large Online Platforms (VLOPs).

This multi-layered approach creates a comprehensive, yet extremely complex and fragmented legal environment. While the AI Act draws the baseline for transparency, the DSA provides the primary mechanism for combating harmful content already in circulation, and the GDPR sets a fundamental limit on the use of personal data. The report highlights that these three laws together form a complementary “push-pull” system: the AI Act seeks to “push” labeled content into the digital ecosystem at the source, while the DSA seeks to “pull” harmful or illegal materials from it at the point of distribution.

The analysis at the Member State level reveals significant differences. France, with its 2024 SREN Act, has adopted a proactive criminal law approach, creating specific criminal offenses for the creation and distribution of non-consensual deepfakes, especially for pornographic content. In contrast, Germany has so far relied on its existing personality rights and defamation laws, although a new draft bill for a specific criminal law indicates a move towards the French model. Hungary is currently experiencing a legislative gap; there are no specific deepfake rules, and the existing offenses in the Criminal Code (e.g., defamation, harassment) do not necessarily provide a proportionate and effective response to the new types of threats posed by the technology.

The Rise of Synthetic Media and the European Regulatory Response

Deepfake technology takes its name from the merging of the words “deep learning” and “fake.”
