Digital Rights and Regulation in the Era of Artificial Intelligence

UDC 349
Publication date: 24.04.2026
International Journal of Professional Science №4(1)-26


Kudryavtsev David
Scientific supervisor: Bashmakova N.,
1. Undergraduate Student, Faculty of Law, North-Western Branch of the Federal State Budget-Funded Educational Institution of Higher Education "The Russian State University of Justice named after V. M. Lebedev"
2. Associate Professor, Ph.D., Department of Humanitarian and Socio-Economic Disciplines, North-Western Branch of the Federal State Budget-Funded Educational Institution of Higher Education "The Russian State University of Justice named after V. M. Lebedev"
Abstract: This article examines the historical development of digital rights and the evolving regulatory landscape in response to artificial intelligence. Key milestones from the emergence of data protection laws to contemporary AI-specific legislation are analyzed. Special attention is given to foundational legal instruments that have shaped the principle of digital autonomy. The author concludes that digital rights represent a continuous process of legal adaptation, and their effective regulation in the AI era depends on balancing innovation with fundamental freedoms.
Keywords: digital rights, artificial intelligence, regulation, data protection, legal principle, fundamental freedoms, algorithmic accountability.


  1. Introduction

Digital rights constitute an emerging category of fundamental rights that protect individuals in the online environment. With the rapid integration of artificial intelligence into the public and private sectors, traditional legal frameworks face unprecedented challenges. The principles of human autonomy, non‑discrimination, and data privacy must be reinterpreted in light of algorithmic decision‑making. This article traces the genesis of digital rights and examines how regulatory instruments have adapted, or failed to adapt, to the distinctive risks posed by AI systems.

  2. Material and methods

This study is based on the analysis of normative legal acts that mark key stages in the development of digital rights. These include the Swedish Data Act of 1973, the German Federal Data Protection Act of 1977, Convention 108 of 1981, the EU Data Protection Directive of 1995, the GDPR of 2016, the EU AI Act of 2024, and Russian Federal Law No. 152‑FZ "On Personal Data." The historical method was used to trace the evolution from early data protection laws to AI‑specific regulation. The comparative method served to contrast the EU risk‑based model, the Russian approach, and other jurisdictions, notably China. The formal‑legal method was applied to interpret key provisions, in particular Article 22 of the GDPR and the risk classification of the AI Act.

  3. Results of the study and their discussion

The first legislative attempts to protect digital rights appeared in the 1970s in Europe, responding to the spread of computerized data processing. Sweden enacted the first national Data Act in 1973, which required government agencies to obtain permission before processing personal data using automatic systems[2, art. 617]. In 1977, Germany passed the Federal Data Protection Act (Bundesdatenschutzgesetz), introducing the concept of “informational self‑determination” as a distinct right derived from human dignity and personality. These early laws established the fundamental principle that individuals have a right to know and control information held about them by automated systems. They became the basis for future legislative acts in various countries around the world.

A pivotal milestone was the adoption of the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) in 1981. This treaty was the first legally binding international instrument in the field of data protection. It laid the groundwork for cross‑border data protection rules and recognized the need to safeguard privacy against unchecked automated processing. Unlike later instruments, Convention 108 did not explicitly address algorithmic decision‑making, but its principles of fair collection and purpose limitation remain relevant today.

The European Union’s Data Protection Directive of 1995 (Directive 95/46/EC) further harmonized national laws across member states. However, the most comprehensive instrument to date is the General Data Protection Regulation (GDPR), adopted in 2016 and applicable from May 2018. Article 22 of the GDPR explicitly grants individuals the right not to be subject to a decision based solely on automated processing, including profiling, if that decision produces legal effects or similarly significantly affects them. Recital 71 of the GDPR adds that data controllers must implement suitable measures to safeguard the data subject’s rights, including the right to obtain human intervention and to challenge the decision.

Parallel to European developments, the Russian Federation has incorporated digital rights into its law. The Federal Law "On Personal Data" (No. 152‑FZ) has been amended several times to strengthen consent requirements, data localization mandates, and the obligation to notify regulators of data breaches. Currently, about 400 laws in Russia regulate relations concerning information and information technology [1, art. 33], all of which serve to protect the data of the country’s citizens. Russia does not yet have a single "Law on AI" comparable to the European one, but legislative drafts already exist. Lawyers distinguish three models of AI regulation: classical, soft law, and mixed. The classical model relies on prohibitive principles, soft law is advisory, and the mixed model falls between the two. According to Russian President Vladimir Putin, Russia should pursue the soft‑law approach in AI policy: in his view, strict regulation of AI in some countries has hindered the development of the technology [5].

Some legal scholars associate AI with a new technological order that will foster the formation of new classes, where digital proficiency becomes a professional standard [6].

It is also worth considering that artificial intelligence is driving the digitalization of justice, changing the structure and methods of communication in court proceedings. This could significantly alter the landscape of courts in the future. Left unchecked, algorithmic errors can lead to serious consequences [7]. For example, AI lacks empathy and, unlike a human judge, cannot apply it even in cases where it is relevant. Furthermore, ordinary participants in the proceedings may not fully understand the logic behind a verdict.

A fundamentally important question is whether AI can become aware of its "self," and whether this "AI self" will be able to navigate computer networks. Here the most serious potential threat to humanity arises: the technological possibility of AI escaping human control. New legislation is being introduced in Europe to ensure this control [4]. Most recently, the European Union adopted the Artificial Intelligence Act (AI Act) in 2024, the world’s first comprehensive horizontal regulation of AI. The AI Act classifies AI systems according to risk levels: unacceptable, high, limited, and minimal. Unacceptable‑risk systems (such as social scoring by governments or real‑time remote biometric identification in public spaces for law enforcement) are prohibited outright. High‑risk systems, such as those used in employment, education, critical infrastructure, migration control, and access to essential services, must comply with strict obligations regarding data governance, transparency, human oversight, accuracy, and robustness. Providers of high‑risk AI systems must also register their models in an EU database and perform conformity assessments before placing them on the market.

In addition to the EU, other jurisdictions have taken notable steps. China has enacted several binding rules, including the Algorithmic Recommendation Provisions (2022) and the Deep Synthesis Provisions (2023), which require algorithmic transparency, labeling of synthetic content, and the right to opt out of personalized recommendations. A distinctive feature of China’s information policy is detailed state regulation of the information sphere in the form of strict control [3, art. 21]. These divergent regulatory models reflect different legal traditions and policy priorities.

  4. Conclusion

Thus, the genesis of digital rights shows a steady movement from basic data protection toward AI-specific regulation. The EU AI Act and the GDPR represent the most advanced models to date. The key finding is that no single legal instrument can resolve all the challenges posed by AI. Instead, effective protection requires continuous adaptation of three core ideas: human oversight, algorithmic transparency, and accessible legal remedies.

References

1. Bachilo, I. L. Information Law: textbook. 5th ed. Moscow: Yurayt Publishing House, 2026. 419 p.
2. Fedotov, M. A. Information Law: textbook. Moscow: Yurayt Publishing House, 2026. 855 p.
3. Kovaleva, N. N. Information Law: textbook. Moscow: Yurayt Publishing House, 2025. 353 p.
4. Privalova, N. I. (2026). Homo technicus, homo traditium and artificial intelligence: a philosophical analysis // Context and Reflection: Philosophy of the World and Man. Vol. 15. Pp. 182-193.
5. Privalov, N. G. (2025). Institutionalization of artificial intelligence: an interdisciplinary aspect // Journal of Political Studies. Vol. 9, No. 4. Pp. 3-27. DOI: https://doi.org/10.12737/2587-6295-2025-9-4-3-27.
6. Privalov, N. G. (2025). Estates as new socio-professional groups in Russia // Journal of Political Studies. Vol. 9, No. 3. Pp. 106-120. DOI: https://doi.org/10.12737/2587-6295-2025-9-3-106-120.
7. Bondarev, V. G., Bashmakova, N. I., Sinina, A. I. (2020). Judicial discourse: genesis and definition of the concept // Conflictology, 15(1), 52-65.