


How is AI changing protests?

A deep dive into the [il]legal surveillance of citizens

Mass surveillance refers to the monitoring of a large number of people or specific groups, as well as the collection of data on citizens and communities. Governments or large organisations often use specific programs and artificial intelligence to collect and process information about individuals.

Unlike targeted surveillance, mass surveillance is not focused on specific individuals. Its original purpose was to use the collected data for security and law enforcement purposes. However, today the question arises as to how far this surveillance should go and where it starts to infringe on the privacy of the population.

Governments or institutions use advanced technologies for mass surveillance to intercept and monitor communication channels and online activities on a large scale. Data is collected from telecommunications companies, internet service providers, and public surveillance cameras, among other sources. Artificial intelligence is then used to analyse this data for potential threats and patterns.

There are numerous methods of mass surveillance. These include, for example, collecting data from phone calls and emails, reading SMS messages, tracking personal locations, or using programs that can recognise faces through surveillance cameras.
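To illustrate the principle behind such automated analysis, here is a deliberately simplified Python sketch. It is not based on any real agency's system: the call records, the "night-time calls" rule, and the threshold are all invented for this example, and a real system would combine far more data sources and signals.

```python
# Illustrative sketch only: how collected metadata (who contacted whom, when,
# and roughly from where) could be scanned automatically for patterns.
# The records, the rule, and the threshold are invented for this example.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str
    callee: str
    cell_tower: str   # coarse location derived from the mobile network
    hour: int         # hour of day, 0-23

records = [
    CallRecord("A", "B", "tower_12", 2),
    CallRecord("A", "C", "tower_12", 2),
    CallRecord("A", "D", "tower_12", 3),
    CallRecord("E", "F", "tower_07", 14),
]

def flag_unusual_activity(records, night_call_threshold=3):
    """Flag callers with many night-time calls from the same location.

    This shows only the principle of rule-based pattern matching on bulk
    metadata; it is not a real detection rule."""
    night_calls = Counter(
        (r.caller, r.cell_tower) for r in records if r.hour < 6
    )
    return [key for key, count in night_calls.items() if count >= night_call_threshold]

print(flag_unusual_activity(records))  # -> [('A', 'tower_12')]
```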





Pros and Cons

Mass surveillance has both advantages and disadvantages. While AI surveillance is intended to increase security and help prevent crime, constant monitoring also raises many ethical questions. Does this kind of surveillance infringe on personal rights and our privacy? Does the benefit outweigh the scepticism toward this new technology?

Biometric Surveillance

Artificial intelligence is increasingly being used for biometric surveillance. This means that a person can be identified based on their physical, biological, and behavioural characteristics. These types of data are unique to each individual and therefore very difficult to forge. They may include fingerprints, facial features, the iris, or a person’s voice.
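Technically, such systems usually convert a biometric trait into a numerical template, for example a vector produced by a neural network from a face image, and then compare it with stored templates. The Python sketch below shows only this matching principle; the templates, the similarity measure, and the threshold are assumptions made for illustration, not any vendor's actual algorithm.

```python
# Illustrative sketch only: biometric identification reduced to its core idea.
# A trait (face, iris, voice) is turned into a numeric template and compared
# with stored templates. The values below are invented toy data.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored templates (in reality: high-dimensional vectors).
enrolled = {
    "person_A": [0.12, 0.80, 0.35, 0.44],
    "person_B": [0.90, 0.10, 0.05, 0.30],
}

def identify(probe: list[float], threshold: float = 0.95) -> str | None:
    """Return the best-matching identity if its similarity exceeds the threshold."""
    best_id, best_score = None, 0.0
    for identity, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

print(identify([0.11, 0.82, 0.33, 0.45]))  # -> person_A
```

In practice, the decisive questions are less about this comparison step and more about who is enrolled in the database, how the threshold is set, and how false matches are handled.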

According to the AI & Big Data Global Surveillance Index, as early as 2022, at least 97 countries worldwide were actively using artificial intelligence for public surveillance purposes. Among them are countries like China, Russia, and the United States.

However, an increasing number of European countries are also testing or partially implementing AI surveillance through pilot projects. These include France, Germany, Poland, the United Kingdom, and Hungary. In these countries, facial recognition technologies are particularly common at critical infrastructure points such as train stations and airports. Their use is intended to ensure state security and support law enforcement agencies.

In Hungary, three legal amendments were passed in March 2025 without any public debate. These amendments criminalise LGBTQ+ demonstrations and significantly expand the use of biometric surveillance. Since 15 April 2025, the new laws have been in force, allowing the use of facial recognition technologies even for minor infractions and during peaceful assemblies.

Previously, such technologies were only permitted in Hungary for criminal offences that could lead to imprisonment. With the new changes, individuals can now be identified even for minor violations, such as crossing the street at a red light.

Hungary's legal changes violate the EU AI Act and significantly restrict the rights of demonstrators. According to civil society analyses, Hungarian authorities use camera images that are automatically matched against government databases in real or near-real time. This allows individuals to be identified within seconds, even for petty offences.
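Why identification "within seconds" is technically plausible can be shown with another small sketch: once biometric templates are stored as a matrix, comparing one camera image against a large database is a single bulk computation. The database size and data below are invented; this is an illustration of scale, not a description of the Hungarian system.

```python
# Minimal sketch (not the Hungarian system): a single camera frame compared
# against an entire template database in one vectorised operation.
# Sizes and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
NUM_RECORDS, DIM = 100_000, 128            # hypothetical database of 100,000 templates

# Pretend database: one unit-length template vector per enrolled person.
database = rng.normal(size=(NUM_RECORDS, DIM)).astype(np.float32)
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A "new camera image" of person 42: their template plus a little noise.
probe = database[42] + rng.normal(scale=0.02, size=DIM).astype(np.float32)
probe /= np.linalg.norm(probe)

scores = database @ probe                  # cosine similarity against every record at once
best = int(np.argmax(scores))
print(best, round(float(scores[best]), 3)) # -> 42 and a similarity close to 1.0
```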

This level of surveillance poses a serious risk: people may choose not to attend protests out of fear of being identified and punished, and thus may no longer fully exercise their rights to freedom of expression and assembly. This directly contradicts the aims of the EU AI Act and the EU Charter of Fundamental Rights.

Specific technologies and applications should be avoided altogether where human rights-compliant regulation is not possible. Both industry and States must be held accountable, including for their economic, social, environmental, and human rights impacts.

Experts from the Special Procedures of the UN Human Rights Council

The situation in Hungary has made it clear to the European Commission that it is time to draw a firm line and reach a consensus on how seriously privacy and fundamental rights are to be protected in the digital age.

The European Union's AI Act

To counter the potential risks of artificial intelligence, the European Union adopted the AI Act, which was published in the EU's Official Journal on 12 July 2024. This regulation defines the legal framework for the use of artificial intelligence. The aim of the AI Act is to promote human-centred and trustworthy AI while safeguarding the health, safety, and fundamental rights of citizens.

Among the prohibited practices is the use of AI systems for real-time remote biometric identification of individuals in publicly accessible spaces for law enforcement purposes. However, there are specific exceptions where the use of AI is still permitted:

  • For the targeted search of specific victims of abduction, human trafficking, or sexual exploitation, as well as for locating missing persons
  • To prevent immediate and concrete threats to the life or physical safety of individuals, or to avert actual or foreseeable terrorist attacks
  • To locate and identify a person suspected of having committed a serious criminal offence, for the purposes of law enforcement, investigation, or execution of a criminal penalty

These exceptions pose a risk and could pave the way for legitimising the use of such systems. The following guide outlines a human rights-based approach to sustaining resistance against biometric mass surveillance practices, both now and in the future.

Link: How to fight Biometric Mass Surveillance after the AI Act: A legal and practical guide - European Digital Rights (EDRi)

In addition to the AI Act, the European Union has the General Data Protection Regulation (GDPR), which provides a strong legal framework to protect personal data and individual privacy. It classifies biometric data as highly sensitive, meaning its collection and use are subject to strict conditions. The GDPR plays a key role in preventing unlawful surveillance and ensuring that technologies like facial recognition respect fundamental rights.

Dr. Jacek Pyżalski, a professor at Adam Mickiewicz University in Poznań, also reflects on the impact of AI surveillance and the ethical questions it raises:


In Poland, there is a regulation that if there is surveillance, it must be clearly marked — that it exists, that it’s functioning. And most of the surveillance in Poland doesn’t have artificial intelligence tools integrated into it. Some parts probably do. And now, AI is being used all over the world, because the European Union, for instance, uses AI-powered surveillance at airports — for example. Everywhere. So I think it’s kind of inevitable, and it’s basically appearing everywhere.

Is it legal? Well, I don’t know the exact regulations, so I don’t want to say whether it’s legal or not, because I’d have to be a lawyer and know the legal provisions. But I think it is definitely illegal in Poland if it’s done secretly — meaning, when it’s not marked that something is being recorded, or when there are no signs indicating the area is under surveillance, then it’s illegal.

But these kinds of tools are used all across Europe, for example, as a means of road monitoring — for example, to measure speed. That’s, you could say, widespread. Poland is no exception here.


I mean, probably… when it's used in a very limited way — for example, to, I don't know, search for criminals or detect dangerous situations — because it is also used for that, in crowds or certain situations, to detect, well, people behaving suspiciously, planting explosive devices, and so on, and so on — then there might be a benefit, that certain actions can be prevented, right? It's all a matter of… the scope of its use, right? What it’s going to be used for and under what kind of control it’s generally implemented.

But like I said, this is kind of an inevitable thing. There’s basically no country that doesn’t use it. And, as I said, Poland has very similar regulations and practices compared to other countries. It doesn’t really differ, as far as I can tell.

The Polish Perspective

Poland has also seen significant developments in the area of AI surveillance. On 24 May 2025, the Polish Parliament passed a new law expanding the use of artificial intelligence in public surveillance. The legislation now permits the use of facial recognition and behavioural analysis by police and municipal authorities.

Furthermore, the law – approved by the government and several allied MPs – allows the use of AI systems for real-time crowd monitoring, movement analysis, and border control in public spaces.

Poland is by no means alone in expanding AI for public security. Other countries, such as Romania or, as mentioned earlier, Hungary, are also extending laws and applications in this area. These measures are often introduced under the guise of increasing efficiency at major events or addressing security concerns. However, the Polish law provides for neither judicial oversight nor independent parliamentary control over how the new powers are used.

The new provisions are set to come into force by the end of 2025. Human rights organisations have criticised the lack of accountability and transparency, while the Polish opposition has condemned the fast-tracked process and absence of public consultation.



TIMELINE:

This timeline outlines Poland’s evolving stance on the EU AI Act. From the initial stakeholder consultations to the proposal of a dedicated supervisory body, Poland has taken a proactive role in shaping its national approach to AI governance. The timeline highlights important political developments, including the country's EU Council Presidency and the involvement of key institutions like the national data protection authority.

We reached out to the Polish Personal Data Protection Office for its view on the use of biometric surveillance in Poland, but unfortunately the office was not available for an interview or a written statement.



ANNA ALBOTH - HUMAN RIGHTS ACTIVIST

As artificial intelligence becomes an ever more powerful tool in the hands of governments, concerns about mass surveillance are no longer theoretical; they are urgent. Few capture the stakes of this technological shift better than Anna Alboth, a leading voice in the fight for human rights.

Anna Alboth first gained international recognition as the initiator of the Civil March for Aleppo – a peace march on foot from Berlin to Aleppo, which took place from December 2016 to August 2017.



In 2018, she was nominated for the Nobel Peace Prize for this effort. For many years, she has been active in various human rights initiatives both in Poland and internationally. She currently works as the Global Media Officer at Minority Rights Group International.

[Audio interview with Anna Alboth. Sound problems may occur in the recording.]

About Us

We would like to thank all our interview partners, as well as the organisers and partners of the International School of Multimedia Journalism, who made our stay in Warsaw and this project possible. A special thanks also goes to our mentor Oleksandra Iwaniuk for her support and assistance during the production!

Anastasiia Ordynets - Ukraine

Anja Bauer - Austria

David Lemser - Denmark

Lea Rebenschütz - Austria
