Biometric and Behavioural Mass Surveillance in EU Member States
Report for the Greens/EFA in the European Parliament
Draft v.3
01/10/2021
Francesco Ragazzi
Elif Mendos Kuskonmaz
Ildikó Plájás
Ruben van de Ven
Ben Wagner
TABLE OF CONTENTS

1.1 Objectives of the report
1.2 The international context
1.4 Four positions in the policy debates
1.5 Lack of transparency and the stifling of public debate
1.6 Scope and working definitions

PART I: OVERVIEW OF EUROPEAN PRACTICES

CHAPTER 2. Technical overview
2.1 Remote Biometric Identification and classification: defining key terms
2.2 Detection vs recognition
2.3 Facial Recognition: verification/authentication vs identification
2.4 Forensic (ex-post) vs Live Facial Recognition
2.5 Other systems: gait recognition, emotion recognition
2.6 How does image-based remote biometric identification work?
2.7 Technical limits, problems, and challenges of facial recognition

CHAPTER 3. Overview of deployments in Europe
3.3 Remote Biometric Identification

4.2 EU Secondary Law: GDPR & LED
4.3 EU Soft law: Convention 108+

CHAPTER 5. Main political issues and debates
5.1 The emergence of remote biometric identification as a policy issue
5.2 Four positions in the policy debates
5.3 EU Commission Proposal on the Regulation for the Artificial Intelligence Act

CHAPTER 6. Facial Recognition cameras at Brussels International Airport (Belgium)
6.1 The Zaventem pilot in the context of Face Recognition Technology in Belgium
6.2 Legal bases and challenges
6.3 Mobilisations and contestations
6.4 Effects of the technologies

CHAPTER 7. The Burglary Free Neighbourhood in Rotterdam (Netherlands)
7.1 Detection and decision-making in the “Burglary free neighbourhood” Fieldlab
7.2 Legal bases and challenges
7.3 Mobilisations and contestations
7.4 Effects of the technologies

CHAPTER 8. The Safe City Projects in Nice (France)
8.1 The various facets of the “Safe city” project in Nice
8.2 Legal bases and challenges
8.3 Mobilisations and contestations
8.4 Effects of the technologies

CHAPTER 9. Facial Recognition in Hamburg, Mannheim & Berlin (Germany)
9.1 RBI Deployments in Germany
9.2 Legal bases and challenges
9.3 Mobilisations and contestations
9.4 Effects of the technologies: normalising surveillance

CHAPTER 10. The Dragonfly project (Hungary)
10.1 Remote Biometric Identification in Hungary
10.2 Legal bases and challenges
10.3 Mobilisations and contestations
10.4 Effects of the technologies

CHAPTER 11. Recommendations
11.3 Decisions of National Courts

TABLE OF FIGURES
Figure 1. EU countries’ use of FRT for forensic applications
ACRONYMS
Acronym | Meaning
---|---
ABIS | Automated Biometric Identification Systems
ACLU | American Civil Liberties Union
ADM | Automated Decision-Making (System)
AFIS | Automated Fingerprint Identification System
AI | Artificial Intelligence
ANPR | Automated Number Plate Recognition
API | Application Programming Interface
AWS | Amazon Web Services
BDAS | Biometric Data Processing System
BDSG | Federal Data Protection Act (Germany)
BKA | Federal Criminal Police Office (Germany)
BKK | Centre for Budapest Transport (Hungary)
BPI | Public Investment Bank (France)
BPOL | German Federal Police
CATCH | Central Automatic TeChnology for Recognition of Persons (Netherlands)
CBIS | Central Biometric Information System (Czechia)
CCTV | Closed Circuit Television
CGT | General Labour Confederation (France)
CJEU | Court of Justice of the European Union (EU)
CNIL | National Commission for Informatics and Freedoms (France)
COC | Supervisory Body for Police Information (Belgium)
CoE | Council of Europe
COCO | Common Objects in Context (Dataset)
COVID | Coronavirus Disease
CSU | Centre for Urban Supervision (France)
DEP | Digital European Program
DITSS | Dutch Institute for Technology, Safety & Security
DPA | Data Protection Authority
EC | European Commission (EU)
ECtHR | European Court of Human Rights
EDE | Criminal identification database (Austria)
EDPB | European Data Protection Board (EU)
EDPS | European Data Protection Supervisor (EU)
EDS | European Data Strategy
EEA | European Economic Area
EPP | European People’s Party
EU | European Union
FRA | Fundamental Rights Agency (EU)
FRT | Facial Recognition Technology
FRVT | Face Recognition Vendor Test
GDPR | General Data Protection Regulation (EU)
GMO | Genetically Modified Organism
HCLU | Hungarian Civil Liberties Union (Hungary). See “TASZ”
HD | High Definition
HDR | Habitoscopic Data Register
HKR | Home Quarantine App (Hungary)
IARPA | Intelligence Advanced Research Projects Agency (USA)
ID | Identification
IFRS | Interpol Face Recognition System
IKSZR | Integrated Traffic Management and Control System (Hungary)
INCLO | International Network of Civil Liberties Organisations
INPOL | Criminal Case Management System (Germany)
KAK | Governmental Data Centre (Hungary)
KDNP | Christian Democratic People's Party (Hungary)
LED | Law Enforcement Directive (EU)
LFP | Law on the Function of Police (Belgium)
LGBTQ | Lesbian, Gay, Bisexual, Transgender, Queer
LIDAR | Light Detection and Ranging
LPA | Airport Police (Belgium)
LQDN | La Quadrature du Net (France)
MIT | Massachusetts Institute of Technology
MRAP | Movement against racism and for friendship between peoples (France)
NAIH | Hungarian National Authority for Data Protection and Freedom of Information
NBIS | National Biometric Identification System (Romania)
NGO | Non-Governmental Organisation
NIST | National Institute of Standards and Technology (USA)
NISZ | National Infocommunication Services (Hungary)
PARAFE | Rapid passage at the external borders (France)
PPM | Pixels Per Meter
RBI | Remote Biometric Identification
RETU | Registered persons identifying features database and Aliens database (Finland)
RGB | Red, Green, Blue
SIS | Schengen Information System
SSNS | Secret Service for National Security (Hungary)
TAJ | Criminal case history database (France)
TASZ | Hungarian Civil Liberties Union
TELEFI | Towards the European Level Exchange of Facial Images (EU Project)
UAVG | GDPR Implementation Act (Germany)
UK | United Kingdom
UN | United Nations
UNHRC | United Nations Human Rights Council
US(A) | United States of America
VGG | Visual Geometry Group (Dataset)
VMD | Video Motion Detection
VOC | Visual Object Classes (Pascal VOC)
YOLO | You Only Look Once (Algorithm)
Since the widespread adoption of neural network algorithms in 2012, artificial intelligence applied to the field of security has steadily grown into a political, economic, and social reality. As examples from Singapore, the UK, South Africa, or China demonstrate, the image of a digital society of control, in which citizens are monitored through algorithmically processed audio and video feeds, is becoming a tangible possibility in the European Union.
Through a set of “pilot projects”, private and public actors – supermarkets, casinos, city councils, border guards, local and national law enforcement agencies – are increasingly deploying a wide array of “smart surveillance” solutions. Among them is remote biometric identification: security mechanisms “that leverage unique biological characteristics” such as fingerprints, facial images, iris or vascular patterns to “identify multiple persons’ identities at a distance, in a public space and in a continuous or ongoing manner by checking them against data stored in a database” (European Commission 2020b, 18). European institutions have reacted with a series of policy initiatives in recent years but, as we will show in this report, if left unchecked, remote biometric identification technologies can easily become biometric mass surveillance.
Among technologies of remote biometric identification, facial recognition has been at the centre of public debate. The foregrounding of this specific use case of computer vision has allowed concerned actors to raise awareness of the dangers of artificial intelligence algorithms applied to biometric datasets. But it has also generated confusion. The perception that facial recognition is a single type of technology (i.e., an algorithm “that recognises faces”) has obscured the broad range of applications of “smart technologies” within very different bureaucratic contexts: from the “smart cities” live facial recognition of video feeds deployed for the purpose of public space surveillance, to the much more specific, on-the-spot searches by law enforcement for the purpose of carrying out arrests or forensic investigations.
The disentanglement and specification of each of these uses is important, if only because each distinct technological arrangement between sensing devices (cameras, microphones), datasets and algorithmic processing tools allows for radically different applications, and thus can have a different impact on European citizens’ fundamental rights. As the recent communication of the European Commission (2021) stated, not all systems and not all applications are equally threatening for our democratic freedoms: some bear too much risk of infringing our fundamental rights – and therefore should never be allowed; some are “high risk” applications that can take place in certain circumstances with very clear safeguards; and some are more mundane uses of the technologies that require less attention. The ethical, political, and legal assessment of these levels of danger can therefore not be separated from a detailed understanding of how these technologies work. The limitation, of course, is that while we know what technologies are theoretically available to public actors, the detail of their characteristics is often hidden from view.
Objectives of the report
The aim of this report is thus to establish a problematised overview of what we know about what is currently being done in Europe when it comes to remote biometric identification, and to assess in which cases we could potentially fall into forms of biometric mass surveillance. The report will thus answer the following questions: What types of technologies are being used and how? In what context? By whom are these technologies used and to what aim? What types of actors are involved? What types of consequences does the use of those technologies entail? What legal basis and framework are applied to the use of those technologies? What are the forms of mobilisation and contestation against these uses?
In the rest of this introduction, we locate the political context for this study, including the voices that have called for a moratorium or a ban of all technologies that are associated with “biometric mass surveillance”. We then specify the objectives, scope, methodology, some working definitions and outline the remaining chapters.
The international context
The concern for the uncontrolled deployment of remote biometric identification systems emerges in a context characterised by the development of these technologies in authoritarian regimes; the development of controversial “pilot” projects as part of “smart city” projects in Europe; revelations about controversial privacy practices of companies such as Clearview AI; and, finally, the structuring of a US and EU debate around some of the key biases and problems these systems entail.
In 2013, the Chinese authorities officially revealed the existence of Skynet, a large system of mass surveillance involving more than 20 million cameras, which had been under construction since 2005. While the cameras were aimed at the general public, more targeted systems were deployed in provinces such as Tibet and Xinjiang, where political groups contest the authority of Beijing. In 2018, the surveillance system became coupled with a system of social credit, and Skynet became increasingly connected to facial recognition technology (Ma 2018; Jiaquan 2018). By 2019, it was estimated that Skynet had reached 200 million face-recognition-enabled CCTV cameras (Mozur 2018).
The intrusiveness of the system, and its impact on fundamental rights, is best exemplified by its deployment in the Xinjiang province. The provincial capital, Urumqi, is covered with checkpoints and identification stations. Citizens must submit to facial recognition ID checks in supermarkets, hotels, train stations, highway stations and several other public spaces (Chin and Bürge 2017). The information collected through the cameras is centralised and matched against other biometric data such as DNA samples and voice samples. This allows the government to attribute trustworthiness scores (trustworthy, average, untrustworthy) and thus to generate a list of individuals who can become candidates for detention (Wang 2018).
European countries’ deployments are far from the Chinese experience. But the companies involved in China’s pervasive digital surveillance network (such as Tencent, Dahua Technology, Hikvision, SenseTime, ByteDance and Huawei) are exporting their know-how to Europe in the form of “safe city” packages. Huawei is one of the most active in this regard. On the European continent, the city of Belgrade has, for example, deployed an extensive communication network of more than 1,000 cameras which collect up to 10 body and facial attributes (Stojkovski 2019). The cameras, deployed on poles, at major traffic crossings and in a large number of public spaces, allow the Belgrade police to monitor large parts of the city centre, collect biometric information and communicate it directly to police officers deployed in the field. Belgrade has the most advanced deployment of Huawei’s surveillance technologies on the European continent, but similar projects are being implemented by other corporations – including the European companies Thales, Engie Ineo and Idemia – in other European cities, and many “Safe City” deployments are planned in EU countries such as France, Italy, Spain, Malta, and Germany (Hillman and McCalpin 2019). Furthermore, contrary to the idea that China is the sole exporter of remote biometric identification technologies, EU companies have substantially developed their exports in this domain over recent years (Wagner 2021).
The turning point of public debates on facial recognition in Europe was probably the Clearview AI controversy in 2019-2020. Clearview AI, a company founded by Hoan Ton-That and Richard Schwartz in the United States, maintained a relatively secret profile until a New York Times article revealed in early 2020 that it was selling facial recognition technology to law enforcement. In February 2020, it was reported that the client list of Clearview AI had been stolen, and a few days later the details of the list were leaked (Mac, Haskins, and McDonald 2020). To the surprise of many in Europe, in addition to US government agencies and corporations, it appeared that the Metropolitan Police Service (London, UK), as well as law enforcement agencies from Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Serbia, Slovenia, Spain, Sweden, and Switzerland, were on the client list. The controversy grew larger as it emerged that Clearview AI had (semi-illegally) harvested a large number of images from social media platforms such as Facebook, YouTube and Twitter in order to constitute the datasets against which clients were invited to carry out searches (Mac, Haskins, and McDonald 2020).
The news of the hacking strengthened a push-back movement against the development of facial recognition technology by companies such as Clearview AI, as well as its use by government agencies. In 2018, Massachusetts Institute of Technology (MIT) scholar and Algorithmic Justice League founder Joy Buolamwini, together with Timnit Gebru, had published the study Gender Shades (Buolamwini and Gebru 2018), in which they assessed the racial bias in the face recognition datasets and algorithms used by companies such as IBM and Microsoft. Buolamwini and Gebru found that the algorithms generally performed worse on darker-skinned faces, and in particular on darker-skinned females, with error rates up to 34% higher than for lighter-skinned males (Najibi 2020). IBM and Microsoft responded by amending their systems, and a re-audit showed less bias. Not all companies responded equally: Amazon’s Rekognition system, which was included in the follow-up study, continued to show a 31% lower accuracy rate for darker-skinned females. The same year, the ACLU conducted another key study on Amazon’s Rekognition, running pictures of members of Congress against a dataset of law enforcement mugshots: 28 members of Congress, disproportionately people of colour, were incorrectly matched (Snow 2018). Activists engaged lawmakers: in 2019, the Algorithmic Accountability Act was introduced, which would allow the Federal Trade Commission to regulate private companies’ uses of facial recognition. In 2020, several companies, including IBM, Microsoft, and Amazon, announced a moratorium on the development of their facial recognition technologies. Several US cities, including Boston, Cambridge (Massachusetts), San Francisco, Berkeley, and Portland (Oregon), have also banned their police forces from using the technology.
The European context
In Europe, a similar set of developments took place around Artificial Intelligence, both at the member state level and at the EU level (Andraško et al. 2021, 3). The first intervention dates from 2017, with the European Parliament Resolution of 16 February to the Commission on Civil Law Rules on Robotics (European Parliament 2017). It was followed by two statements and advisory documents: The Age of Artificial Intelligence, published by the European Political Strategy Centre, and a Statement on Artificial Intelligence, Robotics and Autonomous Systems (March 2018), published by the European Group on Ethics in Science and New Technologies (Andraško et al. 2021, 3). At the beginning of 2018, the European Economic and Social Committee issued three opinions on the deployment of AI in practice (European Economic and Social Committee 2018a, 2018b, 2018c). All these documents addressed the need for the EU to understand AI uses and embedded them in the various ethical and political frameworks created by EU institutions. The Council of Europe had also begun its activities on the matter: in 2017, the Parliamentary Assembly of the Council of Europe adopted a Recommendation on Technological Convergence, Artificial Intelligence and Human Rights, pointing towards the need to establish common guidelines for the use of artificial intelligence in court (Parliamentary Assembly of the Council of Europe 2017; Gonzalez Fuster 2020, 45).
Legislative activity accelerated in 2018. The European Commission (2018a) published the communication Artificial Intelligence for Europe, in which it called for a joint legal framework for the regulation of AI-related services. Later in the year, the Commission (2018b) adopted a Coordinated Plan on Artificial Intelligence with similar objectives. It called on EU member states to adopt national strategies on artificial intelligence meeting EU requirements, and it aimed to mobilise 20 billion euros each year for investment in AI development (Andraško et al. 2021, 4).
In 2019, the Council of Europe Commissioner for Human Rights published a Recommendation entitled Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights, which describes several steps for national authorities to take in order to maximise the potential of AI while preventing or mitigating the risk of its misuse (Gonzalez Fuster 2020, 46). The same year, the European Union’s High-Level Expert Group on Artificial Intelligence (AI HLEG) adopted the Ethics Guidelines for Trustworthy Artificial Intelligence, a key document for the EU strategy on bringing AI within ethical standards (Nesterova 2020, 3).
In February 2020, the new European Commission went one step further in regulating matters related to AI, adopting the digital agenda package – a set of documents outlining the strategy of the EU in the digital age. Among these documents, the White Paper on Artificial Intelligence: a European approach to excellence and trust captured most of the Commission’s intentions and plans.
Technical overview
In order to grasp the various facets of remote biometric identification that could potentially lead to biometric mass surveillance, this section provides an overview of the currently available technologies: how they work, what their limitations are, and where and by whom they are deployed in the European Union.
Remote Biometric Identification and classification: defining key terms
Overview of deployments in Europe
When looking at the map of actual deployments of image- and sound-based security technologies in Europe, Remote Biometric Identification is, at the time of writing, still mostly an experimental and localised application. It coexists with a broad range of algorithmic processing of security images, in a spectrum that runs from individual, localised authentication systems, to generalised law enforcement uses of authentication, to what can properly be defined as Biometric Mass Surveillance. Distinguishing the various characteristics of these deployments is not only important to inform the public debate; it also helps focus the discussion on the most problematic uses of the technologies, and it highlights the risk of function creep: systems deployed for one use that respects EU fundamental rights can in some cases easily be upgraded to function as biometric mass surveillance.
The European map of image and sound-based security technologies can be divided into two broad categories: authentication applications and surveillance applications. Remote Biometric Identification is a sub-category of the latter.
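To make the distinction concrete, the sketch below contrasts the two matching logics in Python. It is a minimal illustration under stated assumptions: embed() is a hypothetical stand-in for a trained face-embedding network, and the threshold values and function names are ours, not a reference to any deployed system.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding network. Real systems use
    trained deep networks; here we flatten and L2-normalise pixels so the
    sketch runs end to end."""
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two unit-normalised templates (1.0 = identical)."""
    return float(np.dot(a, b))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.9) -> bool:
    """Authentication (1:1): does this face match ONE claimed identity?"""
    return similarity(embed(probe), enrolled) >= threshold

def identify(probe: np.ndarray, database: dict, threshold: float = 0.9):
    """Identification (1:N): search ALL enrolled identities for a match.
    This is the mode on which remote biometric identification relies."""
    if not database:
        return None
    template = embed(probe)
    scores = {name: similarity(template, t) for name, t in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The structural difference carries the policy point: verification compares against a single template that a person has enrolled, while identification searches an entire database for every probe image, which is why the latter scales so readily towards surveillance of whole populations.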
Legal bases
The deployment of remote biometric identification in public spaces can gravely affect the exercise of a range of fundamental rights (FRA 2019), such as the right to peaceful assembly and association (UNHRC 2019, para. 57) and the rights to liberty and security. Because the use of biometric tools for law enforcement purposes in public spaces involves the collection, retention and processing of biometric data, a key question about their legal permissibility arises in relation to the obligations flowing from the fundamental rights to privacy and personal data protection. This section will therefore consider remote biometric identification against the protection offered by the EU fundamental rights framework for the rights to privacy and personal data protection, as well as by EU data protection legislation.
EU Fundamental Rights Framework for the Right to Privacy and the Right to Protection of Personal Data
The scope of the fundamental right to protection for RBI
Article 7 of the EU Charter of Fundamental Rights (Charter) sets out national and EU legislators’ obligations to guarantee individuals’ right to private life, family life, and communications (the right to privacy) under EU law. The right to privacy can also be found in Article 8 of the European Convention on Human Rights (ECHR), the scope of which has evolved over the years to cover issues relating to the processing of personal data. Because Article 7 of the Charter closely mirrors Article 8 ECHR, its scope must be interpreted in line with the latter and with its interpretation by the European Court of Human Rights (ECtHR), pursuant to Article 52(3) of the Charter. The Charter enshrines a separate right to protection of personal data in its Article 8, which is “distinct from Article 7 of the Charter” (C-203/15, Tele2, para. 129).
Biometric surveillance tools interfere with the fundamental rights to privacy and personal data protection as enshrined in each of these legal sources, because they collect, retain, process and use personal data, including biometric data – an intrinsically special category which is, as discussed below, personal data relating to the physical, physiological or behavioural characteristics of an individual that allow their unique identification (see section 4.2.1). Notably, it may not be just physical biometric data such as fingerprints (S and Marper v UK; C-291/12, Schwarz) or facial images (Gaughran v UK) that benefit from the rights to privacy and personal data protection as enshrined in the ECHR and EU law. For example, the ECtHR has adopted an expansive approach in recognising the protective scope of Article 8 ECHR (S and Marper v UK, para 67), which would afford protection to different categories of biometric data, including behavioural biometric data such as one’s way of movement or voice (Venier and Mordini, 2010).
Privacy and data protection in public space and the risk of mass surveillance
The use of the wide range of biometric data discussed above engages individuals’ rights to privacy and data protection even where the data are captured and used in public spaces, while individuals go about their public life. The case law of the ECtHR (PG and JH v UK; Peck v UK) and of the Court of Justice of the European Union (CJEU) (Opinion 1/15) shows that both courts have afforded privacy protection to information that is not inherently private. In fact, performing biometric surveillance in public spaces is inherently intrusive and amounts to mass surveillance, which in this context can simply be characterised as the monitoring, tracking, or processing of personal data of individuals indiscriminately and in a generalised manner, without prior criminal suspicion (FRA 2018). Biometric surveillance in public spaces relies on the generalised and indiscriminate collection, retention, use and sharing of individuals’ biometric data. This is the case even if the intended purpose of the biometric surveillance is targeted, because in order to identify people on a watchlist in a crowd, every person in that particular space must be analysed and compared with the watchlist (Houwing 2020).
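A minimal sketch of this last argument, reusing the hypothetical embed() and similarity() helpers from the earlier example; detect_faces() is a placeholder for a face-detection step, not any specific product.

```python
def detect_faces(video_frame):
    """Placeholder face detector: assume the frame has been pre-cropped into
    one face image per person visible in it."""
    return video_frame

def watchlist_scan(video_frame, watchlist: dict, threshold: float = 0.9):
    """Live watchlist matching over one frame of public-space footage."""
    alerts = []
    faces = detect_faces(video_frame)      # every passer-by is detected...
    for face in faces:
        template = embed(face)             # ...and biometrically processed,
        for name, enrolled in watchlist.items():
            if similarity(template, enrolled) >= threshold:
                alerts.append(name)
    # Even if the watchlist holds only a handful of suspects, the biometric
    # data of everyone present has been captured and compared against it.
    return alerts
```

The loop structure is the legal point: the number of alerts may well be zero, but the number of people biometrically analysed equals the number of people in the space.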
The grave consequences of this type of indiscriminate and generalised collection of personal data for the fundamental rights of individuals can be found across the case law of the ECtHR and the CJEU. The ECtHR has repeatedly warned that covert surveillance tools must not be used to undermine or even destroy democracy on the grounds of defending it (Klass and others v Germany, para 49). Particularly in considering the lawfulness of the collection of biometric data, the ECtHR recognised in S and Marper v UK that the use of biometric data that would allow identification of an individual, and would carry the potential to deduce personal data classified as sensitive, such as ethnic origin, would make the people concerned fundamentally vulnerable to stigmatisation and discrimination (paras 122-126). Because of the heightened level of protection afforded to such data, the ECtHR found that the generalised and indiscriminate collection and retention of biometric data did not comply with the ECHR requirements, as it amounted to a disproportionate interference with the right to privacy and thus constituted a violation of Article 8 ECHR.
The CJEU considered in Digital Rights Ireland (Joined Cases C-293/12 and C-594/12, para 37) as well as in Tele2 (C-203/15, para 100) that EU law precluded the mass retention of traffic and location data for law enforcement purposes, and only allowed for targeted retention of said data. The deployment of biometric surveillance in public spaces must thus be subject to strict scrutiny; in light of the case law of both courts, EU fundamental rights law as well as the ECHR preclude deployments of biometric surveillance that lead to mass surveillance for law enforcement purposes in public spaces.
“Biometric data” in GDPR & LED
The General Data Protection Regulation (GDPR) provides the rules relating to the processing of personal data for all purposes except where the processing is carried out for the prevention, investigation, detection, or prosecution of criminal offences, including the safeguarding against and the prevention of threats to public safety, pursuant to its Article 2(2)(d). The Law Enforcement Directive (LED) complements the GDPR in this area, as it applies specifically to the processing of personal data by competent authorities for those law enforcement purposes, pursuant to its Article 1.
Both instruments provide a specific framework for the processing of special categories of data (formerly known as “sensitive data”), including biometric data, which is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data” under Article 4(14) of the GDPR and Article 3(13) of the LED. The definition thus recognises expanding categories of biometric data that can capture and measure human characteristics, as it covers physical and physiological as well as behavioural biometric data. Notably, biometric data is granted a higher protection than non-sensitive personal data, irrespective of the fact that it may not reveal sensitive information such as racial or ethnic origin, health, or sexual orientation.
Distinguishing personal data and biometric data
Two elements must be present for personal data to constitute biometric data, and thus for its processing to be subject to the specific limitations imposed by the GDPR and the LED.
- “Specific technical processing”. Neither instrument defines the concept of “specific technical processing”, but it should be understood as a special type of processing that captures the digital representation of biometric characteristics (e.g., facial images, fingerprints, voice) (Kindt 2013, 43; Jasserand 2016, 303). On this point, the European Data Protection Board (EDPB, 2019) notes that biometric data are the result of the measurement of physical, physiological, or behavioural characteristics of individuals, and thus the result of this special type of processing is captured by the concept of biometric data. For example, the image of a person captured by video surveillance is personal data, but it would be classified as biometric data once it is subjected to a specific type of processing to deduce the characteristics of that person (Recital 51, GDPR).
- “Unique identification of an individual”. Compared to the definition of personal data, it is unclear whether the element of identification for the purpose of defining biometric data requires a higher threshold (Jasserand 2013, 305-306). Both instruments define personal data broadly, as “any information relating to an identified or identifiable individual”. It has been confirmed both by the former Article 29 Data Protection Working Party (2007) and by the CJEU (C-582/14, Breyer) that personal data is broadly defined to capture the concept of “identifiability”, whereby a person may be identifiable when the data is combined with other available information (including information retained by someone other than the data controller), even if the person is not prima facie identified (paras 39-49).
The element of identification in the definition of biometric data, on the other hand, may suggest that said data must relate to an identified individual: the fact that the person could be identifiable through possible means would not be sufficient for the personal data to be classified as biometric data (Jasserand 2013, 306). The EDPB (2019) supports this view, noting that if a video surveillance system is set to detect the physical characteristics of individuals in order to classify them, as opposed to uniquely identifying them, this processing would not be subject to the framework reserved for the processing of sensitive data. Nevertheless, the data captured might still amount to personal data, irrespective of the fact that it is not subject to any special type of processing.
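The EDPB distinction can be made concrete with a short, hypothetical sketch: the same camera frame is personal data throughout, but only the identification branch performs the “specific technical processing” aimed at unique identification that turns it into biometric data in the strict sense. All function names below are illustrative placeholders of ours.

```python
import numpy as np

def detect_persons(frame: np.ndarray) -> list:
    """Placeholder detector: pretend the whole frame is one person crop."""
    return [frame]

def estimate_age_bracket(crop: np.ndarray) -> str:
    """Placeholder classifier: deduces a characteristic, identifies no one."""
    return "20-39"

def extract_template(crop: np.ndarray) -> np.ndarray:
    """Placeholder for 'specific technical processing' yielding a template
    that allows unique identification."""
    v = crop.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def process_frame(frame: np.ndarray, purpose: str):
    crops = detect_persons(frame)  # images of people: personal data (Recital 51, GDPR)
    if purpose == "classification":
        # Characteristics are deduced but no one is uniquely identified: per
        # the EDPB (2019), not biometric data in the strict sense, although
        # the images remain personal data.
        return [estimate_age_bracket(c) for c in crops]
    if purpose == "identification":
        # Templates enabling unique identification: biometric data, so the
        # stricter GDPR/LED regime for special categories of data applies.
        return [extract_template(c) for c in crops]
```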
Main political issues and debates
The emergence of remote biometric identification as a policy issue
The technological developments in the field of remote biometric identification and the early legislative activity of EU institutions have progressively consolidated four main positions in relation to Remote Biometric Identification: 1) active promotion; 2) support with safeguards; 3) moratorium; and 4) outright ban. In this section we review each of these positions and detail the logic of the arguments on which they are based.
As detailed in the introduction, the European Commission and the European Council have so far generally supported the development of Remote Biometric Identification. In the White Paper on AI, the European Commission (2020b) proposes a set of rules and actions for excellence and trust in AI that guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries. The Commission’s recent draft legislation takes these objectives a step further by proposing to turn Europe into “the global hub for trustworthy Artificial Intelligence (AI)” (European Commission 2021b). Biometric identification, and specifically FRT, has been central to many of these AI developments, ranging from smart city initiatives financed by the EU all the way to the use of video surveillance and FRTs by law enforcement.
The implementation of the GDPR and the LED in the EU and EEA in May 2018 set the scene for wide-ranging contestations over the use of surveillance technologies, specifically facial recognition technologies, in public spaces. A number of influential reports have been published (FRA 2018; FRA 2019; CNIL 2019b; EDRi 2020; Fernandez et al. 2020; González Fuster 2020) and online campaigns launched (e.g., #ReclaimYourFace) to warn about the risks posed by AI and to put pressure on the European Commission to address its impact on safety and fundamental rights. Although many of the issues put forward by these reports reflect an overarching concern with privacy and human rights violations, each organisation uses a different problem definition, ranging from the technical challenges and limitations of AI all the way to the risks involved in the implementation of biometric technologies. As a consequence, they also propose different mitigation strategies, such as promotion with safeguards, a moratorium, or a full ban. In what follows, we present this configuration of mobilisation and contestation.
At the national level, biometric systems for the purposes of authentication are increasingly deployed for forensic applications among law enforcement agencies in the European Union. As we elaborate in Chapter 3, 11 of the 27 member states of the European Union are already using facial recognition against biometric databases for forensic purposes, and 7 additional countries are expected to acquire such capabilities in the near future. The map of European deployments of Biometric Identification Technologies (see Chapter 3) bears witness to a broad range of algorithmic processing of security images, in a spectrum that goes from individual, localised authentication systems, to generalised law enforcement uses of authentication, to Biometric Mass Surveillance.
Several states that have not yet adopted such technologies seem inclined to follow the trend and push further. Belgian Minister of the Interior Pieter De Crem, for example, recently declared himself in favour of the use of facial recognition not only for judicial inquiries but also for live facial recognition, a much rarer use:
"The use of facial recognition can mean increased efficiency for security services […] The police are interested in using this technology in several of their missions. First of all, within the framework of the administrative police, with the aim of guaranteeing the security of a closed place accessible to the public, it would allow them to immediately intercept a person who is known in the police databases and who constitutes a danger for public security; but this technology can also be used within the framework of the judicial police, with the aim of controlling, during an investigation, if the suspect was present at the scene of the crime at the time when the punishable act was committed". (De Halleux 2020)
+"The use of facial recognition can mean increased efficiency for security services […] The police are interested in using this technology in several of their missions. First of all, within the framework of the administrative police, with the aim of guaranteeing the security of a closed place accessible to the public, it would allow them to immediately intercept a person who is known in the police databases and who constitutes a danger for public security; but this technology can also be used within the framework of the judicial police, with the aim of controlling, during an investigation, if the suspect was present at the scene of the crime at the time when the punishable act was committed".
Such outspoken advocates of the use of RBI constitute an important voice, but they have so far found little echo in mainstream EU discussions.
Moratorium
On 20 January 2021, the European Parliament passed a resolution on artificial intelligence in which it invites the Commission “to assess the consequences of a moratorium on the use of facial recognition systems, and, depending on the results of this assessment, to consider a moratorium on the use of these systems in public spaces by public authorities and in premises meant for education and healthcare, as well as on the use of facial recognition systems by law enforcement authorities in semi-public spaces such as airports, until the technical standards can be considered fully fundamental rights-compliant, the results derived are non-biased and non-discriminatory, and there are strict safeguards against misuse that ensure the necessity and proportionality of using such technologies” (European Parliament 2021).
Another authority calling for a moratorium on automated recognition technologies in public spaces is the European Data Protection Supervisor (EDPS), the independent supervisory authority responsible for monitoring the processing of personal data by EU institutions and bodies. In its 2020-2024 Strategy “Shaping a Safer Digital Future”, released on 30 June 2020, the EDPS stresses that it is committed to supporting the idea of a moratorium on “the deployment, in the EU, of automated recognition in public spaces of human features, not only of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, so that an informed and democratic debate can take place” (EDPS 2020).
The EDPS was also among the first to react to the draft Artificial Intelligence Act of the European Commission. While it welcomed the EU’s leadership in aiming to ensure that AI solutions are shaped according to the EU’s values and legal principles, it expressed regret that its call for a moratorium on the use of remote biometric identification systems – including facial recognition – in publicly accessible spaces had not been addressed by the Commission. A stricter approach is necessary, it argues, because “remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives” (EDPS 2021). As mentioned below, shortly after this first reaction, the EDPS, together with the European Data Protection Board (EDPB 2021a), called for a general ban on the use of remote biometric systems.
A call for a moratorium, particularly on facial recognition systems, can also be found in Council of Europe documents. The Guidelines on Facial Recognition (Council of Europe 2021), one of the instruments supplementing Convention 108+, call for a moratorium on live facial recognition technologies (5) and lay out certain conditions for the use of facial recognition technologies by law enforcement authorities (6). For example, the Guidelines call for clear parameters and criteria when creating databases such as watchlists, in light of specific, legitimate, and explicit law enforcement purposes (ibid.).
Ban
Finally, a number of EU political parties and EU and national NGOs have argued that there is no acceptable deployment of RBI, because the danger of Biometric Mass Surveillance is too high. Such actors include organisations such as EDRi, La Quadrature du Net, Algorithm Watch and the French Défenseur des Droits.
In the European Parliament, the European Greens have most vocally promoted the position of a ban, and have gathered support across party lines. In a letter to the European Commission dated 15 April 2021, 40 MEPs from the European Greens, the Party of the European Left, the Party of European Socialists, Renew Europe, a few non-attached MEPs and one member of the far-right party Identity and Democracy expressed their concerns about the European Commission’s proposal for the AI Regulation, which had leaked a few days earlier. As they argued:
People who constantly feel watched and under surveillance cannot freely and courageously stand up for their rights and for a just society. Surveillance, distrust and fear risk gradually transforming our society into one of uncritical consumers who believe they have “nothing to hide” and - in a vain attempt to achieve total security - are prepared to give up their liberties. That is not a society worth living in! (Breyer et al. 2021)
Taking particular issue with Article 4 and the possible exemptions to the regulation of AI “in order to safeguard public safety”, they urge the European Commission “to make sure that existing protections are upheld and a clear ban on biometric mass surveillance in public spaces is proposed. This is what a majority of citizens want” (Breyer et al. 2021).
European Digital Rights (EDRi), an umbrella organisation of 44 digital rights NGOs in Europe, takes a radical stance on the issue. They argue that the mass processing of biometric data in public spaces creates a serious risk of mass surveillance that infringes on fundamental rights, and they therefore call on the Commission to permanently stop all deployments that can lead to mass surveillance. In their report Ban Biometric Mass Surveillance (2020), they demand that the EDPB and national DPAs “publicly disclose all existing and planned activities and deployments that fall within this remit” (EDRi 2020, 5). Furthermore, they call for a halt to all planned legislation establishing biometric processing, as well as to the funding of all such projects, amounting to an “immediate and indefinite ban on biometric processing”.
La Quadrature du Net (LQDN), one of EDRi’s founding members (created in 2008 to “promote and defend fundamental freedoms in the digital world”), has similarly called for a ban on any present and future use of facial recognition for security and surveillance purposes. Together with a number of other French NGOs monitoring legislation that impacts digital freedoms, as well as other collectives, companies, associations and trade unions, LQDN initiated a joint open letter calling on French authorities to ban any security and surveillance use of facial recognition, due to its uniquely invasive and dehumanising nature. In their letter they point to the fact that in France there is a “multitude of systems already installed, outside of any real legal framework, without transparency or public discussion”, referring, among others, to the PARAFE system and the use of FRTs by civil and military police. As they put it:
“Facial recognition is a uniquely invasive and dehumanising technology, which makes possible, sooner or later, constant surveillance of the public space. It creates a society in which we are all suspects. It turns our face into a tracking device, rather than a signifier of personality, eventually reducing it to a technical object. It enables invisible control. It establishes a permanent and inescapable identification regime. It eliminates anonymity. No argument can justify the deployment of such a technology.” (La Quadrature du Net. et al. 2019)
Another prominent voice asking for a full ban on FRTs is the Berlin-based NGO Algorithm Watch. In its report Automating Society (2020), the NGO similarly calls for a ban on all facial recognition technology that might amount to mass surveillance. Their analysis and recommendations place FRTs in a broader discussion of Automated Decision-Making (ADM) systems. They condemn any use of live facial recognition in public spaces and demand that public uses of FRTs that might amount to mass surveillance be decisively “banned until further notice, and urgently, at the EU level” (Algorithm Watch 2020, 10).
They further demand meaningful transparency, which not only means “disclosing information about a system’s purpose, logic, and creator, as well as the ability to thoroughly analyse, and test a system’s inputs and outputs. It also requires making training data and data results accessible to independent researchers, journalists, and civil society organisations for public interest research” (Algorithm Watch 2020, 11).
Parallel to these reports, various campaigns have proved effective in raising awareness and putting pressure on governmental bodies at both the national and the European level. In May 2020, EDRi launched the #ReclaimYourFace campaign, a European Citizens’ Initiative (ECI) petition that calls for a ban on all biometric mass surveillance practices. The campaign centres on the power imbalances inherent to surveillance. As of May 2021, the campaign had been supported by more than 50,000 individual signatures. #ReclaimYourFace is not the only such campaign, though it is undoubtedly the most visible and influential in the European context. Other similar international initiatives are “Ban the Scan”, initiated by Amnesty International; “Ban Automated Recognition of Gender and Sexual Orientation”, led by the international NGO Access Now; and “Project Panopticon”, launched by the India-based Panoptic Tracker.
In early June, a global coalition of 175 organisations from 55 countries was launched under the hashtag #BanBS, demanding a halt to biometric surveillance practices. Drafted by Access Now, Amnesty International, European Digital Rights (EDRi), Human Rights Watch, Internet Freedom Foundation (IFF), and Instituto Brasileiro de Defesa do Consumidor (IDEC), the open letter has been signed by almost 200 organisations, which call for an outright ban on uses of facial recognition and biometric technologies that enable mass surveillance and discriminatory targeted surveillance:
“These uses of facial and remote biometric recognition technologies, by design, threaten people’s rights and have already caused significant harm. No technical or legal safeguards could ever fully eliminate the threat they pose, and we therefore believe they should never be allowed in public or publicly accessible spaces, either by governments or the private sector.” (Access Now 2021)
In April 2021, the European Commission (2021b) published its proposal for the Regulation on the Artificial Intelligence Act, with the aim of setting out harmonised regulatory rules for Member States on AI-based systems. It responded in part to the many challenges posed by the rapid technological development of AI, as well as to the pressure from watchdogs, regulatory bodies and civil society. If adopted in its current form, the proposed EU Artificial Intelligence Act will have important implications for the use of biometric identification systems for law enforcement purposes.
On the whole, the proposed EU Artificial Intelligence Act lays out rules based on three categories of risk that the use of AI may create: (i) unacceptable risk, for which the use of AI is prohibited (Article 5); (ii) high-risk AI systems, whose use is subject to certain conditions, including an ex-ante conformity assessment (Article 6); and (iii) low or minimal risk, whose use is permitted without restrictions.
Notably for the purpose of this report, the proposed EU Artificial Intelligence Act covers “remote biometric identification systems” defined as “an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified” (Article 3(36)). In this way, the proposed EU Artificial Intelligence Act anticipates covering (AI-based) biometric video surveillance systems. In so doing, it differentiates between the use of “real-time” and “post” remote biometric identification systems in public spaces for law enforcement purposes.
On initial observation, the proposal prohibits the use of “real-time” (live) remote biometric identification systems in public spaces for law enforcement purposes because it classifies them as systems that create an unacceptable risk.
However, Article 5 of the proposed EU Artificial Intelligence Act reads more as heavy regulation than as a prohibition. This is because the use of real-time remote biometric identification systems is prohibited unless it is “strictly necessary” for: (i) a targeted search for specific potential victims of crime, including missing people; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; or (iii) a criminal offence for which a European Arrest Warrant can be issued, provided that it is punishable by a custodial sentence or detention order of a minimum of three years. In determining the use of real-time remote biometric identification systems for one of those purposes, Member States should be subject to appropriate limits in time, space, and target person (Article 5(2)). A court or an independent administrative body should authorise the use of this type of biometric identification system, except in duly justified emergency situations (Article 5(3)). Member States may allow for full or partial use of real-time biometric identification systems in public spaces for law enforcement purposes based on the requirements laid out in Article 5 of the proposed Regulation (Article 5(4)).
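To make the structure of Article 5 explicit, the sketch below encodes the conditions just described as a simple checklist. It is an illustration of the proposal's logic under stated assumptions, not a legal instrument; all names and the encoding are ours.

```python
from dataclasses import dataclass

# The three purposes for which Article 5 of the proposal would permit
# "real-time" RBI in public spaces for law enforcement.
PERMITTED_PURPOSES = {
    "victim_search",    # targeted search for specific potential victims of crime
    "imminent_threat",  # specific, substantial and imminent threat, or terrorism
    "eaw_offence",      # EAW-eligible offence punishable by at least three years
}

@dataclass
class RealTimeRBIUse:
    purpose: str
    strictly_necessary: bool
    limited_in_time_space_and_target: bool  # Article 5(2)
    independent_authorisation: bool         # court or administrative body, Art. 5(3)
    justified_emergency: bool = False       # narrow exception in Art. 5(3)

def permissible(use: RealTimeRBIUse) -> bool:
    """Prohibited by default; permitted only if every condition holds."""
    return (
        use.purpose in PERMITTED_PURPOSES
        and use.strictly_necessary
        and use.limited_in_time_space_and_target
        and (use.independent_authorisation or use.justified_emergency)
    )
```

Written this way, the provision's character is visible at a glance: the default is prohibition, but the exceptions are broad enough that the description of Article 5 as heavy regulation rather than a ban is apt.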
The use of “post” (forensic) remote identification systems for law enforcement purposes, on the other hand, is considered a high-risk AI system, whose developers have the obligation to ensure that the system meets the conditions set out in the proposed EU Artificial Intelligence Act (Chapter 2 and Annex III). As opposed to other high-risk AI systems, whose providers have to conduct internal control checks, post remote identification systems would be subject to third-party conformity assessment.
The above provisions concerning the use of remote biometric identification systems have important implications for the protection of personal data and privacy, as those systems involve the processing of personal data. For this reason, they should be read alongside the rules and obligations set out in the GDPR and the LED. When conducted for law enforcement purposes by competent authorities, remote biometric systems must comply with the relevant data protection legislation as well as with ECHR and Charter requirements, since they involve the processing of sensitive personal data.
Facial Recognition cameras at Brussels International Airport (Belgium)
Belgium is, with Spain, one of the few countries in Europe that has not authorised the use of facial recognition technology, either for criminal investigations or for mass surveillance (Vazquez 2020). This position may nevertheless change in the very near future. Law enforcement is strongly advocating its use, and the current legal obstacles are not likely to hold for very long (Bensalem 2018). The pilot experiment that took place at Zaventem / Brussels International Airport, although aborted, occurred within a national context in which biometric systems are increasingly used and deployed.
Belgium will, for example, soon roll out the new biometric identity card “eID” at the national level, as Minister of the Interior Annelies Verlinden recently announced. The identification document, which relies on the constitution of a broad biometric database and is part of a broader European Union initiative, is being developed in partnership with the security multinational Thales and has already been trialled with 53,000 citizens (Prins 2021; Thales Group 2020).
Municipalities in different parts of the country are experimenting with Automated Number Plate Recognition (ANPR) technology. A smaller number have started deploying “smart CCTV” cameras, which fall just short of using facial recognition technology. The city of Kortrijk has, for example, deployed “body recognition” technology, which uses the walking style or clothing of individuals to track them across the city’s CCTV network. Facial recognition is possible with these systems but has not yet been activated, pending legal authorisation to do so. In the city of Roeselare, “smart cameras” have been installed in one of the shopping streets. Deployed by the telecom operator Citymesh, they could provide facial recognition services, but they are currently used to count and estimate crowds, data which is shared with the police (van Brakel 2020). All emerging remote biometric identification initiatives are, however, pending a reversal of the decision to halt the experiment at Zaventem / Brussels International Airport.
Mobilisations and contestations
Based on this legislative framework, the General Commissioner, in his letter to the COC dated 18 July 2019, justified a deployment without consultation of either the COC or the Belgian DPA on the grounds that:
“although the creation of a technical database for facial recognition is not possible under the current legislation, the use of real-time intelligent technologies other than Automatic Number Plate Recognition (ANPR) is possible under Article 25/3 of the LFP. The legislator has indeed provided that a camera used by the police, regardless of its type, can be equipped with intelligent technology. The introduction of real-time facial recognition is therefore, in our opinion, in accordance with the law.” (Organe de Controle de l'Information Policière 2019, 4)
The COC was not convinced by the arguments of the General Commissioner and concluded that the LFP did not apply. It justified its decision as follows:
“As the case stands, the Regulator is not entirely convinced that the LFP is applicable. It is true that the definition of a "smart camera" is taken in a very broad sense. According to Article 25/2, §1, 3° of the LFP, this term refers to "a camera which also includes components and software which, whether or not coupled with registers or files, can process the images collected autonomously or not". In the explanatory memorandum, ANPR cameras and cameras for facial recognition are mentioned as examples.” (Organe de Controle de l'Information Policière 2019, 4)
It further added that
“The possibility of testing a facial recognition system first raises questions about the exact scope of the processing. When determining the correct legal framework, it is not possible to establish from the outset whether the processing of personal data in the context of research and prosecution is already being considered in the test environment or during a test period - and thus whether the FPA and Title II of the DPA apply. The answer to this question is crucial in order to determine the legal basis, the level of decision making within the police that is entitled to decide to use facial recognition, the nature of the storage medium and the duration of storage, and the level of information security to be observed (operational or not). Secondly, and in the alternative, the Review Body notes that the LFP, if applicable, does describe what falls under the definition of a smart camera, but does not stipulate in what circumstances and under what conditions the use of facial recognition cameras is permitted, let alone on what medium the images can/should be recorded and what data should at least be stored. In the current state of the legislation, the legislator only wanted to regulate the creation of a technical database for ANPR images.” (Organe de Controle de l'Information Policière 2019, 4)
The COC thus counter-argued that, because the current CCTV law was passed with ANPR rather than facial recognition in mind, and facial recognition is permitted only for commercial use (such as passenger check-in), there was no legal basis for setting up a technical database containing biometric information; the system therefore lacked a sound legal footing (7sur7 2019). An interesting technicality of the case is that the “snapshots” generated in the first phase of the system’s workflow were in practice stored for only a fraction of a second. Under the law, however, this still constitutes a biometric database and is thus not allowed (L'Avenir 2019).
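The point about the fleeting “snapshots” can be made concrete with a minimal sketch; this is our reconstruction for illustration, not the vendor's code, and the hash function stands in for a real face-embedding model. Even when each frame is discarded within milliseconds, a biometric template is created and compared against a watchlist, and it is this processing, not the duration of storage, that the law captures.

```python
# Minimal sketch (assumed workflow, not the Zaventem vendor's code).
import hashlib

def extract_face_template(snapshot: bytes) -> bytes:
    # Stand-in for a real face-embedding model; the hash is an assumption
    # used only to keep the example self-contained.
    return hashlib.sha256(snapshot).digest()

def process_frame(frame: bytes, watchlist: dict):
    snapshot = frame                             # retained, if only for milliseconds
    template = extract_face_template(snapshot)   # biometric data is created here
    hit = next((name for name, t in watchlist.items() if t == template), None)
    del snapshot, template                       # deletion does not undo the processing
    return hit

# Hypothetical usage.
watchlist = {"wanted_person": extract_face_template(b"reference image bytes")}
print(process_frame(b"reference image bytes", watchlist))  # -> 'wanted_person'
```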
The reaction of the Belgian Supervisory Body for Police Information shows that a degree of uncertainty about the legal basis for conducting biometric surveillance persists. From a legislative perspective, such a system can easily be re-activated once the grounds for its interruption are no longer present in the law. The current legislative activity seems to point in this direction.
The Burglary Free Neighbourhood in Rotterdam (Netherlands)
In October 2019, the Carlo Collodihof, a courtyard in the Rotterdam neighbourhood of Lombardijen, was equipped with a new kind of streetlamp. The twelve new luminaires did not just illuminate the streets; they were fitted with cameras, microphones, speakers, and a computer connected to the internet. They are part of the so-called Fieldlab Burglary Free Neighbourhood: an experiment in public space with technologies for computer sensing and data processing, aimed at preventing break-ins, robberies, and aggression, increasing the chances of catching perpetrators, and increasing the inhabitants’ sense of safety (Redactie Inbraakvrije Wijk 2019; Kokkeler et al. 2020b). The practical nature of a Fieldlab provides a way to examine concretely how the various technologies come together, and how they fit in with existing infrastructures and regulations.
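As a rough illustration of how such a sensing streetlamp could be organised, the sketch below shows sensors feeding local detectors whose derived events, rather than raw footage, drive decisions. The architecture, event types, and thresholds are assumptions for the example, not the Fieldlab's actual design.

```python
# Assumed edge architecture for a sensing streetlamp; illustrative only.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    source: str        # e.g. "camera" or "microphone"
    kind: str          # e.g. "loitering", "glass_break", "shouting"
    confidence: float  # detector confidence in [0, 1]

def decide(events, threshold=0.7):
    """Map detector events to actions; event kinds and thresholds are invented."""
    actions = []
    for e in events:
        if e.confidence < threshold:
            continue
        if e.kind == "glass_break":
            actions.append("notify_police")
        elif e.kind in ("loitering", "shouting"):
            actions.append("play_deterrent_sound")  # via the lamp's speakers
    return actions

print(decide([SensorEvent("microphone", "glass_break", 0.92),
              SensorEvent("camera", "loitering", 0.55)]))  # -> ['notify_police']
```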
Detection and decision-making in the “Burglary free neighbourhood” Fieldlab
The Constitution for the Kingdom of the Netherlands provides for a general right to the protection of privacy in Article 10, according to which restrictions of that right must be laid down by law. The GDPR Implementation Act (Uitvoeringswet Algemene Verordening Gegevensbescherming, UAVG), together with the Police Data Act (Wet Politiegegevens) and the Judicial Data and Criminal Records Act (Wet Justitiele en Strafvorderlijke Gegevens), which implement the GDPR and the LED, provides the legal framework regarding privacy and data protection.
The definition of personal data as enshrined in the GDPR and the LED is directly applicable under the Dutch law. To qualify data as such, “any information” must relate to an identified or identifiable natural person. Based on the data that can be captured by the Fieldlab programme, two elements of this definition need further attention.
“Information relating to a natural person”. The former Article 29 Working Party (2007) substantiated this element by noting that information can relate to an individual based on its content (i.e., the information is about the individual), its purpose (i.e., the information is used or likely to be used to evaluate, treat in a certain way, or influence the status or behaviour of an individual), or its result (i.e., the information is likely to have an impact on a certain person’s rights and interests, taking into account all the circumstances surrounding the precise case). These three alternative notions for determining whether information relates to an individual were endorsed by the CJEU in its Nowak decision (C-434/16), where it dealt with the purpose (i.e., it evaluates the candidate’s competence) and the result (i.e., it is used to determine whether the candidate passes or fails, which can have an impact on the candidate’s rights) of the information in question in determining whether the written answers to an exam would qualify as personal data. In brief, in determining whether the data captured by the Fieldlab programme qualify as personal data, the context in which the data is used or captured is important. Information about the level of crowding or sound could “relate” to an individual if it is used to evaluate or influence the behaviour of a person (based on its purpose), or to affect a person’s rights (based on its result) (Galič and Gellert 2021).
“Identifiable person”. The notion of identifiability covers circumstances where it is possible to distinguish the individual from a group of people by combining additional information (see 4.2.1). In situations where the person cannot be directly identified, the extent to which that person is identifiable depends on the possibilities of combining additional identifying information (Galič and Gellert 2021). However, where the system mainly operates on non-personal data because its aim is to influence the behaviour of a group of people, rather than an identified or identifiable person, the chances of having sufficient data to render the person identifiable are lower (ibid). A schematic summary of this two-step test follows below.
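The sketch below is a hedged paraphrase of the criteria discussed in the two items above, for illustration only; it is not a legal decision tool.

```python
# Schematic paraphrase of the two-step personal-data test; not a legal tool.
def relates_to_person(about_person: bool, used_to_evaluate: bool,
                      impacts_rights: bool) -> bool:
    # Content, purpose, and result are alternatives: any one suffices.
    return about_person or used_to_evaluate or impacts_rights

def is_personal_data(relates: bool, identifiable: bool) -> bool:
    # Both prongs must hold for data protection law to apply.
    return relates and identifiable

# Example: crowd/sound data used to influence a group's behaviour may relate
# to individuals by purpose, yet still fail the identifiability prong.
print(is_personal_data(relates_to_person(False, True, False), identifiable=False))  # -> False
```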
The uncertainties around these two elements of personal data mean that a project that monitors and tracks the behaviour of individuals in public spaces may fall outside the scope of data protection legislation, as it remains unclear whether the data it processes actually qualify as personal data. Notably, the Whitepaper on sensors and the role of municipalities (van Barneveld, Crover, and Yeh 2018), produced in collaboration with the Ministry of the Interior, contains a reference to the definition of personal data and to the possibility that combining, for example, sound data with camera recordings could trigger the application of data protection legislation, without giving further details. Unlike the corresponding sections of the other case studies, this section will not explore the further data processing conditions under the UAVG and the other relevant laws, because the primary data protection question raised by the Fieldlab programme, or any similar initiative, is whether it processes personal data at all.
Mobilisations and contestations
Despite visits from the mayor of Rotterdam and Secretary of State Sander Dekker, the Fieldlab of the Burglary Free Neighbourhood has not been discussed much in the Dutch media. The most prominent discussion of the project took place in a TV broadcast and an online video by Nieuwsuur, in which criminologist Marc Schuilenburg is sceptical about the technology deployed in the Fieldlab (Nieuwsuur 2020a, 5:38m).
The Safe City Projects in Nice (France)
Although several French cities such as Paris, Valenciennes and Marseille have launched pilot “safe city” projects involving biometric technologies (facial, voice and sound recognition), the city of Nice is perhaps the national leader in experimenting with such technologies at the local level (Nice Premium 2017). The mayor of Nice, Christian Estrosi (Les Républicains Party, right), a prominent figure on the national political scene, has made clear his intention to make Nice a “laboratory” of crime prevention (Barelli 2018). Since 2010, more than 1.962 surveillance cameras have been deployed throughout the city, giving it the highest CCTV coverage in France (27 cameras per square kilometre). Nice also has the largest municipal police force in France relative to its population: 414 agents for 340.000 inhabitants (by comparison, the neighbouring city of Marseille has 450 agents for 861.000 inhabitants).
The various facets of the “Safe city” project in Nice
Legal bases and challenges
The use of facial recognition systems in high schools in Nice and Marseille, which was declared unlawful by the Administrative Court of Marseille, raised important issues concerning the legality of deploying biometric technologies in public places.
There is no specific provision devoted to the right to privacy or data protection in the French Constitution of 1958, but constitutional safeguards for the interests protected under those rights exist. The French Constitutional Council (Conseil Constitutionnel) has recognised that respect for privacy is protected by Article 2 of the 1789 Declaration of the Rights of Man and of the Citizen, which is incorporated in the French constitutionality bloc as a binding constitutional rule (bloc de constitutionnalité) (French Constitutional Council, Decision N° 2004-492 DC of 2 March 2004). Accordingly, the collection, retention, use and sharing of personal data attract protection under the right to privacy (French Constitutional Council, Decision N° 2012-652 DC of 22 March 2012). Limitations of that right must thus be justified on grounds of general interest and implemented in an adequate manner, proportionate to this objective (ibid).
France has updated the Act N°78-17 of 6 January 1978 on information technology, data files and civil liberties in several stages to incorporate the provisions of the GDPR, address the exemptions the GDPR allows, and implement the LED.
The Act sets out the special framework reserved for sensitive data, including biometric data, in its Article 6, which states that sensitive data can be processed for the purposes listed in Article 9(2) of the GDPR as well as those listed in its Article 44. The latter include the re-use of information contained in court rulings and decisions, provided that neither the purpose nor the outcome of such processing is the re-identification of the data subjects, and the processing of biometric data by employers or administrative bodies where it is strictly necessary to control access to workplaces, equipment, and applications used by employees, agents, trainees, or service providers in their assignments.
Pursuant to Article 6 of the Act N°78-17, the processing of sensitive data can be justified on public interest grounds if it is duly authorised in accordance with Articles 31 and 32 of the Act. Accordingly, an authorisation by decree of the Conseil d'État (State Council), issued after a reasoned opinion of the CNIL, is required for the processing of biometric data on behalf of the State for the authentication or control of the identity of individuals (Article 32, Act N°78-17).
In February 2020, the Administrative Court of Marseille considered the extent to which the data subject’s explicit consent may provide an appropriate legal basis for the deployment of facial recognition systems to control access to high schools in Nice and Marseille (Administrative Court of Marseille, Decision N°1901249 of 27 February 2020). After recognising that data collected by facial recognition constitute biometric data (para. 10), the Court held that the required consent could not be obtained simply by having the students, or their legal representatives in the case of minors, sign a form, due to the power imbalance between the targeted public and the public educational establishment as the public authority (para. 12). More importantly, the Court determined that the biometric data processing could not be justified by the substantial public interest (i.e., controlling access to premises) envisioned in Article 9(2)(g) of the GDPR, absent any showing that the relevant aim could not be achieved by badge checks combined, where appropriate, with video surveillance (ibid).
Article 88 of the Act N°78-17 provides specific limitations on the processing of sensitive data for law enforcement purposes: such processing is prohibited unless it is strictly necessary, subject to appropriate safeguards for the data subject’s rights and freedoms, and based on one of the three grounds listed in Article 10 of the LED, including where it is authorised by law.
The Act N°78-17 grants data subjects rights against the processing of their personal data, with restrictions on the exercise of those rights subject to certain conditions (e.g., the restriction, for the protection of public security, of the right to access data processed for law enforcement purposes pursuant to Article 107 of Act N°78-17). An important data subject right in the context of biometric surveillance is the right not to be subjected to solely automated decision-making, including profiling, except where it is carried out in the circumstances laid out in Article 22 of the GDPR or for individual administrative decisions taken in compliance with French legislation (Article 47 of Act N°78-17). That said, in the latter circumstance, the automated data processing must not involve sensitive data (Article 47(2), Act N°78-17). Regarding data processing operations relating to State security and defence (Article 120, Act N°78-17) and to the prevention, investigation, and prosecution of criminal offences (Article 95, Act N°78-17), the Act lays out an absolute prohibition on solely automated decision-making, according to which no decision producing legal effects or similarly significant effects can be based on such decision-making intended to predict or assess certain personal aspects of the person concerned. In particular, with respect to data processing operations for law enforcement purposes, Article 95 of the Act prohibits any type of profiling that discriminates against natural persons on the basis of sensitive data as laid out in Article 6.
In addition to the data protection legislation, the other legislation applicable to biometric surveillance is the Code of Criminal Procedure. Its Article R40-26 allows the national police and gendarmerie to retain, in a criminal records database (Traitement des Antécédents Judiciaires, or TAJ), photographs of people suspected of having participated in criminal offences, as well as of victims and persons under investigation for causes of death, serious injury or disappearance, in a way that makes it possible to use a facial recognition device. According to a 2018 parliamentary report, the TAJ contains between 7 and 8 million facial images (Assemblée Nationale N°1335, 2018, 64, f.n. 2). La Quadrature du Net has lodged legal complaints against the retention of facial images before the Conseil d'État, arguing that this practice does not comply with the strict necessity test required under Article 10 of the LED and Article 88 of Act N°78-17 (La Quadrature du Net, 2020).
Facial Recognition in Hamburg, Mannheim & Berlin (Germany)
RBI Deployments in Germany
All the deployments of RBI we are aware of in Germany were conducted by law enforcement. They range from the use of facial recognition software to analyse the German central criminal information system, to deployments in targeted locations such as Berlin Südkreuz train station or Mannheim city centre, to deployments around specific events such as the G20 summit in Hamburg in 2017.
Legal bases and challenges
Assessing the legal permissibility of the examples of biometric video surveillance described above requires a brief description of the constitutional and legislative framework for the protection of privacy and personal data, and of the police powers granted under German law in relation to the use and processing of personal data.
The general right of personality based on Articles 2(1) and 1(1) of the German Constitution protects individuals against the collection, storage, and use of their personal data by public authorities (Eichenhofer and Gusy, 2017). The basic right to informational self-determination guarantees the authority to decide on the disclosure and also on the type of use of one's personal data (BVerfG, judgment of 15 December 1983 - 1 BvR 209/83, para. 149).
Germany adopted a new Federal Data Protection Act (BDSG) to make use of the discretionary powers and openings for national law contained in the GDPR. The BDSG also contains data protection provisions on the processing of personal data in the course of activities of public federal bodies that do not fall within the scope of Union law (e.g., intelligence services, Federal Armed Forces) (Part 4, BDSG), and implements the LED (Part 3, BDSG).
Paragraph 22 of the BDSG sets out lawful purposes, additional to those listed in Article 9 of the GDPR, for which sensitive data may be processed. For the purposes of this report, the relevant lawful purposes for processing operations by public bodies are the following: (i) the processing is urgently necessary for reasons of substantial public interest; (ii) the processing is necessary to prevent substantial threats to public security; (iii) the processing is urgently necessary to prevent substantial harm to the common good or to safeguard substantial concerns of the common good; (iv) the processing is necessary for urgent reasons of defence or to fulfil supra- or intergovernmental obligations of a public body. In each case, the interest pursued by any of these purposes must outweigh the data subject’s interests. Paragraph 22 of the BDSG further imposes obligations, such as access restriction and encryption, with a view to implementing appropriate safeguards to protect the data subjects’ interests when processing is carried out on the basis of the above purposes. Furthermore, §27 of the BDSG envisages the processing of sensitive data for scientific or historical research purposes or statistical purposes, subject to certain conditions.
In regard to the processing of sensitive data for law enforcement purposes, §48 of the BDSG permits the processing only where it is strictly necessary for the performance of the competent authority’s task, and subject to the existence of certain safeguards such as those in relation to data security and encryption.
In terms of the further use of data, §23 of the BDSG designates purposes for which personal data may be processed other than the initially intended purpose, such as where it is necessary to prevent substantial harm to the common good, threats to public security, defence, or national security, or where it is necessary to prevent serious harm to the rights of others. §49 of the BDSG lays out the rules for the processing of personal data for law enforcement purposes other than the initially intended law enforcement purpose.
Moreover, the BDSG devotes a specific section to the processing of personal data in the course of video surveillance. Pursuant to §4 of the BDSG, video surveillance of public spaces is permitted only insofar as it is necessary (i) for public bodies to perform their tasks, (ii) to exercise the right to determine who shall be allowed or denied access, or (iii) to safeguard legitimate interests for specifically defined purposes. There must be nothing to indicate that the data subject’s legitimate interest overrides the interest protected by any of the respective purposes, and the protection of the lives, health and freedom of people is to be considered a very important interest (§4, BDSG). More importantly, the data collected through video surveillance can be further processed if this is necessary to prevent threats to state and public security or to prosecute crimes (§4(4), BDSG). The same section further lays down conditions for notification about the surveillance at the earliest possible moment, for informing the data subject whose personal data may be collected as a result of the surveillance, and for the deletion of the data when it is no longer necessary.
The BDSG restricts the application of certain data subject rights enshrined in the GDPR, such as the right to be informed (§33) and the right to request access (§34). §37 of the Act provides a sectoral exception to the prohibition on solely automated decision-making in relation to services provided pursuant to an insurance contract. In relation to the processing of personal data for law enforcement purposes, the BDSG permits solely automated decision-making if it is authorised by law (§55). Nevertheless, the decision cannot be based on sensitive data unless there are suitable safeguards for the data subject (§55(2)). In any case, the Act explicitly prohibits profiling that may discriminate against people on the basis of their sensitive data (§55(3)).
The collection of personal data in general, and of facial images in particular, in criminal investigation proceedings is authorised under German law by the Federal Police Act (Gesetz über die Bundespolizei) (BPolG), by the Act on the Federal Criminal Police Office and the Cooperation of the Federal and State Governments in Criminal Police Matters (Bundeskriminalamtgesetz) (BKAG), by the Code of Criminal Procedure (Strafprozessordnung) (StPO), and by the police acts of the Länder.
§24 of the BPolG grants the Federal Police the authority to take photographs, including image recordings, of a person subject to specific conditions. Moreover, §26 of the BPolG entrusts the Federal Police with the power to collect personal data by making picture and sound recordings of participants in public events or gatherings if facts justify the assumption that there are significant risks to border security or to categories of people or objects. §27 of the BPolG further authorises the use of automatic image recording, albeit in relation to security risks at the border or to categories of people or objects. Each section provides obligations for the deletion of the data after a specific timeframe.
The BKAG provides the rules for information collection by the Federal Criminal Police Office in its information system, established pursuant to §13 of the BKAG. §12 of the Act allows the processing of personal data by the Office for purposes other than those for which they were collected in order to prevent, investigate, and prosecute serious crimes. Additionally, the personal data of people who are convicted of, accused of, or suspected of committing a crime, and for whom there are factual indications that they may commit crimes of considerable importance in the near future, may be processed to identify those persons (§12, para. 5, BKAG). The same provision states that personal data obtained by taking photos or image recordings of a person by means of the covert use of technical means in or outside of homes may not be further processed for law enforcement purposes.
§81b of the StPO grants the police the authority to take photographs and fingerprints of a suspect, and to take measurements of them, in order to conduct criminal proceedings. §100h of the StPO covers the police power to conduct covert surveillance measures, which includes the recording of photographs and other images of the person concerned outside of private premises where other means of establishing the facts or determining an accused’s whereabouts would offer less prospect of success or would be more difficult. In terms of the investigative powers of the police to use personal data in general, §98c of the StPO grants the authority to automatically match personal data from criminal proceedings with other data stored for the purposes of criminal prosecution or the enforcement of a sentence, or in order to avert a danger. This is, however, subject to specific rules under federal law or Länder law. §483 of the StPO authorises a number of authorities to process personal data where necessary for the purposes of criminal proceedings, including criminal proceedings other than the one for which the data were collected. §484 of the StPO allows for the processing of personal data for future criminal proceedings.
The Dragonfly project (Hungary)
Under the Government of Prime Minister Viktor Orbán, Hungary has been on a collision course with EU Institutions. It has centralised and consolidated its power by marginalising civil society and curtailing the autonomy of Hungarian media, cultural and higher education institutions (Csaky 2020; Gehrke 2020; Verseck 2020). Orbán’s continued erosion of the country’s democratic institutions was further advanced with the 2020 adoption of an emergency law which allows the government to rule by decree (Schlagwein 2020; Stolton 2020). In this context, the latest developments in using Biometric Identification Technologies in Hungary flag serious concerns regarding the rule of law, human rights and civil liberties.
Hungary is a frontrunner in Europe when it comes to authorising law enforcement’s use of Facial Recognition Technology, developing a nationwide and centralised database, and using the Home Quarantine App as part of the Government’s coronavirus measures. The infrastructure that potentially allows for a centralised deployment of biometric mass surveillance technologies in Hungary has reached an unprecedented scale, while the legal and ethical scrutiny of these technologies lags dangerously behind. This is due to (1) the overlap between the private and public sectors, specifically government institutions, and (2) the complex entanglements biometric systems have with other information systems (such as car registries, traffic management, public transport monitoring and surveillance, etc.). Although the latter are not concerned with traces of the human body, they can nonetheless be used for, and facilitate, biometric mass surveillance. These entanglements create grey zones of biometric mass surveillance in which the development and deployment of such technologies is hidden from view and from critical scrutiny.
Remote Biometric Identification in Hungary
The Hungarian Police’s use of Facial Recognition
On 10 December 2019 the Hungarian Parliament passed a package of amendments to acts governing the work of law enforcement in Hungary. Entitled “the simplification and digitisation of some procedures”, this adjustment legalised the use of forensic, but also live, FRT by the Hungarian Police (Hungarian Parliament 2019). In cases where a person identified by the police cannot present an ID document, police agents can take a photograph of the individual on location, take fingerprints, and record biometric data based on the “perception and measurement” of external characteristics. The photo taken on location can be instantly verified against the database of the national registry of citizens. The automatic search is performed by a face recognition algorithm, and the five closest matches are returned to the police agent who, based on these photos, proceeds with identifying the person (1994. Évi XXXIV. Törvény, para 29/4(a)). This application of FRT does not fall under the category of mass surveillance; it is, however, only possible thanks to a central system that collects and centralises national and other biometric databases and also provides the technical support for quick and effective access by various operational units, in this instance the patrolling police.
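The matching step can be sketched as follows. This is an illustrative, embedding-based reconstruction; the actual algorithm used by the Hungarian system is not public, and the registry and scores here are stand-ins.

```python
# Illustrative top-5 retrieval against a registry of face embeddings.
import numpy as np

def top_k_matches(probe, registry, k=5):
    """Rank registry entries by cosine similarity to the probe embedding."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = [(pid, cos(probe, emb)) for pid, emb in registry.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

# Hypothetical usage with random stand-in embeddings.
rng = np.random.default_rng(1)
registry = {f"citizen_{i}": rng.normal(size=256) for i in range(1000)}
probe = registry["citizen_42"] + rng.normal(scale=0.05, size=256)
for pid, score in top_k_matches(probe, registry):
    print(pid, round(score, 3))  # the officer reviews these five candidates
```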
The Dragonfly (Szitakötő) Project
In 2018 the Ministry of Interior presented a bill to the Hungarian Government proposing a centralised CCTV system, with data stored in a single centralised database called the Governmental Data Centre (Kormányzati Adatközpont, abbreviated as KAK). All governmental operations aimed at developing this centralised database run under the name Szitakötő (Dragonfly). This central storage facility collects surveillance data from public spaces (streets, squares, parks, parking facilities, etc.), the Centre for Budapest Transport (BKK), bank security, and the Hungarian Public Road PLC. The project, with an estimated budget of 50 billion forints (160 million euros), proposes to centralise about 35.000 CCTV cameras and 25.000 terabytes of monitoring data from across the country (NAIH 2018). While the project, and notably the response of Dr. Attila Péterfalvi, head of the Hungarian National Authority for Data Protection and Freedom of Information (NAIH), who warned of the lack of data protection considerations in the bill, have been widely covered in the media, this has done little to halt the project, which has already been rolled out. In 2015 the Hungarian company GVSX Ltd had already been contracted (NISZ-GVSX 2019) to implement an Integrated Traffic Management and Control System called IKSZR (Integrált Közlekedésszervezési és Szabályozási Rendszer), which centralises data from various systems such as ANPR cameras, car parks, traffic monitoring, and meteorological data. The Dragonfly Project has been designed as an expansion of this system, centralising the data flowing from the IKSZR system, the databases of the National Infocommunication Services (NISZ), and CCTV data from other public and private surveillance systems, such as those operated by local governments, public transport companies, and banks.
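The centralisation described above can be rendered as a schematic data-flow sketch, reconstructed from the public descriptions cited here; the feed names are illustrative, not taken from any procurement document.

```python
# Schematic reconstruction of the Dragonfly data flows; feed names illustrative.
DRAGONFLY_FEEDS = {
    "IKSZR": ["anpr_cameras", "car_parks", "traffic_monitoring", "weather"],
    "NISZ_databases": ["citizen_registry", "visa_entries", "police_alerts"],
    "external_cctv": ["municipalities", "public_transport", "banks"],
}

def ingest_all(feeds):
    """Flatten every source into one central store: the design property that
    worried the Hungarian DPA (many collectors, one processor)."""
    return [f"{system}/{source}" for system, sources in feeds.items()
            for source in sources]

print(len(ingest_all(DRAGONFLY_FEEDS)), "feeds centralised")  # -> 10 feeds centralised
```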
The technical description of the Dragonfly Project makes no explicit reference to (live) facial recognition technology; the system does, however, collect, store, and search, in real time, video surveillance footage from 35.000 CCTV cameras. From the reports of the Hungarian Civil Liberties Union (HCLU, or TASZ in Hungarian) and the DPA, it is known (NAIH 2019, 139) that FRT has to some extent been used by the Secret Service for National Security (SSNS), one of the national security services of Hungary. According to the DPA’s investigation, all the cases in which FRT was used related to concrete (criminal) cases, looking for a missing person or someone under warrant, and were limited to specific geographic locations (NAIH 2019). According to the same investigation, in 2019 the FRT system operated by the SSNS found 6.000 matches, which resulted in around 250 instances of stop-and-search and 4 arrests (NAIH 2019). The numbers for 2020 are inconsistent with those given for 2019 (3 matches, 28 instances of stop-and-search, an unknown number of arrests); this is probably due to the fact that the system has since been moved primarily to the jurisdiction of the Hungarian Police.
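A back-of-the-envelope reading of the reported 2019 figures shows how narrow the funnel from algorithmic match to arrest was:

```python
# Funnel implied by the figures in NAIH (2019), as cited above.
matches, stops, arrests = 6000, 250, 4
print(f"stops per match:   {stops / matches:.1%}")    # ~4.2%
print(f"arrests per match: {arrests / matches:.3%}")  # ~0.067%
```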
While the legal framework for police checks does refer to the use of facial recognition technologies, the national security act does not mention it. This is all the more striking as the SSNS is known to use FRT to provide classified information to the national security services, the police, and other authorised institutions (e.g., the prosecutor’s office, the tax office, etc.).
Two interrelated companies are responsible for the development, maintenance, and administration of this single central system: NISZ and IdomSoft Ltd., both owned by the state. NISZ, the National Infocommunication Services, is a 100% state-owned company that in 2020 alone signed six contracts to purchase the hardware, storage, and other IT equipment necessary for implementing the Dragonfly Project. While public procurement documents (Közbeszerzési Hatóság, 2020) bear witness to the ongoing investment in and development of the Dragonfly Project by the Hungarian Government, a comprehensive overview of the project, the stages of its implementation, or its budget is nowhere to be found.
The other company responsible for the administration of the Dragonfly Project is IdomSoft, a member of the so-called NISZ group. IdomSoft is a 100% indirectly state-owned company (indirect ownership meaning that the government holds its shares not directly but through authorised state institutions or other organisations) that, according to its website, “plays a leading role in the development, integration, installation and operation of IT systems of national importance”. Apart from administering the national Dragonfly database, IdomSoft also ensures the interoperability of the various national databases, such as the citizens’ registry, passport and visa databases, car registries, and police alerts, and it connects the Hungarian databases to the Schengen Information System (SIS II).
Since the implementation of the Dragonfly Project, the Hungarian government has been collecting video surveillance data, centralised in the Governmental Data Centre (Kormányzati Adatközpont), in the same location and by the same institutions that administer the national registry of citizens, visa entries, police databases, and other e-government databases such as those related to social security, the tax office, or health records.
While the COVID-19 pandemic brought a temporary halt to movement in public spaces, it also facilitated the introduction of new tracking technologies. Hungary is one of two countries in Europe (Poland being the other) to have introduced a home quarantine app which uses automated face recognition technology to verify that people stay in quarantine for the required time.
The normalisation of biometric surveillance at home: The Hungarian Home Quarantine App
In May 2020 the Hungarian authorities rolled out two digital applications: the contact-tracing app VirusRadar (Kaszás 2020) and the Home Quarantine App (Házi Karantén Rendszer, abbreviated HKR). Both are centralised tracing apps, meaning that they send contact logs with pseudonymised personal data to a central (government) back-end server (Council of Europe 2020, 28). While VirusRadar only uses Bluetooth data and the proximity of other devices, the HKR processes biometric data when comparing facial images of its users.
Those who, under the COVID-19 regulations in Hungary, are confined to home quarantine are offered the option of using the app instead of being checked by the police. For those who return from abroad, the use of the app is compulsory. But even those who can choose are encouraged by the authorities to use the HKR app, as they will otherwise be subjected to frequent visits by police agents. Once a person downloads the app, its use becomes compulsory, and failure to use it or attempts to evade its tracking are considered an administrative offence. From a data protection law point of view, this is a clear case in which the data subject’s consent (and, in the case of biometric data, their explicit consent) cannot provide the lawful ground for the processing of data through the app (see section 4.2.2). Even if the processing can be based on another lawful ground, such as public interest, the punitive nature of non-compliance may raise issues in terms of adhering to the necessity test, which requires a balancing act between the objective pursued and the data subject’s interests.
The HKR app was developed by Asura Technologies and implemented by IdomSoft Ltd., the same company that provides the software and technical implementation for the nationwide Dragonfly Project. The HKR application works with face recognition technology combined with location verification. The application sends notifications at random times, prompting the user to upload a facial image while retrieving the location data of the mobile device. The user must respond within 15 minutes, and the location data must match the address registered for quarantine. In order for the Home Quarantine App to work, the user first needs to upload a facial image, which is compared by a police officer with the photo of the same individual stored in the central database. After this facial verification, the app creates a biometric template on the user’s mobile phone and the photo is deleted. Subsequent photos are compared only against this biometric template, so neither the photos nor the template leave the personal device. If there is suspicion about the identity or whereabouts of the user, a police officer visits the address to make sure that the person is adhering to the quarantine rules.
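One verification round can be sketched as follows. This reflects our reading of the public descriptions of the HKR app; the function names and structure are illustrative assumptions. Note that the face comparison happens on the device, and only the pass/fail outcome leaves it.

```python
# Schematic sketch of an HKR verification round; names are assumptions.
import time

RESPONSE_WINDOW_S = 15 * 60  # the user must respond within 15 minutes

def quarantine_check(on_device_template, registered_address,
                     capture_selfie, read_gps, compare):
    deadline = time.time() + RESPONSE_WINDOW_S   # prompt sent at a random moment
    selfie = capture_selfie()
    if time.time() > deadline:
        return "missed_check -> police visit"
    if read_gps() != registered_address:
        return "location_mismatch -> police visit"
    if not compare(selfie, on_device_template):  # comparison stays on the device
        return "face_mismatch -> police visit"
    return "check_passed"

# Hypothetical usage with trivial stand-ins for the device functions.
print(quarantine_check(
    on_device_template="TEMPLATE",
    registered_address=(47.4979, 19.0402),
    capture_selfie=lambda: "TEMPLATE",
    read_gps=lambda: (47.4979, 19.0402),
    compare=lambda selfie, template: selfie == template,
))  # -> 'check_passed'
```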
Interestingly, the HKR app, just like the contact-tracing app VirusRadar (which was developed by Nextsense), has been “donated” to the Hungarian Government by Asura Technologies “free of charge”.
Figure 5. Snapshots from the video Home Quarantine System Short Presentation by Asura Technologies38
Legal bases and challenges
The creation of a nationwide and centralised database that uses facial recognition technology raises important legal questions about its compliance with the constitutional framework and data protection legislation. Article VI of the Fundamental Law of Hungary affirms the right to privacy and the right to the protection of personal data. They are implemented by the Act on the Right to Informational Self-Determination and Freedom of Information (2011. évi CXII. törvény az információs önrendelkezési jogról és az információszabadságról) (Infotv), which was amended in 2018 to make use of the discretionary powers and openings for national law contained in the GDPR. With the amendments, the Act also provides rules for data processing activities that fall outside the scope of the GDPR and implements the LED. The sectoral laws on the processing of personal data have been amended as of 2019 to comply with the GDPR.
The Infotv permits the processing of sensitive data where: (i) the processing is necessary and proportionate to protect the vital interests of the data subject or another person; (ii) the data has been made publicly available by the data subject; or (iii) the processing is absolutely necessary and proportionate for the implementation of an international treaty, or is required by law for the enforcement of fundamental rights, national security, or the prevention, detection or prosecution of criminal offences (§5). Furthermore, (non-sensitive) “personal criminal data” (bűnügyi személyes adat), that is, personal data obtained during criminal justice proceedings, can only be processed by state or municipal bodies for the prevention, detection and prosecution of criminal offences and for administrative and judicial tasks, as well as in criminal, civil and non-judicial matters (§5(4)).
In regard to data subjects’ rights, notably, the Infotv permits solely automated decision-making: a decision based on a solely automated decision-making process may be taken if it is permitted by national law or EU law, subject to certain conditions (§6). Such a decision can be based on sensitive data if this is authorised by national law or EU law (§6(c)).
Recently, the Hungarian Government issued a Decree (Decree No. 179/2020 of 4 May) in response to the COVID-19 pandemic, for which it declared a “state of emergency” (Stolton 2020). The Decree restricts the scope of a number of data subject rights, such as the right to be informed. The EDPB (2021b) was highly critical of those restrictions. It considered in particular that, although a state of emergency adopted in the context of a pandemic may serve as a circumstance triggering Article 23 of the GDPR, according to which EU Member States can restrict the scope of the data subject rights and certain data protection principles (see section 4.2.2), those states must nevertheless adhere to the guarantees enshrined in the same Article for those restrictions to be legal under the GDPR (ibid). It further emphasised the fundamental rights requirements that must be observed, and that a general blanket restriction on the scope of the data subject’s rights would infringe upon the essence of fundamental rights (ibid).
In terms of the public authorities’ power to use sensitive data in relation to criminal proceedings, § 269 of the Criminal Procedure Act (2017. évi XC. Törvény a büntetőeljárásró) authorises the prosecutor's office, the investigating authority, and the crime prevention, detection and counter-terrorism bodies of the police to request the existing biometric data held in accordance with the Act on the criminal registry system, the registry of judgments against Hungarian citizens passed by the courts of Member States of the European Union and the registry of criminal and police biometric data (2009. évi XLVII. Törvény a bűnügyi nyilvántartási rendszerről, az Európai Unió tagállamainak bíróságai által Magyar állampolgárokkal szemben hozott ítéletek nyilvántartásáról, valamint a bűnügyi és rendészeti biometrikus adatok nyilvántartásáról) and request facial image analysis from the body responsible for the management and operation of the facial image register.
Mobilisations and contestations
The Dragonfly Project has elicited numerous warnings regarding data protection and the right to privacy from both public and private organisations (TASZ 2021). In October 2018, the Hungarian National Authority for Data Protection and Freedom of Information (NAIH) filed a communiqué (NAIH 2018) stressing the problems raised by centralising and storing visual data from as many as 35.000 CCTV cameras across the country and public transport facilities, resulting in 25.000 terabytes of surveillance data.
The main concerns, according to the NAIH, stemmed from the fact that once the surveillance data is centralised, the collecting bodies cease to be the official administrators of these databases. Moreover, they will not even know how and by whom the data is collected, accessed and used, or for what purposes. What is even more worrisome, according to the communiqué, is that the centralised database (the Governmental Data Centre) would not administer the data either; it would only process it. Therefore, while the database can be accessed and more or less freely “used” by a number of clients (such as government organisations, law enforcement, and the secret services), there is no legal body responsible for applying data protection measures or liable in case of transgressions. Eventually, the government incorporated some of the suggestions: the owners of the data remain the uploading bodies, to whom the different authorised bodies (e.g., the Hungarian Police) must address their requests for access to the database.
The independent Hungarian media have also picked up the news. For instance, Hungary’s leading independent economic and political weekly HVG published an article outlining the bill and citing the head of the NAIH (Dercsényi 2018). Interestingly, the article starts with an announcement/amendment in which HVG expresses its regret for harming the good reputation of the Ministry of the Interior by claiming that the bill had not incorporated the suggestions of the NAIH, which is not true; the article nevertheless still makes that claim (Dercsényi 2018). Other liberal online news sites and magazines, such as Magyar Narancs (Szalai 2019), 444.hu (Herczeg 2019) and 24.hu (Kerékgyártó 2018; Spirk 2019), have also reported on the case. The main pro-government newspapers, such as Magyar Nemzet, however, remain silent.
More recently, in January 2021, the INCLO, a network of civil liberties NGOs, published a report (INCLO 2021) discussing the Hungarian case, and specifically the Dragonfly Project, as an example of how the use of FRT is at odds with the right to privacy and civil liberties. Their main concern is that, due to inadequate regulation, FRT can be used in conjunction with a CCTV network that is being developed at an alarming rate.
In an interview, one of the authors of the INCLO case study, legal expert Ádám Remport, explains:
State-operated and centralised mass surveillance systems, such as the Dragonfly Project currently under development in Hungary, raise at least two sets of questions with regard to their societal and political effects. The first set of questions concerns visibility and the (lack of) possibility for societal debate and contestation. The second concerns the grey areas of legislation and regulation. When the development and deployment of novel technologies such as biometric video surveillance and (live) facial recognition becomes entangled with the national interest in reinforcing public order, preventing terrorism, and fighting criminality, or, as with the Home Quarantine App, enforcing coronavirus measures, the capacity for effective oversight might be seriously compromised. The Hungarian governmental decree of 16 March 2020 is a case in point. While the decree authorises the Minister for Innovation and Technology and an operational body consisting of representatives of the Ministry of Interior, the police, and the health authorities to “acquire and process any kind of personal data from private or public entities, including traffic and location data from telecommunication providers, with a very broad definition of the purpose for which the data can be used” (Council of Europe 2020, 12), at the same time the ordinary courts were suspended, preventing the Constitutional Court from reviewing the proportionality of measures introduced under emergency conditions (ibid, 10).
Using such technologies for the so-called public good can even attract the support of residents who want to live in safe and predictable environments. The fact that these public environments are “secured” at the expense of curtailing the human rights to privacy and to one’s face and biometric data is often overlooked by the public. As the human rights NGO Hungarian Civil Liberties Union has put it in a recent publication:
“If there was oversight, I think that the use of these technologies would be probably more accepted. There’s certainly a possibility for abuses. This doesn’t necessarily mean that these abuses happen, first of all because it’s impossible to prove them, and second, we have no direct evidence of them. This needs to be emphasised. But in reality, it only depends on the personal good will of the secret services not to breach individual’s privacy rights. Because in the end there’s no viable or independent oversight over their workings. They can go by the rules, and most of the times they probably do. Unless they don’t. But then, we will never find out.”
3. The EU should promote the reinforcement of robust accountability mechanisms for biometric surveillance systems.
The current legislative framework remains unclear as to which institutions may review or authorise biometric surveillance systems. In light of the GDPR and the LED, the Data Protection Authorities (DPAs) in some member states enforce the relevant data protection legislation and oversee the processing of biometric data, while in others a separate authority is tasked with the responsibility to review the compatibility with the relevant legislation insofar as personal data processing by law enforcement authorities is concerned (such as Belgium, see case study).
The EU should work toward developing a centralised authorisation process for biometric surveillance, within which all relevant authorities are included and are able to veto the authorisation.
Although the proposed EU Artificial Intelligence Act limits the requirement of prior authorisation by a court or independent administrative authority to ‘real-time’ biometric surveillance, it is necessary to underline that ex-post biometric identification systems must also be subject to supervision or authorisation, taking into account the standards under the ECHR and the Charter.
REFERENCES
1994. Évi XXXIV. Törvény - Nemzeti Jogszabálytár. 1994. https://njt.hu/jogszabaly/1994-34-00-00.
2015. Évi CLXXXVIII. Törvény - Nemzeti Jogszabálytár. 2015. https://njt.hu/jogszabaly/2015-188-00-00.
7sur7. 2019. “Des caméras avec reconnaissance faciale à Brussels Airport.” https://www.7sur7.be/belgique/des-cameras-avec-reconnaissance-faciale-a-brussels-airport~a46f7a4c/.
Xie, Ning, Gabrielle Ras, Marcel van Gerven, and Derek Doran. 2020. ‘Explainable Deep Learning: A Field Guide for the Uninitiated’. arXiv:2004.14545 [cs, stat]. http://arxiv.org/abs/2004.14545