batch replacements, phase1

This commit is contained in:
Ruben van de Ven 2021-10-07 15:08:33 +02:00
parent 3f39033931
commit f0476ed2a1
1 changed file with 125 additions and 125 deletions
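The diff below shows plain terms being wrapped in `maplink` anchors. The script that performed the batch replacement is not part of the commit, so the function and mapping below are purely illustrative; a minimal sketch of the substitution pattern, for terms that make up a whole table cell, might look like:

```python
import re

# Illustrative subset of the term -> data-title mapping visible in this diff.
# The real replacement list behind the commit is not included here.
MAPLINKS = {
    "CATCH": "CATCH",
    "CNIL": "CNIL",
    "NIST": "NIST",
}

def add_maplinks(html: str) -> str:
    """Wrap terms that form an entire <th>/<td> cell in a maplink anchor."""
    for term, title in MAPLINKS.items():
        # Anchor the match to a full table cell so occurrences that are
        # already linked, or appear mid-sentence, are left untouched.
        pattern = re.compile(r"<(th|td)>" + re.escape(term) + r"</\1>")
        replacement = (
            r'<\1><a class="maplink" data-title="' + title + r'">'
            + term
            + r"</a></\1>"
        )
        html = pattern.sub(replacement, html)
    return html
```

Matching on the enclosing cell tag (with a backreference so `<th>`…`</th>` and `<td>`…`</td>` pair correctly) is what keeps the replacement idempotent on a second run.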

@@ -198,7 +198,7 @@
<td>German Federal Police</td>
</tr>
<tr class="even">
<th>CATCH</th>
<th><a class="maplink" data-title="CATCH">CATCH</a></th>
<td>Central Automatic TeChnology for Recognition of Persons (Netherlands)</td>
</tr>
<tr class="odd">
@@ -218,7 +218,7 @@
<td>Court of Justice of the European Union (EU)</td>
</tr>
<tr class="odd">
<th>CNIL</th>
<th><a class="maplink" data-title="CNIL">CNIL</a></th>
<td>National Commission for Informatics and Freedoms (France)</td>
</tr>
<tr class="even">
@@ -287,7 +287,7 @@
</tr>
<tr class="even">
<th>EU</th>
<td>European Union</td>
<td><a class="maplink" data-title="European Union">European Union</a></td>
</tr>
<tr class="odd">
<th>FRA</th>
@@ -307,7 +307,7 @@
</tr>
<tr class="odd">
<th>HCLU</th>
<td>Hungarian Civil Liberties Union (Hungary). See “</td>
<td><a class="maplink" data-title="HCLU">Hungarian Civil Liberties Union (Hungary)</a>. See “</td>
</tr>
<tr class="even">
<th>HD</th>
@@ -319,7 +319,7 @@
</tr>
<tr class="even">
<th>HKR</th>
<td>Home Quarantine App (Hungary)</td>
<td><a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App (Hungary)</a></td>
</tr>
<tr class="odd">
<th>IARPA</th>
@@ -331,7 +331,7 @@
</tr>
<tr class="odd">
<th>IFRS</th>
<td>Interpol Facial R</td>
<td><a class="maplink" data-title="IFRS (Interpol)">Interpol Facial Recognition System</a></td>
</tr>
<tr class="even">
<th>IKSZR</th>
@@ -375,7 +375,7 @@
</tr>
<tr class="even">
<th>LQDN</th>
<td>La Quadrature du Net (France)</td>
<td><a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a> (France)</td>
</tr>
<tr class="odd">
<th>GMO</th>
@@ -402,7 +402,7 @@
<td>Non-Governmental Organisation</td>
</tr>
<tr class="odd">
<th>NIST</th>
<th><a class="maplink" data-title="NIST">NIST</a></th>
<td>National Institute of Standards and Technology (USA)</td>
</tr>
<tr class="even">
@@ -443,7 +443,7 @@
</tr>
<tr class="odd">
<th>TASZ</th>
<td>Hungarian Civil Liberties Union</td>
<td><a class="maplink" data-title="HCLU">Hungarian Civil Liberties Union</a></td>
</tr>
<tr class="even">
<th>TELEFI</th>
@@ -555,12 +555,12 @@
<li><p>Several French cities have launched “safe city” projects involving biometric technologies; however, Nice is arguably the national leader. The city currently has the highest CCTV coverage of any city in France and has more than double the police agents per capita of the neighbouring city of Marseille.</p></li>
<li><p>Through a series of public-private partnerships the city began a number of initiatives using RBI technologies (including emotion and facial recognition). These technologies were deployed for both authentication and surveillance purposes with some falling into the category of biometric mass surveillance.</p></li>
<li><p>One project which used FRT at a high school in Nice and one in Marseille was eventually declared unlawful. The court determined that the required consent could not be obtained due to the power imbalance between the targeted public (students) and the public authority (public educational establishment). This case highlights important issues about the deployment of biometric technologies in public spaces.</p></li>
<li><p>The use of biometric mass surveillance by the mayor of Nice, Christian Estrosi, has put him on a collision course with the French Data Protection Authority (CNIL) as well as human rights/digital rights organisations (Ligue des Droits de l’Homme, La Quadrature du Net). His activities have raised both concern and criticism over the usage of the technologies and their potential impact on the privacy of personal data.</p></li>
<li><p>The use of biometric mass surveillance by the mayor of Nice, Christian Estrosi, has put him on a collision course with the French Data Protection Authority (<a class="maplink" data-title="CNIL">CNIL</a>) as well as human rights/digital rights organisations (Ligue des Droits de l’Homme, <a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a>). His activities have raised both concern and criticism over the usage of the technologies and their potential impact on the privacy of personal data.</p></li>
</ul>
<p><strong>CHAPTER 9: Facial Recognition in Südkreuz Berlin, Hamburg G20 and Mannheim (Germany)</strong></p>
<ul>
<li><p>The German federal police, in cooperation with the German railway company, conducted a project called “Sicherheitsbahnhof” at the Berlin railway station Südkreuz in 2017/18, which included 77 video cameras and a video management system.</p></li>
<li><p>The police in Hamburg used facial recognition software Videmo 360 during the protests against the G20 summit in 2017. The database includes 100.000 individuals who were in Hamburg during the G20 summit and whose profiles are saved in the police database. The technology allows for the determination of behaviour, participation in gatherings, preferences, and religious or political engagement.</p></li>
<li><p>The police in Hamburg used facial recognition software <a class="maplink" data-title="Videmo">Videmo</a> 360 during the protests against the G20 summit in 2017. The database includes 100.000 individuals who were in Hamburg during the G20 summit and whose profiles are saved in the police database. The technology allows for the determination of behaviour, participation in gatherings, preferences, and religious or political engagement.</p></li>
<li><p>Sixty-eight cameras were installed by local police on central squares and places in the German city Mannheim to record the patterns of movement of people. In this project, which started in 2018, the software is used to detect conspicuous behaviour.</p></li>
<li><p>Two of these deployments (Mannheim &amp; Berlin Südkreuz) took place as measures to test the effectiveness of facial recognition and behavioural analysis software. This “justification as a test” approach is often used in Germany to argue for a deviation from existing rules and societal expectations and was similarly applied during deviations to commonly agreed measures in the Coronavirus/COVID-19 pandemic.</p></li>
<li><p>Resistance to video surveillance is also in no small part a result of constant campaigning and protest by German civil society. The Chaos Computer Club and Digital Courage have consistently campaigned against video surveillance and any form of biometric or behavioural surveillance. The long-term effect of these “pilots” is to normalise surveillance.</p></li>
@@ -570,11 +570,11 @@
<li><p>The Hungarian Government led by Prime Minister Viktor Orbán has long been on a collision course with EU Institutions over the rule of law and the undermining of the country’s judicial independence and democratic institutions.</p></li>
</ul>
<ul>
<li><p>Hungary is a frontrunner in Europe when it comes to authorising law enforcement’s use of Facial Recognition Technology, developing a nationwide and centralised database (The Dragonfly Project), and using the Home Quarantine App as part of the Government’s Coronavirus measures.</p></li>
<li><p>Hungary is a frontrunner in Europe when it comes to authorising law enforcement’s use of Facial Recognition Technology, developing a nationwide and centralised database (The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a>), and using the <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a> as part of the Government’s Coronavirus measures.</p></li>
<li><p>The infrastructure in place that potentially allows for a centralised deployment of biometric mass surveillance technologies in Hungary has reached an unprecedented scale while the legal and ethical scrutiny of these technologies lags dangerously behind.</p></li>
<li><p>This is due to (1) the overlap between the private and public sectors, specifically government institutions, and (2) the complex entanglements biometric systems have with other information systems (such as car registries, traffic management, public transport monitoring and surveillance, etc.).</p></li>
<li><p>Although the latter are not concerned with the traces of the human body they can nonetheless be used for and facilitate biometric mass surveillance. These entanglements create grey zones of biometric mass surveillance where the development and deployment of such technologies is hidden from visibility and critical scrutiny.</p></li>
<li><p>The Dragonfly Project has elicited numerous warnings regarding data protection and the rights to privacy from both public and private organisations. However, the lack of contestation and social debate around the issues of privacy and human rights in relation to such projects as the Hungarian Government’s Dragonfly is striking.</p></li>
<li><p>The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> has elicited numerous warnings regarding data protection and the rights to privacy from both public and private organisations. However, the lack of contestation and social debate around the issues of privacy and human rights in relation to such projects as the Hungarian Government’s Dragonfly is striking.</p></li>
</ul>
<p><strong>CHAPTER 11: Recommendations</strong></p>
<p><strong>1. The EU should prohibit the deployment of both indiscriminate and “targeted” Remote Biometric and Behavioural Identification technologies in public spaces, as it amounts to mass surveillance.</strong></p>
@@ -613,7 +613,7 @@
<ul>
<li><p>From a technical perspective, <strong>biometric mass surveillance can easily emerge by connecting different elements of a technical infrastructure</strong> (video acquisition capacities, processing algorithms, biometric datasets) <strong>developed in other contexts.</strong></p></li>
<li><p>For example, while the <strong>forensic use of facial recognition</strong> is not a form of <strong>remote biometric identification</strong> per se, the adoption of such systems has allowed for the creation of biometrically searchable national datasets. These datasets are one piece of a potential <strong>biometric mass surveillance</strong> infrastructure which can become a technical reality if live camera feeds, processed through live facial recognition software, are connected to them.</p></li>
<li><p>In order to maintain democratic oversight over the uses of the infrastructure, and <strong>avoid the risk of function creep</strong> (i.e. when a technology is being used beyond its initial purpose) it is thus imperative that the principle of <strong>purpose limitation</strong> is systematically enforced and strictly regulated with regard to the <strong>type of data</strong> (criminal or civilian datasets, datasets generated from social media, as in the Clearview AI controversy) against which biometric searches can be performed.</p></li>
<li><p>In order to maintain democratic oversight over the uses of the infrastructure, and <strong>avoid the risk of function creep</strong> (i.e. when a technology is being used beyond its initial purpose) it is thus imperative that the principle of <strong>purpose limitation</strong> is systematically enforced and strictly regulated with regard to the <strong>type of data</strong> (criminal or civilian datasets, datasets generated from social media, as in the <a class="maplink" data-title="Clearview AI">Clearview AI</a> controversy) against which biometric searches can be performed.</p></li>
</ul>
<p><strong>6. The EU should support voices and organisations which are mobilised for the respect of EU fundamental rights</strong></p>
<ul>
@@ -640,7 +640,7 @@
<li><p>Four main positions have emerged with regard to the deployments of RBI technologies and their potential impact on fundamental rights: 1) active promotion; 2) support with safeguards; 3) moratorium; and 4) outright ban.</p></li>
</ul>
</div> <!-- key points -->
<p>Since the widespread use of neural network algorithms in 2012, artificial intelligence applied to the field of security has steadily grown into a political, economic, and social reality. As examples from Singapore, the UK, South Africa, or China demonstrate, the image of a digital society of control, in which citizens are monitored through algorithmically processed audio and video feeds is becoming a tangible possible reality in the European Union.</p>
<p>Since the widespread use of neural network algorithms in 2012, artificial intelligence applied to the field of security has steadily grown into a political, economic, and social reality. As examples from Singapore, the UK, South Africa, or China demonstrate, the image of a digital society of control, in which citizens are monitored through algorithmically processed audio and video feeds is becoming a tangible possible reality in the <a class="maplink" data-title="European Union">European Union</a>.</p>
<p>Through a set of “pilot projects”, private and public actors including supermarkets, casinos, city councils, border guards, local and national law enforcement agencies are increasingly deploying a wide array of “<strong>smart surveillance</strong>” solutions. Among them <strong>remote biometric identification,</strong> namely <strong>security mechanisms “that leverage unique biological characteristics” such as fingerprints, facial images, iris or vascular patterns to “identify multiple persons’ identities at a distance, in a public space and in a continuous or ongoing manner by checking them against data stored in a database.”</strong> (European Commission 2020b, 18) European institutions have reacted with a series of policy initiatives in the last years, but as we will show in this report, if left unchecked, remote biometric identification technologies can easily become <strong>biometric mass surveillance</strong>.</p>
<p>Among technologies of <strong>remote biometric identification</strong>, <strong>facial recognition</strong> has been at the centre of the attention of most discussions in the public debate. The foregrounding of this specific use case of computer vision in the public debate has allowed concerned actors to raise awareness on <strong>the dangers of artificial intelligence algorithms applied to biometric datasets</strong>. But it has also generated confusion. The perception that facial recognition is a single type of technology (i.e., an algorithm “that recognises faces”) has obscured <strong>the broad range of applications of “smart technologies” within very different bureaucratic contexts</strong>: from the “smart cities” live facial recognition of video feeds deployed for the purpose of public space surveillance, to the much more specific, on-the-spot searches by law enforcement for the purpose of carrying out arrests or forensic investigations.</p>
@@ -654,16 +654,16 @@
</section>
<section id="the-international-context" class="level2">
<h2>The international context</h2>
<p>The concern for uncontrolled deployment of <strong>remote biometric identification</strong> systems emerges in a context characterised by the development of technologies in authoritarian regimes; the development of controversial “pilot” projects as part of “smart cities projects” in Europe; revelations about controversial privacy practices of companies such as Clearview AI; and finally, by the emergence of a US and EU debate around some of the key biases and problems they entail.</p>
<p>The concern for uncontrolled deployment of <strong>remote biometric identification</strong> systems emerges in a context characterised by the development of technologies in authoritarian regimes; the development of controversial “pilot” projects as part of “smart cities projects” in Europe; revelations about controversial privacy practices of companies such as <a class="maplink" data-title="Clearview AI">Clearview AI</a>; and finally, by the emergence of a US and EU debate around some of the key biases and problems they entail.</p>
<p>In 2013, the Chinese authorities officially revealed the existence of a large system of mass surveillance involving more than 20 million cameras called Skynet, which had been in development since 2005. While the cameras were aimed at the general public, more targeted systems were deployed in provinces such as Tibet and Xinjiang where political groups contest the authority of Beijing. In 2018, the surveillance system became coupled with a <strong>system of social credit</strong>, and Skynet became increasingly connected to facial recognition technology (Ma 2018; Jiaquan 2018). By 2019, it was estimated that Skynet had reached 200 million face-recognition enabled CCTV cameras (Mozur 2018).</p>
<p>The intrusiveness of the system, and its impact on fundamental rights, is best exemplified by its deployment in the Xinjiang province. The province’s capital, Urumqi, is dotted with <strong>checkpoints and identification stations</strong>. Citizens need to submit to facial recognition ID checks in supermarkets, hotels, train stations, highway stations and several other public spaces (Chin and Bürge 2017). The information collected through the cameras is centralised and matched against other <strong>biometric data</strong> such as <strong>DNA samples</strong> and <strong>voice samples</strong>. This allows the government to attribute <strong>trustworthiness scores</strong> (trustworthy, average, untrustworthy) and thus generate a list of individuals that can become candidates for detention (Wang 2018).</p>
<p>European countries’ deployments are far from the Chinese experience. But the companies involved in China’s pervasive digital surveillance network (such as <strong>Tencent</strong>, <strong>Dahua Technology</strong>, <strong>Hikvision</strong>, <strong>SenseTime</strong>, <strong>ByteDance</strong> and <strong>Huawei</strong>) are exporting their know-how to Europe, in the form of “<strong>safe city” packages</strong>. <strong>Huawei</strong> is one of the most active in this regard. On the European continent, the city of Belgrade has for example deployed an extensive communication network of more than 1.000 cameras which collect up to 10 body and facial attributes (Stojkovski 2019). The cameras, deployed on poles, major traffic crossings and a large number of public spaces, allow the Belgrade police to monitor large parts of the city centre, collect <strong>biometric information</strong> and communicate it directly to police officers deployed in the field. Belgrade has the most advanced deployment of Huawei’s surveillance technologies on the European continent, but similar projects are being implemented by other corporations including the <strong>European companies Thales, Engie Ineo or Idemia</strong> in other European cities, and many “Safe City” deployments are planned soon in EU countries such as France, Italy, Spain, Malta, and Germany (Hillman and McCalpin 2019). Furthermore, contrary to the idea that China is the sole exporter of Remote Biometric Identification technologies, EU companies have substantially developed their exports in this domain over the last years (Wagner 2021).</p>
<p>The turning point of public debates on facial recognition in Europe was probably <strong>the Clearview AI controversy</strong> in 2019-2020. <strong>Clearview AI</strong>, a company founded by Hoan Ton-That and Richard Schwartz in the United States, maintained a relatively secret profile until a New York Times article revealed in late 2019 that it was selling <strong>facial recognition technology</strong> to law enforcement. In February 2020, it was reported that the client list of Clearview AI had been stolen, and a few days later the details of the list were leaked (Mac, Haskins, and McDonald 2020). To the surprise of many in Europe, in addition to US government agencies and corporations, it appeared that the <strong>Metropolitan Police Service</strong> <strong>(London, UK)</strong>, as well as <strong>law enforcement from Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Serbia, Slovenia, Spain, Sweden, and Switzerland were on the client list.</strong> The controversy grew larger as it emerged that Clearview AI had (semi-illegally) harvested a large number of images from social media platforms such as <strong>Facebook, YouTube</strong> and <strong>Twitter</strong> in order to constitute the datasets against which clients were invited to carry out searches (Mac, Haskins, and McDonald 2020).</p>
<p>European countries’ deployments are far from the Chinese experience. But the companies involved in China’s pervasive digital surveillance network (such as <strong>Tencent</strong>, <strong>Dahua Technology</strong>, <strong><a class="maplink" data-title="Hikvision">Hikvision</a></strong>, <strong>SenseTime</strong>, <strong>ByteDance</strong> and <strong><a class="maplink" data-title="Huawei">Huawei</a></strong>) are exporting their know-how to Europe, in the form of “<strong>safe city” packages</strong>. <strong><a class="maplink" data-title="Huawei">Huawei</a></strong> is one of the most active in this regard. On the European continent, the city of Belgrade has for example deployed an extensive communication network of more than 1.000 cameras which collect up to 10 body and facial attributes (Stojkovski 2019). The cameras, deployed on poles, major traffic crossings and a large number of public spaces, allow the Belgrade police to monitor large parts of the city centre, collect <strong>biometric information</strong> and communicate it directly to police officers deployed in the field. Belgrade has the most advanced deployment of <a class="maplink" data-title="Huawei">Huawei</a>’s surveillance technologies on the European continent, but similar projects are being implemented by other corporations including the <strong>European companies <a class="maplink" data-title="Thales">Thales</a>, <a class="maplink" data-title="Engie Ineo">Engie Ineo</a> or <a class="maplink" data-title="IDEMIA">Idemia</a></strong> in other European cities, and many “Safe City” deployments are planned soon in EU countries such as France, Italy, Spain, Malta, and Germany (Hillman and McCalpin 2019). Furthermore, contrary to the idea that China is the sole exporter of Remote Biometric Identification technologies, EU companies have substantially developed their exports in this domain over the last years (Wagner 2021).</p>
<p>The turning point of public debates on facial recognition in Europe was probably <strong>the <a class="maplink" data-title="Clearview AI">Clearview AI</a> controversy</strong> in 2019-2020. <strong><a class="maplink" data-title="Clearview AI">Clearview AI</a></strong>, a company founded by Hoan Ton-That and Richard Schwartz in the United States, maintained a relatively secret profile until a New York Times article revealed in late 2019 that it was selling <strong>facial recognition technology</strong> to law enforcement. In February 2020, it was reported that the client list of <a class="maplink" data-title="Clearview AI">Clearview AI</a> had been stolen, and a few days later the details of the list were leaked (Mac, Haskins, and McDonald 2020). To the surprise of many in Europe, in addition to US government agencies and corporations, it appeared that the <strong>Metropolitan Police Service</strong> <strong>(London, UK)</strong>, as well as <strong>law enforcement from Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Serbia, Slovenia, Spain, Sweden, and Switzerland were on the client list.</strong> The controversy grew larger as it emerged that <a class="maplink" data-title="Clearview AI">Clearview AI</a> had (semi-illegally) harvested a large number of images from social media platforms such as <strong><a class="maplink" data-title="Facebook">Facebook</a>, YouTube</strong> and <strong>Twitter</strong> in order to constitute the datasets against which clients were invited to carry out searches (Mac, Haskins, and McDonald 2020).</p>
<p>The news of the hacking strengthened a strong push-back movement against the development of facial recognition technology by companies such as Clearview AI, as well as their use by government agencies. In 2018, <strong>Massachusetts Institute of Technology</strong> (MIT) scholar and <strong>Algorithmic Justice League</strong> founder <strong>Joy Buolamwini</strong> together with <strong>Timnit Gebru</strong> had published the report <em>Gender Shades</em> (Buolamwini and Gebru 2018), in which they assessed the racial bias in the face recognition datasets and algorithms used by companies such as IBM and Microsoft. Buolamwini and Gebru found that <strong>algorithms performed generally worse on darker-skinned faces, and in particular darker-skinned females, with error rates up to 34% higher than lighter-skinned males</strong> (Najibi 2020). IBM and Microsoft responded by amending their systems, and a re-audit showed less bias. Not all companies responded equally. <strong>Amazon’s Rekognition</strong> system, which was included in the second study, continued to show an error rate of 31% for darker-skinned females. The same year the <strong>ACLU</strong> conducted another key study on Amazon’s Rekognition, using the pictures of <strong>members of Congress against a dataset of mugshots from law enforcement</strong>. 28 members of Congress, <strong>largely people of colour, were incorrectly matched</strong> (Snow 2018). Activists engaged lawmakers. In 2019, the Algorithmic Accountability Act allowed the Federal Trade Commission to regulate private companies’ uses of facial recognition. In 2020, several companies, including IBM, Microsoft, and Amazon, announced a moratorium on the development of their facial recognition technologies.
Several US cities, including <strong>Boston</strong>, <strong>Cambridge</strong> (Massachusetts), <strong>San Francisco</strong>, <strong>Berkeley</strong>, and <strong>Portland</strong> (Oregon), have also banned their police forces from using the technology.</p>
<p>The news of the hacking strengthened a strong push-back movement against the development of facial recognition technology by companies such as <a class="maplink" data-title="Clearview AI">Clearview AI</a>, as well as their use by government agencies. In 2018, <strong>Massachusetts Institute of Technology</strong> (MIT) scholar and <strong><a class="maplink" data-title="Algorithmic Justice League">Algorithmic Justice League</a></strong> founder <strong>Joy Buolamwini</strong> together with <strong>Timnit Gebru</strong> had published the report <em>Gender Shades</em> (Buolamwini and Gebru 2018), in which they assessed the racial bias in the face recognition datasets and algorithms used by companies such as <a class="maplink" data-title="IBM">IBM</a> and Microsoft. Buolamwini and Gebru found that <strong>algorithms performed generally worse on darker-skinned faces, and in particular darker-skinned females, with error rates up to 34% higher than lighter-skinned males</strong> (Najibi 2020). <a class="maplink" data-title="IBM">IBM</a> and Microsoft responded by amending their systems, and a re-audit showed less bias. Not all companies responded equally. <strong>Amazon’s Rekognition</strong> system, which was included in the second study, continued to show an error rate of 31% for darker-skinned females. The same year the <strong>ACLU</strong> conducted another key study on Amazon’s Rekognition, using the pictures of <strong>members of Congress against a dataset of mugshots from law enforcement</strong>. 28 members of Congress, <strong>largely people of colour, were incorrectly matched</strong> (Snow 2018). Activists engaged lawmakers. In 2019, the Algorithmic Accountability Act allowed the Federal Trade Commission to regulate private companies’ uses of facial recognition. In 2020, several companies, including <a class="maplink" data-title="IBM">IBM</a>, Microsoft, and Amazon, announced a moratorium on the development of their facial recognition technologies.
Several US cities, including <strong>Boston</strong>, <strong>Cambridge</strong> (Massachusetts), <strong>San Francisco</strong>, <strong>Berkeley</strong>, and <strong>Portland</strong> (Oregon), have also banned their police forces from using the technology.</p>
</section>
<section id="the-european-context" class="level2">
@@ -672,7 +672,7 @@
<p>Legislative activity accelerated in 2018. The <strong>European Commission</strong> (2018a) published a communication <em>Artificial Intelligence for Europe</em>, in which it called for a joint legal framework for the regulation of AI-related services. Later in the year, the Commission (2018b) adopted a <em>Coordinated Plan on Artificial Intelligence</em> with similar objectives. It compelled EU member states to adopt a national strategy on artificial intelligence which should meet the EU requirements. It also allocated 20 billion euros each year for investment in AI development (Andraško et al. 2021, 4).</p>
<p>In 2019, the <strong>Council of Europe Commissioner for Human Rights</strong> published a Recommendation entitled <em>Unboxing Artificial Intelligence: 10 steps to Protect Human Rights</em> which describes several steps for national authorities to maximise the potential of AI while preventing or mitigating the risk of its misuse (Gonzalez Fuster 2020, 46). The same year the <strong>European Union’s High Level Expert Group on Artificial Intelligence (AI HLEG)</strong> adopted the <em>Ethics Guidelines for Trustworthy Artificial Intelligence</em>, a key document for the EU strategy in bringing AI within ethical standards (Nesterova 2020, 3).</p>
<p>In 2019, the <strong>Council of Europe Commissioner for Human Rights</strong> published a Recommendation entitled <em>Unboxing Artificial Intelligence: 10 steps to Protect Human Rights</em> which describes several steps for national authorities to maximise the potential of AI while preventing or mitigating the risk of its misuse (Gonzalez Fuster 2020, 46). The same year the <strong><a class="maplink" data-title="European Union">European Union</a>’s High Level Expert Group on Artificial Intelligence (AI HLEG)</strong> adopted the <em>Ethics Guidelines for Trustworthy Artificial Intelligence</em>, a key document for the EU strategy in bringing AI within ethical standards (Nesterova 2020, 3).</p>
<p>In February 2020, the new <strong>European Commission</strong> went one step further in regulating matters related to AI, adopting the digital agenda package, a set of documents outlining the strategy of the EU in the digital age. Among the documents, the <em>White Paper on Artificial Intelligence: a European approach to excellence and trust</em> captured most of the commission’s intentions and plans.</p>
</section>
@@ -681,7 +681,7 @@
<p>Over the past 3-4 years, positions around the use of facial recognition and more specifically the use of remote biometric identification in public space have progressively crystallised into four camps (for a more detailed analysis of the positions, see Chapter 5).</p>
<section id="active-promotion" class="level3">
<h3>Active promotion</h3>
<p>A certain number of actors, both at the national and at the local level are pushing for the development and the extension of biometric remote identification. At the local level, figures such as Nices (France) mayor Christian Estrosi have repeatedly challenged Data Protection Authorities, arguing for the usefulness of such technologies in the face of insecurity (for a detailed analysis, see chapter 8 in this report, see also Barelli 2018). <strong>At the national level, Biometric systems for the purposes of authentication are increasingly deployed for forensic applications</strong> among law-enforcement agencies in the European Union. As we elaborate in Chapter 3, 11 out of 27 member states of the European Union are already using facial recognition against biometric databases for forensic purposes and 7 additional countries are expected to acquire such capabilities in the near future. Several states that have not yet adopted such technologies seem inclined to follow the trend, and push further. Belgian Minister of Interior Pieter De Crem for example, recently declared he was in favour of the use of facial recognition both for judicial inquiries but also for live facial recognition, a much rarer instance. Such outspoken advocates of the use of RBI constitute an important voice, but do not find an echo in the EU mainstream discussions.</p>
<p>A certain number of actors, both at the national and at the local level, are pushing for the development and extension of biometric remote identification. At the local level, figures such as Nice's (France) mayor Christian Estrosi have repeatedly challenged Data Protection Authorities, arguing for the usefulness of such technologies in the face of insecurity (for a detailed analysis, see chapter 8 in this report; see also Barelli 2018). <strong>At the national level, biometric systems for the purposes of authentication are increasingly deployed for forensic applications</strong> among law-enforcement agencies in the <a class="maplink" data-title="European Union">European Union</a>. As we elaborate in Chapter 3, 11 out of 27 member states of the <a class="maplink" data-title="European Union">European Union</a> are already using facial recognition against biometric databases for forensic purposes, and 7 additional countries are expected to acquire such capabilities in the near future. Several states that have not yet adopted such technologies seem inclined to follow the trend and push further. Belgian Minister of the Interior Pieter De Crem, for example, recently declared himself in favour of the use of facial recognition not only for judicial inquiries but also for live facial recognition, a much rarer instance. Such outspoken advocates of the use of RBI constitute an important voice but do not find an echo in the EU mainstream discussions.</p>
</section>
<section id="support-with-safeguards" class="level3">
<h3>Support with safeguards </h3>
</section>
<section id="ban" class="level3">
<h3>Ban</h3>
<p>Finally, a growing number of actors consider that there is enough information about remote biometric identification in public space to determine that it will never be able to comply with the strict requirements of the <a class="maplink" data-title="European Union">European Union</a> in terms of respect for Fundamental Rights, and as such should be banned entirely. This is the current position of the <strong>European Data Protection Supervisor (EDPS, 2021)</strong>, the <strong>Council of Europe</strong>, and a large coalition of NGOs gathered under the umbrella of the <strong>European Digital Rights organisation</strong> (EDRi 2020). In the <strong>European Parliament</strong>, the position has most vocally been defended by the European Greens, but it has also been shared by several other voices, such as members of the Party of the European Left, the Party of European Socialists or Renew Europe (Breyer et al 2021).</p>
</section>
</section>
<section id="lack-of-transparency-and-the-stifling-of-public-debate" class="level2">
<li><p>RBI technologies are subject to technical challenges and limitations which should be considered in any broader analysis of their ethical, legal, and political implications.</p></li>
</ul>
</div> <!-- key points -->
<p>In order to grasp the various facets of remote biometric identification that could potentially lead to biometric mass surveillance, this section provides an overview of the currently available technologies: how they work, what their limitations are, and where and by whom they are deployed in the <a class="maplink" data-title="European Union">European Union</a>.</p>
<section id="remote-biometric-identification-and-classification-defining-key-terms" class="level2">
<h2>Remote Biometric Identification and classification: defining key terms</h2>
<p>Although there are a growing number of technologies based on sources other than images (photographs or videos), such as voice recognition (audio), LIDAR scans or radio waves, the current market of remote biometric identification is overwhelmingly dominated by image-based products, at the centre of which is face recognition. In the following sections we thus focus primarily on image-based products.</p>
</section>
<section id="people-tracking-and-counting" class="level3">
<h3>People tracking and counting </h3>
<p>This is perhaps the form of person tracking that stores the least information about an individual. An <strong>object detection algorithm</strong> estimates the presence and position of individuals on a camera image. These positions are stored or counted and used for further metrics. It is used to count <strong>passers-by in city centres</strong>, and for a <strong>one-and-a-half-meter social distancing monitor in Amsterdam</strong><a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a>. See also the case study in this document on the <a class="maplink" data-title="Burglary-Free Neighbourhood">Burglary-Free Neighbourhood</a> in Rotterdam (CHAPTER 7), which goes into more detail about the use of the recorded trajectories of individuals to label anomalous behaviour.</p>
</section>
<section id="emotion-recognition." class="level3">
<h3>Emotion recognition</h3>
</section>
<section id="age-gender-and-ethnicity-classification" class="level3">
<h3>Age, gender, and ethnicity classification </h3>
<p>Aside from deducing emotions, the face is used to deduce a variety of traits from individuals. For example, <strong>gender</strong>, <strong>ethnicity</strong>, and <strong>age estimations</strong> are available in many off-the-shelf facial analysis products. As with <strong>emotion recognition</strong>, these classifications are mainly used in digital signage and video advertisement contexts. LGBTQ+ communities have spoken out against automatic gender classification, pointing out that a long-fought, non-binary understanding of gender is undone by the technology's binary classifications (Vincent, 2021). Similarly, recent revelations that <a class="maplink" data-title="Hikvision">Hikvision</a> (China) has used similar technology to estimate whether an individual is from <strong>China's Uyghur minority</strong> have directly led the <strong>European Parliament</strong> to call for a ban of <strong><a class="maplink" data-title="Hikvision">Hikvision</a>'s</strong> products on the Parliament's premises (Rollet, 2021).</p>
</section>
<section id="audio-recognition" class="level3">
<h3>Audio recognition </h3>
<p>From a technological perspective, neural networks process audio relatively similarly to how video is processed: rather than feeding an image, a spectrogram is used as input for the network. However, under the GDPR, recording conversations is illegal in the <a class="maplink" data-title="European Union">European Union</a> without the informed consent of the participants. In order to adhere to these regulations, on some occasions only particular frequencies are recorded and processed. For example, in the <a class="maplink" data-title="Burglary-Free Neighbourhood">Burglary-Free Neighbourhood</a> in Rotterdam (CHAPTER 7), only two frequencies are used to classify audio, making conversations indiscernible while still allowing shouting or the breaking of glass to be discerned<a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a>.</p>
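<p>The kind of frequency-restricted processing described above can be illustrated with a minimal, hypothetical sketch (this is not the actual Rotterdam system): only the spectral energy inside two narrow frequency bands is retained, so that loud events can be flagged while speech content is discarded. The sample rate, band positions and threshold are illustrative assumptions.</p>

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; assumed for illustration
BANDS = [(900, 1100), (3900, 4100)]  # two narrow monitored bands, in Hz

def band_energies(frame: np.ndarray) -> list[float]:
    """Return the spectral energy inside each monitored band only."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return [float(spectrum[(freqs >= lo) & (freqs <= hi)].sum())
            for lo, hi in BANDS]

def is_loud_event(frame: np.ndarray, threshold: float = 1e4) -> bool:
    """Flag a frame (e.g., shouting, breaking glass) if either band spikes."""
    return any(e > threshold for e in band_energies(frame))

# A loud 1 kHz tone trips the detector; quiet background noise does not,
# and nothing outside the two bands is ever stored.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
loud = np.sin(2 * np.pi * 1000 * t)
quiet = 0.001 * np.random.default_rng(0).standard_normal(SAMPLE_RATE)
```

<p>Because everything outside the monitored bands is discarded before storage, a conversation cannot be reconstructed from the retained values.</p>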
</section>
</section>
<section id="how-does-image-based-remote-biometric-identification-work" class="level2">
<section id="machine-learning-and-operational-datasets" class="level3">
<h3><strong>Machine learning</strong> and operational datasets</h3>
<p>Remote biometric identification and classification relies in large part on datasets, for two key but distinct moments of their operation.</p>
<p><strong>Machine learning datasets.</strong> These are the datasets used to train models through <strong>machine learning.</strong> We find three categories of such datasets. <strong>Publicly available datasets</strong> for object detection such as COCO, ImageNet and Pascal VOC include a varying number of images labelled in a range of categories; these can be used to train algorithms to detect, for example, people on an image (IPVM Team 2021a, 27). The most used open-source datasets for surveillance technologies are Celeb 500k, MS-Celeb-1Million-Cleaned, Labeled Faces in the Wild, VGG Face 2, DeepGlint Asian Celeb, IMDB-Face, IMDB-Wiki, CelebA, Diveface, Flickr faces and the IARPA Janus Benchmark (IPVM Team 2021b, 7). Many of these datasets also function as a public benchmark, against which the performance and accuracy of various algorithms is measured. For example, Labeled Faces in the Wild, the COCO dataset and <a class="maplink" data-title="NIST">NIST</a> present such leaderboards on their websites<a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a>. <strong>Government datasets</strong> are generally collections of images available to a government for other purposes (driver's license, passport, or criminal record photo datasets). While in Europe most of these datasets are not accessible to the public, in China and in the US they are made available for testing and training purposes to private companies, such as the Multiple Encounter Dataset (NIST, 2010). Finally, <strong>proprietary datasets</strong> may be developed by providers for their specific applications.</p>
<p><strong>Machine learning models.</strong> In the machine learning process, an algorithm gets iteratively configured for the optimal output, based on the particular dataset that it is fed with. This can be a neural network, but also e.g., the aforementioned Viola-Jones object detector algorithm. The <strong>model</strong> is the final configuration of this learning process. As such, it does not contain the images of the dataset in and of themselves. Rather, it represents the abstractions the algorithm “learned” over time. In other words, the model operationalises the machine learning dataset. For example, the YOLO object detection algorithm yields different results when it is trained on either the COCO or the Pascal VOC dataset. It is the model (in conjunction with the algorithm) which determines the translation of an image into a category, or of the image of a face into its embedding.</p>
<p><strong>Operational datasets, or image databases.</strong> Datasets used in training machine learning models should be distinguished from matching or operational datasets which are the “watchlists” of for example criminals, persons of interest or other lists of individuals against which facial recognition searches will be performed whether these are in real time or post hoc. These datasets contain pre-processed images of individuals on the watchlist, and store the numerical representations of these faces, their feature vectors or <em>embedding</em>, in an index for fast retrieval and comparison with the queried features (using for example k-Nearest Neighbour or Support Vector Machines). Face or object detection models do not use such a dataset.</p>
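<p>The matching step against such an operational dataset can be sketched as follows. This is a hypothetical, minimal illustration: enrolled identities are stored only as embeddings (random 128-dimensional vectors stand in here for the output of a face recognition model), and a query embedding is compared against the index by cosine similarity, with an assumed decision threshold of 0.7.</p>

```python
import numpy as np

rng = np.random.default_rng(42)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Operational dataset ("watchlist"): only embeddings are stored, no raw
# images. The identities and the 128-dimensional size are illustrative.
watchlist = {name: normalize(rng.standard_normal(128))
             for name in ["person_a", "person_b", "person_c"]}

def match(query: np.ndarray, threshold: float = 0.7):
    """Return the best-matching enrolled identity, or None below threshold."""
    query = normalize(query)
    best = max(watchlist, key=lambda n: float(watchlist[n] @ query))
    score = float(watchlist[best] @ query)
    return best if score >= threshold else None

# A slightly perturbed copy of an enrolled embedding still matches;
# an unrelated embedding falls below the threshold.
probe = watchlist["person_b"] + 0.05 * rng.standard_normal(128)
```

<p>When watchlists grow large, the linear scan above is typically replaced by an index structure for fast retrieval, such as the k-Nearest Neighbour search mentioned in the text.</p>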
</section>
<section id="availability" class="level3">
<h3>Availability</h3>
<p>Facial recognition algorithms can be developed in-house, taken from an open-source repository, or purchased (IPVM Team 2021b, 14). Popular <strong>open-source facial recognition</strong> implementations include OpenCV, Face_pytorch, OpenFace and Insightface. Many of these software libraries are developed at universities or implement algorithms and neural network architectures presented in academic papers. They are free and allow for a great degree of customisation, but require substantial programming skills to be implemented in a surveillance system. Moreover, when using such software, the algorithms run on one's own hardware, which provides the developer with more control but also requires more maintenance.</p>
<p><strong>Proprietary facial recognition.</strong> There are three possible routes for the use of proprietary systems. There are <strong>“turnkey”</strong> systems sold by manufacturers such as <strong><a class="maplink" data-title="Hikvision">Hikvision</a></strong>, <strong>Dahua</strong>, <strong>Anyvision</strong> or <strong>Briefcam</strong>; these integrate the software and hardware, and as such can be directly deployed by the client. <strong>Algorithm developers</strong> such as <strong>Amazon AWS Rekognition</strong> (USA), <strong><a class="maplink" data-title="NEC">NEC</a></strong> (Japan), <strong>NTechlab</strong> (Russia) and <strong><a class="maplink" data-title="Paravision">Paravision</a></strong> (USA) allow clients to implement their algorithms and customise them to their needs. Finally, there are <strong>“cloud” API systems</strong>, a subset of the former category, where the algorithm is hosted in a datacentre and accessed remotely (IPVM Team 2021b, 16). The latter type of technology bears important legal ramifications, as the data may travel outside of national or European jurisdictions. It should be noted that many of the proprietary products are based on similar algorithms and network architectures as their open-source counterparts (OpenCV, 2021). Contrary to the open-source software, it is generally unclear which datasets of images have been used to train the proprietary algorithms.</p>
</section>
</section>
<section id="technical-limits-problems-and-challenges-of-facial-recognition" class="level2">
<p>More problematically, a <strong>lack of diversity</strong>, in particular when it comes to ethnicity, age, or gender, leads to bias in the algorithm. This issue has been at the core of the US-based discussion on the banning of facial recognition. Public databases such as VGGFace2 (based on faces from Google images) and MS-Celeb-1M42 (celebrity faces) are often used to train facial recognition algorithms, yet are far from representative of everyday populations; this is called <strong>representation bias</strong> (Fernandez et al. 2020, 30). The main goal of the project <em>Gender Shades</em>, led by <strong>Joy Buolamwini</strong>, was both to show the lack of representativity of existing datasets and to address the problem of the consequent discrepancy between the error rates for light-skinned men and dark-skinned women (Fernandez et al. 2020, 30–31).</p>
<p>However, a representational dataset is not always a desirable dataset, because actual <strong>structural biases</strong> often do not match the values of society. Illustrative of this is that, when doing a Google image search for the term “CEO” it would originally return primarily photographs of white male people. While this was representative of the CEO population (and thus accurate), the results reinforce the vision of a world that does not align with progressive societal values (Suresh, 2019). Because of the gap between ideals of equality and actual societal structural inequalities, datasets <strong>can be either representative of an unequal society, or representative of desired equality but never of both at the same time.</strong></p>
<p>Datasets upon which the computer algorithm will later be able to distinguish particular entities or behaviour are built through vast amounts of <strong>human labour</strong>. For example, the work that has gone into the image dataset ImageNet is equivalent to <strong>19 years of working 24 hours a day, 7 days a week</strong> (Malevé, 2020). Nevertheless, quantity does not necessarily equal quality. Many of the categories with which images are annotated are ambiguous. Not in their dictionary definition per se, but when they enter the culture of the annotation workers. For example, the category of “ratatouille” contains images of various stews, salads and even a character of the eponymous Pixar movie. Similarly, the category “Parisian” contains images of Paris Hilton (Malevé, 2020). This ambiguity of categories does not only haunt ImageNet. The aforementioned COCO dataset contains images of a birdhouse in the shape of a bird, which is tagged as bird, or a bare pizza bottom which is tagged as pizza (Cochior and van de Ven, 2020). <strong>These examples show that even seemingly unambiguous concepts become fluid the moment they have to become strictly delineated in a dataset.</strong></p>
<p>Another important issue with ethical and political repercussions is <strong>unethically collected data</strong>, as in the case of <a class="maplink" data-title="Clearview AI">Clearview AI</a> detailed above. When it comes to <strong>operational datasets</strong>, i.e., datasets used in the actual process of facial authentication and/or identification, we have seen that possible deployments include the use of cloud-based services (either for the processing or the storage of the sensitive information). This increases the risks of data breaches and attacks by hackers. (Fernandez et al. 2020, 34)</p>
</section>
<section id="algorithm-related-challenges" class="level3">
<h3>Algorithm-related challenges</h3>
<p>Finally, there are issues related to the quality and performance of the algorithms and how to measure them. The National Institute of Standards and Technology is an agency of the US Department of Commerce. <a class="maplink" data-title="NIST">NIST</a> provides vendors with the possibility to test the efficacy of their algorithms on a standardised dataset, the “Ongoing Face Recognition Vendor Test” (FRVT).</p>
<p>As an IPVM study shows, brands often use single-number scores obtained from <a class="maplink" data-title="NIST">NIST</a> vendor tests (e.g., “our algorithm showed 98.6% accuracy”) (IPVM Team 2021b, 17). These scores are, however, obtained in very controlled conditions that <strong>do not match the real-world use</strong> of the algorithms. There are thus important discrepancies in this regard. Moreover, the accuracy score is not always representative of the desirable behaviour of a model. Data scientists therefore distinguish <strong>precision</strong> and <strong>recall</strong> to better account for cases where positive classification is rare yet of high impact, for example when classifying individuals as high risk (Shung 2020, 202). These distinctions are often lost in the commercial language and in the public debate.</p>
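<p>The difference between accuracy, precision and recall can be made concrete with a small worked example (the numbers are invented for illustration): a “high-risk” classifier that flags nobody is 99% accurate on a population where only 1% are true positives, yet its recall is zero.</p>

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision: share of flagged people who are true positives.
    Recall: share of true positives who are actually flagged."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 1000 people, 10 of whom are genuinely high risk (illustrative numbers).
# Classifier A flags nobody: 990 correct negatives out of 1000.
accuracy_a = 990 / 1000                               # 99% "accurate"...
prec_a, rec_a = precision_recall(tp=0, fp=0, fn=10)   # ...but recall is 0

# Classifier B flags 20 people, 8 of them correctly.
prec_b, rec_b = precision_recall(tp=8, fp=12, fn=2)   # precision 0.4, recall 0.8
```

<p>A single accuracy figure hides exactly the trade-off that matters in high-impact settings: how many true positives are missed, and how many innocent people are flagged.</p>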
<p>A final issue related to working with the existing algorithms is what is known as <strong>observer bias or confirmation bias</strong>. The output of an algorithm reinforces the (subconscious) biases that went into producing it. It can occur both when creating the dataset or when training and running the algorithms. For example, the software used for predictive policing in Chicago helped determine where to send police officers on patrol. “Because these predictions are likely to overrepresent areas that were already known to police, officers become increasingly likely to patrol these same areas and observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity. The newly observed criminal acts that police document as a result of these targeted patrols then feed into the predictive policing algorithm on subsequent days, generating increasingly biased predictions. This creates a feedback loop where the model becomes increasingly confident that the locations most likely to experience further criminal activity are exactly the locations, they had previously believed to be high in crime.” (Lum and Isaac, 2016). The example reveals that the different kinds of biases at play are hard to untangle, as the observer bias coincides with a historical bias of over-policing. It requires a lot of work to recognise such confirmation biases in the automated operation of automated classification software. The “black box” dimension of their operation and the only just emerging efforts to build explanatory AI make it difficult to understand their categorisation process (Xie et al. 2020; Fernandez et al. 2020, 34)</p>
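<p>The feedback loop described by Lum and Isaac can be sketched in a toy simulation (all numbers are illustrative assumptions): two areas have identical underlying crime rates, but one starts with more recorded incidents; patrols are allocated in proportion to past records, and new observations follow the patrols, so the initial recording bias is preserved rather than corrected.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.5, 0.5])    # identical underlying crime rates
recorded = np.array([30.0, 10.0])   # historical bias: area 0 over-recorded

for day in range(50):
    patrols = 10 * recorded / recorded.sum()     # allocate by past records
    observed = rng.poisson(patrols * true_rate)  # observations track patrols
    recorded += observed                         # feed back into the model

share_0 = recorded[0] / recorded.sum()
```

<p>In expectation, area 0's share of recorded incidents stays near its initial 75% instead of converging to the true 50/50 split: the model keeps “confirming” the historical bias it was seeded with.</p>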
</section>
</section>
<p>A broad range of deployments, which we consider in this first section, is not aimed at surveillance, but at authentication (see section 2.3 in this report), namely making sure that the person in front of the security camera is who they say they are.</p>
<section id="live-authentication" class="level3">
<h3>Live authentication</h3>
<p>As in the cases of the use of FRT powered by Cisco systems in two pilot projects in <strong>high schools of Nice</strong> (see section 8.1) <strong>and Marseille (France)</strong><a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a>, or as in the case of the <strong>Anderstorp Upper Secondary School in Skelleftea (Sweden)</strong><a href="#fn8" class="footnote-ref" id="fnref8" role="doc-noteref"><sup>8</sup></a>, the aim of these projects was to identify students who could have access to the premises. School-wide biometric databases were generated and populated with students' portraits. Gates were fitted with cameras connected to facial recognition technology and allowed access only to recognised students. Another documented use has been for the <strong><a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App (Hungary)</a></strong>, in which telephone cameras are used by authorities to verify the identity of the persons logged into the app (see also section 10.1).</p>
<p>In these deployments, people must submit themselves to the camera in order to be identified and gain access. While these techniques of identification pose <strong>important threats to the privacy of the concerned small groups of users</strong> (in both high school cases, DPAs banned the use of FRTs), and run the risk of false positives (unauthorised people recognised as authorised) or false negatives (authorised people not recognised as such) <strong>the risk of biometric mass surveillance strictly speaking is low to non-existent because of the nature of the acquisition of images and other sensor-based data.</strong></p>
<p>However, other forms of live authentication tie in with surveillance practices, in particular various forms of <strong>blacklisting</strong>. With blacklisting, the face of every passer-by is compared to a list of faces of individuals who have been denied access to the premises. In such an instance, people do not have to be identified, as long as an image of their face is provided. This has been used in public places, for example in the case of the Korte Putstraat in the Dutch city of 's-Hertogenbosch: during the carnival festivities of 2019, two people were denied access to the street after they were singled out by the system (Gotink, 2019). It is unclear how many false positives were generated during this period. Other cases of blacklisting can be found at, for example, access control at various football stadiums in Europe; see also section 3.3. In many cases of blacklisting, individuals do not enrol voluntarily.</p>
</section>
<section id="forensic-authentication" class="level3">
<h3>Forensic authentication</h3>
<p>Biometric systems for the purposes of authentication are also increasingly deployed for <strong>forensic applications</strong> among law-enforcement agencies in the European Union. The typical scenario for the use of such technologies is to match the photograph of a suspect (extracted, for example, from previous records or from CCTV footage) against an existing dataset of known individuals (e.g., a national biometric database, a drivers license database, etc.). (TELEFI, 2021). The development of these forensic authentication capabilities is particularly relevant to this study, because it entails making large databases ready for searches on the basis of biometric information.</p>
<p>To date, <strong>11 out of 27 member states of the <a class="maplink" data-title="European Union">European Union</a></strong> are using facial recognition against biometric databases for forensic purposes: <strong>Austria</strong> (EDE)<a href="#fn9" class="footnote-ref" id="fnref9" role="doc-noteref"><sup>9</sup></a>, <strong>Finland</strong> (KASTU)<a href="#fn10" class="footnote-ref" id="fnref10" role="doc-noteref"><sup>10</sup></a>, <strong>France</strong> (TAJ)<a href="#fn11" class="footnote-ref" id="fnref11" role="doc-noteref"><sup>11</sup></a>, <strong>Germany</strong> (INPOL)<a href="#fn12" class="footnote-ref" id="fnref12" role="doc-noteref"><sup>12</sup></a>, <strong>Greece</strong> (Mugshot Database)<a href="#fn13" class="footnote-ref" id="fnref13" role="doc-noteref"><sup>13</sup></a>, <strong>Hungary</strong> (Facial Image Registry)<a href="#fn14" class="footnote-ref" id="fnref14" role="doc-noteref"><sup>14</sup></a>, <strong>Italy</strong> (AFIS)<a href="#fn15" class="footnote-ref" id="fnref15" role="doc-noteref"><sup>15</sup></a>, <strong>Latvia</strong> (BDAS)<a href="#fn16" class="footnote-ref" id="fnref16" role="doc-noteref"><sup>16</sup></a>, <strong>Lithuania</strong> (HDR)<a href="#fn17" class="footnote-ref" id="fnref17" role="doc-noteref"><sup>17</sup></a>, <strong>Netherlands</strong> (<a class="maplink" data-title="CATCH">CATCH</a>)<a href="#fn18" class="footnote-ref" id="fnref18" role="doc-noteref"><sup>18</sup></a> and <strong>Slovenia</strong> (Record of Photographed Persons)<a href="#fn19" class="footnote-ref" id="fnref19" role="doc-noteref"><sup>19</sup></a> (TELEFI 2021).</p>
<p><strong>Seven additional countries</strong> are expected to acquire such capabilities in the near future: <strong>Croatia</strong> (ABIS)<a href="#fn20" class="footnote-ref" id="fnref20" role="doc-noteref"><sup>20</sup></a>, <strong>Czech Republic</strong> (CBIS)<a href="#fn21" class="footnote-ref" id="fnref21" role="doc-noteref"><sup>21</sup></a>, <strong>Portugal</strong> (AFIS), <strong>Romania</strong> (NBIS)<a href="#fn22" class="footnote-ref" id="fnref22" role="doc-noteref"><sup>22</sup></a>, <strong>Spain</strong> (ABIS), <strong>Sweden</strong> (National Mugshot Database), <strong>Cyprus</strong> (ISIS Faces)<a href="#fn23" class="footnote-ref" id="fnref23" role="doc-noteref"><sup>23</sup></a> and <strong>Estonia</strong> (ABIS) (TELEFI 2021).</p>
<p>When it comes to international institutions, <strong><a class="maplink" data-title="Interpol">Interpol</a></strong> (2020) has a facial recognition system (<a class="maplink" data-title="IFRS (Interpol)">IFRS</a>)<a href="#fn24" class="footnote-ref" id="fnref24" role="doc-noteref"><sup>24</sup></a>, based on facial images received from more than 160 countries. <strong><a class="maplink" data-title="Europol">Europol</a></strong> has two sub-units which use the facial recognition search tool and database known as FACE: the European Counter Terrorism Center (ECTC) and the European Cybercrime Center (ECC) (TELEFI 2021, 149-153; Europol 2020).</p>
<p><strong>Only 9 countries in the EU so far have rejected or do not plan to implement</strong> FRT for forensic purposes: <strong>Belgium</strong> (see CHAPTER 6), <strong>Bulgaria</strong>, <strong>Denmark</strong>, <strong>Ireland</strong>, <strong>Luxembourg</strong>, <strong>Malta</strong>, <strong>Poland</strong>, <strong>Portugal</strong>, <strong>Slovakia</strong>.</p>
<p><img src="images/media/image1.png" style="width:4.62502in;height:3.28283in" alt="Map Description automatically generated" /></p>
<p>Figure 1. EU Countries use of FRT for forensic applications<a href="#fn25" class="footnote-ref" id="fnref25" role="doc-noteref"><sup>25</sup></a></p>
<p><strong>When it comes to databases</strong>, some countries limit the searches to <strong>criminal databases</strong> (Austria, Germany, France, Italy, Greece, Slovenia, Lithuania, UK), while other countries open the searches to <strong>civil databases</strong> (Finland, Netherlands, Latvia, Hungary).</p>
<p>This means that the <strong>person categories can vary substantially.</strong> In the case of criminal databases it can range from suspects and convicts, to asylum seekers, aliens, unidentified persons, immigrants, visa applicants. When <strong>civil databases</strong> are used as well, such as in Hungary, the database contains a broad range of “individuals of known identity from various document/civil proceedings” (TELEFI 2021, appendix 3).</p>
<p><strong>Finally, the database sizes</strong>, in comparison to the authentication databases mentioned in the previous section, are of a different magnitude. The databases of school students in France and Sweden, mentioned in the previous section, contain a few hundred entries each. National databases can instead contain several million. Criminal databases such as Germany’s INPOL contain <strong>6.2 million individuals</strong>, France’s TAJ <strong>21 million individuals</strong> and Italy’s AFIS <strong>9 million individuals.</strong> Civil databases, such as Hungary’s Facial Image Registry, contain <strong>30 million templates</strong> (TELEFI, 2021 appendix 3).</p>
<p>Authentication has also been deployed as part of integrated “safe city” solutions, such as the <strong><a class="maplink" data-title="NEC">NEC</a> Technology <a class="maplink" data-title="NEC Technology in Lisbon">Bio-IDiom system in Lisbon</a> and London,</strong> deployed for forensic investigation purposes. For this specific product, authentication can occur via facial recognition, as well as other biometric authentication techniques such as <strong>ear acoustics, iris, voice, fingerprint, and finger vein recognition</strong>. We currently do not have public information on the use of <a class="maplink" data-title="NEC Technology in Lisbon">Bio-IDiom in Lisbon</a> or in London. On <a class="maplink" data-title="NEC">NEC</a>’s website (2021), however, Bio-IDiom is advertised as a “multimodal” identification system that has been used, for example, by the Los Angeles County Sheriff’s Department (LASD) for criminal investigations. The system “combines multiple biometric technologies including fingerprint, palm print, face, and iris recognition” and works “based on the few clues left behind at crime scenes”. In Los Angeles, “this system is also connected to the databases of federal and state law enforcement agencies such as the California Department of Justice and FBI, making it the world’s largest-scale service-based biometrics system for criminal investigation”. We do not know whether that is the case in the Portuguese and UK deployments.</p>
</section>
<section id="case-study-inpol-germany" class="level3">
<h3>Case study: INPOL (Germany)</h3>
<section id="smart-surveillance-features" class="level3">
<h3>Smart surveillance features</h3>
<p>A first range of deployments of <strong>“smart” systems</strong> correspond to what can broadly be defined as “smart surveillance” yet <strong>do not collect or process biometric information per se</strong><a href="#fn26" class="footnote-ref" id="fnref26" role="doc-noteref"><sup>26</sup></a>. Smart systems can be used <strong>ex-post</strong>, <strong>to assist CCTV camera operators</strong> in processing large amounts of <strong>recorded information</strong>, or can guide their attention when they have to monitor a large number of <strong>live video feeds</strong> simultaneously. Smart surveillance uses the following features:</p>
<p><strong>- Anomaly detection. In Toulouse (France), the City Council commissioned <a class="maplink" data-title="IBM">IBM</a> to connect 30 video surveillance cameras to software able to "assist human decisions" by raising alerts when "abnormal events are detected." (Technopolice 2021) The request was justified by the “difficulties of processing the images generated daily by the 350 cameras and kept for 30 days (more than 10,000 images per second)”. The objective, according to the digital direction is "to optimise and structure the supervision of video surveillance operators by generating alerts through a system of intelligent analysis that facilitates the identification of anomalies detected, whether: movements of crowds, isolated luggage, crossing virtual barriers north of the Garonne, precipitous movement, research of shapes and colour. All these detections are done in real time or delayed (Technopolice 2021). In other words, the anomaly detection is a way to <em>operationalise</em> the numerical output of various computer vision based recognition systems. Similar systems are used</strong> in the <strong>Smart video surveillance deployment in Valenciennes (France)</strong> or in the <strong>Urban Surveillance Centre (Marseille).</strong></p>
<p><strong>- Object Detection.</strong> In Amsterdam, around the <strong><a class="maplink" data-title="Johan Cruijff ArenA">Johan Cruijff ArenA</a></strong> (Stadium), the city has been experimenting with a <strong><a class="maplink" data-title="Digitale Perimeter">Digitale Perimeter</a></strong> (digital perimeter) surveillance system. In addition to the usual features of facial recognition and crowd monitoring, the system includes the possibility of automatically detecting specific objects such as <strong>weapons, fireworks</strong> or <strong>drones</strong>. Similar features are found in <strong><a class="maplink" data-title="Inwebit">Inwebit</a>’s Smart Security Platform (SSP) in Poland.</strong></p>
<p><strong>- Feature search. In Marbella (Spain), <a class="maplink" data-title="Avigilon">Avigilon</a> deployed a smart camera system aimed at providing “smart” functionalities without biometric data. Since regional law bans facial and biometric identification without consent, the software uses “appearance search”. “Appearance search” provides estimates for “unique facial traits, the colour of a persons clothes, age, shape, gender and hair colour”. This information is not considered biometric. The individuals features can be used to search for suspects fitting a particular profile. Similar technology has been deployed in Kortrijk (Belgium), which provides search parameters for people, vehicles and animals (</strong>Verbeke 2019)<strong>.</strong></p>
<p>- <strong>Video summary.</strong> Some companies, such as <strong>Briefcam</strong> and their product <strong>Briefcam Review</strong>, offer a related product, which promises to shorten the analysis of long hours of CCTV footage, by identifying specific topics of interest (children, women, lighting changes) and making the footage searchable. The product combines face recognition, license plate recognition, and more mundane video analysis features such as the possibility to overlay selected scenes, thus highlighting recurrent points of activity in the image. Briefcam is deployed in several cities across Europe, including Vannes, Roubaix (in partnership with <strong><a class="maplink" data-title="Eiffage">Eiffage</a></strong>) and Moirand in France.</p>
<p><strong>- Object detection and object tracking. As outlined in chapter 2, object detection is often the first step in the various digital detection applications for images. An object here can mean anything the computer is conditioned to search for: a suitcase, a vehicle, but also a person; while some products further process the detected object to estimate particular features, such as the colour of a vehicle, the age of a person. However, on some occasions — often to address concerns over privacy — only the position of the object on the image is stored. This is for example the case with the</strong> test of the <strong>One-and-a-half-meter monitor in Amsterdam (Netherlands), <a class="maplink" data-title="Intemo">Intemo</a>s people counting system in Nijmegen (Netherlands),</strong> the <strong>KICK project</strong> in <strong>Brugge</strong>, <strong>Kortrijk</strong>, <strong>Ieper</strong>, <strong>Roeselare</strong> and <strong>Oostende</strong> in Belgium or the <strong>Eco-counter</strong> <strong>tracking cameras pilot project</strong> in <strong>Lannion</strong> (France).</p>
<p><strong>- Movement recognition. <a class="maplink" data-title="Avigilon">Avigilon</a>’s software that is deployed in Marbella (Spain) also detects unusual movement. “To avoid graffiti, we can calculate the time someone takes to pass a shop window,” explained Javier Martín, local chief of police in Marbella, to the Spanish newspaper El País. “If it takes them more than 10 seconds, the camera is activated to see if they are graffitiing. So far, it hasn’t been activated.” (Colomé 2019) Similar movement recognition technology is used in the ViSense deployment at the Olympic Park London (UK) and the security camera system in Mechelen-Willebroek (Belgium). It should be noted that movement</strong> recognition can be done in two ways: where projects such as the <strong><a class="maplink" data-title="Data-lab Burglary-free Neighbourhood">Data-lab Burglary-free Neighbourhood</a> in Rotterdam (Netherlands)</strong><a href="#fn27" class="footnote-ref" id="fnref27" role="doc-noteref"><sup>27</sup></a> are only based on the tracking of trajectories of people through an image (see also Object detection), cases such as <strong>the <a class="maplink" data-title="Living Lab Stratumseind">Living Lab Stratumseind</a></strong><a href="#fn28" class="footnote-ref" id="fnref28" role="doc-noteref"><sup>28</sup></a> <strong>in Eindhoven (Netherlands)</strong> also process the movements and gestures of individuals in order to estimate their behaviour.</p>
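<p>The privacy-motivated variant above, which stores only the positions of detected objects, can be illustrated with a toy “virtual barrier” check over a tracked trajectory; the coordinates and line position below are invented:</p>

```python
def crosses_line(trajectory, line_y):
    """Report at which step a tracked object's centroid crosses a
    horizontal 'virtual barrier' placed at line_y. Only image
    positions are processed, no biometric data."""
    for t in range(1, len(trajectory)):
        (x0, y0), (x1, y1) = trajectory[t - 1], trajectory[t]
        if (y0 - line_y) * (y1 - line_y) < 0:   # sign change => crossing
            return t
    return None

# Toy trajectory of image coordinates from an object tracker (hypothetical).
track = [(10, 40), (12, 35), (15, 28), (18, 22)]
print(crosses_line(track, line_y=30))  # the crossing happens at step 2
```

The same position-only data supports people counting and trajectory analysis; it is the further processing of gestures and body features that moves a system toward biometric territory.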
<section id="audio-recognition-1" class="level4">
<h4>Audio recognition</h4>
<p>- In addition to image (video) based products, some deployments use audio recognition to complement the decision-making process, for example in the <strong><a class="maplink" data-title="Serenecity">Serenecity</a> (a branch of Verney-Carron) Project in Saint-Etienne (France)</strong>, the <strong>Smart CCTV deployment in public transportation in Rouen (France)</strong> or the <strong>Smart CCTV system in Strasbourg (France)</strong>. The project piloted in Saint-Etienne, for example, worked by placing “audio capture devices” (the term microphone was avoided) in strategic parts of the city. Sounds qualified by an anomaly detection algorithm as suspicious would then alert operators in the Urban Supervision Center, prompting further investigation via CCTV or deployment of the necessary services (healthcare or police, for example) (France 3 Auvergne-Rhône-Alpes 2019).</p>
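<p>As a rough illustration of how such audio flagging might work, the sketch below thresholds frame energy; the signal values are invented, and deployed systems use trained sound classifiers (gunshots, glass breaking) rather than raw energy:</p>

```python
import math

def flag_loud_events(samples, frame_size=4, threshold=0.5):
    """Naive audio anomaly detector: split the signal into frames and
    flag frames whose RMS energy exceeds a threshold."""
    flags = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        if rms > threshold:
            flags.append(i // frame_size)
    return flags

signal = [0.1, 0.0, -0.1, 0.1,    # quiet frame 0
          0.9, -0.8, 0.7, -0.9,   # loud frame 1 (e.g. a bang)
          0.1, -0.1, 0.0, 0.1]    # quiet frame 2
print(flag_loud_events(signal))   # flags frame 1
```

In the deployments described, a flag of this kind would not trigger an automatic response but draw an operator's attention to the corresponding CCTV feed.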
</section>
<section id="emotion-recognition" class="level4">
<h4>Emotion recognition</h4>
<p>- <strong>Emotion recognition</strong> is a rare occurrence. We found evidence of its deployment only in a <strong>pilot project in Nice (see section 8.1)</strong> and in the <strong><a class="maplink" data-title="Citybeacon">Citybeacon</a> project in Eindhoven, but even then, the project was never actually tested. The original idea proposed by the company <a class="maplink" data-title="Two-I">Two-I</a> was “a "real-time emotional mapping" capable of highlighting "potentially problematic or even dangerous situations". "A dynamic deployment of security guards in an area where tension and stress are felt, is often a simple way to avoid any overflow," also argues <a class="maplink" data-title="Two-I">Two-I</a>, whose "Security" software would be able to decipher some 10,000 faces per second. (Binacchi 2019)</strong></p>
</section>
<section id="gait-recognition-1" class="level4">
<h4>Gait recognition</h4>
<h3>Integrated solutions </h3>
<section id="smart-cities" class="level4">
<h4>Smart cities</h4>
<p>While some cities or companies decide to implement some of the functionalities with their existing or updated CCTV systems, several chose to centralise several of these “smart” functions in <strong>integrated systems</strong> often referred to as “safe city” solutions. These solutions do not necessarily process biometric information. This is the case for example for the deployments in <strong>TIMs</strong>, <strong>Insula</strong> and <strong>Venis</strong> <strong>Safe City Platform in Venice (Italy)</strong>, <strong><a class="maplink" data-title="Huawei">Huawei</a>s</strong> <strong>Safe City in Valenciennes (France)</strong>, <strong>Dahuas integrated solution in Brienon-sur-Armançon</strong> <strong>(France)</strong>, <strong>Thalès Safe City in La Défense and Nice (France)</strong>, <strong>Engie Inéos and SNEFs integrated solution in Marseille (France)</strong>, the <strong>Center of Urban Supervision in Roubaix (France)</strong>, <strong>AI Mars (Madrid, in development)</strong> or <strong>NECs platform in <a class="maplink" data-title="NEC Technology in Lisbon">Lisbon</a> and London</strong>.</p>
<p>The way “Smart/Safe City” solutions work is well exemplified by the “Control room” deployed in Venice, connected to an urban surveillance network. The system is composed of a central command and control room which aggregates cloud computing systems, together with smart cameras, artificial intelligence systems, antennas and hundreds of sensors distributed on a widespread network. The idea is to monitor what happens in the lagoon city in real time. The scope of the abilities of the centre is wide-ranging. It promises to:</p>
<ul>
<li><p>manage events and incoming tourist flows, something particularly relevant to a city which aims to implement a visiting fee for tourists;</p></li>
<li><p>predict and manage weather events in advance, such as the shifting of tides and high water, by defining alternative routes for transit in the city;</p></li>
<li><p>indicate to the population in real time the routes to avoid traffic and better manage mobility for time optimisation;</p></li>
<li><p>improve the management of public safety, allowing city agents to intervene in a more timely manner;</p></li>
<li><p>control and manage water and road traffic, also for sanctioning purposes, through specific video-analysis systems;</p></li>
<li><p>control the status of parking lots;</p></li>
<li><p>monitor the environmental and territorial situation;</p></li>
<li><p>collect and process data and information that allow for the creation of forecasting models and the allocation of resources more efficiently and effectively;</p></li>
<li><p>bring to life a physical "Smart Control Room" where law enforcement officers train and learn how to read data as well. (LUMI 2020)</p></li>
</ul>
</section>
<section id="smartphone-apps" class="level4">
<p>- <strong>Live Facial Recognition pilot project in Brussels International Airport / Zaventem</strong> (Belgium, see detailed case study, CHAPTER 6)</p>
<p><strong>- Live Facial Recognition in Budapest</strong> (Hungary, see detailed case study, CHAPTER 10)</p>
<p>- <strong>Live Facial Recognition pilot project during the Carnival in Nice</strong> (France, see detailed case study, CHAPTER 8)</p>
<p><strong>- Live Facial Recognition <a class="maplink" data-title="Pilot Project Südkreuz Berlin">Pilot Project Südkreuz Berlin</a></strong> (Germany, see detailed case study, CHAPTER 9)</p>
<ul>
<li><p>Live Facial Recognition during Carnival 2019 in 's-Hertogenbosch's Lange Putstraat (the Netherlands)</p></li>
</ul>
<section id="deployment-of-rbi-in-commercial-spaces" class="level3">
<h3>Deployment of RBI in commercial spaces</h3>
<p>The number of deployments of live facial recognition systems in commercial spaces hosting the public is much higher, but because of its commercial nature, difficult to document and trace. Our research found the following instances:</p>
<p>- <strong><a class="maplink" data-title="AFR at Brøndby IF">Live Facial Recognition project, Brøndby IF Football stadium</a></strong> (Denmark)</p>
<p>- <strong>Live <a class="maplink" data-title="Facial Recognition Pilot in Metz Stadium">Facial Recognition Pilot in Metz Stadium</a></strong> (France)</p>
<p>- <strong>Live <a class="maplink" data-title="Facial Recognition in Ifema">Facial Recognition in Ifema</a></strong> (Spain)</p>
<p>- <strong>Live <a class="maplink" data-title="Facial Recognition in Mercadona">Facial Recognition in Mercadona</a> or Mallorca, Zaragoza, Valencia</strong> (Spain)</p>
<p>The systems operate more or less in the same way as RBI in public spaces, or as forensic authentication systems when connected to live cameras. In the <strong><a class="maplink" data-title="AFR at Brøndby IF">Brøndby IF Football stadium deployment</a>, for example</strong>, developed in partnership with <strong><a class="maplink" data-title="Panasonic">Panasonic</a></strong> and the <strong><a class="maplink" data-title="National University of Singapore">National University of Singapore</a></strong>, football fans who want to access the game have to pass through a gate equipped with a camera connected to a facial recognition algorithm. The stadium administration has constituted a database of unwanted individuals, and if the software matches one of the incoming fans with a record in the database, it flags the match to the system (Overgaard 2019).</p>
<p>There is, however, little to no information on the uses of these technologies in commercial spaces, as there is no requirement to publicise the various components of these systems. The case studies of this report thus focus mostly on the deployment of RBI in public spaces. More research and more transparency would nonetheless be welcome in order to understand the data gathering practices and the impact of these deployments.</p>
</section>
</section>
<h2>Four positions in the policy debates</h2>
<section id="active-promotion-1" class="level3">
<h3>Active promotion</h3>
<p>A certain number of actors, both at the national and at the local level, are pushing for the development and extension of biometric remote identification. At the local level, new technological developments meet a growing appetite for smart city initiatives and the ambitions of mayors who strive to develop digital platforms and employ technology-oriented solutions for governance and law enforcement. The intention of the mayor of Nice, Christian Estrosi, to make <strong>Nice a “laboratory” of crime prevention, despite repeated concerns of the French DPA,</strong> is a case in point (for a detailed analysis, see chapter 8 in this report; see also Barelli 2018). Law enforcement agencies across Europe also continue to press ahead with efforts to build <strong>digital and automated infrastructures that benefit tech companies, who push their facial recognition technologies under the banner of the smart city and innovation tech</strong> (e.g., <a class="maplink" data-title="Huawei">Huawei</a>, <a class="maplink" data-title="NEC">NEC</a>, etc.).</p>
<p><strong>At the national level, biometric systems for the purposes of authentication are increasingly deployed for forensic applications</strong> among law-enforcement agencies in the European Union. As we elaborate in Chapter 3, 11 out of 27 member states of the European Union are already using facial recognition against biometric databases for forensic purposes, and 7 additional countries are expected to acquire such capabilities in the near future. The map of the European deployments of Biometric Identification Technologies (see Chapter 3) bears witness to a <strong>broad range of algorithmic processing of security images</strong> in a spectrum that goes from individual, localised authentication systems to generalised law enforcement uses of authentication, to Biometric Mass Surveillance.</p>
<p>Several states that have not yet adopted such technologies seem inclined to follow the trend and push further. The Belgian Minister of the Interior, Pieter De Crem, for example, recently declared he was in favour of the use of facial recognition both for judicial inquiries and for live facial recognition, a much rarer instance.</p>
<section id="the-use-of-facial-recognition-can-mean-increased-efficiency-for-security-services-the-police-are-interested-in-using-this-technology-in-several-of-their-missions.-first-of-all-within-the-framework-of-the-administrative-police-with-the-aim-of-guaranteeing-the-security-of-a-closed-place-accessible-to-the-public-it-would-allow-them-to-immediately-intercept-a-person-who-is-known-in-the-police-databases-and-who-constitutes-a-danger-for-public-security-but-this-technology-can-also-be-used-within-the-framework-of-the-judicial-police-with-the-aim-of-controlling-during-an-investigation-if-the-suspect-was-present-at-the-scene-of-the-crime-at-the-time-when-the-punishable-act-was-committed.-de-halleux-2020" class="level4 Quote">
</section>
<section id="support-with-safeguards-1" class="level3">
<h3>Support with safeguards</h3>
<p>A second category of actors has adopted the position that RBI technologies should be supported, on the condition that their development be monitored because of the risks they potentially pose. We find in this category the EU Commission, the EU Council, some EU political parties, as well as the Fundamental Rights Agency (FRA), national DPAs such as the <a class="maplink" data-title="CNIL">CNIL</a>, the Council of Europe (CoE), and a certain number of courts.</p>
<p>Developments in the field of AI for governance, security and law enforcement are widely encouraged and financially supported by EU institutions. In their communication <em><strong>Shaping Europes Digital Futures</strong></em> accompanying the White Paper on AI, the European Commission set out its guidelines and strategies to create a “Europe fit for the digital age” (European Commission 2020a). In support of a “fair and competitive economy” the Commission proposes a European Data Strategy (EDS) <strong>to make Europe a global leader in the data-agile economy</strong>. The EDS further aims to ensure Europes technological sovereignty in a globalised world and “<strong>unlock the enormous potential of new technologies like AI”</strong> (Newsroom 2020). Therefore, the Commission proposes, among others “building and deploying cutting-edge joint digital capacities in the areas of AI, cyber, super and quantum computing, quantum communication and blockchain;” as well as “[r]einforcing EU governments interoperability strategy to ensure coordination and common standards for secure and borderless public sector data flows and services.” (European Commission 2020a, 4)</p>
<p>The financial support for these initiatives is planned to be channelled from the <strong>Digital Europe programme (DEP), the Connecting Europe Facility 2 and Horizon Europe.</strong> Through the Horizon Europe for instance, the Commission plans to invest €15 billion in the Digital, Industry and Space cluster, with AI as a key activity to be supported. The DEP would benefit from almost €2.5 billion in deploying data platforms and AI applications while also supporting national authorities in making their high value data sets interoperable (Newsroom 2020).</p>
<p>In the <strong>European Parliament</strong>, the European People's Party (EPP) most aligns with this approach. “We want to regulate facial recognition technologies, not ban them. We need clear rules where they can be used and where they must not be used”, declared for example Emil Radev MEP, EPP Group Member of the Legal Affairs Committee. As he puts it: “Without a doubt, we want to prevent mass surveillance and abuse. But this cannot mean banning facial recognition altogether. There are harmless and useful applications for facial recognition, which increase personal security” (European People's Party, 2021).</p>
<p>The <strong>FRA's</strong> 2019 report on facial recognition technologies (FRA 2019), which builds on several previous reports concerning biometrics, IT systems and fundamental rights (FRA 2018); big data and decision making (FRA 2018); and data quality and artificial intelligence (FRA 2019); calls for a moderate approach. The FRA advocates for a comprehensive understanding of how exactly facial recognition technologies work and what their impact on fundamental human rights is. Fundamental rights implications of using FRT, they argue, vary considerably depending on the purpose, scope and context. They highlight a number of issues based on the EU fundamental rights framework as well as the EU data protection legislation. For example, according to Article 9 of the GDPR, processing of biometric data is allowed based on the data subject's <strong>explicit</strong> consent, which requires a higher threshold of precision and definitiveness, including for processing purposes. In terms of using <strong>biometric surveillance in public spaces</strong>, <strong>explicit consent</strong> would not provide a lawful ground for the relevant data processing because, as observed by the CJEU in its <em>Schwarz</em> decision, the data subject who is entering the premises <strong>would not have any choice of opting out of data processing</strong>. If the processing of biometric data is based on substantial public interest, which is another lawful data processing ground under Article 9 of the GDPR, it must be “<strong>proportionate</strong> to the aim pursued, <strong>respect the essence of the right to data protection</strong> and provide for <strong>suitable and specific measures to safeguard the fundamental rights and interest</strong> of the data subjects” (Article 9(2)(g), GDPR). Finally, when emphasising that the processing must be based on a lawful ground as recognised under the EU data protection legislation, the FRA was particularly vocal about “<strong>function creep”</strong> in regard to the use of facial recognition systems, and emphasised that the purpose of information collection must be strictly determined in light of the gravity of the intrusion upon people's fundamental rights (25).</p>
<p>Therefore, the FRA places the <strong>right to privacy and protection of personal and sensitive data at the core of their problem definition</strong>, emphasising the potential dangers of FRTs undermining the freedom of expression, association and assembly. The FRA report also makes a case for <strong>the rights of special groups</strong> such as children, the elderly and people with disabilities, and addresses the issue of how the use of FRTs can contribute to further criminalise and stigmatise already discriminated groups of people (e.g., certain ethnic or racial minorities). In light of these considerations they advocate for a clear and “sufficiently detailed” legal framework, close monitoring and a thorough and continuous impact assessment of each deployment.</p>
<p>The French DPA, the <a class="maplink" data-title="CNIL">CNIL</a>, takes a similar position in the report “Facial Recognition. For a debate living up to the challenges” (CNIL 2019b). The <strong><a class="maplink" data-title="CNIL">CNIL</a></strong> report argues that the contactless and ubiquitous nature of the different FRTs can create an <strong>unprecedented potential for surveillance which</strong>, in the long run, could potentially undermine societal choices. They also emphasise that biometric data is sensitive data; its collection is therefore never completely harmless: “Even legitimate and well-defined use can, in the event of a cyber-attack or a simple error, have particularly serious consequences. In this context, the question of securing biometric data is crucial and must be an overriding priority in the design of any project of this kind” (CNIL 2019b, 6). In their recommendations, while <strong>calling for special vigilance,</strong> they <strong>acknowledge the legitimacy and proportionality of <em>some</em> uses.</strong> The <a class="maplink" data-title="CNIL">CNIL</a> pointed out that GDPR-endangering applications are often presented as “pilot projects”, and thus requested the drawing of “some <strong>red lines even before any experimental use”.</strong> They call instead for “a genuinely experimental approach” that tests and perfects technical solutions that respect the legal framework (CNIL 2019b, 10).</p>
<p>The CoEs <strong>Practical Guide on the Use of Personal Data in the Police Sector</strong> (Council of Europe 2018), supplementing Convention 108+, puts great emphasis on implementing specific safeguards where an automated biometric system is introduced and considers that due to the high risk that such system poses to individuals rights, data protection authorities should be consulted in its implementation (10). Also, as mentioned below, the Council of Europes <strong>Guidelines on Facial Recognition</strong> (Council of Europe 2021), while considering <strong>a moratorium on the live facial recognition technology,</strong> sets out certain requirements to be met when implementing (possibly forensic) facial recognition technology.</p>
</section>
<section id="moratorium-1" class="level3">
</section>
<section id="outright-ban" class="level3">
<h3>Outright Ban</h3>
<p>Finally, a certain number of EU Political Parties, EU and national NGOs have argued that there is no acceptable deployment of RBI, because the danger of Biometric Mass Surveillance is too high. Such actors include organisations such as EDRi, <a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a>, <a class="maplink" data-title="Algorithm Watch">Algorithm Watch</a> or the French Défenseur des Droits<a href="#fn29" class="footnote-ref" id="fnref29" role="doc-noteref"><sup>29</sup></a>.</p>
<p>In the European Parliament, the <strong>European Greens</strong> have most vocally promoted the position of the ban, and have gathered support across party lines. In a letter to the European Commission dated 15 April 2021, 40 MEPs from the European Greens, the Party of the European Left, the Party of European Socialists, Renew Europe, a few non-attached MEPs and one member of the far-right party Identity and Democracy expressed their concerns about the leaked EU commission proposal for the AI Regulation a few days earlier. As they argued</p>
<section id="people-who-constantly-feel-watched-and-under-surveillance-cannot-freely-and-courageously-stand-up-for-their-rights-and-for-a-just-society.-surveillance-distrust-and-fear-risk-gradually-transforming-our-society-into-one-of-uncritical-consumers-who-believe-they-have-nothing-to-hide-and---in-a-vain-attempt-to-achieve-total-security---are-prepared-to-give-up-their-liberties.-that-is-not-a-society-worth-living-in-breyer-et-al.-2021" class="level4 Quote">
<blockquote class="Quote">People who constantly feel watched and under surveillance cannot freely and courageously stand up for their rights and for a just society. Surveillance, distrust and fear risk gradually transforming our society into one of uncritical consumers who believe they have “nothing to hide” and - in a vain attempt to achieve total security - are prepared to give up their liberties. That is not a society worth living in! <footer>(Breyer et al. 2021)</footer></blockquote>
<p>Taking in particular issue with Article 4 and the possible exemptions to regulation of AI “in order to safeguard public safety”, they urge the European Commission “to make sure that existing protections are upheld and <strong>a clear ban on biometric mass surveillance in public spaces is proposed</strong>. This is what a majority of citizens want” (Breyer et al. 2021).</p>
<p><strong><a class="maplink" data-title="European Digital Rights (EDRi)">European Digital Rights (EDRi)</a>, an umbrella organisation of</strong> 44 digital rights NGOs in Europe, takes a radical stance on the issue. They argue <strong>that mass processing of biometric data in public spaces creates a serious risk of mass surveillance</strong> that infringes on fundamental rights, and therefore they call on the Commission to <strong>permanently stop all deployments that can lead to mass surveillance</strong>. In their report <em>Ban Biometric Mass Surveillance</em> (2020) they demand that the EDPB and national DPAs <strong>“publicly disclose all existing and planned activities and deployments that fall within this remit.</strong>” (EDRi 2020, 5). Furthermore, they call for ceasing all planned legislation which establishes biometric processing, as well as the funding for all such projects, amounting to an “immediate and indefinite ban on biometric processing”.</p>
<p><strong><a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a> (LQDN), one of EDRi's founding members</strong> (created in 2008 to “promote and defend fundamental freedoms in the digital world”), similarly called for a ban on <strong>any present and future use of facial recognition for security and surveillance purposes</strong>. Together with a number of other French NGOs monitoring legislation impacting digital freedoms, as well as other collectives, companies, associations and trade unions, LQDN initiated a joint open letter in which they call on French authorities to ban any security and surveillance use of facial recognition due to its <strong>uniquely invasive and dehumanising</strong> nature. In their letter they point to the fact that in France there is a “multitude of systems already installed, outside of any real legal framework, without transparency or public discussion”, referring, among others, to the PARAFE system and the use of FRTs by civil and military police. As they put it:</p>
</section>
<section id="facial-recognition-is-a-uniquely-invasive-and-dehumanising-technology-which-makes-possible-sooner-or-later-constant-surveillance-of-the-public-space.-it-creates-a-society-in-which-we-are-all-suspects.-it-turns-our-face-into-a-tracking-device-rather-than-a-signifier-of-personality-eventually-reducing-it-to-a-technical-object.-it-enables-invisible-control.-it-establishes-a-permanent-and-inescapable-identification-regime.-it-eliminates-anonymity.-no-argument-can-justify-the-deployment-of-such-a-technology.-la-quadrature-du-net.-et-al.-2019" class="level4 Quote">
<blockquote class="Quote">“Facial recognition is a uniquely invasive and dehumanising technology, which makes possible, sooner or later, constant surveillance of the public space. It creates a society in which we are all suspects. It turns our face into a tracking device, rather than a signifier of personality, eventually reducing it to a technical object. It enables invisible control. It establishes a permanent and inescapable identification regime. It eliminates anonymity. No argument can justify the deployment of such a technology.” <footer>(La Quadrature du Net. et al. 2019)</footer></blockquote>
<p>Another prominent voice asking for a full ban on FRTs is the Berlin-based NGO <strong><a class="maplink" data-title="Algorithm Watch">Algorithm Watch</a></strong>. In their report <em><strong>Automating Society (2020)</strong></em> the NGO similarly calls for a ban on all facial recognition technology that might amount to mass surveillance. Their analysis and recommendations place FRTs in a broader discussion regarding <strong>Automated Decision-Making (ADM) systems</strong>. They condemn any use of live facial recognition in public spaces and demand that public uses of FRTs that might amount to mass surveillance be decisively “<strong>banned until further notice, and urgently, at the EU level</strong>” (Algorithm Watch 2020, 10).</p>
<p>They further demand <strong>meaningful transparency</strong> that not only means “disclosing information about a systems purpose, logic, and creator, as well as the ability to thoroughly analyse, and test a systems inputs and outputs. It also requires <strong>making training data and data results accessible</strong> to independent researchers, journalists, and civil society organisations for public interest research” (Algorithm Watch 2020, 11).</p>
<p>Parallel to these reports there are also various campaigns that have proved effective in raising awareness and putting pressure on governmental bodies both at a national and a European level. In May 2020 EDRi launched the <strong>#ReclaimYourFace</strong> campaign, a European Citizens' Initiative (ECI) petition, which calls for a ban on all biometric mass surveillance practices. The campaign centres around the power <strong>imbalances inherent to surveillance</strong>. As of May 2021 the campaign has been supported by more than 50.000 individual signatures. #ReclaimYourFace is not the only campaign, though undoubtedly the most visible and influential, in a European context. Other similar international initiatives are: <strong>“Ban the Scan”, initiated by Amnesty International; “Ban Automated Recognition of Gender and Sexual Orientation”, led by the</strong> international NGO Access Now; and <strong>“Project Panopticon”, launched by the India-based Panoptic Tracker.</strong></p>
<p>In early June, a global coalition was launched under the hashtag <strong>#BanBS, consisting of 175 organisations from 55 countries.</strong> The coalition demands the halting of biometric surveillance practices. Drafted by Access Now, Amnesty International, <a class="maplink" data-title="European Digital Rights (EDRi)">European Digital Rights (EDRi)</a>, Human Rights Watch, Internet Freedom Foundation (IFF), and Instituto Brasileiro de Defesa do Consumidor (IDEC), the open letter has been signed by almost 200 organisations, which call for an outright ban on uses of facial recognition and biometric technologies that enable mass surveillance and discriminatory targeted surveillance:</p>
</section>
<section id="these-uses-of-facial-and-remote-biometric-recognition-technologies-by-design-threaten-peoples-rights-and-have-already-caused-significant-harm.-no-technical-or-legal-safeguards-could-ever-fully-eliminate-the-threat-they-pose-and-we-therefore-believe-they-should-never-be-allowed-in-public-or-publicly-accessible-spaces-either-by-governments-or-the-private-sector.-access-now-2021" class="level4 Quote">
<blockquote class="Quote">“These uses of facial and remote biometric recognition technologies, by design, threaten peoples rights and have already caused significant harm. No technical or legal safeguards could ever fully eliminate the threat they pose, and we therefore believe they should never be allowed in public or publicly accessible spaces, either by governments or the private sector.” <footer>(Access Now 2021)</footer></blockquote>
</ul>
</div> <!-- key points -->
<p>Belgium is, with Spain, one of the few countries in Europe that <strong>has not authorised the use of facial recognition technology</strong>, neither for criminal investigations nor for mass surveillance (Vazquez 2020). This does not mean that it is unlikely to change its position in the very near future. <strong>Law enforcement is indeed strongly advocating its use</strong>, and the current legal obstacles are not likely to hold for very long (Bensalem 2018). The pilot experiment that took place in Zaventem / Brussels International Airport, although aborted, occurred within a national context in which <strong>biometric systems are increasingly used and deployed</strong>.</p>
<p>Belgium will, for example, soon roll out at the national level the new biometric identity card “<strong>eID</strong>”, the Minister of Interior Annelies Verlinden has recently announced. The identification document, which will rely on the constitution of a broad biometric database and is part of a broader <a class="maplink" data-title="European Union">European Union</a> initiative, is developed in partnership with security multinational <strong><a class="maplink" data-title="Thales">Thales</a></strong> and was already trialled with 53.000 citizens (Prins 2021; Thales Group 2020).<a href="#fn30" class="footnote-ref" id="fnref30" role="doc-noteref"><sup>30</sup></a></p>
<p>Municipalities in different parts of the country are experimenting with <strong>Automated Number Plate Recognition (ANPR) technology</strong>. A smaller number have started deploying “<strong>smart CCTV</strong>” cameras, which fall just short of using facial recognition technology. The city of Kortrijk has for example deployed “<strong>body recognition</strong>” technology, which uses walking style or clothing of individuals to track them across the citys CCTV network. Facial recognition is possible with these systems, but has not been activated as of yet <strong>pending legal authorisation to do so</strong>. In the city of Roeselare, “smart cameras” have been installed in one of the shopping streets. Deployed by telecom operator Citymesh, they could provide facial recognition services, but are currently used to count and estimate crowds, data which is shared with the police (van Brakel 2020). All the emerging initiatives of remote biometric identification are however pending a reversal of the decision to halt the experiment at Zaventem Brussels International Airport.</p>
<section id="the-zaventem-pilot-in-the-context-of-face-recognition-technology-in-belgium" class="level2">
<h2>The Zaventem pilot in the context of Face Recognition Technology in Belgium</h2>
<p>In October 2019, the Carlo Collodihof, a courtyard in the Rotterdam neighbourhood Lombardijen, was equipped with a new kind of streetlamp. The twelve new luminaires did not just illuminate the streets; they were <strong>fitted with cameras, microphones, speakers, and a computer which was connected to the internet</strong>. They are part of the so-called <strong>Fieldlab Burglary Free Neighbourhood</strong>: an experiment in the public space with technologies for computer sensing and data processing, aimed at the prevention of break-ins, robberies, and aggression; at increasing the chances of catching perpetrators; and at increasing a sense of safety for the inhabitants of the neighbourhood (Redactie Inbraakvrije Wijk 2019; Kokkeler et al. 2020b). The practical nature of a Fieldlab provides a way to examine concretely how the various technologies come together, and how they fit in with existing infrastructures and regulations.</p>
<section id="detection-and-decision-making-in-the-burglary-free-neighbourhood-fieldlab" class="level2">
<h2>Detection and decision-making in the “Burglary free neighbourhood” Fieldlab</h2>
<p>The national programme Burglary Free Neighbourhood was initiated and funded by the <strong>Dutch Ministry of Justice and Security</strong>. It is led by <strong>DITSS</strong> (Dutch Institute for Technology, Safety &amp; Security), a non-profit organisation that has been involved in earlier computer sensing projects in the Netherlands, for example in <strong>Stratumseind, Eindhoven</strong> (The Hague Security Delta 2021). Other parties involved include the municipality of Rotterdam, the police (both on a local and national level), the Public Prosecutor’s Office, and insurance company <a class="maplink" data-title="Interpolis">Interpolis</a>. Part of the research is carried out by the University of Twente, <a class="maplink" data-title="Avans Hogeschool">Avans Hogeschool</a>, the Network Institute of the Vrije Universiteit Amsterdam, and the Max Planck Institute for Foreign and International Criminal Law (Freiburg, Germany).</p>
<p><img src="images/media/image2.jpg" style="width:6.25564in;height:3.51788in" alt="A picture containing roller coaster, ride Description automatically generated" /></p>
<p>Figure 2. Fieldlab in Rotterdam Lombardijen</p>
<p>From a technological perspective, the project has two aims: to <strong>detect suspicious behaviour</strong>, and in turn<strong>, to influence the behaviour of the suspect</strong>. As such, project manager Guido Delver, who agreed to be interviewed for this report, describes the project as being primarily a behavioural experiment (Delver 2021). The twelve luminaires are provided by <a class="maplink" data-title="Sustainder">Sustainder</a> (their Anne series (Sustainder 2021)). The processing of the video and audio is done on the spot by a computer embedded in the luminaire, using software from the Eindhoven-based company <strong><a class="maplink" data-title="ViNotion">ViNotion</a></strong> (ViNotion 2020). This software reads the video frames from the camera and estimates the presence and position of people, mapping the coordinates of the video frame to coordinates in the space. It then determines the direction they are facing. <strong>Only these values (position and direction), and no other characteristics nor any images,</strong> are sent over the internet to a datacentre somewhere in the Netherlands, where the position data is stored for further processing (Delver 2021).</p>
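<p>The mapping the software performs, from video-frame coordinates to coordinates in the street, can be illustrated with a planar homography, a standard computer vision construction. The sketch below is a hypothetical illustration (it is not ViNotion’s software; the reference points and pixel values are invented):</p>

```python
import numpy as np

# Estimate a 3x3 homography H mapping image pixels to ground-plane metres
# from four reference correspondences, using the direct linear transform.
def estimate_homography(image_pts, ground_pts):
    A = []
    for (x, y), (u, v) in zip(image_pts, ground_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector of A

def to_ground(H, x, y):
    """Project one image point to ground-plane coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Invented calibration: four image points (pixels) and their known
# positions on the street (metres).
image_pts = [(100, 400), (540, 400), (80, 220), (560, 220)]
ground_pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 10.0), (4.0, 10.0)]
H = estimate_homography(image_pts, ground_pts)
print(to_ground(H, 320, 300))  # a pedestrian's foot point, mid-frame
```

<p>Once calibrated, only the projected street coordinates of each detected person need to leave the luminaire, which is consistent with the data minimisation described above.</p>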
<p>Currently, <strong>there is no immediate processing of the position data</strong> to classify behaviour as being suspicious or not. The proposed pipeline consists of two stages: first, an unsupervised machine learning algorithm for <strong>anomaly (outlier) detection processes the gathered trajectories</strong>, in order to distinguish trajectories that statistically deviate from the norm. As an example, both children playing and burglars making a scouting round through the neighbourhood can potentially produce anomalous trajectories. Secondly, <strong>these anomalous trajectories are judged as being suspicious or not by a computer model</strong> that was trained with human supervision. In the Fieldlab’s first data collection experiment, 100.000 trajectories were collected, totalling 20.000.000 data points (Hamada 2020). It turned out, however, that this was still too few to draw any conclusions about the viability of the approach; the big data was still too small (Delver 2021).</p>
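<p>The report does not disclose which outlier detection algorithm is used. As a hedged illustration of what the first, unsupervised stage could look like, each trajectory can be reduced to a few summary features and flagged when it deviates statistically from the bulk; the features and thresholds below are assumptions, not the Fieldlab’s actual pipeline:</p>

```python
import numpy as np

def features(traj):
    """traj: (n, 2) array of ground-plane positions; returns two features."""
    steps = np.diff(traj, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()
    net_disp = np.linalg.norm(traj[-1] - traj[0])
    tortuosity = path_len / max(net_disp, 1e-9)  # high when looping or pacing
    return np.array([path_len, tortuosity])

def outliers(trajs, z_thresh=3.0):
    """Flag trajectories whose features deviate > z_thresh std. deviations."""
    X = np.array([features(t) for t in trajs])
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9))
    return z.max(axis=1) > z_thresh

# Synthetic data: 50 near-straight walks and one loop through the street.
rng = np.random.default_rng(0)
walks = [np.cumsum(rng.normal([1.0, 0.0], 0.1, (30, 2)), axis=0)
         for _ in range(50)]
angles = np.linspace(0, 6 * np.pi, 30)
loop = np.column_stack([10 * np.cos(angles), 10 * np.sin(angles)])
flags = outliers(walks + [loop])
print(flags.sum(), flags[-1])  # only the looping trajectory is flagged
```

<p>Whether such statistical outliers correspond to actually suspicious behaviour is exactly what the second, supervised stage would then have to decide.</p>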
<p>Another input for detecting suspicious situations is the <strong>microphone with which some of the streetlamps are equipped</strong>. Two frequencies of sound are recorded, allowing sounds to be categorised as coming from, for example, a conversation, shouting, a dog barking, or the breaking of glass. The two frequencies recorded provide too little information to distinguish the words in a conversation (Delver 2021).</p>
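<p>A minimal sketch of how such coarse, two-band sound categorisation could work, assuming illustrative frequency bands and a simple energy comparison (the actual bands and decision rules used in the Fieldlab are not public):</p>

```python
import numpy as np

RATE = 8000  # assumed sample rate (Hz)

def band_energies(signal):
    """Energy in a low (speech-like) and a high (glass-like) frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / RATE)
    low = spectrum[(freqs >= 100) & (freqs < 500)].sum()
    high = spectrum[(freqs >= 2000) & (freqs < 4000)].sum()
    return low, high

def label(signal):
    low, high = band_energies(signal)
    return "glass-like" if high > low else "speech-like"

# One second of a low-pitched tone versus a high-pitched one.
t = np.linspace(0, 1, RATE, endpoint=False)
hum = np.sin(2 * np.pi * 220 * t)
clink = np.sin(2 * np.pi * 3000 * t)
print(label(hum), label(clink))
```

<p>Because only two coarse band energies are kept, the content of a conversation cannot be reconstructed, matching the privacy claim made above.</p>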
<p>Aside from experimenting with the automated detection of suspicious behaviour, the Fieldlab experiments with various ways of responding to the detected situations. Project manager Guido Delver notes that the aim is not <em>per se</em> to involve the police. Instead, the suspect should be deterred before any crime is committed (Delver 2021). Various strategies are laid out: the yet-to-be-autonomous system can <strong>voice warnings through the speakers</strong> embedded in the streetlamps. Or, in line with the work of DITSS in Eindhoven’s Stratumseind street, the <strong>light intensity or colour of the streetlamps can be changed</strong> (Intelligent Lighting Institute, n.d.). Both strategies are aimed at signalling to the subjects that their behaviour is noticed, which generally suffices to have burglars break off their scouting. Another option under consideration is to send a signal to the residents living nearby.</p>
<p>The process of data gathering in the Burglary Free Neighbourhood is quite similar to technologies that are deployed for anonymous people counting. One such application has been developed by <strong><a class="maplink" data-title="Numina">Numina</a></strong> and is deployed in the Dutch city of Nijmegen: individuals are <strong>traced through space and time, but not identified or categorised.</strong> This information is then used to provide statistics about the number of visitors in the city centre (Schouten and Bril 2019). Another Dutch deployment of technologically similar software is the <strong>One-and-a-half-meter monitor developed by the municipality of Amsterdam,</strong> which is based on the YOLO5 object detection algorithm and trained on the COCO dataset. This data processing architecture can detect the presence of persons but is incapable of deducing any characteristics (Amsterdam-Amstelland safety region 2020). These implementations show that biometric sensing can be used to detect the presence of people while refraining from storing identifying characteristics.</p>
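<p>The privacy property described here (keeping counts and positions while discarding images) can be sketched as a simple filter over detector output. The <code>Detection</code> type below mimics typical YOLO-style results and is an illustrative stand-in, not the Amsterdam monitor’s actual code:</p>

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # COCO class name, e.g. "person"
    confidence: float
    box: tuple        # (x1, y1, x2, y2) in pixels

def count_people(detections, min_conf=0.5):
    """Keep only the count and foot points of confident person boxes."""
    people = [d for d in detections
              if d.label == "person" and d.confidence >= min_conf]
    points = [((d.box[0] + d.box[2]) / 2, d.box[3]) for d in people]
    return len(people), points

frame = [Detection("person", 0.92, (10, 40, 60, 200)),
         Detection("person", 0.41, (300, 50, 350, 190)),  # too uncertain
         Detection("bicycle", 0.88, (100, 80, 220, 180))]
print(count_people(frame))  # → (1, [(35.0, 200)])
```

<p>Only this aggregate output would be stored; the frames themselves, and any appearance features, are discarded.</p>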
<p><img src="images/media/image3.png" style="width:5.35242in;height:3.07738in" alt="Two people holding umbrellas on a street Description automatically generated with low confidence" /></p>
<p>Figure 3. The one-and-a-half-meter monitor developed by the municipality of Amsterdam</p>
</section>
<section id="effects-of-the-technologies-1" class="level2">
<h2>Effects of the technologies</h2>
<p>Since March 2021, the experiment in the Fieldlab of the Burglary Free Neighbourhood in Rotterdam has been on hold. <strong>Researchers have not yet been able to have the computer distinguish suspicious trajectories or sounds.</strong> As such, the system has not been able to respond to any such situations with lights or sounds (Redactie LikeJeWijk 2021). Further research into this is happening in a Virtual Reality environment, as was discussed in the first section.</p>
<p>As part of the Fieldlab, research about the effects of the technologies deployed in the streets has been carried out by the <a class="maplink" data-title="Avans Hogeschool">Avans Hogeschool</a>, presenting five relevant observations. First, it is too early to draw any conclusions about the impact of the deployed technologies on the statistics for high impact crime (e.g., break-ins, aggression, robberies) in the neighbourhood (Kokkeler et al. 2020b, 25). Moreover, no research has yet been done into the waterbed effect of crime: whether crime prevention in one block leads to an increase in crime in an adjacent neighbourhood (Kokkeler et al. 2020b, 9).</p>
<p>Secondly, in the Rotterdam neighbourhood that was examined, the streetlights equipped with cameras were by no means the only technological interventions to prevent break-ins. A breadth of technology is deployed (e.g., cameras, alarm systems) which is either privately owned, owned by the municipality or the police, or distributed by insurance companies. In this cacophony of technological appliances, <strong>it becomes unclear which data is collected and how it is processed.</strong> Furthermore, it is unclear <strong>who owns and manages these data collection and processing networks, whether they are private parties or law enforcement agencies</strong>. Kokkeler et al. argue that a better overview of these practices is crucial in order to assess the ethical, legal, and social impact of these deployments (Kokkeler et al. 2020b, 24).</p>
<p>Thirdly, after conducting interviews with the residents, Kokkeler et al. concluded that most were unaware that the newly placed streetlights were equipped with sensors. <strong>Moreover, when discussing the “sensors” in the streetlights, many residents could only imagine the use of cameras, not realising what data was being gathered (Kokkeler et al. 2020a, 21).</strong> While resident participation features prominently among the goals of the Fieldlab, the Coronavirus pandemic has hindered the planned involvement of the residents (Delver 2021).</p>
<p>Fourth, the moment residents were informed about the data gathering and processing taking place, they were optimistic about a potential use of the data by the local police and municipality, as long as the cameras <strong>were only directed at public space</strong>. Some residents voiced their concern that the information should only be used to address high impact crime, and not for minor offences, in particular if these involve minors. On the other hand, some other residents suggested a broader use of the streetlights, for example in fighting litter and speeding (Kokkeler et al. 2020a, 21). Despite the fact that the direct sharing of the generated data with the police is contrary to the aims of the project (Delver 2021), the infrastructure that is deployed in the streets enables other engagements with the technology: the so-called <strong>function creep</strong>.</p>
<li><p>Several French cities have launched “safe city” projects involving biometric technologies, however Nice is arguably the national leader. The city currently has the highest CCTV coverage of any city in France and has more than double the police agents per capita of the neighbouring city of Marseille.</p></li>
<li><p>Through a series of public-private partnerships the city began a number of initiatives using RBI technologies (including emotion and facial recognition). These technologies were deployed for both authentication and surveillance purposes with some falling into the category of biometric mass surveillance.</p></li>
<li><p>One project which used FRT at a high school in Nice and one in Marseille was eventually declared unlawful. The court determined that the required consent could not be obtained due to the power imbalance between the targeted public (students) and the public authority (public educational establishment). This case highlights important issues about the deployment of biometric technologies in public spaces.</p></li>
<li><p>The use of biometric mass surveillance by the mayor of Nice Christian Estrosi has put him on a collision course with the French Data Protection Authority (<a class="maplink" data-title="CNIL">CNIL</a>) as well as human rights/ digital rights organisations (Ligue des Droits de lHomme, <a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a>). His activities have raised both concern and criticism over the usage of the technologies and their potential impact on the privacy of personal data.</p></li>
</ul>
</div> <!-- key points -->
<p>Although several French cities such as Paris, Valenciennes or Marseille have launched pilot projects for “safe city” projects involving <strong>biometric technologies (facial, voice, sound recognition),</strong> the city of Nice is perhaps the <strong>national leader in the experimentation with such technologies at a local level</strong> (Nice Premium 2017). The mayor of Nice, Christian Estrosi (Les Républicains Party, right), a prominent figure on the national political scene, has made clear that his intention is to make Nice a “laboratory” of crime prevention (Barelli 2018). Since 2010, more than <strong>1.962 surveillance cameras have been deployed throughout the city</strong>, making it the city with the <strong>highest CCTV coverage in France</strong> (27 cameras per square kilometre). Nice also has the most municipal police officers per inhabitant in France: 414 agents for a population of 340.000 (in comparison, the neighbouring city of Marseille has 450 agents for 861.000 inhabitants).</p>
<section id="the-various-facets-of-the-safe-city-project-in-nice" class="level2">
<h2>The various facets of the “Safe city” project in Nice</h2>
<p>Nice has experimented with various initiatives related to <strong>remote biometric identification</strong>, many of which fall into the category of biometric mass surveillance. In 2017, Christian Estrosi announced a partnership with the energy company <a class="maplink" data-title="Engie Ineo">Engie Ineo</a> for the development of an Urban Surveillance Centre (Centre de Surveillance Urbain, CSU). Based on a touch-interface technology, it centralises a platform of <strong>real-time data such as traffic accidents, patrol locations, as well as video feeds from CCTV</strong>s on the streets and in public transportation (Dudebout 2020, 1). The video feeds from the city tramways are connected to an <strong>emotion recognition algorithm</strong> to flag suspicious situations (Allix 2018).</p>
<p>In June 2018, an additional step was taken with the signing of a partnership agreement with a consortium of companies headed by <a class="maplink" data-title="Thales">Thales</a>, specialised in social network intelligence, geolocation, biometrics and crowd simulation<a href="#fn33" class="footnote-ref" id="fnref33" role="doc-noteref"><sup>33</sup></a> for a <strong>“Safe City” project</strong> (Dudebout 2020, 2). Established for three years (2018-2021) with a budget of EUR 10.9 million, the project is financed by the city council, subsidised in part by BPI France<a href="#fn34" class="footnote-ref" id="fnref34" role="doc-noteref"><sup>34</sup></a>, and supported by the Committee for the Security Industrial Sector, an agency under the tutelage of the Prime Minister’s office<a href="#fn35" class="footnote-ref" id="fnref35" role="doc-noteref"><sup>35</sup></a> (Allix 2018; BPI France 2018).</p>
<p>The first facial recognition test of the Safe city project took place from 16 February to 2 March 2019, during the Nice Carnival. The experiment was a simulation, involving matching faces collected through CCTV footage of the crowd attending the carnival with a fictitious set of databases (lost individuals, wanted individuals, or individuals with restraining orders). The fictitious datasets were constituted of 50 volunteers, recruited mostly from the municipality, who provided their pictures or were freshly photographed for the test. The system used <strong>live facial recognition software provided by the company Anyvision</strong>. The live feeds were filmed during the carnival. Passers-by (approximately 1.000 people were concerned) were informed of the ongoing test and asked to wear a bracelet if they consented to being filmed (Hassani 2019).</p>
<p>A second experiment took the form of a <strong>software application (app) named “Reporty”,</strong> rolled out in January 2018. The app, developed by the Israeli American company <a class="maplink" data-title="Carbyne">Carbyne</a>, allows citizens to be in direct audio and video connection and share geolocation information with the Urban Supervision Centre in order to report any incivility, offense, or crime that they might witness (Barelli 2018).</p>
<p>The third project, involving <strong>facial recognition</strong>, was tested in the education context. In February 2019, <strong>a high school in Nice and a high school in Marseille were fitted with facial recognition technology</strong> at their gates in order to grant or bar access to the premises. The official motivation behind the deployment was to “assist the personnel of the high schools and to fight against identity theft” (Dudebout 2020, 34).</p>
</section>
<section id="legal-bases-and-challenges-2" class="level2">
<p><strong>France has updated the Act N°78-17</strong> of 6 January 1978 on information technology, data files and civil liberties in various stages to incorporate the provisions of the <strong>GDPR</strong>, address the possible exemptions contained in the <strong>GDPR</strong>, and implement the <strong>LED</strong>.</p>
<p>The Act sets out the <strong>reserved framework for sensitive data including biometric data</strong> in its Article 6, which states that <strong>sensitive data</strong> can be processed for purposes listed in Article 9(2) of the GDPR as well as those listed in its Article 44. The latter includes the <strong>re-use of information contained in court rulings and decisions</strong>, provided that neither the purpose nor the outcome of such processing is the re-identification of the data subjects; and the processing of biometric data by employers or administrative bodies if it is strictly necessary to control access to workplaces, equipment, and applications used by employees, agents, trainees, or service providers in their assignments. </p>
<p>Pursuant to Article 6 of the Act N°78-17, <strong>processing of sensitive data can be justified for public interest if it is duly authorised</strong> in accordance with Articles 31 and 32 of the Act. Accordingly, an <strong>authorisation by decree of the <a class="maplink" data-title="Conseil d'État">Conseil d'État</a> (<em>State Council</em>) is required after a reasoned opinion of <a class="maplink" data-title="CNIL">CNIL</a></strong> for the processing of biometric data on behalf of the State for the authentication or control of the identity of individuals (Article 32, Act N°78-17).</p>
<p>In February 2020, the Administrative Court of Marseille considered the extent to which the <strong>data subject’s explicit consent may provide an appropriate legal basis</strong> in the deployment of facial recognition systems to control access to high schools in Nice and Marseille (Administrative Court of Marseille, Decision N°1901249 of 27 February 2020). After recognising that data collected <strong>by facial recognition constitute biometric data</strong> (para 10), the Court held that the required consent could not be obtained simply by the students, or their legal representatives in the case of minors, signing a form, due to the power imbalance between the targeted public and the public educational establishment as the public authority (para. 12). More importantly, <strong>the Court determined that the biometric data processing could not be justified based on a substantial public interest</strong> (i.e., controlling access to premises) envisioned in Article 9(2)(g) of the GDPR in the absence of considerations that the relevant aim could not be achieved by badge checks combined, where appropriate, with video surveillance (ibid).</p>
<p><strong>The Act N°78-17 provides data subjects with rights against the processing of their personal data</strong>, with restrictions to the exercise of those rights subject to certain conditions (e.g., the restriction for protecting public security to the right to access the data processed for law enforcement purposes pursuant to Art 107 of Act N°78-17). An important data subject’s right in the context of biometric surveillance is <strong>the right not to be subjected to solely automated decision-making, including profiling, except if it is carried out in light of circumstances laid out in Article 22 of the GDPR</strong> and for individual administrative decisions taken in compliance with French legislation (Article 47 of Act N°78-17). That said, for the latter circumstance, the automated data processing must not involve sensitive data (Article 47(2), Act N°78-17). Regarding the data processing operations relating to State security and defence (Article 120, Act N°78-17) and to the prevention, investigation, and prosecution of criminal offences (Article 95, Act N°78-17), the Act lays out an absolute prohibition against solely automated decision-making, according to which no decision producing legal effects or similarly significant effects can be based on said decision-making intended to predict or assess certain personal aspects of the person concerned. Particularly, with respect to data processing operations for law enforcement purposes, Article 95 of the Act prohibits any type of profiling that discriminates against natural persons based on sensitive data as laid out in Article 6.</p>
<p>In addition to the data protection legislation, <strong>the other legislation applicable to biometric surveillance is the Code of Criminal Procedure.</strong> Its Article R40-26 allows the national police and gendarmerie to retain in a criminal records database (<em>Traitement des Antécédents Judiciaires</em> or <em>TAJ</em>) photographs of people suspected of having participated in criminal offences as well as victims and persons being investigated for causes of death, serious injury or disappearance to make it possible to use a facial recognition device. According to a 2018 report by Parliament, <strong>TAJ contains between 7 and 8 million facial images (<em>Assemblée Nationale</em> N°1335, 2018, 64, f.n. 2).</strong> <a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a> lodged legal complaints against the retention of facial images before the <a class="maplink" data-title="Conseil d'État">Conseil d'État</a>, arguing that this practice does not comply with the strict necessity test required under Article 10 of LED and Article 88 of Act N°78-17 (La Quadrature du Net, 2020).</p>
</section>
<section id="mobilisations-and-contestations-2" class="level2">
<h2>Mobilisations and contestations</h2>
<p>The political agenda of Nices mayor to be at the forefront of biometric mass surveillance technologies in France and possibly in Europe has put him on a collision course with two main actors: <strong>the French Data Protection Authority (<a class="maplink" data-title="CNIL">CNIL</a>) and human rights/digital rights organisations.</strong></p>
<p>The French digital rights organisation <strong><a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a> was quick to highlight the problems raised by the deployment of these technologies in Nice</strong>. “The safe city is the proliferation of tools from the intelligence community, with a logic of massive surveillance, identification of weak signals and suspicious behaviour,” commented Félix Tréguer, a Marseilles-based leader of the association <a class="maplink" data-title="La Quadrature du Net">La Quadrature du Net</a> and member of the campaign Technopolice<a href="#fn36" class="footnote-ref" id="fnref36" role="doc-noteref"><sup>36</sup></a>. “We do not find it reassuring that the municipal police will become the intelligence service of the urban public space and its digital double” (Allix 2018).</p>
<p><strong>The Ligue des Droits de l'Homme emphasised similar points, highlighting the political dangers involved.</strong> As Henri Busquet of the Ligue des Droits de l'Homme in Nice put it: “improving emergency services and traffic is legitimate, but the generalisation of video surveillance worries us, and scrutinising social networks is not the role of a mayor. Without any safeguards, such a tool cannot demonstrate the necessary neutrality [...] It is potentially a tool for political destruction, which puts opponents and journalists at particular risk” (Allix 2018).</p>
<p>In July 2019, the city of Nice hoped the <a class="maplink" data-title="CNIL">CNIL</a> would provide advice related to its first test experiment during the Carnival. The <a class="maplink" data-title="CNIL">CNIL</a> responded, however, that the municipality had not provided enough information for the DPA to assess it. The French DPA pointed out in particular the lack of “quantified elements on the effectiveness of the technical device or the concrete consequences of a possible bias (related to gender, skin colour ...) of the software” (Dudebout 2020, 3).</p>
<p><strong>The launch of the smartphone application “Reporty” was the catalyst for mobilisation in Nice, united under the umbrella organisation “Collectif anti-Reporty”</strong>. The coalition was formed by local representatives from two left-wing parties (Parti Socialiste, Les Insoumis), Tous Citoyens, the union CGT and the anti-discrimination NGO MRAP. The coalition appealed to two institutions to block the use of the application: <strong>The Defender of Rights</strong> (Défenseur des Droits) and the French DPA (<a class="maplink" data-title="CNIL">CNIL</a>). The coalition denounced “a risk of generalised denunciation and a serious breach of privacy”, calling to “put an end to the securitarian drift of Christian Estrosi” (Barelli 2018).</p>
<p><strong>On 15 March 2018, the <a class="maplink" data-title="CNIL">CNIL</a> stated that the application was too invasive and did not meet the criteria set out by the legislation</strong>. It did not meet the proportionality test; it failed to fall within the frame of the existing law on video-protection, because it integrated private citizens' terminals (smartphones) with a security database managed by the police; it was excessively intrusive due to the collection of images and voice of people in the public space; and it covered a field of offences that was too broad (CNIL 2018).</p>
<p><strong>The school experimentation further pushed the <a class="maplink" data-title="CNIL">CNIL</a> to take a position on the technological activism of Nice's mayor.</strong> On 29 October 2019, it expressed serious concerns over the experimentation, arguing that the technology clashed with the principles of proportionality and data minimisation enshrined in the GDPR. It pointed out that other methods, less intrusive for the privacy of the students, could be used to achieve the technology's stated goals, namely increasing the students' security and traffic fluidity (Dudebout 2020, 4).</p>
<p><strong>In a landmark opinion published on 15 November 2019, the <a class="maplink" data-title="CNIL">CNIL</a> clarified what it defined as guidelines related to facial recognition (CNIL 2019a).</strong> The French DPA expressed concerns over a blanket and indiscriminate use of the technologies, highlighting possible infringements of fundamental rights, because these technologies operate in the public space, where these freedoms (expression, reunion, protest) are expressed. It did not, however, suggest that they should be banned in all circumstances; it suggested instead that their use could be justified if properly regulated, on a case-by-case basis. <strong>Certain uses could be rejected a priori, such as in the case of minors, whose data are strictly protected</strong>. The question of data retention is also central: the opinion warned against excessive retention periods and excessive centralisation, suggesting instead citizens' control over their own data. But as the president of the <a class="maplink" data-title="CNIL">CNIL</a>, Marie-Laure Denis, explained, facial recognition technology “can have legitimate uses, and there is not a firm position of the <a class="maplink" data-title="CNIL">CNIL</a>'s board” (Untersinger 2019).</p>
<p><strong>The repeated rebukes of Nice's experimentation with facial recognition technology by the <a class="maplink" data-title="CNIL">CNIL</a> have, however, not tempered the enthusiasm of the mayor</strong>. Rather than cave in, Estrosi questioned the legitimacy of the <a class="maplink" data-title="CNIL">CNIL</a>'s decisions, arguing that the legal framework, and in particular the French law of 1978 regulating data collection in relation to digital technologies, was itself a limitation. In 2018, Estrosi asked: “I have the software that would allow us to apply facial recognition tomorrow morning and to identify registered individuals wherever they are in the city... Why should we prevent this? Do we want to take the risk of seeing people die in the name of individual freedoms, when we have the technology that would allow us to avoid it?” (Allix 2018) In December 2019, Estrosi reiterated his attacks on the <a class="maplink" data-title="CNIL">CNIL</a>, and together with the mayor of Gravelines, Bertrand Ringot (Socialist Party), accused the institution of acting as a “permanent obstruction to the development of digital experiments” (Dudebout 2020, 5).</p>
</section>
<section id="effects-of-the-technologies-2" class="level2">
<h2>Effects of the technologies</h2>
<p>To our knowledge, there has not been any systematic ex-post impact assessment of the effects of these three experiments in the city of Nice.</p>
<p>The city of Nice asked the <a class="maplink" data-title="CNIL">CNIL</a> to provide an assessment of the Carnival experiment, but the <a class="maplink" data-title="CNIL">CNIL</a> refused to do so, arguing that not enough information had been communicated to them about the parameters of the experiment.</p>
<p>There are no systematic qualitative or quantitative studies about the perception of the citizens in relation to the technologies in Nice. While the political opposition to these technologies has been documented, it would be erroneous to conclude that they are generally unpopular among the population. Surveys conducted at the national level, such as the one carried out by the organisation Renaissance Numérique, show that the public is generally supportive. While 51% of the polled citizens consider that the technologies are not transparent, do not sufficiently allow for consent and can potentially lead to mass surveillance, 84% consider their use justified for national security issues (kidnappings, terror attacks), 76% to secure important public events, and 72% consider it justified to secure public spaces in general. Only when asked about their faith in private actors using the technologies properly do the confidence rates decline (38%) (Renaissance Numérique 2019).</p>
<p>As one press article reports, “For their part, many people in Nice do not seem to be hostile to this application”. The article further quotes a 72-year-old from Nice: “With terrorism, any measure that allows us to reinforce security seems desirable to me. On the condition that we don't give this application to just anyone” (Barelli 2018).</p>
</section>
<p><strong>Key points</strong></p>
<ul>
<li><p>The German federal police, in cooperation with the German railway company, conducted a project called “Sicherheitsbahnhof” at the Berlin railway station Südkreuz in 2017/18, which included 77 video cameras and a video management system.</p></li>
<li><p>The police in Hamburg used the facial recognition software <a class="maplink" data-title="Videmo">Videmo</a> 360 during the protests against the G20 summit in 2017. The database includes 100.000 individuals who were in Hamburg during the G20 summit and whose profiles are saved in the police database. The technology allows for the determination of behaviour, participation in gatherings, preferences, and religious or political engagement.</p></li>
<li><p>Sixty-eight cameras were installed by local police on central squares and places in the German city Mannheim to record the patterns of movement of people. In this project, which started in 2018, the software is used to detect conspicuous behaviour.</p></li>
<li><p>Half of these deployments (Mannheim &amp; Berlin Südkreuz) took place as measures to test the effectiveness of facial recognition and behavioural analysis software. This “justification as a test” approach is often used in Germany to argue for a deviation from existing rules and societal expectations and was similarly applied during deviations to commonly agreed measures in the Coronavirus/COVID-19 pandemic.</p></li>
<li><p>Resistance to video surveillance is also in no small part a result of constant campaigning and protest by German civil society. The Chaos Computer Club and Digital Courage have consistently campaigned against video surveillance and any form of biometric or behavioural surveillance. The long-term effect of these “pilots” is to normalise surveillance.</p></li>
<h2>RBI Deployments in Germany</h2>
<p><strong>All the deployments of RBI we are aware of in Germany were conducted by law enforcement</strong>. The deployments range from using facial recognition software to analyse the German central criminal information system, to specific deployments in more targeted locations such as Berlin Südkreuz train station or Mannheim city centre, or to deployments around specific events such as the G20 in Hamburg in 2017.</p>
<section id="pilot-project-südkreuz-berlin" class="level3">
<h3><a class="maplink" data-title="Pilot Project Südkreuz Berlin">Pilot Project Südkreuz Berlin</a></h3>
<p><strong>The German federal police (BPOL)</strong>, in cooperation with the <a class="maplink" data-title="Deutsche Bahn AG">Deutsche Bahn AG</a>, the German railway company, conducted a project called “Sicherheitsbahnhof” at the Berlin railway station Südkreuz in 2017/18. The project consisted of two parts: part one ran from August 2017 until January 2018 with 312 voluntary participants; part two ran from February until July 2018 with 201 participants (Bundespolizeipräsidium Potsdam 2018).</p>
<p><strong>For the first project, 77 video cameras and a video management system were installed at the train station Berlin Südkreuz.</strong> Three cameras were used for the <strong>biometric facial recognition during live monitoring</strong>. During the project, the systems BioSurveillance by the company <a class="maplink" data-title="Herta Security">Herta Security</a>, delivered by Dell EMC AG, Morpho Video Investigator (MVI) by <a class="maplink" data-title="IDEMIA">IDEMIA AG</a>, and <a class="maplink" data-title="AnyVision">AnyVision</a> by Anyvision were used and tested. To detect and identify faces, the systems worked based <strong>on neural networks using template matching methods</strong>. For that purpose, images of the faces were recorded and converted into templates. Subsequently, the facial recognition software matched the unknown picture to a known template saved in the reference database; as soon as a certain threshold of similarity is reached, the image is considered a match (see 2.3 for a technical description). The reference database consisted of high-quality images of the participants. That means the photographs had to adhere to quality standards such as a neutral grey background, no shadows on the faces, enough lighting, low compression to avoid artefacts, high resolution, and a straight-ahead viewing direction (Bundespolizeipräsidium Potsdam 2018).</p>
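<p>The threshold-based matching described above can be sketched as follows. This is an illustration only: the vectors, identities and the 0.8 threshold are invented stand-ins, whereas the deployed systems compare high-dimensional embeddings produced by neural networks.</p>

```python
import numpy as np

def match(probe, templates, threshold=0.8):
    """Return the identity of the best-matching template, or None
    if no similarity reaches the threshold."""
    best_id, best_sim = None, threshold
    for identity, template in templates.items():
        # cosine similarity between the probe image's vector and a stored template
        sim = float(np.dot(probe, template) /
                    (np.linalg.norm(probe) * np.linalg.norm(template)))
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Illustrative 4-dimensional "templates" (real systems use
# high-dimensional embeddings derived from reference photographs).
db = {"participant_a": np.array([1.0, 0.0, 0.2, 0.1]),
      "participant_b": np.array([0.0, 1.0, 0.1, 0.3])}
probe = np.array([0.9, 0.1, 0.2, 0.1])
print(match(probe, db))  # prints "participant_a"
```

Raising the threshold trades missed matches (lower hit rate) against false positives, which is the tension the Südkreuz evaluation measured.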
<p><strong>For the first testing phase,</strong> the participants passed through the designated area of the train station Berlin Südkreuz a total of 41.000 times. BioSurveillance had an average hit rate of 68,5%, MVI of 31,7%, and <a class="maplink" data-title="AnyVision">AnyVision</a> of 63,1%. <strong>Interconnecting the three systems resulted in an increased combined hit rate of 84,9%.</strong> <strong>The interconnection also increased the rate of false positives.</strong> The matches were logged but not saved (Bundespolizeipräsidium Potsdam 2018).</p>
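<p>A back-of-the-envelope check (our own illustration, not part of the report) puts the combined figure in context: if the three systems missed faces independently, a match by at least one of them would occur far more often than the 84,9% actually observed, which suggests their misses overlap, i.e. they tend to fail on the same faces.</p>

```python
# If the three systems missed faces independently, the hit rate of
# "at least one system matches" would be 1 minus the product of miss rates.
hit_rates = [0.685, 0.317, 0.631]  # BioSurveillance, MVI, AnyVision (phase one)

miss_product = 1.0
for h in hit_rates:
    miss_product *= (1.0 - h)

combined_if_independent = 1.0 - miss_product
print(f"{combined_if_independent:.1%}")  # prints "92.1%"
# The reported combined rate was 84.9% < 92.1%, so the systems' errors
# are correlated rather than independent.
```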
<p><strong>For the second testing phase</strong>, the reference database consisted of participant images taken from the video stream of the first testing phase. For each participant, 2-5 images were extracted from the video stream. The images recorded during the second testing phase were generally of worse quality than those from the first phase. All systems used more than one picture as a reference to identify a person (Bundespolizeipräsidium Potsdam 2018). During the second phase, the interconnected systems had <strong>an average hit rate of 91,2%.</strong> BioSurveillance achieved an average hit rate of 82,8%, MVI of 31,2%, and <a class="maplink" data-title="AnyVision">AnyVision</a> of 76,2%. The performance increased as the systems had more images as a reference (Bundespolizeipräsidium Potsdam 2018).</p>
<p><strong>The <a class="maplink" data-title="Deutsche Bahn AG">Deutsche Bahn AG</a> used the existing infrastructure at the railway station Berlin Südkreuz for an experiment on behavioural analysis starting in June 2019.</strong> The tests were done twice a week during the day. Volunteers performed situations the system should recognise and identify. After scanning people's behaviour, the software would alert the police or the railway company (Henning 2019). <strong>The police assembled a list of behaviours that should be recognised by the system: people lying down or entering certain zones of the train station (such as construction areas), groups or streams of people, objects that were set down such as luggage, and the positions of persons and objects</strong>. Furthermore, the system would count the number of people in certain areas and allow the analysis of the video data by the police. The software used in the tests was provided by <a class="maplink" data-title="IBM">IBM</a> Germany GmbH, the <a class="maplink" data-title="Hitachi Consortium">Hitachi Consortium</a> (<a class="maplink" data-title="Hitachi">Hitachi</a>, <a class="maplink" data-title="Conef">Conef</a>, MIG), Funkwerk video systems GmbH and G2K Group GmbH (Bundespolizei 2019).</p>
</section>
<section id="hamburg-g20-summit" class="level3">
<h3>Hamburg G20 Summit </h3>
<p><strong>The police in Hamburg used the facial recognition software <a class="maplink" data-title="Videmo">Videmo</a> 360 (by <a class="maplink" data-title="Videmo">Videmo</a>)</strong> during the protests against the G20 summit in 2017 (Bröckling 2019). The database, comprising 100 TB of data, consists of material the police assembled while recording identities during investigations, as well as data from external sources such as surveillance cameras in train stations, the BKA's online portal called “Boston Infrastruktur”, the internet and the media. <strong>“Boston Infrastruktur” is a web portal, made accessible to the public in July 2017, where people could upload images and videos</strong>. <strong>All data concerning the time and place of the G20 summit were included.</strong> Furthermore, data were assembled in 2017 during investigations of the special commission “Schwarzer Block” in the context of the G20 summit protests. The images were first detected and identified, meaning templates of the faces were made. <strong>Subsequently, experts checked the material manually (Caspar 2018). The database includes 100.000 individuals who were in Hamburg during the G20 summit and whose profiles are saved in the police database.</strong> The technology allows for the determination of behaviour, participation in gatherings, preferences, and religious or political engagement (Bröckling 2019).</p>
</section>
<section id="mannheim-public-surveillance" class="level3">
<h3><a class="maplink" data-title="Mannheim public surveillance">Mannheim public surveillance</a></h3>
<p><strong>68 cameras were installed by local police on central squares and places in the German city of Mannheim to record the movement patterns of people.</strong> In this project, which started in 2018, the software is used to detect conspicuous behaviour. The police are alerted by the cameras and further investigate the situation they have observed on camera (Reuter 2018). The cameras were placed in areas with increased incidences of criminal activity. On average, only two minutes pass between the system's alert and the intervention by the police. <strong>As the software is learning, it is increasingly able to detect criminal or violent activity. However, sometimes the alerts are not correct; for instance, the system cannot recognise that a hug is not dangerous (heise online 2020).</strong> The software is developed by the Fraunhofer Institute of Optronics, System Technologies, and Image Exploitation Karlsruhe and is continuously tested and adapted to be suitable for public spaces. Twenty cameras are used to test the software (Ministerium für Inneres 2020).</p>
</section>
</section>
<h2>Effects of the technologies: normalising surveillance</h2>
<p><strong>As there have only been a few implementations of behavioural or biometric surveillance in Germany, many of which have been as part of tests or for “exceptional circumstances”, their effects are relatively hard to measure.</strong> In some cases this can lead to a normalisation of video surveillance, as was the case in Hamburg (Gröhn 2017). The video surveillance cameras that were installed for the G20 summit remain in use and additional video surveillance cameras have since been installed.</p>
<p><strong>All of the video data stored by the Hamburg police during the G20 remains stored by the police, and even though the Hamburg data protection authority believes that it should be removed, deletion is not currently possible.</strong> This video data includes several days of footage from central Hamburg from 6-10 July 2017 and includes many people going about their daily lives without any indication of committing a crime (Monroy 2018).</p>
<p><strong>Another element of normalisation concerns the integration of biometric facial recognition for historical data using the <a class="maplink" data-title="German central criminal information system INPOL">German central criminal information system INPOL</a>.</strong> Historical data on the usage of the system shows a systematic year-on-year increase in the number of requests being made to the system by the German police (Monroy 2020), even though the number of criminal offences has gone down steadily over the past decade (Statista 2021).</p>
<p><img src="images/media/image4.png" style="width:6.07974in;height:3.33433in" alt="Chart, bar chart, histogram Description automatically generated" /></p>
<p>Figure 4. Growth in police requests to INPOL system<a href="#fn37" class="footnote-ref" id="fnref37" role="doc-noteref"><sup>37</sup></a></p>
</section>
<p><strong>Key points</strong></p>
<ul>
<li><p>The Hungarian Government led by Prime Minister Viktor Orbán has long been on a collision course with EU Institutions over the rule of law and the undermining of the country's judicial independence and democratic institutions.</p></li>
<li><p>Hungary is a frontrunner in Europe when it comes to authorising law enforcement's use of Facial Recognition Technology, developing a nationwide and centralised database (The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a>), and using the <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a> as part of the Government's Coronavirus measures.</p></li>
<li><p>The infrastructure in place that potentially allows for a centralised deployment of biometric mass surveillance technologies in Hungary has reached an unprecedented scale while the legal and ethical scrutiny of these technologies lags dangerously behind.</p></li>
<li><p>This is due to (1) the overlap between the private and public sectors, specifically government institutions, and (2) the complex entanglements biometric systems have with other information systems (such as car registries, traffic management, public transport monitoring and surveillance, etc.).</p></li>
<li><p>Although the latter are not concerned with the traces of the human body, they can nonetheless be used for and facilitate biometric mass surveillance. These entanglements create grey zones of biometric mass surveillance where the development and deployment of such technologies is hidden from visibility and critical scrutiny.</p></li>
<li><p>The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> has elicited numerous warnings regarding data protection and the rights to privacy from both public and private organisations. However, the lack of contestation and social debate around the issues of privacy and human rights in relation to such projects as the Hungarian Government’s Dragonfly is striking.</p></li>
</ul>
</div> <!-- key points -->
<p>Under the Government of Prime Minister Viktor Orbán, Hungary has been on a collision course with EU Institutions. It has centralised and consolidated its power by marginalising civil society and curtailing the autonomy of Hungarian media, cultural and higher education institutions (Csaky 2020; Gehrke 2020; Verseck 2020). Orbán’s continued <strong>erosion of the country’s democratic institutions</strong> was further advanced with the 2020 adoption of an emergency law which allows the government to rule by decree (Schlagwein 2020; Stolton 2020). In this context, the latest developments in using Biometric Identification Technologies in Hungary flag serious concerns regarding the rule of law, human rights and civil liberties.</p>
<p>Hungary is a frontrunner in Europe when it comes to authorising law enforcement’s use of Facial Recognition Technology, developing a nationwide and centralised database, and using the <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a> as part of the Government’s Coronavirus measures. The infrastructure in place that potentially allows for a <strong>centralised deployment of biometric mass surveillance technologies</strong> in Hungary has reached an unprecedented scale, while the legal and ethical scrutiny of these technologies lags dangerously behind. This is due to (1) <strong>the overlap between the private and public sectors</strong>, specifically government institutions, and (2) the <strong>complex entanglements biometric systems have with other information systems (such as car registries, traffic management, public transport monitoring and surveillance, etc.).</strong> Although the latter are not concerned with the traces of the human body, they can nonetheless be used for and facilitate biometric mass surveillance. These entanglements create <strong>grey zones</strong> of biometric mass surveillance where the development and deployment of such technologies is hidden from visibility and critical scrutiny.</p>
<h2 id="remote-biometric-identification-in-hungary">Remote Biometric Identification in Hungary</h2>
<h3 id="the-hungarian-polices-use-of-facial-recognition">The <a class="maplink" data-title="Hungarian Police">Hungarian Police</a>’s use of Facial Recognition</h3>
<p>On 10 December 2019 the Hungarian Parliament passed a package of amendments of acts for the work of law enforcement in Hungary. Entitled “the simplification and digitisation of some procedures”, this adjustment legalised <strong>the use of forensic but also live FRT by the <a class="maplink" data-title="Hungarian Police">Hungarian Police</a></strong> (Hungarian Parliament 2019). In cases when a person identified by the police cannot present an ID document, the police agents can take a photograph of the individual on location, take fingerprints, and record the biometric data based on “perception and measurement” of external characteristics. The photo taken on location can be instantly verified against the database of the national registry of citizens. The <strong>automatic search</strong> is performed by a face recognition algorithm and the five closest matches are returned to the police agent who, based on these photos, proceeds with identifying the person (1994. Évi XXXIV. Törvény, para 29/4(a)). This application of FRT does not fall under the category of mass surveillance; however, it is only possible due to <strong>a central system which collects and centralises the national and other biometric databases</strong> but also provides the technical support for accessing it in a quick and effective way by various operational units, in this instance by the patrolling police.</p>
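<p>The “five closest matches” lookup described above is, in essence, a nearest-neighbour search over face embeddings. The following minimal sketch illustrates that idea only; the embedding dimensionality, the cosine-distance metric, the threshold-free top-5 ranking, and all identifiers are illustrative assumptions, not details of the actual Hungarian system.</p>

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def closest_matches(probe, registry, k=5):
    """Return identifiers of the k registry entries closest to the probe.

    The system only ranks candidates; per the report, a police agent
    makes the actual identification from the returned photos.
    """
    ranked = sorted(registry.items(),
                    key=lambda item: cosine_distance(probe, item[1]))
    return [person_id for person_id, _ in ranked[:k]]

# Toy 3-dimensional vectors stand in for real face templates.
registry = {
    "id-001": [0.9, 0.1, 0.0],
    "id-002": [0.1, 0.9, 0.0],
    "id-003": [0.8, 0.2, 0.1],
    "id-004": [0.0, 0.0, 1.0],
    "id-005": [0.7, 0.3, 0.0],
    "id-006": [0.2, 0.8, 0.1],
}
probe = [0.85, 0.15, 0.05]  # embedding of the photo taken on location
print(closest_matches(probe, registry))
```

<p>Note that such a search always returns the k nearest entries, however dissimilar; this is one reason the legal framework leaves the final identification to a human officer rather than the algorithm.</p>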
<section id="the-dragonfly-szitakötő-project" class="level3">
<h3>The Dragonfly (Szitakötő) Project</h3>
<p>In 2018 the Ministry of Interior presented a bill in the Hungarian Government that proposed a <strong>centralised CCTV system with data stored in one centralised database called the Governmental Data Centre</strong> (Kormányzati Adatközpont, abbreviated as KAK). All governmental operations aiming at developing this centralised database run under the name <strong>Szitakötő (Dragonfly)</strong>. This central storage facility collects surveillance data of public spaces (streets, squares, parks, parking facilities, etc.); the Centre for Budapest Transport (BKK); bank security and the Hungarian Public Road PLC. The project, with an estimated budget of 50 billion forints (160 million euros), proposes to centralise about <strong>35,000 CCTV cameras and 25,000 terabytes of monitoring data</strong> from across the country (NAIH 2018). While the project, and notably the response of Dr. Attila Péterfalvi, head of the Hungarian National Authority for Data Protection and Freedom of Information (NAIH), who warned of the lack of data protection considerations in the bill, have been widely mediatised, this has done little to halt the Project, which has already been rolled out. In 2015 the Hungarian company GVSX Ltd. had already been contracted (NISZ-GVSX 2019) to implement an Integrated Traffic Management and Control System called IKSZR (Integrált Közlekedésszervezési és Szabályozási Rendszer) that centralises data from various systems such as ANPR cameras, car parks, traffic monitoring, meteorological data, etc. The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> has been designed as an expansion of this system by <strong>centralising the data flowing from both the IKSZR system, the databases of the National Infocommunication Services (NISZ) and also CCTV data from other public and private surveillance systems</strong> such as those operated by local governments, public transport companies and banks.</p>
<p>The technical description of the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> does not make any explicit reference to (live) facial recognition technology; however, the system <strong>collects, stores and searches, in real time, video surveillance footage from 35,000 CCTV cameras</strong>. From the reports of the <a class="maplink" data-title="HCLU">Hungarian Civil Liberties Union</a> (HCLU or TASZ in Hungarian) and the DPA, it is known (NAIH 2019, 139) that <strong>to some extent FRT has been used by the Secret Service for National Security (SSNS)</strong>, one of the national security services of Hungary. According to the DPA’s investigation, all the cases in which FRT has been used happened <strong>in relation to concrete (criminal) cases looking for a missing person or someone under warrant</strong>. These cases were also <strong>limited to specific geographic locations</strong> (NAIH 2019). According to the DPA’s investigation, in 2019 the FRT system operated by the SSNS found 6,000 matches, which resulted in around 250 instances of stop-and-search and 4 arrests (NAIH 2019). The numbers for 2020 are inconsistent with those given for 2019 (3 matches, 28 instances of stop-and-search, unknown number of arrests); however, this is probably due to the fact that <strong>the system has since been moved primarily to the jurisdiction of the <a class="maplink" data-title="Hungarian Police">Hungarian Police</a></strong>.</p>
<p>While the legal framework for police checks does refer to the use of facial recognition technologies, the national security act does not mention it. This is even more striking as the SSNS is <strong>known to be using FRT to provide the national security services, the police, or other authorised institutions (e.g., prosecutor’s office, tax office, etc.) with classified information</strong>.</p>
<p>Two interrelated companies are responsible for the development, maintenance, and administration of this single central system: <strong>the NISZ and <a class="maplink" data-title="IdomSoft">IdomSoft</a> Ltd., both owned by the state.</strong> The NISZ or National Infocommunication Services is a 100% state-owned company that in 2020 alone signed six contracts to purchase the necessary <strong>hardware, storage, and other IT equipment for implementing the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a></strong>. While Public Procurement documents (Közbeszerzési Hatóság, 2020) bear witness to the ongoing investments and development of the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> by the Hungarian Government, a comprehensive overview of the project, the stages of its implementation or its budget, is nowhere to be found.</p>
<p>The other company responsible for the administration of the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> is the <a class="maplink" data-title="IdomSoft">IdomSoft</a> company, a member of the so-called NISZ group. IdomSoft is a 100% indirect state-owned company (indirect ownership means that the government owns shares, but not through authorised state institutions or through other organisations) that, according to its website, “plays a leading role in the <strong>development, integration, installation and operation of IT systems of national importance</strong>”. Apart from administering the National Dragonfly Database, IdomSoft also assures the <strong>interoperability of the various national databases</strong> such as the citizens’ registry, passport and visa databases, car registries, and police alerts, and it connects the Hungarian databases to the <strong>Schengen Information System</strong> (SIS II).</p>
<p>Since the implementation of the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> the Hungarian government has been collecting video surveillance data that is centralised in the <strong>Governmental Data Centre</strong> (Kormányzati Adatközpont), in the same location and by the same institutions that administer the national registry of citizens, visa entries, police databases, and also other e-governmental databases such as those related to social security, the tax office or health records.</p>
<p>While the COVID-19 pandemic has brought a temporary halt of movement in public spaces, it also facilitated the <strong>introduction of new tracking technologies.</strong> Hungary is one of only two countries in Europe (Poland being the other) to introduce a <strong><a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a></strong> which uses automated face recognition technology to verify that people stay in quarantine for the required time.</p>
</section>
<section id="the-normalisation-of-biometric-surveillance-at-home-the-hungarian-home-quarantine-app" class="level3">
<h3>The normalisation of biometric surveillance at home: The Hungarian <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a></h3>
<p>In May 2020 Hungarian Authorities rolled out two digital applications, the contact-tracing app called <strong>VirusRadar</strong> (Kaszás 2020) and the <strong><a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a></strong> (Házi Karantén Rendszer, abbreviated HKR). Both of these apps are centralised tracing apps, meaning that they send contact logs with pseudonymised personal data to a central (government) back-end server (Council of Europe 2020, 28). While VirusRadar only uses Bluetooth data and proximity of other devices, the <strong>HKR processes biometric data</strong> when comparing facial images of its users.</p>
<p>Those who, according to the COVID-19 regulations in Hungary, are confined to home quarantine are offered the option to use the app instead of being checked by the police. For those who return from abroad, the use of the app is compulsory. But even those who can choose are encouraged by the authorities to make use of the HKR app, as otherwise they will be subjected to frequent visits by police agents. <strong>Once a person downloads the app, its use becomes compulsory</strong> and failure to do so or attempts to evade its tracking is considered an administrative offence. From a data protection law point of view, this is a clear case where the data subject’s consent (and in the case of biometric data, their explicit consent) cannot provide the lawful ground for the processing of data through the app (see section 4.2.2). Even if the processing can be based on another lawful ground such as public interest, the punitive nature of non-compliance may raise issues in terms of adhering to the necessity test, which requires a balancing act between the objective pursued and the data subject’s interests.</p>
<p>The HKR app is <strong>developed by Asura Technologies and implemented by <a class="maplink" data-title="IdomSoft">IdomSoft</a> Ltd.</strong>, the same company that provides the software and technical implementation for the nation-wide <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a>. The HKR application works with <strong>face recognition technology combined with location verification</strong>. The application sends notifications at random times prompting the user to <strong>upload a facial image</strong> while retrieving the location data of the mobile device. The user must respond within 15 minutes and the location data must match the address registered for quarantine. In order for the <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a> to work, the user first needs to upload a facial image, which is compared by a police officer with the photo of the same individual stored in the central database. After this <strong>facial verification</strong>, the app creates <strong>a biometric template on the mobile phone of the user</strong> and the photo is deleted. The consecutive photos are only compared to this biometric template, so neither the photos nor the template leave the personal device. If there is suspicion about the identity or whereabouts of the user, a police officer visits the address to make sure that the person is adhering to the quarantine rules.</p>
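<p>The check flow described above combines three tests: a 15-minute response deadline, a location match, and an on-device comparison against the stored biometric template. The sketch below illustrates that logic only; the similarity threshold, the dot-product comparison (which assumes unit-length embeddings), and all names are illustrative assumptions, not details of the actual HKR implementation.</p>

```python
from datetime import datetime, timedelta

RESPONSE_WINDOW = timedelta(minutes=15)  # deadline stated in the report
MATCH_THRESHOLD = 0.9                    # hypothetical similarity cut-off

def verify_check(prompt_time, reply_time, reply_location, quarantine_address,
                 template, selfie_embedding):
    """Evaluate one random quarantine check entirely on the device.

    Returns a list of failure reasons; an empty list means the check
    passed and neither the photo nor the template leaves the phone.
    """
    failures = []
    if reply_time - prompt_time > RESPONSE_WINDOW:
        failures.append("missed 15-minute response window")
    if reply_location != quarantine_address:
        failures.append("location does not match registered address")
    # Compare the fresh selfie against the locally stored biometric template
    # (dot product of unit-length embeddings, i.e. cosine similarity).
    score = sum(a * b for a, b in zip(template, selfie_embedding))
    if score < MATCH_THRESHOLD:
        failures.append("face does not match enrolled template")
    return failures

prompt = datetime(2021, 5, 1, 10, 0)
print(verify_check(prompt, prompt + timedelta(minutes=10),
                   "addr-1", "addr-1", [1.0, 0.0], [1.0, 0.0]))  # prints []
```

<p>Keeping the comparison on the device, as the report describes, is a deliberate privacy design choice: only the pass/fail outcome, not the biometric data itself, needs to reach the authorities.</p>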
<p>Interestingly, the HKR app, just like the contact-tracing app VirusRadar (developed by Nextsense), has been <strong>“donated” to the Hungarian Government by Asura Technologies “free of charge”</strong>.</p>
</section>
<section id="mobilisations-and-contestations-4" class="level2">
<h2>Mobilisations and contestations</h2>
<p>The <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> has elicited <strong>numerous warnings</strong> regarding data protection and the rights to privacy from both public and private organisations (TASZ 2021). In October 2018 the <strong>Hungarian National Authority for Data Protection and Freedom of Information (NAIH)</strong> filed a communique (NAIH 2018) in which it stresses the problems raised by the centralisation and storing of visual data from as many as 35,000 CCTV cameras from all over the country and public transport facilities, resulting in 25,000 terabytes of surveillance data.</p>
<p>The main concerns, according to the NAIH, stemmed from the fact that <strong>once the surveillance data is centralised the collecting bodies stop being the official administrators of these databases</strong>. Moreover, they won’t even know how and by whom the data is collected, accessed and utilised, or for what purposes. What is even more worrisome according to this communique is that the <strong>centralised database (Governmental Data Centre) would not administer the data either; it would only process it</strong>. Therefore, while the database can be accessed and more or less freely “used” by a number of clients (such as government organisations, law enforcement, secret services), there is <strong>no legal body who is responsible for applying the data protection measures or who would be liable in case of transgressions.</strong> Eventually the government incorporated some of the suggestions: the owners of the data remain the uploading bodies, to whom requests for accessing the database have to be addressed by the different authorised bodies (e.g., the <a class="maplink" data-title="Hungarian Police">Hungarian Police</a>).</p>
<p>Independent Hungarian media has also picked up the news. For instance, Hungary’s <strong>leading independent economic and political weekly HVG</strong> has published an article in which they outline the bill and cite the head of the NAIH (Dercsényi 2018). Interestingly, the article starts with an announcement/amendment saying that HVG expresses its regrets for violating the good reputation of the Ministry of the Interior when claiming that the bill has not incorporated the suggestions from the NAIH, which is not true (Dercsényi 2018). However, the article still claims the opposite. <strong>Other liberal online news sites and magazines</strong> such as Magyar Narancs (Szalai 2019), 444.hu (Herczeg 2019) and 24.hu (Kerékgyártó 2018; Spirk 2019) also report on the case. However, <strong>the main pro-government newspapers such as Magyar Nemzet remain silent.</strong></p>
<p>More recently, in January 2021, <strong>INCLO, a network of civil liberties NGOs</strong>, published a report (INCLO 2021) in which they discuss the Hungarian case and specifically the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> as an example of how the employment of FRT is at odds with the right to privacy and civil liberties. They mainly flag their concern that, <strong>due to the inadequate regulation, FRT can be used in conjunction with the CCTV network developed at an alarming rate.</strong></p>
<p>In an interview, one of the authors of the INCLO case study, legal expert Ádám Remport, explains:</p>
<section id="regarding-secret-surveillance-in-general-the-problem-is-the-lack-of-adequate-supervision-and-an-effective-remedial-system.-the-legal-provisions-governing-national-security-agencies-are-mostly-satisfactory.-however-they-are-not-necessarily-enforced-or-if-they-are-breached-theres-no-way-to-find-out.-not-via-the-court-which-is-what-our-latest-cases-show-not-via-parliaments-national-security-committee-due-to-the-quorum-in-order-for-the-national-security-committee-to-be-operational-the-majority-of-its-members-must-be-present.-given-that-the-ruling-fidesz-and-kdnp-parties-hold-more-than-half-of-the-seats-if-they-decide-to-boycott-the-committee-they-can-prevent-it-from-performing-its-job.-this-has-already-happened-on-several-occasions-when-the-committee-was-supposed-to-look-into-surveillance-cases-which-would-potentially-have-been-politically-unfeasible-for-the-government.-interview-by-author-with-ádám-remport-2021" class="level4 Quote">
<blockquote class="Quote">“Regarding secret surveillance in general, the problem is the <strong>lack of adequate supervision and an effective remedial system</strong>. The legal provisions governing national security agencies are mostly satisfactory. However, they are not necessarily enforced, or <strong>if they are breached, there’s no way to find out</strong>. Not via the court —which is what our latest cases show— not via Parliament’s national security committee, due to the quorum: in order for the national security committee to be operational, the majority of its members must be present. Given that the ruling Fidesz and KDNP parties hold more than half of the seats, if they decide to boycott the committee, they can prevent it from performing its job. This has already happened on several occasions when the committee was supposed to look into surveillance cases which would potentially have been politically unfeasible for the government.” <footer>(Interview by author with Ádám Remport 2021)</footer></blockquote>
<p>The lack of contestation and social debate around issues of privacy and human rights in relation to projects such as the Hungarian Government’s Dragonfly is striking. While information about the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> has sporadically reached the wider public, <strong>any discussion of the face recognition technologies employed by the HKR App has been missing</strong>.</p>
</section>
</section>
<section id="effects-of-the-technologies-3" class="level2">
<h2>Effects of the technologies</h2>
<p><strong>State operated and centralised mass surveillance systems</strong>, such as the <a class="maplink" data-title="Dragonfly Project">Dragonfly Project</a> currently under development in Hungary, bring up at least two sets of questions with regard to their societal and political effects. The first set of questions concerns <strong>visibility and the (lack of) possibility for societal debate and contestation</strong>. The second concerns the <strong>grey areas of legislation and regulation</strong>. When the development and employment of such novel technologies as biometric video surveillance and (live) facial recognition become <strong>entangled with the national interest</strong> of reinforcing public order, preventing terrorism, and fighting criminality, or, as with the <a class="maplink" data-title="Home Quarantine App Hungary">Home Quarantine App</a>, reinforcing Coronavirus measures, the ability of oversight bodies to scrutinise them effectively might be seriously compromised. The Hungarian Governmental Decree of 16 March 2020 is a case in point. While the decree authorises the Minister for Innovation and Technology and an operational body consisting of representatives of the Ministry of Interior, the police, and health authorities to “<strong>acquire and process any kind of personal data from private or public entities,</strong> including traffic and location data from telecommunication providers, <strong>with a very broad definition of the purpose for which the data can be used</strong>” (Council of Europe 2020, 12), ordinary courts have at the same time been suspended, thus preventing the Constitutional Court from reviewing the proportionality of measures introduced under emergency conditions (Ibid., 10).</p>
<p>Using such technologies for the so-called public good can even attract the <strong>support of residents</strong> who want to live in safe and predictable environments. The fact that these public environments are “secured” at the expense of <strong>curtailing the human rights to privacy and to one’s face and biometric data</strong> is often overlooked by the public. As the human rights NGO “<a class="maplink" data-title="HCLU">Hungarian Civil Liberties Union</a>” has put it in a recent publication:</p>
<section id="the-introduction-of-facial-recognition-systems-is-almost-never-preceded-by-social-debate.-its-widespread-application-and-the-omission-of-public-consultation-can-lead-to-the-normalisation-of-continuous-surveillance-and-a-violation-of-rights-where-states-possess-the-ability-of-knowing-where-we-go-whom-we-meet-what-pubs-or-churches-we-visit-inclo-2021." class="level4 Quote">
<blockquote class="Quote">“[…] the introduction of facial recognition systems is almost never preceded by social debate. Its widespread application and the omission of public consultation can lead to the normalisation of continuous surveillance and a violation of rights, where states possess the ability of knowing where we go, whom we meet, what pubs or churches we visit.” <footer>(INCLO 2021)</footer> </blockquote>
<p>To bring awareness to these issues, there is a need for a <strong>strong civil society and independent media</strong> which, if seriously compromised, as in the case of Hungary, can do little to educate the general public. Talking about the <strong>lack of a legal framework</strong> with regard to the use of face recognition technologies by the Hungarian Secret Services, Ádám Remport explained:</p>
<ul>
<li><p>From a technical perspective, <strong>biometric mass surveillance can easily emerge by connecting different elements of a technical infrastructure</strong> (video acquisition capacities, processing algorithms, biometric datasets) <strong>developed in other contexts.</strong></p></li>
<li><p>For example, while the <strong>forensic use of facial recognition</strong> is not a form of <strong>remote biometric identification</strong> per se, the adoption of such systems has allowed for the creation of biometrically searchable national datasets. These datasets are one piece of a potential <strong>biometric mass surveillance</strong> infrastructure, which can become a technical reality if live camera feeds, processed through live facial recognition software, are connected to them.</p></li>
<li><p>In order to maintain democratic oversight over the uses of the infrastructure, and <strong>avoid the risk of function creep</strong> (i.e. when a technology is being used beyond its initial purpose) it is thus imperative that the principle of <strong>purpose limitation</strong> is systematically enforced and strictly regulated with regard to the <strong>type of data</strong> (criminal or civilian datasets, datasets generated from social media, as in the <a class="maplink" data-title="Clearview AI">Clearview AI</a> controversy) against which biometric searches can be performed.</p></li>
</ul>
<p><strong>6. The EU should support voices and organisations which are mobilised for the respect of EU fundamental rights</strong></p>
<ul>
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>https://edps.europa.eu/press-publications/press-news/press-releases/2021/edpb-edps-call-ban-use-ai-automated-recognition_en<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>The one-and-a-half-meter monitor is trained on the COCO dataset, published by <strong>Microsoft</strong> and <strong><a class="maplink" data-title="Facebook AI Research">Facebook AI</a></strong><a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn3" role="doc-endnote"><p>Relatedly, see the Spotify controversy (Access Now 2021)<a href="#fnref3" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn4" role="doc-endnote"><p>For example, in a 4K UHD image, composed of 3840 × 2160 pixels, a face occupying 300 × 300 pixels would occupy approximately 1/100<sup>th</sup> of the screen’s surface. In an HD image composed of 1920 × 1080 pixels, the same 300 × 300 pixel face would occupy about 1/25<sup>th</sup> of the screen’s surface.<a href="#fnref4" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
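The back-of-the-envelope ratios in this footnote can be checked with a short sketch; the 300 × 300 pixel face size and the two frame resolutions are those given above, and the function name is illustrative only:

```python
# Sketch checking footnote 4's arithmetic: what fraction of a video frame
# a 300 x 300 pixel face occupies at 4K UHD and Full HD resolutions.
def face_fraction(frame_w: int, frame_h: int, face_px: int = 300) -> float:
    """Return the share of the frame area covered by a face_px x face_px face."""
    return (face_px * face_px) / (frame_w * frame_h)

print(round(1 / face_fraction(3840, 2160)))  # 4K UHD: 92, i.e. roughly 1/100th
print(round(1 / face_fraction(1920, 1080)))  # Full HD: 23, i.e. roughly 1/25th
```

The exact ratios are 1/92 and 1/23, which the footnote rounds to 1/100th and 1/25th.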
<li id="fn5" role="doc-endnote"><p>“The ViolaJones object detection framework is an object detection framework which was proposed in 2001 by Paul Viola and Michael Jones. Although it can be trained to detect a variety of object classes, it was motivated primarily by the problem of face detection.” Wikipedia, “ViolaJones object detection framework” <a href="https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework">https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework</a><a href="#fnref5" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn6" role="doc-endnote"><p>See: <a href="http://vis-www.cs.umass.edu/lfw/results.html">http://vis-www.cs.umass.edu/lfw/results.html</a>, <a href="https://cocodataset.org/#detection-leaderboard">https://cocodataset.org/#detection-leaderboard</a> and <a href="https://www.nist.gov/programs-projects/face-challenges">https://www.nist.gov/programs-projects/face-challenges</a>.<a href="#fnref6" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn7" role="doc-endnote"><p>Both projects were shut down by the <strong>CNIL,</strong> the French DPA.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn8" role="doc-endnote"><p>The project was shut down by the <strong><a class="maplink" data-title="Swedish Authority for Privacy Protection (IMY)">Swedish Authority for Privacy Protection (IMY)</a></strong><a href="#fnref8" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn9" role="doc-endnote"><p>Criminal identification database, used by the <strong><a class="maplink" data-title="Austrian Criminal Intelligence Service">Austrian Criminal Intelligence Service</a>.</strong><a href="#fnref9" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn10" role="doc-endnote"><p>The KASTU system interrogates two datasets: the <strong>Registered persons identifying features database (RETU)</strong> and <strong>Aliens database.</strong> It is managed by the <a class="maplink" data-title="National Bureau of Investigation (NBI)">National Bureau of Investigation (NBI)</a>, and can be used by the <strong><a class="maplink" data-title="Finnish Police">Finnish Police</a></strong>, the <strong><a class="maplink" data-title="Finnish Border Guard">Finnish Border Guard</a></strong> and the <strong><a class="maplink" data-title="Finnish Customs">Finnish Customs</a></strong>.<a href="#fnref10" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn11" role="doc-endnote"><p>Criminal case history database, managed by the <strong>French Ministry of Interior</strong><a href="#fnref11" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn12" role="doc-endnote"><p>Criminal case management system, managed by the <strong>German Federal Criminal Police Office</strong> (Bundeskriminalamt)<a href="#fnref12" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn13" role="doc-endnote"><p>Managed by the <strong>Video and Image Laboratory</strong> of the Audiovisual Evidence of the Department of Photography and Modus Operandi of the <a class="maplink" data-title="Hellenic Police Forensic Science Division">Hellenic Police Forensic Science Division</a><a href="#fnref13" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn14" role="doc-endnote"><p>The Facial Image registry is interrogated through a search engine developed by <a class="maplink" data-title="NEC">NEC</a>, and accessible to the <strong>National Investigation Agency</strong>, the <strong>Criminal Courts</strong>, the <strong>National Protective Service</strong>, the <strong>Counter-Terrorism Centre</strong>, the <strong>Hungarian Prison Service</strong>, the <strong>Prosecution Service of Hungary</strong>, the <strong>Public Administration</strong>, the <strong>Special Service for National Security</strong>, the <strong>Intelligence Agencies</strong>, the <strong><a class="maplink" data-title="Hungarian Police">Hungarian Police</a></strong>, the <strong>Hungarian Parliamentary Guard</strong>, the <strong><a class="maplink" data-title="Hungarian Ministry of Justice">Hungarian Ministry of Justice</a></strong>, the <strong>Witness Protection Service</strong>, the <strong>National Directorate-General for Aliens Policing</strong>, and the <strong>Institution of the President of the Republic</strong>.<a href="#fnref14" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn15" role="doc-endnote"><p>Automated Fingerprint Identification System. The system can be interrogated via software developed by the company <strong><a class="maplink" data-title="Reco 3.26">Reco 3.26</a></strong>, a subsidiary of <strong><a class="maplink" data-title="Parsec 3.26">Parsec 3.26</a></strong>. Another software package is provided by the Japanese company <strong><a class="maplink" data-title="NEC">NEC</a></strong>.<a href="#fnref15" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn16" role="doc-endnote"><p>Biometric Data Processing System (criminal data array), supported by database software from <strong><a class="maplink" data-title="RIX Technologies">RIX Technologies</a></strong><a href="#fnref16" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn17" role="doc-endnote"><p>Habitoscopic Data Register<a href="#fnref17" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn18" role="doc-endnote"><p>Central Automatic TeChnology for Recognition of Persons, managed by the <strong><a class="maplink" data-title="Centrum voor Biometrie">Centrum voor Biometrie</a></strong>, connected to the <strong><a class="maplink" data-title="Dutch Judicial Information Service (Justid)">Dutch Judicial Information Service (Justid)</a>.</strong><a href="#fnref18" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn19" role="doc-endnote"><p>The database uses <strong>VeriLook</strong> and <strong>Face Trace</strong> software from the Lithuanian company <strong><a class="maplink" data-title="Neurotechnology">Neurotechnology</a>.</strong><a href="#fnref19" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn20" role="doc-endnote"><p>Automated Biometric Identification System, searchable by the <strong>IntellQ</strong> software from the company <strong><a class="maplink" data-title="IntellByte">IntellByte</a>,</strong> managed by the <strong>Ministry of the Interior (Croatia).</strong><a href="#fnref20" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn21" role="doc-endnote"><p>Central Biometric Information System<a href="#fnref21" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn22" role="doc-endnote"><p>National Biometric Identification System<a href="#fnref22" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn23" role="doc-endnote"><p>Managed by the <strong>Photographic and Graphic Laboratory of Criminalistic Services,</strong> using search software by the company <strong><a class="maplink" data-title="Unidas">Unidas</a></strong><a href="#fnref23" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn24" role="doc-endnote"><p><a class="maplink" data-title="IFRS (Interpol)">Interpol Facial Recognition System</a><a href="#fnref24" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn25" role="doc-endnote"><p>Source: TELEFI Report p.23. [[ NOTE TO THE GREENS: A new map should be made to match reports design at a later stage in the publication process ]]<a href="#fnref25" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn26" role="doc-endnote"><p>As detailed in CHAPTER 4. However, that does not mean that it is not subjected to similar legal frameworks.<a href="#fnref26" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn27" role="doc-endnote"><p>Developed as a partnership between the <strong>Dutch Ministry of Justice &amp; Security</strong>, the <strong><a class="maplink" data-title="Dutch Institute for Technology Safety and Security (DITSS)">Dutch Institute for Technology Safety and Security (DITSS)</a></strong>, the <strong><a class="maplink" data-title="Rotterdam Municipality">Rotterdam Municipality</a></strong>, the insurance company <strong><a class="maplink" data-title="Interpolis">Interpolis</a></strong>, the <strong><a class="maplink" data-title="Dutch Police">Dutch Police</a></strong>, <strong><a class="maplink" data-title="ViNotion">ViNotion</a></strong>, <strong><a class="maplink" data-title="Avans Hogeschool">Avans Hogeschool</a></strong>, <strong><a class="maplink" data-title="Munisense">Munisense</a></strong>, <strong><a class="maplink" data-title="Sustainder">Sustainder</a></strong>, <strong><a class="maplink" data-title="Twente University">Twente University</a></strong>, the <strong>Max Planck Institute for the Study of Crime, Security and Law</strong>, and <strong><a class="maplink" data-title="The Network Institute">The Network Institute</a> (Vrije Universiteit Amsterdam)</strong>.<a href="#fnref27" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn28" role="doc-endnote"><p>Developed in partnership between the <strong><a class="maplink" data-title="Dutch Institute for Technology Safety and Security (DITSS)">Dutch Institute for Technology Safety and Security (DITSS)</a></strong>, <strong><a class="maplink" data-title="Atos">Atos</a></strong>, <strong>the <a class="maplink" data-title="Municipality of Eindhoven">Municipality of Eindhoven</a></strong>, <strong><a class="maplink" data-title="Tilburg University">Tilburg University</a></strong>, <strong><a class="maplink" data-title="Eindhoven University of Technology">Eindhoven University of Technology</a></strong>, <strong><a class="maplink" data-title="Intel">Intel</a></strong>, <strong><a class="maplink" data-title="Sorama">Sorama</a></strong>, and <strong>Axis Communications</strong>; it uses search software from <strong><a class="maplink" data-title="Oddity.ai">Oddity.ai</a></strong> and <strong><a class="maplink" data-title="ViNotion">ViNotion</a></strong>.<a href="#fnref28" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn29" role="doc-endnote"><p>The Defenseur des Droits is a governmental watchdog on civil rights and liberties in France. See Defenseur des Droits (2021) for the call for a ban on facial recognition.<a href="#fnref29" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn30" role="doc-endnote"><p>While the <strong>eID project</strong> is not specific to Belgium, the country stands out for having piloted the project ahead of other EU member states. eID is a form of authentication rather than a surveillance system; yet, as argued in Chapter 3, the constitution of a database of machine-readable identities contributes to the construction of a digital infrastructure of surveillance that can be misused for biometric mass surveillance.<a href="#fnref30" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn31" role="doc-endnote"><p>The COC, or Supervisory Body for Police Information, is “the autonomous federal parliamentary body in charge of monitoring the management of police information and also the data controller for the integrated police service, the Passenger Information Unit and the General Inspectorate of the Federal and the Local Police” (Organe de Controle de l'Information Policière 2021).<a href="#fnref31" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn32" role="doc-endnote"><p>https://reclaimyourface.eu/<a href="#fnref32" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn33" role="doc-endnote"><p>The other companies are: Arclan Systems, Business Card Associates, <a class="maplink" data-title="Deveryware">Deveryware</a>, Egidium, Gemalto, Geol Semantics, Igo, Inria, Luceor, Onhys, <a class="maplink" data-title="IDEMIA">IDEMIA</a>, Sys, Sysnav and Yncrea.<a href="#fnref33" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn34" role="doc-endnote"><p>Banque Publique d’Investissement: French Public Investment Bank<a href="#fnref34" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn35" role="doc-endnote"><p>Comité de la Filière industrielle de la sécurité: French committee for the security industry sector<a href="#fnref35" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn36" role="doc-endnote"><p>For the campaign, see: http://www.technopolice.fr<a href="#fnref36" class="footnote-back" role="doc-backlink">↩︎</a></p></li>