German Federal Police
CATCH: Central Automatic TeChnology for Recognition of Persons (Netherlands)
Court of Justice of the European Union (EU)
CNIL: National Commission for Informatics and Freedoms (France)
EU: European Union
FRA
HCLU: Hungarian Civil Liberties Union (Hungary). See “
HD
HKR: Home Quarantine App (Hungary)
IARPA
IFRS: Interpol Facial Recognition System
IKSZR
LQDN: La Quadrature du Net (France)
GMO
Non-Governmental Organisation
NIST: National Institute of Standards and Technology (USA)
TASZ: Hungarian Civil Liberties Union
TELEFI
  • Several French cities have launched “safe city” projects involving biometric technologies; however, Nice is arguably the national leader. The city currently has the highest CCTV coverage of any city in France and more than double the police officers per capita of the neighbouring city of Marseille.

  • Through a series of public-private partnerships, the city launched a number of initiatives using RBI technologies (including emotion recognition and facial recognition). These technologies were deployed for both authentication and surveillance purposes, with some falling into the category of biometric mass surveillance.

  • One project, which used FRT at a high school in Nice and another in Marseille, was eventually declared unlawful. The court determined that the required consent could not be obtained because of the power imbalance between the targeted public (students) and the public authority (the educational establishment). The case highlights important issues about the deployment of biometric technologies in public spaces.

  • The use of biometric mass surveillance by the mayor of Nice, Christian Estrosi, has put him on a collision course with the French Data Protection Authority (CNIL) as well as with human rights and digital rights organisations (Ligue des Droits de l’Homme, La Quadrature du Net). His activities have drawn both concern and criticism over the use of these technologies and their potential impact on the privacy of personal data.

  • CHAPTER 9: Facial Recognition in Südkreuz Berlin, Hamburg G20 and Mannheim (Germany)

    CHAPTER 11: Recommendations

    1. The EU should prohibit the deployment of both indiscriminate and “targeted” Remote Biometric and Behavioural Identification technologies in public spaces, as it amounts to mass surveillance.

    6. The EU should support voices and organisations which are mobilised for the respect of EU fundamental rights.

    Since the widespread adoption of neural network algorithms in 2012, artificial intelligence applied to the field of security has steadily grown into a political, economic, and social reality. As examples from Singapore, the UK, South Africa, and China demonstrate, the image of a digital society of control, in which citizens are monitored through algorithmically processed audio and video feeds, is becoming a tangible possibility in the European Union.

    Through a set of “pilot projects”, private and public actors (supermarkets, casinos, city councils, border guards, local and national law enforcement agencies) are increasingly deploying a wide array of “smart surveillance” solutions. Among them is remote biometric identification: security mechanisms “that leverage unique biological characteristics” such as fingerprints, facial images, iris or vascular patterns to “identify multiple persons’ identities at a distance, in a public space and in a continuous or ongoing manner by checking them against data stored in a database” (European Commission 2020b, 18). European institutions have reacted with a series of policy initiatives in recent years but, as we will show in this report, if left unchecked, remote biometric identification technologies can easily become instruments of biometric mass surveillance.

    Among technologies of remote biometric identification, facial recognition has been at the centre of public debate. The foregrounding of this specific use case of computer vision has allowed concerned actors to raise awareness of the dangers of artificial intelligence algorithms applied to biometric datasets. But it has also generated confusion. The perception that facial recognition is a single type of technology (i.e., an algorithm “that recognises faces”) has obscured the broad range of applications of “smart technologies” within very different bureaucratic contexts: from the live facial recognition of video feeds deployed in “smart cities” for the purpose of public space surveillance, to much more specific, on-the-spot searches by law enforcement for the purpose of carrying out arrests or forensic investigations.

    The international context

    The concern over the uncontrolled deployment of remote biometric identification systems emerges in a context characterised by the development of these technologies in authoritarian regimes; by controversial “pilot” projects within European “smart city” initiatives; by revelations about the privacy practices of companies such as Clearview AI; and, finally, by the structuring of a US and EU debate around some of the key biases and problems these systems entail.

    In 2013, the Chinese authorities officially revealed the existence of Skynet, a large mass surveillance system involving more than 20 million cameras, which had been in place since 2005. While the cameras were aimed at the general public, more targeted systems were deployed in provinces such as Tibet and Xinjiang, where political groups contest the authority of Beijing. In 2018, the surveillance system was coupled with a system of social credit, and Skynet became increasingly connected to facial recognition technology (Ma 2018; Jiaquan 2018). By 2019, it was estimated that Skynet had reached 200 million face-recognition-enabled CCTV cameras (Mozur 2018).

    The intrusiveness of the system, and its impact on fundamental rights, is best exemplified by its deployment in the Xinjiang province. The provincial capital, Urumqi, is dotted with checkpoints and identification stations. Citizens must submit to facial recognition ID checks in supermarkets, hotels, train stations, highway stations and several other public spaces (Chin and Bürge 2017). The information collected through the cameras is centralised and matched against other biometric data such as DNA and voice samples. This allows the government to attribute trustworthiness scores (trustworthy, average, untrustworthy) and thus generate a list of individuals who can become candidates for detention (Wang 2018).

    European countries’ deployments are far from the Chinese experience. But the companies involved in China’s pervasive digital surveillance network (such as Tencent, Dahua Technology, Hikvision, SenseTime, ByteDance and Huawei) are exporting their know-how to Europe in the form of “safe city” packages. Huawei is one of the most active in this regard. On the European continent, the city of Belgrade, for example, has deployed an extensive network of more than 1,000 cameras which collect up to 10 body and facial attributes (Stojkovski 2019). The cameras, deployed on poles, at major traffic crossings and in a large number of public spaces, allow the Belgrade police to monitor large parts of the city centre, collect biometric information and communicate it directly to police officers deployed in the field. Belgrade has the most advanced deployment of Huawei’s surveillance technologies on the European continent, but similar projects are being implemented by other corporations – including the European companies Thales, Engie Ineo and Idemia – in other European cities, and many “safe city” deployments are planned in EU countries such as France, Italy, Spain, Malta, and Germany (Hillman and McCalpin 2019). Furthermore, contrary to the idea that China is the sole exporter of remote biometric identification technologies, EU companies have substantially developed their exports in this domain over recent years (Wagner 2021).

    The turning point in public debates on facial recognition in Europe was probably the Clearview AI controversy of 2019-2020. Clearview AI, a company founded by Hoan Ton-That and Richard Schwartz in the United States, maintained a relatively low profile until a New York Times article revealed in late 2019 that it was selling facial recognition technology to law enforcement. In February 2020, it was reported that Clearview AI’s client list had been stolen, and a few days later the details of the list were leaked (Mac, Haskins, and McDonald 2020). To the surprise of many in Europe, in addition to US government agencies and corporations, it appeared that the Metropolitan Police Service (London, UK), as well as law enforcement agencies from Belgium, Denmark, Finland, France, Ireland, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Serbia, Slovenia, Spain, Sweden, and Switzerland, were on the client list. The controversy grew as it emerged that Clearview AI had (semi-illegally) harvested a large number of images from social media platforms such as Facebook, YouTube and Twitter in order to constitute the datasets against which clients were invited to carry out searches (Mac, Haskins, and McDonald 2020).

    The news of the hacking strengthened an already strong push-back against the development of facial recognition technology by companies such as Clearview AI, as well as its use by government agencies. In 2018, Massachusetts Institute of Technology (MIT) scholar and Algorithmic Justice League founder Joy Buolamwini, together with Timnit Gebru, had published the report Gender Shades (Buolamwini and Gebru 2018), in which they assessed racial bias in the face recognition datasets and algorithms used by companies such as IBM and Microsoft. Buolamwini and Gebru found that algorithms generally performed worse on darker-skinned faces, and in particular on darker-skinned females, with error rates up to 34% higher than for lighter-skinned males (Najibi 2020). IBM and Microsoft responded by amending their systems, and a re-audit showed less bias. Not all companies responded equally. Amazon’s Rekognition system, which was included in the second study, continued to show a 31% error rate for darker-skinned females. The same year, the ACLU conducted another key study on Amazon’s Rekognition, running the pictures of members of Congress against a dataset of law enforcement mugshots. 28 members of Congress, largely people of colour, were incorrectly matched (Snow 2018). Activists engaged lawmakers. In 2019, the proposed Algorithmic Accountability Act would have allowed the Federal Trade Commission to regulate private companies’ uses of facial recognition. In 2020, several companies, including IBM, Microsoft, and Amazon, announced a moratorium on the development of their facial recognition technologies. Several US cities, including Boston, Cambridge (Massachusetts), San Francisco, Berkeley, and Portland (Oregon), have also banned their police forces from using the technology.

    Legislative activity accelerated in 2018. The European Commission (2018a) published a communication, Artificial Intelligence for Europe, in which it called for a joint legal framework for the regulation of AI-related services. Later in the year, the Commission (2018b) adopted a Coordinated Plan on Artificial Intelligence with similar objectives, which compelled EU member states to adopt national strategies on artificial intelligence meeting the EU requirements, and allocated 20 billion euros per year for investment in AI development (Andraško et al. 2021, 4).

    In 2019, the Council of Europe Commissioner for Human Rights published a Recommendation entitled Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights, which describes several steps for national authorities to maximise the potential of AI while preventing or mitigating the risk of its misuse (Gonzalez Fuster 2020, 46). The same year, the European Union’s High-Level Expert Group on Artificial Intelligence (AI HLEG) adopted the Ethics Guidelines for Trustworthy Artificial Intelligence, a key document for the EU strategy of bringing AI within ethical standards (Nesterova 2020, 3).

    In February 2020, the new European Commission went one step further in regulating matters related to AI, adopting the digital agenda package – a set of documents outlining the strategy of the EU in the digital age. Among these documents, the White Paper on Artificial Intelligence: a European approach to excellence and trust captured most of the Commission’s intentions and plans.

    Over the past three to four years, positions on the use of facial recognition, and more specifically on the use of remote biometric identification in public space, have progressively crystallised into four camps (for a more detailed analysis of these positions, see Chapter 5).

    Active promotion

    A number of actors, at both the national and the local level, are pushing for the development and extension of remote biometric identification. At the local level, figures such as Nice’s mayor Christian Estrosi (France) have repeatedly challenged Data Protection Authorities, arguing for the usefulness of such technologies in the face of insecurity (for a detailed analysis, see Chapter 8 of this report; see also Barelli 2018). At the national level, biometric systems for authentication purposes are increasingly deployed for forensic applications among law enforcement agencies in the European Union. As we elaborate in Chapter 3, 11 of the 27 EU member states are already using facial recognition against biometric databases for forensic purposes, and 7 additional countries are expected to acquire such capabilities in the near future. Several states that have not yet adopted such technologies seem inclined to follow the trend, and to push further. Belgian Minister of the Interior Pieter De Crem, for example, recently declared himself in favour of the use of facial recognition not only for judicial inquiries but also for live facial recognition, a much rarer stance. Such outspoken advocates of the use of RBI constitute an important voice, but do not find an echo in mainstream EU discussions.

    Support with safeguards

    Ban

    Finally, a growing number of actors consider that there is enough information about remote biometric identification in public space to determine that it will never be able to comply with the European Union’s strict requirements on fundamental rights, and that it should therefore be banned entirely. This is the current position of the European Data Protection Supervisor (EDPS 2021), the Council of Europe and a large coalition of NGOs gathered under the umbrella of the European Digital Rights organisation (EDRi 2020). In the European Parliament, the position has been defended most vocally by the European Greens, but it has also been shared by several other voices, such as members of the Party of the European Left, the Party of European Socialists and Renew Europe (Breyer et al. 2021).

  • RBI technologies are subject to technical challenges and limitations which should be considered in any broader analysis of their ethical, legal, and political implications.

    In order to grasp the various facets of remote biometric identification that could potentially lead to biometric mass surveillance, this section provides an overview of the currently available technologies: how they work, what their limitations are, and where and by whom they are deployed in the European Union.

    Remote Biometric Identification and classification: defining key terms

    Although there is a growing number of technologies based on inputs other than images (photographs or videos), such as voice recognition (audio), LIDAR scans or radio waves, the current market for remote biometric identification is overwhelmingly dominated by image-based products, at the centre of which is face recognition. In the following sections we thus focus primarily on image-based products.

    People tracking and counting

    This is perhaps the form of person tracking that stores the least information about an individual. An object detection algorithm estimates the presence and position of individuals in a camera image. These positions are stored or counted and used for further metrics. The technique is used to count passers-by in city centres, and for a one-and-a-half-metre social distancing monitor in Amsterdam2. See also the case study in this document on the Burglary-Free Neighbourhood in Rotterdam (CHAPTER 7), which goes into more detail about the use of the recorded trajectories of individuals to label anomalous behaviour.

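    The counting and distancing logic described above can be sketched in a few lines of Python. The detections, confidence values and the conversion of pixel positions to metres are invented for illustration; a real deployment would obtain them from an upstream object detector and camera calibration:

```python
import math

# Hypothetical output of an upstream object detector: each entry is
# (class_label, confidence, centre_x, centre_y), with positions already
# converted to metres. No image or identity data is retained.
detections = [
    ("person", 0.91, 1.0, 2.0),
    ("person", 0.88, 2.0, 2.0),
    ("person", 0.75, 8.0, 3.0),
    ("bicycle", 0.80, 4.0, 1.0),
]

def count_people(dets, min_confidence=0.5):
    """Count detected persons; only counts and positions are kept."""
    return sum(1 for label, conf, _, _ in dets
               if label == "person" and conf >= min_confidence)

def distancing_violations(dets, min_distance=1.5):
    """Return pairs of person positions closer than min_distance metres."""
    people = [(x, y) for label, _, x, y in dets if label == "person"]
    pairs = []
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            (x1, y1), (x2, y2) = people[i], people[j]
            if math.hypot(x2 - x1, y2 - y1) < min_distance:
                pairs.append((people[i], people[j]))
    return pairs

print(count_people(detections))                # 3
print(len(distancing_violations(detections)))  # 1
```

Note that everything downstream of the detector operates on coordinates only, which is what distinguishes this form of monitoring from identification.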

    Emotion recognition

    Age, gender, and ethnicity classification

    Aside from deducing emotions, the face is used to deduce a variety of other traits. For example, gender, ethnicity and age estimations are available in many off-the-shelf facial analysis products. As with emotion recognition, these classifications are mainly used in digital signage and video advertisement contexts. LGBTQ+ communities have spoken out against automatic gender classification, pointing out that a long-fought-for, non-binary understanding of gender is undone by the technology’s binary classifications (Vincent, 2021). Similarly, recent revelations that Hikvision (China) has used similar technology to estimate whether an individual belongs to China’s Uyghur minority have led the European Parliament to call for a ban on Hikvision’s products on the Parliament’s premises (Rollet, 2021).

    Audio recognition

    From a technological perspective, neural networks process audio much as they process video: rather than an image, a spectrogram is used as input to the network. Under the GDPR, however, recording conversations is illegal in the European Union without the informed consent of the participants. In order to adhere to these regulations, on some occasions only particular frequencies are recorded and processed. For example, in the Burglary-Free Neighbourhood in Rotterdam (CHAPTER 7), only two frequencies are used to classify audio, making conversations indiscernible while still allowing shouting or the breaking of glass to be detected3.

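    The frequency-selective approach can be illustrated with the Goertzel algorithm, which measures signal power at a single frequency without computing a full spectrogram. The frequencies, sample rate and signal below are illustrative only, not those used in the Rotterdam deployment:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at one frequency bin (Goertzel algorithm)."""
    k = int(0.5 + len(samples) * target_freq / sample_rate)
    omega = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# 0.1 s of a 1 kHz tone sampled at 8 kHz, standing in for a loud event.
sample_rate = 8000
tone = [math.sin(2 * math.pi * 1000 * n / sample_rate) for n in range(800)]

# Only two bands are inspected; the raw waveform is never stored, so
# any speech content remains unrecoverable.
p_low = goertzel_power(tone, sample_rate, 1000)
p_high = goertzel_power(tone, sample_rate, 3000)
print(p_low > 100 * p_high)  # the 1 kHz band dominates
```

Because only the per-band power values leave the sensor, a classifier built on them can flag events such as breaking glass without ever retaining audible conversations.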

    Machine learning and operational datasets

    Remote biometric identification and classification rely in large part on datasets, at two key but distinct moments of their operation.

    Machine learning datasets. These are the datasets used to train models through machine learning. We find three categories of such datasets. Publicly available datasets for object detection, such as COCO, ImageNet and Pascal VOC, include a varying number of images labelled in a range of categories; these can be used to train algorithms to detect, for example, people in an image (IPVM Team 2021a, 27). The most used open-source datasets for surveillance technologies are Celeb 500k, MS-Celeb-1Million-Cleaned, Labeled Faces in the Wild, VGG Face 2, DeepGlint Asian Celeb, IMDB-Face, IMDB-Wiki, CelebA, Diveface, Flickr faces and the IARPA Janus Benchmark (IPVM Team 2021b, 7). Many of these datasets also function as public benchmarks, against which the performance and accuracy of various algorithms are measured. For example, Labeled Faces in the Wild, the COCO dataset and NIST present such leaderboards on their websites6. Government datasets are generally collections of images available to a government for other purposes (driver’s licence, passport, or criminal record photo datasets). While in Europe most of these datasets are not accessible to the public, in China and in the US they are made available to private companies for testing and training purposes, such as the Multiple Encounter Dataset (NIST, 2010). Finally, proprietary datasets may be developed by providers for their specific applications.

    Machine learning models. In the machine learning process, an algorithm is iteratively configured for the optimal output, based on the particular dataset it is fed. This can be a neural network, but also e.g., the aforementioned Viola-Jones object detector algorithm. The model is the final configuration of this learning process. As such, it does not contain the images of the dataset themselves. Rather, it represents the abstractions the algorithm “learned” over time. In other words, the model operationalises the machine learning dataset. For example, the YOLO object detection algorithm yields different results depending on the dataset it is trained on (e.g., COCO). It is thus the model (in conjunction with the algorithm) which determines the translation of an image into a category, or of the image of a face into its embedding.
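The distinction between dataset and model can be illustrated with a minimal sketch (toy data, not any system discussed here): a tiny perceptron is iteratively configured on a handful of labelled points, and the resulting “model” is just the learned weights and bias, storing none of the training samples.

```python
# Minimal sketch (invented toy data): a tiny perceptron classifier.
# Training iteratively adjusts the parameters; the resulting "model" is just
# the learned weights and bias, not the training samples themselves.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when the prediction is correct
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b                         # the "model": learned abstractions only

# Two hand-made 2-D classes standing in for "image features"
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-1.0, -1.5), 0), ((-2.0, -0.5), 0)]
w, b = train(data)
```

Once trained, the parameters `(w, b)` can classify new points without any access to `data`, which is what is meant by the model operationalising, rather than containing, the dataset.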

    Operational datasets, or image databases. Datasets used in training machine learning models should be distinguished from matching or operational datasets: the “watchlists” of, for example, criminals, persons of interest or other lists of individuals against which facial recognition searches will be performed, whether in real time or post hoc. These datasets contain pre-processed images of individuals on the watchlist and store the numerical representations of these faces, their feature vectors or embeddings, in an index for fast retrieval and comparison with the queried features (using, for example, k-Nearest Neighbour or Support Vector Machines). Face or object detection models do not use such a dataset.
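The comparison step can be sketched as follows (illustrative only: the embeddings and identities are invented, and a linear scan stands in for the approximate nearest-neighbour indexes used in practice):

```python
import math

# Hypothetical pre-computed embeddings for individuals on a watchlist.
watchlist = {
    "person_A": [0.1, 0.9, 0.2],
    "person_B": [0.8, 0.1, 0.5],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(query_embedding, threshold=0.9):
    """Return the best watchlist identity above threshold, else None."""
    best_id, best_score = None, -1.0
    for person_id, embedding in watchlist.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id if best_score >= threshold else None), best_score
```

A query embedding close to a stored template returns that identity; one far from every template returns no match, which is the behaviour the `threshold` parameter controls.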

    Availability

    Facial recognition algorithms can be developed in-house, taken from an open-source repository, or purchased (IPVM Team 2021b, 14). Popular open-source facial recognition implementations include OpenCV, Face_pytorch, OpenFace and Insightface. Many of these software libraries are developed at universities or implement algorithms and neural network architectures presented in academic papers. They are free and allow for a great degree of customisation, but require substantial programming skills to be integrated into a surveillance system. Moreover, when using such software, the algorithms run on one’s own hardware, which provides the developer with more control but also requires more maintenance.


    Proprietary facial recognition. There are three possible routes for the use of proprietary systems. There are “turnkey” systems sold by manufacturers such as Hikvision, Dahua, Anyvision or Briefcam; these integrate the software and hardware and can thus be directly deployed by the client. Algorithm developers such as Amazon AWS Rekognition (USA), NEC (Japan), NTechlab (Russia) and Paravision (USA) allow clients to implement their algorithms and customise them to their needs. Finally, there are “cloud” API systems, a sub-set of the former category, where the algorithm is hosted in a datacentre and accessed remotely (IPVM Team 2021b, 16). The latter type of technology has important legal ramifications, as the data may travel outside of national or European jurisdictions. It should be noted that many of the proprietary products are based on similar algorithms and network architectures as their open-source counterparts (OpenCV, 2021). Contrary to open-source software, it is generally unclear which datasets of images have been used to train the proprietary algorithms.



    More problematically, a lack of diversity, in particular when it comes to ethnicity, age, or gender leads to bias in the algorithm. This issue has been at the core of the US-based discussion on the banning of Facial Recognition. Public databases such as VGGFace2 (based on faces from Google images) and MS-Celeb-1M42 (celebrity faces) are often used to train facial recognition algorithms yet are far from representative of everyday populations – this is called representation bias (Fernandez et al. 2020, 30). The main goal of the project Gender Shades led by Joy Buolamwini was both to show the lack of representativity of existing datasets and address the problem of the consequent discrepancy between the error rates related to light-skinned men and dark-skinned women (Fernandez et al. 2020, 30–31).
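The kind of disaggregated evaluation performed in Gender Shades can be sketched as follows (the outcomes below are invented for illustration; the point is reporting error rates per demographic group instead of a single aggregate accuracy that hides the disparity):

```python
from collections import defaultdict

# Hypothetical per-image outcomes: (demographic group, correctly recognised?)
results = [
    ("light_male", True), ("light_male", True),
    ("light_male", True), ("light_male", True),
    ("dark_female", True), ("dark_female", False),
    ("dark_female", False), ("dark_female", True),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

# Error rate computed separately per group, not pooled across the test set.
error_rates = {g: 1 - correct[g] / totals[g] for g in totals}
```

Here the aggregate accuracy (6 of 8) looks acceptable, while the per-group error rates reveal that all failures fall on one group, which is exactly the discrepancy the project documented.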

    However, a representational dataset is not always a desirable dataset, because actual structural biases often do not match the values of society. Illustrative of this is that, when doing a Google image search for the term “CEO” it would originally return primarily photographs of white male people. While this was representative of the CEO population (and thus accurate), the results reinforce the vision of a world that does not align with progressive societal values (Suresh, 2019). Because of the gap between ideals of equality and actual societal structural inequalities, datasets can be either representative of an unequal society, or representative of desired equality – but never of both at the same time.

    Datasets upon which the computer algorithm will later be able to distinguish particular entities or behaviour are built through vast amounts of human labour. For example, the work that has gone into the image dataset ImageNet is equivalent to 19 years of working 24 hours a day, 7 days a week (Malevé, 2020). Nevertheless, quantity does not necessarily equal quality. Many of the categories with which images are annotated are ambiguous: not in their dictionary definition per se, but once they enter the culture of the annotation workers. For example, the category “ratatouille” contains images of various stews, salads and even a character of the eponymous Pixar movie. Similarly, the category “Parisian” contains images of Paris Hilton (Malevé, 2020). This ambiguity of categories does not only haunt ImageNet. The aforementioned COCO dataset contains images of a birdhouse in the shape of a bird, which is tagged as bird, and a bare pizza bottom which is tagged as pizza (Cochior and van de Ven, 2020). These examples show that even seemingly unambiguous concepts become fluid the moment they have to be strictly delineated in a dataset.


    Another important issue with ethical and political repercussions is unethically collected data, as in the case of Clearview AI detailed above. When it comes to operational datasets, i.e., datasets used in the actual process of facial authentication and/or identification, we have seen that possible deployments include the use of cloud-based services (either for the processing or the storage of the sensitive information). This increases the risks of data breaches and attacks by hackers. (Fernandez et al. 2020, 34)



    A broad range of deployments, which we consider in this first section, is not aimed at surveillance, but at authentication (see section 2.3 in this report), namely making sure that the person in front of the security camera is who they say they are.

    Live authentication


    As in the cases of the use of Cisco-powered FRT in two pilot projects in high schools in Nice (see section 8.1) and Marseille (France)7, or in the case of the Anderstorp Upper Secondary School in Skelleftea (Sweden)8, the aim of these projects was to identify students who could have access to the premises. School-wide biometric databases were generated and populated with students’ portraits. Gates were fitted with cameras connected to facial recognition technology and allowed access only to recognised students. Another documented use has been the Home Quarantine App (Hungary), in which telephone cameras are used by authorities to verify the identity of the persons logged into the app (see also section 10.1).


    In these deployments, people must submit themselves to the camera in order to be identified and gain access. These techniques of identification pose important threats to the privacy of the small groups of users concerned (in both high school cases, DPAs banned the use of FRT), and run the risk of false positives (unauthorised people recognised as authorised) or false negatives (authorised people not recognised as such). Yet the risk of biometric mass surveillance strictly speaking is low to non-existent, because of the nature of the acquisition of images and other sensor-based data.
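The trade-off between these two error types can be sketched with hypothetical similarity scores: raising the decision threshold reduces unauthorised acceptances (false positives) but rejects more authorised users (false negatives), and vice versa.

```python
# Hypothetical similarity scores from an authentication system (made up).
genuine = [0.92, 0.88, 0.95, 0.70]    # authorised users vs their own template
impostor = [0.40, 0.65, 0.85, 0.30]   # other people vs those templates

def rates(threshold):
    """Return (false positive rate, false negative rate) at this threshold."""
    fp = sum(s >= threshold for s in impostor) / len(impostor)  # wrongly accepted
    fn = sum(s < threshold for s in genuine) / len(genuine)     # wrongly rejected
    return fp, fn

permissive = rates(0.5)  # more unauthorised people let through
strict = rates(0.9)      # more authorised people locked out
```

No single threshold eliminates both error types at once; operators choose a point on this curve depending on which failure is costlier.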

    However, other forms of live authentication tie in with surveillance practices, in particular various forms of blacklisting. With blacklisting, the face of every passer-by is compared to a list of faces of individuals who have been rejected access to the premises. In such an instance, people do not have to be identified, as long as an image of their face is provided. This has been used in public places, for example in the case of the Korte Putstraat in the Dutch city of 's-Hertogenbosch: during the carnival festivities of 2019, two people were denied access to the street after they were singled out by the system (Gotink, 2019). It is unclear how many false positives were generated during this period. Other cases of blacklisting can be found in, for example, access control at various football stadiums in Europe (see also section 3.3). In many cases of blacklisting, individuals do not enrol voluntarily.

    Forensic authentication

    Biometric systems for the purposes of authentication are also increasingly deployed for forensic applications among law-enforcement agencies in the European Union. The typical scenario for the use of such technologies is to match the photograph of a suspect (extracted, for example, from previous records or from CCTV footage) against an existing dataset of known individuals (e.g., a national biometric database, a driver’s license database, etc.). (TELEFI, 2021). The development of these forensic authentication capabilities is particularly relevant to this study, because it entails making large databases ready for searches on the basis of biometric information.


    To date, 11 out of 27 member states of the European Union are using facial recognition against biometric databases for forensic purposes: Austria (EDE)9, Finland (KASTU)10, France (TAJ)11, Germany (INPOL)12, Greece (Mugshot Database)13, Hungary (Facial Image Registry)14, Italy (AFIS)15, Latvia (BDAS)16, Lithuania (HDR)17, Netherlands (CATCH)18 and Slovenia (Record of Photographed Persons)19 (TELEFI 2021).


    Seven additional countries are expected to acquire such capabilities in the near future: Croatia (ABIS)20, Czech Republic (CBIS)21, Portugal (AFIS), Romania (NBIS)22, Spain (ABIS), Sweden (National Mugshot Database), Cyprus (ISIS Faces)23 and Estonia (ABIS) (TELEFI 2021).


    When it comes to international institutions, Interpol (2020) has a facial recognition system (IFRS)24, based on facial images received from more than 160 countries. Europol has two sub-units which use the facial recognition search tool and database known as FACE: the European Counter Terrorism Center (ECTC) and the European Cybercrime Center (ECC) (TELEFI 2021, 149–153; Europol 2020).


    Only 9 countries in the EU so far have rejected or do not plan to implement FRT for forensic purposes: Belgium (see CHAPTER 6), Bulgaria, Denmark, Ireland, Luxembourg, Malta, Poland, Portugal, Slovakia.


    Figure 1. EU Countries use of FRT for forensic applications25

    When it comes to databases, some countries limit the searches to criminal databases (Austria, Germany, France, Italy, Greece, Slovenia, Lithuania, UK), while other countries open the searches to civil databases (Finland, Netherlands, Latvia, Hungary).

    This means that the person categories can vary substantially. In the case of criminal databases it can range from suspects and convicts, to asylum seekers, aliens, unidentified persons, immigrants, visa applicants. When civil databases are used as well, such as in Hungary, the database contains a broad range of “individuals of known identity from various document/civil proceedings” (TELEFI 2021, appendix 3).

    Finally, the database sizes are of a different magnitude from the authentication databases mentioned in the previous section. The databases of school students in France and Sweden mentioned above contain a few hundred entries each; national databases can instead contain several million. Among criminal databases, Germany’s INPOL contains 6.2 million individuals, France’s TAJ 21 million and Italy’s AFIS 9 million. Civil databases, such as Hungary’s Facial Image Registry, contain 30 million templates (TELEFI, 2021 appendix 3).


    Authentication has also been deployed as part of integrated “safe city” solutions, such as the NEC Technology Bio-IDiom system in Lisbon and London, deployed for forensic investigation purposes. For this specific product, authentication can occur via facial recognition, as well as other biometric authentication techniques such as ear acoustics, iris, voice, fingerprint, and finger vein recognition. We currently do not have public information on the use of Bio-IDiom in Lisbon or London. On NEC’s website (2021), however, Bio-IDiom is advertised as a “multimodal” identification system that has been used, for example, by the Los Angeles County Sheriff’s Department (LASD) for criminal investigations. The system “combines multiple biometric technologies including fingerprint, palm print, face, and iris recognition” and works “based on the few clues left behind at crime scenes”. In Los Angeles, “this system is also connected to the databases of federal and state law enforcement agencies such as the California Department of Justice and FBI, making it the world’s largest-scale service-based biometrics system for criminal investigation”. We do not know whether that is the case in the Portuguese and UK deployments.


    Case study: INPOL (Germany)


    Smart surveillance features

    A first range of deployments of “smart” systems correspond to what can broadly be defined as “smart surveillance” yet do not collect or process biometric information per se26. Smart systems can be used ex-post, to assist CCTV camera operators in processing large amounts of recorded information, or can guide their attention when they have to monitor a large number of live video feeds simultaneously. Smart surveillance uses the following features:


    - Anomaly detection. In Toulouse (France), the City Council commissioned IBM to connect 30 video surveillance cameras to software able to “assist human decisions” by raising alerts when “abnormal events are detected” (Technopolice 2021). The request was justified by the “difficulties of processing the images generated daily by the 350 cameras and kept for 30 days (more than 10,000 images per second)”. The objective, according to the digital direction, is “to optimise and structure the supervision of video surveillance operators by generating alerts through a system of intelligent analysis that facilitates the identification of anomalies detected, whether: movements of crowds, isolated luggage, crossing virtual barriers north of the Garonne, precipitous movement, research of shapes and colour”. All these detections are done in real time or delayed (Technopolice 2021). In other words, anomaly detection is a way to operationalise the numerical output of various computer vision based recognition systems. Similar systems are used in the Smart video surveillance deployment in Valenciennes (France) and in the Urban Surveillance Centre (Marseille).


    - Object Detection. In Amsterdam, around the Johan Cruijff ArenA (stadium), the city has been experimenting with a Digitale Perimeter (digital perimeter) surveillance system. In addition to the usual features of facial recognition and crowd monitoring, the system includes the possibility of automatically detecting specific objects such as weapons, fireworks or drones. Similar features are found in Inwebit’s Smart Security Platform (SSP) in Poland.


    - Feature search. In Marbella (Spain), Avigilon deployed a smart camera system aimed at providing “smart” functionalities without biometric data. Since regional law bans facial and biometric identification without consent, the software uses “appearance search”. “Appearance search” provides estimates for “unique facial traits, the colour of a person’s clothes, age, shape, gender and hair colour”. This information is not considered biometric. The individual’s features can be used to search for suspects fitting a particular profile. Similar technology has been deployed in Kortrijk (Belgium), which provides search parameters for people, vehicles and animals (Verbeke 2019).


    - Video summary. Some companies, such as Briefcam and their product Briefcam Review, offer a related product, which promises to shorten the analysis of long hours of CCTV footage, by identifying specific topics of interest (children, women, lighting changes) and making the footage searchable. The product combines face recognition, license plate recognition, and more mundane video analysis features such as the possibility to overlay selected scenes, thus highlighting recurrent points of activity in the image. Briefcam is deployed in several cities across Europe, including Vannes, Roubaix (in partnership with Eiffage) and Moirand in France.


    - Object detection and object tracking. As outlined in chapter 2, object detection is often the first step in the various digital detection applications for images. An ‘object’ here can mean anything the computer is conditioned to search for: a suitcase, a vehicle, but also a person; while some products further process the detected object to estimate particular features, such as the colour of a vehicle, the age of a person. However, on some occasions — often to address concerns over privacy — only the position of the object on the image is stored. This is for example the case with the test of the One-and-a-half-meter monitor in Amsterdam (Netherlands), Intemo’s people counting system in Nijmegen (Netherlands), the KICK project in Brugge, Kortrijk, Ieper, Roeselare and Oostende in Belgium or the Eco-counter tracking cameras pilot project in Lannion (France).


    - Movement recognition. Avigilon’s software that is deployed in Marbella (Spain) also detects unusual movement. “To avoid graffiti, we can calculate the time someone takes to pass a shop window,” explained Javier Martín, local chief of police in Marbella, to the Spanish newspaper El País. “If it takes them more than 10 seconds, the camera is activated to see if they are graffitiing. So far, it hasn’t been activated.” (Colomé 2019) Similar movement recognition technology is used in the ViSense deployment at the Olympic Park London (UK) and the security camera system in Mechelen-Willebroek (Belgium). It should be noted that movement recognition can be done in two ways: where projects such as the Data-lab Burglary-free Neighbourhood in Rotterdam (Netherlands)27 are only based on the tracking of trajectories of people through an image (see also ‘Object detection’), cases such as the Living Lab Stratumseind28 in Eindhoven (Netherlands) also process the movements and gestures of individuals in order to estimate their behaviour.
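The 10-second rule described for Marbella can be approximated with a simple sketch (assumed logic, not Avigilon's implementation): track a person's position over time and raise an alert when they remain inside a zone, such as the area in front of a shop window, longer than a threshold.

```python
# Sketch of a loitering rule (assumed logic, illustrative only):
# alert when a tracked person stays inside a zone longer than max_seconds.
def loitering_alert(track, zone, max_seconds=10):
    """track: list of (timestamp_s, x, y); zone: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = zone
    entered = None
    for t, x, y in track:
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside and entered is None:
            entered = t              # person just entered the zone
        elif not inside:
            entered = None           # person left; reset the timer
        if entered is not None and t - entered > max_seconds:
            return True              # lingered past the threshold
    return False
```

Note that this only consumes trajectories (positions over time), which is the first of the two approaches distinguished above; systems like the Stratumseind lab additionally process gestures and body movements.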


    Audio recognition


    - In addition to image (video) based products, some deployments use audio recognition to complement the decision-making process, as in the Serenecity (a branch of Verney-Carron) project in Saint-Etienne (France), the Smart CCTV deployment in public transportation in Rouen (France) and the Smart CCTV system in Strasbourg (France). The project piloted in Saint-Etienne, for example, worked by placing “audio capture devices” (the term microphone was avoided) in strategic parts of the city. Sounds qualified by an anomaly detection algorithm as suspicious would then alert operators in the Urban Supervision Center, prompting further investigation via CCTV or deployment of the necessary services (healthcare or police, for example) (France 3 Auvergne-Rhône-Alpes 2019).
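A crude version of such audio anomaly detection can be sketched as follows (illustrative only: a running-baseline energy detector is assumed, whereas deployed systems classify sound types with trained models):

```python
import math

# Illustrative sketch (not the Saint-Etienne system): flag audio frames whose
# RMS energy jumps far above a running baseline, as a crude "suspicious sound"
# trigger that would alert a human operator.
def detect_anomalies(frames, factor=3.0):
    alerts = []
    baseline = None
    for i, frame in enumerate(frames):
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if baseline is not None and rms > factor * baseline:
            alerts.append(i)                      # frame much louder than usual
        # exponential moving average of loudness as the "normal" reference
        baseline = rms if baseline is None else 0.9 * baseline + 0.1 * rms
    return alerts
```

Even this toy version shows why such systems hand off to operators: a loudness spike says nothing about whether the cause is a gunshot, fireworks or a dropped crate.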


    Emotion recognition


    - Emotion recognition is a rare occurrence. We found evidence of its deployment only in a pilot project in Nice (see section 8.1) and in the Citybeacon project in Eindhoven, but even then the project was never actually tested. The original idea proposed by the company Two-I was a “real-time emotional mapping” capable of highlighting “potentially problematic or even dangerous situations”. “A dynamic deployment of security guards in an area where tension and stress are felt is often a simple way to avoid any overflow,” argues Two-I, whose “Security” software would be able to decipher some 10,000 faces per second (Binacchi 2019).


    Gait recognition


    Integrated solutions

    Smart cities


    While some cities or companies decide to implement some of the functionalities with their existing or updated CCTV systems, several chose to centralise several of these “smart” functions in integrated systems often referred to as “safe city” solutions. These solutions do not necessarily process biometric information. This is the case for example for the deployments in TIM’s, Insula and Venis’ Safe City Platform in Venice (Italy), Huawei’s Safe City in Valenciennes (France), Dahua’s integrated solution in Brienon-sur-Armançon (France), Thalès’ Safe City in La Défense and Nice (France), Engie Inéo’s and SNEF’s integrated solution in Marseille (France), the Center of Urban Supervision in Roubaix (France), AI Mars (Madrid, in development) or NEC’s platform in Lisbon and London.

    +

While some cities or companies implement these functionalities within their existing or updated CCTV systems, others choose to centralise several “smart” functions in integrated systems often referred to as “safe city” solutions. These solutions do not necessarily process biometric information. This is the case, for example, for the deployments of TIM’s, Insula’s and Venis’ Safe City Platform in Venice (Italy), Huawei’s Safe City in Valenciennes (France), Dahua’s integrated solution in Brienon-sur-Armançon (France), Thales’ Safe City in La Défense and Nice (France), Engie Inéo’s and SNEF’s integrated solution in Marseille (France), the Center of Urban Supervision in Roubaix (France), AI Mars (Madrid, in development) and NEC’s platform in Lisbon and London.

The way “Smart/Safe City” solutions work is well exemplified by the “Control Room” deployed in Venice, connected to an urban surveillance network. The system is composed of a central command and control room which aggregates cloud computing systems, together with smart cameras, artificial intelligence systems, antennas and hundreds of sensors distributed across a widespread network. The idea is to monitor what happens in the lagoon city in real time. The scope of the centre’s abilities is wide-ranging. It promises to:

- manage events and incoming tourist flows, something particularly relevant to a city which aims to implement a visiting fee for tourists;

- predict and manage weather events in advance, such as the shifting of tides and high water, by defining alternative routes for transit in the city;

- indicate to the population in real time the routes to avoid traffic and better manage mobility for time optimisation;

- improve the management of public safety, allowing city agents to intervene in a more timely manner;

- control and manage water and road traffic, also for sanctioning purposes, through specific video-analysis systems;

- control the status of parking lots;

- monitor the environmental and territorial situation;

- collect and process data and information that allow for the creation of forecasting models and a more efficient and effective allocation of resources;

- bring to life a physical “Smart Control Room” where law enforcement officers also train and learn how to read data. (LUMI 2020)
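The aggregation logic described above can be pictured as an event-routing hub: heterogeneous sensor readings arrive, and each reading is dispatched to the handlers registered for its type. The following is a minimal, purely illustrative sketch; the event names, thresholds and responses are hypothetical and are not drawn from any documentation of the Venice system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a "control room" event hub: sensors of different
# kinds feed readings into one hub, which routes each reading to the
# handlers subscribed for that sensor type.

@dataclass
class ControlRoom:
    handlers: Dict[str, List[Callable[[dict], str]]] = field(default_factory=dict)

    def subscribe(self, event_type: str, handler: Callable[[dict], str]) -> None:
        """Register a handler for one type of sensor reading."""
        self.handlers.setdefault(event_type, []).append(handler)

    def ingest(self, event: dict) -> List[str]:
        """Dispatch one sensor reading; collect every handler's response."""
        return [handle(event) for handle in self.handlers.get(event["type"], [])]

room = ControlRoom()
# Invented thresholds: 110 cm is a plausible high-water mark, nothing more.
room.subscribe("tide", lambda e: "reroute pedestrians" if e["level_cm"] > 110 else "ok")
room.subscribe("traffic", lambda e: "dispatch patrol" if e["congestion"] > 0.8 else "ok")

print(room.ingest({"type": "tide", "level_cm": 125}))      # → ['reroute pedestrians']
print(room.ingest({"type": "traffic", "congestion": 0.3}))  # → ['ok']
```

The design choice worth noting is that the hub itself is content-agnostic: whether a reading is biometric or not depends entirely on which sensors are plugged in, which mirrors the report's point that such platforms do not necessarily process biometric information.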


    - Live Facial Recognition pilot project in Brussels International Airport / Zaventem (Belgium, see detailed case study, CHAPTER 6)

    - Live Facial Recognition in Budapest (Hungary, see detailed case study, CHAPTER 10)

    - Live Facial Recognition pilot project during the Carnival in Nice (France, see detailed case study, CHAPTER 8)


    - Live Facial Recognition Pilot Project Südkreuz Berlin (Germany, see detailed case study, CHAPTER 9)

- Live Facial Recognition during Carnival 2019 in 's-Hertogenbosch’s Lange Putstraat (the Netherlands)


    Deployment of RBI in commercial spaces

    The number of deployments of live facial recognition systems in commercial spaces hosting the public is much higher, but because of its commercial nature, difficult to document and trace. Our research found the following instances:


    - Live Facial Recognition project, Brøndby IF Football stadium (Denmark)


    - Live Facial Recognition Pilot in Metz Stadium (France)


    - Live Facial Recognition in Ifema (Spain)


- Live Facial Recognition in Mercadona stores in Mallorca, Zaragoza and Valencia (Spain)


These systems operate in much the same way as RBI in public spaces, or as forensic authentication systems connected to live cameras. In the Brøndby IF football stadium deployment, for example, developed in partnership with Panasonic and the National University of Singapore, football fans who want to access the game have to pass through a gate equipped with a camera connected to a facial recognition algorithm. The stadium administration has constituted a database of unwanted individuals, and if the software matches one of the incoming fans with a record in the database, it flags the match to the system operators (Overgaard 2019).
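The matching step in such a watchlist system can be sketched in a few lines: a face image is converted into an embedding vector by a recognition model, and that vector is compared against the enrolled vectors of the watchlist, with a similarity threshold deciding whether to raise a flag. The sketch below is hypothetical; the identifiers, the three-dimensional embeddings and the threshold are invented for illustration (real systems use high-dimensional embeddings from trained models).

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(probe: list, watchlist: dict, threshold: float = 0.9):
    """Return the ID of the best watchlist match at or above threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical enrolled embeddings for two banned individuals.
watchlist = {
    "banned-017": [0.9, 0.1, 0.4],
    "banned-042": [0.2, 0.8, 0.5],
}

# A probe close to an enrolled face triggers a flag; an unrelated one does not.
print(check_against_watchlist([0.88, 0.12, 0.41], watchlist))  # → banned-017
print(check_against_watchlist([0.0, 0.1, 0.9], watchlist))     # → None
```

The threshold is the operationally decisive parameter: set too low it produces false flags against innocent visitors, set too high it misses enrolled individuals, which is why accuracy claims for such deployments are hard to assess without transparency about this trade-off.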

There is, however, little to no information on the use of these technologies in commercial spaces, as there is no requirement to publicise the various components of these systems. The case studies in this report thus focus mostly on the deployment of RBI in public spaces. More research, and more transparency, would nevertheless be welcome in order to understand the data-gathering practices and the impact of these deployments.


    Four positions in the policy debates

    Active promotion


A certain number of actors, both at the national and at the local level, are pushing for the development and extension of biometric remote identification. At the local level, new technological developments meet a growing appetite for smart city initiatives and the ambitions of mayors who strive to develop digital platforms and employ technology-oriented solutions for governance and law enforcement. The intention of the mayor of Nice, Christian Estrosi, to make Nice a “laboratory” of crime prevention, despite the repeated concerns of the French DPA, is a case in point (for a detailed analysis, see chapter 8 of this report; see also Barelli 2018). Law enforcement agencies across Europe also continue to press ahead with efforts to build digital and automated infrastructures, which benefits tech companies (e.g. Huawei, NEC) that push their facial recognition technologies under the banner of the smart city and tech innovation.

At the national level, biometric systems for the purposes of authentication are increasingly deployed for forensic applications among law enforcement agencies in the European Union. As we elaborate in Chapter 3, 11 of the 27 member states of the European Union are already using facial recognition against biometric databases for forensic purposes, and 7 additional countries are expected to acquire such capabilities in the near future. The map of European deployments of biometric identification technologies (see Chapter 3) bears witness to a broad range of algorithmic processing of security images, on a spectrum that runs from individual, localised authentication systems, to generalised law enforcement uses of authentication, to biometric mass surveillance.

Several states that have not yet adopted such technologies seem inclined to follow the trend, and to push further. Belgian Minister of the Interior Pieter De Crem, for example, recently declared himself in favour of the use of facial recognition not only for judicial inquiries but also for live facial recognition, a much rarer use.


    Support with safeguards


A second category of actors has adopted the position that RBI technologies should be supported, on the condition that their development is monitored because of the risks they potentially pose. We find in this category the EU Commission, the EU Council, some EU political parties, as well as the Fundamental Rights Agency (FRA), national DPAs such as the CNIL, the Council of Europe (CoE), and a certain number of courts.

Developments in the field of AI for governance, security and law enforcement are widely encouraged and financially supported by EU institutions. In its communication Shaping Europe’s Digital Future, accompanying the White Paper on AI, the European Commission set out its guidelines and strategies to create a “Europe fit for the digital age” (European Commission 2020a). In support of a “fair and competitive economy”, the Commission proposes a European Data Strategy (EDS) to make Europe a global leader in the data-agile economy. The EDS further aims to ensure Europe’s technological sovereignty in a globalised world and “unlock the enormous potential of new technologies like AI” (Newsroom 2020). The Commission therefore proposes, among other things, “building and deploying cutting-edge joint digital capacities in the areas of AI, cyber, super and quantum computing, quantum communication and blockchain”, as well as “[r]einforcing EU governments interoperability strategy to ensure coordination and common standards for secure and borderless public sector data flows and services” (European Commission 2020a, 4).

The financial support for these initiatives is planned to be channelled through the Digital Europe programme (DEP), the Connecting Europe Facility 2 and Horizon Europe. Through Horizon Europe, for instance, the Commission plans to invest €15 billion in the ‘Digital, Industry and Space’ cluster, with AI as a key activity to be supported. The DEP would benefit from almost €2.5 billion for deploying data platforms and AI applications, while also supporting national authorities in making their high-value data sets interoperable (Newsroom 2020).

In the European Parliament, the European People's Party (EPP) most aligns with this approach. “We want to regulate facial recognition technologies, not ban them. We need clear rules where they can be used and where they must not be used”, declared, for example, Emil Radev MEP, EPP Group Member of the Legal Affairs Committee. As he puts it: “Without a doubt, we want to prevent mass surveillance and abuse. But this cannot mean banning facial recognition all together. There are harmless and useful applications for facial recognition, which increase personal security” (European People’s Party 2021).

The FRA’s 2019 report on facial recognition technologies (FRA 2019), which builds on several previous reports concerning biometrics, IT systems and fundamental rights (FRA 2018), big data and decision making (FRA 2018), and data quality and artificial intelligence (FRA 2019), calls for a moderate approach. The FRA advocates for a comprehensive understanding of how exactly facial recognition technologies work and what their impact on fundamental human rights is. Fundamental rights implications of using FRT, they argue, vary considerably depending on the purpose, scope and context. They highlight a number of issues based on the EU fundamental rights framework as well as EU data protection legislation. For example, according to Article 9 of the GDPR, processing of biometric data is allowed based on the data subject’s explicit consent, which requires a higher threshold of precision and definitiveness, including for processing purposes. In the case of biometric surveillance in public spaces, explicit consent would not provide a lawful ground for the relevant data processing because, as observed by the CJEU in its Schwarz decision, the data subject entering the premises would not have any choice to opt out of data processing. If the processing of biometric data is based on substantial public interest, which is another lawful data processing ground under Article 9 of the GDPR, it must be “proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and interest of the data subjects” (Article 9(2)(g) GDPR).
Finally, when emphasising that the processing must be based on a lawful ground as recognised under EU data protection legislation, the FRA was particularly vocal about “function creep” in the use of facial recognition systems, and emphasised that the purpose of information collection must be strictly determined in light of the gravity of the intrusion upon people’s fundamental rights (25).

The FRA thus places the right to privacy and the protection of personal and sensitive data at the core of its problem definition, emphasising the potential danger of FRTs undermining the freedoms of expression, association and assembly. The FRA report also makes a case for the rights of special groups such as children, the elderly and people with disabilities, and addresses how the use of FRTs can contribute to the further criminalisation and stigmatisation of already discriminated-against groups of people (e.g. certain ethnic or racial minorities). In light of these considerations, they advocate for a clear and “sufficiently detailed” legal framework, close monitoring, and a thorough and continuous impact assessment of each deployment.


The French DPA, the CNIL, takes a similar position in its report “Facial Recognition: For a debate living up to the challenges” (CNIL 2019b). The report argues that the contactless and ubiquitous nature of the different FRTs can create an unprecedented potential for surveillance which, in the long run, could undermine societal choices. It also emphasises that biometric data is sensitive data, and that its collection is therefore never completely harmless: “Even legitimate and well-defined use can, in the event of a cyber-attack or a simple error, have particularly serious consequences. In this context, the question of securing biometric data is crucial and must be an overriding priority in the design of any project of this kind” (CNIL 2019b, 6). In its recommendations, while calling for special vigilance, the CNIL acknowledges the legitimacy and proportionality of some uses. It points out that GDPR-endangering applications are often presented as “pilot projects”, and thus requests the drawing of “some red lines even before any experimental use”. It calls instead for “a genuinely experimental approach” that tests and perfects technical solutions which respect the legal framework (CNIL 2019b, 10).

The CoE’s Practical Guide on the Use of Personal Data in the Police Sector (Council of Europe 2018), supplementing Convention 108+, puts great emphasis on implementing specific safeguards where an automated biometric system is introduced, and considers that, due to the high risk such systems pose to individuals’ rights, data protection authorities should be consulted on their implementation (10). Also, as mentioned below, the Council of Europe’s Guidelines on Facial Recognition (Council of Europe 2021), while considering a moratorium on live facial recognition technology, set out certain requirements to be met when implementing (possibly forensic) facial recognition technology.


    Outright Ban


Finally, a certain number of EU political parties and EU and national NGOs have argued that there is no acceptable deployment of RBI, because the danger of biometric mass surveillance is too high. Such actors include organisations such as EDRi, La Quadrature du Net, Algorithm Watch and the French Défenseur des Droits.

In the European Parliament, the European Greens have most vocally promoted the position of a ban, and have gathered support across party lines. In a letter to the European Commission dated 15 April 2021, 40 MEPs from the European Greens, the Party of the European Left, the Party of European Socialists, Renew Europe, a few non-attached MEPs and one member of the far-right party Identity and Democracy expressed their concerns about the European Commission’s proposal for the AI Regulation, leaked a few days earlier. As they argued:

    People who constantly feel watched and under surveillance cannot freely and courageously stand up for their rights and for a just society. Surveillance, distrust and fear risk gradually transforming our society into one of uncritical consumers who believe they have “nothing to hide” and - in a vain attempt to achieve total security - are prepared to give up their liberties. That is not a society worth living in!
    (Breyer et al. 2021)

Taking particular issue with Article 4 and the possible exemptions to the regulation of AI “in order to safeguard public safety”, they urge the European Commission “to make sure that existing protections are upheld and a clear ban on biometric mass surveillance in public spaces is proposed. This is what a majority of citizens want” (Breyer et al. 2021).


European Digital Rights (EDRi), an umbrella organisation of 44 digital rights NGOs in Europe, takes a radical stance on the issue. They argue that the mass processing of biometric data in public spaces creates a serious risk of mass surveillance that infringes on fundamental rights, and they therefore call on the Commission to permanently stop all deployments that can lead to mass surveillance. In their report Ban Biometric Mass Surveillance (2020) they demand that the EDPB and national DPAs “publicly disclose all existing and planned activities and deployments that fall within this remit” (EDRi 2020, 5). Furthermore, they call for a halt to all planned legislation which establishes biometric processing, as well as to the funding of all such projects, amounting to an “immediate and indefinite ban on biometric processing”.


La Quadrature du Net (LQDN), one of EDRi’s founding members (created in 2008 to “promote and defend fundamental freedoms in the digital world”), has similarly called for a ban on any present and future use of facial recognition for security and surveillance purposes. Together with a number of other French NGOs monitoring legislation that impacts digital freedoms, as well as other collectives, companies, associations and trade unions, LQDN initiated a joint open letter calling on the French authorities to ban any security and surveillance use of facial recognition due to its uniquely invasive and dehumanising nature. In their letter they point to the fact that in France there is a “multitude of systems already installed, outside of any real legal framework, without transparency or public discussion”, referring, among others, to the PARAFE system and the use of FRTs by the civil and military police. As they put it:

    “Facial recognition is a uniquely invasive and dehumanising technology, which makes possible, sooner or later, constant surveillance of the public space. It creates a society in which we are all suspects. It turns our face into a tracking device, rather than a signifier of personality, eventually reducing it to a technical object. It enables invisible control. It establishes a permanent and inescapable identification regime. It eliminates anonymity. No argument can justify the deployment of such a technology.”
    (La Quadrature du Net. et al. 2019)

Another prominent voice asking for a full ban on FRTs is the Berlin-based NGO Algorithm Watch. In its report Automating Society (2020), the NGO similarly calls for a ban on all facial recognition technology that might amount to mass surveillance. Its analysis and recommendations place FRTs in a broader discussion of Automated Decision-Making (ADM) systems. It condemns any use of live facial recognition in public spaces and demands that public uses of FRTs that might amount to mass surveillance be decisively “banned until further notice, and urgently, at the EU level” (Algorithm Watch 2020, 10).

    They further demand meaningful transparency that not only means “disclosing information about a system’s purpose, logic, and creator, as well as the ability to thoroughly analyse, and test a system’s inputs and outputs. It also requires making training data and data results accessible to independent researchers, journalists, and civil society organisations for public interest research” (Algorithm Watch 2020, 11).

Parallel to these reports there are also various campaigns that have proved effective in raising awareness and putting pressure on governmental bodies at both the national and the European level. In May 2020 EDRi launched the #ReclaimYourFace campaign, a European Citizens’ Initiative (ECI) petition, which calls for a ban on all biometric mass surveillance practices. The campaign centres on the power imbalances inherent to surveillance. As of May 2021 the campaign had gathered more than 50,000 individual signatures. #ReclaimYourFace is not the only campaign, though it is undoubtedly the most visible and influential in the European context. Other similar international initiatives are: “Ban the Scan”, initiated by Amnesty International; “Ban Automated Recognition of Gender and Sexual Orientation”, led by the international NGO Access Now; and “Project Panopticon”, launched by the India-based Panoptic Tracker.


In early June, a global coalition consisting of 175 organisations from 55 countries was launched under the hashtag #BanBS. The coalition demands a halt to biometric surveillance practices. Drafted by Access Now, Amnesty International, European Digital Rights (EDRi), Human Rights Watch, the Internet Freedom Foundation (IFF) and the Instituto Brasileiro de Defesa do Consumidor (IDEC), the open letter has since been signed by almost 200 organisations, which call for an outright ban on uses of facial recognition and biometric technologies that enable mass surveillance and discriminatory targeted surveillance: