
dc.contributor.author: Dushi, Desara
dc.date.accessioned: 2020-07-10T15:33:30Z
dc.date.available: 2020-07-10T15:33:30Z
dc.date.issued: 2020
dc.identifier.uri: http://doi.org/20.500.11825/1625
dc.description.abstract: Facial recognition technology is a type of biometric application that identifies people's faces against datasets and then makes assessments about those people based on algorithmic predictions. The technology supports three types of analytics: verification (e.g. matching an ID photo at an airport), identification (matching a photo against a database) and classification (gender, age, etc.). It is widely used by private companies for advertising and marketing, analysing clients' facial expressions to predict their preferences; for identifying ideal job candidates; and for automatically tagging people in photos (Facebook, for example). But facial recognition is not used only by the private sector. Its evolution has attracted the public sector too, especially law enforcement and border management, and this has generated many debates about its impact on human rights. Artificial Intelligence (AI) systems are typically trained on data generated by people, so any AI system may reflect the social biases of the people who developed its datasets. The technology also raises concerns about breaches of privacy when used in public spaces (i.e. mass surveillance), discrimination (the algorithms have proven problematic for people of colour), false labelling based on facial expressions (e.g. in interviews or criminal profiling), unwanted tagging, and advertisements targeted at people based on the shops they have visited. It can also intimidate people and create a feeling of intrusiveness. Public safety and the expression of consent are the classic justifications for such identification technology. But questions remain: Is it necessary? Is it the best or right remedy? Is it proportional? Is it effective? And, ultimately, is the expressed consent informed consent?
dc.language.iso: en
dc.publisher: Global Campus of Human Rights
dc.relation.ispartofseries: Policy Briefs 2020
dc.subject: European Union
dc.subject: human rights
dc.subject: boundaries
dc.subject: surveillance
dc.subject: technological innovation
dc.subject: discrimination
dc.subject: privacy
dc.subject: consent
dc.subject: recognition
dc.title: The use of facial recognition technology in EU law enforcement: Fundamental rights implications
dc.type: Working Paper


This item appears in the following Collection(s)

  • 01. Global Campus Policy Briefs
    The Global Campus Policy Observatory is a 'virtual hub' comprising a team of seven researchers from the regional programmes, who produce, publish and publicly present seven policy analyses in the form of policy briefs, with the aim of making each regional programme a solid focal point for expert policy advice on human rights issues.
