Marie Curie Grant to research the impact of facial recognition use in public space on the rights to privacy and data protection
The growing use of Artificial Intelligence-based technologies for security purposes, such as applications enabling facial recognition in public spaces, has become a subject of increasing concern. The technology can make mistakes, exhibit skin-colour or gender bias, and endanger personal freedoms, including the right to privacy. Yet these technologies are being deployed at a breakneck pace around the world, without specific regulatory frameworks. The EU-funded DATAFACE project researches the threats that the deployment of facial-recognition tools in public spaces poses to the rights to privacy and data protection at the European level. Legitimacy and proportionality of use will serve as the criteria for policy recommendations on adequate legal frameworks.
Facial Recognition Technology: FRT use in public space by public authorities
Four countries: France, the UK, the USA, and China
Right to privacy: impact on the rights to privacy and data protection
DATAFACE pursues three research objectives:
- Observation and description of trends concerning the use of FRTs by public authorities in four countries (France, the UK, the USA, and China).
- Acquisition of basic technical knowledge of the functioning of FRTs and their matching algorithms (through a secondment at the Norwegian Biometrics Laboratory within NTNU).
- Legal analysis of the EU rights to privacy and data protection as interpreted by the European courts, and of secondary EU law adopted on the basis of these rights.
Research outcomes can be found under the Publications and Conferences/Talks tabs.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 895978.