Facial recognition: A Dutch policy investigation into the risks and regulatory strategies

September 8, 2020

As a result of the global protests against violent and discriminatory policing, IBM has decided to discontinue its development of facial recognition technology. The risk that, in the hands of the police, this technology could contribute to racial profiling is simply too great, the company stated in an open letter to the United States Congress. Microsoft and Amazon have also stopped supplying facial recognition technology to police departments, at least until there are clear laws and regulations in this area and sufficient safeguards to prevent improper use of their technologies.

The risks of facial recognition

The application of facial recognition carries an inherent risk of bias and therefore of discrimination. Research shows, for example, that facial recognition technologies from Microsoft, Face++ and IBM score significantly better at recognizing men’s faces than women’s, and perform better on light-skinned than on dark-skinned people. All three companies struggle most with identifying dark-skinned women.
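
To make concrete what such findings rest on, here is a minimal, purely illustrative sketch of a disaggregated evaluation: recognition accuracy is computed per demographic subgroup rather than as a single overall average, so that gaps between groups become visible. The subgroup labels and records below are hypothetical and not drawn from any real benchmark.

from collections import defaultdict

# Each record: (subgroup label, ground-truth identity, predicted identity).
# Illustrative toy data; real audits use thousands of labeled faces per subgroup.
predictions = [
    ("light_male", "A", "A"), ("light_male", "B", "B"),
    ("dark_female", "C", "D"), ("dark_female", "E", "E"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, predicted in predictions:
    total[group] += 1
    correct[group] += int(truth == predicted)

# Report accuracy separately per subgroup, instead of one overall average.
for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: {accuracy:.1%} ({total[group]} samples)")

An overall accuracy figure for this toy data would hide that one subgroup is recognized perfectly and the other only half the time; reporting per-group figures is what exposes the disparity.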

However, the risk of discrimination caused by malfunctioning facial recognition is just one of the many risks associated with this technology. In our report for the Dutch Ministry of Justice and Security, “At first sight. An exploration of facial recognition and privacy risks in horizontal relationships”, we found, for example, that photo and video material used to train facial recognition models is often scraped from the Internet without permission or legal basis. The American company Clearview AI recently made the news when it became known that it had collected images to train its facial recognition algorithms in violation of the terms of use of platforms such as Twitter and Facebook. This so-called “legacy” problem is sector-wide.

In addition, there is a trend sometimes referred to as the “democratization of facial recognition technology”: consumers can already use facial recognition themselves, for example by means of a smart doorbell.

Companies also make frequent use of facial recognition. At some events, for example, facial recognition is used for access control instead of the old-fashioned ticket. The technology is expected to become ever cheaper and usable for ever more purposes, and it will therefore be adopted by more and more public services, companies and citizens. If facial recognition indeed becomes this widespread, the question is whether it would still be possible for citizens to enter public and semi-public spaces anonymously.

In addition, a form of information asymmetry can arise. Through facial recognition, others can obtain very profound insights about a person: emotions read from facial expressions, information linked to that person via the internet, similarities between that person and others with comparable characteristics, and so on. It can therefore become increasingly difficult for people to make a realistic estimate of what others know about them. This can lead to the so-called “chilling effect”: people start to behave differently for fear of possible negative consequences.

Finally, there is the danger of abuse and blackmail. That this fear is not unfounded is evident from FindFace, a smartphone application that could be used in Russia in combination with the very popular social media platform VKontakte. Individuals who secretly worked for an escort company were identified through the app, then blackmailed and threatened.

Facial recognition in the Netherlands

Our study finds that facial recognition is not yet widely used in the Netherlands. Rather, there are local, demarcated experiments and applications in the field of access control, set up to investigate whether a clear use case and revenue model exists. The companies we spoke to in the course of the investigation also indicated that they were deliberately cautious. From start-ups to established developers and buyers of the technology, all indicated that they are aware of the risks. They mention, for example, uncertainty about whether their applications are permitted under the General Data Protection Regulation (GDPR), and they also fear reputational damage should something go wrong. Although these companies often invest in strategies such as “fair AI principles” and “privacy-by-design” to tackle potential risks, they are not certain that this is always sufficient. In that sense, the retreat of companies like IBM, Microsoft and Amazon is not entirely unexpected.

Laws and regulations

This uncertainty immediately raises the question: is the use of facial recognition technology legally permitted or not? In our report, we note that in the EU this is most likely not the case for the majority of current applications. The GDPR places high demands on the processing of personal data, especially with facial recognition technology, in which biometric data are processed. These are data that reveal something about, among other things, a person’s physical or behavioral characteristics, and to which a strict “no, unless” regime applies. In other words, facial recognition may only be used under very strict and limited conditions. In the Dutch GDPR Implementation Act, the legislator cites the example of securing a nuclear power plant: there, such an important public interest is at stake that it may be permissible to use facial recognition to control access to the premises.

There is also the option of asking data subjects for explicit consent, but a variety of requirements are attached to such consent. Someone must make the choice well-informed, and therefore have enough information to be able to grant permission. In addition, the consent must be freely given, which means that in the example of facial recognition to register for a conference, there must also be a way to gain access without facial recognition. The requirements of necessity, proportionality and subsidiarity apply at all times: a company must be able to demonstrate that the facial recognition application is necessary to achieve a particular goal, that the privacy breach is proportionate to that goal, and that there are no less privacy-invasive means that accomplish the same goal. In the case of facial recognition, these questions often cannot be answered positively.

What to do?

Now that companies are taking a step back when it comes to facial recognition, the question becomes all the more pressing: what position does a democratic society actually want to take in this discussion? Around the globe, nations are pondering this question. In our report (which also includes an English summary), we provide an overview of the regulatory options, from a general ban on facial recognition to a policy of tolerance, written from an EU perspective but likely relevant to non-EU actors as well.

However, in order to make a solid choice, a political and social question must first be answered: what role do we actually want to assign to a privacy-invasive technology such as facial recognition in our democratic constitutional state?

Given the risks outlined above, the intrusive privacy breaches and the limited benefits, we believe that great caution is appropriate. At the very least, sectoral bans should be considered, as companies themselves now suggest. Governments could go a step further by issuing a temporary ban. Such a ban would not rule out groundbreaking innovations or world-improving technologies; rather, it would give companies clarity and stimulate them to develop technological applications that do fit in a free and democratic society as we value it in the Netherlands and elsewhere.

Esther Keymolen, Merel Noorman and Bart van der Sloot are the lead authors of the report “At first sight. An exploration of facial recognition and privacy risks in horizontal relationships”, which was presented to the Dutch Parliament on 20 April 2020.

Note from AI Global: Huge thanks to Esther, Merel, and Bart for this insight. Inspired by their work, we have recently been investigating how to start thinking about regulating these tools. This will take time, but we’ve started to share some of our findings in stories on our social channels. Check them out!