Your face is what makes you unique. It gives you your distinct appearance, allows you to express your mood, emotions, and reactions, and enables you to communicate. Over the past decades, your face has also become a tool for doing a whole lot more: it can unlock your phone and let you board a plane, cross borders, and pay for coffee. All of this is due to the rise of facial recognition technology (FRT), a type of artificial intelligence that uses deep learning to quantify the unique identifiers of individual faces, which are then analyzed and compared against databases of photos. While FRT has distinct advantages – such as crime and fraud prevention, efficiency, and convenience – the risks that accompany its widespread use signal the end of privacy as we know it. Yet governments around the world have been slow to initiate public debate and enact regulation pertaining to its use. All the while, FRT has proliferated in both the public and private sectors, normalizing constant, immutable surveillance that is set to become the default for our future: one in which – without urgent government action – our ability to move through life unmonitored will cease to exist.
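To make the mechanism concrete: at the core of most FRT pipelines, a deep network reduces each face to a numeric "faceprint" (an embedding vector), and identification is a nearest-neighbour search against a database of stored faceprints. The following is a minimal illustrative sketch – the network itself is omitted, and the names, vector dimension, and threshold are assumptions for illustration, not any vendor's actual values:

```python
from typing import Optional
import numpy as np

# Hypothetical database of "faceprints": 128-dimensional embedding
# vectors produced by a deep network (the network is omitted here).
# Real systems hold millions of these, one or more per enrolled face.
rng = np.random.default_rng(seed=0)
database = {
    "person_a": rng.random(128),
    "person_b": rng.random(128),
}

def identify(probe: np.ndarray, threshold: float = 0.6) -> Optional[str]:
    """Nearest-neighbour search: return the closest enrolled identity,
    but only if it falls within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        dist = float(np.linalg.norm(probe - enrolled))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # The threshold trades false matches against missed matches.
    return best_name if best_dist < threshold else None
```

Everything that makes FRT controversial happens around this simple loop: where the database comes from, who has been enrolled in it, and how the match threshold is tuned.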
How Facial Recognition Tech Became Commonplace
Once the subject of dystopian fiction, FRT has come to infiltrate daily life around the globe over the past decade. In China, the world leader in FRT, its uses range from equipping police with AI-enabled glasses to letting customers pay with a smile and catching people who illegally dump waste. In September 2019, the authority behind India’s nationwide Aadhaar ID system made it compulsory for telecom service providers to verify 10% of their customers using FRT, and in October 2019, the French government announced plans to use it in a new national ID system, Alicem. Russia has attached the tech to 5,000 CCTV cameras in Moscow and is piloting a facial recognition payment system in train stations.
In the United States, FRT is used – and sometimes abused – by law enforcement, border patrol, and an increasing number of the country’s largest stores (including Walmart and Target) in the name of theft prevention. A 2020 exposé by Kashmir Hill in The New York Times revealed yet another private-public partnership in FRT surveillance, this time documenting a Peter Thiel-funded company called Clearview AI that matches uploaded faces against a private database of approximately three billion photos scraped from millions of websites, including Facebook, Venmo, and YouTube. The artificial intelligence that Clearview AI has developed can match faces even in imperfect photos, such as those from surveillance cameras, with one police sergeant stating, “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.” Clearview AI is offering facial recognition services to nearly 600 U.S. law enforcement agencies and boasts a library of images seven times the size of the FBI’s.
In the United Kingdom, the London Metropolitan Police and South Wales Police have trialled FRT at sports matches and peaceful protests. The latter use, despite vehement opposition and proven inaccuracy, was ruled lawful by the High Court in September 2019. In London (the world’s second-most monitored city after Beijing, with around 420,000 CCTV cameras that are increasingly being upgraded with FRT capabilities), property developers used the technology to monitor people walking through King’s Cross and match them against databases provided by police, though the practice has since been scrapped. The tech is even being employed to determine who to serve next at a London bar, and supermarkets including Tesco and Sainsbury’s are gearing up to deploy it for age verification. FRT also appears in the palms of our hands – from Facebook’s ‘tag a friend’ option to Apple’s iPhone unlock function – and by one prediction, 64% of all smartphones will use the technology in 2020. All this amounts to a billion-dollar global industry – one set to grow from $3.2 billion in 2019 to $7 billion by 2024.
The cross-sector demand for FRT is on an exponential upward curve worldwide. The factors fuelling this unrelenting growth are threefold. First, with the exception of a handful of cities in the United States, FRT is currently subject to very few regulations and devoid of industry-wide standards. Second, FRT tracking systems are cheap (under $100 USD) and readily available “to anyone with an internet connection and a credit card,” as a New York Times experiment in April 2019 proved. Third, the artificial intelligence behind FRT is smart and learning at ever-faster rates. FRT software’s ability to analyze poor-quality images (as low as one megapixel) is owed to deep learning, a type of AI that mimics human brain function, processing vast clusters of data through artificial neural networks.
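That accessibility is easy to demonstrate. The sketch below uses face_recognition, a free, open-source Python library built on dlib’s deep-learning models – one of many off-the-shelf options. The image paths are placeholders and this is an illustrative sketch rather than any vendor’s product, but roughly this much code, run on an ordinary laptop, suffices to match a face against a watchlist photo:

```python
import face_recognition  # open-source library: pip install face_recognition

# Placeholder paths: a reference (watchlist) photo and a camera frame.
known_image = face_recognition.load_image_file("watchlist_photo.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

# Each detected face is reduced to a 128-dimensional embedding.
known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in face_recognition.face_encodings(unknown_image):
    # compare_faces checks embedding distance against a tolerance (0.6 by default).
    is_match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {is_match} (distance {distance:.3f})")
```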
The advancement of FRT algorithms relies on the accumulation of photos of people’s faces, which are amassed in extensive databases. These databases are built from a range of sources, which vary depending on who is using them and for what purpose. Some are compiled from police watchlists or registers of known offenders. Others are built from photos people upload to platforms or apps – as with Facebook’s ‘tag a friend’ function, which is used to train the company’s own FRT algorithm, and photo storage apps like Ever. Still others, like Labeled Faces in the Wild or MegaFace, comprise photos scraped from the internet without the consent, or even knowledge, of those they depict. Academics and private companies use these openly available databases to train the algorithms behind FRT. Almost every major tech company is developing its own system: Facebook has DeepFace, Google has FaceNet, and Amazon has Rekognition, to name a few. Each is investing heavily in what Microsoft’s chief executive Satya Nadella called the “race to the bottom”: developing ever more powerful FRT systems to sell to governments, which deploy them for a wide range of policing and monitoring purposes.
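Just how openly available these training databases are is easy to see: Labeled Faces in the Wild, mentioned above, ships with standard machine-learning toolkits. A minimal sketch, assuming scikit-learn is installed (the dataset downloads on first call):

```python
from sklearn.datasets import fetch_lfw_people

# Downloads Labeled Faces in the Wild on first use: thousands of face
# photos collected from the web, each labeled with the person's name.
lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)

print(lfw.images.shape)   # (n_samples, height, width) of face images
print(lfw.target_names)   # the named individuals depicted
```

None of the people depicted were asked before their faces became training material for anyone with a Python interpreter.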
The Ultimate Privacy Threat
While many societies have become accustomed to monitoring mechanisms like CCTV and other cameras in public spaces, automated license plate readers, camera-equipped doorbells, and drones, facial recognition marks a considerably deeper infringement on personal privacy, due to its ability to immutably identify unique physical facts about individuals in real time. By automating mass surveillance in a way that – as Woodrow Hartzog and Evan Selinger write in a New Yorker op-ed – “overcomes biological constraints on how many individuals anyone can recognise in real time,” it denies citizens the freedom to walk down streets, across cities, around supermarkets, and through transport hubs without being watched by governments or corporations – for purposes of which they are unaware. At this rate, we are being led “down the path of eliminating any place where people aren’t surveilled,” warns David Paris of Australia’s Digital Rights Watch.
Unlike the degree of control we have over our personal data online, there are “no privacy settings for walking down a city street, or browsing in a mall. And you can’t delete your face,” explains Paris. Because FRT records our faces, its use subjects law-abiding individuals to a “perpetual lineup” that negates the fundamental democratic principle of the presumption of innocence – and the accompanying requirement for reasonable suspicion of guilt that law enforcement usually needs to establish in order to obtain a warrant for surveillance. In this respect, FRT is altering the nature of democracy. All the while, government action to incite debate and discussion about the ethics and implications of its use has been patchy and stilted across the globe. Short of companies adopting voluntary moratoriums on the development and sale of FRT (more on this later), any measures taken are essentially playing catch-up while the tech continues to proliferate – and as the ACLU of Northern California points out, “once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo.”
Past Harms, Future Menaces
In addition to the encroachment on privacy and the undermining of democratic principles, FRT’s actual and potential harms are manifold: from bias and discrimination to abuse by governments and bad actors alike. In the words of Kate Crawford, co-director of AI Now, “These tools are dangerous when they fail and harmful when they work.” On algorithmic bias, numerous studies have shown that although FRT is highly accurate at identifying white men (a 1% misidentification rate, according to an MIT study of systems from Microsoft, IBM, and Face++), it is far more likely to misidentify transgender and non-binary people, as well as people of colour – and especially women of colour. In 2018, the ACLU evidenced this problem by using Amazon Rekognition to compare members of Congress against a database of 25,000 criminal mugshots. The experiment produced disproportionately high false matches for Congresspeople of colour, who accounted for 40% of the false matches despite making up only 20% of Congress. As Joy Buolamwini, founder of the Algorithmic Justice League – which raises awareness of and fights against bias and discrimination in technologies – has written, “whoever codes the system embeds her views.” According to Georgetown Law School’s project The Perpetual Line Up, this is compounded by the fact that, owing to disproportionately high arrest rates, “systems that rely on mug shot databases likely include a disproportionate number of African Americans.” As with many other forms of artificial intelligence, the inordinately negative effect that the technology has on African Americans and other communities of colour is only being further entrenched as adoption races ahead with minimal accountability.
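The scale of the disparity in the ACLU experiment is worth spelling out. Using only the two figures cited above (a back-of-the-envelope sketch, not the ACLU’s own methodology):

```python
# Figures as cited above from the ACLU's 2018 Rekognition test.
false_match_share = 0.40  # share of false matches that were Congresspeople of colour
congress_share = 0.20     # share of Congress made up of people of colour

overrepresentation = false_match_share / congress_share
print(f"People of colour were {overrepresentation:.0f}x overrepresented "
      "among false matches.")  # prints: 2x overrepresented
```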
In the U.S., law enforcement has a long – and generally accepted – tradition of gleaning biometric data, such as fingerprints and DNA, from criminal suspects. But facial recognition gives police forces access to the biometric data of thousands of people who are not criminal suspects – all without congressional or judicial oversight. In July 2019, The Washington Post revealed that both the FBI and Immigration and Customs Enforcement (ICE) had been scanning driver’s licence photos to create extensive databases, which were then used to search for suspects in crimes as low-level as petty theft. The potential for abuse of these systems is high – and it has already been realized. As a report from Georgetown Law’s Center on Privacy and Technology found, the New York Police Department has engaged in widespread abuse of its FRT system, including altering images and uploading a celebrity’s portrait to its photo database in order to apprehend a man whom a witness had described as resembling that celebrity. His crime was stealing a beer.
As the ACLU argues, FRT “poses a particular threat to communities already unjustly targeted in the current political climate.” Surveillance systems developed by the controversial Palantir Technologies have served U.S. police forces as powerful monitoring tools in President Trump’s war on immigration, used to identify undocumented immigrants for deportation proceedings. This is in addition to what we now know about the gargantuan image database and formidable capabilities of Clearview AI, and its partnership with U.S. law enforcement.
In China, where almost all of the country’s 1.4 billion citizens are included in FRT databases, the Chinese Communist Party uses the technology to track those it considers a threat to its rule – in particular, to surveil the Uighur Muslim population in the Xinjiang region. Chinese manufacturers of FRT are also exporting their developments further afield: in 2018, the Guangzhou-based startup CloudWalk received $301m from the state to establish a mass FRT programme in Zimbabwe – a country in which China has heavily invested – in the name of addressing social security issues.
FRT systems have security issues of their own – with serious implications, given that faces, unlike passwords, cannot be changed without resorting to plastic surgery. In early 2019, security researcher Victor Gevers found that one of the databases the Chinese government had been using to track Uighurs, owned by a company called SenseNets, had been left open on the internet for months. Gevers stated, “This database contains over 2.565.724 records of people with personal information like ID card number issue & expire date, sex, nation, address, birthday, passport photo, employer and which locations with trackers they have passed in the last 24 hours.” A few months later, U.S. Customs and Border Protection divulged that its database containing photos of travellers and licence plates – which had been managed by the subcontractor Perceptics – had been hacked. Speaking to Fast Company on the matter, the Electronic Frontier Foundation’s David Maass commented that the U.S. government should have been able to foresee the hack, “considering that India’s biometric system had been breached just a year before.” According to Maass, the hack underscored the concern that the security of FRT databases is not sufficiently regulated. “We’ve also seen law enforcement misplacing trust in vendors, for whom public safety and cybersecurity may not be primary concerns,” he said.
Where Are the Regulators?
The above-described issues of bias, abuse, and security flaws have proliferated in the absence of federal, national, or supranational government regulation – allowing FRT to operate free of the constraints of transparency, proportionality, and democratic accountability. Regulation of the technology, and of the photo databases it processes, has thus largely been left to private corporations fuelled by commercial incentives (Amazon, for example, is proposing its own regulatory framework that it hopes lawmakers will adopt) or to police departments with no independent oversight. “We’ve skipped some real fundamental steps in the debate,” Silkie Carlo of the privacy advocacy group Big Brother Watch told The New York Times. “Policymakers have arrived so late in the discussion and don’t fully understand the implications and the big picture.” Around the world, public opinion on the matter varies. While in China – according to the Chinese Academy of Sciences – 83% of the population endorses the “proper use” of government-led facial recognition, that number dips to 56% in the United States, according to the Pew Research Center.
In the United Kingdom, a 2019 survey conducted by the Ada Lovelace Institute found that 76% of Britons are against the use of FRT by companies for commercial purposes, such as in shopping centres, and 55% want the government to regulate police use of the technology. Police forces in the United Kingdom have been trialling FRT over the past few years for purposes including the monitoring of peaceful protests – a use the High Court endorsed in September 2019 as compliant with human rights and data protection laws. Yet as the London Policing Ethics Panel’s 2019 Report on Live Facial Recognition emphasises, FRT has a chilling effect on the rights to assembly and speech: 38% of 16-24 year olds said they would be more likely to stay away from events monitored by police using FRT. “People fear normalisation of surveillance, but are more likely to accept it when they see a public benefit articulated,” the Ada Lovelace Institute’s Olivia Varley-Winter told The Privacy Issue. “If there's a defined security risk, people tend to be more accepting of its use. It's about having the choice, and the opportunity to opt out. Informed awareness of the tech is really low. There hasn't been proactive outreach – we're only beginning to have the debate in the media now. We need dialogue that isn't just owned by the people who want to see [FRT] in use,” she stated.
Around the world, demand for dialogue and action is growing louder by the day: the public, privacy activists, civil rights organisations, academics, politicians, and even some police forces have expressed resistance to the unchecked use of FRT for policing and commercial purposes. Some local governments in the U.S. are taking notice. Momentum is growing across California and Massachusetts – at the time of publication, San Francisco, Oakland, and Berkeley, as well as Somerville, Brookline, and Cambridge, have all banned the use of FRT by their local government departments, including police. Data & Society AI policy advisor Mutale Nkonde expects debate and action at the local government level to keep growing. Across many states, city councils are adopting Community Control Over Police Surveillance (CCOPS) legislation to hold police use of the tech accountable. American cities including Portland and Cambridge are debating bans on private-sector use of FRT, and forty of the world’s largest music festivals have pledged not to use it. Across the Atlantic, the European Union is drafting legislation set to impose “sweeping regulations” on FRT.
Facing Up to AI: A Call to Action
Until rigorously debated and drafted FRT legislation is passed at the national level, calls for a halt to the use of FRT are gaining cross-sector momentum. The European Commission is considering a ban on facial recognition in public places for up to five years, according to a 2020 whitepaper draft. In the United States, a coalition of 30 civil society organisations, representing 15 million members, is petitioning for a nationwide ban on FRT’s use by law enforcement. Yet Varley-Winter argues that “outright bans risk being reactionary, stop-gap approaches. In order to forestall that eventuality, [the Ada Lovelace Institute] is calling for a moratorium as a more forward-looking approach to regulation that allows proportionate consideration and deliberation,” she told The Privacy Issue. “If there are ways to make facial recognition technology work for people and society, it's important we work out what they are – but industry, policymakers and wider society need time to do so in an inclusive, considered way.”
Varley-Winter points to Scotland’s establishment of an Independent Advisory Group on the use of biometric data in May 2017 as a positive example of government action on the matter. In May 2019, the Biometric Data Bill was introduced to the Scottish Parliament, aiming to “ensure independent oversight of the acquisition, retention, use and disposal of existing, emerging and future biometric data in the context of criminal justice in Scotland,” she said. “The Bill would create a new Scottish biometrics commissioner, with a specific focus on ethical and human rights considerations arising from the use of biometric data, and on maximising the benefit of biometric technologies.” Varley-Winter added that this approach – an independent review process centred on human rights – is a “promising model for other governments to consider.”
In the U.S., the AI Now Institute’s Kate Crawford has also called for a voluntary moratorium on the use of FRT, urging its makers to follow in the footsteps of Axon, “the world’s leading supplier of police body cameras”, which in 2019 stopped selling FRT-enabled cameras to police forces due to the risk that the technology could “exacerbate existing inequities in policing, for example by penalising black or LGBTQ communities”. In an editorial for Nature, Crawford cites four principles that the AI Now Institute has developed as a potential framework: (1) a bar on funding or deploying FRT systems until they have been vetted and strong legal protections are in place; (2) legislation requiring public input before such systems are used, as well as rigorous reviews of FRT for bias, privacy, and civil rights concerns; (3) a government waiver of restrictions that hinder research into, and oversight of, FRT systems; and (4) greater whistleblower protections for tech company employees. In the UK, the Automated Facial Recognition Technology Bill, drafted by the UK’s House of Lords Select Committee on Artificial Intelligence in late 2019, is making its way through Parliament at the time of publication.
Building on the above recommendations – and taking into account the European Union’s High-Level Expert Group on AI's seven key requirements for trustworthy AI – The Privacy Issue calls for the following:
1. That governments lead extensive public consultation and debate on the use of FRT, ensuring that a broad spectrum of voices is heard and taken into account;
2. On the basis of this public consultation process, that lawmakers prioritize the passing of legislation regulating the use of FRT in both the private and public sectors, including the following:
- That the use of FRT by police be held strictly accountable to the principles of transparency and proportionality;
- That FRT use be regularly audited by an independent commissioner or oversight board;
- That individuals receive clear and sufficient notice before they are subject to FRT, and are accordingly able to affirmatively give – or revoke – their consent;
- That independent research be conducted into algorithmic bias and its effects on vulnerable communities; and
3. That companies involved in the development and use of FRT adopt a voluntary moratorium on its sale and purchase until such regulation has been passed – as has been called for by the ACLU of Massachusetts, among other civil society actors.
Writing in the Financial Times, Ada Lovelace Institute director Carly Kind cites an example of how a successful moratorium prevented discrimination and exploitation in Britain’s insurance sector. In 1998, British insurers voluntarily implemented a two-year ban on using genetic test results for health insurance and for life insurance policies – the latter of which prospective homeowners in the United Kingdom typically need in order to obtain a mortgage. Owing to a gap in the law, it would have been legal for insurance companies to compel their customers to share the results of genetic testing, which they could then have used to raise premiums or refuse cover. Instead, the Association of British Insurers adopted a moratorium, which was extended and eventually formalised in an official agreement with the government, binding 97% of the industry.
Though the pace of public debate and lawmaking may always lag behind the speed at which new technologies develop, the British insurers’ voluntary moratorium stands as proof that sweeping industry changes can be implemented effectively to prevent infringements on rights and limit potential harms. Given FRT’s global scale and pace of development, the call to action is a bold one. Yet in the face of a rapidly proliferating technology that is not only eroding democracy and civil liberties but is also routinely weaponised against at-risk communities, nothing less than bold regulatory action is required.