Question

1. What does face surveillance tell us about society?

In science fiction and shows like Black Mirror, facial recognition technology (FRT) is often portrayed as the institutional antidote to all societal ills.

In these worlds, surveillance cameras are deployed by government agencies to instantly catch and intimidate would-be terrorists, pedophiles, kidnappers, and thieves, while also neutralizing nearly all threats to the safety and security of humans. In fiction, the utopian vision of face surveillance as a catch-all solution often reveals a darker, dystopian side: this powerful technology can turn society into a police state with total surveillance and no end to police prying. As we continue into the 21st century, the real world we inhabit is starting to resemble science fiction. Will biometric technology, combined with ever-present cameras, emerge as the champion of public safety and order? Or will this ubiquitous surveillance reveal a more sinister side to our world and those who control the cameras, amplifying the injustices that already exist?

In this explainer, The Privacy Issue approaches the subject by exploring the current state of affairs: describing how the technology works, the legal framework and challenges to face surveillance, and how these systems are implemented in practice.

Question

2. How do surveillance systems get a profile of my face?

Face surveillance relies upon artificial intelligence (AI) to build profiles of people, using neural networks trained to recognize faces. This is accomplished via an algorithm that scans a huge archive of photos containing faces in known orientations and positions. These images may come from video surveillance systems in public places like restaurants and college quads, from research databases, or from online photo archives on Google, Flickr, or social media.

Every time an image is scanned, the software estimates where faces are and isolates them using geometric relationships between facial features. These distinct facial features are compared to previously stored face profiles, resulting either in the creation of a new face profile or in a match with an existing identity. A growing number of private companies offer databases for training face surveillance software, including MorphoTrust, 3M, FaceFirst, and NEC.
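The enroll-or-match step described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: real systems extract high-dimensional feature vectors ("embeddings") from face images with a neural network, and the function name, profile IDs, and 0.6 distance threshold here are all assumptions made for the example.

```python
import math

MATCH_THRESHOLD = 0.6  # illustrative cutoff: smaller distance = more similar faces

def enroll_or_match(embedding, database):
    """Compare a face embedding against stored profiles.

    Returns (profile_id, True) on a match with an existing identity,
    or (new_profile_id, False) after enrolling a new face profile.
    """
    for profile_id, stored in database.items():
        # Euclidean distance between feature vectors stands in for the
        # geometric comparison of facial features described above.
        if math.dist(embedding, stored) <= MATCH_THRESHOLD:
            return profile_id, True
    new_id = f"profile-{len(database)}"
    database[new_id] = list(embedding)  # enroll a brand-new profile
    return new_id, False

# Toy 3-dimensional embeddings; real ones typically have 128+ dimensions.
db = {"profile-0": [0.10, 0.20, 0.30]}
print(enroll_or_match([0.12, 0.21, 0.29], db))  # near profile-0: ('profile-0', True)
print(enroll_or_match([0.90, 0.10, 0.50], db))  # no close match: a new profile is enrolled
```

Note that the database grows every time an unknown face appears, which is exactly why systems trained on images "grabbed in the wild" accumulate profiles of people who never consented.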

Laws are being passed in the U.S. that enable the correlation of document and driver's license databases at the local, state, and federal level, powering private face surveillance via public identity requirements. Increasingly, users provide profiles of their faces via mobile apps with fun and seemingly innocent features — asking you for a selfie in exchange for a cartoon caricature or telling you which celebrity or Renaissance painting you resemble. When deployment of face surveillance becomes ubiquitous, the practice of gathering faces from anywhere and everywhere becomes less taboo. As NYU law professor Jason Schultz describes in an exposé of IBM's face database, “This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild.”

Question

3. How accurate is face surveillance?

Face surveillance has many limitations. Images captured in poor lighting, of moving subjects, or on crowded streets are often too blurry to produce reliable matches. Accuracy drops further when there is no standardized photo for comparison, or when the comparison photo originates in an uncontrolled environment. False matches are common for young people, women, and people of color. In a 2012 study co-authored by the FBI, "the performances of [three commercial algorithms] were consistent in that they all exhibited lower recognition accuracies on the following cohorts: females, Blacks, and younger subjects (18 to 30 years old)."

In 2018, Amazon's Rekognition software was used to demonstrate the limits of face surveillance technology, falsely matching 28 members of Congress with mugshots of other people. Digital activist group Fight For The Future also scanned over 400 photos of UCLA student athletes and faculty members with Rekognition, finding that 58 were incorrectly matched with mugshots. Often, when people were matched with “100 percent confidence,” the only factor in common between a UCLA photo and a mugshot was the person’s race, with the vast majority of false matches being people of color.

As Jameson Spivack, Policy Associate at Georgetown Law's Center on Privacy and Technology, told The Privacy Issue, police in New York scanned a photo of actor Woody Harrelson to catch a beer thief. Police used the photo of Harrelson because the thief was caught on blurry video and seemed to resemble the famous actor. Though the Harrelson lookalike was eventually exonerated, face matches are held in high regard by courts, often trumping other evidence and even a confirmed alibi. In one case, a person was arrested for stealing socks though witnesses insisted he was at his son’s birth when the crime was committed.

Facial recognition is less accurate than fingerprinting, particularly when used in real time or on large databases. As such, personnel at companies and police departments need specialized training to make accurate matches — training which the automated features of surveillance systems tend to discourage. A 2016 report by the Center on Privacy and Technology noted, “We found only eight face recognition systems where specialized personnel reviewed and narrowed down potential matches.”

Question

4. Are governments really scanning my face?

Face surveillance is a tool that many governments rely upon for identification and verification of citizens, with the most powerful countries amassing giant databases of facial profiles. As law enforcement increasingly looks toward automation and camera systems to identify suspects, these databases become more crucial in forensic analysis. In the midst of the COVID-19 pandemic, the call for biometric surveillance to prevent the spread of the virus is a powerful motivator for installing camera systems.

In China, advanced surveillance follows people across entire cities, and cameras are even used to spot and fine jaywalkers. There, the government has used 500,000 face scans to monitor its Uyghur Muslim minority. Buses now record passenger body temperatures, in an attempt to detect fevers caused by the coronavirus, and also take snapshots of faces. Face surveillance is not limited to China, however, and is also implemented by its rival, the United States.

In the U.S., between 26 and 30 states allow law enforcement to run database searches of driver’s license and ID photos. The Los Angeles Police Department drives “smart cars” equipped with face recognition, while San Diego allows law enforcement from nearly 25 agencies to stop people on the street and photograph them with mobile devices. Altogether, at least five major police departments — including agencies in Chicago, Dallas, and Los Angeles — use surveillance cameras to track pedestrians as they stroll the streets.

Biometric and pattern recognition technologies are now utilized by federal, state, and local governments in attempts to thwart domestic terrorism. Similar techniques are now implemented in systems purporting to quell school shootings, with companies like AnyVision advertising cameras that respond to or try to prevent shooting incidents.

Nathan "nash" Sheard of the Electronic Frontier Foundation described the limitations of these systems to us at The Privacy Issue, pointing out that there’s little evidence these surveillance systems work, even setting aside violations of our basic right to privacy. He referred us to Oakland’s Surveillance and Community Safety ordinance, which not only requires elected representatives of targeted demographics to monitor police surveillance policies and processes, but also gives the wider community 30 to 60 days for a follow-up review of these procedures.

Question

5. What's next for face surveillance?

There is evidence that facial recognition technology (FRT) is getting more accurate. According to a study by the U.S. National Institute of Standards and Technology (NIST), in tests conducted on 12 million individuals, FRT failed in only 0.2% of cases. New and advanced systems are progressing from 2D to 3D image processing to boost detection, and some software claims to identify faces even in the dark. As surveillance systems progress, other biometric markers are also collected and analyzed. There are now systems that scan irises, hands, gait (the way you walk), and heartbeat.
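To put that figure in perspective, a back-of-the-envelope calculation (assuming, purely for illustration, that the 0.2% failure rate applies uniformly across all searches) shows how a small relative error rate still yields a large absolute number of errors at population scale:

```python
# Scale of the NIST figure quoted above: 0.2% sounds small, but across
# 12 million individuals it still implies tens of thousands of failures.
failure_rate = 0.002      # 0.2%, per the NIST study cited above
individuals = 12_000_000  # size of the NIST test population

expected_failures = failure_rate * individuals
print(f"{expected_failures:,.0f} expected failures")  # prints "24,000 expected failures"
```

In other words, even a system this accurate can be expected to misidentify thousands of people when deployed against a population of millions.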

There has been pushback against face surveillance. Several cities — San Francisco, Oakland, Seattle, Brookline, and Somerville — limit or ban FRT. Utah police recently declined to share data with Banjo, a technology company with a massive surveillance system. In contrast, the sheriff’s office in Arizona’s notorious Maricopa County added the photo and driver’s license of every resident of Honduras to its facial recognition database.

Even while the American Civil Liberties Union (ACLU) petitioned Congress to “put the brakes” on government agencies that use FRT, few face recognition systems are audited for misuse. Of the 52 agencies utilizing FRT in Georgetown's "Perpetual Line-Up", only the Ohio Bureau of Criminal Investigation prohibited its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech. Only four of these agencies informed the public of the presence of their surveillance systems, and most have never been audited, though their face databases may be more than a decade old.

Face surveillance is on the rise in both the private and public spheres, in the U.S. and elsewhere. Companies like WolfCom equip officers with body-worn cameras that identify individuals they observe on the street, and these cameras are often sold to customers outside law enforcement as well. In a widely circulated story, Clearview AI was found to be selling database access not only to more than 600 law enforcement agencies in North America, but also to numerous private organizations.

The COVID-19 pandemic is accelerating these trends, as we have documented in The Privacy Issue. Clearview AI and other companies are offering their services to government agencies as a more hygienic alternative to other screening processes, while Chinese companies are pioneering detection of partially obscured faces. Wisesoft claims it has created, with Sichuan University, a 3D facial recognition technology that can identify people wearing masks with 98% accuracy, and at least one other company, Hanvon, is making similar claims.

Question

6. What do civil liberties advocates in the U.S. suggest?

Nathan "nash" Sheard of the Electronic Frontier Foundation offered us his thoughts on legal challenges to face surveillance in the U.S. He insisted that the First, Fourth, and 14th Amendments to the U.S. Constitution clearly and unambiguously oppose targeted surveillance of people or groups without their permission.

The First Amendment gives individuals certain rights (such as free speech) that are threatened by face surveillance. The Fourth Amendment protects privacy by prohibiting unreasonable searches and seizures, particularly on private property. Finally, the 14th Amendment says no state should “deprive any person of life, liberty, or property, without due process of law”. Sheard emphasized the long-ranging fallout from face surveillance, reminding us that “Technological limitations of FRT that include its bias may be resolved in time. But privacy violations are likely not only to linger but also to further deteriorate.”

Jameson Spivack of Georgetown Law's Center on Privacy and Technology offered similar thoughts, highlighting the broad power face surveillance affords governments. “Face recognition,” Spivack says, “gives the government power it’s never had before. It allows them to track many people, in secret, from a distance. This reveals intimate details about our personal lives, including visits to health clinics, houses of worship, and other private matters. This changes the balance of power between government and individuals, and could end up discouraging things like political participation, as people fear constant surveillance.”

In the U.S., legislative controls at the federal, state, and municipal level may be a powerful tool to limit the reach of face surveillance and, potentially, curb the deployment of the technology altogether. Though the municipal bans that have been passed in East and West Coast cities are often the center of attention, there are laws being passed around the country to limit scanning and analyzing of faces and other biometric markers.

With the recommendations of U.S. civil liberties experts in mind, we at The Privacy Issue offer the following list of model legislation.

  • The Illinois Biometric Information Privacy Act calls for transparency and accountability. This offers the public an insight into how these systems are used. To function properly, systems must be audited regularly and images must be carefully protected and stored.
  • Oakland's Surveillance and Community Safety ordinances and Face Surveillance Ban not only require initial accuracy assessments of the technology but also require regular reviews to assess their real-time impact, among other controls.
  • A 2019 bill in Michigan prohibits real-time use of face recognition, with an exception when there is an "imminent risk to an individual or individuals of death, serious physical injury, sexual abuse, live-streamed sexual exploitation, kidnapping, or human trafficking."
  • A 2020 bill in South Dakota requires a court order for law enforcement to access face recognition — except when responding to emergencies or to reports submitted to the National Center for Missing and Exploited Children.
  • The Commercial Facial Recognition Privacy Act of 2019 prohibits companies from using facial recognition to track individuals in public or to collect or sell your face data without your consent.