US cities are moving toward a ban on facial recognition - what's the threat?

Wednesday, May 8, 2019

FastCompany
The creeping threat of facial recognition
By: S.A. Applin

This essay is part of The Privacy Divide, a series that explores the misconceptions, disparities, and paradoxes that have developed around our sense of privacy and its broader impacts on society.
 
In a unanimous vote on Monday, a San Francisco government committee pushed the city closer toward instituting a complete ban on government uses of face recognition technology. It would be the first US city to do so: As police, government agencies, and businesses gravitate toward the technology, there are neither federal laws nor strong local rules that govern its use. Many of us seem wary of surveillance systems designed to automatically identify or profile us in public, but that wariness doesn’t seem to matter much, and that is what is most concerning.

“[T]here’s a fundamental flaw in our justification for these technologies,” Accenture’s Responsible AI Lead Rumman Chowdhury wrote on Twitter last month, referring to the San Francisco proposal. “Do we live in a sufficiently dangerous state that we need this? Is punitive surveillance how we achieve a healthy and secure society (it is not)?”

My question about any new technology that is being rapidly adopted—AR, VR, big data, machine learning, whatever—is always “why?” Why do businesses or governments want face recognition? “‘We’ aren’t justifying these technologies,” I replied. “Those in power are, who mostly stand to profit from it. This is either through making/selling the gear, or using the tech to reduce headcount by replacing folks. This isn’t about society or even civilization, it’s about money and power.”

But money and power aren’t the only reasons behind the push to adopt facial recognition. Cooperation is how humans have managed to survive as long as we have, yet the impulse to categorize some people as “the other” is as old as humanity itself. Unfortunately, misconceptions and speculations about who some of us are and how we might behave have contributed to fear and insecurity among citizens, governments, and law enforcement. Today, those fearful ideas, combined with a larger, more mobile, more diverse population, have created a condition in which we know of each other, but do not know each other, nor do we often engage with each “other” unless absolutely necessary. Our fears become another reason to invest in more “security,” even though, if we took the time to be social, open, and cooperative in our communities, there would be less to fear and more security as we looked out for each other’s well-being.

EXPERIMENTING WITH SURVEILLANCE ON EACH OTHER
However, instead of that approach, we’ve been clocking each other. One way we began to enhance our ability to identify the “other” was through the use of surveillance cameras. As surveillance video became more affordable, many types of businesses increased their surveillance capabilities by adding cameras to their physical locations to discourage theft and violence. Security guards would monitor video feeds as well as (or instead of) watching people, but over time, cameras replaced many human guards. In this way, the idea of surveillance video became a psychological deterrent as much as a policing effort: Yes, we were being recorded, but we didn’t know if anyone was watching the recording, nor did we know if they would act on what they had seen.

As surveillance cameras have become smaller (and cheaper), they have been included in more consumer products, offering people the opportunity to incorporate this technology into their daily lives. Our smartphone cameras, Ring-branded doorbells, and the tiny surveillance cameras hidden in Airbnbs (and many other places in society) have become normal. Surveillance has become distributed among governments, corporations, and each one of us who carries a smartphone or video camera.

Compared to more generalized surveillance, cameras in the home have an advantage in defending against the threat of the “other.” A home is a relatively contained environment, and any anomalies can be easily identified and reported by the homeowner or their software in real time. Homeowners are vigilant about their properties, may use apps that feed them local crime news, and in many cases also hire a third-party security company for extra oversight. There is also less data to process in the private home environment, and neighbors who look out for unusual activity add another layer of “knowing” and community knowledge to the surveillance process.

But there is one major flaw in the use of surveillance camera technology in society writ large: the profusion of cameras generating an abundance of footage has created a processing problem. There are masses of footage piling up behind every camera you can see—and the many cameras you can’t—but there simply aren’t enough people or resources to process and make sense of those recorded images. Even when a crime is spotted, the perpetrator likely made their escape hours, or days, before the footage was seen, if it is seen at all. That monitoring deficit makes the technology easy to circumvent, and it has emboldened some people to innovate work-arounds (a process referred to as “covert agency”).

Even assuming there is a way to comb through the surveillance footage and find enough detail to identify someone or their vehicle, the resources to act on the recorded crimes often don’t exist. The sometimes interrelated and complex surveillance systems found outside our homes only work when most of us believe that something can be done with the data that is discovered.

When a technology isn’t working, we may introduce iterative innovations as improvements, and that is what may be happening with surveillance cameras. For instance, municipalities have been quick to adopt wearable cameras for police officers. One argument for these body cameras is that they can help keep citizens (as well as officers) more in line; another is that they can assist in investigations and, perhaps soon, real-time surveillance. However, this approach is not without flaws: policing resources remain scarce, and the costs of storing and managing all the body camera video have been immense for many police departments. Meanwhile, many argue that much could be done to improve policing through better practices, training, and community interaction, instead of “better” or additional technologies.

INNOVATING THE NEXT SURVEILLANCE TECHNOLOGY
The net result of these approaches is that we now have surveillance cameras that may be ineffective for multiple reasons, yet have become an integral part of surveillance in public and in the larger private sphere of corporations, stores, and so on. In situations that are complex and require human engagement, there is often a desire for a quick, inexpensive solution that builds on what has come before, without taking into account how an add-on function might change outcomes. This is how facial recognition comes to be seen as a panacea: an “add-on” to the already established surveillance infrastructure.

Mass media has helped contribute to that notion. In science fiction, facial recognition technology just works. Films glorify detectives and cops who save humanity by using facial recognition to capture villains. This is unrealistic, for science fiction is scripted, and its plots and characters are not functioning within any interdependent society with multiple, multiplexed experiences, beliefs, and issues. The glamour of big-budget science fiction films, and the perceived “cool” of the technology within them, is catnip both to technologists, who use fiction as a template for the technology they want to build, and to municipal authorities, who battle budget deficits and may see an affiliation with the “latest and greatest” technology as a badge of success and status. Technologists are often neither concerned with nor familiar with how what they build will affect society. Some municipalities, meanwhile, seem either to gloss over the potential outcomes and impacts of new technology in favor of providing a free test bed to tech companies, or to misunderstand what will happen in their towns.

In essence, facial recognition offers a glittering promise of easily identifying and catching villains, like in the movies, without having to do any of the “messy” work of forming human relationships and getting to know the people in a community. It is much safer for police to use software than to interact with potentially dangerous criminals—or to take the risk of engaging with people and finding out that they are not, in fact, criminals at all. In this way, the cobbled-together socio-technological scaffolding of surveillance cameras, which now may include facial recognition, starts to be used as a proxy for community knowledge and behavior.

But that data is not an accurate replacement for community knowledge because it can be misinterpreted and misapplied. The technology doesn’t work equally well or fairly for everyone, especially for those of non-white, non-cisgender backgrounds: As one widely cited MIT study found last year, three common facial-analysis systems showed an error rate of 0.8 percent for light-skinned men compared with 34.7 percent for dark-skinned women. For law enforcement scenarios in particular, the risks of misidentifying people could be severe.
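To put that disparity in perspective, here is a minimal, purely illustrative sketch of how those error rates could play out at scale. The screening volume is a hypothetical assumption, not a figure from the study, and it assumes the study’s per-image error rates carry over directly to a screening scenario.

```python
# Illustrative sketch only: rough arithmetic showing how the error-rate gap
# reported in the MIT study could compound at scale. The screening volume
# below is hypothetical, not a figure from the study.

error_rates = {
    "light-skinned men": 0.008,   # 0.8 percent, per the study
    "dark-skinned women": 0.347,  # 34.7 percent, per the study
}

faces_screened_per_group = 100_000  # hypothetical volume per group

for group, rate in error_rates.items():
    expected_errors = int(faces_screened_per_group * rate)
    print(f"{group}: ~{expected_errors:,} misclassifications "
          f"out of {faces_screened_per_group:,} screenings")

# Roughly 800 misclassifications for one group versus roughly 34,700 for
# the other, from the same number of screenings.
```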

FROM FACIAL RECOGNITION TO PROFILING
Facial recognition software is an innovation on the surveillance camera—which was deployed to solve a social problem. But only people, not technology, can solve social problems. People may have to apply technology to solve those problems, though, and therein lies the crux of our quandary: which technologies are appropriate, which are not, and what tools do we use, as Dr. Chowdhury and others ask, to form a “healthy and secure society”?

Answers to that question are now being offered up without sufficient public input. Tools for face recognition that are broadly available and inexpensive, and that are used without regulation or transparency, are the most concerning. It is also unknown whether short-staffed, budget-conscious, or technologically inexperienced police departments will adhere to the voluntary rules set forth by facial recognition software vendors.

Once facial recognition software from companies like Amazon is widely deployed—and we are the subjects within these heterogeneous experiments—the next technological advancement may be imported: artificial intelligence that utilizes facial recognition to draw conclusions about us and about our behavior. This is what is now happening in China, where AI and face recognition are being used to surveil 11 million Uighurs, a Muslim minority group.

“The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review,” the New York Times reported recently. “The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”

Chinese authorities and companies are using the technology to catch criminal suspects at large-scale public events, and in more quotidian situations: identifying people at airports and hotels, shaming jaywalkers at crosswalks, and delivering targeted advertising. Face recognition is also spreading across the US, from border security to personalized ads in the freezer aisle. A New York property group recently tried to make facial recognition-based keys mandatory for units in its rent-stabilized apartment buildings.

It may be that we are on the cusp of some of our technologies coming home to roost in ways that are new to us, ways that both push our boundaries and test our societal norms. For example, once we are “known” practically everywhere we go, then, just as has been done with other data that tracks and recognizes us, we can be constantly “profiled.” Once we are “profiled,” it is assumed that our behavior can be algorithmically predicted. And once our behavior can be “predicted” by governments and marketers, we may lose our agency (and sense of reality) in the face of algorithms, which generate more “trusted data” than our own accounts, self-knowledge, and awareness, or those of the people we know.

Cooperation is achieved when all parties yield a bit of what they want to create an outcome that is acceptable. While sometimes people might forfeit their agency to aid an outcome they want, it is not a usual practice to do this repeatedly. That is enslavement, servitude. To not have agency, to not have the ability to choose how one is profiled or sold something, undermines the foundation of cooperation. The AI applications that utilize facial recognition for “convenience” become an even more dangerous step in the technological “innovation” around surveillance technologies, as we are forced to give more and more of ourselves away.

Efforts to ban uses of the software completely have faced resistance. Lawmakers and companies like Microsoft have mostly pushed for regulations that would, among other things, mandate clear signage to alert people when facial recognition tools are being used in public. However, with no way to opt out of surveillance in a public or private space except to leave that area, identifying signage offers people no reasonable choice. And without a means of opting out of such a potentially powerful system, human beings begin to become enslaved. This is why serious, enforceable laws that can put restrictions on facial recognition are crucial, and why this discussion is so important at this juncture in our technological development.

Once facial recognition and other AI becomes pervasive—and in the absence of serious enforceable laws that can put guardrails on the technology—we will be unprotected, and as such will be subjected to any purpose to which the government or business wants to put our identities and locations. This is where greed, profit, and power come into play as motivators.

If we want to identify dangerous “others,” perhaps they are the entities who wish us to forfeit our faces, our identities, and our heterogeneity—not just so that they can profit but as a means of automated classification and societal control. This is why facial recognition is a critical technology for us to debate, and why growing numbers of us already wish to ban it in our society.

S. A. Applin, PhD, is an anthropologist whose research explores the domains of human agency, algorithms, AI, and automation in the context of social systems and sociability. You can find more at @anthropunk and PoSR.org.
