- Despite public apprehension related to the privacy and accuracy of facial recognition, companies offering the technology continue to attract the attention of investors.
- Suppliers of facial recognition technology are taking steps to address the ethical concerns raised by civil liberties organizations.
Despite the negative publicity around facial recognition, companies offering the technology continue to attract the attention of investors. This summer, U.S.-based Clearview AI raised $30 million from private investors; Israel-based AnyVision raised $235 million from SoftBank, Eldridge, and others. Both companies have been embroiled in controversy around the use of their technology by law enforcement, but neither seems to be hindered by it. Instead, both are using the opportunity to move the ethics conversation forward.
Clearview AI maintains a database of over 3 billion faces collected from public sources, including news sites, social media, and mugshot websites. The company boasts that its service helps law enforcement organizations across the U.S. solve crimes and enhance public safety.
AnyVision offers real-time facial recognition technology to identify individuals on watch lists, real-time analysis of body cameras for law enforcement, and access control to guard building or site entry points. It also points to applications of its technology that address pandemic-related concerns; it can be used to analyze occupancy, count people, and determine dwell times.
The use of facial analytics for facial recognition alarms many individuals and civil liberties groups. Top concerns center on privacy, since these solutions collect, maintain, and analyze large volumes of images without consent, and on accuracy, since the software can be less accurate when analyzing certain demographic groups.
However, facial analytics can be used for multiple applications beyond facial recognition. It can gauge reactions to new products during market research, ascertain demographics to better tailor content to consumers, assist with flight training, and help understand the cognitive state of drivers in private and commercial vehicles. The technology can assess age and gender, analyze changes in facial expressions to understand emotion, and track eye gaze for training and marketing purposes. It can also support pandemic-related applications by analyzing adherence to safety protocols, such as the use of face masks. These applications don’t store user data, allowing companies to assuage concerns related to privacy.
Clearly, interest in facial analytics is high. And despite negative public opinion, the use of facial recognition by law enforcement will likely continue. Proponents point to its effectiveness in identifying individuals who breached the U.S. Capitol on January 6, 2021, using those successes to muster support for the technology's application in tracking down criminals.
At the same time, suppliers of the technology are taking steps to address public concerns regarding ethics. Last week, Clearview AI announced the formation of an advisory board consisting of senior public figures in government, cybersecurity, and law enforcement, as well as individuals with experience in finance and law. AnyVision promotes five principles for the fair, ethical, and unbiased use of facial recognition technology, and it is discussing measures that address privacy concerns, such as blurring bystanders, limiting data retention times, and restricting the information kept on individuals not identified on watch lists.
Several communities have passed laws banning facial recognition. However, these bans are stop-gap measures; the technology isn't going away. Instead, conversations about ethical concerns, and the development of safeguards and standards for acceptable use, need to be accelerated.