Clearview AI Says It Identified A Terrorism Suspect. The Cops Say That's Not True. - 2020-01-23
Clearview AI, a facial recognition company that says it's amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. "How a Terrorism Suspect Was Instantly Identified With Clearview," read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.
It's a compelling pitch that has helped rocket Clearview to partnerships with police departments across the country. But there's just one problem: The New York Police Department said that Clearview played no role in the case.
As revealed to the world in a startling story in the New York Times this weekend, Clearview AI has crossed a boundary that no other tech company seemed willing to breach: building a database of what it claims to be more than 3 billion photos that can be used to identify a person in almost any situation. The revelation has raised fears that a much-hyped moment, when universal facial recognition could be deployed at mass scale, is finally at hand.