Clearview AI's $750,000 mugshot purchase attempt failed
Clearview AI, a controversial facial recognition company, attempted to buy 390 million mugshots and 690 million arrest records for $750,000. The deal, intended to expand the company's surveillance capabilities, would have included sensitive personal information such as Social Security numbers. Documents show Clearview signed a contract in 2019 with Investigative Consultant, Inc. to acquire the records. Jeramie Scott, Senior Counsel at the Electronic Privacy Information Center, noted that the deal covered a broad range of personal data.

The deal collapsed, however, leading to legal disputes between Clearview and the consulting firm. Clearview received an initial batch of data but deemed it "unusable," prompting breach of contract claims. An arbitrator ruled in Clearview's favor in December 2023, and the company is still pursuing court action to recover its lost investment.

The episode raises concerns about the risks of combining facial recognition technology with criminal justice data, particularly around bias in identification. Scott highlighted that misidentifications disproportionately affect Black and brown individuals, who are overrepresented in the justice system. Several wrongful arrests have already been tied to flawed facial recognition technology; in one case, a defendant wrongfully accused on the basis of a faulty facial match was ultimately cleared by other evidence.

Clearview AI faces mounting legal challenges worldwide, including fines for privacy violations. The company recently avoided a £7.5 million fine in the UK by arguing that it fell outside the regulator's jurisdiction. It has also had to surrender part of its ownership under a settlement over biometric privacy laws. And while other companies in the industry rely on more conventional business practices, Clearview's aggressive collection of images from social media continues to raise ethical questions.
As the use of facial recognition technology grows, discussions about user privacy, consent, and algorithmic bias continue to be critical.