Clearview AI provides facial recognition services to law enforcement and other government agencies, but a Dutch data watchdog says its database of 30 billion images was built illegally.
The Dutch Data Protection Authority (DPA) says Clearview scraped images of people from the internet and used them without their consent. European GDPR rules are fiercely protective of the privacy of people in the EU, and the DPA has fined Clearview €30.5 million (just over $33m) for this alleged breach.
The DPA says that “Clearview has built an illegal database with billions of photos of faces, including of Dutch people. The Dutch DPA warns that using the services of Clearview is also prohibited.”
While EU regulators aren’t fans of AI-powered facial recognition, Clearview says it provides its services to the ‘good guys’ and benefits society. The company’s website says “Clearview AI’s investigative platform allows law enforcement to rapidly generate leads to help identify suspects, witnesses and victims to close cases faster and keep communities safe.”
The DPA doesn’t believe that the ends justify the means. Dutch DPA chairman Aleid Wolfsen says, “Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world.
If there is a photo of you on the Internet – and doesn’t that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China.”
In a statement sent to The Register, Jack Mulcaire, Chief Legal Officer at Clearview AI said “Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR. This decision is unlawful, devoid of due process, and is unenforceable.”
This isn’t the first time EU regulators have taken action against Clearview, but the company is unlikely to change tack or pay up, as it operates outside the EU. Wolfsen says the DPA may find other ways to stop Clearview from using images of EU citizens in its database.
“Such company cannot continue to violate the rights of Europeans and get away with it. Certainly not in this serious manner and on this massive scale. We are now going to investigate if we can hold the management of the company personally liable and fine them for directing those violations,” Wolfsen said.
The recent arrest of Telegram CEO Pavel Durov shows that this isn’t an idle threat, and it’s an issue that could affect directors of other AI companies besides Clearview.
If AI models are trained using illegally obtained data or used to breach EU rules, more tech bosses could find themselves held personally liable if they set foot in Europe. Elon Musk has already suggested in a recent X post that he might “limit movements” to nations where free speech is “constitutionally protected.”
The ease with which Grok creates controversial images and loose content regulation on X is unlikely to impress EU data watchdogs. Meta has also held back on releasing some of its AI products in the EU due to potential rule breaches.
Last week, Uber was fined by the Dutch DPA for violating EU rules by sending the personal data of European taxi drivers to the United States.
Clearview and other US-based companies argue that they aren’t subject to GDPR rules, but their directors might want to recheck their travel plans if they’re planning a European trip.