
Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Natasha Lomas
Senior Reporter, TechCrunch
February 12, 2021

Swedish Authority Fines Police for Clearview AI Usage

Sweden’s data protection authority, the IMY, has issued a fine of approximately €250,000 (around $300,000) to the local police authority for unlawfully using the facial recognition software Clearview AI, in breach of the country’s Criminal Data Act.

Mandatory Training and Data Notification

As part of the enforcement action, the police must provide further training and education to staff in order to prevent any future processing of personal data in breach of data protection rules and regulations.

The authority has also directed the police to notify individuals whose personal data was sent to Clearview, where confidentiality rules allow it, as determined by the IMY.

Investigation Findings

The IMY’s investigation revealed that the police had employed the facial recognition tool on multiple occasions. Several employees were found to have used the system without obtaining the necessary prior authorization.

Recently, Canadian privacy authorities determined that Clearview had violated local laws by collecting photographs of individuals for its facial recognition database without their knowledge or explicit consent.

IMY’s Detailed Assessment

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI,” the Swedish data protection authority stated in a press release. “The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act.”

The use of Clearview AI by the Police resulted in the unlawful processing of biometric data for facial recognition purposes. Critically, a data protection impact assessment, required for this type of processing, was not conducted.

The full decision from the IMY is available online (in Swedish).

Legal Advisor’s Commentary

Elena Mazzotti Pallard, legal advisor at IMY, emphasized, “There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules.”

Fine Amount and Potential Penalties

The fine, totaling SEK 2.5 million in local currency, was determined through a comprehensive assessment. However, it is considerably lower than the maximum penalty permissible under Swedish law for these violations, which the IMY notes could have reached SEK 10 million.

The decision notes that neither a lack of awareness of the rules nor inadequate procedures justifies a reduction in the penalty fee, so it is not entirely clear why the fine fell short of the maximum.

Data Deletion Request

The data authority said it could not determine what has happened to the data of the individuals whose photos were sent to Clearview. It has therefore ordered the police to take steps to ensure Clearview deletes the data.

Investigation Trigger and Clearview’s Practices

The IMY initiated its investigation following reports in local media concerning the police’s use of the controversial technology.

Over a year ago, The New York Times revealed that U.S.-based Clearview AI had compiled a database containing billions of facial images. This database was created by scraping public social media postings and collecting sensitive biometric data without individuals’ knowledge or consent.

EU Data Protection Regulations

European Union data protection law imposes stringent requirements on the processing of special category data, including biometrics.

The ad hoc use of a commercial facial recognition database by police, with apparent disregard for local data protection laws, does not align with these requirements.

Further Legal Action

Last month, the Hamburg data protection authority began proceedings against Clearview following a complaint from a German resident regarding the unauthorized processing of their biometric data.

The Hamburg authority cited Article 9(1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a person without explicit consent (or another narrow exception), and found on that basis that Clearview’s processing was unlawful.

Limited Orders and Advocacy Efforts

However, the German authority issued a limited order, specifically requiring the deletion of the complainant’s mathematical hash values – representing their biometric profile.

It did not order the deletion of the photos themselves, nor did it issue a pan-EU order banning the collection of European residents’ photos, despite advocacy from European privacy campaign group, noyb.

Recommendations from noyb

noyb is encouraging all EU residents to use forms on Clearview AI’s website to request a copy of their data and demand its deletion, and to object to being included in the database. It also recommends filing complaints against the company with their local DPA.

Upcoming AI Regulations

European Union lawmakers are currently developing a risk-based framework to regulate applications of artificial intelligence. Draft legislation is anticipated this year, intended to complement existing data protections within the EU’s General Data Protection Regulation (GDPR).

Canadian Ruling and Ongoing Investigations

Earlier this month, Canadian privacy authorities found Clearview AI’s practices to be illegal and warned they would “pursue other actions” if the company does not follow their recommendations, which include ceasing the collection of Canadians’ data and deleting previously collected images.

Clearview stated it had ceased providing its technology to Canadian customers last summer.

The company is also facing a class action lawsuit in the U.S. under Illinois’ biometric privacy law.

Last summer, the U.K. and Australian data protection watchdogs launched a joint investigation into Clearview’s personal data handling practices, which remains ongoing.

Tags: Clearview AI, Sweden, data protection, privacy, facial recognition, police

Natasha Lomas

Journalism Career

Natasha was a senior reporter at TechCrunch for more than twelve years, from September 2012 to April 2025, reporting from a European base.

Before joining TechCrunch, she reviewed smartphones for CNET UK, following more than five years covering business technology.

Early Career at silicon.com

Natasha’s early career included a role at silicon.com, which was later folded into TechRepublic. Her coverage there focused on:

  • Mobile and wireless technologies
  • Telecoms & networking infrastructure
  • IT skills and training

Freelance Contributions

Beyond her staff positions, Natasha has also freelanced for organizations including The Guardian and the BBC.

Educational Background

Natasha holds a First Class degree in English from Cambridge University and an MA in journalism from Goldsmiths College, University of London.
