RCMP broke privacy laws by using controversial Clearview AI facial recognition tools, watchdog says

OTTAWA — Canada’s national police force broke privacy law by using controversial facial recognition software that put innocent Canadians in a “24/7 police lineup,” the federal privacy commissioner says.

The RCMP conducted “hundreds” of searches of Clearview AI’s database of billions of photos scraped without consent from the public internet, including social media sites. The company then lets law enforcement and private businesses match photos against that database.

It was illegal, according to privacy commissioner Daniel Therrien — both Clearview AI’s collection of images without consent, and the RCMP’s use of that database of unlawfully collected images.

Clearview’s practices amounted to “mass surveillance,” Therrien concluded, and the RCMP’s use of its database broke the Privacy Act.

“The data involved in (facial recognition technology) speaks to the very core of individual identity and as both commercial and government use of the technology expands, it raises important questions about the kind of society we want to live in,” Therrien concluded in his report.

The RCMP initially denied it used Clearview AI, both publicly and to the privacy commissioner, who is an independent officer of Parliament.

After a joint Toronto Star and BuzzFeed News investigation found the Mounties had paid for Clearview’s services, however, the force publicly admitted to using the controversial tools on a limited basis — predominantly for identifying victims of child sexual exploitation.

But Therrien’s office concluded that not only did the RCMP initially “erroneously” say it had not used Clearview, the force “did not satisfactorily account for the vast majority of the searches it made.”

“The RCMP has serious and systemic gaps in its policies and systems to track, identify, assess and control novel collections of personal information,” the report concluded.

The RCMP refused an interview request Thursday morning, and said it would be issuing a statement later in the day.

In a statement, a lawyer for Clearview AI disputed the privacy commissioner’s findings — or even that Canadian privacy law applies to the U.S.-based company.

“Clearview AI disagrees with its assertion that the company’s actions are not fully in accordance with Canadian law, or even that Canadian law applies to its activities,” said Clearview AI attorney Doug Mitchell in a statement to the Star.

“Clearview AI has gone beyond its obligations and is willing to consider further accommodations in order to meet some of the privacy commissioner’s concerns within the bounds of the law and feasibility. Clearview AI hopes to continue the dialogue in order to find common ground.”


In addition to the RCMP, a dozen police forces and private businesses confirmed they had used the facial recognition software — including nine police forces that previously told the Star they did not use Clearview AI.

The U.S.-based company uses artificial intelligence to sift through a database of billions of photos scraped from the public internet and match people’s images. The technology has been called “dystopian” by the company’s critics.

With files from Wendy Gillis and Kate Allen.

TORONTO STAR