
LORINC: Trying to police the way cops use AI-based investigation tools


Late last month, the Toronto Police Services Board released a new policy meant to guide the agency’s future procurement and use of artificial intelligence (AI) technologies for law enforcement. The 4,800-word document, now posted on the TPSB’s website, is the product of a fairly extensive canvass of public and expert opinion, and is said to be the first such governance framework for any Canadian law enforcement agency.

It consists of a general statement about guiding principles and an articulation of the policy’s purpose, as well as 21 separate operational provisions divided into four broad categories: review and assessment of new AI technologies; board approval and reporting prior to procurement, utilization and deployment; monitoring and reporting; and continuous review.

As with all matters of policing, the board — which consists of elected and appointed civilians — sets policy at a high level, while the chief of police is responsible for carrying it out and then reporting back to the TPSB on how things are going. On paper, it’s an elegant arrangement; in practice, one more often honoured in the breach than the observance, as the saying goes.

This new governance policy is the direct result of public outrage in the wake of media revelations from 2019 and 2020 about the use of the Clearview AI app by the Toronto Police Service, OPP, and numerous other forces in North America and Europe. Clearview, if you recall, is a facial recognition system trained on giant pools of publicly accessible photos — Facebook, Instagram, Twitter — and capable of generating reams of personally identifiable information on someone: just take their picture with a smartphone and see what Clearview dredges up.

Clearview, in effect, took surveillance to an entirely new level, and the company got a lot of mileage out of it by offering demos as loss leaders to law enforcement agencies. Cops were using it in a complete policy vacuum, although the grotesque privacy violations should have been obvious, and were confirmed last year by the Privacy Commissioner of Canada.

The upshot is that Canada’s largest police service now has a policy on its books governing the use of such technologies, but we still need to know that the chief will enforce it, which I don’t think should be assumed, given recent police practices around these and other privacy-demolishing security technologies (e.g., Pegasus, a spyware product from Israel’s NSO Group that can intercept all smartphone data and communications before they are encrypted).

So what about the document itself?

Among other provisions, the policy calls for a public consultation process for the adoption of new AI technologies; compels vendors to identify the training data they’ve used to build their algorithms and also disclose previous human rights violations; and requires certain types of analyses, such as the legal consequences of information obtained through these technologies and the likelihood of unintended consequences for equity-seeking groups.

I spoke to a few experts, who offered insights on the policy’s strengths and weaknesses.

“It looks like a very genuine effort to me to grapple with the issues,” says Teresa Scassa, a privacy law expert at the University of Ottawa. “What is perhaps most striking is that it was done at all — the leadership here is noteworthy.”

Criminal and human rights lawyer Kate Robertson, co-author of a Citizen Lab-LEAF submission to the TPSB, says the framework broadens the definition of mass surveillance and seems to ban the use of the sort of private sector predictive policing software products that were all the rage in cities like Los Angeles in the late 2010s.

Anna Artyushina, a York University data governance research fellow, adds that the so-called “risk-based approach” is encouraging, as is the policy’s promise of open procurement (i.e., issuing requests for proposals for new technologies).

Risk-based oversight, which has been adopted by European Union regulators, establishes a risk hierarchy, and links the vetting and governance of any system to the likelihood that its use can inflict real harm. For example, many website chatbots use off-the-shelf AI to generate generic answers to customer questions. The risk with such AI algorithms is very small, whereas the use of a facial recognition system that has been trained on non-representative sets of data, and is therefore likely to misidentify racialized individuals, presents a great deal of risk.

The TPSB’s policy also imposes an outright ban on “extreme risk” technologies, including those that operate without human discretion; those that enable mass surveillance or rely on indiscriminately collected data (New York, for example, has used facial recognition software in its network of traffic cameras since 2016); those that apply AI to life-safety applications; and those that purport to predict the likelihood that an individual or a group will offend or re-offend.

On the downside, Artyushina says, the TPSB policy doesn’t really get at the question of why police need to procure AI-based law enforcement tools in the first place. She points out that the new framework lays out a detailed and frequently bureaucratic process, but neglects to articulate any foundational justification for technologies that basically promise to predict the future — always a dubious enterprise.

Robertson adds that the policy doesn’t require the TPS to seek independent verification of the reliability, necessity and proportionality of an AI-based system the service wants to acquire. The framework also seems to allow the TPS to train new AI software on its own data, which has been collected in highly contentious ways over the years (e.g., carding). “The re-purposing of policing data to train AI technology is a very high-risk and controversial use of technology in policing from a human-rights perspective,” she warns. “It is critical that it be regulated accordingly.”

One more potential red flag: the Information and Privacy Commissioner of Ontario, in its submission to the public consultation call issued by the TPSB last winter, recommended the establishment of whistle-blower protection. For example, if a TPS employee becomes aware that police are sharing a new AI-based app that hasn’t gone through the prescribed assessment, can they report this conduct without fear of professional reprisal?

The IPCO said this kind of measure should be “required” so service members could report violations securely and anonymously. The board’s new AI framework is silent on the matter. An IPCO spokesperson says the commission staff are still reviewing the policy. Scassa, however, notes that other examples of whistle-blower protection legislation in Canada haven’t been especially effective.

Perhaps the most salient point in this discussion isn’t what’s in or out of this particular policy, but rather the complete absence of legislation aimed at regulating police use of a suite of technologies that pose daunting challenges for public oversight, accountability and the degree to which we’re willing to allow surveillance software into our lives. This void isn’t just a Canadian problem. In the EU, which has the world’s most progressive data policies, several large municipalities have banded together to push national governments to do more to regulate the rapidly expanding use of facial recognition in public and semi-public spaces, like supermarkets and subways, Politico reports.

Scassa observes that Canada’s long-standing federal/provincial jurisdictional divides have impeded the development of a coherent legislative framework — a vacuum that helps private cyber-security and surveillance multinationals, like Peter Thiel’s Palantir, gain a foothold in Canada.

The TPSB’s new framework marks an important development in this story, yet it’s still only the sound of one hand clapping. Unless law enforcement agencies are held to account by strict regulation promulgated by provincial and federal legislatures, these mainly procedural documents will just end up on the ever-growing pile of previous policy projects that may generate a bit of upbeat PR for law enforcement agencies — and are then completely forgotten.

photo by Retis (cc)
