Submission to the ISED National Sprint to Shape Canada’s Renewed AI Strategy


This submission is in response to Innovation, Science and Economic Development Canada’s (ISED’s) National 30-Day Sprint to Shape Canada’s Renewed AI Strategy.

An important goal of this rapid public consultation is determining how to get Canadians to trust AI, so they will use it more. I worry that this sprint, if the Government of Canada treats it as a one-off, will undermine that goal.

There is no doubt that AI is one of the most promising technologies of our time; AI stands to benefit Canadians immensely if we build and deploy it responsibly. But recent headlines demonstrate the significant risks AI poses to Canadians, especially children and youth, when it is deployed hastily. Children are forming problematic emotional relationships with LLMs. Chatbots are exacerbating suicidal tendencies. Educators are deluged with pedagogical challenges arising from AI-enabled cheating and deskilling. Democratic norms are being eroded by the rampant spread of misinformation and deepfakes. And despite anecdotal claims that AI is improving productivity, studies suggest it is actually decreasing productivity in areas where it is meant to excel, such as programming. All the while, AI-producing corporations continue to make unproven claims about the economic benefits of their AI systems, while politicians, business leaders, and other decision-makers stress the need to adopt AI rapidly so that Canadians don’t fall behind. The technological FOMO is palpable.

These mixed public messages have a predictable effect: according to a recent KPMG report, Canadians don’t really trust AI. But perhaps Canadians are justified in their reluctance to adopt it. Their low trust in the technology seems appropriately calibrated: the current AI ecosystem appears untrustworthy.

We urge the Government of Canada to adopt the right goal: increasing the trustworthiness of AI systems deployed in Canada. Nobody should encourage unwarranted trust in AI. Increasing the trustworthiness of AI is at the core of our responses.

Because the sprint was rapid, our approach at CRAiEDL was to select a subset of questions from ISED’s longer list that fell within our collective expertise. We convened a small half-day workshop to draft responses to those questions and collaboratively edited our answers, which form the remainder of this submission.

I urge the Government of Canada to treat this sprint as the first step in a series of meaningful public consultations on much-needed (and overdue) sovereign AI policies. The risks we outline in this submission are backed by a growing body of publicly available evidence of harms to Canadians. Canadians, and Canadian children and youth most urgently, deserve strong federal policy responses that protect them from the most egregious deployments of risky, often knowingly harmful, AI-based products. Strong, enforceable red lines prohibiting specific harms would help make AI more trustworthy. Only with those protections in place should Canadians begin to trust AI and consider its broader adoption. Only with those protections in place can the Government of Canada responsibly urge them to do so.

Dr. Jason Millar

BScE, BA, MA, PhD, P.Eng.
Canada Research Chair in the Ethical Engineering of Robotics and AI
Associate Professor, School of Engineering Design and Teaching Innovation
Cross-Appointed to the Department of Philosophy
Faculty of Engineering
University of Ottawa

The Canadian Robotics and AI Ethical Design Lab. (2025). “Submission to the ISED National Sprint to Shape Canada’s Renewed AI Strategy.” CRAiEDL.