I used an LLM to analyze the Government of Canada's recently released "What We Heard" report. The conclusions it reached suggest that more transparency and honesty are required if the government wants its use of AI to be trustworthy, and to convince Canadians they are being heard. Read more at the link below.
Last week, as part of its efforts to revamp Canada’s National AI Strategy, the Government of Canada (GOC) released its “What We Heard” report following a 30-day public consultation “sprint”. To its credit, it also released the full set of public response data, which was essential for producing this analysis. As planned, the report was produced with significant automation using generative AI, which analyzed and helped summarize the impressive volume of responses the GOC received during the sprint.
Using an LLM (Anthropic’s Claude Sonnet 4.5 in Extended Thinking mode) and focusing on transparency, a key principle of responsible AI, I performed my own analysis comparing the “What We Heard” report to the data contained in the public response dataset. In particular, I tested the GOC’s bold claim that using several AI models to perform the data analysis “enabled thorough, unbiased reporting”. I also tested its claim that “The methodology, data science and use of AI in this project are all aligned with and conform to the Treasury Board of Canada Secretariat’s guidance on the responsible use of artificial intelligence in government”.
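For readers curious how such a comparison could be automated, here is a minimal sketch using the Anthropic Python SDK. It is an illustration under stated assumptions, not a record of my exact workflow: the file names, prompt wording, and token budgets are hypothetical, and the model ID shown is the one current at the time of writing.

```python
# A minimal sketch (file names and prompt wording are hypothetical) of how a
# report-versus-dataset comparison could be scripted with Claude Sonnet 4.5
# in Extended Thinking mode via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical local copies of the GOC report and a sample of the public responses.
report_text = open("what_we_heard_report.txt", encoding="utf-8").read()
responses_text = open("public_responses_sample.txt", encoding="utf-8").read()

prompt = (
    "Compare the following 'What We Heard' report against the raw public "
    "responses. Focus on transparency: identify themes present in the responses "
    "that the report omits, downplays, or overstates, and note any claims in the "
    "report that the responses do not support.\n\n"
    f"=== REPORT ===\n{report_text}\n\n=== RESPONSES ===\n{responses_text}"
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # model ID as published at time of writing
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},  # Extended Thinking
    messages=[{"role": "user", "content": prompt}],
)

# Extended Thinking responses interleave "thinking" blocks with the final text;
# keep only the text blocks for the written analysis.
analysis = "".join(block.text for block in message.content if block.type == "text")
print(analysis)
```

In practice the full response dataset is far too large for a single prompt, so an analysis like this would need to chunk the responses, summarize or tag each chunk, and then compare the aggregated themes against the report.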