Rachael Anekwe
April 23, 2026
Recently, I started working with a government team that had a common problem — a long-term digital transformation project with an overwhelming backlog.
There were over 7000 items, some going back more than seven years. At first glance, the backlog looked complete. But once you dug in, things didn’t quite add up.
Some tickets were duplicates, while others had no clear description. A few were marked “done,” but weren’t actually implemented. Some features existed in the system but were still sitting as “open” in the backlog.
It made planning hard and prioritization even harder. At some point, it raised a bigger question — can we even trust this backlog?
If you’ve ever inherited something like this, you probably know the feeling.
Instead of trying to fix everything at once, I took a step back and started small, using AI to clean things up one piece at a time.
The first step was getting the backlog out of its existing location and into Excel.
Not because Excel was better, but because it gave me control. I could finally work in chunks instead of staring at thousands of tickets at once.
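The chunking idea is easy to reproduce outside Excel too. Here is a minimal Python sketch, assuming the backlog has been exported as rows of tickets; the field names and chunk size are illustrative, not details from the project described here:

```python
# Hypothetical sketch: split an exported backlog into review-sized chunks.
# The "id"/"title"/"status" fields and the size of 200 are assumptions.
def chunk_backlog(rows, size=200):
    """Yield successive fixed-size chunks of backlog rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# Tiny in-memory stand-in for a real export of 450 tickets:
backlog = [{"id": i, "title": f"Ticket {i}", "status": "open"} for i in range(450)]
chunks = list(chunk_backlog(backlog, size=200))
print(len(chunks))      # 3 chunks: 200 + 200 + 50
print(len(chunks[-1]))  # 50
```

Working chunk by chunk like this is what makes a 7,000-item backlog feel reviewable at all.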
Using Microsoft Copilot in Excel, I started small.
Instead of manually scanning rows for hours, I used prompts to guide the process. I’d ask Copilot to analyze a chunk of tickets, clean them up, and summarize what it found. Not just what changed, but what patterns were emerging.
That part mattered more than I expected. It wasn’t just about cleaning the backlog; it was about understanding what was actually in it.
Slowly, patterns started to emerge — repeated issues, gaps in documentation, and entire areas that hadn’t been maintained.
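One of those repeated issues, duplicate tickets, can also be surfaced programmatically. This is a simplified sketch, not the project’s actual method; the ticket shape and the title-normalization rule are assumptions:

```python
# Hypothetical sketch: flag likely duplicate tickets by normalized title.
from collections import defaultdict

def normalize(title):
    # Lowercase and collapse whitespace so near-identical titles group together.
    return " ".join(title.lower().split())

def find_duplicates(tickets):
    groups = defaultdict(list)
    for t in tickets:
        groups[normalize(t["title"])].append(t["id"])
    return {title: ids for title, ids in groups.items() if len(ids) > 1}

tickets = [
    {"id": 1, "title": "Fix login  error"},
    {"id": 2, "title": "fix login error"},
    {"id": 3, "title": "Update footer"},
]
print(find_duplicates(tickets))  # {'fix login error': [1, 2]}
```

Exact-match grouping like this only catches the easy cases; fuzzier duplicates still need an AI assistant or a human eye.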
The process wasn’t perfect, and sometimes I had to rework prompts or validate results manually. But it was faster and, more importantly, it was consistent.
Cleaning the backlog was only half the problem. The next question was: what had actually been implemented?
To answer that, I pulled transcripts from Microsoft Teams demo recordings and walkthrough sessions. These recordings had valuable context, but they weren’t easy to use as-is.
So again, I leaned on Copilot. This time, I used it to question the transcripts, not just summarize them.
That shift made a difference because instead of guessing what a feature meant, I had something closer to a verified source of truth.
From there, I started mapping, comparing the cleaned transcripts against the cleaned backlog chunks.
The process took time and iteration, but for the first time, the backlog started to reflect what actually existed, not just what the tool said.
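For a rough sense of what that mapping step involves, here is an illustrative sketch that links feature mentions from transcripts to backlog titles with stdlib fuzzy matching. All titles, mentions, and the 0.4 cutoff are made-up examples, not data or tooling from the project:

```python
# Hypothetical sketch: match transcript mentions to backlog titles.
import difflib

backlog_titles = ["Export report to PDF", "Add French language toggle", "Audit log viewer"]
transcript_mentions = ["the PDF export for reports", "french toggle", "dark mode"]

def map_mentions(mentions, titles, cutoff=0.4):
    # Compare case-insensitively, but return the original title.
    lower_to_orig = {t.lower(): t for t in titles}
    mapping = {}
    for mention in mentions:
        hits = difflib.get_close_matches(mention.lower(), list(lower_to_orig),
                                         n=1, cutoff=cutoff)
        mapping[mention] = lower_to_orig[hits[0]] if hits else None
    return mapping

result = map_mentions(transcript_mentions, backlog_titles)
print(result["french toggle"])  # Add French language toggle
print(result["dark mode"])      # None (no backlog item is close enough)
```

Anything that maps to `None` is a candidate gap: either a feature with no ticket, or a mention worth checking by hand.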
Even with a cleaner backlog, there was still too much to tackle at once. So I didn’t try — instead, I focused on high-priority items first.
Working with subject matter experts, we defined what counted as high priority.
With a cleaner backlog and clearer context, those conversations became easier. We weren’t debating what something meant anymore; instead, we were deciding what mattered most.
Once priorities were set, the same process continued:
Clean → map → validate → prioritize
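As a toy illustration of that loop (not the team’s actual tooling), each stage can be thought of as a function you would replace with your own prompts, scripts, or manual review. Every function body and field name below is a stand-in:

```python
# Toy version of the clean → map → validate → prioritize loop.

def clean(chunk):
    # Drop tickets with no usable description.
    return [t for t in chunk if t.get("description")]

def map_to_evidence(chunk, implemented_features):
    # Mark tickets whose feature already exists per demo transcripts.
    for t in chunk:
        t["implemented"] = t["title"] in implemented_features
    return chunk

def validate(chunk):
    # Close out tickets whose feature demonstrably already exists.
    return [t for t in chunk if not t["implemented"]]

def prioritize(chunk):
    # Lower number = higher priority, as agreed with subject matter experts.
    return sorted(chunk, key=lambda t: t.get("priority", 99))

implemented = {"Audit log viewer"}
chunk = [
    {"title": "Audit log viewer", "description": "view logs", "priority": 2},
    {"title": "Dark mode", "description": "", "priority": 1},
    {"title": "Export to PDF", "description": "PDF export", "priority": 1},
]
result = prioritize(validate(map_to_evidence(clean(chunk), implemented)))
print([t["title"] for t in result])  # ['Export to PDF']
```

The point is less the code than the shape: each pass shrinks the problem before the next one starts.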
Over time, the backlog stopped being a static list and became something more useful — a living tool that actually supported planning and delivery.
Using AI didn’t magically fix the backlog or remove the need for thinking, validating, or collaborating with the team. But it did make the process manageable.
It helped break down a problem that felt too big into steps that were actually doable.
The real value lay in having support when the work was repetitive, messy, or unclear.
If you’re dealing with a backlog that feels overwhelming, you don’t need to solve everything at once. You can start small, build confidence, and let clarity come in layers.
If you’re managing a complex backlog, try this with a small sample first: export it, ask an AI assistant to clean and summarize one chunk, and review the results yourself.
Even this first step can reveal gaps you didn’t know existed. From there, you can build your own process, one clean, reliable chunk at a time.
Interested in learning how your team can use existing AI tools to create practical wins? Book a call today to learn how Code for Canada can help.
Code for Canada staff used AI tools to help develop this blog. It reflects our original work and insights, and a human editor reviewed, edited and approved the content. To learn more about how we use AI responsibly, read our ethical AI principles.