
What We Learned from Testing Our Website with Assistive Technology Users
Usama Arshad
October 8, 2025
Across Canada, government teams are working hard to meet new accessibility standards. It can be a serious challenge, and while automated scans and Web Content Accessibility Guidelines (WCAG) checklists are a good start, they rarely tell the full story.
At Code for Canada, we believe that testing with real people is essential to creating usable, accessible digital products and services. So when it came time to refresh our website, we decided to test it with people who use assistive technologies.
The process helped us identify accessibility barriers, better understand the needs of users with disabilities, and make practical improvements to the website.
In this blog, we’ve broken down how we approached the work, what we found, and why every public service team should test with real users.
Automated accessibility testing tools are helpful for catching technical issues like missing alt text, low colour contrast, or improper use of headings. But they can’t answer the most important question: Can someone using assistive technology complete a task on your site?
We discovered this firsthand during testing, when our participants ran into several barriers that automated testing didn’t catch. These issues don’t show up in a WCAG scan, but they have a major impact on usability and accessibility.
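To make that concrete, here is a minimal sketch of what an automated scan looks like in practice, using the open-source axe-core engine through its Playwright integration. Assume a standard Playwright test setup; the URL is a placeholder, not our actual site or pipeline:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage passes automated WCAG A/AA checks', async ({ page }) => {
  await page.goto('https://example.org/'); // placeholder URL

  // Run the axe-core rule set, limited to WCAG 2.0 A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Each violation reports the rule, its estimated impact, and affected nodes.
  for (const violation of results.violations) {
    console.log(`${violation.impact}: ${violation.id} (${violation.nodes.length} elements)`);
  }

  expect(results.violations).toEqual([]);
});
```

A scan like this flags missing alt text or contrast failures in seconds, but it will report a clean page even when someone using a screen reader cannot actually finish a task on it.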
To improve how well our site supports assistive technology users, we focused on key areas where people are most likely to take action. These included the homepage, service pages, and contact forms.
We picked these areas because they represent typical user journeys on our site and rely on a mix of interactive elements, such as buttons, menus, and links, which can present challenges for people using assistive technologies.
We designed a test with three core user flows, and participants worked through them using a range of assistive technologies.
Each session was moderated and recorded with participant consent. We asked participants to speak aloud as they navigated the site and to share anything they found confusing or difficult. While the focus was on usability rather than technical audits, our participants brought deep expertise and lived experience to the process, providing valuable feedback grounded in real-world accessibility needs.
After testing, we reviewed each recording and synthesized the results. We logged specific barriers, grouped them by theme, and rated their impact from minor to critical. Each barrier was mapped to a WCAG criterion, with a clear recommendation for how to fix it. The result was a practical, developer-ready summary of what to improve and why.
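To give a sense of what “developer-ready” looked like, here is a hypothetical sketch of how one logged barrier could be recorded. The field names and the sample entry are illustrative inventions, not our actual research template or a finding from this round of testing:

```ts
// Hypothetical structure for one barrier logged during synthesis.
interface BarrierRecord {
  theme: string;          // grouping, e.g. "navigation" or "forms"
  description: string;    // what the participant experienced
  impact: 'minor' | 'moderate' | 'serious' | 'critical';
  wcagCriterion: string;  // the mapped WCAG success criterion
  recommendation: string; // the concrete fix for developers
}

// Illustrative entry only, not a real finding from our study.
const sample: BarrierRecord = {
  theme: 'forms',
  description: 'Error message after submitting was not announced by the screen reader',
  impact: 'serious',
  wcagCriterion: '4.1.3 Status Messages',
  recommendation: 'Announce the error via an ARIA live region linked to the field',
};
```

Structuring findings this way keeps the link between a participant’s experience, the relevant standard, and the fix, so developers can act on each item without re-watching session recordings.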
The testing surfaced accessibility barriers across content, layout, and interaction patterns. Each of these barriers may seem small on its own, but together, they show how easily users can get blocked when a site is not tested with the people it is meant to serve.
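To show the kind of gap we mean, consider a generic example (not one of our specific findings): a clickable div styled to look like a button. A static scanner typically has nothing to flag here, yet the control is unusable without a mouse. A sketch in React, with a hypothetical submit handler:

```tsx
import React from 'react';

declare function submitForm(): void; // hypothetical stand-in for the page's handler

// Looks like a button and typically passes automated scans, but it has no
// role, never receives keyboard focus, and can't be activated with Enter or Space.
const InaccessibleSubmit = () => (
  <div className="btn" onClick={submitForm}>Submit</div>
);

// A native <button> gets the role, focus, and keyboard activation for free,
// and screen readers announce it as an actionable control.
const AccessibleSubmit = () => (
  <button type="submit" className="btn" onClick={submitForm}>Submit</button>
);
```

Only a person navigating the page by keyboard or screen reader, or research with people who do, reveals that the “button” can never be reached.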
If you are a public servant navigating accessibility requirements, testing with real users is one of the most powerful steps you can take to find and fix potential barriers. Automated tools and checklists have value, but they can’t show you what it’s like for someone with a disability to use your service with assistive technology.
Inclusive testing gives you the evidence you need to make changes that matter. It helps teams focus not just on compliance, but on usability, clarity, and independence for all users.
Want to test your digital service with real users of assistive technology? Code for Canada’s Inclusive User Research (IUR) program can help. Get in touch today to learn more.