Code for Canada’s Ethical AI Principles

Can artificial intelligence (AI) be used ethically and effectively for the public good? 

Governments worldwide are grappling with this important question, drafting early guidance and legislation to steer the use of this new technology in the public sector.

At Code for Canada, we work shoulder-to-shoulder with public servants, helping them find the right digital solution to meet their needs.

We believe that AI, like all technologies, can be a useful tool. It has the potential to automate routine tasks, offer high-level analysis, and even reduce human error.

But, like all tools, it can cause harm. It could perpetuate human bias, particularly against marginalized and vulnerable groups. It could contribute to disinformation and cause errors at scale. And, particularly where government is concerned, its misuse could erode trust in public institutions.

The good news is we don’t have to start from scratch when deciding how to harness this new technology. We can apply many of the same principles that govern the ethical use of other technologies to AI. Around the world, governments, academics, and technologists are creating principles and frameworks to do just that.

Building on the work of these dedicated civic technologists, Code for Canada has created our own set of principles for ethically working with AI. 

These principles will guide our experimentation with AI and our work with our public sector partners. They are intended to safeguard against the risks of AI while offering guidance on how to use it to benefit the public. 

Our principles

Transparency 

Transparency is key when developing or working with an AI system. Clearly communicate when, how, and why AI is being used with all stakeholders. Be transparent about how information is being collected, used, and disclosed.

Proactively inform the public of all AI projects or policies that will impact them. Whenever possible, embrace working in the open by publishing relevant documentation, including source code, training plans, data sources or ethical impact assessments.

Accountability

Accountability when working with AI can take many forms. Develop oversight mechanisms to manage responsible AI development and use; these can include governance boards, ethics committees, or AI working groups.

Distribute accountability throughout your organization so everyone developing, deploying, or operating AI systems is responsible for their proper use. Human oversight must be present at every step of the development or use of AI systems to ensure that their outputs are accurate, legal and ethical. 

Fairness

AI has the potential to perpetuate existing human bias and exacerbate societal inequities. It is essential to develop and use AI systems in a way that assesses and mitigates these risks. Bias should be evaluated and addressed throughout an AI system’s life cycle.

People potentially affected by AI projects and policies should be given the opportunity to provide meaningful feedback through inclusive public engagement. Historically marginalized groups should be considered at every step of a system's development, especially in the case of Indigenous stakeholders.

Public Purpose 

When creating internal AI models, clearly articulate the public benefit. Set measurable goals that align with both your organization's needs and the needs of society.

Use human-centred design when developing and deploying AI systems to ensure real people's needs are met.

Privacy & Data Protection

All training and input data for internally created AI systems should be collected, used, and disclosed in accordance with privacy laws. Measures to protect individual privacy rights should be taken, and privacy experts should be consulted when necessary. 

AI systems should use anonymized and synthetic data over personal information whenever possible. When using third-party generative AI tools, choose those with the strongest privacy protections in place.

Safety & Security 

AI systems should have their safety and security risks assessed and managed throughout their life cycle. Clear measures should be put in place to protect sensitive information, evaluate the system's input and output for accuracy and bias, and conduct regular ethical reviews. AI models should also be carefully selected for their safety and security. 

Education 

All stakeholders involved in the selection, development, and use of AI systems should undergo continuous education and skill development. To manage AI projects, create multidisciplinary teams with various areas of expertise. As AI technology continues to evolve, efforts should be made to develop organization-wide AI literacy.

Testing & Experimentation

Create controlled, safe environments to experiment with AI technology. Experimentation should begin with low-risk applications, allowing team members to learn and iterate responsibly. Data quality and outputs should be assessed before adding complexity or considering deployment. Regular, iterative system testing should be conducted before and during the use of all AI systems.

What’s next?

These principles are just a starting point: as AI and Code for Canada's understanding of it continue to evolve, so will our approach to using it responsibly and effectively.

We are heavily indebted to the work of Think Digital. Their report on AI guidelines worldwide was incredibly helpful in informing our principles. 

We’re excited to dive into AI experimentation as an organization and with our partners. We’ll be sharing more as we continue our journey and encourage you to follow along.

If you or your organization are curious about how to approach AI in your work, we would love to hear from you. Get in touch.