From In-Basket to Inbox

Joey Hua

March 27, 2019

Greetings from the PSC fellowship team! It’s been five months since Siobhan, Caley, and Joey (that’s me!) were first embedded into the Public Service Commission (PSC).

As an expert on and advocate for fair hiring practices, the PSC provides hiring assessments to other federal departments and agencies. When people apply for a job with the Government of Canada, they go through a series of assessments to determine if they are qualified for the position. These can include everything from multiple-choice tests to language exams, task simulations, and interviews.

Our mission is to help the PSC modernize its assessment practices. Specifically, we’ve been asked to build a modern, digital tool to assess candidates who have applied for managerial positions. The current managerial assessments are lengthy pen-and-paper in-basket tests (a simulation often used by companies to assess prioritization and organization skills), which no longer reflect the digital environment federal employees live and work in.

To get a better picture of what we can (and should) build, we undertook a lengthy user research journey, and we’re excited to share our findings!

Contextual and industry research

Our first forays into discovery research were largely about context gathering. Having come from the private sector, we were new to the ins and outs of public service work, and there were a lot of unknowns.

We began by reviewing business requirements, technical documents, and findings from over four years of research conducted by a team of psychologists at the PSC.

From left: Code for Canada fellows Siobhan Ozege and Joey Hua review user research data contained on numerous printouts and sticky notes.

The PSC’s research was aimed at understanding the needs of its clients and perceptions of the pen-and-paper tests currently used to assess aspiring managers. They found that managers and HR staff often lacked confidence in hiring decisions, and expressed a desire for standardized tools to measure key leadership competencies. This research was really helpful to our team, and has formed the basis of the PSC’s efforts to revamp the content of its managerial assessments.

In addition to reading, we spoke to the people who live the day-to-day reality of the hiring and assessment world we’re now immersed in. From the psychologists who handle test content to the client-facing consultants to the in-house accessibility specialists, conversations with our colleagues at the PSC helped us understand the new world around us.

A diagram illustrating how user research helps identify the motivations, goals, values, fears and rationales users bring to a given product or service
User research

We believe a great product is one that balances business needs and user needs. Reading the PSC’s business requirements and previous research findings was immensely helpful for understanding what such a product might look like in this context, but we needed more.

So we moved on to the second part of discovery: talking to users.

In total, we met with 28 people across the public service who identified as managers, team leads or supervisors, test administrators, and scorers. We asked them open-ended questions. We listened to stories about their work aspirations and challenges. And we learned about their experiences with hiring and assessments, both those administered by the PSC and federal hiring tests more generally.

We also did observational studies, where we shadowed staff to understand the tools they used and the sorts of tasks and responsibilities they were given. Understanding the different digital (and non-digital) tools staff use, and what their day-to-day work experience is like, will help us create a product that feels familiar and relevant to those taking a hiring test.

When we said “across the public service,” we meant it. We met with users from British Columbia, Alberta, the National Capital Region, Ontario, Québec, and Nova Scotia.
Our findings

So, after crisscrossing the country and meeting federal employees in the field, here’s what we learned.

Public servants care about delivering a fair hiring experience and want job candidates to succeed

From marketing to operations to IT, we connected with public servants from across departments and geographies. We learned a lot about their motivations, goals, and pain points related to hiring and assessments, and ultimately, what they need to succeed in their roles.

Regardless of where we were or who we were speaking to, one thing always shone through: managers in the federal public service are huge supporters of their employees’ goals, and feel strongly that hiring and promotion should be fair, accessible, and based on merit.

But there are some challenges…

Our user research shed light on common challenges both job applicants and hiring managers can encounter. In particular, we identified three major pain points related to hiring:

Regional diversity: Hiring managers brought up logistical challenges with hiring and assessing candidates in different locations, i.e., a location other than where the manager or department is based. Specifically, assessing candidates from other regions is more difficult because of the location and availability of testing centres (there are currently only five official PSC testing centres across Canada). This is why the initial scope of our project included Remote Supervised Testing, a practice that would enable hiring managers to more easily assess candidates in diverse locations. Participants also cited challenges for candidates coming from outside government; some felt the current testing practices can privilege candidates who have knowledge (or the ability to gain that knowledge through personal connections) of the internal contexts and processes specific to the public service and/or the department.

Lengthy hiring processes: A major theme from the research was that hiring a candidate takes too long. The PSC has highlighted this as a priority in its departmental plan, with a goal of reducing hiring times to an average of 197 days. Meanwhile, some participants mentioned being in hiring pools for upwards of five years, or waiting nearly two years for a security clearance. These timelines can, and do, cause qualified candidates to drop out of the process.

Assessment criteria aren’t always clear: Candidates being assessed sometimes felt they weren’t given adequate information prior to their tests, meaning they weren’t able to prepare effectively.

“If they knew what they were being assessed on, they might have done better”

Others were simply unaware of certain logistics, like being able to ask for feedback after a test to better understand how they performed, and how they could improve.

Paper tests do not reflect the realities of the modern workplace

Over 30% of our participants reported taking a managerial in-basket assessment, a paper-based simulation where prospective managers go through the tasks and memos provided in a basket on their desk. Perceptions of this test varied among participants. Some managers were interested in seeing candidates’ “ability to research, read something, and apply it elsewhere,” while others felt the paper in-baskets were not reflective of the modern work environment.

“We’re not sitting with paper files anymore with a signature and answering the phone. It’s all emails and BlackBerrys. The tools that they use don’t seem relevant.”

Through our observational studies, we were able to peek into managers’ work environments and see the types of tools they used to get their work done. The research validated many of our assumptions about people’s work processes. We also used the term “tool” very broadly, and as a result, ended up with a plethora of answers.

Our research revealed that although the majority of managers relied heavily on technology and email, most people also mentioned using physical items such as whiteboards, notebooks, and sticky notes.

Managers across the federal public service worked in various environments, but there were some common email tools mentioned.
Next Steps

As we move out of discovery and into building a product that works for its users, we will be testing and validating our research. We’ve started by mapping out the flows for each of our users, and are now in the process of pulling data from our findings to validate some anticipated design decisions.

Soon, we’ll be refining and iterating on those features based on user feedback gathered through ongoing cycles of building and testing. We can’t wait!

The outputs of the PSC fellows’ wireframe sketching session.
From our research, we were left with two big questions:
  1. How might our product address issues of fairness and improve communication with candidates?
  2. How might our product address the unique needs of external candidates and provide them with the tools to succeed on their assessment?

In order to build an accessible testing platform that addresses those questions and alleviates users’ pain points, we believe it should reflect the following considerations:

Ease of use: Making sure the tool is familiar and easy to use regardless of an applicant’s experience with the specific technologies used in the federal government.

Accessible by design: We’ve identified some gaps in our research related (very broadly) to accessibility. We’ll be supplementing the user research we’ve done with more targeted outreach to persons with disabilities and those living outside the National Capital Region.

Providing candidates with the tools to succeed: Using clear and simple language throughout the platform and the entire assessment experience. This could include allowing candidates to access a sample test prior to their assessment, helping them understand the criteria and prepare.

Good design is getting out of the way and letting users do their thing. While we might not be able to make taking a standardized job assessment delightful, our goal is to minimize disadvantages for the person on the other end of the test by providing the knowledge, tools, and environment they need to do their best.

Thanks for reading, and stay tuned to the Code for Canada blog for more updates on our team and our product. In the meantime, if you’d like to participate in our user research, we’d be really grateful to hear from you!