• Opinion
  • May 22, 2019
  • 12 minutes

Here’s the red tape holding back government AI

Opinion: Strict rules are forcing AI practitioners to be creative

This article was written by Mathieu Audet, Government of Canada Entrepreneur and manager for Employment and Social Development Canada. For more like this, see our digital government newsfeed.


The Canadian federal government, like many others, is actively looking to understand how Artificial Intelligence (AI) will shape the way it develops policy and delivers programs and services to its citizens.

For example, what is the effect of AI-driven automation on the Canadian workforce? What tasks internal to government can AI automate? When is it ethical to do so and will we know the difference? These questions are the focus of numerous working groups, committees and task forces.


I am privileged to sit as a GC Entrepreneur on one of Canada’s deputy level committees, the Task Force on Public Sector Innovation (TF-PSI). The Task Force looks to advance the adoption of and experimentation with AI in the Canadian public service. I was also fortunate to participate in Nesta’s States of Change training.

Answering the who and how

Recently, the TF-PSI heard how AI is improving business processes and augmenting employees in the Government of Canada (GoC). Several GC Entrepreneurs, myself included, recognised that although a number of departments and agencies are involved in AI-related work, we have only a limited understanding of our current capacity, and of how best to develop, use and grow AI across the GoC.

For example, how did our public servants learn to use AI? What are their academic or technical backgrounds? What tools are they using and are they facing any challenges? To shed light on these questions we crafted an AI scan — an overview of who in government is using AI — and went directly to those working with AI to ask. We began working on the scan in July 2018 and are currently sharing the findings and building support to implement its recommendations.

As GC Entrepreneurs reporting to a government-wide committee rather than a single department, we have the opportunity to work horizontally and provide a picture of the current state of AI capacity across the GoC. Our goal was not to document what is done with AI, but to understand the who and the how.

In this blog I will focus on how we did the AI scan. If you’re interested in our findings, the AI scan can be found here. Below are the steps we followed from start to finish, along with links to our supporting documents. I hope others can leverage our work to advance knowledge and discussion around AI in their own organisations. Full disclosure: I have a working knowledge of AI, its tools and its mathematics. This likely helped the conversations flow, but I would not consider it a prerequisite. I spearheaded the scan over a two-month period, dedicating half of my time to the effort.

Before we began, we crafted our overarching research question: who are the AI practitioners in the GoC, and how did they establish themselves? Our working definition of an AI practitioner was a public servant applying current machine learning techniques (e.g. neural networks), with widely used tools such as Python and R, to solve problems in their workplace.

Seven questions for the experts

To get to the heart of our research question, we felt it was critical to have in-depth, one-on-one interviews. We adopted a semi-structured format with a mix of categorical and open-ended questions to allow some flexibility in the responses. We did not have a good sense of what we would find, and interviews seemed far better suited to such diverse content, often charged with an emerging technical vocabulary. We therefore intentionally steered away from quick online surveys. This allowed us to build rapport with AI practitioners and get a better sense of what it’s like to operate in this space.

When the discussion turned to how the practitioners got access to AI software and hardware, their shoulders would slouch and they would let out a long sigh

The questionnaire we used was developed in collaboration with one of the GoC’s top subject matter experts in AI. We built it on Google Forms, a free, simple-to-use platform that allowed us to record responses live during the interviews. I accessed the survey online through a laptop connected to a local Wi-Fi hotspot hosted from my phone.

The questionnaire consisted of seven sections:

  1. The team: Explores the education, skills and experiences that have allowed the team to become AI practitioners.
  2. Why use AI?: Collects the various use cases for AI, such as supporting administrative decisions.
  3. Tools of the trade: Maps out the specific AI techniques used by the team, such as reinforcement learning with neural networks.
  4. Software and hardware: Explores the various software and hardware configurations and the needs of AI practitioners.
  5. Procurement of AI services: Explores the experiences of AI practitioners with AI vendors.
  6. AI ecosystem: Explores the connectivity of AI practitioners inside and outside the GoC.
  7. Enablers and obstacles: Explores the organisational support perceived by AI practitioners.
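
For teams who want to reuse this structure, the seven sections above can be sketched as a simple data structure, mixing the categorical and open-ended question types the scan used. This is an illustrative sketch only, not the actual GoC questionnaire: the section names come from this article, but the sample questions and question types are invented placeholders.

```python
# Illustrative sketch of the AI scan questionnaire structure.
# Section names are from the article; sample questions are hypothetical.

QUESTION_TYPES = {"categorical", "open-ended"}

questionnaire = [
    ("The team", [("What is your educational background?", "open-ended")]),
    ("Why use AI?", [("Which use cases does AI support for you?", "open-ended")]),
    ("Tools of the trade", [("Which AI techniques do you use?", "categorical")]),
    ("Software and hardware", [("What is your software/hardware setup?", "open-ended")]),
    ("Procurement of AI services", [("Have you worked with AI vendors?", "categorical")]),
    ("AI ecosystem", [("Who do you collaborate with on AI?", "open-ended")]),
    ("Enablers and obstacles", [("How supported do you feel?", "open-ended")]),
]

def validate(sections):
    """Check that every question uses a recognised question type."""
    return all(qt in QUESTION_TYPES for _, questions in sections
               for _, qt in questions)

print(len(questionnaire), validate(questionnaire))
```

Keeping the questionnaire as data rather than prose makes it easy to adapt section by section for another organisation’s scan.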

To scope out and recruit participants, we targeted data science communities on the GoC’s internal message boards and forums, inviting teams of practitioners to interviews where they could share their stories and ultimately have them heard by deputies at the TF-PSI.

There was a healthy appetite among AI practitioners to share their experiences, and our interview callout quickly snowballed through their networks.

Pride and sighs

We had initially anticipated that the interviews would last about 20-30 minutes; however, we quickly discovered that many practitioners were keen to talk about their experiences, stretching our discussions to around 45 minutes.

The conversational nature of our interviews allowed us to collect responses that were a mixture of interesting, funny and disheartening anecdotes. What struck me most was the body language of my interviewees during our conversations.

When describing how they were using AI in their workplace, they were filled with excitement and pride. However, when the discussion turned to how they got access to AI software and hardware, their shoulders would slouch and they would let out a long sigh. This is where I would hear about their struggles to install free open-source software on their workstations.

You see, Information Technology (IT) services have endless approval processes that seem designed to dissuade employees from deviating from the status quo. On top of the forms required to justify the use of the software, IT shops often take upwards of six months to approve a request. Interestingly, IT teams can also be prisoners of their own rules.

How do you pay for free software?

One story that stuck with me was about an AI practitioner who had developed a strong relationship with his IT services counterpart, which resulted in a clever work-around to a strict rule. The AI practitioner was looking to access R, a free, open-source statistical software suite. Despite their willingness to grant this access, the IT team was unable to do so because of a rule requiring proof of purchase for all installed software.

You can see how difficult it is to submit a purchase receipt for something that is free. To get around this, the AI practitioner donated his own money to the R Foundation so he could provide a receipt to IT. This kind of resourcefulness and resilience quickly emerged as a defining characteristic of all the practitioners I interviewed.

I recall one analyst in particular who described the lack of support he was receiving from his organisation. This had driven him to pay out of his own pocket to develop an AI model on Amazon Web Services so he could demonstrate its value in his workplace. I constantly draw on examples like these when presenting the AI scan. They paint a clear picture of how our existing environment is not enabling the growth of AI in government.

After 25 interviews, the same stories kept surfacing; frankly, we could have ended the scan after the first handful. Once our findings were summarised into a presentation, we sent it back to all participants to get their feedback and ensure their stories were well reflected in the document. Since its completion, the AI scan has been presented in various forums across the GoC. This work continues to socialise the opportunities and challenges of AI and is elevating discussions around the best ways to support it. — Mathieu Audet.

(Photo credit: Death to the stock photo/JULIEN TSUJIMOTO)
