After immigration levels plummeted during the first year of the COVID-19 pandemic, Canada plans to welcome 400,000 newcomers by the end of 2021 and 411,000 in 2022.

These figures arrived in tandem with a spring announcement of new funding for the artificial intelligence-based “GeoMatch” tool, which is designed to optimize immigrant settlement patterns for improved financial outcomes. At the same time, the government is accelerating its testing of data analytics for sorting and managing temporary resident visa applications.

Both of these developments suggest AI-rooted automation could play a role in supporting immigration targets and ensuring the success of new immigrants in Canada’s post-pandemic economy.

“In the immigration and asylum process, AI may lead to faster decisions, shorten delays and backlogs, or even reduce the ‘overconfidence’ of a decision-maker,” says Jona Zyfi, doctoral student at the Centre for Criminology & Sociolegal Studies at the University of Toronto. “But it may also exacerbate access-to-justice issues and present new barriers altogether due to its unpredictability or what is referred to as its ‘black box’ nature.”

The AI market continues to grow and is estimated to be worth around $37 billion by 2025. Canada is looking to play an active part in the industry.

Following the release of the federal budget in April, Canada’s Minister of Finance, Chrystia Freeland, announced that Canada would commit $185 million to support the commercialization of AI research. This investment comes alongside plans to increase funding for the Pan-Canadian Artificial Intelligence Strategy to $443.8 million over 10 years.

“Automation has become normalized in so many parts of daily life, even in the Canadian border experience at airports, that the acceleration and amplification of these technologies in mobility and border management seems somewhat inevitable,” says Benjamin Muller, Professor of Political Sociology and Critical Security at Western University.

“The discourses of efficiency and efficacy are increasingly so great that ethical-, political- and privacy-driven scepticism and hesitation is pushed aside in favour of rapid adoption,” says Muller.

A 2018 report by the University of Toronto’s Citizen Lab provided a detailed analysis of how testing of AI in the form of predictive analytics and machine learning is beginning to encroach into “new contexts that pose ever-higher risks to human rights and civil liberties,” including the immigration sector.

The authors lay out a number of concerns surrounding the use of AI and automation in Canada’s immigration system, beginning with the clandestine nature of AI testing by the government.

There are also questions about where and how the data is stored and protected.

“Where the subjects of surveillance are non-Canadian persons outside of Canada, no meaningful safeguards to protect their right to privacy exist,” according to the report. Asylum claimants awaiting refugee status are particularly vulnerable in this regard.

The biggest concerns, however, pertain to the consequences of AI technology for due process, procedural fairness, and human rights more broadly.

These issues were brought into sharp focus with evidence of plans for automation in sorting “humanitarian and compassionate” asylum claim applications and pre-removal risk assessments, processes in which asylum claimants are pursuing their final avenue of appeal to stay in the country.

The challenges listed in the report must be considered before the development of further AI initiatives by the government. However, they should be examined alongside the impacts of slower application processing for newcomers and refugees.

“Long wait-times for many hopeful immigrants can be at the very least inconvenient and at most dangerous, in the case of many asylum seekers,” says Brennan Hoban, researcher at the Institute for Science and Technology Policy at George Washington University.

Despite the ambitious immigration levels set for 2021–22, there is still a sizeable backlog of visa applications and refugee claims. Wait times for refugee claims, for instance, were projected at 24 months, with a further 12 months for appeals.

Given these circumstances, it’s important to explore whether AI could present a viable solution for improving efficiencies in an ethical way, both in application and claims processing, as well as in newcomer outcomes.

In Canada, there are currently two main ways that AI-based tools are being introduced into the immigration system. The first is automation and advanced data analytics to assist in sorting applications. The second, more recent approach explores the use of predictive analytics to recommend where newcomers should live upon arrival in Canada.

Both tools use data and machine learning capabilities to build models based on historic outcomes and are designed to support human decision-making. The GeoMatch tool, for instance, considers factors such as previous immigrants’ work history and education to recommend locations where future newcomers are likely to find economic success.
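
To make this concrete, here is a minimal sketch of how a GeoMatch-style recommender could work, using synthetic data and invented feature names (the actual model and inputs behind GeoMatch are not public): learn from the recorded outcomes of past arrivals, then rank candidate regions by the predicted likelihood of economic success for a new profile.

```python
# A toy GeoMatch-style recommender. All data, features, and the
# similarity-weighted model below are illustrative assumptions,
# not the real tool's methodology.
from dataclasses import dataclass

@dataclass
class Arrival:
    region: str
    education_years: int
    work_experience: int      # years of experience in field
    employed_after_1yr: bool  # historical outcome we learn from

HISTORY = [
    Arrival("Halifax", 16, 5, True),
    Arrival("Halifax", 12, 2, False),
    Arrival("Winnipeg", 16, 4, True),
    Arrival("Winnipeg", 14, 6, True),
    Arrival("Toronto", 16, 5, False),
    Arrival("Toronto", 18, 8, True),
]

def similarity(past: Arrival, edu: int, exp: int) -> float:
    # Inverse-distance weight in a toy two-feature space.
    return 1.0 / (1.0 + abs(past.education_years - edu) + abs(past.work_experience - exp))

def predicted_success(region: str, edu: int, exp: int) -> float:
    # Weighted average of historical outcomes in this region,
    # weighting past arrivals by how similar their profile is.
    records = [a for a in HISTORY if a.region == region]
    weights = [similarity(a, edu, exp) for a in records]
    outcomes = [float(a.employed_after_1yr) for a in records]
    return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)

def recommend(edu: int, exp: int) -> list[tuple[str, float]]:
    regions = sorted({a.region for a in HISTORY})
    ranked = [(r, predicted_success(r, edu, exp)) for r in regions]
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)

print(recommend(edu=16, exp=5))  # regions ranked by predicted employment odds
```

Even in this toy form, the core design choice is visible: the system ranks options rather than making a placement decision, leaving the final call to humans.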

“AI and machine learning tools are currently best used to support human decision-making rather than replace it,” says Dionne Aleman, Associate Professor of Industrial Engineering at the University of Toronto.

This brings us to the first possible strategy for mitigating the risks of AI: using it solely as a tool to provide recommendations that are then submitted for human evaluation.

One challenge that emerges here is the possibility of the human ‘supervisor’ simply placing a stamp of approval on an AI-based decision. Humans can become over-reliant on the system’s accuracy, and algorithmic decisions may be opaque to human oversight.

For human supervision to be a suitable solution for mitigating AI risks, a standardized process for reviewing AI decisions would have to be developed alongside the tool itself.
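
One way to build such a process into the tool itself, sketched below, is to route low-confidence AI recommendations to full human review and to blind-audit a random sample of the rest, so that rubber-stamping becomes measurable. The model stub, threshold, audit rate, and record fields are all illustrative assumptions.

```python
# A minimal human-in-the-loop review gate; every value here is an
# assumption for illustration, not an actual IRCC process.
import json
import random
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.85  # below this, the AI output is advisory only
BLIND_AUDIT_RATE = 0.10      # fraction of high-confidence cases re-reviewed

def model_recommendation(application: dict) -> tuple[str, float]:
    # Stand-in for a real model: returns (recommendation, confidence).
    return "approve", random.uniform(0.5, 1.0)

def route_application(application: dict) -> dict:
    rec, conf = model_recommendation(application)
    record = {
        "application_id": application["id"],
        "ai_recommendation": rec,
        "ai_confidence": round(conf, 3),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if conf < CONFIDENCE_THRESHOLD:
        # Officer must record an independent decision with reasons.
        record["route"] = "full_human_review"
    else:
        # Sign-off cases are randomly sampled for blind re-review,
        # which lets auditors measure over-reliance on the model.
        record["route"] = "human_sign_off"
        record["blind_audit"] = random.random() < BLIND_AUDIT_RATE
    print(json.dumps(record))  # append-only audit trail, one line per case
    return record

route_application({"id": "A-1024"})
```

The point of logging every recommendation alongside the human route taken is that over-reliance stops being an anecdote and becomes something reviewers can quantify.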

“There needs to be oversight and accountability to ensure that the data going in accurately represents the population that will be served, and that the engineers have consulted with various interest groups to understand the potential biases their system could emit,” says Hoban.

Obtaining good-quality data can also prove challenging. The accuracy of an AI model, that is, how well its predictions match reality for a given set of data, depends on the quality of the data it was trained on. In the public sector, data is notoriously incomplete, inaccurate, or stored in unusable formats.

This is particularly consequential if predictive analytics were to be used to support decision-making in a refugee claim. For instance, if a case hinged on whether the police force in a particular country was able to protect the individual in question, there would need to be data on appeals for protection from the same police force along with information on outcomes.

According to the authors of the new report “Artificial intelligence for a reduction of false denials in refugee claims,” this sort of data is incredibly scarce. However, this in itself presents an opportunity for AI, particularly in the case of refugee determination.

“With little data available, the machine will likely make inaccurate predictions. In contrast to humans, however, it is possible for the machine to have an explicit measure of the inaccuracy of the predictions,” says the report.

In other words, as statistical tools, AI algorithms can explicitly quantify the uncertainty in a given claim and make that uncertainty visible to decision-makers.
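
The sketch below illustrates the idea with a deliberately tiny model: an ensemble of classifiers trained on bootstrap resamples, where disagreement between ensemble members serves as the explicit uncertainty measure. The data and the nearest-neighbour model are synthetic assumptions; the report does not prescribe any particular technique.

```python
# Bootstrap-ensemble disagreement as an explicit uncertainty measure.
# Synthetic data and a 1-nearest-neighbour "model" keep the sketch tiny.
import random
random.seed(0)

def make_point() -> tuple[list[float], int]:
    x = [random.uniform(0, 1) for _ in range(2)]
    return x, int(x[0] + x[1] > 1.0)   # label follows a simple hidden rule

TRAIN = [make_point() for _ in range(200)]

def nearest_label(train, x) -> int:
    # Predict the label of the closest training point.
    closest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return closest[1]

def predict_with_uncertainty(x, n_models: int = 25) -> tuple[str, float]:
    # Each ensemble member sees a different bootstrap resample; how much
    # the members disagree on this input is the reported uncertainty.
    votes = [nearest_label([random.choice(TRAIN) for _ in TRAIN], x)
             for _ in range(n_models)]
    p = sum(votes) / len(votes)
    uncertainty = 1.0 - abs(2 * p - 1)  # 0 = unanimous, 1 = evenly split
    return ("positive" if p >= 0.5 else "negative"), uncertainty

label, u = predict_with_uncertainty([0.52, 0.49])   # a near-boundary case
print(label, round(u, 2))  # high uncertainty could flag the case for a human
```

A decision-support system could surface that uncertainty figure alongside every recommendation, rather than presenting a bare yes or no.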

That being said, there is an important caveat to the authors’ proposition: AI’s advantages would only outweigh its disadvantages if there were meaningful revisions to the legal frameworks, allowing doubt to be resolved in the claimant’s favour, which is not currently the case.

While this solution would require revisions to well-established legal frameworks, it is an important opportunity to consider, as it shows the full range of possibilities that AI offers in supporting fair decision-making alongside improved efficiencies.

It also underscores that applying the latest AI tools to the immigration and refugee systems would, and should, require rethinking the legal frameworks governing these systems.

“At the end of the day, technology and AI are just another tool and without a responsible guiding framework and regulation, it becomes useless and even detrimental,” says Zyfi.

For this reason, complete transparency in AI implementation is another essential safeguard when it comes to mitigating risks.

After the publication of The Citizen Lab’s report in 2018, the Canadian government released its “Directive on Automated Decision Making,” designed to ensure transparency and accountability in AI-based decisions. It includes the Algorithmic Impact Assessment (AIA) tool, a sort of standardized questionnaire used to conduct risk assessments for AI-based projects.
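
In spirit, the AIA works like a weighted questionnaire that maps answers about a system to an impact level with corresponding obligations. The toy version below conveys the mechanism only; the questions, weights, and level cut-offs are invented and do not reproduce the actual AIA instrument.

```python
# A toy questionnaire-based risk scorer in the spirit of the AIA.
# Questions, weights, and level cut-offs are invented for illustration.
QUESTIONS = {
    "decisions_affect_rights_or_legal_status": 4,
    "system_fully_automates_the_decision": 3,
    "training_data_includes_personal_information": 2,
    "outputs_are_explainable_to_affected_persons": -2,  # mitigation lowers risk
}

def risk_score(answers: dict) -> int:
    # Sum the weights of every question answered "yes".
    return sum(w for q, w in QUESTIONS.items() if answers.get(q))

def impact_level(score: int) -> str:
    if score >= 7:
        return "highest impact: decisions need a qualified human decision-maker"
    if score >= 4:
        return "high impact: decisions need documented human review"
    return "lower impact: notice and monitoring requirements apply"

answers = {
    "decisions_affect_rights_or_legal_status": True,
    "system_fully_automates_the_decision": True,
    "training_data_includes_personal_information": True,
    "outputs_are_explainable_to_affected_persons": False,
}
score = risk_score(answers)
print(score, "->", impact_level(score))  # 9 -> highest impact
```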

While these initiatives are promising, full transparency must be reinforced by timely publication of all plans, pilots, and active initiatives involving AI, as well as the actual code pertaining to applicable algorithms.

“It would be important to be completely transparent about every decision made using AI,” says M.V. Ramana, professor at the University of British Columbia’s School of Public Policy and Global Affairs.

“If AI or automation are to be used, I would like the entire code to be made public so that independent software programmers and ethicists can see how these programs make their selection,” says Ramana.

This would enable peer review by the academic and scientific communities, and create additional safeguards for those impacted by the decisions.

As of now, there is still a long way to go when it comes to the ethical implementation of AI in Canada’s immigration system. However, if any of the benefits of AI are to be realized, it will require full transparency from the government alongside regular consultations with academic communities and groups directly impacted by AI decision-making.

It would also demand further research into ways of mitigating human over-reliance on AI and constructing regulatory frameworks that firmly comply with fundamental human rights.

Mariya holds an MA in Immigration and Settlement Studies from Ryerson University. Her thesis explored the numerous challenges facing asylum seekers who experience protracted wait times in Canada’s refugee determination system. She has also worked as a policy analyst for Immigration, Refugees and Citizenship Canada.

Photo Credit: Adam Scotti, Prime Minister’s Office.
