• Opinion
  • February 15, 2019

Is the world really ready for a global artificial intelligence framework?

Opinion: The UN wants to work on global tech governance — but the world may not be ready

This opinion piece was written by Jia Hao Chan.


One of the world’s most searched keywords, Artificial Intelligence (AI), made headlines at this year’s World Economic Forum (WEF) in Davos, Switzerland.

Naming the governance of emerging technologies as one of the UN’s top five priorities for 2019, Secretary-General António Guterres reiterated the need for the international community to understand the humanitarian impact of the use of AI.

At the same time, however, his acknowledgement that there is not even a minimum consensus on how to integrate AI into international legal regimes defined decades ago is a worrying sign amid the Fourth Industrial Revolution.


Already, facial recognition technologies are deployed globally across a wide spectrum of our everyday lives: from airport clearance, to entry into workplaces, and even to the dispensing of toilet paper.

Autonomous bots programmed to detect unusual web behaviour for cybersecurity purposes are also sharpening their assessment of individuals’ digital footprints through a combination of biometrics and facial authentication, rather than relying on internet protocol addresses alone.

In the long term, as machine learning algorithms are trained on ever larger volumes of data, many fear that AI systems will become less predictable and harder to explain in how they reach decisions. This is known as the “black box problem”: outputs emerge from an impenetrable “black box” that is fed by inputs.

Most experts therefore agree that a set of standard principles is needed to guide the design and programming of AI applications worldwide. These repeated calls, such as the one made by tech leaders at the International Joint Conference on Artificial Intelligence, have prompted rapid and positive steps globally.

The European Commission recently announced that it will launch its final set of guidelines by March 2019, an approach likely to complement the EU General Data Protection Regulation (GDPR) and to require human involvement in automated decision-making. This follows EU member states signing a Declaration of Cooperation on AI in April 2018 to formulate joint approaches to its ethical and legal challenges.

Similarly, in January 2019, policymakers from 36 OECD countries gathered at the Massachusetts Institute of Technology to debate and discuss recommendations for an OECD AI policy set to be released later this year.

At the recent WEF, Singapore’s Personal Data Protection Commission also released the city-state’s own Model AI Governance Framework to guide local private organisations in addressing ethical and governance issues in AI usage, built on the principles of being human-centric, explainable, transparent and fair.

France and Canada have jointly established an International Panel on Artificial Intelligence, seeking to co-lead the AI policymaking scene among G7 countries.


Tech companies, on the other hand, have joined consortia such as the Information Technology Industry Council and the Partnership on AI to set and share best practices, including values and ethics in AI usage, security and transparency, and approaches to collaboration between humans and AI systems.

Nonetheless, these efforts still appear to be far from a standardised global AI framework. And a number of outstanding factors could further hinder one from emerging.

First, within the present international AI landscape, the algorithms used to design AI applications have so far been inconsistent in their ethical, privacy and sectoral considerations.

The second factor challenging a global AI framework is that nations’ years of AI research and development have varied significantly in motivation, scope and strategy. For instance, China’s “A Next Generation Artificial Intelligence Development Plan”, published in July 2017, states that its AI applications will incorporate China’s 5G mobile communications, existing internet of things applications and data infrastructures.

The EU’s AI strategy, on the other hand, focuses on keeping AI usage consistent with the existing GDPR, while Germany plans to focus on integrating AI into its top exporting industries, which are more manufacturing-driven. There was no consensus among nations on the focus of AI regulation to begin with.

Third, drafting a global AI governance framework also requires greater transformative effort from specific industries and international organisations.

Take the emerging use of AI in the global financial industry. While countries like Singapore have released a set of principles to promote fairness, ethics, accountability and transparency, international financial regulators have yet to respond accordingly. The global financial industry continues to wait for AI usage policies to be incorporated into existing regulations such as the US Bank Secrecy Act and the standards of the Financial Action Task Force.

Finally, an increasing partition in the global macro-technological environment complicates international cooperation and could also hinder progress towards a global treaty.

In 2018, a number of countries, including the UK, Japan, Australia, New Zealand and Canada, called for bans on Chinese corporation Huawei’s 5G equipment, citing national security concerns. Yet emerging economies like Brazil, India and South Africa continue to import Huawei products and other Chinese technologies.


In November 2018, the US Department of Commerce’s Bureau of Industry and Security called for public consultation on its proposal to establish sweeping export controls over a range of “emerging and foundational technologies”, including AI and robotics, likely targeted at China.

At the latest WEF, a joint statement by over 70 countries to commence World Trade Organisation talks on regulating e-commerce and related aspects of trade failed to win support from emerging economies like India, Indonesia and South Africa, again reinforcing the divisions in the global digital economy.

While Guterres envisions a 2019 in which the international community comes together to work on global tech governance, particularly on AI, the world, unfortunately, may not yet be ready. — Jia Hao Chan

(Picture credit: Unsplash)
