This opinion piece was written by Syed Muntasir Mamun, a director in Bangladesh’s Ministry of Foreign Affairs and a graduate student at Oxford University’s Saïd Business School. This piece also appears in our Digital Government newsfeed.
A recent Apolitical web conference on artificial intelligence, hosted by Robert Madelin, the EU’s former senior advisor for innovation, gave new life to questions which have been tormenting the field for the last 70 years. What decisions are to be made for and by AI? How should these decisions be delivered? And most importantly, why should we use AI in the first place?
AI promises many possibilities. It could lead us to reimagine what counts as “work” and “who” carries it out. It could change how decisions are taken — ranging from the very basic, such as food and sleep, to the complex, such as our choice of partners or parliamentary candidates. Understanding what counts not only as legal but also as ethical and moral in the field of artificial intelligence is a question which confounds both the state and the market.
Two questions shape the behaviour of the state in matters related to AI. The first is how trustworthiness and innovation-friendliness can be maintained at the same time. The second concerns measures to broaden public participation in the design of AI.
Essentially, the question here is one of awareness and understanding. Do we understand what AI actually means? Do we know what it could accomplish? Or, for that matter, who this “us” would be? Would the world become more divided, or would AI move us closer to peace and harmony? Would there be space for dissent and diversity against the backdrop of what would be considered “efficient”?
One possible, though time-consuming, option is to tweak academic curricula to include more about artificial intelligence systems so that children grow up AI-aware in the first place. Incorporating an awareness of technology in basic education is the first part of the answer.
The second part involves creating AI with a “human face” — an AI ecosystem which is friendly to the human individual. Mainstreaming artificial intelligence in the human thought process requires AI-enabled ecosystems that support human endeavours and elevate socio-political, cultural and economic values.
For the regulator, it is important to ensure human oversight is encoded into the entire AI ecosystem. While this could build human inefficiency into the coding environment, it would shore up the integrity and credibility of the policy.
To offset the drag, steps such as staged regulation for the deployment of an AI ecosystem would be helpful. Setting reasonable standards and incorporating multi-stakeholder participation in resolving conflicts at the inception level are also crucial.
It is worthwhile to understand the dynamics of trust between the various layers of society and the state. Incorporating a “regulatory algorithm” which can successfully reconcile the tension between capital interests — represented by the private sector — and the demands for equity and justice — represented by the public sector — is an essential foundation for the health of the whole system. Utilising a distributed blockchain environment to ensure transparency and broader participation is an option worth considering.
The critical element here for the state is to ensure a policy environment which is ethically sound, business-friendly and system-wide.
Elevating the public to progressively higher levels of skill and expertise could allay the fear of job losses from AI deployment, and is needed to sponsor a better and faster adoption of technology in the artificial intelligence space.
AI could — but likely shouldn’t — be used for tactical measures like reducing operational costs by cutting jobs. Rather, AI should be utilised to elevate the skill level of the population, so people can make better use of the resources freed up by its deployment.
In a human world filled with strife, it is also important to recognise that the root algorithms only pick up the biases which their human progenitors embody. Hence, to ensure equitable and just access to AI environments, special care should be taken to monitor and correct biases.
To ensure healthy competition, access to algorithm matrices and data vertices needs to be given to the public agencies entrusted with oversight. The essential element supporting such an initiative rests within the domain of education, articulation and exposure.
(Picture credit: Conor McCabe Photography)
Syed Muntasir Mamun