4 AI Governance & Policy Trends

Why Governance & Policy within Artificial Intelligence?

Since the beginning of the 21st century, several scandals have occurred in the world of AI, the most notable being Cambridge Analytica: the massive harvesting and use of the personal data of millions of Facebook users without their consent. These incidents had serious consequences for public opinion, leading to increased apprehension towards AI.

At AI Time Journal we have been discussing Governance & Policy in AI for quite a while now, and we have seen how AI can be beneficial to humanity. That is why it is of the utmost importance to clarify in which ways, and within what limits, AI is helping all of us, whilst curbing the potential and undesired side effects that could affect the world in its entirety, as happened with Cambridge Analytica.

Considering that none of us wants to see these sorts of AI issues again, a joint effort on Governance & Policy (whether you are an AI expert or not) is much needed.

But what exactly do we mean when we talk about Governance & Policy in AI?

Generally, the term Governance refers to all the processes and stages involved in administering a state or organisation; it is how a society or an organisation manages decisions. Therefore, when we talk about Governance in AI, we refer to the means used to direct and guide AI, so that policy frameworks, practices, and outcomes are researched meticulously and implemented fairly.
As far as Policy in AI is concerned, Tim Dutton, Founder and Editor-in-Chief of Politics+AI, defines it as “those public policies that maximize the benefits of AI, while minimizing its potential costs and risks”.

Hence, the purpose of this article is to shed light on the current trends in the Governance & Policy aspects of AI, to better understand recent advancements and tackle the existing challenges.

To achieve this goal, we surveyed prominent experts in the field and asked them about the biggest recent advancements and the major current challenges in Governance & Policy within AI.

Incentive

One of the biggest challenges in Governance & Policy within AI is incentive. As Abishur Prakash, Geopolitical Futurist and Author of Go.AI (Geopolitics of Artificial Intelligence) and The Age of Killer Robots, put it, “yesterday, as nations became globalized, they had an incentive to cooperate and build bridges except, now, with technology, they no longer need to operate this way; they can take steps on their own, with technology, that they could not have taken before”. Indeed, he continues, “any institution or organization that is trying to formulate a ‘global’ policy for AI is going to run into this problem of convincing nations to work together”.

Recently, things have been getting better. In fact, George Firican, Founder of Lights on Data, confirms that “work has started on [Governance & Policy] by various countries; there are a few hundreds of such governance initiatives, guidance and regulations, and even infrastructure and funding that have started to be made available; we’ve even seen an international movement/cooperation on this”. The major challenge is that we have yet to reach a global consensus.

Growing awareness

Nonetheless, one huge advancement in this area of AI is the “growing awareness of AI and the social outcomes of it” as underlined by Tiago Cardoso, AI Product Manager at Nuxeo.

As Charlie Craine, CTO and Chief Data Officer at Meister Media Worldwide, suggested, “the one thing that I’ve seen advancing at lightning speed is AI Ethics; and for good reason, machine learning is advancing at a rapid pace and there aren’t a lot of rules around it”. This idea was widely shared by other experts.

In fact, Elina Noor, Director, Political-Security Affairs, and Deputy Director at Asia Society Policy Institute, noted that “in the last few years, there has been increasing awareness of the need for greater transparency and accountability vis-a-vis AI algorithms. How is data being collected? What kinds of data sets are being compiled? How representative are data sets? Is bias accounted for or not? These are just a few questions that have to be clarified if AI is to be a positive force for the communities it serves”. 

In this sense, a significant milestone representing an ongoing and positive transformation of AI is the growth of low-code/no-code development. “These systems generally have guardrails built in, which helps provide for more responsible AI”, as suggested by Tom Taulli, Speaker, Start-up Advisor and Author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

Furthermore, according to Stephan Jou, Interset CTO at Micro Focus, a “common vocabulary, understanding and definitions on what it means for AI to be ethical and responsible, how to enforce, and how to implement, are all prerequisites to making progress in what began as a very fuzzy, ill-defined area”.

Lastly, a major concern connected to this ‘growing awareness’ aspect of AI is “how a country’s AI policies could impact its neighboring countries or other citizens all over the world”, as noted by George Firican.

Trust is the foundation

Trust is a vital aspect of the further development of Governance & Policy in AI; it is, in fact, one of its founding pillars. At the same time, “obtaining trust and then maintaining it is close to impossible without the proper policies and governance in place”, as suggested by George Firican. Indeed, he reckons that “for AI to be positively transformative, there needs to be trust”. That trust is still lacking, and it must encompass both trust between nations and the trust that society places in AI.

Abishur Prakash pinpoints this lack of trust between countries, affirming that “the UN’s attempts to build rules for AI weapons have fallen short as nations refuse to trust one another”.

There also needs to be trust within society, and in how it perceives AI companies. Tiago Cardoso asserts that “interpretability and explainability continue to be barriers for the application of some ML fields, not only because of transparency concerns but also trust; providing explanations can indeed make the AI outputs extremely more efficient in the real world”. Stephan Jou is of the same opinion, stating that “it is difficult for people to trust an AI system when its decisions cannot be explained; AI needs to earn the trust of society for its full potential to be reached”.

An interesting conundrum related to the trust that society should place in AI is the one posed by Charlie Craine: “do we trust that these companies are doing the right thing or do we need a governing body?”

Application of AI ethics at every level

In the past few years, creating and developing AI apps has become more accessible and straightforward, giving non-experts the chance to exploit these new advancements and further increasing the need for proper Governance & Policy measures.

Indeed, Tiago Cardoso observes that “with the accelerating democratization of AI and ML tools, today [it] is very accessible to develop and build AI apps; this expands the frontier of use cases and applied AI and exposes the general public to it”. Suffice it to say that, still according to Tiago, this leads to “an accelerating sense of the impacts that real-world AI deployment can have and produces heuristics on governance and the needed policies to be developed”.

That said, as claimed by Matthew Emerick, Founder of Cross Trained Mind, the biggest advancement as far as Governance & Policy is concerned is the “application of privacy laws and ethical standards to the data used in AI applications”, with the biggest challenge being to “have this in place at the lowest levels of the applications”.

A different yet really interesting perspective comes from Charlie Craine, who admits that he is not much of a policy type and would rather have AI and ML engineers and practitioners “have AI Ethics credits and ongoing training; they need to continue this as anyone in medical or insurance or other industries do, because programmers and tech companies should not get a free pass”.
