What a Framework for International Cooperation on AI Ethics Could Look Like

The US Cyber Command announced the establishment of an artificial intelligence task force to deliver AI capabilities for operations, enable AI adoption, and counter AI threats. The new task force is part of a growing international trend to integrate AI capabilities into national security paradigms, and to view AI as a potential tool for advancing both existing and new types of threats. Still, efforts to address the concerns raised as AI becomes a fundamental part of automation and content creation in defense and security institutions are at an early stage. So far, individual countries are pursuing their own paths to regulating AI in the military and in government agencies. Public controversy over the ethics of using AI is at its peak, yet efforts to create a coordinated international approach to analyzing and responding to these issues remain nascent.

The UK government took an early lead by establishing the AI Opportunity Forum. Its first meeting took place in February 2024 to discuss how AI could be incorporated into all types of organizations, with discussions centering on safety and public trust in AI, building relevant skills in the workforce, and developing ties with academia. It was one of the first initiatives of its kind to advance a global discussion of the role of international institutions in promoting AI ethics. Follow-up meetings were scheduled for April and June. The forum convened just as ChatGPT, a form of generative AI, had pushed the role of AI in contemporary society to the forefront of public debate, and the ensuing controversy soon turned the discourse over this latest wave of innovation toward the potential for existential threats.

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed a short Statement on AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The organization behind the statement is the Center for AI Safety, which focuses on research, infrastructure, and new pathways for AI safety, and provides compute clusters to researchers. Behind the public alarm lies confusion over AI's capabilities, current and future. Coverage of the breakthroughs in generative AI cemented the view of AI as able to learn, evolve, and adapt to new information and challenges.

There is a widespread but incorrect perspective that, taken to its logical conclusion, artificial intelligence could gain a form of sentience or become a substitute for human creativity, emotion, and disruptive analytical thinking. Indeed, the history of artificial intelligence records a deliberate effort by developers to pursue technological breakthroughs closely resembling human neural pathways. The proof of concept for Alan Turing's idea that machines could revolutionize problem solving emerged with a program called the Logic Theorist, presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence. Developed by Allen Newell, Cliff Shaw, and Herbert Simon with funding from the RAND Corporation, the program was designed to imitate human problem-solving skills and is considered the earliest AI prototype.

By the 1970s, machine learning had improved enough to perform a variety of simple tasks, while the US government was most interested in transcribing and translating human language and in high-throughput data processing. Yet even by 2024, AI is only beginning to catch up with the aspirational ideas of its early development. YouTube and other websites now automatically transcribe English-language video and audio recordings, and data processing has improved by leaps and bounds. Still, at the end of the day, AI is a predictive algorithm that is only as good as the data supplied by human programmers. Even self-learning capabilities have limits: in many cases, generative AI models actually get worse with additional training. Even the most sophisticated programs hallucinate, often with bizarre results, or outright fabricate answers, which makes them unreliable for serious research.

At the current stage, these AI algorithms are at best useful for first drafts of research or writing but still require significant human input to make the final product both accurate and distinguishable from similarly generic summaries. Other problems include theft of intellectual property: the algorithms are not picky about what ends up incorporated into the final product. The best use of AI at this stage is in automating existing processes and enhancing creative workflows, which frees web designers and other creatives to pursue more advanced tasks.

Far from achieving sentience, AI is still years away from fulfilling its potential utility, much less competing with humans for most jobs. Still, the debate about its proper role in society rages on. The US government drafted a framework that was adopted by the UN General Assembly, making it the first international collaborative effort to wrestle with AI ethics for the foreseeable future. The resolution outlines AI's potential to accelerate progress toward the UN's 17 Sustainable Development Goals, and discusses cooperative steps member states can take jointly to maximize the benefits and address the ethical concerns. These include working to ensure that the use of AI complies with international human rights law and is not used to facilitate abuses. The resolution also calls for closing the digital divide, recognizing that developing or smaller countries may have less access to technological innovation, both between and within countries, and that member states and stakeholders should cooperate to overcome these obstacles. The resolution does not, however, outline specific measures; rather, it sets out general categories for later conversations. While the framework calls for taking safety and security into consideration, it neither identifies particular concerns nor outlines actionable steps toward resolving them.

The European Union's approach to AI appears less reactionary and ad hoc and more proactive, going back to the presentation of its AI Strategy in 2021, before US-based generative AI really took off. The Strategy includes a communication on fostering a European approach to AI, a review of the coordinated plan on artificial intelligence, a regulatory framework proposal on AI, and an impact assessment. By January 2024, the EU Commission had advanced an AI Package focused on boosting start-ups and SMEs, aiming to identify trustworthy projects and integrate them into the EU's economic ecosystem while balancing innovation and security. The regulatory framework takes a risk-based approach, separating possible risks into four categories and providing rules for: prohibiting AI practices that pose unacceptable risks; determining a list of high-risk applications; setting clear requirements for AI systems in high-risk applications; defining specific obligations for deployers and providers of high-risk AI applications; requiring a conformity assessment before a given AI system is put into service or placed on the market; putting enforcement in place after a given AI system is placed on the market; and establishing a governance structure at the European and national levels.

While the disparity between the thoughtful, systematic EU approach and the experimental Wild West of US-based AI may appear jarring to outside observers, it should be noted that the overwhelming majority of current breakthroughs in generative AI come out of the US and overwhelmingly affect English-language markets and applications. Most generative AI is built on and skewed toward English-language models, which means the EU and other markets have time to strategize and prepare the ground for sound, mature regulation. The US, by contrast, is where the innovation actually happens, precisely because of its more open and less regulated business environment; the law there often lags behind innovation, but that innovation is key to the US emerging as a leader in groundbreaking technologies.

Moreover, given the current limitations of AI, the risks are likewise minimal for the time being. In the future, however, AI, like any tool, could be used for good or for evil. One example of far-reaching AI implications is the use of voice cloning and deepfakes in sophisticated phishing operations by cybercriminals or hackers working for intelligence agencies. Yet the same methods could be used by countries as a national security measure to obtain vital information about adversarial operations and ventures, or to infiltrate and disrupt plots and conspiracies. The UN resolution is thus far the first and biggest step toward bringing so many stakeholders to the table to discuss AI's possibilities and best practices. The debate over ethics is quite heated even within individual countries.

Reconciling concerns, risks, interests, objectives, and paradigms will be quite difficult, particularly in the current polarized climate, with the Great Power Competition ramping up and several major international conflicts and civil wars underway simultaneously in various parts of the world. AI already has important applications in precision-guided weapons, surveillance drones, chatbots, and automated driving; the potentially dehumanizing aspect of AI in conflict raises ethical questions that must be weighed against the benefits of automation, which may save lives, improve operational outcomes, and provide important automated support in combat and humanitarian settings. At the end of the day, the ethics of AI will continue to evolve along with other war-related rules and regulations, but the human element in decision-making, and the concerns over politics and policy, will not be upended by evolving technical advancements.