If you’re captivated by dystopian tales of technology gone awry, brace yourself for a reality check. It’s only a matter of time before someone asks whether we should unleash the full potential of Artificial Intelligence (AI), “whatever it takes”. The question isn’t whether AI has potential; it’s how we harness it safely and equitably.

The debate on artificial intelligence has reached new heights over the past few years, thanks largely to a famous language-model chatbot that has caused a stir not only among technology enthusiasts but also among rival tech companies racing to build tools and plugins that ‘modernize’ their own products. As is often the case when a new technology proliferates, legislative processes struggle to keep pace. Yet the constant expansion of AI-powered technologies has pushed governments worldwide to draft and implement laws to regulate AI.

Regulating what’s new

Over the past few years, the debut of AI language tools woven into everyday life, such as ChatGPT, Microsoft’s Bing Chat, and Google’s Bard, has thrust AI into policy discussions, revealing its game-changing capabilities alongside risks such as content manipulation and social inequality.

The OECD’s AI Principles have become a touchstone for global policy, informing the national AI strategies that more than 50 countries have adopted since 2017. These strategies aim to cultivate trust and drive progress by channelling investment into AI research and into infrastructure that broadens public access. Because AI’s impact is global, cross-border cooperation is essential, yet nations are at very different stages of policy development. The current emphasis is on converting these guiding principles into practical policies that tackle inclusivity, bias, and system transparency. Although AI-specific regulations are emerging, there is a pressing need for international harmonization.

To their credit, European lawmakers quickly seized the opportunity, albeit at the pace of the EU institutions, to accelerate what aims to be a comprehensive governance framework for AI: the AI Act. To the question “To AI or Not to AI”, the EU has given a clear answer. The AI Act builds on the EU’s Internal Market rules, striving for a balanced approach that incorporates established product-safety and consumer-protection principles, and it allocates compliance and oversight roles between the EU and its member states.

Outside the EU, like-minded partners are pursuing different approaches. The National AI Initiative in the US aims to cement the country’s position as a leader in AI while ensuring the technology is trustworthy, and the Blueprint for an AI Bill of Rights sets out five principles for safeguarding the American public in the age of AI. Canada’s anticipated AI and Data Act aims to protect citizens and promote responsible AI; in September 2023, the federal innovation ministry also introduced a Voluntary Code of Conduct under which Canadian companies commit to developing and managing generative AI systems responsibly until formal regulations are in place.

Back on this side of the Atlantic, the UK’s strategy, set out in its “Pro-innovation approach to AI regulation”, favours a streamlined, non-statutory framework that prioritizes collaboration and innovation and leverages existing regulatory bodies. Other countries, like Japan, lack AI-specific laws but promote “agile governance” through non-binding guidance, while South Korea is working on AI legislation.

The G7 approach 

While individual countries are making strides in AI regulation, international cooperation is also taking centre stage. In 2023, through the ‘Hiroshima AI Process’ established to create global guardrails for advanced AI systems, G7 leaders agreed on International Guiding Principles and a voluntary Code of Conduct for AI, building on the OECD’s principles and recommendations on the use of AI. These instruments aim to complement the EU’s legally binding AI Act, focusing on safety, trustworthiness, and responsible governance, and they cover risk mitigation, cybersecurity, and a labelling scheme for AI-generated content.

The approach to managing AI risks from inception to real-world deployment is encapsulated in 11 guiding principles. These prioritize transparency and rigorous internal and external testing to ensure that AI systems are safe, secure, and trustworthy. Collaboration is central, especially in information sharing and incident reporting. The principles also encourage ground-breaking research to safeguard society and enhance safety, and they champion international standards and robust data-protection measures. Covering a broad spectrum of risks, from cybersecurity to societal concerns, the principles are designed for ongoing refinement and invite active collaboration among stakeholders.

The G7’s guiding principles for governing AI are a significant step forward, but there is still a pressing need for a global Treaty on AI to achieve a universally accepted approach to the development and use of AI technologies. 

Considerations 

Whether we like it or not, AI is a cheap, invisible, and pervasive technology with countless use cases. When weighing the risks and rewards of generative AI, striking a balance is not just wise; it’s imperative. Such balance is fundamental when considering how AI can help tackle global challenges like climate change and migration. Managing these risks and benefits is a collective responsibility, and it requires agile, evolving regulatory frameworks.

As we work to harness AI’s transformative power, we must always remember that behind every algorithm is a human pulse. As we explore new frontiers of generative AI, it’s crucial to balance risks and rewards from both a technical and a human-centric perspective. The stakes are high, but the potential advantages are equally significant, spanning manufacturing, transportation, finance, and healthcare, and ranging from incremental improvements to transformative solutions for pressing global issues.

Developing trustworthy AI isn’t just an ideal—it’s a necessity. As we unlock its full potential, we’ll face tough choices. But with responsible practices and democratic systems as our guide, the rewards could be monumental. The question now is, are we ready to make those choices? 
