
The Global Regulatory Landscape for AI

Guiding innovation and impact

In recent years, AI has evolved from a primarily academic pursuit into a transformative technology with far-reaching practical applications. This rapid shift has brought issues like model governance, safety, and risk to the forefront of the AI field, making them critical considerations for researchers, developers, and policymakers alike.

Common narratives about AI paint a picture of a utopian future where intelligent machines solve humanity’s most pressing problems, from climate change to disease. A competing narrative portrays AI as an existential threat to humanity, focusing on the potential for AI to develop superintelligence that surpasses human capabilities and behaves in uncontrolled or malevolent ways. While it is important to acknowledge the potential risks of AI, a purely pessimistic narrative can be counterproductive.

A balanced perspective on AI acknowledges its potential benefits alongside its challenges. It recognizes AI’s transformative power while emphasizing the need for responsible development and deployment. To achieve this, five key areas are crucial: ethical considerations, international cooperation, education and awareness, regulation and governance, and investment in academic research. While each of these elements is essential, they must work together. A top-down approach, centered around governance and regulation, has been instrumental in guiding the evolving landscape of AI.


AI Regulations Around the World

China has been at the forefront of AI regulation, recognizing early on the need for a comprehensive strategy. The country published the “New Generation Artificial Intelligence Development Plan” in 2017, outlining its ambitious goals for AI development and deployment by 2030. China’s regulatory approach is characterized by a strong emphasis on national security and social stability.

While China has implemented stringent AI regulations, including rules governing generative AI chatbots and the internet data used to train them, these measures have been criticized for limiting innovation and hindering the development of AI applications. The government’s control over the technology landscape has also raised concerns about the potential for censorship and surveillance.

Unlike China, the United States has not yet established a comprehensive federal law governing AI. Instead, AI regulation in the US is a patchwork of guidelines, frameworks, and sector-specific regulations developed by various government agencies and industry groups. This approach has been criticized for its lack of coherence and potential for regulatory gaps.


Despite the absence of a unified federal law, the US has made significant strides in AI frameworks. For example, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, a voluntary, flexible, and non-sector-specific approach to managing risks associated with AI systems. It is designed to help organizations of all sizes and in various industries assess, mitigate, and manage AI-related risks. Additionally, the Federal Trade Commission (FTC) has issued guidance on AI and algorithms that outlines its expectations for businesses using these technologies, including the need to consider fairness, accuracy, and transparency, and addresses potential concerns about bias and discrimination in AI systems.


The European Union has taken a distinctive approach to AI regulation by adopting a harmonized framework that applies to all EU member states. The AI Act, which came into force in 2024, aims to ensure that AI systems deployed in the EU comply with fundamental rights and high safety standards. The Act classifies AI systems into four risk categories, ranging from minimal to unacceptable, and imposes corresponding regulatory requirements.

One of the most notable features of the AI Act is its stringent penalties for non-compliance. Companies that violate the Act could face fines of up to €35 million or 7% of their global annual turnover, whichever is higher. This has prompted businesses to prioritize compliance with the Act’s provisions.

Global Cooperation on Safe AI

The 2024 AI Seoul Summit in South Korea marked a significant milestone in global AI safety cooperation. Sixteen leading AI companies signed the “Frontier AI Safety Commitments,” pledging to publish responsible scaling policies (RSPs). Additionally, 27 countries signed a statement supporting global AI safety cooperation.

France’s decision to host the next AI Safety Summit in November 2024 continues this worldwide collaboration on AI safety. France has emerged as a significant player in the global AI landscape: according to the 2024 AI Index report by Stanford’s Institute for Human-Centered AI, it ranks third worldwide in producing notable AI models, trailing only the United States and China. This underscores France’s strength in AI research, development, and innovation.


France has actively sought to shape the global governance of AI. Its partnership with China, as evidenced by the joint declaration on global AI governance, highlights France’s ability to collaborate with major AI powers while maintaining its own independent perspective. France’s decision to host its own AI Safety Summit in November 2024, later renamed the “AI Action Summit,” demonstrates its continued commitment to addressing the ethical and societal implications of AI. However, the shift in focus from “safety” to “action” raises questions about the extent to which safety will be prioritized at the summit. Additionally, the postponement of the summit from November 2024 to February 2025 indicates potential challenges or delays in its organization.

AI Regulation and Investment in the Impact Economy

As global regulatory approaches evolve, there is growing recognition that AI can drive not only technological advancement but also broader social and environmental objectives. The global regulatory landscape is crucial both for managing AI’s risks and for fostering innovation that delivers long-term, sustainable impact. As AI systems permeate more sectors, governments and regulators play a pivotal role in shaping an environment where responsible AI development is prioritized. Regulations such as the EU’s AI Act ensure that ethical standards are maintained, which can catalyze investment in AI-driven solutions aligned with the principles of the Impact Economy.

Impact investors are increasingly looking toward AI technologies as tools to solve pressing social and environmental challenges. Clear regulatory frameworks that emphasize ethical use and responsible innovation can accelerate the flow of capital into AI companies working on solutions in areas such as healthcare, clean energy, and sustainable cities — aligning with several of the UN’s Sustainable Development Goals (SDGs). Moreover, regulations that promote transparency, fairness, and accountability in AI systems help ensure that the growth of AI is inclusive, benefiting not only businesses but also society at large.

By creating a stable and predictable regulatory environment, governments are paving the way for AI innovations that deliver financial returns alongside positive social and environmental impacts — cornerstones of the Impact Economy. In this way, AI regulations, investment, and governance are not just about mitigating risks but also about unlocking the potential of AI to drive systemic, impact-driven transformation on a global scale.

Isabella Wang, an Impact Entrepreneur correspondent, is an impact entrepreneur with a deep passion for creating a positive impact in today's fast-paced digital landscape. She is the founder of Digital Thinker, a purpose-driven startup on a mission to elevate humanity through impact innovations centered on Responsible AI, Digital Well-being, Sustainability, …