
EU’s AI Act Shapes Transparency and Trust

Navigating the intersection of technology and human rights

At first glance, the EU’s new AI Act may seem like just another tech regulation, but it represents much more. Regulations are never only about the specific topic they address; they also reflect cultural shifts. Like other forms of technology regulation, the Act provides guardrails for how humans interact with technology. But unlike prescriptive regulations, which tend to be binary (do more of this, less of that), the AI Act is part of a larger movement toward transparency at both the product and organizational levels. As the EU states, “transparency obligations… ensure that humans are informed when necessary, fostering trust.” Transparency is a significant phenomenon that is redefining what we consider “normal” versus “revolutionary” organizational behavior.

AI regulations might seem relevant only to tech companies or product teams, but CXOs and boards may feel the impact most acutely, because product strategies may need to change. Transparency regulations don’t necessarily alter how products are designed, but they do influence how products are applied in the “real world”: AI usage within operational, programmatic, or governance structures may need to be reconsidered. The AI Act’s press release mentions “emotion recognition systems used in the workplace” and “categorizing people” for “predictive policing,” highlighting that these regulations focus on the relationship between human behavior and product application. This doesn’t mean revenue must suffer, but it does suggest that humane usage and benevolent use cases will become the norm.


In some ways, the EU’s AI Act serves as a consumer protection mechanism that shifts focus from human-made tools back to humans themselves, urging us to rethink AI’s potential for both value creation and harm. This may seem counterintuitive, because policies are rarely direct outcomes of research or data in the strictest sense. Instead, regulations typically arise when stakeholder groups are ready to engage in dialogue and establish structures around risk mitigation and opportunity maximization. In other words, regulatory processes move at the speed of human and societal capacity for change, which tends to be slow and fraught with resistance.

The AI Act is part of a larger movement toward transparency at both the product and organizational levels.

At this point, many organizations have already adapted to transparency through required Environmental, Social, and Governance (ESG) disclosures, which serve, in effect, as a framework for risk mitigation in investing. The new AI regulations are an extension of this shift, an encore of sorts. The AI Act is a hybrid social/tech policy, and despite ongoing confusion about the “S” in ESG, it highlights that component’s critical role in risk management. Leveraging technology to address social challenges presents a strong value proposition, and organizations already “guaranteeing human rights in the design, development, purchase, sale, and use of AI” are likely to be ahead of the curve.


The new Act will be implemented in stages between August 2024 and 2027, giving organizations a roughly 30-month runway to prepare as stewards of the new standards. The EU AI Office aims to foster “international dialogue and cooperation on AI issues,” recognizing the importance of global alignment on AI governance. Research indicates that ethical, healthy tech/human interactions benefit both users and organizations, so by 2027 we may see some well-established brands lose their market dominance while others, perhaps yet unheard of, rise in favor and market share. This shift will be driven partly by regulations and partly by evolving stakeholder sentiment.

Rather than stifling innovation, these regulations present a tremendous opportunity for organizations across industries and regions to emphasize human-centered design, which typically leads to better products. In the longer term, regulations are an invitation to integrate more research and design thinking into product application, which, in turn, creates business development opportunities. In this way, humanistic regulations can actually drive innovation and foster economic growth.

Dr. Sara Murdock, an Impact Entrepreneur Correspondent, is an award-winning expert in organizational effectiveness with 20 years of experience as an advisor, speaker, author, researcher, and executive. Known as a cultural futurist and business anthropologist, Dr. Murdock is a pioneer in aligning organizational objectives with actual impact.