Generative AI Faces Fresh Transparency Concerns in EU


Lawmakers in Europe have signed off on a comprehensive set of rules—the EU's Artificial Intelligence Act—making it the first formal regulation of artificial intelligence.

This groundbreaking legislation could serve as a potential blueprint for policymakers worldwide who are tasked with setting guardrails for the rapidly evolving technology.

What does the bill entail?

In the latest version of the bill, passed on Wednesday, generative AI would be subject to new transparency requirements. These include publishing summaries of the copyrighted material used to train models—a disclosure publishers have sought as a basis for fair compensation. Additionally, makers of generative AI models will be required to put guardrails in place to prevent the generation of illegal content.

“The AI Act puts some fairly reasonable guardrails in place,” said Chris Pedigo, svp of government affairs at Digital Content Next. “The transparency piece gives publishers an opportunity to regain control over their content.”

The regulation is far from becoming law, and its final version is not anticipated until later this year. Still, it is the first of its kind and alleviates some publisher concerns over fair use and the risk of losing traffic and revenue.

How does the rule work?

Measures to rein in AI were first proposed in 2021 but gave little attention to generative AI. Under the current version, makers of AI systems such as ChatGPT will be required to disclose the information used to build their programs. The law also regulates any product or service that uses AI and curtails the use of facial recognition software.

The legislation follows a risk-based approach and categorizes AI systems into four levels of risk, ranging from minimal to unacceptable. Through risk assessments, makers of the technology will assess the everyday use of the tech before making it widely available.

The EU bloc, made up of 27 member states, will enforce the rules and could force companies to withdraw their products from the market. Proposed fines could reach $43 million or 7% of a company’s annual global revenue.

“It’s too early to tell if this act will have some real teeth to compel the tech companies to curb the harmful effects of AI,” said Chirag Shah, a professor at the Information School at the University of Washington. “What I see currently lacking is a notion of accountability. Perhaps these details will emerge over time.”
