OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation


OpenAI CEO Sam Altman has warned that the company might pull its services from the European market in response to AI regulation being developed by the EU.

Speaking to reporters after a talk in London, Altman said he had “many concerns” about the EU AI Act, which is currently being finalized by lawmakers. The terms of the Act have been expanded in recent months to include new obligations for makers of so-called “foundation models” — large-scale AI systems that power services like OpenAI’s ChatGPT and DALL-E.

“The details really matter,” said Altman, according to a report from The Financial Times. “We will try to comply, but if we can’t comply we will cease operating.”

In comments reported by Time, Altman said the concern was that systems like ChatGPT would be designated “high risk” under the EU legislation. This means OpenAI would have to meet a number of safety and transparency requirements. “Either we’ll be able to solve those requirements or not,” said Altman. “[T]here are technical limits to what’s possible.”

In addition to technical challenges, disclosures required under the EU AI Act also present potential business threats to OpenAI. One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”) and provide “summaries of copyrighted data used for training.”

OpenAI used to share this sort of information but has stopped as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work from being copied by rivals.

In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

The recent comments from Altman help fill out a more nuanced picture of the company’s desire for regulation. Altman has told US politicians that regulation should mostly apply to future, more powerful AI systems. By contrast, the EU AI Act is much more focused on the current capabilities of AI software.
