Days after OpenAI CEO Sam Altman warned that the company might have to stop doing business in Europe if the European Union's AI Act restrictions were approved in their current form, he now appears to have changed his mind.
Altman recently told US lawmakers that he supported regulation of artificial intelligence, but this week, while addressing reporters in the United Kingdom, he said he had many concerns about the EU's AI Act and even accused the EU of over-regulating.
ChatGPT, the innovative but somewhat contentious AI system, was created by OpenAI, a company backed by Microsoft.
According to the Financial Times, Altman said OpenAI will try to comply with the act, but if it cannot, it will cease operating. The act, which is scheduled to be finalized next year, is now being discussed by members of the EU Parliament, Council, and Commission.
Altman, however, tried to tone down the rhetoric in a tweet on Friday morning, writing that it had been a very fruitful week of discussions in Europe on how to effectively regulate AI, and that the company was delighted to continue doing business there and, of course, had no intention of leaving.
His earlier comments had angered European lawmakers, with many politicians debating whether the level of regulation proposed by the EU was needed to address concerns about generative AI.
EU Commissioner Thierry Breton told Reuters that people should be clear that the rules are not negotiable, since they are in place to protect the safety and welfare of citizens.
According to Breton, Europe has been far ahead of the curve in developing a strong and balanced legislative framework for AI, one that addresses safety and fundamental rights while also fostering innovation, positioning Europe as a leader in trustworthy AI.
Altman thinks it would be "wise" to regulate AI
Earlier this month, Altman told US senators at a Senate Judiciary subcommittee hearing on privacy, technology, and the law that legislation would be "wise", since users need to know whether they are talking to an AI system or viewing content, such as photographs, videos, or documents, produced by a chatbot.
When asked at the hearing whether voters should be worried that elections could be manipulated by large language models such as GPT-4 or its chatbot app ChatGPT, Altman responded that this was one of his areas of greatest concern.
He added that, given that an election is coming the following year and these models are improving all the time, he considers this a really serious area of concern. Such models have a more general ability to manipulate, persuade, and deliver more individually targeted interactive disinformation.
What's more, he said, there will also need to be laws and norms about what a company offering a model capable of the kinds of abilities under discussion should disclose. He is quite worried about this.