Right after the AI Action Summit in Paris ended, Anthropic co-founder and CEO Dario Amodei called the event a “missed opportunity.” He added that “greater focus and urgency is needed on several topics given the pace at which the technology is progressing” in a statement released on Tuesday.
The AI company held a developer-focused event in Paris in partnership with French startup Dust, and TechCrunch had the opportunity to interview Amodei on stage. There, he explained his line of thinking and defended a third way on AI innovation and governance, one that is neither pure optimism nor pure criticism.
“I used to be a neuroscientist, where I basically looked inside real brains for a living. And now we’re looking inside artificial brains for a living. So we will, over the next few months, have some exciting advances in the area of interpretability — where we’re really starting to understand how the models operate,” Amodei told TechCrunch.
“But it’s definitely a race. It’s a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others — you can’t really slow down, right? … Our understanding has to keep up with our ability to build things. I think that’s the only way,” he added.
Since the first AI Safety Summit at Bletchley Park in the U.K., the tone of the discussion around AI governance has changed significantly, partly due to the current geopolitical landscape.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago,” U.S. Vice President JD Vance said at the AI Action Summit on Tuesday. “I’m here to talk about AI opportunity.”
Interestingly, Amodei is trying to avoid this antagonism between safety and opportunity. In fact, he believes an increased focus on safety is an opportunity.
“At the original summit, the U.K. Bletchley Summit, there were a lot of discussions on testing and measurement for various risks. And I don’t think these things slowed down the technology very much at all,” Amodei said at the Anthropic event. “If anything, doing this kind of measurement has helped us better understand our models, which in the end, helps us produce better models.”
And every time Amodei puts some emphasis on safety, he also likes to remind everyone that Anthropic is still very much focused on building frontier AI models.
“I don’t want to do anything to reduce the promise. We’re providing models every day that people can build on and that are used to do amazing things. And we definitely should not stop doing that,” he said.
“When people are talking a lot about the risks, I kind of get annoyed, and I say: ‘oh, man, no one’s really done a good job of really laying out how great this technology could be,’” he added later in the conversation.
DeepSeek’s training costs are ‘just not accurate’
When the conversation shifted to Chinese LLM-maker DeepSeek’s recent models, Amodei downplayed the technical achievements and said he felt like the public reaction was “inorganic.”
“Honestly, my reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model,” he said. “The model that was released in December was on this kind of very normal cost reduction curve that we’ve seen in our models and other models.”
What was notable was that the model wasn’t coming out of one of the “three or four frontier labs” based in the U.S. He listed Google, OpenAI, and Anthropic as some of the frontier labs that generally push the envelope with new model releases.
“And that was a matter of geopolitical concern to me. I never wanted authoritarian governments to dominate this technology,” he said.
As for DeepSeek’s supposed training costs, he dismissed the idea that training DeepSeek V3 was 100x cheaper compared to training costs in the U.S. “I think [it] is just not accurate and not based on facts,” he said.
Upcoming Claude models with reasoning
While Amodei didn’t announce any new model at Wednesday’s event, he teased some of the company’s upcoming releases, and yes, they include reasoning capabilities.
“We’re generally focused on trying to make our own take on reasoning models that are better differentiated. We worry about making sure we have enough capacity, that the models get smarter, and we worry about safety things,” Amodei said.
One of the issues that Anthropic is trying to solve is the model selection conundrum. If you have a ChatGPT Plus account, for instance, it can be difficult to know which model you should pick in the model selection pop-up for your next message.
The same is true for developers using large language model (LLM) APIs for their own applications. They want to balance things out between accuracy, speed of answers and costs.
“We’ve been a little bit puzzled by the idea that there are normal models and there are reasoning models and that they’re sort of different from each other,” Amodei said. “If I’m talking to you, you don’t have two brains and one of them responds right away and like, the other waits a longer time.”
According to him, depending on the input, there should be a smoother transition between pre-trained models like Claude 3.5 Sonnet or GPT-4o and models trained with reinforcement learning that can produce chains of thought (CoT), like OpenAI’s o1 or DeepSeek’s R1.
“We think that these should exist as part of one single continuous entity. And we may not be there yet, but Anthropic really wants to move things in that direction,” Amodei said. “We should have a smoother transition from that to pre-trained models — rather than ‘here’s thing A and here’s thing B,’” he added.
As large AI companies like Anthropic continue to release better models, Amodei believes it will open up some great opportunities to disrupt the large businesses of the world in every industry.
“We’re working with some pharma companies to use Claude to write clinical studies, and they’ve been able to reduce the time it takes to write the clinical study report from 12 weeks to three days,” Amodei said.
“Beyond biomedical, there’s legal, financial, insurance, productivity, software, things around energy. I think there’s going to be — basically — a renaissance of disruptive innovation in the AI application space. And we want to help it, we want to support it all,” he concluded.
Read our full coverage of the Artificial Intelligence Action Summit in Paris.