OpenAI says it's unhappy with Nvidia inference hardware, now looking at AMD, Cerebras, Groq
OpenAI isn't happy with Nvidia's AI chips anymore, especially when it comes to how fast they can answer users. The company started looking for other options last year, and it is now talking to AMD and Cerebras; it was even in talks with Groq before those discussions were shut down.

The tension became real when OpenAI realized Nvidia's chips weren't fast enough for specific workloads like writing code and handling software-to-software tasks. One insider allegedly said OpenAI wants new chips to handle at least 10% of its inference needs going forward. Inference is the part where the AI replies to users, not the part where it learns from data.

OpenAI wants faster chips for coding and user replies

Most of OpenAI's current work still runs on Nvidia, but behind the scenes, it's testing chips that could make everything faster. These include chips packed with SRAM, which speeds things up by putting memory right next to the processor; Nvidia and AMD still rely on memory that sits outside the chip, which slows responses down (a rough sketch of that math appears at the end of this article).

People inside OpenAI pointed to Codex, the tool that writes code, as the place where the slowness was the biggest problem. Some staff even blamed the weak performance on Nvidia's hardware.

In a press call on January 30, OpenAI CEO Sam Altman said, "Customers using our coding models will put a big premium on speed for coding work." Altman added that regular ChatGPT users don't care about speed as much, but for developers and companies, every second counts. He said OpenAI had just signed a deal with Cerebras to help speed things up.

At the same time, companies like Anthropic and Google are getting better results using their own chips. Google's TPUs are built specifically for the kind of work inference needs. That's made them faster at responding, especially for models like…
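To put rough numbers on the SRAM advantage mentioned above, here is a minimal back-of-envelope sketch in Python. The bandwidth and model-size figures are illustrative assumptions, not numbers from OpenAI, Nvidia, Cerebras, or Groq; the point is only that per-token response speed during inference is largely bound by how fast a chip can stream model weights out of memory.

```python
# Back-of-envelope: why memory bandwidth dominates per-token latency
# during inference decoding. All numbers below are illustrative
# assumptions, not figures from the article or any vendor.

def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed for a memory-bound workload:
    every generated token requires streaming the full set of
    weights from memory to the compute units."""
    return bandwidth_bytes_per_s / model_bytes

MODEL_BYTES = 70e9  # hypothetical 70 GB of weights (e.g. a 70B-parameter model at 8-bit)

# Rough orders of magnitude: off-chip HBM on a GPU vs. aggregate
# on-chip SRAM in a wafer-scale or multi-chip design.
HBM_BW = 3e12     # ~3 TB/s, ballpark for a high-end GPU's HBM stack
SRAM_BW = 100e12  # ~100 TB/s, plausible aggregate for large on-chip SRAM

print(f"HBM-bound decode:  ~{tokens_per_second(MODEL_BYTES, HBM_BW):.0f} tokens/s")
print(f"SRAM-bound decode: ~{tokens_per_second(MODEL_BYTES, SRAM_BW):.0f} tokens/s")
```

Under these assumed numbers, the SRAM-bound design generates tokens roughly 30 times faster, which is the kind of gap that matters far more to developers running a coding tool like Codex than to a casual ChatGPT session.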