Nvidia intends to develop a new chip to enhance AI processing speed.

Nvidia (NVDA.O) plans to introduce a new processor aimed at helping OpenAI and other customers build faster, more efficient AI systems, the Wall Street Journal reported on Friday, citing people familiar with the matter.
Nvidia is developing a new system for "inference" computing, the processing method that enables AI models to answer queries, according to the report.
The new platform is scheduled for unveiling at Nvidia’s GTC developer conference in San Jose next month and will feature a chip developed by the startup Groq, according to sources familiar with the matter.
Reuters could not immediately verify the report. Nvidia and OpenAI did not immediately respond to Reuters' requests for comment.
Earlier this month, Reuters reported that OpenAI is dissatisfied with the speed at which Nvidia's hardware generates responses for ChatGPT users, particularly for software development and for AI systems that interact with other software.
OpenAI needs new hardware that will ultimately handle approximately 10% of its future inference computing requirements, according to a source.
The maker of ChatGPT has held talks with startups including Cerebras and Groq about supplying chips for faster inference, according to two sources. Nvidia finalized a $20 billion licensing agreement with Groq, a deal that ended OpenAI's negotiations with the startup, according to one of the sources cited by Reuters.
In September, Nvidia said it planned to invest up to $100 billion in OpenAI as part of a deal that gave the chipmaker equity in the startup and provided OpenAI with funds to buy advanced chips.