DeepSeek's AI model needs significantly more computing power
Nvidia CEO Jensen Huang told CNBC that the R1 model from the Chinese startup DeepSeek requires far more computing power than previously thought. Huang called the model "fantastic" because it is an open-source reasoning model that analyzes problems step by step and can verify its own answers.

Huang said this new class of reasoning AI consumes "100 times more compute than a non-reasoning AI," a finding that surprised many observers, who had expected the opposite. Earlier this year, the announcement of DeepSeek's model triggered a sharp sell-off in AI stocks, including a 17% plunge in Nvidia's shares that erased nearly $600 billion in market value.

During the interview, Huang also discussed Nvidia's new AI infrastructure developments for robotics and enterprise solutions, and he highlighted partnerships with major companies including Dell, Accenture, and ServiceNow.

Huang observed that the industry's focus has shifted from generative models to reasoning models, and he predicted that global spending on computing will reach one trillion dollars by the end of the decade, with a large share devoted to AI. "We've got a lot of infrastructure to build," he said, pointing to strong future opportunities for Nvidia and the industry as a whole.