Nvidia lays out AI roadmap as industry questions sustainability of boom

Nvidia used the CES 2026 technology conference in Las Vegas to offer its clearest picture yet of what comes next for the artificial intelligence hardware that has fueled its meteoric rise.

The company unveiled new details about “Vera Rubin,” its next-generation computing platform designed for AI data centers, outlining how it works, when it will launch, and why Nvidia believes it will shape the next phase of AI development.

Vera Rubin, which is already in production, is expected to power Nvidia’s first commercial products in the second half of 2026. The platform is aimed squarely at the growing demands of advanced AI systems, as models evolve from simple chatbots into tools that can reason through complex, multi-step tasks and act more like autonomous assistants.

Nvidia’s dominance in AI chips has made it a central player in the technology industry. Its hardware is deeply embedded across data centers operated by cloud providers, startups, and research labs, helping the company briefly reach a $5 trillion market value last year. That success, however, has also brought scrutiny, with investors and analysts questioning whether AI spending is sustainable and whether customers will eventually reduce their dependence on Nvidia by building their own chips.

Speaking on stage at CES, Nvidia founder and CEO Jensen Huang addressed concerns about where the massive funding for AI is coming from. He argued that companies are not inventing new budgets for AI, but rather redirecting money away from traditional computing research toward artificial intelligence.

According to Nvidia, Vera Rubin is designed to tackle a critical bottleneck in modern AI systems. While earlier generations of AI relied mainly on raw computing power, newer models struggle with managing large amounts of contextual information, such as memory, long conversations, and complex instructions. Nvidia says its new platform introduces a reworked approach to storage and data movement, enabling AI systems to process richer context more efficiently.

One example highlighted at CES showed how AI development has shifted. In a demonstration, Huang presented a personal assistant built by linking a small robot to several AI models running on Nvidia hardware. The system could handle tasks like recalling a to-do list or issuing simple commands, illustrating how AI agents can now combine perception, reasoning, and action. Huang said building such systems would have been nearly impossible a few years ago, but is now relatively straightforward thanks to large language models.

Nvidia executives emphasized that this change means storage and memory can no longer be treated as secondary concerns. As AI systems “think” through problems rather than delivering instant responses, the infrastructure supporting them must evolve as well.

The company is also expanding its focus on inference—the process by which trained AI models generate answers. Ahead of CES, Nvidia announced a licensing agreement with Groq, a company specializing in inference hardware, signaling Nvidia’s intent to strengthen its position across the entire AI pipeline.

Major cloud providers, including Microsoft, Amazon Web Services, Google Cloud, and CoreWeave, are expected to be among the first to deploy Vera Rubin-based systems. Hardware companies such as Dell and Cisco are also preparing to integrate the technology into their data centers, while leading AI labs like OpenAI, Anthropic, Meta, and xAI are likely to use it for training and advanced applications.

Beyond data centers, Nvidia also highlighted new work in autonomous vehicles and robotics, an area it calls “physical AI.” These efforts build on earlier announcements and reflect Nvidia’s ambition to extend AI beyond software into machines that interact with the real world.

Despite its momentum, Nvidia faces mounting pressure. Tech giants have poured tens of billions of dollars into AI infrastructure this year alone, and analysts estimate global data center investment could approach $7 trillion by 2030. Critics warn that much of this spending circulates among a small group of companies, raising concerns about long-term returns.

At the same time, rivals are closing in. Google, OpenAI, and others are developing custom chips, while competitors like AMD and Qualcomm are pushing into the data center market. As Nvidia rolls out Vera Rubin, it must not only deliver on its technical promises but also convince customers and investors that its central role in AI is built to last.