New Qualcomm Chips Aim to Challenge Nvidia and AMD in AI
Qualcomm unveils its AI200 and AI250 data center chips, challenging Nvidia and AMD with next-generation performance and efficiency.
Qualcomm has made its boldest move yet. On Monday, the chipmaker announced a new series of data center chips designed to take on industry leaders Nvidia and AMD in the fast-growing market for artificial intelligence computing. The announcement sent Qualcomm’s stock up more than 15%, signaling investor confidence in the company’s ambitious push into the AI chip market.
The company unveiled two flagship processors: the AI200, scheduled to debut in 2026, and the AI250, which will follow in 2027. Qualcomm is also developing a third chip for release in 2028, committing to a steady annual launch schedule. These chips are designed specifically for AI inference (running trained AI models) rather than training them from scratch, which helps keep power consumption and overall costs down.
According to Qualcomm senior vice president Durga Malladi, the upcoming AI250 chip will feature a new memory architecture that delivers more than ten times the bandwidth of traditional setups, a leap that could allow Qualcomm to compete with, and potentially undercut, Nvidia and AMD in AI computing.
In a surprising twist, Qualcomm isn’t just selling chips this time; it’s also building complete rack-level systems that are ready to plug directly into data centers. The company hopes this “full-stack” approach will appeal to enterprise customers looking for complete, power-efficient AI infrastructure. One of its first major partners will be Saudi Arabian AI company Humain, which plans to deploy 200 megawatts of computing power using Qualcomm’s systems starting in 2026.
Qualcomm’s pitch centers on energy efficiency and low long-term costs. The company argues that its servers will consume much less power than competitors’ offerings, leading to substantial savings over time, an important factor as data centers face rising power demands. This emphasis on power efficiency is not new for Qualcomm; earlier this year, the company highlighted research showing that its mobile chips could drastically cut energy use for AI workloads.
The move marks a significant comeback attempt for Qualcomm in the data center world. The company previously tried to enter the server market with its Centriq ARM-based processors in 2017, even partnering with Microsoft. But that effort failed due to stiff competition from Intel and AMD, as well as internal challenges and legal battles that diverted attention from the project. Qualcomm officially exited the data center market the following year as part of a broader cost-cutting initiative.
Now, Qualcomm is back, and this time it’s betting big on AI. Its new AI200 and AI250 chips use the company’s proprietary Hexagon NPU (Neural Processing Unit), a technology originally designed for its smartphone and PC processors. By scaling that architecture up to the data center, Qualcomm hopes to combine high performance with exceptional power efficiency, a combination that could attract cost-conscious customers.
Beyond raw performance, total cost of ownership has become an important selling point for cloud providers and AI companies. Large-scale data centers already cost billions of dollars to build, and operating them over their lifetimes can be even more expensive. Qualcomm’s low-power approach could give it an edge among customers looking to reduce both upfront and long-term expenses.
