Intel confirmed at its AI Summit in Taipei that Microsoft’s Copilot AI service will soon run locally on PCs rather than relying solely on the cloud. The shift comes alongside a new requirement of 40 TOPS of Neural Processing Unit (NPU) performance for next-gen AI PCs. Intel executives discussed the development in a recent Q&A session with Tom’s Hardware.
Microsoft and Intel recently introduced a joint definition of an AI PC: a system with an NPU, CPU, and GPU, support for Microsoft’s Copilot, and a physical Copilot key on the keyboard. PCs that meet this specification are already on the market, but next-gen AI PCs will have to clear the higher 40 TOPS NPU threshold.
Running Copilot functions locally on the NPU promises lower latency and better performance and privacy than cloud-based operation. Notably, Intel’s Meteor Lake NPU delivers up to 10 TOPS and AMD’s Ryzen “Hawk Point” platform provides 16 TOPS, both short of the 40 TOPS requirement. Qualcomm’s Snapdragon X Elite chips, due later this year, are rated at 45 TOPS.
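For a quick sense of how those figures stack up against the threshold, here is a minimal Python sketch; the numbers are the vendor claims quoted above, not measurements.

```python
# Illustrative only: cited NPU TOPS figures vs. the 40 TOPS next-gen AI PC bar.
NEXT_GEN_NPU_TOPS = 40

npu_tops = {
    "Intel Meteor Lake": 10,
    "AMD Ryzen 'Hawk Point'": 16,
    "Qualcomm Snapdragon X Elite": 45,
}

for platform, tops in npu_tops.items():
    status = "meets" if tops >= NEXT_GEN_NPU_TOPS else "falls short of"
    print(f"{platform}: {tops} TOPS {status} the {NEXT_GEN_NPU_TOPS} TOPS requirement")
```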
To keep the customer experience positive, Microsoft wants Copilot running on the NPU rather than the GPU, minimizing the impact on battery life. Intel’s roadmap includes next-gen processors for various segments of the AI market, with upcoming Lunar Lake processors promising three times the AI performance of its current chips.
As Intel expands the AI capabilities of its processors, its focus remains on getting applications optimized for its silicon through efforts like the OpenVINO toolkit. Microsoft’s Copilot relies on the vendor-agnostic DirectML API, but the TOPS competition among chipmakers signals a race for higher performance and market dominance in the AI PC realm in the years ahead.
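To give a sense of what targeting Intel’s NPU through OpenVINO looks like in practice, here is a minimal, hypothetical Python sketch; the model path and dummy input are placeholders, and whether an “NPU” device is exposed depends on the hardware, drivers, and OpenVINO release installed.

```python
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a recent Core Ultra system

# Placeholder path: any model in OpenVINO IR (or ONNX) format would work here.
model = core.read_model("model.xml")

# Prefer the NPU for sustained, low-power inference; otherwise let AUTO pick a device.
device = "NPU" if "NPU" in core.available_devices else "AUTO"
compiled = core.compile_model(model, device)

# Dummy input shaped to the model's first input; a real app would pass actual data.
input_tensor = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
result = compiled(input_tensor)
```

Swapping the device string to "GPU" or "CPU" retargets the same model without other code changes, which is the kind of portability these developer programs are meant to encourage.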