Intel has announced a new AI chip for the data centre that it plans to launch next year, in a renewed push to break into the AI silicon market.
The new chip, a GPU, will be optimised for energy efficiency and support a wide range of uses such as running AI applications, or inference, Intel chief technology officer Sachin Katti said on Tuesday at the Open Compute Summit.
“It emphasises that focus that I talked about earlier: inference, optimised for AI, optimised for delivering the best token economics out there, the best performance per dollar out there,” Katti said.
The new chip, called Crescent Island, is the struggling US chip maker’s latest attempt to capitalise on the frenzy in AI spending that has generated billions in revenue for AMD and Nvidia.
The company’s plans trail those of competitors and underscore the significant challenge Intel’s executives and engineers face in capturing a meaningful portion of the market for AI chips and systems.
Intel CEO Lip-Bu Tan has vowed to restart the company’s stalled AI efforts after the company effectively mothballed projects such as the Gaudi line of chips and Falcon Shores processor.
Crescent Island will feature 160GB of memory that is slower than the high-bandwidth memory (HBM) found on AMD’s and Nvidia’s data centre AI chips. The chip will be based on a design that Intel has used for its consumer GPUs.
Intel did not disclose which manufacturing process Crescent Island would use. The company did not immediately respond to a request for comment.
Yearly updates
Since the generative AI boom sparked by the launch of OpenAI’s ChatGPT in November 2022, start-ups and large cloud operators have rushed to grab GPUs that help run AI workloads on data centre servers. The explosion in demand has led to a supply crunch and sky-high prices for chips designed or suited for AI applications.
Katti said at the San Jose trade show that the company would release new data centre AI chips every year, which would match the annual cadence set by AMD, Nvidia and several of the cloud computing companies that make their own chips.
Nvidia has dominated the market for chips used to build large AI models such as the one behind ChatGPT. Tan has said the company plans to focus its design efforts on chips for running those AI models, which work behind the scenes to make AI software operate.
“Instead of trying to build for every workload out there, our focus is increasingly going to be on inference,” Katti said. Intel has taken an open and modular approach in which customers can mix and match chips from different vendors, he said.
Nvidia said last month it would invest US$5-billion in Intel, taking a roughly 4% stake and becoming one of its largest shareholders as part of a partnership to co-develop future PC and data centre chips. The deal is part of Intel’s effort to ensure that its CPUs are installed in every AI system sold, Katti said. — Akash Sriram and Max Cherney, (c) 2025 Reuters