Samsung Nears HBM4 Chip Production for Nvidia, AMD in AI Race

Samsung Electronics is on the verge of starting mass production of its next-generation HBM4 memory chips, with initial shipments slated for major AI chip customers Nvidia and AMD. The company is leveraging an industry-first 1c DRAM process and a 4-nanometer logic die to achieve superior data transfer speeds of 10.7 Gbps. This move is a strategic push to regain ground in the competitive high-bandwidth memory market, where it has recently trailed rival SK hynix. The chips are set to support upcoming AI platforms like Nvidia's Rubin and AMD's MI450, scheduled for release later this year.

Key Points: Samsung HBM4 Chips for Nvidia, AMD Near Production

  • Mass production begins next month
  • Chips destined for Nvidia & AMD
  • Uses advanced 1c DRAM & 4nm processes
  • Achieves 10.7 Gbps speed
  • Aims to regain market share from SK hynix

Samsung nears mass production of next-gen HBM4 memory chips for Nvidia and AMD AI accelerators, aiming to close the gap with SK hynix.

"Samsung's decision to push ahead with the 1c DRAM process, despite very low initial yields, appears to have been critical in achieving the required performance." - Industry Expert

Seoul, January 29

Samsung Electronics is nearing the mass production of its next-generation HBM4 memory chips, with the company expected to begin manufacturing as early as next month.

According to a report by the Korea Herald, initial shipments of these advanced chips are slated for major customers, including Nvidia and AMD. The move signals a potential turning point for Samsung as it seeks to regain ground in the high-bandwidth memory market for artificial intelligence, a sector where it has recently trailed its smaller rival SK hynix.

The company's strategy to bridge the competitive gap relies on the industry-first adoption of a sixth-generation 10-nanometer (1c) DRAM process. Despite facing weak early yields, Samsung has pushed forward with this technology and applied an advanced 4-nanometer foundry process to the logic die, which acts as the control unit for the memory stack.

The report states that Samsung recently passed final qualification tests for both Nvidia and AMD, paving the way for full-scale production to support next-generation AI accelerators like Nvidia's Rubin platform and AMD's MI450, both scheduled for release in the second half of this year.

This production milestone comes amid a shift in industry requirements. Leading AI chip customers have reportedly adjusted their qualification benchmarks to prioritise raw performance, raising speed requirements while easing thermal limits. Insiders noted that these changes follow advancements in custom accelerators, such as Google's tensor processing units, which have demonstrated performance levels comparable to or exceeding current market-leading GPUs.

In recent qualification testing, Samsung's HBM4 chips achieved data transfer speeds of 10.7 gigabits per second, surpassing Nvidia's requirement of 10 Gbps. In comparison, recent tests for products from SK hynix and Micron Technology recorded speeds of approximately 8.3 Gbps and 8 Gbps, respectively.
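To put the per-pin figures above in perspective, a rough per-stack bandwidth can be sketched, assuming the JEDEC HBM4 interface width of 2048 bits per stack (a spec detail not stated in the article; the vendor names and rates below are taken from the reported test results):

```python
# Rough per-stack bandwidth from per-pin data rate.
# Assumes a 2048-bit HBM4 interface per stack (JEDEC spec figure,
# not stated in the article) - a sketch, not vendor-confirmed numbers.

HBM4_IO_WIDTH_BITS = 2048  # assumed interface width per HBM4 stack

def stack_bandwidth_tbs(pin_rate_gbps: float) -> float:
    """Convert a per-pin data rate (Gbit/s) to per-stack bandwidth (TB/s)."""
    return pin_rate_gbps * HBM4_IO_WIDTH_BITS / 8 / 1000

for vendor, rate in [("Samsung", 10.7), ("SK hynix", 8.3), ("Micron", 8.0)]:
    print(f"{vendor}: {stack_bandwidth_tbs(rate):.2f} TB/s per stack")
```

On these assumptions, Samsung's 10.7 Gbps per pin would work out to roughly 2.7 TB/s per stack, versus about 2.1 TB/s at the reported SK hynix and Micron rates.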

An industry expert who requested anonymity told The Korea Herald, "The shift in market demand raised the bar for speed while allowing higher operating temperatures. Samsung's decision to push ahead with the 1c DRAM process, despite very low initial yields, appears to have been critical in achieving the required performance."

The technical configuration of Samsung's HBM4, combining the 1c DRAM process with an in-house 4-nanometer logic die, offers greater design flexibility in through-silicon via (TSV) density and signal routing. This architecture allows for higher input and output speeds than competing designs.

By contrast, SK hynix's HBM4 utilises a fifth-generation 10-nanometer (1b) DRAM process and a 12-nanometer logic die. The report suggests that SK hynix has recently redesigned its HBM4 chips while maintaining the 1b process and is currently awaiting new evaluation results.

The timing is critical for memory suppliers. Nvidia CEO Jensen Huang said earlier this month that the company's next-generation Vera Rubin platform is already in "full production" ahead of its launch in the second half of this year.

- ANI

Reader Comments

Priya S
Very impressive technical leap. The 10.7 Gbps speed is a game-changer. But I hope they've truly solved the yield issues. Pushing a new process with low yields is a huge financial risk. Fingers crossed for stable production! 🤞
Rohit P
As someone working in IT, the ripple effect of this will be huge. Faster AI training means better models for everything from healthcare diagnostics to language translation for Indian languages. Can't wait to see what gets built on top of Rubin and MI450.
Sarah B
The strategic shift in requirements is interesting. Prioritizing raw speed over thermal limits shows how much the cooling solutions have advanced. This is a bold move by Samsung to catch up. The next few quarters will be crucial for them.
Vikram M
Good to see competition heating up. SK hynix had a comfortable lead. This push from Samsung will benefit everyone—better products, maybe better prices. Hope some of this advanced manufacturing knowledge finds its way to collaborations with Indian semiconductor initiatives.
Karthik V
The article mentions the timing is critical, and it truly is. With Nvidia's Rubin already in "full production," Samsung needs to deliver flawlessly and on time. Any delay or quality hiccup could cost them this entire generation of AI chips. No room for error.