SK hynix Launches AI Memory Module for Nvidia's Vera Rubin Platform

SK hynix has commenced mass production of its SOCAMM2 memory module, designed specifically for Nvidia's forthcoming Vera Rubin AI platform. The module integrates 192GB of LPDDR5X DRAM, offering over twice the bandwidth and 75% greater power efficiency than conventional DDR5 modules. Its modular form factor allows for greater system design flexibility compared to soldered memory. The company positions SOCAMM2 as a key solution to alleviate memory bottlenecks in large-scale AI model training and inference.

Key Points: SK hynix Begins Mass Production of SOCAMM2 for Nvidia AI

  • Mass production of next-gen SOCAMM2 module
  • Optimized for Nvidia's Vera Rubin AI platform
  • Doubles bandwidth, 75% better power efficiency vs DDR5
  • Enables flexible, upgradable AI server design

SK hynix initiates mass production of SOCAMM2 for Nvidia's Vera Rubin platform

SK hynix starts mass production of SOCAMM2, a high-efficiency AI memory module optimized for Nvidia's upcoming Vera Rubin platform.


Seoul, April 20

SK hynix announced on Monday that it has begun mass production of SOCAMM2, a next-generation memory module developed to improve performance and power efficiency in artificial intelligence servers.

This new Small Outline Compression Attached Memory Module 2 is optimized for Nvidia's upcoming Vera Rubin platform, according to a report by The Korea Herald, signaling a deeper level of technical collaboration between the two companies.

The SOCAMM2 module integrates 192 gigabytes of memory using sixth-generation 10-nanometer LPDDR5X DRAM. While traditional server modules typically rely on standard DDR5, this specific design vertically stacks LPDDR chips to improve energy efficiency while maintaining the high performance required for modern AI workloads.

"We expect SOCAMM2 to fundamentally address memory bottlenecks in training and inference for large language models with hundreds of billions of parameters, significantly accelerating overall system performance," the report quoted the company as saying.

Data provided by the company indicates that SOCAMM2 delivers more than twice the bandwidth and over 75 per cent greater power efficiency than conventional DDR5 RDIMM modules, making it well suited to high-performance AI workloads. Per-pin data transfer speeds reach 9.6 gigabits per second, up from 8.5 Gbps in the previous SOCAMM1 generation, and the module carries a higher number of input and output pins to raise total data throughput.
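As a back-of-the-envelope illustration of how per-pin speed and pin count combine into module bandwidth, the sketch below simply multiplies the two. The 128-bit bus width is an assumed figure chosen for illustration only; the article does not disclose SOCAMM2's actual data-pin count.

```python
def module_bandwidth_gbs(per_pin_gbps, data_pins):
    """Peak module bandwidth in GB/s: per-pin rate times bus width,
    divided by 8 bits per byte."""
    return per_pin_gbps * data_pins / 8

# Illustrative comparison with an assumed 128-bit data bus:
socamm1 = module_bandwidth_gbs(8.5, 128)   # 136.0 GB/s
socamm2 = module_bandwidth_gbs(9.6, 128)   # 153.6 GB/s

# The per-pin uplift alone is about 13 per cent ((9.6 - 8.5) / 8.5);
# the larger gains the article cites would come from adding pins.
print(socamm1, socamm2)
```

Because bandwidth scales linearly with both factors, a generation can grow throughput by raising the per-pin rate, widening the bus, or both, which is why the article highlights the extra input/output pins alongside the faster signalling.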

These technical improvements are expected to reduce the total cost of ownership for hyperscale data center operators. In these environments, investment decisions depend on rack-level performance, power consumption, and cooling requirements rather than just the cost of individual components.

The report noted that while SOCAMM does not reach the ultra-high bandwidth levels of High Bandwidth Memory (HBM), its architecture allows for a simpler manufacturing process and higher yields, which provides a cost advantage on a per-capacity basis.

"In this hierarchy, SOCAMM serves as an intermediate layer, handling frequently accessed 'hot' data and buffering workloads between HBM and system memory to reduce bottlenecks," the report quoted an industry official.
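The tiering idea the official describes can be sketched in a few lines. The Python below is purely schematic: the class, capacity, and eviction policy are invented for illustration, and real HBM/SOCAMM/DRAM placement is managed by hardware and system software rather than application code. It shows only the core pattern of serving repeated accesses to "hot" data from a small fast tier in front of a larger slow one.

```python
class TieredMemory:
    """Schematic two-tier lookup: a small fast tier (standing in for
    SOCAMM-class memory) in front of a large slow tier (bulk system
    memory). Hypothetical illustration, not a real memory controller."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}        # hot data, limited capacity
        self.slow = {}        # cold data, large capacity
        self.fast_hits = 0
        self.slow_hits = 0

    def put(self, key, value):
        self.slow[key] = value

    def get(self, key):
        if key in self.fast:
            self.fast_hits += 1
            return self.fast[key]
        self.slow_hits += 1
        value = self.slow[key]
        # Promote recently used data to the fast tier,
        # evicting an arbitrary entry when it is full.
        if len(self.fast) >= self.fast_capacity:
            self.fast.pop(next(iter(self.fast)))
        self.fast[key] = value
        return value

mem = TieredMemory(fast_capacity=2)
mem.put("weights", b"w")
mem.get("weights")   # first access comes from the slow tier, then promotes
mem.get("weights")   # second access is served from the fast tier
print(mem.fast_hits, mem.slow_hits)  # 1 1
```

The win comes from the second access onwards: once hot data has been promoted, repeated reads never touch the slow tier, which is the buffering role the article attributes to SOCAMM between HBM and system memory.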

The modular form factor represents a shift from conventional LPDDR memory, which is usually soldered directly onto boards and cannot be replaced. This new design allows for more flexibility in system maintenance and design. SK hynix worked with Nvidia to tailor the module for the Vera Rubin platform, which is scheduled for launch in the second half of the year. The company also expects to supply its next-generation HBM4 memory for the same platform.

"The 192GB SOCAMM2 sets a new benchmark for AI memory performance. We will strengthen our position as a trusted AI memory solutions provider through close collaboration with global AI customers," the report quoted Kim Ju-seon, SK hynix President and head of AI Infra.

- ANI


Reader Comments

Priya S
Fascinating read! The focus on reducing total cost of ownership is key. For Indian data centers dealing with high electricity costs and heat, a 75% improvement in power efficiency isn't just a number—it's a game-changer. Hope this tech trickles down and becomes affordable for our local cloud providers soon. 🤞
Arjun K
Great for global AI progress, but a bit disheartening to see no mention of Indian partnerships. South Korea and the US are locking arms on the hardware front. India has the talent—our engineers are everywhere in Silicon Valley. We need a national mission to get into this semiconductor and advanced memory game, not just software.
Sarah B
Working in Bangalore's tech scene, the bottleneck for running large local AI models is often memory bandwidth. SOCAMM2 acting as that "intermediate layer" sounds like a smart architectural choice. Faster inference could really boost practical AI applications in sectors like agriculture and healthcare here.
Vikram M
The modular form factor is a win for sustainability and e-waste reduction. In India, where electronic waste is a growing problem, being able to replace/upgrade a memory module instead of the whole board is a step in the right direction. Tech innovation should always consider end-of-life.
Karthik V
Respectfully, while the specs are impressive, articles like this often gloss over the real-world accessibility. This will be priced for giants like NVIDIA and hyperscalers. For the average Indian developer or small AI firm, this remains out of reach. The gap between global R&D and local affordability in our market is still too wide. We need solutions for that too.
