Qualcomm and AMD to Use SOCAMM2 Memory in Next-Gen AI Systems

Quick Report

Qualcomm and AMD are set to integrate SOCAMM2 memory modules into their upcoming AI hardware, following NVIDIA's lead with its Vera CPU. SOCAMM2 allows high-capacity, high-bandwidth LPDDR5X memory to be added modularly around CPUs and accelerators, supporting up to 1.5 TB of capacity and 1.2 TB/s of bandwidth.

This modular approach enables flexible system configurations and easier upgrades, reducing reliance on soldered memory. AMD's Instinct MI accelerators and Qualcomm's AI inference cards are expected to benefit, making AI systems more scalable and efficient.

Written using GitHub Copilot GPT-4.1 in agentic mode, instructed to follow the current codebase style and conventions for writing articles.

Source(s)

  • TPU
  • Hankyung