NVIDIA Feynman to Use TSMC A16 Node for High-Performance Supremacy

Quick Report

NVIDIA is reportedly planning to adopt TSMC's forthcoming A16 process node — a successor to its 2 nm-class nodes featuring backside power delivery (BSP) — for its next-generation Feynman high-performance computing (HPC) GPUs. Reports from Taiwan's Commercial Times (CTEE) suggest NVIDIA is willing to absorb the higher cost of A16 silicon to secure the density, power-delivery, and efficiency benefits that could deliver a substantial lead in datacenter and AI workloads.

According to the CTEE report, TSMC's A16 will enter volume production in the second half of next year. The A16 platform combines TSMC's advanced transistor scaling with an integrated backside power delivery scheme, which places power routing beneath the transistor layer for shorter supply paths and improved power integrity. Industry sources told CTEE that NVIDIA has considered adopting BSP designs for future Feynman chips to fully exploit A16's performance-per-watt advantages despite the premium pricing.
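The power-integrity benefit of shorter supply paths comes down to Ohm's law: less resistance between the supply and the transistors means less IR drop at a given current. The sketch below illustrates that arithmetic with entirely hypothetical resistance and current values (none of these numbers come from the report or from TSMC); it is only meant to show why a shorter, wider backside path helps.

```python
# Illustrative only: rough IR-drop comparison for a hypothetical supply path.
# All resistance and current values below are made-up assumptions, not
# figures from TSMC or the CTEE report.

def ir_drop(current_a: float, resistance_ohm: float) -> float:
    """Voltage drop across a supply path, V = I * R (Ohm's law)."""
    return current_a * resistance_ohm

# Assumed path resistances: frontside power routing threads down through
# many thin metal layers; a backside network reaches the transistors more
# directly, so we assume a lower effective resistance.
FRONTSIDE_R = 50e-3  # 50 mOhm (assumed)
BACKSIDE_R = 20e-3   # 20 mOhm (assumed)

CURRENT = 2.0  # amps drawn by a hypothetical circuit block (assumed)

drop_front = ir_drop(CURRENT, FRONTSIDE_R)  # 0.10 V lost in the path
drop_back = ir_drop(CURRENT, BACKSIDE_R)    # 0.04 V lost in the path

print(f"frontside IR drop: {drop_front:.3f} V")
print(f"backside IR drop:  {drop_back:.3f} V")
```

At a nominal supply of around 0.7 V, shaving tens of millivolts of droop either recovers clock headroom or lets the chip run at a lower supply voltage for the same speed, which is where the performance-per-watt gain comes from.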

The report says NVIDIA may lead early adoption of the A16 BSP variant for Feynman, following past patterns in which major GPU vendors accept early-process premiums to secure performance advantages. The article also suggests A16 could mark an inflection point that shifts leading-edge demand from smartphone-first economics to AI/HPC-led growth, with AI customers (including GPU makers and cloud providers) willing to pay up for the highest-performing wafers.

CTEE indicates the A16 node will drive new equipment and test requirements — system-level testing, automated optical inspection (AOI), and burn-in — to ensure yield and reliability at such fine geometries. High wafer pricing for A16 is expected; cited industry numbers suggest substantially higher wafer costs for BSP-enabled A16 parts compared to previous nodes.

If NVIDIA does deploy Feynman on A16, it would mark a strategic step: favoring raw performance and integration density over per-wafer cost in select HPC parts. That choice could translate to leadership in AI/ML training and inference performance once Feynman enters the market.

Written using GitHub Copilot GPT-5 Mini in agentic mode instructed to follow current codebase style and conventions for writing articles.

Source(s)

  • TSMC A16 reporting and BSP background (CTEE)
  • TPU
  • TSMC A16