STMicroelectronics Pushes 800 VDC Power Closer to AI GPUs




Image credit: STMicroelectronics

Power delivery inside AI data centers is starting to look less like a scaling problem and more like a distribution problem. Moving energy across racks is no longer the hard part. Getting it from the rack into the silicon without wasting it as heat is where things begin to break down. The traditional 48 V and 54 V intermediate buses start to feel stretched once current levels climb and copper losses begin to dominate the design.

STMicroelectronics is working in that space with a set of 800 VDC conversion architectures aligned with NVIDIA’s 800 VDC reference design. These include new 800 V to 12 V and 800 V to 6 V conversion stages that shift where voltage is stepped down inside the system, carrying energy from high-voltage rack distribution directly to the voltage domains required by modern AI accelerators.

In a typical AI server rack, power enters at high voltage and is gradually stepped down through multiple conversion stages before reaching the GPU. Each stage introduces loss, heat, and physical complexity, and those penalties start to scale aggressively as compute density increases.
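
Because stage efficiencies compound multiplicatively, removing even one step has an outsized effect on the end-to-end number. A minimal Python sketch of that compounding, using illustrative stage efficiencies rather than figures published by ST or NVIDIA:

```python
# Minimal sketch of cumulative efficiency in a cascaded power tree.
# Stage efficiencies below are illustrative assumptions, not published figures.

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency is the product of the individual stage efficiencies."""
    eta = 1.0
    for stage_eta in stage_efficiencies:
        eta *= stage_eta
    return eta

# Assumed legacy tree: AC/DC front end -> 54 V bus converter -> 12 V stage -> point-of-load
legacy = chain_efficiency([0.97, 0.97, 0.96, 0.92])

# Assumed consolidated tree: 800 VDC bus -> 12 V stage -> point-of-load
consolidated = chain_efficiency([0.98, 0.97, 0.92])

print(f"legacy chain:       {legacy:.1%}")        # ~83.1%
print(f"consolidated chain: {consolidated:.1%}")  # ~87.5%
```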

Why Intermediate Bus Voltage Is Becoming a Bottleneck

The long-standing approach of stepping down to an intermediate 48 V or 54 V rail before final point-of-load conversion has worked well for years. It keeps current manageable and simplifies distribution across the rack. But that balance starts to shift when GPU clusters demand higher instantaneous current and tighter transient response. At lower intermediate voltages, current rises quickly. That leads to thicker copper planes, increased resistive losses, and more heat concentrated in the power delivery network. At some point, the distribution network itself becomes harder to scale than the compute it is feeding. By introducing 12 V and 6 V stages directly from an 800 V backbone, the architecture begins to move conversion closer to the load while reducing the number of intermediate steps.
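
To see why copper losses come to dominate, compare distribution current at 54 V and 800 V for the same rack power. The rack load and path resistance in this Python sketch are illustrative assumptions (a real 800 V path would be sized with far less copper); the point is the current-squared scaling:

```python
# Back-of-the-envelope copper-loss comparison for rack-level distribution.
# The 120 kW rack load and 2 mOhm path resistance are illustrative assumptions;
# the takeaway is the I^2 scaling, since P_loss = (P / V)^2 * R.

RACK_POWER_W = 120_000  # assumed rack load
PATH_RES_OHM = 0.002    # assumed end-to-end copper resistance, held equal

for bus_v in (54, 800):
    current_a = RACK_POWER_W / bus_v
    loss_w = current_a**2 * PATH_RES_OHM
    # 54 V: ~2222 A and ~9.9 kW lost; 800 V: 150 A and ~45 W lost
    print(f"{bus_v:>4} V bus: {current_a:7.1f} A, copper loss {loss_w:8.0f} W")
```

At equal path resistance, the 800 V bus carries roughly 15x less current and dissipates over 200x less energy in the copper.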

Moving Conversion Closer to the GPU Changes System Behavior

Once the conversion stage shifts closer to the GPU, a few things start to change at the system level. The physical distance between the power stage and the load shrinks, which reduces parasitic resistance and improves transient response during rapid load changes. That matters in AI workloads where current demand can shift abruptly depending on the compute task. Voltage droop and recovery time become tightly coupled to how close the power stage is to the silicon.
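
The resistive part of that coupling is simple to estimate: droop across the parasitic path is the load step times the path resistance. The step size and resistances in this sketch are assumptions, and it ignores path inductance, which also shrinks as the power stage moves closer:

```python
# Illustrative resistive-droop estimate for a fast load step.
# Step size and parasitic resistances are assumptions, not measured values;
# path inductance, which also shrinks with distance, is ignored here.

LOAD_STEP_A = 500  # assumed instantaneous current step at the rail

PATHS = {
    "distant power stage": 500e-6,  # assumed 500 uOhm parasitic path
    "nearby power stage": 100e-6,   # assumed 100 uOhm parasitic path
}

for label, res_ohm in PATHS.items():
    droop_mv = LOAD_STEP_A * res_ohm * 1e3  # V = I * R, reported in mV
    print(f"{label}: {droop_mv:.0f} mV droop")  # 250 mV vs 50 mV
```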

Reducing the number of conversion stages also removes cumulative inefficiencies. Instead of stepping down through multiple layers, energy moves more directly from the high-voltage bus to the load domain. It sounds straightforward, but it reshapes how the entire power tree is laid out across the rack and the board.

How 6 V and 12 V Rails Fit Different Server Architectures

Not every AI system will settle on the same intermediate voltage. Different server designs place different constraints on thermal density, board layout, and GPU packaging. A 12 V rail still provides a familiar stepping point for many systems, especially where existing power modules and design practices can be reused. The 6 V rail, on the other hand, pushes conversion even closer to the load; because rail current is higher at the same power, the payoff depends on shortening the high-current path, as the sketch below illustrates, and it comes at the cost of tighter integration and potentially more complex local regulation. Both approaches are likely to coexist. Some systems will prioritize flexibility and compatibility, while others will push for maximum density and efficiency at the expense of design simplicity. The interesting part is not which voltage wins, but how the architecture adapts depending on where the constraints appear first.
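
A rough sketch of that 12 V versus 6 V trade-off, with assumed numbers throughout:

```python
# Rough trade-off sketch for 12 V vs 6 V rails feeding an assumed 1 kW load.
# Halving the rail voltage doubles current, so I^2 * R loss quadruples over
# the same path; the 6 V stage wins only if moving it closer to the load
# cuts path resistance by more than 4x. All values are illustrative.

LOAD_W = 1_000  # assumed per-module load

cases = [
    ("12 V, baseline path",   12, 200e-6),
    (" 6 V, same path",        6, 200e-6),
    (" 6 V, 5x shorter path",  6, 40e-6),
]

for label, rail_v, path_res_ohm in cases:
    current_a = LOAD_W / rail_v
    loss_w = current_a**2 * path_res_ohm
    print(f"{label}: {current_a:6.1f} A, {loss_w:4.2f} W path loss")
```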

Power Architecture Is Starting to Follow Compute, Not Lead It

For a long time, power delivery defined what was practical in system design. That relationship is starting to invert. AI accelerators are dictating power requirements that no longer fit comfortably within legacy distribution schemes. Architectures like 800 VDC distribution with localized 12 V or 6 V conversion reflect that shift. Instead of scaling existing approaches, the power system is being restructured around the behavior of the compute itself. It does not feel like a clean transition yet. Different approaches are appearing at the same time, and the industry has not settled on a single direction. But once current levels, thermal limits, and board space all start pushing in the same direction, the architecture tends to follow whether it is convenient or not.

Learn more and read the original announcement at www.st.com

Technology Overview

STMicroelectronics’ 800 VDC power architecture enables high-voltage distribution across AI data centers before converting down to lower-voltage rails such as 12 V and 6 V near the load. This approach reduces current in distribution paths and minimizes resistive losses. The system combines multiple conversion stages to support efficient power delivery in high-density GPU servers.

Frequently Asked Questions

What is 800 VDC power distribution used for in data centers?

It is used to distribute power at high voltage across server racks to reduce current and minimize resistive losses before converting to lower voltages near the load.

Why move from 54 V to 12 V or 6 V in AI servers?

Reducing or removing the 54 V intermediate stage lowers conversion losses and allows power to be delivered closer to GPUs, improving efficiency and transient response.



About The Author

STMicroelectronics is a global semiconductor leader serving customers across the spectrum of electronics applications. With a portfolio spanning microcontrollers, sensors, power and analog devices, ST enables smarter mobility, more efficient power and energy management, and the wide-scale deployment of the Internet of Things.
