
Nvidia and the Open Compute Project have already explored DC distribution, with OCP's Mt. Diablo initiative demonstrating ±400 VDC rack distribution derived from electric vehicle infrastructure, supporting 1 MW racks with reduced conversion losses. (Image: Alamy)
Every server in a modern data center runs on direct current (DC). Yet the electricity that reaches those servers still travels most of the way as alternating current (AC), undergoing several conversions before it reaches the chip. Two international industry alliances argue that this conversion chain is one of the largest sources of avoidable energy waste in today’s facilities – and they have now formalized a plan to change it.
In March, the Current/OS Foundation and the Open Direct Current Alliance (ODCA) signed a memorandum of understanding (MoU) to align their technical work on DC power distribution and present coordinated positions to international standards bodies. The agreement formalizes collaboration that, by both groups’ accounts, was already underway. Current/OS is focused on DC distribution in commercial buildings, while ODCA’s approaches emerged from industrial settings. Both are Europe-based initiatives with global ambitions.
“We are choosing cooperation in order to build a common, robust, and open standard that will benefit the entire European ecosystem,” Yannick Neyret, president of the Current/OS Foundation, told Data Center Knowledge.
Why Move the Conversion Point?
The efficiency case for DC begins at the rack. CPUs, GPUs, storage devices, and networking gear all operate internally on DC. Because the utility grid delivers AC, every data center must convert AC to DC somewhere; the only question is where to place that conversion.
“Whatever we do, we need to convert the AC energy coming from the public grid down to DC,” Neyret said. “The question is, do we do it at the server level? Do we do it at the white [space] or server-rack level, or do we do it just at the entrance, at the edge of the data centers?”
Pushing the AC-to-DC conversion to the building edge and then distributing DC within the facility eliminates redundant conversion stages in individual devices. Because DC-to-DC stages are generally more efficient than AC-to-DC at comparable power levels, the savings compound across the chain.
At hyperscale, small percentage gains are substantial. On a 1 GW campus, Neyret noted, a 1% efficiency improvement avoids roughly 10 MW of losses.
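The arithmetic behind that claim can be sketched with a few lines of code. The stage efficiencies below are illustrative assumptions chosen to show how losses compound across a conversion chain, not measured figures from either organization:

```python
# Illustrative comparison of conversion-chain efficiency.
# All per-stage efficiencies are assumed values for demonstration only.

def chain_efficiency(stages):
    """End-to-end efficiency of a series of conversion stages in cascade."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Hypothetical conventional AC chain: UPS double conversion, PDU
# transformer, server PSU (AC-DC), on-board voltage regulator (DC-DC).
ac_chain = [0.96, 0.985, 0.955, 0.92]

# Hypothetical facility-edge DC chain: one AC-DC rectifier at the building
# edge, then DC-DC stages down to the rack and chip.
dc_chain = [0.975, 0.98, 0.92]

campus_watts = 1e9  # 1 GW campus, as in Neyret's example

for name, chain in (("AC distribution", ac_chain),
                    ("Edge DC distribution", dc_chain)):
    eff = chain_efficiency(chain)
    lost_mw = campus_watts * (1 - eff) / 1e6
    print(f"{name}: {eff:.1%} end-to-end, ~{lost_mw:.0f} MW lost at 1 GW")

# Neyret's figure: a 1-point efficiency gain at 1 GW avoids 10 MW of losses.
print(f"1% of 1 GW = {campus_watts * 0.01 / 1e6:.0f} MW")
```

With these assumed numbers, dropping one conversion stage lifts end-to-end efficiency by several points, which is exactly the kind of gain that becomes material at campus scale.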
“The main rationale is it is more efficient to get the power to the chip,” said Hartwig Stammberger, chair of the board of ODCA. “There are fewer losses on the way there, and you need less effort, less wiring, and fewer components.”
Not New, But Newly Practical
Stammberger pushed back on the idea that DC distribution is speculative. Industrial systems at comparable voltages and power levels have been running for years, he said, while the underlying power electronics have advanced significantly over the past decade – especially in efficient DC-DC conversion and in safely interrupting DC faults. “It has industry-proven applications that stand behind it,” he said. “It’s not a new thing.”
Ecosystem signals are multiplying. Nvidia has referenced DC distribution in a published white paper. The Open Compute Project (OCP) Mt. Diablo initiative – a collaboration involving Meta, Microsoft, and OCP – has demonstrated ±400 VDC rack distribution derived from electric vehicle infrastructure, supporting 1 MW racks with reduced conversion losses.
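The choice of ±400 VDC is not arbitrary: for DC, conductor current scales as power divided by voltage, so higher distribution voltages shrink the current, and with it, the copper, needed to feed a 1 MW rack. A minimal sketch of that relationship, using illustrative voltage operating points rather than specification values:

```python
# Conductor current for a DC feed: I = P / V. Higher distribution
# voltage means proportionally less current and lighter busbars.
# Voltages below are illustrative operating points, not spec values.

def current_amps(power_watts, voltage_volts):
    return power_watts / voltage_volts

rack_power = 1_000_000  # 1 MW rack, the scale OCP's Mt. Diablo targets

# ±400 VDC gives 800 V pole-to-pole; 48 VDC is a common legacy rack busbar.
for label, volts in [("800 V (±400 VDC) feed", 800),
                     ("48 VDC busbar", 48)]:
    print(f"{label}: {current_amps(rack_power, volts):,.0f} A")
```

At 800 V the same megawatt flows at roughly 1,250 A instead of the tens of kiloamps a low-voltage busbar would require, which is why EV-derived ±400 VDC components map so naturally onto 1 MW racks.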
Standards Are the Bottleneck, Not the Technology
The main friction to broader DC adoption is not technical capability but the regulatory and standards framework, which was built over more than a century around AC systems. “The electrical business is ruled by national rules or national standards … very, very difficult to change, because they have been polished for more than 100 years,” Neyret said.
Two standards tracks now matter for data center planning timelines. First, an International Electrotechnical Commission (IEC) standard for semiconductor-based circuit breakers – enabling safe DC fault interruption via power electronics rather than electromechanical contacts – is expected to be published within months.
Second, Current/OS and ODCA are working with the National Fire Protection Association (NFPA) toward updates in the 2029 National Electrical Code revision cycle, which would then be adopted state by state in the United States after publication.
Critically, a formal standard is not always a legal prerequisite, Stammberger noted. In Germany, he said, utilities and insurers have accepted technically documented DC installations as “state of the art,” allowing commercial projects to proceed ahead of a published standard. “A standard helps,” Stammberger said. “The key point is the utilities and the insurers accepted that technical expertise … proven outside of an official standard.”
What Two Organizations Can Do That One Cannot
The immediate value of the partnership is clarity. Although they come from different domains – commercial buildings for Current/OS and industrial environments for ODCA – the two groups converged on similar core concepts for DC distribution, but remaining implementation variations could still confuse buyers.
“We need to explain to the market that these solutions are not incompatible or opposing, but are complementary,” Neyret said. “There are options.”
The groups have already shown the impact of a unified voice. Before the MoU, representatives jointly presented a proposal to an international low-voltage installation standards committee; the committee accepted it on the spot – an uncommon outcome in standardization work. “Two voices for a new technology are beneficial, especially when they don’t contradict each other,” Stammberger said.
The partnership also provides a counterweight to Chinese DC consortium efforts. European organizations concluded that Chinese proposals did not meet preferred safety and autonomy requirements, Neyret said, and that a coordinated European-aligned approach would be stronger. A joint white paper on low-voltage DC for data centers, shaped by both organizations, was published through the Open Compute Project alongside the MoU announcement.

Participants at the Joint Board meeting between Current/OS and the Open DC Alliance. (Credit: Current/OS, Open DC Alliance)
The Data Center Timeline
Both Neyret and Stammberger expect data centers to lead DC adoption, ahead of commercial buildings and general industry, driven by AI workloads that make conversion losses too costly to ignore.
The first DC-native data centers are being planned for completion around the end of 2027. More broadly, Neyret and Stammberger estimate that widespread DC adoption could take hold in under five years if current momentum holds.
The message to data center operators, Stammberger said, is direct: "It is available, it has been proven. It will be good for you."
Looking beyond immediate deployments, Current/OS and ODCA are advising OCP teams on designs two to three generations out, targeting not only efficiency gains but also reductions in copper, improvements in availability, and simplifications in facility architecture. “We are just at the beginning of discovering the benefits of DC,” Neyret said.
