A Bengaluru-based startup, C2i, has secured $15 million in funding to address the critical power bottleneck facing artificial intelligence data centers globally. The investment, led by Peak XV Partners, will support the company’s development of a novel “grid-to-GPU” efficiency technology aimed at reducing significant energy losses in AI infrastructure.
The Growing Power Challenge
Artificial intelligence computation, particularly the training of large language models, demands immense electrical power. This has placed unprecedented strain on data centers and on the local power grids that supply them worldwide. Industry reports indicate that a single AI server rack can consume over 50 kilowatts, more than ten times the power drawn by a standard cloud server rack. The resulting inefficiencies and power constraints are becoming a major obstacle to the continued scaling of AI capabilities.
C2i’s proposed solution focuses on optimizing power delivery from the electrical grid directly to the graphics processing units (GPUs) that perform AI calculations. The company claims its integrated hardware and software platform can mitigate losses that occur during power conversion and distribution within a data center facility. These losses, often in the form of wasted heat, currently account for a substantial portion of a data center’s total energy consumption.
Investor Confidence and Market Need
The significant funding round underscores investor belief in the urgency of the power efficiency problem. Peak XV Partners, a prominent venture capital firm, led the round, with existing investors also participating. The capital is earmarked for research and development, team expansion, and initial deployment of C2i’s technology with early adopter clients in the coming year.
Analysts note that the push for efficiency is driven by both economic and environmental factors. As electricity costs rise and sustainability commitments become stricter for major tech companies, innovations that lower data center power usage effectiveness (PUE), the ratio of total facility power to the power delivered to IT equipment, are gaining strategic importance. The industry’s power demand is projected to continue its rapid growth, making such technologies critical for future expansion.
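For context, a PUE of 1.0 would mean zero overhead beyond the computing hardware itself. The short sketch below uses hypothetical numbers, not figures from C2i or the cited reports, to illustrate how trimming conversion and cooling overhead moves the metric.

```python
# Illustrative sketch of PUE (power usage effectiveness). All figures below are
# hypothetical examples, not measurements from C2i or any specific facility.

def pue(it_power_kw: float, overhead_power_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value is 1.0)."""
    return (it_power_kw + overhead_power_kw) / it_power_kw

# A hypothetical 1 MW IT load carrying 500 kW of cooling and conversion overhead...
baseline = pue(it_power_kw=1000, overhead_power_kw=500)   # 1.50

# ...versus the same IT load after cutting that overhead to 300 kW.
improved = pue(it_power_kw=1000, overhead_power_kw=300)   # 1.30

print(f"baseline PUE: {baseline:.2f}, improved PUE: {improved:.2f}")
```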
Technical Approach and Industry Context
C2i’s “grid-to-GPU” methodology involves redesigning the power supply chain inside a data center. Traditional setups involve multiple stages of power conversion, stepping the alternating current (AC) supplied by the grid down to the various direct current (DC) voltages required by different components. Each conversion step dissipates some energy as heat, and the losses compound across stages. The startup’s technology seeks to streamline this process, delivering power more directly to the high-performance computing hardware with fewer intermediate steps.
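To see why removing stages matters, note that the fraction of grid power reaching the hardware is the product of the per-stage efficiencies. The sketch below uses assumed, illustrative stage efficiencies, not figures published by C2i, to compare a conventional chain with a shorter one.

```python
# Illustrative sketch: the fraction of grid power that reaches the hardware is
# the product of per-stage conversion efficiencies. Stage values are assumed.
from math import prod

# A conventional chain: grid AC -> UPS -> rack-level AC/DC -> board-level DC/DC.
conventional_stages = [0.96, 0.94, 0.93]   # assumed per-stage efficiencies
conventional = prod(conventional_stages)   # ~0.84, so roughly 16% is lost as heat

# A streamlined chain with one fewer conversion step ("grid-to-GPU" style).
streamlined_stages = [0.97, 0.95]          # assumed per-stage efficiencies
streamlined = prod(streamlined_stages)     # ~0.92, so roughly 8% is lost as heat

print(f"conventional: {conventional:.1%} delivered, "
      f"streamlined: {streamlined:.1%} delivered")
```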
This development occurs as major cloud providers and chip manufacturers are also investing heavily in custom silicon and liquid cooling solutions to improve efficiency. C2i’s approach represents a complementary effort targeting the foundational power delivery architecture itself. The company is currently conducting laboratory tests and plans to move to pilot deployments with select data center operators.
Forward-Looking Developments
The next phase for C2i involves completing its prototype validation and securing its first commercial pilot agreements within the next six to nine months. Success in these initial deployments will be closely watched by an industry keen to find scalable solutions to the power dilemma. Wider adoption of such efficiency technologies could influence data center design standards and potentially ease the regulatory and logistical challenges associated with building new AI compute capacity in power-constrained regions.
Source: GeekWire