
Sarvam AI Launches Open-Source Models to Compete Globally

A major Indian artificial intelligence research laboratory has unveiled a new suite of open-source AI models, positioning itself as a significant player in the global race for accessible and capable AI technology. Sarvam AI, based in Bengaluru, announced the release of its latest models on Tuesday, marking a strategic commitment to the open-source approach in a field increasingly dominated by proprietary systems from large U.S. corporations.

The new lineup includes two large language models in the OpenHathi-Hi-2.0 family, with 30 billion and 105 billion parameters respectively. These models are designed to handle both Hindi and English with high proficiency. Alongside them, the lab released a text-to-speech model for Indian languages, a speech-to-text model, and a vision model specifically engineered to parse and understand documents.

Technical Specifications and Open-Source Commitment

The 30-billion and 105-billion parameter models represent a significant increase in scale. Parameters are the internal variables that a model learns during training, and a higher count generally correlates with greater capability in understanding and generating complex language. By releasing these as open-source, Sarvam AI is making the underlying code and weights publicly available for researchers and developers to use, modify, and distribute.
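In practice, an open-weights release usually means developers can download the checkpoints and run them locally with standard tooling. The sketch below shows how such a model might be loaded and queried with the Hugging Face transformers library; the repository identifier is a hypothetical placeholder, since Sarvam AI's actual hub locations are not specified in this report.

```python
# Minimal sketch, assuming the weights are published to the Hugging Face Hub.
# The repository ID below is hypothetical, not a confirmed Sarvam AI release name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarvamai/openhathi-hi-2.0-30b"  # hypothetical repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short Hindi completion to exercise the bilingual model.
prompt = "भारत की राजधानी"  # "The capital of India"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are freely redistributable, the same checkpoint could also be fine-tuned or quantized for local deployment rather than accessed only through a hosted API.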

The text-to-speech model, called Bhashini, is tailored for the phonetic and linguistic nuances of Indian languages. The complementary speech-to-text model is built to accurately transcribe spoken Indian languages into text. The vision model completes the suite by offering advanced optical character recognition and layout analysis for documents, a tool with potential applications in digitizing records and automating data entry.
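To illustrate how the speech-to-text component could slot into existing developer workflows, the sketch below transcribes a local recording through the Hugging Face pipeline API; the checkpoint name and audio file are hypothetical placeholders, not confirmed identifiers.

```python
# Minimal sketch, assuming the speech-to-text checkpoint works with the
# Hugging Face pipeline API. The model ID and audio file are hypothetical.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sarvamai/indic-speech-to-text",  # hypothetical repo name
)

# Transcribe a local recording of Hindi speech (e.g., a 16 kHz WAV file).
result = asr("hindi_sample.wav")
print(result["text"])
```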

Strategic Implications for the AI Ecosystem

This release challenges the current paradigm where the most powerful AI models are often developed and controlled by a handful of large technology firms, primarily in the United States and China. Sarvam AI’s bet on open-source viability argues for a more decentralized and collaborative future for AI development. It provides an alternative for organizations and nations seeking sovereign AI capabilities without reliance on foreign proprietary software.

The move is seen as part of a broader effort to build a domestic AI industry in India that can serve local needs, such as creating tools for the country’s numerous official languages, while also competing on the international stage. Open-source models allow for greater transparency, auditability, and customization, which are critical for enterprise adoption and for building public trust in AI systems.

Global Context and Industry Reaction

Sarvam AI’s announcement enters a competitive global landscape. Other entities, like Meta with its Llama models and the French company Mistral AI, have also championed the open-source approach. However, many industry leaders, including OpenAI and Google, maintain that their most advanced models must remain closed to prevent misuse and to protect commercial interests.

Initial reactions from the global AI research community have noted the technical ambition of the Indian lab’s release. Analysts point out that successfully training and releasing a 105-billion parameter model requires substantial computational resources and expertise, signaling Sarvam AI’s growing technical maturity. The focus on multilingual support, particularly for Indian languages, addresses a significant gap in the current market dominated by English-optimized models.

The next phase for Sarvam AI will involve scaling the adoption of its models among developers and enterprises. The company is expected to focus on demonstrating practical applications and building a developer ecosystem around its open-source tools. Further updates and refinements to the model family are anticipated in the coming months as user feedback is incorporated and as the lab continues its research into more efficient and capable AI architectures.

Source: GeekWire
