

Spanish Startup Releases Free AI Model Outperforming Mistral


A Spanish artificial intelligence startup has released a new, freely available version of its large language model, claiming it surpasses a leading competitor’s offering. Multiverse Computing, based in San Sebastián, published its compressed HyperNova 60B model on the Hugging Face platform.

The company states this release demonstrates the effectiveness of its proprietary compression techniques. These methods are designed to reduce the computational resources required to run powerful AI models without significant loss of performance.

Technical Performance Claims

According to Multiverse Computing, the HyperNova 60B model outperforms Mistral AI’s Mixtral 8x7B on several standard benchmarks, including common tests of reasoning, knowledge, and coding proficiency. Notably, the startup’s model achieves this as a single dense model rather than a mixture-of-experts architecture like Mixtral.

The release is positioned as a significant step in making advanced AI more accessible. By offering a compressed model, the company aims to lower the barrier for developers and researchers who lack extensive computing infrastructure.

Background on the Company

Multiverse Computing is known for applying quantum-inspired algorithms to classical computing problems. The firm has previously focused on financial modeling and optimization tasks. Its venture into the competitive field of large language models represents a strategic expansion of its technology portfolio.

The startup, sometimes referred to as a “soonicorn” (a term for companies poised to reach unicorn status), has attracted attention in European tech circles. Its work in model compression addresses a critical challenge in the AI industry: the high cost of deploying and running large-scale models.

Industry Context and Implications

The AI model landscape is increasingly competitive, with both open-source and proprietary models vying for developer adoption. Releases on platforms like Hugging Face have become a standard method for distribution and community validation. Performance claims on such platforms are typically scrutinized by the global developer community.

Efficient model design is a key research area as AI applications scale. Techniques that reduce model size and computational demands can lead to wider adoption, lower environmental impact, and faster inference times. This is particularly relevant for deployment on edge devices or in cost-sensitive environments.

Availability and Licensing

The HyperNova 60B model is available for download under the Apache 2.0 license. This permissive open-source license allows both commercial and research use with minimal restrictions. The model’s weights and the code needed to run it are hosted in the company’s Hugging Face repository.

This licensing approach contrasts with some other AI firms that use more restrictive terms for their model weights. The open availability allows independent researchers to verify the company’s performance benchmarks and experiment with the technology.

Future Developments and Next Steps

Multiverse Computing has indicated that the release of HyperNova 60B is part of a broader roadmap. The company plans to continue refining its compression technology and may release additional model variants or sizes in the future. Industry observers will likely monitor the model’s adoption and independent benchmark results in the coming weeks to assess its real-world performance against established alternatives.

Source: Adapted from original reporting
