Aethir Partners With TensorOpera to Support LLM and AI Innovation
June 20, 2024



Aethir is teaming up with TensorOpera, a leading force in the AI industry focused on large language model (LLM) training and generative AI. The AI sector is the most GPU-hungry industry in the world, with an exponentially growing need for scalable GPU resources. Through this partnership, TensorOpera and its new foundation model, TensorOpera Fox-1, gain access to the world's largest decentralized GPU cloud infrastructure. TensorOpera Fox-1 is the first mass-scale LLM training use case on a decentralized cloud network, making it a pioneering effort in distributed AI technology. Our partnership aims to give AI developers on TensorOpera the enterprise-grade GPU power needed to conduct massive-scale LLM training efficiently.

The collaboration between TensorOpera and Aethir marks the first intersection of Web 2.0 and Web 3.0 for AI training at scale on decentralized cloud infrastructure: LLM training of this kind has never before been done on a decentralized physical infrastructure network (DePIN).

The AI industry is revolutionizing how we communicate, research, develop apps, and create visual content. At the forefront of this revolution is generative AI, which provides lightning-fast responses on a wide range of topics. This sector relies heavily on large language models (LLMs) developed through GPU-demanding training and inference procedures over massive datasets.

TensorOpera Fox-1 was introduced last week as a cutting-edge open-source small language model (SLM) whose performance outpaces many models from big-tech providers such as Apple and Google. The model has 1.6 billion parameters and was trained on three trillion tokens using a novel 3-stage curriculum. This architecture makes TensorOpera Fox-1 78% deeper than comparable models such as Google's Gemma 2B, and it surpasses competitors on standard LLM benchmarks like GSM8k and MMLU.

“I am thrilled about our partnership with Aethir,” said Salman Avestimehr, Co-Founder and CEO of TensorOpera. “In the dynamic landscape of generative AI, the ability to efficiently scale up and down during various stages of model development and in-production deployment is essential. Aethir’s decentralized infrastructure offers this flexibility, combining cost-effectiveness with high-quality performance. Having experienced these benefits firsthand during the training of our Fox-1 model, we decided to deepen our collaboration by integrating Aethir's GPU resources into TensorOpera's AI platform to empower developers with the resources necessary for pioneering the next generation of AI technologies."

TensorOpera is a large-scale generative AI platform that enables developers and enterprises to easily, scalably, and economically build and commercialize their own generative AI applications. TensorOpera has over 4,500 platform users from 500+ universities and 100+ enterprises, making it a powerhouse in the LLM training and launching sector. The company recently launched TensorOpera Fox-1, an advanced foundation model that enables developers to create complex, multi-layered AI platforms that leverage LLM technology. 

AI solutions like TensorOpera Fox-1 require powerful GPU clusters that support high throughput, substantial memory capacity, and efficient parallel processing capabilities.

Currently, the GPU industry can't keep up with the pace of growth in the AI sector, and there's a constant shortage of GPU power. However, this shortage is artificial: millions of GPUs sit underutilized across the globe. Aethir's decentralized cloud infrastructure can power highly demanding AI apps, platforms, and whole networks by pooling resources from these underutilized GPUs. Unlike centralized clouds that concentrate computing resources in a few large data centers, Aethir leverages a decentralized network architecture and distributes its vast fleet of GPU resources globally. By doing so, Aethir can pool resources from a multitude of GPUs and channel processing power efficiently to wherever it's needed, without lag or scalability issues.

Aethir has access to a constantly expanding network of enterprise-grade GPU resources spread across the globe to power AI, machine learning, and gaming companies at scale. With over 40,000 top-grade GPUs, including more than 3,000 NVIDIA H100s, Aethir is able to power even the most demanding LLM training projects. In fact, TensorOpera Fox-1 was developed using high-quality H100 GPU clusters from Aethir's fleet.

Through our collaboration, TensorOpera has integrated a pool of GPU resources from Aethir. These can be used seamlessly via TensorOpera's Nexus AI platform for a variety of AI functions, such as model deployment and serving, fine-tuning, and training.

We have contributed our distributed cloud infrastructure to TensorOpera’s ecosystem and offer promotional pricing of $2.50/GPU/hour, which is highly competitive compared to other GPU compute providers. Now, TensorOpera invites generative AI model builders and application developers to the TensorOpera Nexus AI platform to easily start building, deploying, and serving their applications via Aethir's on-demand H100 and A100 GPUs.
