Aethir and Theta EdgeCloud's GPU Marketplace
- The AI and gaming industries need vast amounts of GPU-based compute to innovate and scale while navigating supply-chain issues, shortages, and inefficiencies in traditional cloud computing models.
- Without a radical improvement in GPU infrastructure efficiency, the industry will fail to meet increasing demands.
- This new partnership will enable the industry to access 20-30x greater GPU-based compute power and give AI developers and enterprises alike a simple point-and-click option to train, fine-tune and deploy any open-source or custom AI model.
Aethir is teaming up with Theta EdgeCloud, the first decentralized cloud-edge computing platform for AI, to launch the largest hybrid GPU marketplace in the world. This will empower every organization, large and small, with instant access to enterprise-grade GPU compute to power billions of AI, media, entertainment, and gaming interactions.
GPU-based computing has become one of the world's most valuable resources, yet demand currently far outpaces supply and the industry struggles with inefficiencies. This is the problem that both Aethir and Theta are built to solve with state-of-the-art decentralized GPU cloud computing infrastructure.
Both companies believe anyone, regardless of location or socio-economic status, should be able to play games, be entertained, and improve their lives with artificial intelligence. This partnership unlocks near-limitless innovation by empowering enterprises with world-class compute at scale – providing compute power regardless of distance or device.
Creating the Largest GPU Marketplace Yet
Aethir is an enterprise-focused, distributed GPU cloud infrastructure and bare-metal provider with one of the largest GPU networks and the highest committed revenue within the DePIN sector. The platform currently has access to more than 40,000 enterprise-grade GPUs, including more than 8,000 NVIDIA H100s, representing more than 170,000 TFLOPS of compute power.
Meanwhile, Theta’s Edge Network comprises one of the largest clusters of distributed GPU computing power in the world. High-performance desktop GPUs, including NVIDIA RTX 4090s (~1,000 nodes), deliver 36,392 TFLOPS; medium-tier GPUs (~2,000 nodes) deliver 28,145 TFLOPS; and low-end GPUs (~7,000 nodes) add a further 13,002 TFLOPS. That makes a total of 77,538 TFLOPS, or about 80 PetaFLOPS, available today, roughly equivalent to 250 NVIDIA A100s.
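The tier figures above can be sanity-checked with a few lines of arithmetic. This sketch simply sums the quoted per-tier TFLOPS and node counts; the tier labels are taken from the breakdown above, and the result lands at roughly 77.5 PetaFLOPS across roughly 10,000 nodes.

```python
# Theta Edge Network GPU tiers as quoted above: (TFLOPS, approximate node count)
tiers = {
    "high (RTX 4090-class)": (36_392, 1_000),
    "medium": (28_145, 2_000),
    "low": (13_002, 7_000),
}

total_tflops = sum(tflops for tflops, _ in tiers.values())
total_nodes = sum(nodes for _, nodes in tiers.values())
total_pflops = total_tflops / 1_000  # 1 PFLOPS = 1,000 TFLOPS

print(f"{total_tflops:,} TFLOPS ≈ {total_pflops:.1f} PFLOPS across ~{total_nodes:,} nodes")
```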
Together, Aethir and Theta are redefining the GPU cloud computing sector by shifting the paradigm from centralized to decentralized network infrastructure. Instead of relying on centralized servers located in just a few big data centers, Aethir and Theta distribute GPU computing resources across a much wider area. The goal is to cover the network’s edge to provide AI enterprises, media, entertainment, and gaming companies with the best-in-class GPU resources regardless of their end-user’s physical location.
Edge computing enables Aethir and Theta to bring cloud infrastructure closer to users, keeping latency low even during high traffic. Aethir's decentralized model also improves GPU efficiency by pooling power from many GPUs, maximizing utilization and reducing costs, so clients get more for less.
Aethir has already achieved an unprecedented level of community engagement and decentralization by selling over 65,000 Aethir Checker nodes to more than 11,000 individual buyers, making it the largest node sale of its kind.
These nodes play a crucial role in ensuring the stability and security of Aethir's distributed cloud infrastructure. As a reward for their contribution, node operators will receive 10% of the total $ATH token supply over the next four years after Aethir's Mainnet launch in Q2 2024, with an additional 5% allocated to nodes that demonstrate exceptional performance and commitment.
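The reward split described above works out to simple fractions of the token supply. The sketch below normalizes total $ATH supply to 1.0 (the article does not state the absolute supply) and assumes a linear release over the four-year horizon, which is an assumption on our part since no vesting curve is specified.

```python
# Checker-node reward allocation as described above, on a normalized supply of 1.0.
base_share = 0.10      # 10% of total $ATH supply to node operators
bonus_share = 0.05     # additional 5% for exceptional performance and commitment
vesting_years = 4      # paid out over four years after Mainnet launch

# Assumption: a simple linear schedule (the article gives only the horizon).
base_per_year = base_share / vesting_years

print(f"Base emission per year: {base_per_year:.1%} of supply")
print(f"Maximum total node allocation: {base_share + bonus_share:.0%} of supply")
```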
Later this year, Theta will launch its hybrid cloud-decentralized GPU marketplace fully integrated into the EdgeCloud platform. This will provide AI developers and enterprises alike with a simple point-and-click option to train, fine-tune and deploy any open-source or custom AI model.