Theta EdgeCloud is a decentralised edge-computing software platform under active development by the Theta team. It gives developers, researchers, and enterprises affordable access to GPU processing capacity for AI and video workloads, with performance underpinned by the Theta Edge Network.
Theta’s roadmap states that the first phase of EdgeCloud will be released on May 1, 2024. The foundation of EdgeCloud’s AI computing platform has been in development for years.
The Theta Edge Network made its debut in 2021 with Mainnet 3.0, handling the allocation, encoding, and transcoding work of GPU-intensive video processing. The global Theta network now consists of around 10,000 active edge nodes operated by the community and is recognised as one of the largest pools of distributed GPU processing capacity in the world.
Roughly 1,000 nodes with high-performance GPUs contribute about 36,392 TFLOPS, around 2,000 nodes with mid-range GPUs contribute about 28,145 TFLOPS, and the remaining nodes add roughly 13,002 TFLOPS, for a total of approximately 77,538 TFLOPS.
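As a quick, purely illustrative sanity check, the per-tier figures quoted above do add up to the stated total (to within rounding):

```python
# Illustrative check: sum of the per-tier TFLOPS figures quoted above.
high_tier = 36_392   # ~1,000 high-performance GPU nodes
mid_tier = 28_145    # ~2,000 mid-range GPU nodes
remaining = 13_002   # remaining nodes
print(high_tier + mid_tier + remaining)  # 77539, consistent with ~77,538 TFLOPS
```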
On top of Theta’s own processing capacity, connectivity to its cloud partners adds a further 800+ PetaFLOPS, making more than 2,500 NVIDIA A100s available, which is enough to train and serve large language models (LLMs). EdgeCloud’s decentralised hybrid cloud infrastructure supports state-of-the-art generative AI models such as Llama 2 and Stable Diffusion, along with text-to-image, text-to-video, and text-to-3D models.
This combination of GPU processing capacity and connectivity positions EdgeCloud to reshape the AI computing space. In 2021, Theta filed its first patent application for an edge computing platform backed by a smart contract-based blockchain network. This opened the door to a hybrid computing architecture in which computation jobs are recorded on a blockchain and dispatched over secure, peer-to-peer links to edge computing nodes within a decentralised computing network. The patent was granted in September 2023.
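To make that dispatch pattern concrete, here is a minimal, purely illustrative Python sketch of a job being recorded in an on-chain registry and claimed by an edge node. The class and field names (ComputeJob, JobRegistry, EdgeNode) are hypothetical and do not reflect any actual Theta API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputeJob:
    """A computation job as it might be recorded on-chain (hypothetical schema)."""
    job_id: str
    payload_uri: str          # pointer to the input data / model artifacts
    reward_tfuel: int         # TFUEL reward offered to the executing node
    assigned_node: Optional[str] = None

class JobRegistry:
    """Stand-in for a smart contract that stores pending jobs on the blockchain."""
    def __init__(self) -> None:
        self.jobs = {}

    def post_job(self, job: ComputeJob) -> None:
        self.jobs[job.job_id] = job

    def claim_job(self, node_id: str) -> Optional[ComputeJob]:
        # An edge node claims the first unassigned job over a peer-to-peer link.
        for job in self.jobs.values():
            if job.assigned_node is None:
                job.assigned_node = node_id
                return job
        return None

class EdgeNode:
    """A decentralised edge node that executes claimed jobs on its GPU."""
    def __init__(self, node_id: str, registry: JobRegistry) -> None:
        self.node_id = node_id
        self.registry = registry

    def poll_and_run(self) -> None:
        job = self.registry.claim_job(self.node_id)
        if job is not None:
            print(f"{self.node_id} running job {job.job_id} from {job.payload_uri}")

# Usage: a developer posts a job; a community-run node picks it up.
registry = JobRegistry()
registry.post_job(ComputeJob("job-001", "ipfs://example-artifact", reward_tfuel=100))
EdgeNode("edge-node-42", registry).poll_and_run()
```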
A number of AI partnerships focused on machine learning and natural language processing (NLP) were established in 2022. These include collaborations with Google Cloud, FedML, and Lavita.AI to develop an AI model pipeline for video-to-text applications.
To create AI-powered applications, developers can quickly select and arrange popular models such as Llama 2 and Stable Diffusion; a sketch of what such a template might look like follows below. Templates can be customised to incorporate common AI workloads such as generative AI and chatbots, among others. Theta will also soon release updated edge node software that gives Elite Edge Nodes (EENs) with 500,000 TFUEL staked access to elite and booster features, allowing EdgeCloud AI compute rewards to be shared amongst node operators.
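As a rough illustration of the template idea, the hypothetical Python sketch below shows how an application template might declare which models it composes. The field names and model identifiers are assumptions for illustration only, not an actual EdgeCloud schema.

```python
# Hypothetical application template: pick and arrange pre-built models.
chatbot_template = {
    "name": "support-chatbot",
    "pipeline": [
        {"model": "llama-2-13b-chat", "task": "text-generation"},
        {"model": "stable-diffusion-xl", "task": "text-to-image"},  # optional image replies
    ],
    "resources": {"gpu": "A100", "replicas": 2},
}

def validate_template(template: dict) -> None:
    """Basic sanity checks a deployment tool might run before launching the app."""
    assert template["pipeline"], "template must reference at least one model"
    for step in template["pipeline"]:
        assert {"model", "task"} <= step.keys(), "each step needs a model and a task"

validate_template(chatbot_template)
print(f"Template '{chatbot_template['name']}' is ready to deploy.")
```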
In the future, AI developers will be able to handle the entire process of building an AI pipeline on EdgeCloud, from prototyping in Jupyter notebooks to model training tasks such as hyperparameter tuning, neural architecture search, and fine-tuning. The resulting models will then be prepared for deployment across EdgeCloud’s GPU network, eventually with support for Ray clusters.
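For readers unfamiliar with Ray, the sketch below shows a generic hyperparameter-tuning run with Ray Tune, the kind of workload such a pipeline could distribute across a GPU cluster. It is a standalone example against the public Ray Tune API (exact names vary by Ray version) and is not EdgeCloud-specific code.

```python
from ray import tune

def objective(config):
    # Toy training objective standing in for a real model-training run.
    score = (config["lr"] - 0.01) ** 2 + config["batch_size"] * 1e-5
    return {"score": score}

search_space = {
    "lr": tune.grid_search([0.001, 0.01, 0.1]),
    "batch_size": tune.choice([16, 32, 64]),
}

# Ray distributes the trials across the available nodes in the cluster.
tuner = tune.Tuner(objective, param_space=search_space)
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)
```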