Cerebras Systems Announces Launch of WSE-3 AI Chip
Cerebras Systems has introduced the Wafer Scale Engine 3 (WSE-3), marking a significant milestone in the development of chips for generative artificial intelligence (AI).
The announcement, made on March 13, 2024, positions the WSE-3 as the world’s largest semiconductor, aimed at advancing large language models with tens of trillions of parameters. The launch comes amid an intensifying race in the tech industry to build more powerful and efficient AI models.
Doubling Down on Performance
The WSE-3 delivers twice the performance of its predecessor, the WSE-2, with no increase in power consumption or cost. Cerebras presents this as keeping pace with Moore’s Law, the observation that chip circuitry roughly doubles in complexity approximately every 18 months.
Manufactured by TSMC, the WSE-3 moves from a 7-nanometer to a 5-nanometer process, raising the transistor count to 4 trillion on a chip that spans nearly an entire 12-inch semiconductor wafer. The shrink doubles peak compute from 62.5 petaFLOPS to 125 petaFLOPS, improving the chip’s efficiency in training AI models.
Advantages Over Competitors
Cerebras says the WSE-3 substantially surpasses Nvidia’s H100 GPU, the industry standard, in size, memory, and computational capability. The company cites 52 times more cores, roughly 800 times more on-chip memory, and large gains in memory bandwidth and fabric bandwidth, which it presents as an unprecedented step up for AI computation.
These resources enable the training of very large neural networks: Cerebras says a hypothetical 24-trillion-parameter model could be trained on a single CS-3 computer system, illustrating the WSE-3’s potential to accelerate AI model development.
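To give a sense of that scale, here is a hedged back-of-envelope calculation (ours, not Cerebras’) of the raw weight storage such a model would need, assuming weights are held in 16-bit precision:

```python
def weight_storage_tb(params: float, bytes_per_param: int = 2) -> float:
    """Return weight storage in terabytes (1 TB = 1e12 bytes)."""
    return params * bytes_per_param / 1e12

params = 24e12  # 24 trillion parameters, per the Cerebras claim
print(f"{weight_storage_tb(params):.0f} TB at FP16")     # 48 TB
print(f"{weight_storage_tb(params, 4):.0f} TB at FP32")  # 96 TB
```

Weights alone at FP16 would occupy about 48 TB, before optimizer state and activations, which is why model size on conventional clusters is normally bounded by aggregate memory across many devices.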
Innovations in AI Training and Inference
The release of the WSE-3 brings improvements to both the training and inference phases of AI model development. Cerebras emphasizes the chip’s simpler programming model, which requires far fewer lines of code than GPUs to implement a model like GPT-3. Up to 2,048 machines can be clustered with relative ease, a configuration the company says can train large language models 30 times faster than today’s leading machines.
Cerebras has also announced a partnership with Qualcomm focused on inference, the phase in which a trained model generates predictions. Through techniques such as sparsity and speculative decoding, the partnership aims to cut the computational cost and energy use of running generative AI models.
This collaboration signals a strategic push to optimize the efficiency of AI applications from training through real-world deployment.
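Speculative decoding itself is a general technique, independent of any one vendor’s hardware: a small, cheap draft model proposes several tokens ahead, and the large target model verifies them, accepting the longest agreeing prefix so that several tokens can be produced for roughly one expensive pass. A minimal greedy-acceptance sketch (the models here are stand-in callables, not Cerebras or Qualcomm APIs):

```python
def speculative_step(prefix, draft_model, target_model, k=4):
    """One decoding step: draft k tokens cheaply, verify with the target.

    draft_model / target_model are stand-in callables mapping a token
    sequence to the next token (greedy decoding, for simplicity).
    Returns the list of tokens accepted in this step.
    """
    # 1. Draft model proposes k tokens autoregressively (cheap).
    draft, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)

    # 2. Target model checks each drafted position; in a real system
    #    all k positions are scored in a single batched forward pass.
    accepted, ctx = [], list(prefix)
    for t in draft:
        if target_model(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            # First mismatch: emit the target model's own token and stop.
            accepted.append(target_model(ctx))
            break
    return accepted

# Toy usage: the "target" always predicts the sequence length as the
# next token. A draft model that agrees gets all k tokens accepted;
# one that disagrees yields just the target's single correction.
target = lambda ctx: len(ctx)
print(speculative_step([0], target, target, k=3))         # [1, 2, 3]
print(speculative_step([0], lambda ctx: 0, target, k=3))  # [1]
```

When the draft model agrees often, most steps accept several tokens at once, which is where the compute and energy savings come from.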
The post Cerebras Systems Announces Launch of WSE-3 AI Chip appeared first on CoinGape.