Cerebras Selects Qualcomm to Deliver Unprecedented Performance in AI Inference


The best-in-class solution, developed with the Qualcomm® Cloud AI 100 Ultra, offers up to 10x the number of tokens per dollar, radically lowering the operating costs of AI deployment.

Leveraging the latest cutting-edge ML techniques and world-class AI expertise, Cerebras will work with Qualcomm Technologies’ AI 100 Ultra to speed up AI inference.

Bangalore, India, March 15, 2024: Cerebras Systems, a pioneer in accelerating generative artificial intelligence (AI), today announced the company’s plans to deliver groundbreaking performance and value for production AI. By pairing Cerebras’ industry-leading CS-3 AI accelerators for training with the AI 100 Ultra, a product of Qualcomm Technologies, Inc., for inference, production-grade deployments can realize up to a 10x price-performance improvement.

“These joint efforts are aimed at ushering in a new era of high-performance, low-cost inference, and the timing couldn’t be better. Our customers are focused on training the highest-quality state-of-the-art models that won’t break the bank at the time of inference,” said Andrew Feldman, CEO and co-founder of Cerebras. “Utilizing the AI 100 Ultra from Qualcomm Technologies, we can radically reduce the cost of inference without sacrificing model quality, leading to the most efficient deployments available today.”

To speed up AI inference on the AI 100 Ultra, Cerebras will apply the latest cutting-edge ML techniques, including the following:

  • Unstructured Sparsity: Cerebras and Qualcomm Technologies solutions can perform training and inference using unstructured, dynamic sparsity, a hardware-accelerated AI technique that dramatically improves performance efficiency. For example, a Llama 13B model trained on Cerebras hardware with 85% sparsity trains 3-4x faster, and with AI 100 Ultra inference it generates tokens at 2-3x higher throughput.
  • Speculative Decoding: This advanced AI technique combines the high throughput of a small LLM with the accuracy of a large LLM. The Cerebras Software Platform can automatically train and generate both models, which are seamlessly ingested via the Qualcomm® AI Stack, a product of Qualcomm Technologies. The resulting model can output tokens at up to 2x the throughput with uncompromised accuracy.
  • Efficient MX6 inference: The AI 100 Ultra supports MX6, an industry-standard micro-exponent format that performs high-accuracy inference using half the memory footprint and twice the throughput of FP16.
  • Network Architecture Search (NAS) from Cerebras: Using NAS for targeted use cases, the Cerebras platform can deliver models optimized for the Qualcomm AI architecture, yielding up to 2x higher inference performance.
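The unstructured sparsity described in the first bullet above can be sketched as magnitude-based weight pruning. The sketch below is an illustrative approximation only, not Cerebras’ actual hardware-accelerated implementation:

```python
# Illustrative sketch: unstructured (magnitude-based) weight sparsity.
# Zeroing the smallest-magnitude weights leaves the nonzeros scattered
# irregularly -- "unstructured" -- which requires hardware support to exploit.
import numpy as np

def sparsify(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity` of them are zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to prune
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w_sparse = sparsify(w, 0.85)          # the 85% sparsity level cited above
print(np.mean(w_sparse == 0))         # ~0.85
```

In practice the sparsity pattern is learned dynamically during training rather than applied once, but the memory and compute savings come from the same source: skipping the zeroed weights.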

A combination of these and other advanced techniques is designed to let the Cerebras and Qualcomm Technologies solutions deliver an order-of-magnitude performance improvement, available at model release, resulting in inference-ready models that can be deployed on Qualcomm cloud instances anywhere.

“The combination of Cerebras’ AI training solution with the AI 100 Ultra helps deliver industry-leading performance/TCO$ for AI Inference, as well as optimized and deployment-ready AI models to customers, helping reduce time to deployment and time to RoI,” said Rashid Attar, Vice President, Cloud Computing, Qualcomm Technologies, Inc.

By training on Cerebras, customers can now unlock massive performance and cost advantages with inference-aware training. Models trained on Cerebras are optimized to run inference on the AI 100 Ultra, leading to friction-free deployments.
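As background on the MX6 bullet above, the sketch below shows generic block floating point, where a block of values shares one exponent and each element keeps only a few mantissa bits; this is an illustrative approximation of the idea, not the MX6 specification:

```python
# Illustrative sketch of block floating point, the general idea behind
# micro-exponent (MX) formats: one shared exponent per block, a few
# mantissa bits per element, roughly halving memory versus FP16.
import numpy as np

def quantize_block(x: np.ndarray, mantissa_bits: int = 4) -> np.ndarray:
    """Quantize a block of values onto a grid set by one shared exponent."""
    shared_exp = np.floor(np.log2(np.max(np.abs(x)) + 1e-30))
    step = 2.0 ** (shared_exp - (mantissa_bits - 1))  # grid spacing
    return np.round(x / step) * step

x = np.array([0.51, -0.23, 0.12, 0.9])
xq = quantize_block(x)
print(np.max(np.abs(x - xq)))  # small: at most half the grid spacing
```

Storing 4-bit mantissas plus one shared exponent per block needs far fewer bits than 16 per element, which is where the halved memory footprint and doubled throughput versus FP16 come from.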

“AI has become a key part of pharmaceutical research and development, and the cost of operating models is a critical consideration in the research budget,” said Kim Branson, Sr. Vice President and Global Head of AI/ML at GlaxoSmithKline. “Techniques like sparsity and speculative decoding that make inference faster while lowering operating costs are critical; this allows everyone to integrate and experiment with AI.”

For more information on Qualcomm Technologies and Cerebras AI training and inference solutions, please visit the Cerebras blog. The combined Cerebras CS-3 for AI training and Qualcomm AI 100 Ultra for inference at scale will be available in Q2 and Q3 2024.

