Corsair C8 Beats Nvidia H100 on Key Tests with GPU-Like Card Equipped With 256GB RAM

d-Matrix’s Corsair C8, a GPU-like card equipped with 256GB of RAM, reportedly beats Nvidia’s H100 on key tests. The Microsoft-backed AI startup claims up to nine times the throughput of today’s state-of-the-art GPUs.

Corsair C8 Beats Nvidia H100

D-Matrix’s computing platform, known as the Corsair C8, can reportedly stake a claim to displacing Nvidia’s industry-leading H100 GPU, at least according to striking test results the startup has published.

Designed specifically for generative AI workloads, the Corsair C8 differs from GPUs in that it is built on d-Matrix’s digital in-memory compute (DIMC) architecture.

The result, according to the startup: a nine-fold increase in throughput over Nvidia’s industry-leading H100 and a 27-fold increase over the A100.

The startup is one of the most closely watched in Silicon Valley, reportedly raising $110 million from investors, including Microsoft, in its latest funding round. That round follows a $44 million investment round in April 2022 from backers including Microsoft, SK Hynix, and others.

The Corsair C8 Card Specs and Features

The startup’s flagship Corsair C8 card packs 2,048 DIMC cores, 130 billion transistors, and 256GB of LPDDR5 RAM. The card is rated at 2,400 to 9,600 TFLOPS of compute performance and offers 1TB/s of chip-to-chip bandwidth.
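As a rough, back-of-the-envelope illustration (not a published d-Matrix figure), the sketch below shows why 256GB of on-card memory matters for LLM inference: it is enough room for the full weights of very large models at common precisions. The model sizes, precisions, and headroom factor used here are assumptions for illustration only.

```python
# Back-of-the-envelope sketch (illustrative, not a d-Matrix benchmark):
# checking whether an LLM's weights would fit in 256GB of on-card memory.
# The model sizes, precisions, and 20% headroom factor are assumptions.

CARD_MEMORY_GB = 256  # Corsair C8's stated LPDDR5 capacity

def weights_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB at a given precision."""
    return params_billion * 1e9 * bytes_per_param / 1e9

examples = [
    ("70B model @ FP16 (2 bytes/param)", 70, 2.0),
    ("70B model @ INT8 (1 byte/param)", 70, 1.0),
    ("180B model @ INT8 (1 byte/param)", 180, 1.0),
]

for label, params_b, bytes_pp in examples:
    size = weights_size_gb(params_b, bytes_pp)
    # Keep ~20% headroom for activations and KV cache (assumed, not a spec).
    fits = size * 1.2 <= CARD_MEMORY_GB
    verdict = "fits" if fits else "does not fit"
    print(f"{label}: ~{size:.0f} GB of weights -> {verdict} in {CARD_MEMORY_GB} GB")
```

For comparison, flagship data-center GPUs of the A100/H100 generation top out at roughly 80GB of on-board memory, which is one reason large models are typically sharded across several GPUs.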

According to d-Matrix, the cards can deliver up to 20 times higher throughput for generative inference on large language models (LLMs), up to 20 times lower inference latency for LLMs, and up to 30 times cost savings compared with traditional GPUs.

The Effect of Generative AI on the Industry

With generative AI quickly expanding, the industry is now locked in a race to build increasingly powerful hardware to help power future generations of the technology.

The leading components here are GPUs, most notably Nvidia’s A100 and newer H100. But GPUs, as per d-Matrix, are not optimized for LLM inference, and too many of them are needed to effectively handle AI workloads, leading to excessive energy consumption.

Bandwidth Demands of Running AI Inference

This is because the bandwidth demands of running AI inference reportedly leave GPUs idle much of the time, waiting for data to arrive from DRAM. Moving data out of DRAM also means higher energy consumption, reduced throughput, and added latency, which in turn heightens cooling demands.
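A rough roofline-style arithmetic sketch makes this concrete. In batch-size-1 decoding, each generated token must stream roughly the full set of weights from memory, so per-token time tends to be set by memory bandwidth rather than peak compute. The model size, bandwidth, and FLOP figures below are illustrative assumptions, not measured values for any particular card.

```python
# Rough roofline-style sketch (illustrative numbers, not vendor specs):
# in decoder-style LLM inference at batch size 1, each generated token must
# stream roughly the full set of weights from memory, so per-token time is
# often bounded by memory bandwidth rather than by peak compute.

PARAMS = 70e9                      # assumed: 70B-parameter model
WEIGHTS_BYTES = PARAMS * 2         # assumed: FP16 weights (2 bytes/param)
MEM_BANDWIDTH = 2e12               # assumed: ~2 TB/s of GPU memory bandwidth
PEAK_FLOPS = 1e15                  # assumed: ~1 PFLOP/s of peak compute
FLOPS_PER_TOKEN = 2 * PARAMS       # ~2 FLOPs per parameter per generated token

time_memory = WEIGHTS_BYTES / MEM_BANDWIDTH   # time to stream the weights once
time_compute = FLOPS_PER_TOKEN / PEAK_FLOPS   # time to do the matrix math

print(f"memory-bound time per token : {time_memory * 1e3:.1f} ms")
print(f"compute-bound time per token: {time_compute * 1e3:.2f} ms")
if time_memory > time_compute:
    idle = 100 * (1 - time_compute / time_memory)
    print(f"compute units wait on DRAM for roughly {idle:.1f}% of each token")
```

With numbers in this ballpark, the compute units finish their share of the work in a fraction of a millisecond and then sit waiting on memory, which is exactly the idle time, energy cost, and added latency described above; in-memory compute approaches aim to shrink that data-movement step.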

The solution, the firm claims, is its specialized DIMC architecture, which reportedly mitigates many of the problems GPUs face. D-Matrix says its solution can reduce costs by 10 to 20 times, and in some cases by as much as 60 times.

Other Players in the Industry Looking To Overthrow Nvidia

Beyond d-Matrix’s technology, other players in the industry are emerging in the race to outpace Nvidia’s H100. IBM presented a new analog AI chip back in August that mimics the human brain and reportedly performs up to 14 times more efficiently.
