The new NVIDIA H100 is a mammoth GPU with 80 billion transistors, and it's not for you

NVIDIA held an event yesterday packed with announcements. The news focused on the data center space, but as happened two years ago with Ampere, what was presented a few hours ago helps us understand what is coming to future GPUs for end users.
Among the novelties, the company highlighted its new Hopper architecture, successor to Ampere (present in the RTX 3000 series), and also its first practical implementation: the NVIDIA H100 GPU, which goes even further than its predecessor, the A100, and delivers unprecedented power thanks to its 80 billion transistors.

Transistors galore

Virtually everything in the NVIDIA H100 improves on its predecessor. The numbers are promising across the board, but it is also true that the TDP nearly doubles, going from 400 W to 700 W: with electricity prices at all-time highs, running these chips is not going to come cheap for companies.
The H100 is a GPU intended entirely for data centers. The commitment to areas related to artificial intelligence is enormous, and in fact the company placed special emphasis on its Transformer Engine, "designed to accelerate the training of artificial intelligence models." This kind of technology is behind systems like GPT-3, and it promises to make training such models much faster.
This GPU also benefits from fourth-generation NVLink technology, which allows all its nodes to be interconnected at scale. It offers up to 900 GB/s of bidirectional transfers per GPU, or in other words, seven times the bandwidth of the PCIe 5.0 standard, which has barely reached the market.
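That "seven times" claim is easy to verify with back-of-the-envelope numbers (assuming a PCIe 5.0 x16 link moves roughly 64 GB/s in each direction, about 128 GB/s bidirectional):

```python
# Sanity check on the quoted bandwidth ratio. The PCIe figure is an
# assumption: 32 GT/s per lane, 16 lanes, ~1 byte per 8 bits transferred.
nvlink4_bidirectional = 900            # GB/s per GPU, as announced
pcie5_x16_bidirectional = 2 * 64       # GB/s, both directions combined

ratio = nvlink4_bidirectional / pcie5_x16_bidirectional
print(round(ratio, 1))  # ≈ 7.0
```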
The Hopper architecture is also a fundamental part of these advances. At NVIDIA, they highlighted the ability of this new architecture to accelerate the so-called dynamic programming, "a problem-solving technique used in algorithms for genomics, quantum computing, route optimization, and more."
According to the manufacturer, all these operations will now run 40 times faster thanks to the new DPX instructions, a set of instructions designed for precisely these workloads.
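To make the idea concrete, this is what dynamic programming looks like in its classic form: a minimal edit-distance routine in Python, the kind of table-filling algorithm used in genomics for sequence comparison. This is purely illustrative and has nothing to do with NVIDIA's hardware implementation.

```python
# Classic dynamic programming: Levenshtein edit distance between two
# strings, built up from solutions to smaller subproblems.
def edit_distance(a: str, b: str) -> int:
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

Each cell of the table depends only on its three neighbors, which is exactly the kind of regular, data-parallel pattern that lends itself to hardware acceleration.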

Grace CPU Superchip

Another of the most striking announcements was the Grace CPU Superchip, made up of two processors connected through a low-latency NVLink-C2C link. The idea is to aim this chip at "serving large-scale HPC centers and artificial intelligence applications" alongside Hopper architecture GPUs.
This "double chip" is the evolution of the Grace Hopper Superchip announced last year. This iteration packs 144 ARMv9 cores that achieve 1.5 times the performance of the dual-CPU configuration found in NVIDIA's current DGX A100 systems.
NVIDIA also indicated that it is building a new supercomputer called Eos for artificial intelligence tasks. According to the manufacturer, it will be the most powerful in the world when it is deployed. The project will consist of 4,600 H100 GPUs delivering 18.4 exaflops of performance in AI operations, and it is expected to be ready in a few months, although it will only be used for internal research at NVIDIA.
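Those Eos figures are internally consistent if we assume each H100 contributes roughly 4 petaflops of AI throughput (the per-GPU figure is an assumption based on the low-precision numbers NVIDIA quotes for AI workloads, not something stated in this announcement):

```python
# Back-of-the-envelope check on the Eos numbers.
gpus = 4600
pflops_per_gpu = 4  # assumed per-GPU AI throughput, in petaflops

total_exaflops = gpus * pflops_per_gpu / 1000  # 1 exaflop = 1000 petaflops
print(total_exaflops)  # 18.4
```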
The Grace CPU Superchip is expected to be available in the first half of 2023, so we'll have to be patient. The NVIDIA H100 will arrive earlier, in the third quarter of 2022.

More information | NVIDIA
