NVIDIA H200 Tensor Core GPU
Supercharging AI and HPC workloads
The GPU for Generative AI and HPC
The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.
- Higher memory capacity: The H200 comes with 141 GB of HBM3e memory, nearly double the capacity of the H100 (a short query sketch follows this list).
- Increased memory bandwidth: The H200 delivers 4.8 TB/s of memory bandwidth, 1.4 times more than the H100, enabling faster data processing.
- Enhanced AI performance: The H200 is optimized for generative AI and LLMs, enabling faster and more efficient model training and inference.
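As a quick illustration of the capacity and bandwidth figures above, here is a minimal CUDA sketch that queries the device properties an H200 would report. It assumes device 0 and uses the legacy memoryClockRate/memoryBusWidth fields for a rough bandwidth estimate; those fields may be deprecated or return 0 on newer CUDA toolkits, and this is not an NVIDIA-provided sample.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop{};
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query device 0 (assumption)
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    std::printf("Device: %s\n", prop.name);

    // Total device memory; an H200 should report on the order of 141 GB of HBM3e.
    std::printf("Total memory: %.1f GiB\n",
                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));

    // Rough peak-bandwidth estimate: 2 (double data rate) * memory clock * bus width.
    // memoryClockRate is reported in kHz and memoryBusWidth in bits; both are legacy
    // fields and may not be populated on recent toolkits.
    double peak_gbs = 2.0 * (prop.memoryClockRate * 1e3) *
                      (prop.memoryBusWidth / 8.0) / 1e9;
    std::printf("Estimated peak memory bandwidth: %.1f GB/s\n", peak_gbs);
    return 0;
}
```

Compile with nvcc (for example, nvcc query_device.cu -o query_device) and run on the target node; the reported capacity and estimated bandwidth should roughly match the H200 figures listed above.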