NVIDIA H100

Extraordinary performance, scalability, and security for every data center.

  • High-Performance Computing
  • Data Center Acceleration
  • Machine Learning Inference
  • Autonomous Vehicles
  • Medical Imaging and Diagnostics
  • Financial Modeling and Risk Analysis
  • Genomics and Drug Discovery
  • Energy Exploration and Simulation
  • Natural Language Processing
  • Smart Cities and Infrastructure
  • VR and AR
  • Advanced Robotics
  • 3D Rendering and Visualization
  • Climate and Weather Simulation
Reserve now

Specifications

FP64: 34 TFLOPS
FP64 Tensor Core: 67 TFLOPS
FP32: 67 TFLOPS
TF32 Tensor Core²: 989 TFLOPS
BFLOAT16 Tensor Core²: 1,979 TFLOPS
FP16 Tensor Core²: 1,979 TFLOPS
FP8 Tensor Core²: 3,958 TFLOPS
INT8 Tensor Core²: 3,958 TOPS
GPU Memory: 80 GB
GPU Memory Bandwidth: 3.35 TB/s
Max Thermal Design Power (TDP): Up to 700 W (configurable)
Multi-Instance GPU (MIG): Up to 7 MIGs @ 10 GB each
Interconnect: NVIDIA NVLink™ 900 GB/s; PCIe Gen5 128 GB/s

² With sparsity.
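
For anyone validating a reserved instance, a minimal CUDA sketch like the one below queries each visible device and prints its name, compute capability (9.0 on H100, the Hopper architecture), memory capacity, and SM count. The output format is an illustrative assumption, not part of any Zettabyte or NVIDIA tooling.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // An H100 reports compute capability 9.0 (Hopper) and roughly 80 GB of HBM.
        std::printf("GPU %d: %s, CC %d.%d, %.1f GB, %d SMs\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / 1e9, prop.multiProcessorCount);
    }
    return 0;
}
```

Build with `nvcc device_query.cu -o device_query` and run it on the instance; the reported memory should be close to the 80 GB listed above.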

Give Zettabyte a try and see for yourself: superior MFU (Model FLOPs Utilization) in both training and inference.
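
For context on that claim: MFU is the ratio of the FLOPs a job actually sustains to the hardware's theoretical peak. As a rough worked estimate under common assumptions (the 6N-FLOPs-per-token training approximation for a dense transformer with N parameters, measured against the H100's dense BF16 peak, i.e. half the 1,979 TFLOPS with-sparsity figure above):

```latex
\mathrm{MFU} \;\approx\; \frac{6\,N\,T}{989.5 \times 10^{12}\ \mathrm{FLOP/s}}
```

where T is the tokens processed per second per GPU. As a purely hypothetical example, a 70B-parameter model sustaining 800 tokens/s per GPU would deliver about 6 × 70×10⁹ × 800 ≈ 336 TFLOPS, or an MFU of roughly 34%.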