NVIDIA A100

Unprecedented acceleration at every scale.

  • High-Performance Computing
  • Data Center Acceleration
  • Machine Learning Inference
  • Autonomous Vehicles
  • Medical Imaging and Diagnostics
  • Financial Modeling and Risk Analysis
  • Genomics and Drug Discovery
  • Energy Exploration and Simulation
  • Natural Language Processing
  • Smart Cities and Infrastructure
  • VR and AR
  • Advanced Robotics
  • 3D Rendering and Visualization
  • Climate and Weather Simulation
Reserve now

Specifications

FP64
9.7 TFLOPS
FP64 Tensor Core
19.5 TFLOPS
FP32
19.5 TFLOPS
Tensor Float 32 (TF32)
156 TFLOPS
BFLOAT16 Tensor Core
312 TFLOPS
FP16 Tensor Core
312 TFLOPS
INT8 Tensor Core
624 TOPS
GPU Memory
80GB HBM2e
GPU Memory Bandwidth
1,935 GB/s
Max Thermal Design Power (TDP)
300W
Multi-Instance GPUs
Up to 7 MIGs @ 10GB
Interconnect
NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
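The figures above imply a useful rule of thumb: dividing peak Tensor Core throughput by memory bandwidth gives the arithmetic intensity (FLOPs per byte) at which a workload shifts from memory-bound to compute-bound. A minimal sketch, using only the spec-sheet numbers listed here (dense FP16 Tensor Core throughput and HBM2e bandwidth):

```python
# Ridge point of a simple roofline model for the A100 80GB,
# computed from the spec table above (illustrative only).
PEAK_FP16_TFLOPS = 312   # FP16 Tensor Core, dense (from spec table)
MEM_BW_GBPS = 1935       # GPU memory bandwidth, GB/s (from spec table)

ridge_flops_per_byte = (PEAK_FP16_TFLOPS * 1e12) / (MEM_BW_GBPS * 1e9)
print(f"Compute-bound above ~{ridge_flops_per_byte:.0f} FLOPs/byte")
```

Kernels whose arithmetic intensity falls below this ridge (roughly 161 FLOPs per byte) are limited by the 1,935 GB/s of HBM2e bandwidth rather than by the Tensor Cores, which is one reason sustained MFU depends as much on data movement as on raw TFLOPS.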
Give Zettabyte a try and see for yourself: superior MFU in both training and inference.