NVIDIA – The best performance for AI.
Taking advantage of AI requires extensive computing power, and one system that meets this challenge is the NVIDIA® DGX™, built on NVIDIA's high-performance GPU platform. Best of all, NVIDIA DGX gives companies GPU-optimised software and simplified management in one compact system.
Benefits of DGX Station and DGX Server:
Integrated hardware and software.
Based on NVIDIA® Tensor Core technology to accelerate AI and HPC.
Deep learning training, inference and accelerated analytics in a single system.
Unmatched performance for faster iterations and innovations.
NVIDIA DGX™ A100 is the universal system for all AI workloads, delivering unprecedented computing density, performance and flexibility in the world’s first 5 petaFLOPS AI system. NVIDIA DGX A100 features the world’s most advanced accelerator, the NVIDIA A100 Tensor Core GPU, which enables organisations to consolidate training, inference and analytics into a unified, easy-to-deploy AI infrastructure with direct access to NVIDIA AI experts.
NVIDIA® DGX Station A100 is the compact workstation-sized system delivering the latest technologies, rapid deployment and incredible local computing power. To help you get the job done, a whisper-quiet cooling system has been integrated and a host of tools are available to accelerate development.
One processor that is particularly well suited to AI is the NVIDIA® A100, which delivers unprecedented acceleration at any scale for deep learning training and inference, data analytics and HPC.
With third-generation Tensor Cores, the new Ampere architecture delivers up to a 20x performance increase for AI workloads, while the new Multi-Instance GPU (MIG) feature partitions a single GPU into as many as seven instances, allowing data science teams to share the hardware and work as efficiently as possible.
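As a rough illustration of how a MIG-partitioned A100 appears to software, here is a minimal Python sketch using the NVML bindings (the nvidia-ml-py / pynvml package, an assumption on our part, not something required by DGX). It only lists MIG instances that an administrator has already created with NVIDIA's management tools; the slot probing and output formatting are illustrative.

```python
# Minimal sketch: enumerate GPUs and any existing MIG instances via pynvml.
# Assumes nvidia-ml-py (pynvml) is installed and an NVIDIA driver is present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(gpu)
        if isinstance(name, bytes):  # older pynvml releases return bytes
            name = name.decode()

        try:
            current_mode, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
        except pynvml.NVMLError:
            current_mode = pynvml.NVML_DEVICE_MIG_DISABLE  # GPU without MIG support (e.g. T4)

        if current_mode != pynvml.NVML_DEVICE_MIG_ENABLE:
            print(f"GPU {i} ({name}): MIG not enabled")
            continue

        print(f"GPU {i} ({name}): MIG enabled")
        # Probe each possible MIG slot; slots without an instance raise NVMLError.
        for slot in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, slot)
            except pynvml.NVMLError:
                continue
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"  MIG instance {slot}: {mem.total // (1024 ** 2)} MiB")
finally:
    pynvml.nvmlShutdown()
```

Each MIG instance shows up to CUDA applications as its own device, which is what lets several users or services share a single A100 without interfering with one another.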
With its compact form factor, the NVIDIA® T4 is a real all-rounder. The accelerator delivers very high density and flexibility and is optimised for deep learning inference, making it the perfect complement to deep learning training systems.
In terms of graphics, the T4 delivers a native user experience for virtual office desktops as well as for virtual workstations in the mid-range CAD segment.
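To make the inference use case concrete, the following sketch shows batched FP16 inference as it might run on an inference-optimised GPU such as the T4. It assumes PyTorch and torchvision are installed; the ResNet-50 model, batch size and input shape are arbitrary examples chosen for illustration, not vendor-recommended settings.

```python
# Minimal sketch: batched half-precision image classification inference.
# Assumes PyTorch and torchvision are available; weights download on first use.
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Use FP16 on the GPU to engage its Tensor Cores; stay in FP32 on CPU.
dtype = torch.float16 if device.type == "cuda" else torch.float32

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model = model.eval().to(device=device, dtype=dtype)

# A dummy batch standing in for a real batch or real-time inference request.
batch = torch.randn(32, 3, 224, 224, device=device, dtype=dtype)

with torch.inference_mode():
    logits = model(batch)
    predictions = logits.argmax(dim=1)

print(predictions.shape)  # torch.Size([32])
```

In practice, production deployments on T4-class hardware typically add an inference runtime such as TensorRT or Triton Inference Server on top of a script like this, but the basic flow of loading a trained model, batching requests and running a forward pass stays the same.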
The NVIDIA® A40 is the best choice for virtual workstations. The accelerator delivers enough power for highly demanding applications such as CAD, simulation and VR, and even enables real-time rendering with photo-realistic results.
Choose the right NVIDIA Data Center GPU
| Workload | Description | NVIDIA A100 SXM4 | NVIDIA A100 PCIe | NVIDIA T4 | NVIDIA A40 |
| --- | --- | --- | --- | --- | --- |
|  |  | World's Most Powerful Data Center GPU | World's Most Powerful Data Center GPU | Versatile Data Center GPU for Mainstream Computing | World's Most Powerful Data Center GPU for Visual Computing |
| Deep Learning Training | For the absolute fastest model training time | 8-16 GPUs | 4-8 GPUs |  |  |
| Deep Learning Inference | For batch and real-time inference | 1 GPU w/ MIG | 1 GPU w/ MIG | 1 GPU |  |
| HPC / AI | For scientific computing centers and higher ed and research institutions | 4 GPUs with MIG | 1-4 GPUs with MIG |  |  |
| Render Farms | For batch and real-time rendering |  |  |  | 4-8 GPUs |
| Graphics | For the best graphics performance on professional virtual workstations |  |  | 2-8 GPUs for mid-range | 4-8 GPUs for highest |
| Enterprise Acceleration | Mixed Workloads – Graphics, ML, DL, analytics, training, inference | 1-4 GPUs with MIG for compute intensive | 1-4 GPUs with MIG | 4-8 GPUs for balanced | 2-4 GPUs for graphics |
| Edge Acceleration | Edge solutions with differing use cases and locations |  | 1-2 GPUs with MIG | 1-8 GPUs for inference | 2-4 GPUs for graphics |