Starting September 1 (Beijing time), enterprise users can apply to test Didi Cloud bare-metal server products based on the NVIDIA A100 GPU.
NVIDIA A100 GPU introduction
The NVIDIA A100 integrates more than 54 billion transistors, making it currently the world's largest 7nm processor. It features 6,912 CUDA cores, 40 GB of on-board memory, and memory bandwidth of up to 1.6 TB/s. Tensor Core performance is greatly improved: TF32 throughput reaches 156 TFLOPS, and with sparsity its compute can be doubled to 312 TFLOPS.
The NVIDIA A100 Tensor Core GPU is based on the latest Ampere architecture and brings many new features compared with the previous-generation NVIDIA V100 GPU, delivering better performance in HPC, AI, and data analysis. The A100 offers exceptional scalability for GPU computing and deep learning applications, running on single- or multi-GPU workstations, servers, clusters, cloud data centers, edge computing systems, and supercomputing centers. With the A100 GPU, customers can build flexible, resilient, high-performance data centers.
The A100 is equipped with revolutionary multi-instance GPU (MIG) virtualization and GPU partitioning capabilities, making it especially attractive to cloud service providers (CSPs). When configured for MIG, the A100 can help providers improve GPU server utilization by partitioning each GPU into as many as seven isolated instances at no additional cost. The A100's robust fault isolation also allows providers to partition GPUs safely and securely.
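The partitioning arithmetic behind MIG can be sketched in a few lines. The slice counts below reflect the published A100 40 GB MIG geometry (7 compute slices and 8 memory slices of roughly 5 GB each); the class and function names are our own illustration, not NVIDIA's tooling.

```python
# Illustrative model of how MIG carves one A100 into isolated instances.
# Constants reflect the A100 40GB MIG geometry; the API here is hypothetical.
from dataclasses import dataclass

COMPUTE_SLICES = 7   # SM (compute) slices available to MIG instances
MEMORY_SLICES = 8    # ~5 GB memory slices on the 40 GB card
MEMORY_GB = 40

@dataclass
class MigInstance:
    name: str
    compute_slices: int
    memory_slices: int

def carve(profile_compute: int, profile_memory: int) -> list:
    """Create as many instances of a given profile as the GPU's slices allow."""
    count = min(COMPUTE_SLICES // profile_compute,
                MEMORY_SLICES // profile_memory)
    gb = MEMORY_GB // MEMORY_SLICES * profile_memory
    return [MigInstance(f"{profile_compute}g.{gb}gb",
                        profile_compute, profile_memory)
            for _ in range(count)]

instances = carve(1, 1)   # the smallest profile: 1 compute slice, 1 memory slice
print(len(instances))     # 7 isolated instances
print(instances[0].name)  # 1g.5gb
```

Note how the instance count is limited by whichever resource runs out first; with one compute slice and one memory slice per instance, the seven compute slices are the binding constraint.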
The A100 comes with powerful third-generation Tensor Cores that support a richer set of DL and HPC data types and deliver higher computational throughput than the V100. The A100's new Sparsity feature can further double that throughput.
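The sparsity feature relies on a 2:4 fine-grained structured pattern: in every group of four weights, at most two are nonzero, letting the hardware skip the zeros. A minimal pure-Python sketch of such pruning (illustrative only, not NVIDIA's actual pruning tooling):

```python
# Sketch of 2:4 structured sparsity: keep only the two largest-magnitude
# weights in each group of four, zeroing the rest.

def prune_2_of_4(weights):
    """Zero out the two smallest-magnitude weights in each group of four."""
    assert len(weights) % 4 == 0
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude values in this group.
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return pruned

w = [0.9, -0.1, 0.05, -0.8, 0.3, 0.2, -0.7, 0.01]
print(prune_2_of_4(w))  # [0.9, 0.0, 0.0, -0.8, 0.3, 0.0, -0.7, 0.0]
```

Because the zeros follow a fixed pattern, the Sparse Tensor Cores can halve the number of multiply-accumulates, which is where the 2x throughput claim comes from.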
The new TensorFloat-32 (TF32) Tensor Core operations allow the A100 to easily accelerate FP32 input/output data in DL frameworks and HPC, delivering up to 10x the throughput of V100 FP32 FMA operations, and up to 20x with sparsity. FP16/FP32 mixed precision reaches 2.5x the throughput of the V100, and 5x with sparsity.
The new BFloat16 (BF16)/FP32 mixed-precision Tensor Core unit runs at the same rate as FP16/FP32 mixed precision. Tensor Core acceleration of INT8, INT4, and binary (INT1) data provides full support for DL inference, with A100 sparse INT8 running 20x faster than V100 INT8. For HPC, the A100 Tensor Core's IEEE-compliant FP64 processing delivers 2.5x the performance of the V100.
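The relationship between these formats is easy to see at the bit level: TF32 keeps FP32's 8-bit exponent but only 10 mantissa bits, while BF16 keeps the 8-bit exponent with 7 mantissa bits. A rough sketch that emulates both by truncating an FP32 bit pattern (illustrative only; real Tensor Cores round rather than truncate):

```python
# Emulate reduced-precision formats by truncating an FP32 bit pattern.
# TF32 and BF16 both keep FP32's 8-bit exponent, so truncating mantissa
# bits is a reasonable approximation of their precision (not their rounding).
import struct

def truncate_fp32(x: float, mantissa_bits: int) -> float:
    """Keep the sign, the 8-bit exponent, and the top `mantissa_bits` bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    mask = ~((1 << (23 - mantissa_bits)) - 1) & 0xFFFFFFFF
    (y,) = struct.unpack("<f", struct.pack("<I", bits & mask))
    return y

pi = 3.14159265
tf32 = truncate_fp32(pi, 10)  # TF32: 10 mantissa bits
bf16 = truncate_fp32(pi, 7)   # BF16: 7 mantissa bits
print(tf32, bf16)

# Relative error is bounded by one unit in the last kept mantissa place.
assert abs(tf32 - pi) / pi < 2 ** -10
assert abs(bf16 - pi) / pi < 2 ** -7
```

TF32 thus preserves roughly three decimal digits of precision while covering FP32's full dynamic range, which is why frameworks can use it as a drop-in accelerated path for FP32 training.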
The A100 GPU is designed for extensive performance scalability. Customers can share a single A100 using MIG partitioning, or use multiple A100 GPUs in the powerful new NVIDIA DGX™, NVIDIA HGX™, and NVIDIA EGX™ systems, connected via the new third-generation NVLink® high-speed interconnect. With the new NVIDIA NVSwitch™ and Mellanox® state-of-the-art InfiniBand™ and Ethernet solutions, A100-based systems can scale to tens, hundreds, or thousands of A100s in computing clusters, cloud computing instances, or large supercomputers, accelerating many types of applications and workloads. In addition, the A100 GPU's revolutionary new hardware capabilities are complemented by new CUDA 11 features that improve programmability and reduce the complexity of AI and HPC software.
The NVIDIA A100 GPU is the first elastic GPU: it can scale up into a giant GPU using NVLink, NVSwitch, and InfiniBand, or scale out with MIG to support multiple independent users.
The NVIDIA A100 Tensor Core GPU delivers the biggest generational leap in NVIDIA GPU-accelerated computing ever.
Didi Cloud GPU and machine learning products fully embrace the A100
As a long-term partner of NVIDIA, Didi Cloud will soon launch a series of cloud server products based on the NVIDIA A100 GPU, including GPU cloud servers and bare-metal servers. Currently, the bare-metal server products are open for testing to invited users. This product line will provide cloud acceleration services for scenarios such as deep learning training/inference, data analysis, scientific computing, genetic engineering, and cloud gaming. It will also offer performance-optimized AI training frameworks such as TensorFlow 1.x/2.x, PyTorch, and MXNet, as well as the TensorRT inference framework, saving users the time of initial environment setup.
The Didi Cloud Machine Learning Studio (DAI) Notebook service will also add support for the A100 GPU. The Notebook service is based on Jupyter, and the A100's computing power will help machine learning developers build and train complex machine learning models that demand more compute.
Introduction to Didi Cloud GPU and machine learning products
Didi Cloud was established in 2017. Building on the technology and experience accumulated in Didi Chuxing's ride-hailing business, it combines a leading cloud computing architecture, high-specification server clusters, a high-performance resource allocation mechanism, and refined operations, and is committed to providing developers with simple, fast, efficient, stable, cost-effective, safe, and reliable IT infrastructure cloud services.
GPU cloud servers are one of Didi Cloud's flagship products. Currently, Didi Cloud provides five GPU cloud server products based on the NVIDIA Tesla P4, P40, P100, T4, and A100, as well as vGPU cloud server products based on the P4, P40, and T4. They are widely used in application scenarios such as deep learning inference/prediction, deep learning training, image rendering, floating-point high-performance computing, and video encoding/decoding. Didi Cloud GPU/vGPU cloud servers offer excellent cost performance and a clear price advantage.
Didi Cloud AIBench gives customers an accessible way to evaluate performance. Across the many GPU cloud server models and specifications, its one-click benchmark makes the AI performance indicators customers care about (training speed and inference latency) clear at a glance, simplifying product and specification selection.
Didi Cloud Machine Learning Studio (DAI) provides a hosted machine learning environment that helps enterprises and AI developers quickly build, train, and deploy machine learning models. DAI offers a rich machine learning development environment, allowing developers to focus on the machine learning task itself and produce high-quality AI models.
Apply for testing: The A100 GPU bare-metal server test is now open. You can apply for a trial by scanning the QR code for consultation.
Visit the official Didi Cloud website to register and receive a 10,000-yuan gift package.