NVIDIA has unveiled the A100, the first GPU based on the company’s Ampere architecture, which unifies AI training and inference and boosts performance by up to 20x over its predecessors. Its Multi-Instance GPU (MIG) capability allows each A100 to be partitioned into as many as seven independent instances for inference tasks, while third-generation NVLink interconnect technology enables multiple A100 GPUs to operate as one giant GPU for even larger training tasks.
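As a rough illustration of how MIG partitioning is driven in practice, the sketch below uses NVIDIA's `nvidia-smi mig` administration commands. This assumes an A100 with a recent driver and root access; the profile ID for the smallest instance type (shown here as `19`, the `1g.5gb` profile on a 40GB A100) varies by GPU model and driver, so check the output of the list command first.

```shell
# Enable MIG mode on GPU 0 (requires root; may need a GPU reset to take effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, with their IDs and sizes
nvidia-smi mig -lgip

# Create seven of the smallest GPU instances (profile 19 = 1g.5gb on A100-40GB)
# and a default compute instance in each (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Each MIG instance now appears as a separate device for inference workloads
nvidia-smi -L
```

Each resulting MIG device has its own isolated memory and compute slices, which is what lets seven independent inference jobs share one physical A100 without interfering with each other.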
Many leading cloud service providers and system builders have announced plans to integrate A100 GPUs into their offerings, including Alibaba Cloud, Amazon Web Services (AWS), Atos, Baidu Cloud, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google Cloud, H3C, Hewlett Packard Enterprise (HPE), Microsoft Azure, Oracle and Tencent Cloud.
“Modern and complex AI training and inference workloads that require large amounts of data can benefit from state-of-the-art technology like NVIDIA A100 GPUs, which help reduce model training time and speed up the machine learning development process. In addition, using cloud-based GPU clusters gives us newfound flexibility to scale up or down as needed, helping us improve efficiency, simplify our operations and save costs,” said Gary Ren, machine learning engineer at DoorDash.