Tesla officially unveiled its D1 chip yesterday at the company’s AI Day event; the chips will eventually power its Project Dojo exascale supercomputer. The 7nm D1 was designed in-house for machine learning workloads and for removing bandwidth bottlenecks. It contains 354 compute nodes, each reportedly delivering 1 teraflops (1,024 GFLOPS) of compute, while the entire D1 chip is capable of up to 363 teraflops of compute, with 10 TBps of on-chip bandwidth and 4 TBps of off-chip bandwidth.
The company developed “Training Tiles” to house the chips, each tile consisting of 25 D1s in an integrated multi-chip module. Each tile boasts 9 petaflops of compute and 36 TBps of off-tile bandwidth. Project Dojo will be assembled from cabinets holding two trays of six tiles each, for roughly 100 petaflops of compute per cabinet. When finished, Tesla will have a single ‘ExaPOD’ capable of 1.1 exaflops of AI compute across 10 connected cabinets. The completed supercomputer will comprise 120 tiles, 3,000 D1 chips, and over 1 million compute nodes.
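The arithmetic behind these figures is straightforward to sanity-check. The sketch below multiplies out the hierarchy (node → chip → tile → cabinet → ExaPOD) using only the numbers quoted above; the slight gaps between the computed and quoted totals (about 362.5 vs. 363 TFLOPS per chip, about 109 vs. 100 PFLOPS per cabinet) reflect rounding in Tesla's marketing figures rather than errors here.

```python
# Back-of-the-envelope check of Tesla's published Dojo figures.
# All inputs are taken from the article; the precision (BF16/CFP8 vs. FP32)
# behind each quoted number is not specified there.

NODE_TFLOPS = 1.024          # 1,024 GFLOPS per compute node
NODES_PER_CHIP = 354
CHIPS_PER_TILE = 25
TILES_PER_CABINET = 12       # two trays of six tiles
CABINETS_PER_EXAPOD = 10

chip_tflops = NODES_PER_CHIP * NODE_TFLOPS               # ~362.5 (quoted: 363)
tile_pflops = CHIPS_PER_TILE * chip_tflops / 1000        # ~9.06  (quoted: 9)
cabinet_pflops = TILES_PER_CABINET * tile_pflops         # ~108.7 (quoted: ~100)
exapod_eflops = CABINETS_PER_EXAPOD * cabinet_pflops / 1000  # ~1.09 (quoted: 1.1)

total_tiles = TILES_PER_CABINET * CABINETS_PER_EXAPOD    # 120
total_chips = total_tiles * CHIPS_PER_TILE               # 3,000
total_nodes = total_chips * NODES_PER_CHIP               # 1,062,000 (over 1 million)

print(f"chip: {chip_tflops:.1f} TFLOPS, tile: {tile_pflops:.2f} PFLOPS")
print(f"cabinet: {cabinet_pflops:.1f} PFLOPS, ExaPOD: {exapod_eflops:.2f} EFLOPS")
print(f"tiles: {total_tiles}, chips: {total_chips}, nodes: {total_nodes}")
```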
“There is no dark silicon, no legacy support; this is a pure machine learning machine. It was entirely designed by the Tesla team internally, all the way from the architecture to the package. This chip is like GPU-level compute with CPU-level flexibility and twice the network chip-level I/O bandwidth,” said Ganesh Venkataramanan, Tesla’s senior director of Autopilot hardware and lead of Project Dojo.