With its supercomputer Dojo, Tesla wants to build the world’s fastest AI training machine. Instead of relying on raw computing power alone, Tesla is betting on hardware developed specifically for AI training.
Tesla builds cars, but looking ahead, one component is arguably the company’s key discipline: the development of artificial intelligence.
Tesla wants to teach its cars machine vision so that they can drive safely on public roads, an enormous task for AI research, and has even announced the development of a humanoid robot, the Tesla Bot. Alongside Tesla and Neuralink, CEO Elon Musk is thus pursuing yet another AI project.
Dojo D1: AI chip for next-generation AI training
For AI training of the computer vision system called “Tesla Vision” that guides Tesla’s vehicles, the company relies on supercomputers that until now have been equipped with Nvidia’s A100 GPUs.
On AI Day, Ganesh Venkataramanan, head of Tesla’s Project Dojo, laid the groundwork for the next generation of AI training at Tesla: the D1 chip. According to Venkataramanan, it was designed specifically for AI workloads and was developed entirely in-house at Tesla, from the architecture to the finished package.
“This chip offers the flexibility of a CPU, the computing power of a GPU, and twice the I/O bandwidth of a networking chip,” says Venkataramanan. Thousands of the AI chips, manufactured on a 7 nm process and combined into so-called “training tiles”, are meant to push the Dojo supercomputer past one exaflop of compute.
On paper, Dojo offers less processing power than Tesla’s current Nvidia-based training system, which delivers 1.8 exaflops. According to Venkataramanan, however, AI training with Dojo is far more efficient and therefore faster. He assures: “There is no dark silicon, no legacy support, it is a pure machine-learning machine.”
First tests already running – regular operation from next year
Tesla wants to achieve exaflop performance by networking many training tiles: 25 D1 chips sit on a single tile, delivering a total of nine petaflops. According to Venkataramanan, the first Dojo tile has been in testing since last week.
Twelve of these tiles are then installed in two trays in one cabinet, which corresponds to around 100 petaflops of computing power per cabinet. In the final stage, ten of these cabinets are to be connected to deliver a combined 1.1 exaflops of AI performance in the BF16 data format.
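BF16 (bfloat16) is the 16-bit number format in which these throughput figures are quoted: it keeps float32’s 8 exponent bits (and thus its range) but only 7 mantissa bits. A minimal sketch of the idea, assuming simple truncation of a float32 without rounding (this is an illustration of the format, not Tesla’s implementation):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16: keep sign, 8 exponent bits, 7 mantissa bits."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16  # drop the lower 16 mantissa bits

def bf16_bits_to_fp32(bits16: int) -> float:
    """Re-expand bfloat16 bits to float32 by zero-padding the mantissa."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

x = 3.14159265
y = bf16_bits_to_fp32(fp32_to_bf16_bits(x))
print(x, "->", y)  # BF16 preserves FP32's range but only ~2-3 decimal digits of precision
```

The trade-off is typical for AI training hardware: the reduced mantissa halves memory and bandwidth cost per value while the full exponent range avoids the overflow issues of FP16.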
The finished Dojo supercomputer thus comprises a total of 3,000 D1 chips in 120 training tiles, with over one million training nodes. According to Elon Musk, Dojo is set to go into operation next year.
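The scaling figures above can be sanity-checked with a few lines of arithmetic (the constants are the numbers quoted in the presentation as reported here, not official spec-sheet values):

```python
# Rough sanity check of the quoted Dojo scaling figures.
PFLOPS_PER_TILE = 9      # one training tile: 25 D1 chips, 9 petaflops (BF16)
TILES_PER_CABINET = 12   # two trays of six tiles each
CABINETS = 10            # the full system ("ExaPOD" stage)

pflops_per_cabinet = PFLOPS_PER_TILE * TILES_PER_CABINET     # 108, "around 100 petaflops"
eflops_total = pflops_per_cabinet * CABINETS / 1000          # 1.08, quoted as 1.1 exaflops
chips_total = 25 * TILES_PER_CABINET * CABINETS              # 3,000 D1 chips

print(pflops_per_cabinet, "PFLOPS/cabinet,", eflops_total, "EFLOPS,", chips_total, "chips")
```

The multiplied-out totals (108 petaflops per cabinet, about 1.1 exaflops and 3,000 chips overall) line up with the figures Tesla presented, which suggests the rounded numbers in the article are internally consistent.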
You can watch the full Dojo presentation in the following video from 1:45:40.
Read more about AI training:
Dojo: Tesla Introduces New AI Supercomputer | Last modified: August 21, 2021