Tesla supercomputer
The number of GPUs in Tesla's in-house supercomputer has grown by 1,600, a 28% increase over the figure announced a year ago.
Tim Zaman, an engineering manager at Tesla, says the upgrade would make the machine the seventh largest in the world by GPU count.
The machine now houses a total of 7,360 Nvidia A100 GPUs, which are designed for data-centre servers but use the same Ampere architecture as the company's top GeForce RTX 30-series cards.
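Taken together, those figures imply the cluster's previous size. A minimal back-of-the-envelope check, using only the numbers reported above:

```python
# Sanity check of the reported figures (all values come from the
# article; none are independently verified).
total_gpus = 7_360   # current Nvidia A100 count
added_gpus = 1_600   # GPUs added in the latest upgrade

previous_gpus = total_gpus - added_gpus        # implied size a year ago
growth_pct = added_gpus / previous_gpus * 100  # percentage increase

print(f"Previous cluster size: {previous_gpus} GPUs")  # 5760
print(f"Growth: {growth_pct:.0f}%")                    # ~28%
```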
Right now, Tesla probably needs all the processing power it can get. The company is training neural networks to process the huge amounts of video data its cars collect.
With this latest upgrade, Tesla may be just getting started with its plans for high-performance computing (HPC).
In June 2020, Elon Musk claimed, “Tesla is creating a neural net training computer dubbed Dojo.” He said the machine would be capable of over 1 exaFLOP, i.e. 1,000 petaFLOPs.
A machine performing 1 exaFLOP would rank among the world's most powerful supercomputers: so far only Frontier, at Oak Ridge National Laboratory in Tennessee, has officially broken the exascale barrier.
You could help build the new machine, too. Musk tweeted, “Consider joining our AI or computer/chip teams if this sounds interesting.”
Unlike the current cluster, Dojo won't use Nvidia hardware; it will be powered by Tesla's own D1 Dojo chip. At AI Day, the automaker revealed that each chip can deliver up to 362 TFLOPs.
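For a sense of scale, the per-chip figure can be related to Musk's exaFLOP target. The naive division below is purely illustrative: it ignores interconnect overheads, precision modes, and Tesla's actual cluster design:

```python
# Rough scale comparison: how many D1 chips would a simple sum of the
# quoted 362 TFLOPs per chip need to reach 1 exaFLOP? Illustrative only.
EXAFLOP = 1e18      # 1 exaFLOP = 1,000 petaFLOPs = 10**18 FLOPs
D1_TFLOPS = 362e12  # per-chip figure quoted at AI Day

chips_needed = EXAFLOP / D1_TFLOPS
print(f"~{chips_needed:.0f} D1 chips")  # roughly 2,762
```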