Story at a glance

  • The supercomputer is being used to train Tesla’s Autopilot and Full Self-Driving (FSD) artificial intelligence.
  • It is used to train neural networks, which are computer systems used to process huge amounts of data.
  • It appears to be a predecessor to the company’s upcoming Dojo supercomputer, which Tesla CEO Elon Musk has teased could be ready by the end of the year.

Electric carmaker Tesla on Sunday showed off its new supercomputer that’s being used to power the company’s Autopilot and Full Self-Driving (FSD) artificial intelligence. 

During a presentation at the 2021 Conference on Computer Vision and Pattern Recognition, Tesla’s senior director of artificial intelligence, Andrej Karpathy, revealed that the supercomputer is the fifth most powerful in the world in terms of floating-point operations per second. 

The supercomputer is used to train neural networks, which are computer systems used to process huge amounts of data. Tesla uses the neural networks to process data coming from its fleet's onboard cameras to train its software to autonomously navigate the vehicles. 

“Training these neural networks (this is a 1.5 petabyte dataset) requires a huge amount of compute. So I wanted to briefly give a plug to this insane supercomputer that we are building and using now,” Karpathy said. 


“Computer vision is the bread and butter of what we do and what enables the Autopilot, and for that to work really well, you need a massive dataset. We get that from the fleet, and you also need to train massive neural nets and experiment a lot. So we've invested a lot into the compute,” he said. 

The supercomputer appears to be a predecessor to the company’s upcoming Dojo supercomputer, which Tesla CEO Elon Musk has teased could be ready by the end of the year. 

Tesla’s new supercomputer specifications include: 

  • 720 nodes with 8 A100 80 GB GPUs each (5,760 GPUs total)
  • 1.8 EFLOPS peak (720 nodes × 8 GPUs/node × 312 TFLOPS FP16 per A100)
  • 10 petabytes of “hot tier” NVMe storage at 1.6 terabytes per second
  • 640 terabytes per second of total switching capacity
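
The quoted 1.8 EFLOPS figure can be sanity-checked with the back-of-envelope arithmetic Tesla provides: node count, GPUs per node, and the A100's peak FP16 Tensor Core throughput of 312 TFLOPS. A minimal sketch of that calculation:

```python
# Back-of-envelope check of Tesla's quoted peak throughput.
NODES = 720
GPUS_PER_NODE = 8
TFLOPS_FP16_PER_A100 = 312  # NVIDIA A100 peak FP16 Tensor Core throughput

total_gpus = NODES * GPUS_PER_NODE
peak_tflops = total_gpus * TFLOPS_FP16_PER_A100
peak_eflops = peak_tflops / 1e6  # 1 EFLOPS = 10^6 TFLOPS

print(total_gpus)             # 5760 GPUs
print(round(peak_eflops, 1))  # 1.8 EFLOPS
```

The exact product is 1.79712 EFLOPS, which Tesla rounds up to 1.8; note this is theoretical peak throughput, not a sustained benchmark result.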


Published on Jun 21, 2021