This week’s GPU Technology Conference, being held at the San Jose Convention Center, offers a fascinating look at the power of artificial intelligence, and at how software developers and engineers are harnessing it to break through old barriers in a range of industries, from autonomous vehicles to virtual reality to medicine.
The conference, which has drawn more than 8,000 developers, researchers and executives this year, has been a hive of activity, with attendees clustered around several key areas on the exhibit floor, in particular those focused on self-driving vehicles and virtual reality.
The opening keynote, by NVIDIA’s CEO Jensen Huang, was packed, and Huang offered his insights on a wide variety of topics, in addition to several big product announcements from the company.
Huang talked about the era of supercharged computing, in which CPU performance growth has fallen below Moore’s Law in recent years, while GPU-accelerated computing is growing at a faster rate than what he called “the miracle of laws.”
“There’s a new law going on, a supercharged law,” he said, “and I think this is the future of computing.”
He showed examples of how AI is impacting the rendering of images in video games and film—and noted how computing power is still lagging behind on major science initiatives.
According to Huang, a Caltech-Oak Ridge National Laboratory project on reinventing the lithium-ion battery took 7 days on the Titan supercomputer. A Princeton-Oak Ridge study on mapping the Earth’s core took 17 days on Titan. An Illinois/NCSA study looking into the structure of HIV took 16 days on Blue Waters. And an ETH Zürich/MeteoSwiss look at cloud-resolving climate simulation took an amazing 840 days on Piz Daint. Soon, this technology will reduce those times to 1 day, he said, thanks to GPU-accelerated computing.
“There’s serious groundbreaking science to be done. We’re going to build an exascale computer, and these simulation times will be compressed down to one day … science needs supercharged computers.”
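Taken at face value, the runtimes Huang cited imply very different speedup factors for each project. A back-of-the-envelope sketch (the day counts are from the keynote; the helper function and one-day target are illustrative assumptions, not anything NVIDIA published):

```python
def implied_speedup(days, target_days=1):
    """Speedup factor needed to compress a run of `days` down to `target_days`."""
    return days / target_days

# Runtimes cited in the keynote, in days.
runs = {
    "Li-ion battery (Titan)": 7,
    "Earth's core mapping (Titan)": 17,
    "HIV structure (Blue Waters)": 16,
    "Climate simulation (Piz Daint)": 840,
}

for name, days in runs.items():
    print(f"{name}: ~{implied_speedup(days):.0f}x speedup to finish in one day")
```

By this rough arithmetic, compressing the climate simulation to a single day would require roughly an 840x speedup, which gives a sense of why Huang ties the goal to exascale-class machines rather than incremental gains.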
Among the corporate news releases that Huang discussed was a series of advances to the company’s deep learning computing platform, which together deliver a 10x performance boost on deep learning workloads compared with the previous generation released six months ago.
Advancements to the NVIDIA platform include a 2x memory boost to NVIDIA Tesla V100, the most powerful datacenter GPU, and a new GPU interconnect fabric called NVIDIA NVSwitch, which enables up to 16 Tesla V100 GPUs to simultaneously communicate at a record speed of 2.4 terabytes per second.
NVIDIA also introduced the DGX-2, which it said is the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient.
“The extraordinary advances of deep learning only hint at what is still to come,” said Huang. “Many of these advances stand on NVIDIA’s deep learning platform, which has quickly become the world’s standard. We are dramatically enhancing our platform’s performance at a pace far exceeding Moore’s law, enabling breakthroughs that will help revolutionize healthcare, transportation, science exploration and countless other areas.”