Today on the podcast, we speak with Ian Buck and Kari Briski of NVIDIA about new updates and achievements in deep learning. Ian begins by telling hosts Jon and Mark about his first project at NVIDIA, CUDA, and how it paved the way for the company's later work in supercomputing, AI, and gaming. CUDA is used extensively in computer vision, speech and audio applications, and machine comprehension, Kari elaborates.
NVIDIA recently announced Tensor Cores, specialized units in their GPUs that make it easier for users to achieve peak performance. Working with Tensor Cores, TensorFlow AMP (automatic mixed precision) is an acceleration built into the TensorFlow framework. It automatically makes the right precision choices for a neural network, maximizing performance while maintaining accuracy, with only a two-line change to a TensorFlow script.
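As a rough illustration of that two-line change (not code from the episode), here is a minimal sketch assuming TensorFlow 1.14+ on an NVIDIA GPU; it wraps an existing optimizer so the AMP graph rewrite handles the precision casts and loss scaling:

```python
import tensorflow as tf

# Minimal sketch (assumes TensorFlow 1.14+ on an NVIDIA GPU): wrapping an
# existing optimizer opts the training graph into automatic mixed precision,
# which inserts float16 casts and dynamic loss scaling while keeping
# accuracy-sensitive ops in float32.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)
```

In NVIDIA's NGC TensorFlow containers, the same rewrite can also be switched on by setting the TF_ENABLE_AUTO_MIXED_PRECISION=1 environment variable.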
Just last year, NVIDIA announced their T4 GPU on Google Cloud Platform. This GPU is designed for inference, the other side of AI. As AI models become more advanced and complicated, the GPUs on the inference side have to be able to handle that workload and return predictions just as quickly; the T4 and Google Cloud accomplish this together. Alongside the T4, NVIDIA offers TensorRT, a software framework for AI inference that's integrated into TensorFlow.
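For a sense of what that TensorFlow integration (TF-TRT) looks like in practice, here is a hedged sketch assuming TensorFlow 1.14+ built with TensorRT support; the directory names are placeholders for illustration, not anything mentioned in the episode:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Sketch only (assumes TensorFlow 1.14+ with TensorRT support): convert a
# SavedModel so that supported subgraphs are replaced with TensorRT-optimized
# engines, e.g. running in FP16 on a T4. Paths are hypothetical placeholders.
converter = trt.TrtGraphConverter(
    input_saved_model_dir="my_saved_model",  # hypothetical input model
    precision_mode="FP16",                   # reduced precision suited to the T4
)
converter.convert()
converter.save("my_saved_model_trt")         # hypothetical output directory
```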
Ian Buck
Ian Buck is general manager and vice president of Accelerated Computing at NVIDIA. He is responsible for the company's worldwide datacenter business, including server GPUs and the enabling NVIDIA computing software for AI and HPC used by millions of developers, researchers, and scientists. Buck joined NVIDIA in 2004 after completing his PhD in computer science at Stanford University, where he was development lead for Brook, the forerunner to generalized computing on GPUs. He is also the creator of CUDA, which has become the world's leading platform for accelerated parallel computing. Buck has testified before the U.S. Congress on artificial intelligence and has advised the White House on the topic. He also received a BSE degree in computer science from Princeton University.
Kari Briski
Kari Briski is a Senior Director of Accelerated Computing Software Product Management at NVIDIA. Her talents and interests include Deep Learning, Accelerated Computing, Design Thinking, and supporting women in technology. Kari is also a huge Steelers fan.
Cool things of the week
Where can we learn more about Stadia?
Mark will be at Cloud NEXT, ECGC, and IO.
Jon may be going to Unite Shanghai and will definitely be at Cloud NEXT, ECGC, and IO.
NVIDIA will be at Cloud NEXT and KubeCon, as well as the International Conference on Machine Learning (ICML), the International Conference on Learning Representations (ICLR), and CVPR.