
NVIDIA Chief Scientist Bill Dally to Keynote at Hot Chips

Bill Dally — one of the world’s foremost computer scientists and head of NVIDIA’s research efforts — will describe the forces driving accelerated computing and AI in his keynote address at Hot Chips, an annual gathering of leading processor and system architects.

Dally will detail advances in GPU silicon, systems and software that are delivering unprecedented performance gains for a wide range of applications. The talk will show how techniques such as mixed-precision computing, high-speed interconnects and sparsity can push the large language models driving generative AI to the next level.
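To give a flavor of two of those techniques, here is a minimal NumPy sketch (an illustration only, not NVIDIA's implementation): mixed precision stores weights and activations in a compact format such as float16 to cut memory and bandwidth while accumulating in float32 to preserve accuracy, and sparsity zeroes out small-magnitude weights so hardware can skip the corresponding multiply-accumulates. All array names and the 50% pruning ratio are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed precision: keep weights and inputs in float16 (half the memory
# and bandwidth of float32), but accumulate the matmul in float32.
w = rng.standard_normal((64, 64)).astype(np.float16)  # low-precision weights
x = rng.standard_normal(64).astype(np.float16)        # low-precision input
y = w.astype(np.float32) @ x.astype(np.float32)       # float32 accumulation

# Sparsity: prune the smallest-magnitude weights to zero; sparse-aware
# hardware can then skip those multiply-accumulates entirely.
threshold = np.quantile(np.abs(w), 0.5)               # drop roughly the bottom half
w_sparse = np.where(np.abs(w) >= threshold, w, 0).astype(np.float16)
```

In production systems these ideas show up as hardware features (e.g., reduced-precision matrix units and structured-sparsity support) rather than NumPy calls, but the trade-off is the same: less data moved per operation, more useful work per watt.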

“It’s a really exciting time to be a computer engineer,” said Dally in February, when he was inducted into the Silicon Valley Engineering Council’s Hall of Fame.

Dally’s keynote will kick off the third day of Hot Chips at 9 a.m. PT on Aug. 29.

Registration is available online to attend the event virtually. The live event at Stanford University, in Palo Alto, is already sold out.

In a career spanning nearly four decades, Dally has pioneered many of the fundamental technologies underlying today’s supercomputer and networking architectures. As head of NVIDIA Research, he leads a team of more than 300 around the globe who are inventing technologies for a wide variety of applications, including AI, HPC, graphics and networking.

Prior to joining NVIDIA in 2009 as chief scientist and senior vice president of research, he chaired Stanford University’s computer science department for some four years.

Dally is a member of the National Academy of Engineering and a fellow of the American Academy of Arts & Sciences, the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery.

He’s written four textbooks, published more than 250 papers and holds over 120 patents, and has received the IEEE Seymour Cray Award, ACM Eckert-Mauchly Award and ACM Maurice Wilkes Award.

More NVIDIA Talks at Hot Chips

In a separate Hot Chips talk, Kevin Deierling, vice president of networking at NVIDIA, will describe the flexibility of NVIDIA BlueField DPUs and NVIDIA Spectrum networking switches for allocating resources based on changing network traffic and user rules.

A new benchmark result for the NVIDIA Grace CPU Superchip will be part of a talk by Arm on leadership performance and power efficiency for next-generation cloud computing.

The event begins Sunday, Aug. 27, with a full day of tutorials, including talks from NVIDIA experts on AI inference and chip-to-chip interconnects.
