NVIDIA Chief Scientist Bill Dally to Keynote at Hot Chips

Bill Dally — one of the world’s foremost computer scientists and head of NVIDIA’s research efforts — will describe the forces driving accelerated computing and AI in his keynote address at Hot Chips, an annual gathering of leading processor and system architects.

Dally will detail advances in GPU silicon, systems and software that are delivering unprecedented performance gains for a wide range of applications. The talk will show how techniques such as mixed-precision computing, high-speed interconnects and sparsity can take the large language models driving generative AI to the next level.
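To make two of those techniques concrete, here is a toy sketch (not NVIDIA's implementation, and far simpler than what GPU tensor cores actually do): mixed precision stores values in a narrow format like float16 while accumulating in float32, and 2:4 structured sparsity keeps only the two largest-magnitude weights in each group of four, halving the multiply work.

```python
import numpy as np

# Mixed precision (toy version): store inputs in float16 to halve memory
# and bandwidth, but accumulate the matmul in float32 for accuracy.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float16)
b = rng.standard_normal((64, 64)).astype(np.float16)
c = a.astype(np.float32) @ b.astype(np.float32)  # fp16 storage, fp32 math

# 2:4 structured sparsity (toy version): in every group of four weights,
# keep the two with the largest magnitude and zero the rest.
w = a.reshape(-1, 4)                               # groups of 4 weights
keep = np.argsort(np.abs(w), axis=1)[:, 2:]        # top-2 indices per group
mask = np.zeros_like(w, dtype=bool)
np.put_along_axis(mask, keep, True, axis=1)
w_sparse = (w * mask).reshape(a.shape)             # half the weights are zero

print(c.dtype)                    # accumulation dtype
print((w_sparse == 0).mean())     # fraction of pruned weights
```

In real hardware the pruned weights are skipped entirely by sparse tensor cores rather than multiplied by zero, which is where the speedup comes from.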

“It’s a really exciting time to be a computer engineer,” said Dally in February, when he was inducted into the Silicon Valley Engineering Council’s Hall of Fame.

Dally’s keynote will kick off the third day of Hot Chips at 9 a.m. PT on Aug. 29.

Registration is available online to attend the event virtually. The live event, at Stanford University in Palo Alto, is already sold out.

In a career spanning nearly four decades, Dally has pioneered many of the fundamental technologies underlying today’s supercomputer and networking architectures. As head of NVIDIA Research, he leads a team of more than 300 around the globe who are inventing technologies for a wide variety of applications, including AI, HPC, graphics and networking.

Prior to joining NVIDIA in 2009 as chief scientist and senior vice president of research, he chaired Stanford University’s computer science department for some four years.

Dally is a member of the National Academy of Engineering and a fellow of the American Academy of Arts & Sciences, the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery.

He has written four textbooks, published more than 250 papers, holds over 120 patents, and has received the IEEE Seymour Cray Award, the ACM Eckert-Mauchly Award and the ACM Maurice Wilkes Award.

More NVIDIA Talks at Hot Chips

In a separate Hot Chips talk, Kevin Deierling, vice president of networking at NVIDIA, will describe the flexibility of NVIDIA BlueField DPUs and NVIDIA Spectrum networking switches for allocating resources based on changing network traffic and user rules.

A new benchmark result for the NVIDIA Grace CPU Superchip will be part of a talk by Arm on leadership performance and power efficiency for next-generation cloud computing.

The event begins Sunday, Aug. 27, with a full day of tutorials, including talks from NVIDIA experts on AI inference and chip-to-chip interconnects.
