NVIDIA Awards Up to $60,000 Research Fellowships to PhD Students

For more than two decades, the NVIDIA Graduate Fellowship Program has supported graduate students doing outstanding work relevant to NVIDIA technologies. Today, the program announced its latest awards of up to $60,000 each to 10 PhD students involved in research spanning all areas of computing innovation.

Selected from a highly competitive applicant pool, the awardees will participate in a summer internship preceding the fellowship year. Their work puts them at the forefront of accelerated computing — tackling projects in deep learning, robotics, computer vision, computer graphics, circuits, autonomous vehicles and programming systems.

“Our fellowship recipients are among the most talented graduate students in the world,” said NVIDIA Chief Scientist Bill Dally. “They’re working on some of the most important problems in computer science, and we’re delighted to support their research.”

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

The 2024-2025 fellowship recipients are:

  • Bailey Miller, Carnegie Mellon University — Developing practical Monte Carlo methods for physical simulation that match the scalability and robustness of Monte Carlo rendering algorithms, focused on designing accelerated random-walk methods that are amenable to differentiation and use volumetric models to handle intractably complex geometry.
  • Nicklas Hansen, University of California, San Diego — Developing data-driven world models that enable robots to understand and interact with the real world.
  • Payman Behnam, Georgia Institute of Technology — Researching high-performance, low-latency and energy-efficient design at the intersection of machine learning and systems.
  • Reinhard Wiesmayr, ETH Zürich — Researching machine learning-assisted signal processing methods for wireless communication systems.
  • Songwei Ge, University of Maryland, College Park — Focusing on generative models applied to images and videos, and working on developing synthesis methods for content generation, controllable creation processes that allow humans to provide guidance, and approachable interfaces to facilitate human engagement.
  • Toluwanimi Odemuyiwa, University of California, Davis — Using the language of tensor algebra to design an end-to-end abstraction and framework for graph algorithms, from platform-agnostic, declarative descriptions of the computation to platform-specific implementations.
  • Yiming Li, New York University — Developing robust, efficient and scalable AI algorithms for 3D scene parsing and decision-making from high-dimensional sensory input, as well as curating large-scale datasets to effectively train and verify these algorithms for autonomous robots.
  • Yue Zhao, University of Texas at Austin — Developing a video-centric foundation model that will be trained on workstation-grade hardware and deployed on everyday devices such as laptops and phones to enable widespread use, training and collaborative sharing.
  • Zhiqi Li, Nanjing University — Developing vision-centric perception methods for autonomous driving.
  • Zihao Ye, University of Washington — Focusing on machine learning compilation, serving systems for foundation models and sparse computation.

We also acknowledge the 2024-2025 fellowship finalists:

  • Andrew Szot, Georgia Institute of Technology
  • Bobbi Yogatama, University of Wisconsin-Madison
  • Guanzhi Wang, California Institute of Technology
  • Sehoon Kim, University of California, Berkeley
  • Xi Deng, Cornell University