
Advances in private training for production on-device language models

Posted by Zheng Xu, Research Scientist, and Yanxiang Zhang, Software Engineer, Google

Language models (LMs) trained to predict the next word given input text are the key technology for many applications [1, 2]. In Gboard, LMs are used to improve users’ typing experience by supporting features like next word prediction (NWP), Smart Compose, smart completion …


Orchestrate Vertex AI’s PaLM and Gemini APIs with Workflows

Introduction

Everyone is excited about generative AI (gen AI) nowadays, and rightfully so. You might be generating text with PaLM 2 or Gemini Pro, generating images with Imagen 2, translating code from one language to another with Codey, or describing images and videos with Gemini Pro Vision. No matter how you’re using gen AI, at the …

DALL-E 3 on NightCafe – Your Questions Answered

Prompt used: complimentary_colors, great composition, Looking out the window, manhattan_penthouse, rainy_moody_night, beautiful twinkling city view, Room filled with plants, complimentary_colors, by_artist_”anime”, warm light, Dan Mumford, film_grain, Victo Ngai, Concept Art, magazine_print_grain, Fantasy Art, Johan Grenier, Van Gogh

OpenAI’s DALL-E 3 is now available to use on NightCafe! You asked, we listened. Still, this release didn’t …

Keyframer: Empowering Animation Design using Large Language Models

Large language models (LLMs) have the potential to impact a wide range of creative domains, as exemplified in popular text-to-image generators like DALL·E and Midjourney. However, the application of LLMs to motion-based visual design has not yet been explored and presents novel challenges, such as how users might effectively describe motion in natural language. Further, …