
Announcing the Jamba 1.5 Model Family from AI21 Labs on Vertex AI

Today, we’re announcing the launch of the Jamba 1.5 Model Family — AI21 Labs’ new family of open models — in public preview on Vertex AI Model Garden. The model family includes two models designed for scaled enterprise applications: Jamba 1.5 Mini: AI21’s most efficient and lightweight model, engineered for speed and efficiency in tasks …

Hydrogels can play Pong by ‘remembering’ previous patterns of electrical stimulation

Non-living hydrogels can play the video game Pong and improve their gameplay with more experience, researchers report. The researchers hooked hydrogels up to a virtual game environment and then applied a feedback loop between the hydrogel’s paddle — encoded by the distribution of charged particles within the hydrogel — and the ball’s position — encoded …

Researchers unleash machine learning in designing advanced lattice structures

Characterized by their intricate patterns and hierarchical designs, lattice structures hold immense potential for revolutionizing industries ranging from aerospace to biomedical engineering, due to their versatility and customizability. However, the complexity of these structures and the vast design space they encompass have posed significant hurdles for engineers and scientists, and traditional methods of design exploration …


Enhance call center efficiency using batch inference for transcript summarization with Amazon Bedrock

Today, we are excited to announce general availability of batch inference for Amazon Bedrock. This new feature enables organizations to process large volumes of data when interacting with foundation models (FMs), addressing a critical need in various industries, including call center operations. Call center transcript summarization has become an essential task for businesses seeking to …
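Bedrock batch inference jobs read a JSONL file from S3 in which each line pairs a unique `recordId` with the `modelInput` payload the target foundation model expects. A minimal sketch of building that input file for transcript summarization follows — the record layout reflects the documented batch format, but the model payload shape (an Anthropic messages-style body), prompt wording, and file name are illustrative assumptions, not code from the announcement:

```python
import json

def build_batch_records(transcripts):
    """Build JSONL records for a Bedrock batch inference job.

    Each output line pairs a caller-supplied recordId with a modelInput
    payload. The payload here assumes an Anthropic messages-style body;
    other model families expect different fields.
    """
    lines = []
    for i, transcript in enumerate(transcripts):
        record = {
            "recordId": f"call-{i:06d}",  # must be unique within the job
            "modelInput": {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [
                    {
                        "role": "user",
                        "content": f"Summarize this call transcript:\n\n{transcript}",
                    }
                ],
            },
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Write the input file that the batch job will read from S3.
records = build_batch_records(
    ["Agent: Hello, how can I help? Customer: I have a question about my bill."]
)
with open("batch_input.jsonl", "w") as f:
    f.write(records)
```

From there, the file would be uploaded to S3 and the job started with the Bedrock `create_model_invocation_job` API, which returns results as a matching JSONL file of `recordId`/`modelOutput` pairs.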


Run your AI inference applications on Cloud Run with NVIDIA GPUs

Developers love Cloud Run for its simplicity, fast autoscaling, scale-to-zero capabilities, and pay-per-use pricing. Those same benefits come into play for real-time inference apps serving open gen AI models. That’s why today, we’re adding support for NVIDIA L4 GPUs to Cloud Run, in preview. This opens the door to many new use cases to Cloud …

Lightweight Champ: NVIDIA Releases Small Language Model With State-of-the-Art Accuracy

Developers of generative AI typically face a tradeoff between model size and accuracy. But a new language model released by NVIDIA delivers the best of both, providing state-of-the-art accuracy in a compact form factor. Mistral-NeMo-Minitron 8B — a miniaturized version of the open Mistral NeMo 12B model released by Mistral AI and NVIDIA last month …