Palantir Is Still Not a Data Company (Palantir Explained, #7)

A refresher on the most common misconceptions about Palantir, what we do, and how we work

Editor’s Note: This is the seventh post in Palantir Explained, a series that explores a range of topics, including our approach to privacy, security, AI/ML safety, and more. In this post, we revisit some of the themes discussed in our …

Build a serverless audio summarization solution with Amazon Bedrock and Whisper

Recordings of business meetings, interviews, and customer interactions have become essential for preserving important information. However, transcribing and summarizing these recordings manually is often time-consuming and labor-intensive. With the progress in generative AI and automatic speech recognition (ASR), automated solutions have emerged to make this process faster and more efficient. Protecting personally identifiable information (PII) …
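
The post’s full serverless pipeline (Lambda functions, S3 triggers, PII redaction) goes beyond this teaser, but the core transcribe-then-summarize pattern it describes looks roughly like the minimal sketch below, assuming a local Whisper model for ASR and Anthropic Claude on Amazon Bedrock for the summary; the Bedrock model ID and prompt are illustrative assumptions, not taken from the post.

```python
# Minimal sketch of the transcribe-then-summarize pattern, assuming a local
# Whisper model for ASR and Anthropic Claude on Amazon Bedrock for the summary.
# The Bedrock model ID and prompt are assumptions, not from the post.
import json

import boto3
import whisper  # pip install openai-whisper

def summarize_recording(audio_path: str) -> str:
    # 1) Automatic speech recognition with Whisper.
    asr_model = whisper.load_model("base")
    transcript = asr_model.transcribe(audio_path)["text"]

    # 2) Summarize the transcript with a Bedrock-hosted model.
    bedrock = boto3.client("bedrock-runtime")
    request = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": f"Summarize this meeting transcript:\n\n{transcript}"}
        ],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps(request),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```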

Accelerate your gen AI: Deploy Llama4 & DeepSeek on AI Hypercomputer with new recipes

The pace of innovation in open-source AI is breathtaking, with models like Meta’s Llama4 and DeepSeek AI’s DeepSeek. However, deploying and optimizing large, powerful models can be complex and resource-intensive. Developers and machine learning (ML) engineers need reproducible, verified recipes that articulate the steps for trying out the models on available accelerators. Today, we’re excited …
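
For a flavor of what such a recipe boils down to once the accelerators are provisioned, here is a minimal sketch of serving one of these open models with vLLM; the Hugging Face model ID and sampling settings are assumptions, not taken from the recipes themselves.

```python
# Minimal sketch: offline batch inference with vLLM for an open-weight model.
# The Hugging Face model ID and sampling settings are assumptions; the actual
# recipes cover accelerator-specific serving configurations in far more depth.
from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B")  # assumed model ID
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```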

Modernize and migrate on-premises fraud detection machine learning workflows to Amazon SageMaker

This post is co-written with Qing Chen and Mark Sinclair from Radial. Radial is the largest 3PL fulfillment provider, also offering integrated payment, fraud detection, and omnichannel solutions to mid-market and enterprise brands. With over 30 years of industry expertise, Radial tailors its services and solutions to align strategically with each brand’s unique needs. Radial …

Google is a Leader in the 2025 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms report

Today, we are excited to announce that Gartner® has named Google as a Leader in the 2025 Magic Quadrant™ for Data Science and Machine Learning (DSML) Platforms report. We believe that this recognition is a reflection of continued innovations to address the needs of data science and machine learning teams, as well as new types …

Proxy-FDA: Proxy-Based Feature Distribution Alignment for Fine-Tuning Vision Foundation Models Without Forgetting

Vision foundation models pre-trained on massive data encode rich representations of real-world concepts, which can be adapted to downstream tasks by fine-tuning. However, fine-tuning foundation models on one task often leads to the issue of concept forgetting on other tasks. Recent methods of robust fine-tuning aim to mitigate forgetting of prior knowledge without affecting the …
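
To make the idea concrete, here is a generic sketch of regularizing fine-tuning so that features stay close to those of the frozen pre-trained backbone. This is a simplified stand-in for illustration, not the paper’s Proxy-FDA algorithm; the L2 alignment term, the loss weight, and all names are assumptions.

```python
# Generic illustration of mitigating concept forgetting during fine-tuning by
# keeping the fine-tuned backbone's features close to the frozen pre-trained
# backbone's features. NOT the paper's Proxy-FDA algorithm; the L2 alignment
# term, align_weight, and all names are simplifying assumptions.
import torch
import torch.nn.functional as F

def fine_tune_step(backbone, frozen_backbone, head, batch, optimizer,
                   align_weight: float = 0.1) -> float:
    images, labels = batch
    feats = backbone(images)                 # features being fine-tuned
    with torch.no_grad():
        ref_feats = frozen_backbone(images)  # frozen pre-trained reference

    task_loss = F.cross_entropy(head(feats), labels)
    # Penalize drift of fine-tuned features away from the pre-trained ones.
    align_loss = F.mse_loss(feats, ref_feats)

    loss = task_loss + align_weight * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: the frozen anchor is just a copy of the pre-trained backbone,
# e.g. frozen_backbone = copy.deepcopy(backbone).eval().requires_grad_(False)
```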

Impel enhances automotive dealership customer experience with fine-tuned LLMs on Amazon SageMaker

This post is co-written with Tatia Tsmindashvili, Ana Kolkhidashvili, Guram Dentoshvili, Dachi Choladze from Impel. Impel transforms automotive retail through an AI-powered customer lifecycle management solution that drives dealership operations and customer interactions. Their core product, Sales AI, provides all-day personalized customer engagement, handling vehicle-specific questions and automotive trade-in and financing inquiries. By replacing their …

Announcing new MCP integrations to Google Cloud Databases to enable AI-assisted development

Last month at Google Cloud Next ‘25, we announced MCP Toolbox for Databases to make it easier to connect generative AI agents to databases and automate core enterprise workflows. MCP Toolbox for Databases (Toolbox) is an open-source Model Context Protocol (MCP) server that allows developers to easily connect gen AI agents to enterprise data. It …
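
As a rough sketch of how an agent framework might consume tools from a running Toolbox server in Python, the snippet below assumes the toolbox-core client package and its ToolboxClient/load_toolset interface; the server URL and toolset name are placeholders, so consult the Toolbox documentation for the current API.

```python
# Rough sketch of loading Toolbox-served tools from Python. It assumes the
# toolbox-core client package (pip install toolbox-core) and its ToolboxClient
# and load_toolset interface; the server URL and toolset name are placeholders,
# so consult the Toolbox documentation for the current API.
import asyncio

from toolbox_core import ToolboxClient  # assumed client package

async def main():
    # Toolbox runs as a separate MCP server whose tools.yaml maps database
    # sources and parameterized SQL statements to named tools.
    async with ToolboxClient("http://127.0.0.1:5000") as client:
        tools = await client.load_toolset("my-toolset")  # placeholder name
        for tool in tools:
            print(tool)  # each tool is a callable an agent can invoke

asyncio.run(main())
```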

Analyzing the Effect of Linguistic Similarity on Cross-Lingual Transfer: Tasks and Input Representations Matter

Cross-lingual transfer is a popular approach to increase the amount of training data for NLP tasks in a low-resource context. However, the best strategy to decide which cross-lingual data to include is unclear. Prior research often focuses on a small set of languages from a few language families or a single task. It is still …
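
For readers new to the setup, a minimal zero-shot cross-lingual transfer baseline looks like the sketch below: fine-tune a multilingual encoder on a high-resource language, then evaluate on a low-resource one with no target-language training data. The model, dataset, and language choices are illustrative assumptions, not the paper’s experimental design.

```python
# Minimal zero-shot cross-lingual transfer baseline: fine-tune a multilingual
# encoder on English NLI data, then evaluate on Swahili with no Swahili
# training data. Model, dataset, and language choices are illustrative
# assumptions, not the paper's experimental setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # multilingual encoder (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=3)

def encode(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

train_en = load_dataset("xnli", "en", split="train[:2000]").map(encode, batched=True)
eval_sw = load_dataset("xnli", "sw", split="validation").map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=train_en,
    eval_dataset=eval_sw,
)
trainer.train()
print(trainer.evaluate())  # eval loss on Swahili, trained only on English
```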