FAANG


Amazon SageMaker AI in 2025, a year in review part 1: Flexible Training Plans and improvements to price performance for inference workloads

In 2025, Amazon SageMaker AI saw dramatic improvements to core infrastructure offerings along four dimensions: capacity, price performance, observability, and…

2 days ago

Build AI workflows on Amazon EKS with Union.ai and Flyte

As artificial intelligence and machine learning (AI/ML) workflows grow in scale and complexity, it becomes harder for practitioners to organize…

3 days ago

Using Google Cloud AI to measure the physics of U.S. freestyle snowboarding and skiing

Nearly every snowboard trick carries a number. A 1080 means three full rotations. A 1440 means four. The convention is…

3 days ago

Unifying Ranking and Generation in Query Auto-Completion via Retrieval-Augmented Generation and Multi-Objective Alignment

Query Auto-Completion (QAC) is a critical feature of modern search systems that improves search efficiency by suggesting completions as users…

4 days ago

Build unified intelligence with Amazon Bedrock AgentCore

Building cohesive and unified customer intelligence across your organization starts with reducing the friction your sales representatives face when toggling…

4 days ago

Powering the next generation of agents with Google Cloud databases

For developers building AI applications, including custom agents and chatbots, the open-source Model Context Protocol (MCP) standard enables your innovations…

4 days ago

Models That Prove Their Own Correctness

How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically…

5 days ago

Asynchronous Verified Semantic Caching for Tiered LLM Architectures

Large language models (LLMs) now sit in the critical path of search, assistance, and agentic workflows, making semantic caching essential…

6 days ago

A Small-Scale System for Autoregressive Program Synthesis Enabling Controlled Experimentation

What research can be pursued with small models trained to complete true programs? Typically, researchers study program synthesis via large…

1 week ago

Scaling LLM Post-Training at Netflix

Baolin Li, Lingyi Liu, Binh Tang, Shaojing Li. Introduction: Pre-training gives Large Language Models (LLMs) broad linguistic ability and general world knowledge, but…

1 week ago