FAANG

Entropy-Preserving Reinforcement Learning

Policy gradient algorithms have driven many recent advancements in language model reasoning. An appealing property is their ability to learn…

3 weeks ago

How Ring scales global customer support with Amazon Bedrock Knowledge Bases

This post is cowritten with David Kim and Premjit Singh from Ring. Scaling self-service support globally presents challenges beyond translation.…

3 weeks ago

Less Gaussians, Texture More: 4K Feed-Forward Textured Splatting

Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases.…

3 weeks ago

To Infinity and Beyond: Tool-Use Unlocks Length Generalization in State Space Models

State Space Models (SSMs) have become the leading alternative to Transformers for sequence modeling. Their primary advantage is efficiency in…

4 weeks ago

How to build production-ready AI agents with Google-managed MCP servers

As developers build AI agents with more sophisticated reasoning systems, they require higher-quality fuel, in the form of enterprise data and…

4 weeks ago

Drop-In Perceptual Optimization for 3D Gaussian Splatting

Despite their output being ultimately consumed by human viewers, 3D Gaussian Splatting (3DGS) methods often rely on ad-hoc combinations of…

4 weeks ago

Frontend Engineering at Palantir: Redefining Real-Time Map Collaboration

How we built lightweight, real-time map collaboration for teams operating at the edge. About This Series: Frontend engineering at Palantir goes far beyond building…

4 weeks ago

Run Generative AI inference with Amazon Bedrock in Asia Pacific (New Zealand)

Kia ora! Customers in New Zealand have been asking for access to foundation models (FMs) on Amazon Bedrock from their…

4 weeks ago

The new AI literacy: Insights from student developers

AI has made it easier than ever for student developers to work efficiently, tackle harder problems, and pursue ambitious projects.…

4 weeks ago

Exclusive Self Attention

We introduce exclusive self attention (XSA), a simple modification of self attention (SA) that improves the Transformer’s sequence modeling performance. The…

4 weeks ago