OpenAI Data Partnerships

Working together to create open-source and private datasets for AI training.

2 years ago

Five scalability pitfalls to avoid with your Kafka application

Apache Kafka is a high-performance, highly scalable event streaming platform. To unlock Kafka’s full potential, you need to carefully consider…

2 years ago

Responsible AI at Google Research: Context in AI Research (CAIR)

Posted by Katherine Heller, Research Scientist, Google Research, on behalf of the CAIR Team. Artificial intelligence (AI) and related machine…

2 years ago

Promote pipelines in a multi-environment setup using Amazon SageMaker Model Registry, HashiCorp Terraform, GitHub, and Jenkins CI/CD

Building out a machine learning operations (MLOps) platform in the rapidly evolving landscape of artificial intelligence (AI) and machine learning…

2 years ago

3 new ways Duet AI can help you get things done fast in the Google Cloud console

Editorial note: Whether you're new to Google Cloud or an experienced user, read on to learn how Duet AI can…

2 years ago

SeMAnD: Self-Supervised Anomaly Detection in Multimodal Geospatial Datasets

We propose a self-supervised anomaly detection technique, called SeMAnD, to detect geometric anomalies in multimodal geospatial datasets.…

2 years ago

The Next Step in Personalization: Dynamic Sizzles

Authors: Bruce Wobbe, Leticia Kwok. Additional credits: Sanford Holsapple, Eugene Lok, Jeremy Kelly. At Netflix, we strive to give our members an excellent personalized experience, helping…

2 years ago

Building on a year of focus to help IBM Power clients grow with hybrid cloud and AI

At the beginning of the year, we laid out a new strategy for IBM Power under the leadership of Ken…

2 years ago

Build a medical imaging AI inference pipeline with MONAI Deploy on AWS

This post is co-written with Ming (Melvin) Qin, David Bericat, and Brad Genereaux from NVIDIA. Medical imaging AI researchers and…

2 years ago

Introducing Accurate Quantized Training (AQT) for accelerated ML training on TPU v5e

AI models continue to get bigger, requiring larger compute clusters delivering exa-FLOPs (10^18 FLOPs) of compute. While large-scale models continue…

2 years ago