
How Wayfair is reaching MLOps excellence with Vertex AI

Editor’s note: In part one of this blog, Wayfair shared how it supports each of its 30 million active customers using machine learning (ML). Wayfair’s Vinay Narayana, Head of ML Engineering, Bas Geerdink, Lead ML Engineer, and Christian Rehm, Senior Machine Learning Engineer, take us on a deeper dive into the ways Wayfair’s data scientists …

Founders and tech leaders share their experiences in “Startup Stories” podcast

From some angles, many startup founders grapple with broadly similar questions, such as “should I use serverless?”, “how do I manage my data?”, or “do I have a use case for Web3?” But the deeper you probe, the more every startup’s rise becomes unique, from the early moments among founders, to hiring …


Running AlphaFold batch inference with Vertex AI Pipelines

Today, to accelerate research in the bio-pharma space, from the creation of treatments for diseases to the production of new synthetic biomaterials, we are announcing a new Vertex AI solution that demonstrates how to use Vertex AI Pipelines to run DeepMind’s AlphaFold protein structure predictions at scale.  Once a protein’s structure is determined and its …
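To make the mechanics concrete, here is a minimal sketch of submitting a compiled pipeline to Vertex AI Pipelines with the google-cloud-aiplatform SDK. The spec filename and the parameter names (sequence_path, output_dir) are illustrative placeholders, not the actual interface of the AlphaFold solution.

```python
# Submit a (hypothetical) AlphaFold batch-inference pipeline run to
# Vertex AI Pipelines using the google-cloud-aiplatform SDK.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="alphafold-batch-inference",
    template_path="alphafold_pipeline.json",   # compiled KFP pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={
        # Hypothetical parameters: a FASTA file of protein sequences
        # and an output location for the predicted structures.
        "sequence_path": "gs://my-bucket/sequences.fasta",
        "output_dir": "gs://my-bucket/predictions",
    },
)
job.run(sync=False)  # launch asynchronously; monitor in the Cloud Console
```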


Access larger datasets faster and more easily to accelerate ML model training in Vertex AI

Vertex AI Training delivers a serverless approach to simplify the ML model training experience for customers. As such, training data does not persist on the compute clusters by design. In the past, customers had only Cloud Storage (GCS) or BigQuery (BQ) as storage options. Now, you can also use NFS shares, such as Filestore, for …
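As a rough sketch of what this looks like, the WorkerPoolSpec of a custom training job accepts an nfs_mounts entry pointing at a Filestore share. The project, network, IP, share, and image names below are placeholders, and the exact fields should be checked against the current Vertex AI Training docs.

```python
# Mount a Filestore NFS share into a Vertex AI custom training job.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-8"},
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    # The share is mounted under /mnt/nfs/<mount_point> inside the
    # container, so training code can read it like a local filesystem.
    "nfs_mounts": [{
        "server": "10.0.0.2",    # Filestore instance IP
        "path": "/my_share",     # exported share name
        "mount_point": "data",   # mounted at /mnt/nfs/data
    }],
}]

job = aiplatform.CustomJob(
    display_name="train-with-nfs",
    worker_pool_specs=worker_pool_specs,
)
# NFS access requires the job to run on the VPC that hosts Filestore.
job.run(network="projects/123456789/global/networks/my-vpc")
```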

Sharing is caring: How NVIDIA GPU sharing on GKE saves you money

Developers and data scientists are increasingly turning to Google Kubernetes Engine (GKE) to run demanding workloads like machine learning, visualization/rendering and high-performance computing, leveraging GKE’s support for NVIDIA GPUs. In the current economic climate, customers are under pressure to do more with fewer resources, and cost savings are top of mind. To help, in July, …
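For a feel of the workload side, here is a minimal sketch, using the official kubernetes Python client, of a pod that requests a slice of a time-shared GPU. It assumes a node pool already created with GPU time-sharing enabled; the label values and container image are illustrative.

```python
# Request a time-shared NVIDIA GPU slice on GKE via the kubernetes client.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shared-gpu-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        # Schedule onto nodes whose GPUs are configured for time-sharing.
        node_selector={
            "cloud.google.com/gke-gpu-sharing-strategy": "time-sharing",
            "cloud.google.com/gke-max-shared-clients-per-gpu": "2",
        },
        containers=[client.V1Container(
            name="trainer",
            image="nvidia/cuda:11.8.0-base-ubuntu22.04",
            command=["nvidia-smi"],
            # Each pod still requests one logical GPU; GKE multiplexes
            # the physical device across the sharing clients.
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"},
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```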


Scaling heterogeneous graph sampling for GNNs with Google Cloud Dataflow

This blog presents an open-source solution to heterogeneous graph sub-sampling at scale using Google Cloud Dataflow (Dataflow). Dataflow is Google’s publicly available, fully managed environment for running large-scale Apache Beam compute pipelines. Dataflow provides monitoring and observability out of the box and is routinely used to scale production systems to easily handle extreme datasets. …
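As a toy illustration of the idea (not the open-source solution itself), one-hop neighbor sampling over a typed edge list maps naturally onto Beam primitives: group edges by source node, then subsample per edge type. This runs locally with the DirectRunner and on Dataflow with the DataflowRunner.

```python
# One-hop neighbor sampling on a tiny heterogeneous graph with Apache Beam.
import random
import apache_beam as beam

# Edges as (source_node, (edge_type, target_node)) pairs.
EDGES = [
    ("u1", ("buys", "p1")), ("u1", ("buys", "p2")),
    ("u1", ("views", "p3")), ("u2", ("views", "p1")),
]

def sample_neighbors(element, k=2):
    """Keep at most k randomly chosen neighbors per (node, edge_type)."""
    node, neighbors = element
    by_type = {}
    for edge_type, target in neighbors:
        by_type.setdefault(edge_type, []).append(target)
    return node, {t: random.sample(n, min(k, len(n)))
                  for t, n in by_type.items()}

with beam.Pipeline() as p:
    (p
     | beam.Create(EDGES)
     | beam.GroupByKey()            # gather each node's outgoing edges
     | beam.Map(sample_neighbors)   # subsample per edge type
     | beam.Map(print))             # replace with a real sink on Dataflow
```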


Use R to train and deploy machine learning models on Vertex AI

R is one of the most widely used programming languages for statistical computing and machine learning. Many data scientists love it, especially for the rich world of packages from the tidyverse, an opinionated collection of R packages for data science. Besides the tidyverse, there are over 18,000 open-source packages on CRAN, the package repository for R. …
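Vertex AI serves models from custom containers, which is the mechanism that lets an R runtime (for example, a plumber API baked into the image) handle predictions. Below is a minimal sketch of registering and deploying such a container with the Python SDK; the image URI, routes, and port are placeholders.

```python
# Register and deploy a custom-container model (e.g., an R/plumber server).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="r-model",
    # The container bundles R, the trained model, and an HTTP server.
    serving_container_image_uri="gcr.io/my-project/r-serving:latest",
    serving_container_predict_route="/predict",
    serving_container_health_route="/health",
    serving_container_ports=[8080],
)
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[1.5, 2.0, 3.2]]))
```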


How Cohere is accelerating language model training with Google Cloud TPUs

Over the past few years, advances in training large language models (LLMs) have moved natural language processing (NLP) from a bleeding-edge technology that few companies could access, to a powerful component of many common applications. From chatbots to content moderation to categorization, a general rule for NLP is that the larger the model, the greater …


20+ new pipeline operators for BQML

Today we are excited to announce the release of more than twenty new BigQuery and BigQuery ML (BQML) operators for Vertex AI Pipelines that make it easier to operationalize BigQuery and BQML jobs in a Vertex AI pipeline. The first five BigQuery and BQML pipeline components were released earlier this year. These twenty-one new, first-party, …
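A minimal sketch of chaining two of these first-party components, training a BQML model with CREATE MODEL and then running batch prediction, might look like the following. Import paths follow google-cloud-pipeline-components v1, and the dataset and table names are placeholders.

```python
# Chain BQML components in a Vertex AI pipeline: train, then predict.
from kfp.v2 import dsl
from google_cloud_pipeline_components.v1.bigquery import (
    BigqueryCreateModelJobOp,
    BigqueryPredictModelJobOp,
)

@dsl.pipeline(name="bqml-train-and-predict")
def pipeline(project: str = "my-project"):
    train = BigqueryCreateModelJobOp(
        project=project,
        location="US",
        query="""
            CREATE OR REPLACE MODEL `my_dataset.my_model`
            OPTIONS(model_type='logistic_reg', input_label_cols=['label'])
            AS SELECT * FROM `my_dataset.training_data`
        """,
    )
    BigqueryPredictModelJobOp(
        project=project,
        location="US",
        model=train.outputs["model"],          # BQML model artifact
        table_name="my_dataset.scoring_data",  # rows to score
        job_configuration_query={
            "destinationTable": {
                "projectId": project,
                "datasetId": "my_dataset",
                "tableId": "predictions",
            }
        },
    )
```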


Introducing model co-hosting to enable resource sharing among multiple model deployments on Vertex AI

When deploying models to the Vertex AI prediction service, each model is deployed to its own VM by default. To make hosting more cost-effective, we’re excited to introduce model co-hosting in public preview, which allows you to host multiple models on the same VM, resulting in better utilization of memory and computational resources. The …
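Co-hosting works by deploying models into a shared resource pool rather than onto dedicated VMs. Here is a minimal sketch using the Python SDK; the class surface shown follows current google-cloud-aiplatform releases (the exact location may have differed during public preview), and the model IDs and pool name are placeholders.

```python
# Co-host two models on one VM via a shared DeploymentResourcePool.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# One pool = one set of VMs whose CPU/memory all member models share.
pool = aiplatform.DeploymentResourcePool.create(
    deployment_resource_pool_id="shared-pool",
    machine_type="n1-standard-8",
    min_replica_count=1,
    max_replica_count=2,
)

for name, model_id in [("model-a", "1111111111"), ("model-b", "2222222222")]:
    model = aiplatform.Model(model_name=model_id)
    endpoint = aiplatform.Endpoint.create(display_name=f"{name}-endpoint")
    # Deploying into the pool instead of dedicated resources means both
    # models are served from the same underlying VM(s).
    model.deploy(
        endpoint=endpoint,
        deployment_resource_pool=pool,
    )
```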