
Harvesting hardware: Our approach to carbon-aware fleet deployment

When it comes to managing the infrastructure and AI that powers Google’s products and platforms – from Search to YouTube to Google Cloud – every decision we make has an impact. Traditionally, meeting growing demand for machine capacity means deploying new machines, which carries an associated embodied carbon cost. That’s why we’re working to reduce the embodied carbon impact of our data centers by optimizing machine placement and promoting the reuse of technical infrastructure hardware.

In this post, we shine a spotlight on our hardware harvesting program, an approach to fleet deployment that prioritizes the reuse of existing hardware.

The hardware harvesting program

The concept is simple: As we deploy new machines or components in our fleet, we repurpose older equipment for alternative and/or additional use cases. The harvesting program prioritizes the reuse of existing hardware, which reduces our carbon emissions compared to exclusively buying brand new machines from the market. This program also helps conserve valuable resources and minimize waste, which contributes to a more circular economy. By scrutinizing the carbon impact of deployment decisions, we’re not just reducing emissions — we’re embedding carbon considerations into the very core of our data center machine operations and business decisions.

Hardware harvesting is not without its challenges. For the program to be successful, we need to ensure the harvested machines meet the specific demands of our workloads and our customers’ requirements, which vary depending on the type of machine and its configuration. However, our heterogeneous fleet, with a wide variety of computational, storage, and accelerator machines, gives us the flexibility to find creative solutions that support both our services and our sustainability goals.


Hardware harvesting in action

Google’s harvesting program has already yielded strong benefits. By prioritizing the reuse of existing hardware, we’ve been able to optimize the use of new equipment, reduce our carbon footprint, minimize waste, and lower costs.

For example, in 2024, we needed specific models and configurations of certain components (PCBs, CPUs, motherboards, and HDDs). Rather than buying them new, we harvested them: we migrated configuration-agnostic jobs off machines containing those components and onto more efficient machines, then reclaimed the components from the freed-up hardware. Over the course of the year, the harvesting program helped us reuse over 293,000 components to fulfill new demand, avoid carbon emissions, and reduce costs. Scaling this hardware harvesting approach across Google’s data center infrastructure presents an opportunity for cost, resource, and carbon reduction.
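To make the migrate-then-reclaim idea concrete, here is a minimal, hypothetical sketch of the selection step: given a component demand, pick donor machines whose jobs are configuration-agnostic (and can therefore be migrated elsewhere), then tally the components reclaimed from them. All names, data structures, and the greedy selection policy are illustrative assumptions, not Google’s actual fleet-management system.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    components: dict  # hypothetical inventory, e.g. {"HDD": 12, "CPU": 2}
    jobs_config_agnostic: bool  # True if its jobs can run on any machine type

def plan_harvest(fleet, demand):
    """Greedy sketch: choose donor machines whose jobs can be migrated,
    until the component demand is met (or the fleet is exhausted)."""
    remaining = dict(demand)
    donors = []
    for m in fleet:
        if not m.jobs_config_agnostic:
            continue  # jobs are pinned to this configuration; not a donor
        if any(remaining.get(c, 0) > 0 for c in m.components):
            donors.append(m.name)  # migrate its jobs, then reclaim parts
            for c, qty in m.components.items():
                if c in remaining:
                    remaining[c] = max(0, remaining[c] - qty)
        if all(v == 0 for v in remaining.values()):
            break  # demand fulfilled from harvested components
    return donors, remaining

# Example: we need 30 HDDs; only machines with migratable jobs qualify.
fleet = [
    Machine("rack1-m01", {"HDD": 12, "CPU": 2}, True),
    Machine("rack1-m02", {"HDD": 12, "CPU": 2}, False),  # pinned; skipped
    Machine("rack2-m07", {"HDD": 24}, True),
]
donors, unmet = plan_harvest(fleet, {"HDD": 30})
```

In this toy run, the planner selects the two donor machines with migratable jobs and skips the pinned one; a real system would also weigh migration cost, machine efficiency, and workload SLOs before choosing donors.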

Looking ahead: Leading by example

Harvesting is just one example of how we’re embedding carbon considerations into our data center practices. We believe that these initiatives will play a role in helping us achieve our company-wide net-zero goal and build a more sustainable future for cloud computing and AI. Read our 2024 Environmental Report to learn more about our sustainability practices.

As we continue to refine our strategies, we aim to lead by example and encourage other companies, especially those in the cloud computing industry, to consider similar approaches.
