FAANG

Improving simulations of clouds and their effects on climate

Posted by Tapio Schneider, Visiting Researcher, and Yi-fan Chen, Engineering Lead, Google Research. Today's climate models successfully capture broad global…

2 years ago

Amazon EC2 DL2q instance for cost-efficient, high-performance AI inference is now generally available

This is a guest post by A.K. Roy from Qualcomm AI. Amazon Elastic Compute Cloud (Amazon EC2) DL2q instances, powered…

2 years ago

Open sourcing Project Guideline: A platform for computer vision accessibility technology

Posted by Dave Hawkey, Software Engineer, Google Research. Two years ago we announced Project Guideline, a collaboration between Google Research…

2 years ago

Palantir and the NHS

Today, the NHS has chosen Palantir, supported by a group of companies including Accenture, PwC, NECS and Carnall Farrar, to…

2 years ago

Incremental Processing using Netflix Maestro and Apache Iceberg

by Jun He, Yingyi Zhang, and Pawan Dixit. Incremental processing is an approach to processing new or changed data in workflows. The…

2 years ago

Your Black Friday observability checklist

Black Friday—and really, the entire Cyber Week—is a time when you want your applications running at peak performance without completely…

2 years ago

How Amazon Music uses SageMaker with NVIDIA to optimize ML training and inference performance and cost

In the dynamic world of streaming on Amazon Music, every search for a song, podcast, or playlist holds a story,…

2 years ago

SSD vs. NVMe: What’s the difference?

Recent technological advancements in data storage have prompted businesses and consumers to move away from traditional hard disk drives (HDDs)…

2 years ago

Use Amazon SageMaker Studio to build a RAG question answering solution with Llama 2, LangChain, and Pinecone for fast experimentation

Retrieval Augmented Generation (RAG) allows you to provide a large language model (LLM) with access to data from external knowledge…

2 years ago