New strides in making AI accessible for every enterprise

We’ve been thrilled to see the recent enthusiasm and adoption of Gemini 1.5 Flash — our fastest model to date, optimized for high-volume and high-frequency tasks at scale. Every day, we learn about how people are using Gemini to do amazing things like transcribe audio, understand code errors, and build apps in minutes. Companies like Jasper.ai are also building with Gemini to deliver fantastic experiences for their own users:

“As an AI-first company focused on empowering enterprise marketing teams to get work done faster, it is imperative that we use high quality multimodal models that are cost-effective yet fast, so that our customers can create amazing content quickly and easily and reimagine existing assets,” said Suhail Nimji, Chief Strategy Officer at Jasper.ai. “With Gemini 1.5 Pro and now Flash, we will continue raising the bar for content generation, ensuring adherence to brand voice and marketing guidelines all while improving productivity in the process.”

But we also realize the true value goes beyond just providing great models. It’s about giving you a holistic ecosystem that makes it easy to access, evaluate, and deploy these models at scale. That’s why we’re rolling out updates to help you move into production and expand to global audiences:

  • More models, more possibilities: We expanded our Model Garden with open models like Meta’s Llama 3.1 and Mistral AI’s latest models. We made them available as a fully managed “Model-as-a-Service,” so you can find the perfect fit for your unique needs without the development overhead. (While we’re on the topic of models, it’s been so much fun to see the buzz around our new experimental version of Gemini 1.5 Pro, available for early testing and feedback in AI Studio. We are loving the creativity you’re unleashing!)
  • Removing language barriers: We’re enabling Gemini 1.5 Flash and Gemini 1.5 Pro to understand and respond in 100+ languages, making it easier for our global community to prompt and receive responses in their native languages.
  • Predictable performance: We understand how critical reliability and performance are. That’s why we are making Provisioned Throughput in Vertex AI, coupled with a 99.5% uptime service level agreement (SLA), generally available.
  • Scale your AI, not your costs: We’ve improved Gemini 1.5 Flash to reduce input costs by up to ~85% and output costs by up to ~80%, starting August 12th, 2024. This, coupled with capabilities like context caching, can significantly reduce the cost and latency of your long-context queries. Using the Batch API instead of standard requests can further optimize costs for latency-insensitive tasks. With these advantages combined, you can handle massive workloads and take advantage of our 1 million token context window.
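To see what reductions of this size mean for a workload, here is a minimal back-of-the-envelope sketch. The per-million-token rates below are placeholders, not actual Vertex AI pricing, and the `discounted_cost` helper is hypothetical; only the ~85%/~80% reduction figures come from the announcement.

```python
def discounted_cost(input_tokens, output_tokens,
                    input_rate=0.35, output_rate=1.05,   # placeholder $ per 1M tokens, NOT real pricing
                    input_cut=0.85, output_cut=0.80):    # announced reductions: ~85% input, ~80% output
    """Estimate a request's cost before and after the announced price cuts."""
    before = (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate
    after = ((input_tokens / 1e6) * input_rate * (1 - input_cut)
             + (output_tokens / 1e6) * output_rate * (1 - output_cut))
    return before, after

# A long-context request: 1M input tokens, 10k output tokens.
before, after = discounted_cost(1_000_000, 10_000)
```

Because long-context requests are dominated by input tokens, the ~85% input reduction drives most of the savings in this scenario; context caching and batching would lower the effective cost further.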

These enhancements are a direct response to what you, our customers, have been asking for. They represent our ongoing commitment not just to building the best models, but to providing an AI ecosystem that makes enterprise-scale AI accessible. Try out Gemini 1.5 Flash today with more languages, Provisioned Throughput in GA, and a new lower price on Vertex AI starting August 12th, 2024.
