
New strides in making AI accessible for every enterprise

We’ve been thrilled to see the recent enthusiasm and adoption of Gemini 1.5 Flash — our fastest model to date, optimized for high-volume and high-frequency tasks at scale. Every day, we learn about how people are using Gemini to do amazing things like transcribe audio, understand code errors, and build apps in minutes. Companies like Jasper.ai are also building with Gemini to deliver fantastic experiences for their own users:

“As an AI-first company focused on empowering enterprise marketing teams to get work done faster, it is imperative that we use high quality multimodal models that are cost-effective yet fast, so that our customers can create amazing content quickly and easily and reimagine existing assets,” said Suhail Nimji, Chief Strategy Officer at Jasper.ai. “With Gemini 1.5 Pro and now Flash, we will continue raising the bar for content generation, ensuring adherence to brand voice and marketing guidelines all while improving productivity in the process.”

But we also realize the true value goes beyond just providing great models. It’s about giving you a holistic ecosystem that makes it easy to access, evaluate, and deploy these models at scale. That’s why we’re rolling out updates to help you move into production and expand to global audiences:

  • More models, more possibilities: We expanded our Model Garden with open models like Meta’s Llama 3.1 and Mistral AI’s latest models. We made them available as a fully managed Model-as-a-Service, so you can find the perfect fit for your unique needs without the development overhead. (While we’re on the topic of models, it’s been so much fun to see the buzz around our new experimental version of Gemini 1.5 Pro, available for early testing and feedback in AI Studio. We are loving the creativity you’re unleashing!)
  • Removing language barriers: We’re enabling Gemini 1.5 Flash and Gemini 1.5 Pro to understand and respond in 100+ languages, making it easier for our global community to prompt and receive responses in their native languages.
  • Predictable performance: We understand how critical reliability and performance are. That’s why we are making Provisioned Throughput in Vertex AI, coupled with a 99.5% uptime service level agreement (SLA), generally available.
  • Scale your AI, not your costs: We’ve improved Gemini 1.5 Flash to reduce input costs by up to ~85% and output costs by up to ~80%, starting August 12th, 2024. This, coupled with capabilities like context caching, can significantly reduce the cost and latency of your long-context queries. Using the Batch API instead of standard requests can further optimize costs for latency-insensitive tasks (a minimal sketch of context caching follows this list). With these advantages combined, you can handle massive workloads and take advantage of our 1 million token context window.
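
To make the cost levers above concrete, here is a minimal sketch of context caching with the Vertex AI Python SDK: a large document is cached once, then queried repeatedly without resending the full context on every request. The project ID, bucket URI, and model version string are placeholders, and the exact SDK surface may vary by version.

```python
import datetime

import vertexai
from vertexai.preview import caching
from vertexai.preview.generative_models import GenerativeModel, Part

# Placeholders: substitute your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

# Cache a large, frequently reused context (e.g. a long report) once,
# so follow-up prompts are billed at the lower cached-token rate.
cached_content = caching.CachedContent.create(
    model_name="gemini-1.5-flash-001",
    system_instruction="You answer questions about the attached report.",
    contents=[
        Part.from_uri(
            "gs://your-bucket/large-report.pdf",  # placeholder URI
            mime_type="application/pdf",
        )
    ],
    ttl=datetime.timedelta(hours=1),
)

# Subsequent queries reference the cache instead of resending the document.
model = GenerativeModel.from_cached_content(cached_content=cached_content)
response = model.generate_content("Summarize the key findings in three bullet points.")
print(response.text)
```

For latency-insensitive workloads, Vertex AI batch prediction offers a similar lever: prompts are submitted as an asynchronous job rather than as individual online requests, trading response time for lower cost.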

These enhancements are a direct response to what you, our customers, have been asking for. They represent our ongoing commitment not just to building the best models, but to providing an AI ecosystem that makes enterprise-scale AI accessible. Try out Gemini 1.5 Flash today with more languages, Provisioned Throughput in GA, and a new lower price on Vertex AI starting August 12th, 2024.
