
Faster food: How Gemini helps restaurants thrive through multimodal visual analysis

Businesses across all industries are turning to AI for a clear view of their operations in real time. Whether it’s a busy factory floor, a crowded retail space, or a bustling restaurant kitchen, the ability to monitor the work environment helps businesses be more proactive and, ultimately, more efficient. Gemini 1.5 Pro’s multimodal and long context …
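
As a rough illustration of the kind of multimodal call the post describes, here is a minimal sketch using the Vertex AI Python SDK; the project ID, Cloud Storage path, and prompt are placeholders, not details from the article.

```python
# Minimal sketch of multimodal visual analysis with Gemini 1.5 Pro on Vertex AI.
# The project, bucket path, and prompt below are illustrative assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content([
    Part.from_uri("gs://my-bucket/kitchen-frame.jpg", mime_type="image/jpeg"),
    "How many orders are waiting at the pass, and is any station backed up?",
])
print(response.text)
```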


Speed up your AI inference workloads with new NVIDIA-powered capabilities in Amazon SageMaker

This post is co-written with Abhishek Sawarkar, Eliuth Triana, Jiahong Liu, and Kshitiz Gupta from NVIDIA. At re:Invent 2024, we are excited to announce new capabilities to speed up your AI inference workloads with NVIDIA accelerated computing and software offerings on Amazon SageMaker. These advancements build upon our collaboration with NVIDIA, which includes adding support …


Vertex AI grounding: More reliable models, fewer hallucinations

At the Gemini for Work event in September, we showcased how generative AI is transforming the way enterprises work. Across all the customer innovation we saw at the event, one thing was clear: if last year was about gen AI exploration and experimentation, this year is about achieving real-world impact. Gen AI has the …
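
Since the post’s focus is Vertex AI grounding, a minimal sketch of grounding a Gemini response with Google Search via the Vertex AI Python SDK is shown below; the project ID and prompt are assumed placeholders, not code from the article.

```python
# Illustrative sketch: ground a Gemini answer with Google Search on Vertex AI
# so responses are backed by retrieved sources rather than model memory alone.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())
response = model.generate_content(
    "What were the key announcements at the Gemini for Work event?",
    tools=[search_tool],
)
print(response.text)
```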


Cohere Rerank 3.5 is now available in Amazon Bedrock through Rerank API

We are excited to announce the availability of Cohere’s advanced reranking model Rerank 3.5 through our new Rerank API in Amazon Bedrock. This powerful reranking model enables AWS customers to significantly improve their search relevance and content ranking capabilities. This model is also available for Amazon Bedrock Knowledge Base users. By incorporating Cohere’s Rerank 3.5 …
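
A hedged sketch of what a call to the new Rerank API might look like with boto3 follows; the region, model ARN, and documents are placeholders, and the request shape is based on the bedrock-agent-runtime rerank operation rather than on code from the post.

```python
# Sketch: rerank candidate documents against a query with the Bedrock Rerank API.
# Region, model ARN, and documents are placeholder assumptions.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

documents = [
    "Our return policy allows refunds within 30 days.",
    "Shipping usually takes 3-5 business days.",
    "Refunds are issued to the original payment method.",
]

response = client.rerank(
    queries=[{"type": "TEXT", "textQuery": {"text": "How do refunds work?"}}],
    sources=[
        {
            "type": "INLINE",
            "inlineDocumentSource": {
                "type": "TEXT",
                "textDocument": {"text": doc},
            },
        }
        for doc in documents
    ],
    rerankingConfiguration={
        "type": "BEDROCK_RERANKING_MODEL",
        "bedrockRerankingConfiguration": {
            "modelConfiguration": {
                # Placeholder ARN for the Cohere Rerank 3.5 model.
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/cohere.rerank-v3-5:0"
            },
            "numberOfResults": 2,
        },
    },
)
for result in response["results"]:
    print(result["index"], result["relevanceScore"])
```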


Easily deploy and manage hundreds of LoRA adapters with SageMaker efficient multi-adapter inference

The new efficient multi-adapter inference feature of Amazon SageMaker unlocks exciting possibilities for customers using fine-tuned models. This capability integrates with SageMaker inference components to allow you to deploy and manage hundreds of fine-tuned Low-Rank Adaptation (LoRA) adapters through SageMaker APIs. Multi-adapter inference handles the registration of fine-tuned adapters with a base model and dynamically …
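
The sketch below illustrates the pattern the post describes: registering a LoRA adapter as its own inference component that references the component hosting the shared base model. The endpoint and component names, the artifact path, and the BaseInferenceComponentName/ArtifactUrl field names are assumptions drawn from the post’s description, not verified code.

```python
# Hedged sketch: register a LoRA adapter as an inference component attached to
# an existing base-model inference component on a SageMaker endpoint.
# Names, paths, and Specification fields below are assumptions.
import boto3

sm = boto3.client("sagemaker")

sm.create_inference_component(
    InferenceComponentName="llama-customer-support-adapter",
    EndpointName="llama-base-endpoint",
    Specification={
        # Reference the inference component that hosts the shared base model.
        "BaseInferenceComponentName": "llama-base-model",
        "Container": {
            # S3 location of the fine-tuned LoRA adapter artifacts (placeholder).
            "ArtifactUrl": "s3://my-bucket/adapters/customer-support/",
        },
    },
)
```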


Search enterprise data assets using LLMs backed by knowledge graphs

Enterprises face challenges in accessing data assets scattered across various sources because of the increasing complexity of managing vast amounts of data. Traditional search methods often fail to provide comprehensive and contextual results, particularly for unstructured data or complex queries. Search solutions in modern big data management must facilitate efficient and accurate search of …
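
As a toy illustration of the pattern (not code from the post), the sketch below builds a small in-memory knowledge graph with rdflib and runs the kind of SPARQL query an LLM could generate from a user’s question; the query results would then be fed back to the LLM as grounded context for the answer.

```python
# Illustrative sketch: a tiny knowledge graph of data assets, queried with the
# sort of SPARQL an LLM might produce from "Where is the 2024 sales data and
# who owns it?". All entities and predicates are made-up examples.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.sales_2024, RDF.type, EX.Dataset))
g.add((EX.sales_2024, EX.ownedBy, EX.finance_team))
g.add((EX.sales_2024, EX.storedIn, Literal("s3://data-lake/sales/2024/")))

query = """
SELECT ?location ?owner WHERE {
    ?asset a <http://example.org/Dataset> ;
           <http://example.org/storedIn> ?location ;
           <http://example.org/ownedBy> ?owner .
}
"""
for row in g.query(query):
    print(f"location={row.location} owner={row.owner}")
```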

Speech is More Than Words: Do Speech-to-Text Translation Systems Leverage Prosody?

This paper was accepted at the Ninth Conference on Machine Translation (WMT24) at EMNLP 2024. The prosody of a spoken utterance, including features like stress, intonation and rhythm, can significantly affect the underlying semantics, and as a consequence can also affect its textual translation. Nevertheless, prosody is rarely studied within the context of speech-to-text translation …


Ethical AI in Defense Decision Support Systems

Ethical AI in Defense Decision Support Systems (Defense AI Ethics, #2) Editor’s Note: In a previous post in our series on ethical defense and AI, we examined the ethical implications of technology providers in the defense domain. In this post, we delve into Palantir’s work on Decision Support Systems and how it reinforces critical military ethics …


Unleash your Salesforce data using the Amazon Q Salesforce Online connector

Thousands of companies worldwide use Salesforce to manage their sales, marketing, customer service, and other business operations. The Salesforce cloud-based platform centralizes customer information and interactions across the organization, providing sales reps, marketers, and support agents with a unified 360-degree view of each customer. With Salesforce at the heart of their business, companies accumulate vast …


Deploy Meta Llama 3.1 models cost-effectively in Amazon SageMaker JumpStart with AWS Inferentia and AWS Trainium

We’re excited to announce the availability of Meta Llama 3.1 8B and 70B inference support on AWS Trainium and AWS Inferentia instances in Amazon SageMaker JumpStart. Meta Llama 3.1 multilingual large language models (LLMs) are a collection of pre-trained and instruction-tuned generative models. Trainium and Inferentia, enabled by the AWS Neuron software development kit …
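
A minimal sketch of deploying one of these models with the SageMaker Python SDK is shown below; the JumpStart model ID and the Inferentia2 instance type are assumptions, so check the JumpStart catalog for the exact Neuron-compatible identifiers.

```python
# Hedged sketch: deploy a Llama 3.1 JumpStart model on an Inferentia2 instance.
# The model_id and instance_type are assumptions, not taken from the post.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="meta-textgeneration-llama-3-1-8b-instruct",  # assumed model ID
    instance_type="ml.inf2.24xlarge",                      # AWS Inferentia2
)
predictor = model.deploy(accept_eula=True)

response = predictor.predict({
    "inputs": "Summarize the benefits of AWS Inferentia in one sentence.",
    "parameters": {"max_new_tokens": 128},
})
print(response)

# Clean up the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```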