
Safeguarding Freedom

How Defense Efforts Align with Human Rights

Palantir's Founding Connection to Human Rights

Palantir has its origins and identity in the defense of the values and traditions of liberal democratic societies. Our company was founded in response to the 9/11 attacks with the mission of supporting bedrock defense and intelligence institutions without compromising the protection of the …


Improve governance of models with Amazon SageMaker unified Model Cards and Model Registry

You can now register machine learning (ML) models in Amazon SageMaker Model Registry with Amazon SageMaker Model Cards attached, making it straightforward to manage governance information for specific model versions directly in the registry. Model cards are an essential component of registered ML models, providing a standardized way to document …
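A minimal sketch of what registering a model version together with its model card might look like via boto3's `create_model_package` request, which accepts a `ModelCard` field in newer SDK versions. The group name, ECR image, and S3 path below are placeholders, and the actual API call is left commented out:

```python
import json

# Hypothetical model card content; the sections shown (model_overview,
# intended_uses) follow the SageMaker model card document structure.
model_card_content = {
    "model_overview": {
        "model_description": "XGBoost churn classifier",
        "model_owner": "ml-platform-team",
    },
    "intended_uses": {
        "purpose_of_model": "Predict customer churn for retention campaigns",
    },
}

# Request payload registering a model version in Model Registry with its
# model card attached. All resource names/URIs are placeholders.
create_model_package_args = {
    "ModelPackageGroupName": "churn-models",
    "ModelApprovalStatus": "PendingManualApproval",
    "InferenceSpecification": {
        "Containers": [
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
                "ModelDataUrl": "s3://example-bucket/churn/model.tar.gz",
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    "ModelCard": {
        "ModelCardContent": json.dumps(model_card_content),
        "ModelCardStatus": "Draft",
    },
}

# In a real environment you would then call:
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.create_model_package(**create_model_package_args)
print(create_model_package_args["ModelCard"]["ModelCardStatus"])
```

Because the card travels with the model package, each registered version carries its own governance record rather than a separately tracked document.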


Data loading best practices for AI/ML inference on GKE

As AI models increase in sophistication, the volume of model data needed to serve them grows as well. Loading models, weights, and the frameworks required to serve them for inference can add seconds or even minutes of scaling delay, affecting both costs and the end-user experience. For example, inference servers such as Triton, Text Generation Inference …

Japan Develops Next-Generation Drug Design, Healthcare Robotics and Digital Health Platforms

To provide high-quality medical care to its population — around 30% of whom are 65 or older — Japan is pursuing sovereign AI initiatives supporting nearly every aspect of healthcare. AI tools trained on country-specific data and local compute infrastructure are supercharging the abilities of Japan’s clinicians and researchers so they can care for patients, …


Virtual Personas for Language Models via an Anthology of Backstories

We introduce Anthology, a method for conditioning LLMs to representative, consistent, and diverse virtual personas by generating and utilizing naturalistic backstories with rich details of individual values and experience. What does it mean for large language models (LLMs) to be trained on massive text corpora, collectively produced by millions of distinctive human authors? …
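The prompting pattern behind this kind of backstory-based persona conditioning can be sketched as follows. The backstory text, question, and prompt template here are illustrative assumptions, not taken from the Anthology work itself:

```python
# Sketch of persona conditioning: a naturalistic first-person backstory is
# prepended to a survey question so the model answers "in character".
# Sampling many generated backstories yields a diverse panel of personas.

backstory = (
    "I grew up in a small town in Ohio, the youngest of three. "
    "I worked as a nurse for twenty years before retiring, and I "
    "spend most weekends volunteering at the local food bank."
)

question = "On a scale of 1-5, how much do you trust your local government?"

def build_persona_prompt(backstory: str, question: str) -> str:
    """Assemble a prompt that conditions the model on a virtual persona."""
    return (
        f"The following is a person describing themselves:\n{backstory}\n\n"
        f"Answering as this person, respond to the question:\n{question}"
    )

prompt = build_persona_prompt(backstory, question)
# `prompt` would then be sent to an LLM (API call omitted in this sketch).
print(prompt.splitlines()[0])
```

The key idea is that the rich, free-text backstory, rather than a handful of demographic attributes, is what anchors the persona, so repeated queries against the same backstory stay consistent.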