
Stable Chat research preview and participation in the DEFCON AI Village

On July 21st, 2023, we released a powerful new open-access large language model. At the time of launch, it was the best open LLM in the industry, capable of intricate reasoning, understanding linguistic subtleties, and solving complex mathematics problems and similar high-value tasks.

We invited AI safety researchers and developers to help us iterate on our technology and improve its safety and performance. However, evaluating these models requires significant computing power that is beyond the reach of most everyday researchers. So today, we are announcing two initiatives to widen the availability of our best models:

  1. Stable Chat: a free website that enables AI safety researchers and enthusiasts to interactively evaluate our best LLMs’ responses and provide feedback on their safety and usefulness.

  2. Our best model will be featured in the White House-sponsored red-teaming contest at the “AI Village” at DEFCON 31 in Las Vegas, August 10-13, 2023, to test its limits.

Stable Chat research preview

Today, we are launching the Stable Chat research preview, a new web interface that empowers the AI community to evaluate our large language models interactively. Through Stable Chat, researchers can provide feedback on the safety and quality of the models’ responses and flag biased or harmful content to help us improve these open models.

As part of our efforts at Stability AI to build the world’s most trusted language models, we’ve set up a research-purpose-only website to test and improve our technology. We will continue to add new models as our research rapidly progresses. Please avoid using this site for real-world applications or commercial purposes.

We invite you to try Stable Chat. Users can create a free account or log in using a Gmail account.

If you encounter a biased or harmful output, please report it using the flag icon.

Stability AI’s best model will be featured at DEFCON 31

This August, the White House-sponsored red-teaming event at DEFCON 31 will feature our open language model alongside models from other companies. Attendees will probe these models for vulnerabilities, biases, and safety risks. Their findings will help us and the community build safer AI models and demonstrate the importance of independent evaluation for AI safety and accountability. We are committed to promoting transparency in AI, and our participation in DEFCON deepens our collaboration with external security researchers.
