Categories: AI/ML News

Benchmarking framework reveals major safety risks of using AI in lab experiments

While artificial intelligence (AI) models have proved useful in some areas of science, such as predicting 3D protein structures, a new study shows that they should not yet be trusted in many lab experiments. The study, published in Nature Machine Intelligence, found that all of the large language models (LLMs) and vision-language models (VLMs) tested fell short on lab safety knowledge. Overtrusting these AI models for help in lab experiments can put researchers at risk.
AI Generated Robotic Content
