Categories: AI/ML News

Benchmarking framework reveals major safety risks of using AI in lab experiments

While artificial intelligence (AI) models have proved useful in some areas of science, such as predicting 3D protein structures, a new study shows that they should not yet be trusted in many lab experiments. The study, published in Nature Machine Intelligence, found that all of the large language models (LLMs) and vision-language models (VLMs) tested fell short on lab safety knowledge. Over-relying on these AI models for help in lab experiments can put researchers at risk.
AI Generated Robotic Content
