Filtered data stops openly available AI models from performing dangerous tasks, study finds

Researchers from the University of Oxford, EleutherAI, and the UK AI Security Institute have reported a major advance in safeguarding open-weight language models. By filtering out potentially harmful knowledge during training, the researchers were able to build models that resist subsequent malicious updates—especially valuable in sensitive domains such as biothreat research.
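The core idea is to screen the pretraining corpus before any learning happens, rather than trying to strip capabilities out of a finished model. Below is a minimal, illustrative sketch of that kind of data filter; the blocklist terms and the `is_flagged` helper are assumptions for demonstration, not the researchers' actual pipeline.

```python
# Illustrative sketch of pre-training data filtering (not the study's actual method).
# Documents matching a blocklist of sensitive terms are dropped before training,
# so the resulting model never sees that material in the first place.

from typing import Iterable, Iterator

# Hypothetical blocklist; a real system would use vetted term lists and/or classifiers.
BLOCKED_TERMS = {"toxin synthesis", "pathogen enhancement"}

def is_flagged(document: str, blocked_terms: set[str] = BLOCKED_TERMS) -> bool:
    """Return True if the document mentions any blocked term (case-insensitive)."""
    text = document.lower()
    return any(term in text for term in blocked_terms)

def filter_corpus(documents: Iterable[str]) -> Iterator[str]:
    """Yield only documents that pass the filter; flagged ones never reach training."""
    for doc in documents:
        if not is_flagged(doc):
            yield doc

if __name__ == "__main__":
    corpus = [
        "A tutorial on training image classifiers.",
        "Step-by-step notes on pathogen enhancement techniques.",
    ]
    kept = list(filter_corpus(corpus))
    print(f"Kept {len(kept)} of {len(corpus)} documents.")
```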

From terabytes to insights: Real-world AI observability architecture

GUEST: Consider maintaining and developing an e-commerce platform that processes millions of transactions every minute, generating large amounts of telemetry data, including metrics, logs and traces across multiple microservices. When critical incidents occur, on-call engineers face the daunting task of sifting through an ocean of data to unravel r…

Robotic drummer gradually acquires human-like behaviors

Humanoid robots, which have a human-like body structure, have so far been tested primarily on manual tasks that entail supporting humans in their daily activities, such as carrying objects, collecting samples in hazardous environments, supporting older adults or acting as physical therapy assistants. In contrast, their potential for completing expressive physical tasks rooted in creative …