New memristor training method slashes AI energy use by six orders of magnitude

In a Nature Communications study, researchers from China have developed an error-aware probabilistic update (EaPU) method that aligns memristor hardware’s noisy updates with neural network training, slashing energy use by nearly six orders of magnitude versus GPUs while boosting accuracy on vision tasks. The study validates EaPU on 180 nm memristor arrays and large-scale simulations.

LTX-2 is amazing: LTX-2 in ComfyUI on an RTX 3060 12GB

My setup: RTX 3060 12GB VRAM + 48GB system RAM. I spent the last couple of days messing around with LTX-2 inside ComfyUI and had an absolute blast. I created short sample scenes for a loose spy story set in a neon-soaked, rainy Dhaka (cyberpunk/Bangla vibes with rainy streets, umbrellas, dramatic reflections, and a mysterious …

The breakthrough that makes robot faces feel less creepy

Humans pay enormous attention to lips during conversation, and robots have struggled badly to keep up. A new robot developed at Columbia Engineering learned realistic lip movements by watching its own reflection and studying human videos online. This allowed it to speak and sing with synchronized facial motion, without being explicitly programmed. Researchers believe this …

ParaRNN: Unlocking Parallel Training of Nonlinear RNNs for Large Language Models

Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature restricts parallel computation, creating a fundamental barrier to scaling. This has led to the dominance of parallelizable architectures like Transformers and, more recently, State Space Models (SSMs). While SSMs achieve efficient parallelization through structured linear recurrences, this linearity constraint limits …
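The parallelization gap the teaser describes comes down to associativity: a linear recurrence h_t = a_t · h_{t-1} + b_t is a composition of affine maps, which is associative and therefore admits a logarithmic-depth parallel scan, while a nonlinear recurrence must be unrolled step by step. A minimal sketch (all function names here are illustrative, not from the paper):

```python
# Sketch: why SSM-style linear recurrences parallelize.
# h_t = a_t * h_{t-1} + b_t is a chain of affine maps; composing
# affine maps is associative, so prefixes can be combined in a tree.

def sequential_scan(a, b, h0=0.0):
    """Plain left-to-right evaluation: O(T) sequential steps."""
    h, out = h0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return out

def combine(p, q):
    """Compose affine map (a1, b1) followed by (a2, b2).
    a2*(a1*h + b1) + b2 = (a1*a2)*h + (a2*b1 + b2) -- still affine."""
    a1, b1 = p
    a2, b2 = q
    return (a1 * a2, a2 * b1 + b2)

def scan_via_composition(a, b, h0=0.0):
    """Inclusive prefix composition under `combine`. Written
    sequentially here for clarity; because `combine` is associative,
    these compositions can run as an O(log T)-depth tree on
    parallel hardware."""
    pairs = list(zip(a, b))
    prefix = [pairs[0]]
    for p in pairs[1:]:
        prefix.append(combine(prefix[-1], p))
    # Apply each composed affine map to the initial state.
    return [at * h0 + bt for at, bt in prefix]
```

Both routes give identical states, which is what makes the scan formulation usable for training; a nonlinear update like h_t = tanh(a_t · h_{t-1} + b_t) has no such closed-form composition, which is the barrier ParaRNN targets.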

Advanced fine-tuning techniques for multi-agent orchestration: Patterns from Amazon at scale

Our work with large enterprise customers and Amazon teams has revealed that high-stakes use cases continue to benefit significantly from advanced large language model (LLM) fine-tuning and post-training techniques. In this post, we show you how fine-tuning enabled a 33% reduction in dangerous medication errors (Amazon Pharmacy), an 80% reduction in engineering human effort (Amazon Global …

Cloud CISO Perspectives: Practical guidance on building with SAIF

Welcome to the first Cloud CISO Perspectives for January 2026. Today, Tom Curry and Anton Chuvakin, from Google Cloud’s Office of the CISO, share our new report on using Google’s Secure AI Framework with Google Cloud capabilities and services to build boldly and responsibly with AI. As with all Cloud CISO Perspectives, the contents of …

Listen Labs raises $69M after viral billboard hiring stunt to scale AI customer interviews

Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over 100 engineers, but competing against Mark Zuckerberg’s $100 million offers seemed impossible. So he spent $5,000 — a fifth of his marketing budget — on a billboard in San Francisco displaying what looked like gibberish: five strings of random numbers. …