Categories: AI/ML News

Machine learning masters massive data sets: Algorithm breaks the exabyte barrier

A machine-learning algorithm has demonstrated that it can process data sets far larger than a computer’s available memory by identifying a massive data set’s key features and breaking the data into manageable batches that don’t overwhelm the hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.
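The article doesn’t include the record-setting code, but the underlying idea, keeping only one batch of an enormous matrix in memory at a time while building up its low-rank factors, can be sketched in a few lines. The example below is a minimal, single-node illustration using non-negative matrix factorization with multiplicative updates; the file path, binary layout, and parameter names are assumptions for illustration, not the Laboratory’s implementation.

```python
# Minimal sketch (NOT the Los Alamos code): out-of-core non-negative matrix
# factorization. The data matrix X is read in row batches small enough to fit
# in memory, and the factors W and H are refined with multiplicative updates.
import numpy as np

def batched_nmf(X_path, n_rows, n_cols, rank=8, batch_rows=10_000,
                n_epochs=20, eps=1e-9, seed=0):
    """Approximate an on-disk matrix X (n_rows x n_cols) as W @ H without
    loading X fully into memory. X_path is assumed (hypothetically) to be a
    raw float32 binary file readable with np.memmap."""
    rng = np.random.default_rng(seed)
    X = np.memmap(X_path, dtype=np.float32, mode="r", shape=(n_rows, n_cols))
    W = rng.random((n_rows, rank), dtype=np.float32)
    H = rng.random((rank, n_cols), dtype=np.float32)

    for _ in range(n_epochs):
        # Accumulators for the global H update, built one batch at a time.
        H_num = np.zeros_like(H)
        H_den = np.zeros((rank, rank), dtype=np.float32)
        HHt = H @ H.T
        for start in range(0, n_rows, batch_rows):
            stop = min(start + batch_rows, n_rows)
            Xb = np.asarray(X[start:stop])   # only this batch is held in RAM
            Wb = W[start:stop]               # view into the matching rows of W
            # Multiplicative update for the rows of W in this batch.
            Wb *= (Xb @ H.T) / (Wb @ HHt + eps)
            # Accumulate the statistics needed for the H update.
            H_num += Wb.T @ Xb
            H_den += Wb.T @ Wb
        # Multiplicative update for H using the accumulated sums.
        H *= H_num / (H_den @ H + eps)
    return W, H
```

In a run at the scale described in the article, the factors themselves would also be distributed across the nodes and GPUs of a machine like Summit; the sketch above only illustrates the batching idea that keeps any one processor’s memory from being exceeded.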
Published by AI Generated Robotic Content