TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining

This paper was accepted to the ACL 2025 main conference as an oral presentation.
This paper was accepted at the Scalable Continual Learning for Lifelong Foundation Models (SCLLFM) Workshop at NeurIPS 2024.
Large Language Models (LLMs) trained on historical web data inevitably become outdated. We investigate evaluation strategies and update methods for LLMs as new data becomes available. We introduce a web-scale dataset for time-continual pretraining of LLMs derived from 114 dumps of Common Crawl (CC) – orders of magnitude larger than previous continual language modeling benchmarks. We also…
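The setup is easier to see in code. Below is a minimal, hedged sketch of what time-continual pretraining with replay over a sequence of Common Crawl dumps could look like; every function name, the dump IDs, and the replay ratio are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the paper's code): a minimal time-continual pretraining
# loop over timestamped Common Crawl dumps. load_dump, train_on, and the
# replay ratio are placeholders for a real data pipeline and trainer.

from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    tokens_seen: int = 0
    dumps_trained: list[str] = field(default_factory=list)

def load_dump(dump_id: str) -> list[str]:
    # Placeholder: a real pipeline would stream and filter documents
    # from the named CC dump.
    return [f"document from {dump_id}"]

def train_on(ckpt: Checkpoint, docs: list[str]) -> Checkpoint:
    # Placeholder: one round of continued pretraining from the previous
    # checkpoint, rather than training from scratch on each dump.
    ckpt.tokens_seen += len(docs)
    return ckpt

# 114 monthly dumps, oldest first (dump IDs are illustrative).
dumps = [f"CC-MAIN-{2013 + i // 12}-{i % 12 + 1:02d}" for i in range(114)]

ckpt = Checkpoint()
replay_pool: list[str] = []

for dump_id in dumps:
    new_docs = load_dump(dump_id)
    # Simple replay: mix some older data back in with the new dump, one of
    # several update strategies a benchmark like this can compare (others
    # include retraining from scratch or training on new data only).
    batch = new_docs + replay_pool[: len(new_docs) // 2]
    ckpt = train_on(ckpt, batch)
    replay_pool.extend(new_docs)
    ckpt.dumps_trained.append(dump_id)
    # Evaluation would then score the checkpoint on held-out data from every
    # dump seen so far, measuring adaptation on recent dumps and forgetting
    # on older ones.
```

The point of such a loop is the trade-off it exposes: updating only on new data tracks the present but forgets the past, while heavy replay preserves old knowledge at the cost of slower adaptation.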