
TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining

This paper was accepted to the ACL 2025 main conference as an oral presentation. It was also accepted at the Scalable Continual Learning for Lifelong Foundation Models (SCLLFM) Workshop at NeurIPS 2024.
Large Language Models (LLMs) trained on historical web data inevitably become outdated. We investigate evaluation strategies and update methods for LLMs as new data becomes available. We introduce a web-scale dataset for time-continual pretraining of LLMs derived from 114 dumps of Common Crawl (CC) – orders of magnitude larger than previous continual language modeling benchmarks. We also…
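To make the time-continual setup concrete, below is a minimal sketch of sequential pretraining over chronologically ordered Common Crawl dumps. This is not the paper's released code: the dump IDs and the `load_dump_tokens`, `continue_pretraining`, and `evaluate_held_out` helpers are hypothetical placeholders standing in for whatever data pipeline and trainer is used.

```python
# Minimal sketch (assumed interfaces, not the paper's code):
# continually pretrain a model on Common Crawl dumps in chronological order,
# evaluating on held-out data from every dump after each update.

from typing import Iterable, List, Dict


def load_dump_tokens(dump_id: str) -> Iterable[str]:
    """Hypothetical loader: yields pretraining text for one CC dump."""
    raise NotImplementedError


def continue_pretraining(model, texts: Iterable[str]):
    """Hypothetical update step: continue pretraining the model on new data."""
    return model


def evaluate_held_out(model, dump_ids: List[str]) -> Dict[str, float]:
    """Hypothetical evaluation: loss/perplexity on held-out data per dump."""
    return {d: float("nan") for d in dump_ids}


def time_continual_run(model, dump_ids: List[str]):
    # Process dumps oldest-to-newest; after each update, measure performance
    # on all dumps to track forgetting of past data and gains on newer data.
    history = []
    for dump_id in sorted(dump_ids):
        model = continue_pretraining(model, load_dump_tokens(dump_id))
        history.append(evaluate_held_out(model, dump_ids))
    return model, history
```

The per-step evaluation over all dumps is what distinguishes this setup from ordinary pretraining: it exposes both backward transfer (forgetting older web data) and forward transfer (adapting to newer data) as the model is updated over time.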