TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining
This paper was accepted to the ACL 2025 main conference as an oral presentation. It was previously accepted at the Scalable Continual Learning for Lifelong Foundation Models (SCLLFM) Workshop at NeurIPS 2024.

Large Language Models (LLMs) trained on historical web data inevitably become outdated. We investigate evaluation strategies and update methods for LLMs as new …