AI scaling laws: Universal guide estimates how LLMs will perform based on smaller models in same family

When researchers build large language models (LLMs), they aim to maximize performance under a given computational and financial budget. Since training a model can cost millions of dollars, developers need to be judicious about cost-impacting decisions, such as the model architecture, optimizers, and training datasets, before committing to a model.
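The core idea behind scaling laws is that a model family's loss often follows a smooth power law in scale, so a curve fit to small, cheap models can forecast a large one. The sketch below illustrates this with made-up parameter counts and losses and the simple form L(N) ≈ a · N^(-b); real scaling-law fits typically include additional terms (e.g., an irreducible loss floor) and also account for data and compute.

```python
import numpy as np

# Hypothetical measurements from small models in one family
# (parameter counts and final losses are illustrative, not real data).
params = np.array([1e7, 3e7, 1e8, 3e8])   # model sizes N
losses = np.array([4.2, 3.6, 3.1, 2.7])   # observed training losses L

# A power law L(N) = a * N^(-b) is linear in log-log space:
#   log L = log a - b * log N
# so we can fit it with ordinary least squares on the logs.
slope, intercept = np.polyfit(np.log(params), np.log(losses), 1)

# Extrapolate to a model ten times larger than the biggest one trained.
n_target = 3e9
predicted_loss = np.exp(intercept + slope * np.log(n_target))
print(f"exponent b = {-slope:.3f}, predicted loss at 3e9 params = {predicted_loss:.2f}")
```

The fitted slope is negative (loss falls as models grow), and the extrapolated loss at the larger scale lands below the smallest observed loss, which is exactly the kind of before-you-train estimate that makes scaling laws useful for budgeting.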

How to ensure high-quality synthetic wireless data when real-world data runs dry

To train artificial intelligence (AI) models, researchers need good data, and lots of it. However, much of the available real-world data has already been used, leading scientists to generate synthetic data instead. While generated data helps solve the problem of quantity, it does not always match real data in quality, and assessing that quality has largely been overlooked.