NVIDIA Introduces CLIMB: A Framework for Iterative Data Mixture Optimization in Language Model Pretraining

Challenges in Constructing Effective Pretraining Data Mixtures

As large language models (LLMs) scale in size and capability, the choice of pretraining data remains a critical determinant of downstream performance. Most LLMs are trained on large, web-scale datasets such as Common Crawl, which provide broad coverage but lack explicit domain labels. This introduces difficulties in curating mixtures that balance general knowledge with domain-specific expertise.

Manual dataset curation, as seen in efforts like The Pile, is labor-intensive and does not scale well. Moreover, the nonlinear relationship between data composition and model performance makes it non-trivial to determine what proportions of domain data are optimal. These constraints motivate the need for automated, scalable, and adaptive data selection methods.

CLIMB: An Iterative Framework for Data Mixture Discovery

To address this, NVIDIA researchers propose CLIMB—CLustering-based Iterative Data Mixture Bootstrapping—a framework that automates the discovery and refinement of data mixtures for language model pretraining. CLIMB combines unsupervised clustering with iterative optimization to identify mixtures that are well-suited for general or domain-specific objectives.

The pipeline begins by embedding large-scale text data into a semantic space using pretrained encoders. K-means clustering is then applied to organize the data into coherent groups, which are pruned and merged based on content quality and redundancy. This forms the basis for constructing candidate mixtures.
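A minimal sketch of this clustering stage is shown below. It assumes sentence-transformers as the encoder, scikit-learn's KMeans, and a simple size-based pruning heuristic; the paper's exact encoder, cluster count, and pruning/merging criteria may differ.

```python
# Sketch of the clustering stage: embed documents, group them with k-means,
# and prune small clusters before building candidate mixtures.
# Assumptions (not from the paper): sentence-transformers encoder,
# scikit-learn KMeans, size-based pruning.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

documents = [
    "Photosynthesis converts light energy into chemical energy.",
    "def add(a, b): return a + b",
    "The stock market closed higher on Friday.",
    "Quantum entanglement links the states of distant particles.",
]

# 1. Embed documents into a semantic space with a pretrained encoder.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = encoder.encode(documents, normalize_embeddings=True)

# 2. Cluster the embeddings into coherent groups.
n_clusters = min(20, len(documents))  # CLIMB operates at far larger scale
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

# 3. Prune: keep clusters above a minimum size (a stand-in for the paper's
#    quality- and redundancy-based pruning and merging).
sizes = np.bincount(labels, minlength=n_clusters)
kept_clusters = [c for c in range(n_clusters) if sizes[c] >= 1]
print(dict(zip(range(n_clusters), sizes)))
```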

Subsequently, CLIMB uses proxy models to evaluate sampled mixtures and fits a regression-based predictor (e.g., LightGBM) to estimate mixture performance. An iterative bootstrapping procedure progressively refines the sampling space, prioritizing high-performing configurations. This allows CLIMB to converge on an effective data mixture under a fixed compute budget.
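The loop below is a simplified sketch of this predictor-guided search, assuming Dirichlet sampling of candidate mixtures, LightGBM as the regressor, and a hypothetical `train_proxy_and_eval` function that stands in for training a proxy model on a sampled mixture and returning its validation score.

```python
# Sketch of predictor-guided mixture search.
# Assumptions: Dirichlet sampling of mixtures, LightGBM regressor,
# and a hypothetical train_proxy_and_eval() replacing real proxy training.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n_clusters = 20
_true_utility = rng.random(n_clusters)  # synthetic stand-in for real proxy results

def sample_mixtures(n, alpha):
    """Sample n candidate mixture-weight vectors from a Dirichlet prior."""
    return rng.dirichlet(np.full(n_clusters, alpha), size=n)

def train_proxy_and_eval(weights):
    """Hypothetical: train a small proxy model on this mixture, return its score."""
    return float(weights @ _true_utility)

X, y = [], []
alpha = 1.0
for iteration in range(3):                    # a few bootstrapping rounds
    candidates = sample_mixtures(16, alpha)
    if X:                                      # from round 2 on, rank with the predictor
        predictor = lgb.LGBMRegressor(n_estimators=200, min_child_samples=2)
        predictor.fit(np.array(X), np.array(y))
        scores = predictor.predict(candidates)
        candidates = candidates[np.argsort(scores)[-8:]]  # keep top-ranked mixtures
    for w in candidates:                       # evaluate survivors with proxy runs
        X.append(w)
        y.append(train_proxy_and_eval(w))
    alpha *= 0.5                               # illustrative: lower alpha gives sparser samples

best_mixture = np.array(X)[int(np.argmax(y))]
print(best_mixture.round(3))
```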

Technical Details and Design Considerations

The optimization process is framed as a bi-level problem: at the lower level, proxy models are trained on candidate mixtures; at the upper level, a predictor is learned to approximate performance outcomes. This predictor guides further sampling and pruning, enabling efficient exploration of the mixture space.
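Schematically (the notation here is ours, not necessarily the paper's), the bi-level problem can be written as:

```latex
% Schematic bi-level formulation:
% alpha -- mixture weights over K clusters (a point on the simplex),
% theta -- proxy-model parameters, f -- downstream/validation performance.
\begin{aligned}
\max_{\alpha \in \Delta^{K}} \quad & f\bigl(\theta^{*}(\alpha)\bigr) \\
\text{s.t.} \quad & \theta^{*}(\alpha) = \arg\min_{\theta} \; \mathcal{L}_{\text{train}}\bigl(\theta;\, \alpha\bigr)
\end{aligned}
```

In practice, the upper level is not solved directly: the learned regressor acts as a cheap surrogate for the expensive quantity f(θ*(α)), so only a limited number of proxy training runs are needed.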

CLIMB supports sparsity in mixture weights, encouraging the discovery of compact, domain-relevant data subsets. The use of clustering over embeddings—rather than token-level features—ensures semantic coherence within clusters. The iterative refinement is structured to balance breadth (search space coverage) with depth (predictive accuracy), and ablation studies confirm that careful compute allocation across iterations improves convergence and final performance.
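One simple way to realize such sparsity, shown here as an illustrative assumption rather than the paper's exact mechanism, is to zero out clusters whose weight falls below a threshold and renormalize the remainder:

```python
# Sketch of sparsifying a mixture-weight vector: drop negligible clusters,
# then renormalize so the weights still sum to 1.
# The Dirichlet prior and threshold value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.dirichlet(np.full(20, 0.5))        # candidate mixture over 20 clusters

threshold = 0.02
sparse = np.where(weights >= threshold, weights, 0.0)  # zero out tiny contributions
sparse /= sparse.sum()                                  # renormalize to a valid mixture

print(f"non-zero clusters: {int((sparse > 0).sum())} / {len(sparse)}")
```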

The framework also exhibits robustness across proxy model sizes and cluster granularities. While larger proxy models yield slightly better predictions, even smaller models preserve key structural trends. Similarly, CLIMB is relatively insensitive to initial cluster count, provided it is within a reasonable range.

Empirical Evaluation and Observations

CLIMB was evaluated on several general reasoning tasks, including PIQA, ARC (Easy and Challenge), HellaSwag, and WinoGrande. A 1B-parameter model trained on CLIMB-discovered mixtures achieved an average accuracy of 60.41%, outperforming comparable baselines such as DoReMi and RegMix.

When extended to 400B-token pretraining, this 1B model outperformed Llama-3.2-1B by 2.0% on a broad suite of benchmarks. Similarly, in the sub-500M model category, CLIMB-based pretraining led to consistent improvements over models like SmolLM and TinyLlama.

Domain specialization further highlights CLIMB’s utility. In targeted MMLU benchmarks across STEM, humanities, and social sciences, CLIMB-trained models outperformed both random selection and exhaustive search baselines. Performance improved consistently from one iteration to the next, indicating effective guidance from the predictive model.

To facilitate reproducibility and further research, NVIDIA has released two resources:

ClimbLab: A 1.2-trillion-token corpus organized into 20 semantic clusters.

ClimbMix: A 400-billion-token optimized mixture for efficient pretraining.

Models trained on ClimbMix outperform those trained on datasets like Nemotron-CC and SmolLM under equivalent token budgets, demonstrating improved scaling characteristics.
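Both corpora are distributed via the Hugging Face Hub and can be explored with the `datasets` library. The repository IDs below are assumptions based on the announcement; confirm the exact names and configurations on the dataset cards. Streaming avoids downloading the full trillion-token corpus up front.

```python
# Sketch of loading the released corpora with the Hugging Face `datasets` library.
# Repository IDs are assumed from the announcement; verify on the dataset cards.
from datasets import load_dataset

climblab = load_dataset("nvidia/ClimbLab", split="train", streaming=True)  # assumed repo ID
climbmix = load_dataset("nvidia/ClimbMix", split="train", streaming=True)  # assumed repo ID

for example in climblab.take(3):  # peek at a few documents
    print(example)
```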

Conclusion

CLIMB presents a systematic approach for optimizing data mixtures in LLM pretraining. By combining semantic clustering with proxy-based iterative search, it avoids reliance on manual annotations or static heuristics. The method supports both generalist and specialist training goals and adapts to varying compute and data constraints.

This framework contributes to ongoing efforts in data-centric AI by offering a scalable and principled alternative to handcrafted data pipelines. Its empirical performance underscores the importance of data mixture optimization in maximizing model utility, particularly under fixed resource budgets.

Check out the Paper, ClimbLab on Hugging Face, and ClimbMix on Hugging Face.
