ÜberWeb: Insights from Multilingual Curation for a 20-Trillion-Token Dataset


[1]Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. "UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining." *The Eleventh International Conference on Learning Representations* (2023)

[2]Xue, Linting, Constant, Noah, Roberts, Adam, Kale, Mihir, Al-Rfou, Rami, Siddhant, Aditya, Barua, Aditya, Raffel, Colin. "mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer." *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies* (2021): 483–498 Link

[3]Qwen Team. "Qwen2.5 Technical Report." (2024) Link

[4]Ortiz Suárez, Pedro Javier, Romary, Laurent, Sagot, Benoît. "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages." *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics* (2020): 1703–1714 Link

[5]IBM Granite Team. "Granite 3.0 Language Models." (2024) Link

[6]Bakouch, Elie, Ben Allal, Loubna, Lozhkov, Anton, Tazi, Nouamane, Tunstall, Lewis, Patiño, Carlos Miguel, Beeching, Edward, Roucher, Aymeric, Reedi, Aksel Joonas, Gallouédec, Quentin, Rasul, Kashif, Habib, Nathan, Fourrier, Clémentine, Kydlicek, Hynek, Penedo, Guilherme, Larcher, Hugo, Morlon, Mathieu, Srivastav, Vaibhav, Lochner, Joshua, Nguyen, Xuan-Son, Raffel, Colin, von Werra, Leandro, Wolf, Thomas. "SmolLM3: smol, multilingual, long-context reasoner." (2025)

[7]Laurençon, Hugo, Saulnier, Lucile, et al. "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset." *Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022) Datasets and Benchmarks Track* (2022) Link

[8]Feng, Fangxiaoyu, Yang, Yinfei, Cer, Daniel, Arivazhagan, Naveen, Wang, Wei. "Language-agnostic BERT Sentence Embedding." *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2022): 870–883

[9]Reimers, Nils, Gurevych, Iryna. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)* (2019): 3982–3992

[10]Wang, Liang, Yang, Nan, Huang, Xiaolong, Jiao, Binxing, Yang, Linjun, Jiang, Daxin, Majumder, Rangan, Wei, Furu. "Text Embeddings by Weakly-Supervised Contrastive Pre-training." *arXiv preprint arXiv:2212.03533* (2022)

[11]Kudugunta, Sneha, Caswell, Isaac, Zhang, Biao, Garcia, Xavier, Xin, Derrick, Kusupati, Aditya, Stella, Romi, Bapna, Ankur, Firat, Orhan. "MADLAD-400: A Multilingual and Document-Level Large Audited Dataset." *Advances in Neural Information Processing Systems* 36 (2023): 67284–67296

[12]Guilherme Penedo, Hynek Kydlíček, Vinko Sabolčec, Bettina Messmer, Negar Foroutan, Amir Hossein Kargaran, Colin Raffel, Martin Jaggi, Leandro Von Werra, Thomas Wolf. "FineWeb2: One Pipeline to Scale Them All — Adapting Pre-Training Data Processing to Every Language." *Second Conference on Language Modeling* (2025) Link

[13]Pfeiffer, Jonas, Vulić, Ivan, Gurevych, Iryna, Ruder, Sebastian. "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer." *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)* (2020): 7654–7673 Link

[14]Pfeiffer, Jonas, Goyal, Naman, Lin, Xi Victoria, Li, Xian, Cross, James, Riedel, Sebastian, Artetxe, Mikel. "Lifting the Curse of Multilinguality by Pre-training Modular Transformers." *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies* (2022): 3479–3495 Link

[15]Hestness, Joel, Narang, Sharan, Ardalani, Newsha, Diamos, Gregory, Jun, Heewoo, Kianinejad, Hassan, Patwary, Md Mostofa Ali, Yang, Yang, Zhou, Yanqi. "Deep Learning Scaling is Predictable, Empirically." *arXiv preprint arXiv:1712.00409* (2017) Link

[16]Khan, Mohammed, Mehta, Priyam, Sankar, Ananth, Kumaravelan, Umashankar, Doddapaneni, Sumanth, Jain, Sparsh, Kunchukuttan, Anoop, Kumar, Pratyush, Dabre, Raj, Khapra, Mitesh M, others. "IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages." *Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2024): 15831–15879

[17]Singh, Shivalika, Vargus, Freddie, Dsouza, Daniel, Karlsson, Börje F., Mahendiran, Abinaya, Ko, Wei-Yin, Shandilya, Herumb, Patel, Jay, Mataciunas, Deividas, O’Mahony, Laura, others. "Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning." *Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2024): 11521–11567 Link

[18]Xue, Linting, Barua, Aditya, Constant, Noah, Al-Rfou, Rami, Narang, Sharan, Kale, Mihir, Roberts, Adam, Raffel, Colin. "ByT5: Towards a token-free future with pre-trained byte-to-byte models." *Transactions of the Association for Computational Linguistics* 10 (2022): 291–306 Link

[19]Iacob, Andrei, et al. "DEPT: Decoupled Embeddings for Pre-training Transformers." *arXiv preprint arXiv:2501.00987* (2025) Link

[20]Wang, Zirui, Lipton, Zachary C., Tsvetkov, Yulia. "Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment." *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)* (2020): 4373–4388 Link

[21]Foroutan, Negar, Teiletche, Paul, Tarun, Ayush Kumar, Bosselut, Antoine. "Revisiting Multilingual Data Mixtures in Language Model Pretraining." *arXiv preprint arXiv:2510.25947* (2025) Link

[22]Guilherme Penedo, Hynek Kydlíček, Amir Hossein Kargaran, Leandro von Werra. "FineTranslations." *Hugging Face repository* (2026)

[23]Conneau, Alexis, Khandelwal, Kartikay, Goyal, Naman, Chaudhary, Vishrav, Wenzek, Guillaume, Guzmán, Francisco, Grave, Edouard, Ott, Myle, Zettlemoyer, Luke, Stoyanov, Veselin. "Unsupervised Cross-lingual Representation Learning at Scale." *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics* (2020): 8440–8451

[24]Bandarkar, Lucas, Liang, Davis, Muller, Benjamin, Artetxe, Mikel, Shukla, Satya Narayan, Husa, Donald, Goyal, Naman, Krishnan, Abhinandan, Zettlemoyer, Luke, Khabsa, Madian. "The Belebele Benchmark: A Parallel Reading Comprehension Dataset in 122 Language Variants." *Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2024): 749–775

[25]He, Yifei, Benhaim, Alon, Patra, Barun, Vaddamanu, Praneetha, Ahuja, Sanchit, Chopra, Parul, Chaudhary, Vishrav, Zhao, Han, Song, Xia. "Scaling laws for multilingual language models." *Findings of the Association for Computational Linguistics: ACL 2025* (2025): 4257–4273

[26]Habib, Nathan, Fourrier, Clémentine, Kydlíček, Hynek, Wolf, Thomas, Tunstall, Lewis. "LightEval: A lightweight framework for LLM evaluation." (2023) Link

[27]Thunder Research Group. "Korean Benchmarks." (2025)

[28]Lai, Viet, Nguyen, Chien, Ngo, Nghia, Nguyễn, Thuật, Dernoncourt, Franck, Rossi, Ryan, Nguyen, Thien. "Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback." *Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations* (2023): 318–327

[29]Singh, Shivalika, Romanou, Angelika, Fourrier, Clémentine, Adelani, David Ifeoluwa, Ngui, Jian Gang, Vila-Suero, Daniel, Limkonchotiwat, Peerat, Marchisio, Kelly, Leong, Wei Qi, Susanto, Yosephine, others. "Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation." *Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2025): 18761–18799

[30]Longpre, Shayne, Kudugunta, Sneha, Muennighoff, Niklas, Hsu, I-Hung, Caswell, Isaac, Pentland, Alex, Arik, Sercan, Lee, Chen-Yu, Ebrahimi, Sayna. "ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality." *arXiv preprint arXiv:2510.22037* (2025) Link

[31]Kaplan, Jared, McCandlish, Sam, Henighan, Tom, Brown, Tom B, Chess, Benjamin, Child, Rewon, Gray, Scott, Radford, Alec, Wu, Jeffrey, Amodei, Dario. "Scaling Laws for Neural Language Models." *arXiv preprint arXiv:2001.08361* (2020) Link

[32]Hoffmann, Jordan, Borgeaud, Sebastian, Mensch, Arthur, Buchatskaya, Elena, Cai, Trevor, Rutherford, Eliza, de Las Casas, Diego, Hendricks, Lisa Anne, Welbl, Johannes, Clark, Aidan, others. "Training compute-optimal large language models." *Proceedings of the 36th International Conference on Neural Information Processing Systems* (2022): 30016–30030

[33]Fernandes, Patrick, Ghorbani, Behrooz, Garcia, Xavier, Freitag, Markus, Firat, Orhan. "Scaling laws for multilingual neural machine translation." *International Conference on Machine Learning* (2023): 10053–10071

[34]Lai, Viet, Nguyen, Chien, Ngo, Nghia, Nguyen, Thuat, Dernoncourt, Franck, Rossi, Ryan, Nguyen, Thien. "Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback." *Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations* (2023): 318–327 Link

[35]Blevins, Terra, Limisiewicz, Tomasz, Gururangan, Suchin, Li, Margaret, Gonen, Hila, Smith, Noah A, Zettlemoyer, Luke. "Breaking the curse of multilinguality with cross-lingual expert language models." *Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing* (2024): 10822–10837 Link

[36]Chang, Tyler A., Arnett, Catherine, Tu, Zhuowen, Bergen, Benjamin K.. "When Is Multilinguality a Curse? Language Modeling for 250 High- and Low-Resource Languages." *Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing* (2024): 4074–4096 Link

[37]Khanna, Saurabh, Li, Xinxu. "Invisible Languages of the LLM Universe." *arXiv preprint arXiv:2510.11557* (2025) Link

[38]Li, Jeffrey, Fang, Alex, Smyrnis, Georgios, Ivgi, Maor, Jordan, Matt, Gadre, Samir Yitzhak, Bansal, Hritik, Guha, Etash, Keh, Sedrick Scott, Arora, Kushal, others. "DataComp-LM: In search of the next generation of training sets for language models." *Advances in Neural Information Processing Systems* 37 (2024): 14200–14282

[39]Singh, Varun, Krauss, Lucas, Jaghouar, Sami, Sirovatka, Matej, Goddard, Charles, Obeid, Fares, Ong, Jack Min, Straube, Jannik, Fern, Harley, Aria, others. "Arcee Trinity Large Technical Report." (2026) Link

[40]Joshi, Siddharth, Yin, Haoli, Adiga, Rishabh, Monti, Ricardo, Carranza, Aldo, Fang, Alex, Deng, Alvin, Abbas, Amro, Larsen, Brett, Blakeney, Cody, others. "DatBench: Discriminative, Faithful, and Efficient VLM Evaluations." *arXiv preprint arXiv:2601.02316* (2026)

[41]Merrick, Luke, Fang, Alex, Carranza, Aldo, Deng, Alvin, Abbas, Amro, Larsen, Brett, Blakeney, Cody, Teh, Darren, Schwab, David, Pan, Fan, others. "Luxical: High-Speed Lexical-Dense Text Embeddings." *arXiv preprint arXiv:2512.09015* (2025)

[42]Seto, Skyler, Ter Hoeve, Maartje, de Seyssel, Maureen, Grangier, David. "Assessing the Role of Data Quality in Training Bilingual Language Models." *Findings of the Association for Computational Linguistics: EMNLP 2025* (2025): 22694–22720 Link

[43]Su, Dan, Kong, Kezhi, Lin, Ying, Jennings, Joseph, Norick, Brandon, Kliegl, Markus, Patwary, Mostofa, Shoeybi, Mohammad, Catanzaro, Bryan. "Nemotron-CC: Transforming Common Crawl into a refined long-horizon pretraining dataset." *Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2025): 2459–2475

[44]Olmo, Team, Ettinger, Allyson, Bertsch, Amanda, Kuehl, Bailey, Graham, David, Heineman, David, Groeneveld, Dirk, Brahman, Faeze, Timbers, Finbarr, Ivison, Hamish, others. "Olmo 3." *arXiv preprint arXiv:2512.13961* (2025)

[45]Touvron, Hugo, Lavril, Thibaut, Izacard, Gautier, Martinet, Xavier, Lachaux, Marie-Anne, Lacroix, Timothée, Rozière, Baptiste, Goyal, Naman, Hambro, Eric, Azhar, Faisal, others. "LLaMA: Open and efficient foundation language models." *arXiv preprint arXiv:2302.13971* (2023)

[46]Choi, Dami, Xin, Derrick, Dadkhahi, Hamid, Gilmer, Justin, Garg, Ankush, Firat, Orhan, Yeh, Chih-Kuan, Dai, Andrew M, Ghorbani, Behrooz. "Order matters in the presence of dataset imbalance for multilingual learning." *Advances in Neural Information Processing Systems* 36 (2023): 66902–66922 Link

[47]Wendler, Chris, Veselovsky, Veniamin, Monea, Giovanni, West, Robert. "Do Llamas Work in English? On the Latent Language of Multilingual Transformers." *Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2024): 15366–15394

[48]Ahuja, Sanchit, Aggarwal, Divyanshu, Gumma, Varun, Watts, Ishaan, Sathe, Ashutosh, Ochieng, Millicent, Hada, Rishav, Jain, Prachi, Ahmed, Mohamed, Bali, Kalika, others. "Megaverse: Benchmarking large language models across languages, modalities, models and tasks." *Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)* (2024): 2598–2637

[49]Sorscher, Ben, Geirhos, Robert, Shekhar, Shashank, Ganguli, Surya, Morcos, Ari. "Beyond neural scaling laws: beating power law scaling via data pruning." *Advances in Neural Information Processing Systems* 35 (2022): 19523–19536

[50]Messmer, Bettina, Sabolčec, Vinko, Jaggi, Martin. "Enhancing Multilingual LLM Pretraining with Model-Based Data Selection." *Proceedings of the 10th edition of the Swiss Text Analytics Conference* (2025) Link

[51]Wang, Jiayi, Lu, Yao, Weber, Maurice, Ryabinin, Max, Adelani, David, Chen, Yihong, Tang, Raphael, Stenetorp, Pontus. "Multilingual Language Model Pretraining using Machine-translated Data." (2025) Link

[52]Goyal, Naman, Gao, Cynthia, Chaudhary, Vishrav, Chen, Peng-Jen, Wenzek, Guillaume, Ju, Da, Krishnan, Sanjana, Ranzato, Marc’Aurelio, Guzmán, Francisco, Fan, Angela. "The flores-101 evaluation benchmark for low-resource and multilingual machine translation." *Transactions of the Association for Computational Linguistics* 10 (2022): 522–538

[53]Üstün, Ahmet, Aryabumi, Viraat, Yong, Zheng, Ko, Wei-Yin, D’souza, Daniel, Onilude, Gbemileke, Bhandari, Neel, Singh, Shivalika, Ooi, Hui-Lee, Kayid, Amr, others. "Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model." *arXiv preprint arXiv:2402.06619* (2024) Link

[54]Chandak, Nikhil, Goel, Shashwat, Prabhu, Ameya, Hardt, Moritz, Geiping, Jonas. "Answer Matching Outperforms Multiple Choice for Language Model Evaluation." *arXiv preprint arXiv:2507.02856* (2025)

[55]Rahmanzadehgervi, Pooyan, Bolton, Logan, Taesiri, Mohammad Reza, Nguyen, Anh. "Vision language models are blind." *arXiv preprint arXiv:2407.06581* (2024)

[56]Schick, Timo, others. "Fluid Language Model Benchmarking." *arXiv preprint arXiv:2509.11106* (2025)

[57]Unknown Author Team. "Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks." *arXiv preprint arXiv:2507.17747* (2025)

[58]Lord, Frederic M. "Applications of item response theory to practical testing problems." (1980)

[59]Li, Xian, Li, Yu, Zhang, Rui, Zhou, Jie, Sun, Maosong. "Can Multiple-Choice Questions Really Be Useful in Detecting the Abilities of LLMs?." *Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING)* (2024) Link

[60]Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, Enamul Hoque. "ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning." (2022) Link

[61]Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, Danqi Chen. "CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs." (2024) Link

[62]Ahmed Masry, Mohammed Saidul Islam, Mahir Ahmed, Aayush Bajaj, Firoz Kabir, Aaryaman Kartha, Md Tahmid Rahman Laskar, Mizanur Rahman, Shadikur Rahman, Mehrad Shahmohammadi, Megh Thakkar, Md Rizwan Parvez, Enamul Hoque, Shafiq Joty. "ChartQAPro: A More Diverse and Challenging Benchmark for Chart Question Answering." (2025) Link

[63]Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, Dimosthenis Karatzas, Ernest Valveny, C. V Jawahar. "InfographicVQA." (2021) Link

[64]Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, Anirban Chakraborty. "OCR-VQA: Visual Question Answering by Reading Text in Images." *ICDAR* (2019)

[65]Zhibo Yang, Jun Tang, Zhaohai Li, Pengfei Wang, Jianqiang Wan, Humen Zhong, Xuejing Liu, Mingkun Yang, Peng Wang, Shuai Bai, LianWen Jin, Junyang Lin. "CC-OCR: A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy." (2024) Link

[66]Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar. "DocVQA: A Dataset for VQA on Document Images." (2021) Link

[67]Zhang, Yi-Fan, Zhang, Huanyu, Tian, Haochen, Fu, Chaoyou, Zhang, Shuangqing, Wu, Junfei, Li, Feng, Wang, Kun, Wen, Qingsong, Zhang, Zhang, others. "MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans?." *arXiv preprint arXiv:2408.13257* (2024)

[68]Singh, Amanpreet, Natarajan, Vivek, Shah, Meet, Jiang, Yu, Chen, Xinlei, Parikh, Devi, Rohrbach, Marcus. "Towards VQA Models That Can Read." *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition* (2019): 8317–8326

[69]Penedo, Guilherme, Kydlíček, Hynek, Lozhkov, Anton, Mitchell, Margaret, Raffel, Colin A, Von Werra, Leandro, Wolf, Thomas, others. "The FineWeb datasets: Decanting the web for the finest text data at scale." *Advances in Neural Information Processing Systems* 37 (2024): 30811–30849

[70]Joshi, Siddharth, Mirzasoleiman, Baharan. "Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least." *Proceedings of the 40th International Conference on Machine Learning* (2023): 15356–15370 Link

[71]Joshi, Siddharth, Jain, Arnav, Payani, Ali, Mirzasoleiman, Baharan. "Data-Efficient Contrastive Language-Image Pretraining: Prioritizing Data Quality over Quantity." *Proceedings of The 27th International Conference on Artificial Intelligence and Statistics* (2024): 1000–1008 Link

[72]Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, Vaishaal Shankar. "Data Filtering Networks." (2023) Link

[73]Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. "Emergent Abilities of Large Language Models." (2022) Link

[74]Liu, Yuan, Duan, Haodong, Zhang, Yuanhan, Li, Bo, Zhang, Songyang, Zhao, Wangbo, Yuan, Yike, Wang, Jiaqi, He, Conghui, Liu, Ziwei, others. "Mmbench: Is your multi-modal model an all-around player?." *European conference on computer vision* (2024): 216–233

[75]Bean, Andrew M, Seedat, Nabeel, Chen, Shengzhuang, Schwarz, Jonathan Richard. "Scales++: Compute Efficient Evaluation Subset Selection with Cognitive Scales Embeddings." *arXiv preprint arXiv:2510.26384* (2025)

[76]Vivek, Rajan, Ethayarajh, Kawin, Yang, Diyi, Kiela, Douwe. "Anchor points: Benchmarking models with much fewer examples." *Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)* (2024): 1576–1601

[77]Polo, Felipe Maia, Weber, Lucas, Choshen, Leshem, Sun, Yuekai, Xu, Gongjun, Yurochkin, Mikhail. "tinyBenchmarks: evaluating LLMs with fewer examples." *arXiv preprint arXiv:2402.14992* (2024)

[78]Kipnis, Alex, Voudouris, Konstantinos, Buschoff, Luca M Schulze, Schulz, Eric. "metabench: A Sparse Benchmark of Reasoning and Knowledge in Large Language Models." *arXiv preprint arXiv:2407.12844* (2024)

[79]Tate, Robert F. "Correlation between a discrete and a continuous variable. Point-biserial correlation." *The Annals of mathematical statistics* 25, no. 3 (1954): 603–607

[80]Wang, Jiayu, Ming, Yifei, Shi, Zhenmei, Vineet, Vibhav, Wang, Xin, Li, Sharon, Joshi, Neel. "Is a picture worth a thousand words? Delving into spatial reasoning for vision language models." *Advances in Neural Information Processing Systems* 37 (2024): 75392–75421

[81]Lee, Kang-il, Kim, Minbeom, Yoon, Seunghyun, Kim, Minsung, Lee, Dongryeol, Koh, Hyukhun, Jung, Kyomin. "VLind-Bench: Measuring Language Priors in Large Vision-Language Models." *Findings of the Association for Computational Linguistics: NAACL 2025* (2025): 4129–4144 Link

[82]Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, Chengjie Wang. "A Survey on Benchmarks of Multimodal Large Language Models." (2024) Link

[83]YiFan Zhang, Yang Shi, Weichen Yu, Qingsong Wen, Xue Wang, Wenjing Yang, Zhang Zhang, Liang Wang, Rong Jin. "Debiasing Multimodal Large Language Models via Penalization of Language Priors." (2025) Link

[84]Zhiqiu Lin, Xinyue Chen, Deepak Pathak, Pengchuan Zhang, Deva Ramanan. "Revisiting the Role of Language Priors in Vision-Language Models." (2024) Link

[85]Guan, Jian, Dodge, Jesse, Wadden, David, Huang, Minlie, Peng, Hao. "Language Models Hallucinate, but May Excel at Fact Verification." *Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)* (2024) Link

[86]Liao, Yuan-Hong, Mahmood, Rafid, Fidler, Sanja, Acuna, David. "Can Large Vision-Language Models Correct Semantic Grounding Errors By Themselves?." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)* (2025): 14667-14678

[87]Saad-Falcon, Jon, Buchanan, E Kelly, Chen, Mayee F, Huang, Tzu-Heng, McLaughlin, Brendan, Bhathal, Tanvir, Zhu, Shang, Athiwaratkun, Ben, Sala, Frederic, Linderman, Scott, others. "Shrinking the Generation-Verification Gap with Weak Verifiers." *arXiv preprint arXiv:2506.18203* (2025)

[88]V Venktesh, Mandeep Rathee, Avishek Anand. "Trust but Verify! A Survey on Verification Design for Test-time Scaling." (2025) Link

[89]Lord, Frederic M. "A theory of test scores." *Psychometric Monographs* no. 7 (1952)

[90]Baker, Frank B. "The basics of item response theory." (2001)

[91]Shuai Bai, Yuxuan Cai, Ruizhe Chen, Keqin Chen, Xionghui Chen, Zesen Cheng, Lianghao Deng, Wei Ding, Chang Gao, Chunjiang Ge, Wenbin Ge, Zhifang Guo, Qidong Huang, Jie Huang, Fei Huang, Binyuan Hui, Shutong Jiang, Zhaohai Li, Mingsheng Li, Mei Li, Kaixin Li, Zicheng Lin, Junyang Lin, Xuejing Liu, Jiawei Liu, Chenglong Liu, Yang Liu, Dayiheng Liu, Shixuan Liu, Dunjie Lu, Ruilin Luo, Chenxu Lv, Rui Men, Lingchen Meng, Xuancheng Ren, Xingzhang Ren, Sibo Song, Yuchong Sun, Jun Tang, Jianhong Tu, Jianqiang Wan, Peng Wang, Pengfei Wang, Qiuyue Wang, Yuxuan Wang, Tianbao Xie, Yiheng Xu, Haiyang Xu, Jin Xu, Zhibo Yang, Mingkun Yang, Jianxin Yang, An Yang, Bowen Yu, Fei Zhang, Hang Zhang, Xi Zhang, Bo Zheng, Humen Zhong, Jingren Zhou, Fan Zhou, Jing Zhou, Yuanzhi Zhu, Ke Zhu. "Qwen3-VL Technical Report." (2025) Link

[92]Lu, Pan, Bansal, Hritik, Xia, Tony, Liu, Jiacheng, Li, Chunyuan, Hajishirzi, Hannaneh, Cheng, Hao, Chang, Kai-Wei, Galley, Michel, Gao, Jianfeng. "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts." *International Conference on Learning Representations (ICLR)* (2024)

[93]Yijia Xiao, Edward Sun, Tianyu Liu, Wei Wang. "LogicVista: Multimodal LLM Logical Reasoning Benchmark in Visual Contexts." (2024) Link

[94]Kazemzadeh, Sahar, Ordonez, Vicente, Matten, Mark, Berg, Tamara. "ReferItGame: Referring to Objects in Photographs of Natural Scenes." *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)* (2014) Link

[95]Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, Jiasen Lu, Taira Anderson, Erin Bransom, Kiana Ehsani, Huong Ngo, YenSung Chen, Ajay Patel, Mark Yatskar, Chris Callison-Burch, Andrew Head, Rose Hendrix, Favyen Bastani, Eli VanderBilt, Nathan Lambert, Yvonne Chou, Arnavi Chheda, Jenna Sparks, Sam Skjonsberg, Michael Schmitz, Aaron Sarnat, Byron Bischoff, Pete Walsh, Chris Newell, Piper Wolters, Tanmay Gupta, Kuo-Hao Zeng, Jon Borchardt, Dirk Groeneveld, Crystal Nam, Sophie Lebrecht, Caitlin Wittlif, Carissa Schoenick, Oscar Michel, Ranjay Krishna, Luca Weihs, Noah A. Smith, Hannaneh Hajishirzi, Ross Girshick, Ali Farhadi, Aniruddha Kembhavi. "Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models." (2024) Link

[96]Sarvam AI. "Sarvam 1: The first Indian language LLM." (2024) Link

[97]Raymond Ng, Thanh Ngan Nguyen, Yuli Huang, Ngee Chia Tai, Wai Yi Leong, Wei Qi Leong, Xianbin Yong, Jian Gang Ngui, Yosephine Susanto, Nicholas Cheng, Hamsawardhini Rengarajan, Peerat Limkonchotiwat. "SEA-LION: Southeast Asian Languages in One Network." (2025) Link

[98]Liang, Percy, Bommasani, Rishi, Lee, Tony, Tsipras, Dimitris, Soylu, Dilara, Yasunaga, Michihiro, Zhang, Yian, Narayanan, Deepak, Wu, Yuhuai, Kumar, Ananya, others. "Holistic Evaluation of Language Models." (2022) Link

[99]Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, Tali Dekel. "Teaching CLIP to Count to Ten." (2023) Link

[100]Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi. "A Diagram Is Worth A Dozen Images." (2016) Link

[101]Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Botao Yu, Ge Zhang, Huan Sun, Yu Su, Wenhu Chen, Graham Neubig. "MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark." (2025) Link

[102]Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh. "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering." *Conference on Computer Vision and Pattern Recognition (CVPR)* (2017)

[103]Mao, Junhua, Huang, Jonathan, Toshev, Alexander, Camburu, Oana, Yuille, Alan L., Murphy, Kevin. "Generation and Comprehension of Unambiguous Object Descriptions." *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)* (2016)

[104]xAI. "RealWorldQA: A Benchmark for Real-World Visual Understanding." (2024)

[105]Siddharth Joshi, Yu Yang, Yihao Xue, Wenhan Yang, Baharan Mirzasoleiman. "Challenges and Opportunities in Improving Worst-Group Generalization in Presence of Spurious Features." (2025) Link

[106]Varma, Maya, Delbrouck, Jean-Benoit, Chen, Zhihong, Chaudhari, Akshay, Langlotz, Curtis. "RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models." *Advances in Neural Information Processing Systems* (2024): 82235–82264 Link

[107]Siddharth Joshi, Besmira Nushi, Vidhisha Balachandran, Varun Chandrasekaran, Vibhav Vineet, Neel Joshi, Baharan Mirzasoleiman. "MM-GEN: Enhancing Task Performance Through Targeted Multimodal Data Curation." (2025) Link

[108]Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, Ari S. Morcos. "SemDeDup: Data-efficient learning at web-scale through semantic deduplication." (2023) Link

[109]Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda. "Holistic Evaluation of Language Models." (2023) Link

[111]OpenCompass Contributors. "OpenCompass: A Universal Evaluation Platform for Foundation Models." (2023)

[112]Adiga, Rishabh, Nushi, Besmira, Chandrasekaran, Varun. "Attention Speaks Volumes: Localizing and Mitigating Bias in Language Models." *Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)* (2025): 26403–26423 Link

[113]Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp. "Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity." (2022) Link

[114]Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. "Do ImageNet Classifiers Generalize to ImageNet?." (2019) Link

[115]Jason Wei, Najoung Kim, Yi Tay, Quoc V. Le. "Inverse scaling can become U-shaped." (2023) Link

[116]Emma Strubell, Ananya Ganesh, Andrew McCallum. "Energy and Policy Considerations for Deep Learning in NLP." (2019) Link

[117]OpenAI. "Introducing GPT-5.2." (2025) Link

[118]Siddharth Joshi, Jiayi Ni, Baharan Mirzasoleiman. "Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks." (2025) Link

[119]An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, Zihan Qiu. "Qwen3 Technical Report." (2025) Link

[120]Lele Liao, Qile Zhang, Ruofan Wu, Guanhua Fang. "Toward a unified framework for data-efficient evaluation of large language models." (2025) Link

[121]Spearman, Charles. "The proof and measurement of association between two things." *American Journal of Psychology* (1904)

[122]Voorhees, Ellen M. "Evaluation by highly relevant documents." *SIGIR* (2001)

[123]Buckley, Chris, Voorhees, Ellen M. "Retrieval evaluation with incomplete information." *SIGIR* (2004)

[124]Sakai, Tetsuya. "On the reliability of information retrieval metrics." *SIGIR* (2026)

[125]Lambert, Nathan. "Good Researchers Obsess Over Evals: The Story of OLMo 3 (Post-Training), Told Through Evals." (2025) Link

[126]Olmo Team, Allyson Ettinger, Amanda Bertsch, Bailey Kuehl, David Graham, David Heineman, Dirk Groeneveld, Faeze Brahman, Finbarr Timbers, Hamish Ivison, Jacob Morrison, Jake Poznanski, Kyle Lo, Luca Soldaini, Matt Jordan, Mayee Chen, Michael Noukhovitch, Nathan Lambert, Pete Walsh, Pradeep Dasigi, Robert Berry, Saumya Malik, Saurabh Shah, Scott Geng, Shane Arora, Shashank Gupta, Taira Anderson, Teng Xiao, Tyler Murray, Tyler Romero, Victoria Graf, Akari Asai, Akshita Bhagia, Alexander Wettig, Alisa Liu, Aman Rangapur, Chloe Anastasiades, Costa Huang, Dustin Schwenk, Harsh Trivedi, Ian Magnusson, Jaron Lochner, Jiacheng Liu, Lester James V. Miranda, Maarten Sap, Malia Morgan, Michael Schmitz, Michal Guerquin, Michael Wilson, Regan Huff, Ronan Le Bras, Rui Xin, Rulin Shao, Sam Skjonsberg, Shannon Zejiang Shen, Shuyue Stella Li, Tucker Wilde, Valentina Pyatkin, Will Merrill, Yapei Chang, Yuling Gu, Zhiyuan Zeng, Ashish Sabharwal, Luke Zettlemoyer, Pang Wei Koh, Ali Farhadi, Noah A. Smith, Hannaneh Hajishirzi. "Olmo 3." (2025) Link

[127]Adhiraj Ghosh, Sebastian Dziadzio, Ameya Prabhu, Vishaal Udandarao, Samuel Albanie, Matthias Bethge. "ONEBench to Test Them All: Sample-Level Benchmarking Over Open-Ended Capabilities." (2025) Link

[128]DatologyAI. "DatologyAI Technical Deep-Dive: Curating Our Way to a Billion-State-of-the-Art Text Dataset." (2024) Link

[129]Yang, An, Li, Anfeng, Yang, Baosong, Zhang, Beichen, Hui, Binyuan, Zheng, Bo, Yu, Bowen, Gao, Chang, Huang, Chengen, Lv, Chenxu, others. "Qwen3 Technical Report." *arXiv preprint arXiv:2505.09388* (2025)

[130]DatologyAI, Abbas, Amro, Wills, Josh, Yin, Haoli, Burstein, Paul, Cao, Ning, Carranza, Aldo, Deng, Alvin, Goyal, Priya, Maini, Pratyush, McGrath, Joshua, Pan, Fan, Urbanek, Jack, Kada, Vineeth, Razzak, Muhammed, Shah, Vishwa, Veerendranath, Vishruth, Gaza, Bogdan, Morcos, Ari, Leavitt, Matthew. "DatologyAI Technical Deep-Dive: Image-Text Data Curation at the Billion-Sample Scale." (2024) Link

[131]moondream. "RefCOCO-M: Refined Referring Expression Segmentation." (2025) Link

[132]Ling Fu, Zhebin Kuang, Jiajun Song, Mingxin Huang, Biao Yang, Yuzhe Li, Linghao Zhu, Qidi Luo, Xinyu Wang, Hao Lu, Zhang Li, Guozhi Tang, Bin Shan, Chunhui Lin, Qi Liu, Binghong Wu, Hao Feng, Hao Liu, Can Huang, Jingqun Tang, Wei Chen, Lianwen Jin, Yuliang Liu, Xiang Bai. "OCRBench v2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning." (2025) Link

[133]Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li. "MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?." (2024) Link

[134]Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, Hongsheng Li. "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset." *The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track* (2024) Link

[135]Acharya, Manoj, Kafle, Kushal, Kanan, Christopher. "TallyQA: Answering Complex Counting Questions." *AAAI* (2019)

[136]DatologyAI. "BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining." (2025) Link

[137]DatologyAI, Pratyush Maini, Vineeth Dorna, Parth Doshi, Aldo Carranza, Fan Pan, Jack Urbanek, Paul Burstein, Alex Fang, Alvin Deng, Amro Abbas, Brett Larsen, Cody Blakeney, Charvi Bannur, Christina Baek, Darren Teh, David Schwab, Haakon Mongstad, Haoli Yin, Josh Wills, Kaleigh Mentzer, Luke Merrick, Ricardo Monti, Rishabh Adiga, Siddharth Joshi, Spandan Das, Zhengping Wang, Bogdan Gaza, Ari Morcos, Matthew Leavitt. "BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining." (2025) Link

[138]Jinyan Su, Jennifer Healey, Preslav Nakov, Claire Cardie. "Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs." (2025) Link

[139]Gu, Yuling, Tafjord, Oyvind, Kuehl, Bailey, Haddad, Dany, Dodge, Jesse, Hajishirzi, Hannaneh. "OLMES: A Standard for Language Model Evaluations." *Findings of the Association for Computational Linguistics: NAACL 2025* (2025): 5005–5033

[140]Trillion Labs. "Tri-7B-Base." (2025) Link

[141]Liquid AI. "Introducing LFM2.5: The Next Generation of On-Device AI." (2026) Link

[142]Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu. "Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs." (2025) Link

[143]Yuyang Wu, Yifei Wang, Ziyu Ye, Tianqi Du, Stefanie Jegelka, Yisen Wang. "When More is Less: Understanding Chain-of-Thought Length in LLMs." (2025) Link

[144]Andreas Hochlehnert, Hardik Bhatnagar, Vishaal Udandarao, Samuel Albanie, Ameya Prabhu, Matthias Bethge. "A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility." (2025) Link

[145]Zhixun Chen, Ping Guo, Wenhan Han, Yifan Zhang, Haobin Lin, Fengze Liu, Yan Zhao, Bingni Zhang, Taifeng Wang, Yin Zheng, Trevor Cohn, Meng Fang. "MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining." *The Thirty-ninth Annual Conference on Neural Information Processing Systems* (2025)

[146]Thomas F Burns, Letitia Parcalabescu, Stephan Wäldchen, Michael Barlow, Gregor Ziegltrum, Volker Stampa, Bastian Harren, Björn Deiseroth. "Aleph-Alpha-GermanWeb: Improving German-language LLM pre-training with model-based data curation and synthetic data generation." (2025) Link

[147]Mehdi Ali, Manuel Brack, Max Lübbering, Elias Wendt, Abbas Goher Khan, Richard Rutmann, Alex Jude, Maurice Kraus, Alexander Arno Weber, David Kaczér, Florian Mai, Lucie Flek, Rafet Sifa, Nicolas Flores-Herr, Joachim Köhler, Patrick Schramowski, Michael Fromm, Kristian Kersting. "Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models." (2025) Link

[148]Pires, Telmo, Schlinger, Eva, Garrette, Dan. "How Multilingual is Multilingual BERT?." *arXiv preprint arXiv:1906.01502* (2019)

[149]Choi, Dami, Xin, Derrick, Dadkhahi, Hamid, Gilmer, Justin, Garg, Ankush, Firat, Orhan, Yeh, Chih-Kuan, Dai, Andrew M, Ghorbani, Behrooz. "Order matters in the presence of dataset imbalance for multilingual learning." *Advances in Neural Information Processing Systems* 36 (2023): 66902–66922