
Zap. Nauchn. Sem. POMI, 2025 Volume 546, Pages 6–31 (Mi znsl7627)

Efficient tokenization: balancing BabyMMLU, fertility and speed

I. Bychkov, F. Chernogorskii, S. Averkiev, A. Fenogenova

SberDevices

Abstract: In Natural Language Processing (NLP), tokenization is a critical pre-processing step that significantly influences model performance. The choice of tokenizer is especially consequential now that large language models are expensive to train. Our study investigates various subword-level tokenizers, considering their strengths and limitations. Based on this analysis, we propose a practical approach for comparing tokenizers by factors such as tokenization effectiveness, vocabulary size, and tokenization speed. The paper reviews current tokenizer evaluation methods and contributes a new evaluation dataset. It thus aims to help researchers choose and train the most appropriate tokenizer for their tasks, especially under limited training resources. Our objective is to empower the research community to make well-informed decisions about tokenizer selection and improve the quality of their language models.
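
As a hypothetical illustration (not the authors' exact protocol), the sketch below shows one common way to measure two of the factors mentioned in the abstract, fertility (average subword tokens per whitespace word) and tokenization speed, for any Hugging Face tokenizer. The checkpoint name and corpus are placeholders chosen only for the example.

import time
from transformers import AutoTokenizer

def fertility_and_speed(tokenizer, texts):
    """Return (fertility, chars_per_second) over a list of texts.

    Fertility here is total subword tokens divided by total whitespace words,
    a standard proxy for tokenization effectiveness; lower is usually better.
    """
    total_tokens = 0
    total_words = 0
    total_chars = 0
    start = time.perf_counter()
    for text in texts:
        tokens = tokenizer.tokenize(text)   # subword tokens, no special tokens
        total_tokens += len(tokens)
        total_words += len(text.split())    # whitespace words as the reference unit
        total_chars += len(text)
    elapsed = time.perf_counter() - start
    fertility = total_tokens / max(total_words, 1)
    speed = total_chars / max(elapsed, 1e-9)
    return fertility, speed

if __name__ == "__main__":
    # "bert-base-uncased" is used purely as an example checkpoint.
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    corpus = ["Tokenization quality strongly affects language model training cost."]
    f, s = fertility_and_speed(tok, corpus)
    print(f"fertility = {f:.2f} tokens/word, speed = {s:.0f} chars/s")

Running the same loop over a fixed corpus for several candidate tokenizers gives directly comparable fertility and throughput numbers, which can then be weighed against vocabulary size and downstream quality (e.g. a BabyMMLU-style evaluation) when selecting a tokenizer.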

Key words and phrases: NLP, LLM, tokenizer, tokenization, optimization, benchmark, dataset.

UDC: 004.89

Received: 28.02.2025

Language: English



© Steklov Math. Inst. of RAS, 2026