Abstract: Automatic Term Extraction (ATE) is a critical NLP task for identifying domain-specific terms, which are essential for applications such as information retrieval, machine translation, and ontology construction. Cross-domain nested term extraction further complicates the task, as traditional methods often fail to handle hierarchical term structures and domain variability. This paper introduces both the CL-RuTerm3 dataset, a novel resource featuring nested term annotations across six domains (primarily computational linguistics, along with mathematics, medicine, economics, literary studies, and agrochemistry), and the RuTermEval-2024 competition, designed to evaluate term extraction systems on this data. The CL-RuTerm3 dataset, comprising 1270 abstracts and 15 full-text articles (over 165k tokens with over 37k annotated entities), is the largest of its kind for Russian scientific texts. Terms are classified into three categories based on lexical and domain specificity: specific terms, common terms, and nomens. The dataset's distinctive features, such as nested term markup and cross-domain coverage, enable more realistic evaluation of ATE systems. The paper concludes with an analysis of participant approaches in the RuTermEval-2024 competition, highlighting the effectiveness of contrastive learning. This work aims to advance ATE research by providing a robust dataset and fostering discussion of term extraction methodologies.