One-Shot Prompting for Russian Dependency Parsing
The article was published in a journal from the RSCI Web of Science list.
Citation information for the article was obtained from Scopus.
The article was published in a journal from the Web of Science and/or Scopus list.
Date of the last search for the article in external sources: January 23, 2026.
Abstract: This study investigates the application of Large Language Models (LLMs) to dependency parsing of Russian sentences. We evaluated several models (including Qwen, RuAdapt, YandexGPT, T-pro, T-lite, and Llama) in a one-shot mode across multiple Russian treebanks: SynTagRus, GSD, PUD, Poetry, and Taiga. Among the models tested, Llama70 achieved the highest scores in both UAS and LAS. Furthermore, we observed a general trend where larger models tended to perform better. Our analysis also revealed that parsing quality for Qwen4 and RuAdapt4 on the Taiga treebank was notably sensitive to prompt design. However, the results from all LLMs remained lower than those obtained from classical neural parsers. A key challenge encountered by many models was a mismatch between generated token sets and gold token sets, observed in a considerable portion of each treebank. Additionally, the T-pro and T-lite models produced a significant number of extra lines. The implementation for this study is publicly available at https://github.com/Derinhelm/llm.parsing/tree/main.
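The abstract reports results in UAS and LAS, the standard attachment metrics for dependency parsing. As a minimal illustrative sketch (not code from the paper), UAS and LAS for a single sentence can be computed from gold and predicted (head, label) pairs; note the assumption that the two token sets are aligned, which, as the abstract points out, is not guaranteed for LLM output:

```python
# Illustrative sketch: per-sentence UAS/LAS from (head_index, dependency_label)
# pairs. Assumes the generated token set matches the gold token set, which is
# exactly the alignment problem the abstract highlights for LLM parsers.

def uas_las(gold, pred):
    """Return (UAS, LAS): fraction of tokens with correct head / correct head+label."""
    assert len(gold) == len(pred), "token sets must be aligned before scoring"
    n = len(gold)
    head_hits = sum(1 for (gh, _), (ph, _) in zip(gold, pred) if gh == ph)
    label_hits = sum(1 for (gh, gl), (ph, pl) in zip(gold, pred)
                     if gh == ph and gl == pl)
    return head_hits / n, label_hits / n

# Hypothetical 3-token sentence: the third token gets the wrong head.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (1, "obj")]
uas, las = uas_las(gold, pred)  # both 2/3 here
```

In practice, evaluations on Universal Dependencies treebanks such as SynTagRus or Taiga use the official CoNLL shared-task scorer, which also handles the token-mismatch cases this sketch simply rejects.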