Abstract: This paper presents an overview of the AIM 2025 Challenge on Screen Content Video Quality Assessment. The challenge included a set of 150 source videos. To obtain distorted versions, the source videos were transmitted through video conferencing applications, introducing real-world distortions such as compression artifacts and frame drops. The distorted versions were labeled by human crowdsourcing assessors to produce reference subjective scores; in total, votes were collected from over 8,000 assessors. The participants' goal was to develop an algorithm that assesses the visual quality of the videos, achieving the highest correlation with the subjective scores. The challenge attracted more than 45 registered teams, 5 of which passed the final phase with source code verification. The outcomes may provide insights into the state of the art in screen-content video quality assessment and highlight emerging trends and effective strategies in this evolving research area. All data, including the processed videos and the subjective comparison votes and scores, is made publicly available at https://github.com/msu-video-group/AIM25_SC_Quality_Assessment