Abstract: We present an agent-oriented system designed to simulate collaborative and competitive interactions among heterogeneous agents, including random agents, rule-based algorithmic agents (operating in pairs or triples), biologically inspired cognitive architecture (BICA) agents, and neural network-based agents. To study human-AI interaction, the system integrates a Telegram-based interface that allows human players to participate in real-time gameplay. The core environment is a strategic game called “Stones,” in which agents move between nodes of a graph, aiming to eliminate all but two nodes (“stones”) under a shared constraint: a stone is removed if exactly two agents occupy it after a move. The collaborative objective is to achieve this in the minimal number of steps, fostering emergent cooperation and/or competition.

Our platform serves as a testbed for implementing and comparing different intelligent agent architectures. It enables systematic evaluation of agent behaviors through recorded gameplay sessions, which are then used to train neural agents via imitation learning. Unlike traditional approaches that rely solely on reinforcement learning or on predefined cognitive models, our method trains neural agents to mimic prototype agents, including human players, by learning directly from their demonstrated strategies. This approach not only provides insight into the relative strengths of different agent types but also facilitates the development of adaptive AI systems that emulate human-like decision-making in collaborative environments. The recorded interactions further allow for iterative refinement of neural agents, bridging the gap between rule-based systems and data-driven AI.

This study is primarily exploratory, illustrating the feasibility of the proposed neuro-agent training methodology.
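The elimination rule stated above (a stone is removed when exactly two agents occupy it after a move) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name, the dict/set data layout, and the agent identifiers are all assumptions introduced here.

```python
from collections import Counter

def apply_elimination(positions, stones):
    """Apply the 'Stones' rule after all agents have moved.

    positions: dict mapping agent id -> node it now occupies (illustrative layout).
    stones: set of nodes ('stones') still in play.
    A stone is removed iff exactly two agents occupy it after the move.
    """
    occupancy = Counter(positions.values())
    removed = {node for node, count in occupancy.items()
               if count == 2 and node in stones}
    return stones - removed

# Example: agents 'a' and 'b' both move onto node 3, so node 3 is removed;
# node 1 holds only one agent and survives.
stones = {1, 2, 3, 4}
positions = {"a": 3, "b": 3, "c": 1}
print(apply_elimination(positions, stones))  # → {1, 2, 4}
```

Under this rule, a cooperative episode ends once only two stones remain, and the shared objective is to reach that state in as few moves as possible.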