Bridging the Capability Gap: Joint Alignment Tuning for Harmonizing LLM-based Multi-Agent Systems
Minghang Zhu, Zhengliang Shi, Zhiwei Xu, Shiguang Wu, Lingjie Wang, Pengjie Ren, Zhaochun Ren, Zhumin Chen
https://arxiv.org/abs/2509.09629
The advancement of large language models (LLMs) has enabled the construction of multi-agent systems that solve complex tasks by dividing responsibilities among specialized agents, such as a planning agent for subgoal generation and a grounding agent for executing tool-use actions. Most existing methods fine-tune these agents independently, leading to capability gaps among them and poor coordination. To address this, we propose MOAT, a Multi-Agent Joint Alignment Tuning framework that i…
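The planner/grounder division described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the `PlanningAgent` and `GroundingAgent` interfaces, the `plan`/`ground` callables, and the toy lambdas are all hypothetical stand-ins for LLM calls, shown only to make the division of responsibilities concrete.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interfaces: `plan` and `ground` stand in for LLM calls.
@dataclass
class PlanningAgent:
    plan: Callable[[str], List[str]]   # task -> ordered subgoals

@dataclass
class GroundingAgent:
    ground: Callable[[str], str]       # subgoal -> tool-use action

def solve(task: str, planner: PlanningAgent, grounder: GroundingAgent) -> List[str]:
    """Decompose the task into subgoals, then ground each one into an action."""
    subgoals = planner.plan(task)
    return [grounder.ground(g) for g in subgoals]

# Toy stand-ins for demonstration only.
planner = PlanningAgent(plan=lambda t: [f"step 1 of {t}", f"step 2 of {t}"])
grounder = GroundingAgent(ground=lambda g: f"CALL_TOOL({g!r})")

actions = solve("book a flight", planner, grounder)
print(actions)
```

A capability gap arises when, for example, the planner emits subgoals the grounder was never trained to execute; joint alignment tuning, as proposed here, aims to close that gap by training the two agents together rather than in isolation.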