Multi-Agent Uncertainty Sharing for Cooperative Multi-Agent Reinforcement Learning
Yang GK(杨光开)2,3; Chenhao(陈皓)2,3; Junge Zhang(张俊格)2,3; Qiyue Yin(尹奇跃)2,3; Kaiqi Huang(黄凯奇)1,2,3
2022-02 | |
Conference Date | 2022-07
Conference Venue | Italy
Abstract | Cooperative multi-agent reinforcement learning is considered a promising approach to many complex real-world cooperative tasks, such as coordinating robot swarms and self-driving vehicles. To promote multi-agent cooperation, Centralized Training with Decentralized Execution has emerged as a popular learning paradigm, owing to partial observability and communication constraints during execution and to the computational complexity of training. Within this paradigm, value decomposition methods such as VDN and QMIX, which approximate the global joint Q-value function with multiple local individual Q-value functions, are known to achieve performance competitive with other methods in complex environments. However, existing works often neglect the uncertainty of multiple agents arising from partial observability and the very large action space of the multi-agent setting, and thus obtain only sub-optimal policies. To alleviate these limitations, building upon value decomposition, we propose a novel method called multi-agent uncertainty sharing (MAUS). This method utilizes a Bayesian neural network to explicitly capture the uncertainty
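The additive value-decomposition idea the abstract refers to (as in VDN) can be sketched in a few lines. This is an illustrative toy, not the paper's MAUS method; the function names and Q-tables below are made up for the example:

```python
def vdn_joint_q(local_qs, actions):
    # VDN-style additive decomposition: the global joint Q-value is the
    # sum of each agent's local Q-value for its chosen action.
    return sum(q[a] for q, a in zip(local_qs, actions))

def greedy_actions(local_qs):
    # Decentralized execution: each agent independently argmaxes its own
    # local Q-values, with no access to the other agents' observations.
    return [max(range(len(q)), key=q.__getitem__) for q in local_qs]

# Two agents with three actions each (toy per-agent Q-tables for one state).
local_qs = [[0.1, 0.5, 0.2], [0.3, 0.0, 0.4]]

acts = greedy_actions(local_qs)      # agent 0 picks action 1, agent 1 picks action 2
joint = vdn_joint_q(local_qs, acts)  # 0.5 + 0.4 = 0.9
```

Because the mixing is a monotonic (here, additive) function of the local Q-values, per-agent greedy action selection also maximizes the joint Q-value, which is what makes decentralized execution consistent with centralized training.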
Funding | National Natural Science Foundation of China [61876181]
Content Type | Conference Paper
Source URL | [http://ir.ia.ac.cn/handle/173211/48977]
Collection | Intelligent Systems and Engineering
Corresponding Author | Junge Zhang(张俊格)
Author Affiliations | 1. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences 2. School of Artificial Intelligence, University of Chinese Academy of Sciences 3. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Yang GK, Chenhao, Junge Zhang, et al. Multi-Agent Uncertainty Sharing for Cooperative Multi-Agent Reinforcement Learning[C]. Italy, 2022-07.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.