Global-Guided Selective Context Network for Scene Parsing
Jiang, Jie1,3; Liu, Jing1,3; Fu, Jun1,3; Zhu, Xinxin1,3; Li, Zechao2; Lu, Hanqing1,3
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2022-04-01
Volume: 33, Issue: 4, Pages: 1752-1764
Keywords: Semantics; Task analysis; Decoding; Logic gates; Image color analysis; Fuses; Feature extraction; Attention mechanism (AM); contextual selection; global guidance (GG); scene parsing
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2020.3043808
Corresponding Author: Liu, Jing (jliu@nlpr.ia.ac.cn)
Abstract: Recent studies on semantic segmentation exploit contextual information to address inconsistent parsing predictions on large objects and the neglect of small objects. However, they apply multilevel contextual information equally across pixels, overlooking that different pixels may demand different levels of context. Motivated by this intuition, we propose a novel global-guided selective context network (GSCNet) that adaptively selects contextual information to improve scene parsing. Specifically, we introduce two global-guided modules, the global-guided global module (GGM) and the global-guided local module (GLM), to select global context (GC) and local context (LC) for pixels, respectively. Given an input feature map, GGM jointly employs the feature map and its globally pooled feature to learn a global contextual demand, based on which per-pixel GC is selected. GLM, in turn, adopts the low-level feature from the adjacent stage as LC and jointly models the input feature map, its globally pooled feature, and the LC to generate a local contextual demand, based on which per-pixel LC is selected. Furthermore, we combine these two modules into a selective context block (SCB) and insert such SCBs at different levels of the network to propagate contextual information in a coarse-to-fine manner. Finally, we conduct extensive experiments to verify the effectiveness of the proposed model and achieve state-of-the-art performance on four challenging scene parsing data sets, i.e., Cityscapes, ADE20K, PASCAL Context, and COCO Stuff. In particular, GSCNet-101 obtains 82.6% on the Cityscapes test set without using coarse data and 56.22% on the ADE20K test set.
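The abstract describes GGM, GLM, and the SCB only at the architectural level. The PyTorch sketch below makes the per-pixel context-selection idea concrete. It is a minimal illustration under assumed design choices (1x1 convolutions with sigmoid gates standing in for the learned "contextual demand", and residual addition of the selected context); the internals of the authors' actual modules may differ.

```python
# Hedged sketch of the GGM/GLM/SCB idea from the abstract.
# Module names follow the paper; all layer choices (1x1 convs, sigmoid
# gates, residual addition) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GGM(nn.Module):
    """Global-guided global module: per-pixel selection of global context (GC)."""
    def __init__(self, channels):
        super().__init__()
        # Gate estimating each pixel's demand for GC (assumed design).
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        _, _, h, w = x.shape
        gc = F.adaptive_avg_pool2d(x, 1)               # globally pooled feature
        gc_map = gc.expand(-1, -1, h, w)               # broadcast GC to every pixel
        demand = self.gate(torch.cat([x, gc_map], 1))  # per-pixel global demand
        return x + demand * gc_map                     # add the selected GC back

class GLM(nn.Module):
    """Global-guided local module: per-pixel selection of low-level local context (LC)."""
    def __init__(self, channels, low_channels):
        super().__init__()
        self.align = nn.Conv2d(low_channels, channels, 1)  # match LC channel width
        self.gate = nn.Sequential(nn.Conv2d(3 * channels, channels, 1), nn.Sigmoid())

    def forward(self, x, low):
        _, _, h, w = x.shape
        lc = self.align(low)                            # low-level feature as LC
        if lc.shape[-2:] != (h, w):                     # match LC spatial size
            lc = F.interpolate(lc, size=(h, w), mode='bilinear', align_corners=False)
        gc_map = F.adaptive_avg_pool2d(x, 1).expand(-1, -1, h, w)
        # Demand is modeled from the input, its pooled feature, and the LC.
        demand = self.gate(torch.cat([x, gc_map, lc], 1))
        return x + demand * lc                          # add the selected LC back

class SCB(nn.Module):
    """Selective context block: GGM followed by GLM, combined as in the abstract."""
    def __init__(self, channels, low_channels):
        super().__init__()
        self.ggm = GGM(channels)
        self.glm = GLM(channels, low_channels)

    def forward(self, x, low):
        return self.glm(self.ggm(x), low)

# Toy tensors standing in for two adjacent backbone stages:
x = torch.randn(2, 256, 32, 32)    # high-level feature map
low = torch.randn(2, 128, 64, 64)  # low-level feature from the adjacent stage
out = SCB(256, 128)(x, low)        # -> shape (2, 256, 32, 32)
```

In a full network, one such SCB would sit at each decoding level so that contextual information propagates coarse-to-fine across stages, as the abstract describes.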
Funding Projects: National Natural Science Foundation of China [61922086]; National Natural Science Foundation of China [61872366]; Beijing Natural Science Foundation [4192059]; Beijing Natural Science Foundation [JQ20022]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:000778930100034
Funding Agencies: National Natural Science Foundation of China; Beijing Natural Science Foundation
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/48249
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
2. Nanjing Univ Sci & Technol, Sch Comp Sci, Nanjing 210094, Peoples R China
3. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Jiang, Jie, Liu, Jing, Fu, Jun, et al. Global-Guided Selective Context Network for Scene Parsing[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(4): 1752-1764.
APA: Jiang, Jie, Liu, Jing, Fu, Jun, Zhu, Xinxin, Li, Zechao, & Lu, Hanqing. (2022). Global-Guided Selective Context Network for Scene Parsing. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(4), 1752-1764.
MLA: Jiang, Jie, et al. "Global-Guided Selective Context Network for Scene Parsing". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.4 (2022): 1752-1764.