VMAN: A Virtual Mainstay Alignment Network for Transductive Zero-Shot Learning
Authors: Xie, Guo-Sen (5,6); Zhang, Xu-Yao (1); Yao, Yazhou (5); Zhang, Zheng (2,3); Zhao, Fang (4); Shao, Ling (4)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Year: 2021
Volume: 30, Pages: 4316-4329
Keywords: Semantics; Training; Task analysis; Image reconstruction; Whales; Manifolds; Generative adversarial networks; Zero-shot learning; virtual sample generation; transductive
ISSN: 1057-7149
DOI: 10.1109/TIP.2021.3070231
Corresponding author: Yao, Yazhou (yazhou.yao@njust.edu.cn)
Abstract: Transductive zero-shot learning (TZSL) extends conventional ZSL by leveraging (unlabeled) unseen images for model training. A typical approach to ZSL learns embedding weights from the feature space to the semantic space. However, in most existing methods the learned weights are dominated by seen images and thus cannot adapt well to unseen images. In this paper, to align the (embedding) weights for better knowledge transfer between seen and unseen classes, we propose the virtual mainstay alignment network (VMAN), which is tailored for the transductive ZSL task. Specifically, VMAN is cast as a tied encoder-decoder network, so only one set of linear mapping weights needs to be learned. To explicitly learn these weights, we propose, for the first time in ZSL, to generate virtual mainstay (VM) samples for each seen class; these serve as new training data and, to some extent, prevent the weights from being biased toward seen images. Moreover, a weighted reconstruction scheme is proposed and incorporated into the model training phase, in both the semantic and feature spaces. In this way, the manifold relationships of the VM samples are well preserved. To further align the weights to adapt to more unseen images, a novel instance-category matching regularization is proposed for model re-training. VMAN is thus modeled as a nested minimization problem and is solved by a Taylor approximate optimization paradigm. In comprehensive evaluations on four benchmark datasets, VMAN achieves superior performance under the (generalized) TZSL setting.
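The abstract's central architectural idea is a tied encoder-decoder: a single weight matrix maps visual features to the semantic space, and its transpose maps back. The sketch below illustrates that structure under assumed dimensions, loss weights, and synthetic data; it is not the authors' implementation, and it omits the paper's VM-sample generation, instance-category matching regularization, and Taylor approximate optimization.

```python
# Minimal illustrative sketch (not the authors' code): a tied linear
# encoder-decoder with reconstruction losses in both the semantic and
# feature spaces. All dimensions, loss weights, and data are hypothetical.
import torch

d_feat, d_sem, n = 2048, 85, 32            # assumed feature/semantic dims, batch size
X = torch.randn(n, d_feat)                 # visual features (e.g., CNN outputs)
S = torch.randn(n, d_sem)                  # per-image class semantic vectors

W = (0.01 * torch.randn(d_feat, d_sem)).requires_grad_()  # the single tied mapping
opt = torch.optim.Adam([W], lr=1e-3)

for step in range(200):
    S_hat = X @ W          # encode: feature space -> semantic space
    X_hat = S_hat @ W.T    # decode with the tied (transposed) weights
    # Reconstruction in both spaces; 0.1 is a placeholder trade-off weight.
    loss = ((S_hat - S) ** 2).mean() + 0.1 * ((X_hat - X) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the encoder and decoder share one matrix, only W is optimized; in the paper, the training set would additionally contain the generated VM samples and the objective would carry the weighted-reconstruction and matching terms.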
Funding projects: National Natural Science Foundation of China [61702163]; National Natural Science Foundation of China [61976116]; National Natural Science Foundation of China [62002085]; Fundamental Research Funds for the Central Universities [30920021135]
WOS research areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS accession number: WOS:000641960800002
Funding organizations: National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities
Document type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/44494
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Pattern Analysis and Learning Group
Author affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Harbin Inst Technol, Shenzhen Key Lab Visual Object Detect & Recognit, Shenzhen 518055, Peoples R China
3. Peng Cheng Lab, Shenzhen 518055, Peoples R China
4. Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
5. Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
6. Mohamed Bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
Recommended citation:
GB/T 7714: Xie, Guo-Sen, Zhang, Xu-Yao, Yao, Yazhou, et al. VMAN: A Virtual Mainstay Alignment Network for Transductive Zero-Shot Learning[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 4316-4329.
APA: Xie, Guo-Sen, Zhang, Xu-Yao, Yao, Yazhou, Zhang, Zheng, Zhao, Fang, & Shao, Ling. (2021). VMAN: A Virtual Mainstay Alignment Network for Transductive Zero-Shot Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING, 30, 4316-4329.
MLA: Xie, Guo-Sen, et al. "VMAN: A Virtual Mainstay Alignment Network for Transductive Zero-Shot Learning". IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 4316-4329.