Given the portraits of a list of cast members, our goal is to search for them in a movie in an online fashion, processing the video sequentially as a human viewer would. To tackle this challenging problem, we propose a novel online multi-modal searching machine (OMS). OMS has four key components: multi-modal feature representations (MFR), a dynamic memory bank (DMB), an uncertain instance cache (UIC), and a controller. Each instance, represented by multi-modal features, is compared with the exemplars stored in the memory bank to judge its identity. The controller then decides whether the instance should be used to update the memory bank or be placed in the uncertain instance cache for later comparison. Both the memory bank and the uncertain instance cache are dynamically updated over time under a strategy operated by the controller. Together, these components form an "intelligent machine" that watches a movie and gradually recognizes the characters, much as a human does.
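Below is a minimal sketch of this online loop, assuming pre-extracted multi-modal feature vectors and plain cosine similarity as the matching function. The class and parameter names (`OnlineSearcher`, `assign_threshold`, `cache_threshold`) are illustrative assumptions, not the paper's actual implementation; they only show how a controller might route each instance to the memory bank or the uncertain cache.

```python
# Illustrative sketch only: names, thresholds, and the cosine-similarity matcher
# are assumptions, not the OMS implementation from the paper.
import numpy as np


class OnlineSearcher:
    def __init__(self, portraits, assign_threshold=0.8, cache_threshold=0.5):
        # Dynamic memory bank: one list of exemplar features per cast member,
        # initialised from the query portraits.
        self.memory_bank = {cast_id: [feat] for cast_id, feat in portraits.items()}
        self.uncertain_cache = []            # instances deferred for later comparison
        self.assign_threshold = assign_threshold
        self.cache_threshold = cache_threshold

    @staticmethod
    def _cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def _best_match(self, feat):
        # Compare the instance with every exemplar in the memory bank and
        # return the best-matching cast id with its similarity score.
        best_id, best_sim = None, -1.0
        for cast_id, exemplars in self.memory_bank.items():
            sim = max(self._cosine(feat, e) for e in exemplars)
            if sim > best_sim:
                best_id, best_sim = cast_id, sim
        return best_id, best_sim

    def process(self, feat):
        """Controller logic for a single detected person instance."""
        cast_id, sim = self._best_match(feat)
        if sim >= self.assign_threshold:
            # Confident match: label it and add it to the memory bank as a new exemplar.
            self.memory_bank[cast_id].append(feat)
            return cast_id
        if sim >= self.cache_threshold:
            # Ambiguous: defer the decision until the memory bank has grown.
            self.uncertain_cache.append(feat)
        return None                          # treated as unknown for now

    def revisit_cache(self):
        # Periodically re-compare cached instances against the updated memory bank;
        # instances that remain ambiguous are re-cached by process().
        pending, self.uncertain_cache = self.uncertain_cache, []
        for feat in pending:
            self.process(feat)
```

In this sketch the two thresholds stand in for the controller's decision strategy: a high-similarity instance immediately enriches the memory bank, a mid-similarity one waits in the cache, and everything else is left unlabeled until more exemplars accumulate.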
@inproceedings{xia2020online,
title={Online Multi-modal Person Search in Videos},
author={Xia, Jiangyue and Rao, Anyi and Xu, Linning and Huang, Qingqiu and Wen, Jiangtao and Lin, Dahua},
booktitle={The European Conference on Computer Vision (ECCV)},
year={2020}
}