Referring Expression Comprehension (REC) aims to locate a specific object within an image by interpreting a referring expression given in natural language. The task comprises two essential branches: understanding and localizing. The former processes cognitive information from multimodal data, while the latter grounds the resulting predictions in the visual space. Although advanced approaches have been developed for each branch separately, existing REC methods cannot effectively leverage them, because their architectures and objectives are designed specifically for REC and bind understanding and localizing inseparably. To overcome this challenge, we propose the Decoupling-Cooperative Framework (DCF). The decoupling scheme in DCF allows up-to-date understanding and localizing methods to be adopted with minimal constraints, while the proposed cooperative modules integrate the strengths of both branches for further gains. Extensive experiments demonstrate that DCF achieves state-of-the-art performance on four benchmarks, highlighting its generalizability.
- Multimodal understanding
- Object detection and localization
- Referring expression comprehension (REC)
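The decoupling idea described above can be illustrated with a toy sketch: the understanding and localizing branches sit behind minimal interfaces, so either can be swapped independently. All class names, interfaces, and the placeholder logic below are hypothetical illustrations, not the paper's actual architecture.

```python
# Hypothetical sketch of decoupled REC branches; all names and logic
# here are illustrative placeholders, not the DCF implementation.

class UnderstandingBranch:
    """Maps (image features, expression) to a cross-modal query."""
    def encode(self, image_feats, expression):
        # Toy fusion: scale image features by expression length
        # (a stand-in for real multimodal encoding).
        words = set(expression.lower().split())
        return [f * (1 + 0.1 * len(words)) for f in image_feats]

class LocalizingBranch:
    """Maps a cross-modal query to a bounding box (x, y, w, h)."""
    def localize(self, query):
        # Toy regression: center the box on the strongest feature.
        i = max(range(len(query)), key=lambda k: query[k])
        return (i * 10, i * 10, 20, 20)

class DCF:
    """Wrapper composing the two branches behind fixed interfaces,
    so either branch can be replaced with a newer method."""
    def __init__(self, understanding, localizing):
        self.understanding = understanding
        self.localizing = localizing

    def predict(self, image_feats, expression):
        query = self.understanding.encode(image_feats, expression)
        return self.localizing.localize(query)

model = DCF(UnderstandingBranch(), LocalizingBranch())
box = model.predict([0.2, 0.9, 0.4], "the red mug on the left")
print(box)  # (10, 10, 20, 20)
```

Because `DCF.predict` only relies on the `encode` and `localize` interfaces, upgrading one branch requires no change to the other, which is the minimal-constraint property the abstract emphasizes.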