Abstract: To address the limitations of existing hashing methods in preserving modality-specific information, embedding multi-label semantics, and learning discriminative hash functions, this paper proposes a cross-modal hashing retrieval method guided by multi-label semantic similarity. In the hash code learning stage, the method preserves modality-specific information by learning an independent latent representation space for each of the two modalities. Protective margins and a multi-label similarity constraint are then incorporated into the semantic embedding process, establishing inter-modal relations while exploiting the supervisory roles of semantic information and overlapping label information in aligning cross-modal representations. In the hash function learning stage, semantic hashing constraints that combine semantic similarity with the learned hash codes are introduced to guide hash function learning, enhancing the discriminative capability of the hash functions. Extensive comparative experiments against nine state-of-the-art hashing methods on three benchmark datasets validate the effectiveness and superiority of the proposed method for cross-modal retrieval tasks.