Vector-based image search technology, which encodes images and queries as vectors in a shared high-dimensional space, is increasingly used for efficient image retrieval. This approach involves several key steps: extracting embeddings with convolutional neural networks; running an indexing workflow that preprocesses, extracts, and compresses these embeddings; and inserting them into a searchable index. The search engine relies on approximate nearest neighbor search (ANNS) and compression methods for efficiency. Traditionally, search engine optimization for images relies on keywords embedded in metadata or placed on hosting pages. Vector-based search, however, offers no comparable lever for adaptively improving image discoverability, because an image's embedding is fixed once the image is indexed. This paper introduces a method to optimize image discoverability for vector-based search engines while minimizing visual impact. The approach frames the problem as an optimization task: an iterative process adjusts an image with respect to both intended and unwanted search queries. It employs backpropagation through a set of loss functions to fine-tune the image, pulling its embedding toward target queries and away from unwanted ones, while a perceptual loss keeps visible deviations small. Segmentation masks can further confine the visual adjustments to specific regions of the image. The technique can be applied to newly uploaded images or to existing asset libraries to improve search ranking.
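The optimization loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a differentiable CLIP-style encoder producing unit-length embeddings (a toy convolutional encoder stands in here), uses cosine similarity to pull the image's embedding toward a target query embedding and away from an unwanted one, and substitutes a plain MSE penalty for the perceptual loss. The `optimize_image` function and all weights (`w_target`, `w_unwanted`, `w_perceptual`) are illustrative names, and the optional `mask` argument shows how a segmentation mask can restrict where pixels change.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

class ToyEncoder(torch.nn.Module):
    """Toy stand-in for a real image encoder (e.g., a CLIP image tower).
    The method only requires that the encoder be differentiable."""
    def __init__(self, dim=32):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, dim, kernel_size=8, stride=8)

    def forward(self, x):
        z = self.conv(x).mean(dim=(2, 3))   # global average pool -> (B, dim)
        return F.normalize(z, dim=-1)       # unit-length embedding

def optimize_image(image, target_emb, unwanted_emb, encoder,
                   mask=None, steps=100, lr=0.01,
                   w_target=1.0, w_unwanted=0.5, w_perceptual=10.0):
    """Iteratively perturb `image` so its embedding moves toward
    `target_emb` and away from `unwanted_emb`; an MSE term (stand-in
    for a perceptual loss) limits visible change. If `mask` is given,
    the perturbation is confined to the masked region."""
    original = image.detach().clone()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbation = delta * mask if mask is not None else delta
        perturbed = original + perturbation
        emb = encoder(perturbed)
        loss = (-w_target * F.cosine_similarity(emb, target_emb).mean()
                + w_unwanted * F.cosine_similarity(emb, unwanted_emb).mean()
                + w_perceptual * F.mse_loss(perturbed, original))
        opt.zero_grad()
        loss.backward()
        opt.step()
    perturbation = delta * mask if mask is not None else delta
    return (original + perturbation).detach()

# Usage: random image and random (hypothetical) query embeddings.
encoder = ToyEncoder()
image = torch.rand(1, 3, 32, 32)
target_emb = F.normalize(torch.randn(1, 32), dim=-1)
unwanted_emb = F.normalize(torch.randn(1, 32), dim=-1)

sim_before = F.cosine_similarity(encoder(image), target_emb).item()
tuned = optimize_image(image, target_emb, unwanted_emb, encoder)
sim_after = F.cosine_similarity(encoder(tuned), target_emb).item()
```

After optimization, `sim_after` exceeds `sim_before` while the perceptual term keeps the pixel-level change small; in practice the trade-off between ranking gain and visual fidelity is controlled by the loss weights.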
- Introduction to vector-based image search
- Limitations of vector-based image search
- Adversarial methods to trick image classifiers
- How to automatically alter images to influence results of image search
- Applications to asset libraries and video and scene search