Efficient image retrieval using multiple neural hash codes and Bloom filters
This paper presents an efficient approach to image retrieval that uses multiple neural hash codes and limits the number of queries with Bloom filters by identifying false positives beforehand. Traditional neural-network approaches to image retrieval tend to extract features from the higher layers, yet the activations of lower layers have proven more effective in a number of scenarios. Our approach leverages local deep convolutional neural networks, combining the features of both lower and higher layers to create feature maps; these are compressed using PCA, converted to binary sequences with a modified multi k-means approach, and fed to a Bloom filter. The feature maps are then used in the retrieval process in a hierarchical coarse-to-fine manner: images are first compared in the higher layers for semantic similarity, then progressively in the lower layers for structural similarity. At search time, the neural hashes of the query image are recomputed and checked against the Bloom filter, which reports either that the image is definitely absent from the set or that it may be present. Only if the Bloom filter does not rule out the query does it proceed to the image retrieval stage. This approach is particularly helpful when the image store is distributed, since it supports parallel querying.
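The gating role of the Bloom filter described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: feature extraction, PCA compression, and the multi k-means binarization are abstracted away into the binary code strings passed in, the salted SHA-256 stands in for MurmurHash3, and all class and variable names here are hypothetical.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over binary hash-code strings."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for simplicity

    def _positions(self, code):
        # Derive k bit positions by salting a cryptographic hash with the
        # hash-function index (a stand-in for k independent MurmurHash3 seeds).
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{code}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, code):
        for pos in self._positions(code):
            self.bits[pos] = 1

    def might_contain(self, code):
        # False => definitely absent, so retrieval can be skipped entirely.
        # True  => possibly present (false positives are possible), so the
        #          query proceeds to the coarse-to-fine retrieval stage.
        return all(self.bits[pos] for pos in self._positions(code))

# Index the binary codes of the stored images, then gate incoming queries.
bf = BloomFilter()
for stored_code in ["0110101011", "1110001101"]:
    bf.add(stored_code)

query_code = "0110101011"
if bf.might_contain(query_code):
    pass  # may be present: run the hierarchical coarse-to-fine search
else:
    pass  # definitely absent: answer immediately without any retrieval
```

Because each node of a distributed image store can hold its own small filter, a query can be broadcast and cheaply rejected in parallel by every node whose filter rules it out, which is the property the abstract's final claim relies on.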
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.