Geo-Location And Information Retrieval For On-Premise Signs
Index Terms: Real-world objects, street view scenes, learning and recognition, object image data set.
Abstract: Image recognition has become an integral part of today's technical world, and its many application scenarios make everyday visual object recognition a key technique. On-premise signs (OPSs), a popular form of commercial advertising, are widely used in daily life. OPSs often exhibit great visual diversity (e.g., appearing in arbitrary sizes) under complex environmental conditions (e.g., foreground and background clutter). Observing that such real-world characteristics are lacking in most existing image data sets, in this paper we first propose an OPS data set comprising OPS images of different businesses, collected primarily from Google Street View. To address the problem of real-world OPS learning and recognition, we then develop a probabilistic framework based on distributional clustering, in which the distributional information of each visual feature (the distribution of its associated OPS labels) is exploited as a reliable selection criterion for building discriminative OPS models. This approach is simple, linear, and can be executed in parallel, making it practical and scalable for large-scale multimedia applications. The resulting system is organized in a simple, easy-to-operate manner: it holds the details of sign images and returns the geo-location of a queried image.
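The selection criterion described above can be illustrated with a minimal sketch: for each visual feature, form the distribution of OPS labels it co-occurs with, and keep features whose distribution is concentrated (low entropy), since these discriminate one sign class from the rest. The entropy threshold, the feature representation, and the function names here are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter
from math import log2

def label_distribution(labels):
    """Normalize the label counts of one visual feature into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {lbl: c / total for lbl, c in counts.items()}

def entropy(dist):
    """Shannon entropy of a label distribution; low entropy = concentrated = discriminative."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def select_discriminative(feature_labels, max_entropy=1.0):
    """Keep features whose OPS-label distribution is concentrated enough.

    feature_labels: dict mapping a feature id to the list of OPS labels it
    appeared with. The max_entropy threshold is a hypothetical parameter.
    """
    return [f for f, labels in feature_labels.items()
            if entropy(label_distribution(labels)) <= max_entropy]

# Toy example: feature "a" co-occurs mostly with one sign class (discriminative),
# while feature "b" spreads evenly across classes (background clutter).
feats = {
    "a": ["cafe", "cafe", "cafe", "bank"],   # entropy ~0.81
    "b": ["cafe", "bank", "salon", "gym"],   # entropy = 2.0
}
print(select_discriminative(feats))  # ['a']
```

Because each feature is scored independently, this pass is linear in the number of features and trivially parallelizable, which matches the scalability claim in the abstract.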