

International Journal of Technology Enhancements and Emerging Engineering Research (ISSN 2347-4289)



Volume 3, Issue 8, August 2015 Edition




Website: http://www.ijteee.org




Geo-Location And Information Retrieval For On-Premise Signs


 

AUTHOR(S)

Charulata

 

KEYWORDS

Real-world objects, street view scenes, learning and recognition, object image data set.

 

ABSTRACT

Image recognition has become an integral part of today's technical world, and its many application scenarios make visual object recognition in daily life a key technique. On-premise signs (OPSs), a popular form of commercial advertising, are ubiquitous in everyday life. OPSs often exhibit great visual diversity (e.g., appearing at arbitrary sizes) and are accompanied by complex environmental conditions (e.g., foreground and background clutter). Observing that such real-world characteristics are lacking in most existing image data sets, this paper first presents an OPS data set comprising OPS images of different businesses, collected primarily from Google Street View. To address the problem of real-world OPS learning and recognition, we then develop a probabilistic framework based on distributional clustering, in which the distributional information of each visual feature (the distribution of its associated OPS labels) is exploited as a reliable selection criterion for building discriminative OPS models. The approach is simple, linear, and can be executed in parallel, making it practical and scalable for large-scale multimedia applications. The resulting system offers a simple interface for browsing and operating on the sign images, and it is organized to store image details and return the geo-location of a recognized sign.
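
To make the selection criterion concrete, the sketch below illustrates one way the distributional information of visual features could drive recognition: each quantized visual word accumulates a distribution over OPS labels, low-entropy (highly peaked) words are kept as discriminative features, and a query image is scored by the label distributions of its selected words, returning the stored geo-location of the best-matching sign. This is a minimal illustration under assumed details, not the authors' implementation: the entropy-based cutoff, the keep_ratio parameter, and the geo_db mapping from class id to coordinates are all illustrative assumptions.

import numpy as np

def label_distributions(feature_labels, n_words, n_classes):
    """For each quantized visual word, count how often it appears in images of
    each OPS class, then normalize to a per-word label distribution."""
    counts = np.zeros((n_words, n_classes))
    for word, label in feature_labels:            # (visual word id, OPS class id) pairs
        counts[word, label] += 1
    counts += 1e-9                                # avoid division by zero for unseen words
    return counts / counts.sum(axis=1, keepdims=True)

def discriminative_words(distributions, keep_ratio=0.2):
    """Keep the visual words whose label distribution is most peaked (lowest
    entropy), i.e., the words that point strongly to a single sign class.
    keep_ratio is an assumed tuning parameter, not a value from the paper."""
    entropy = -(distributions * np.log(distributions)).sum(axis=1)
    k = max(1, int(keep_ratio * len(entropy)))
    return set(np.argsort(entropy)[:k].tolist())  # ids of the selected visual words

def recognize_and_locate(query_words, distributions, selected, geo_db):
    """Score each OPS class by accumulating the label distributions of the
    query's selected visual words; return the best class and its stored
    geo-location (geo_db is an assumed map from class id to (lat, lon))."""
    votes = [distributions[w] for w in query_words if w in selected]
    if not votes:
        return None, None                         # no discriminative evidence in the query
    scores = np.sum(votes, axis=0)
    best = int(np.argmax(scores))
    return best, geo_db.get(best)

In practice, feature_labels would come from quantizing local descriptors of the training images against a visual vocabulary. Because each word's statistics are computed independently, both the counting and the selection steps parallelize trivially across words, which is consistent with the scalability claim above.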

 
