• Title/Summary/Keyword: Scene Recognition


Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems
    • /
    • v.19 no.4
    • /
    • pp.427-438
    • /
    • 2023
  • Many works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention is introduced to capture local information and inter-channel interaction information of text images; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the ASTER scene text recognizer by 1.2% compared to the text super-resolution network baseline.
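
The channel- and spatial-attention ingredients described above can be sketched in a few lines of NumPy. This is only an illustrative toy, not the paper's model: the gating weights here are random and untrained, and the real module fuses these gates with multi-scale residual learning.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """Squeeze-and-excite-style channel attention: global-average-pool each
    channel, pass the pooled vector through a (random, untrained) weight
    matrix, then gate every channel by a sigmoid score."""
    pooled = feat.mean(axis=(1, 2))        # (C,)
    gate = sigmoid(w @ pooled)             # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Spatial attention: pool across channels to one statistic per pixel
    and gate every spatial location."""
    gate = sigmoid(feat.mean(axis=0))      # (H, W)
    return feat * gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))        # C x H x W feature map
w = rng.normal(size=(8, 8))
out = spatial_attention(channel_attention(feat, w))
assert out.shape == feat.shape
```

Because both gates lie in (0, 1), each attention stage rescales features without changing the map's shape, which is what lets the module drop into a residual branch.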

GMM-KL Framework for Indoor Scene Matching (실내 환경 이미지 매칭을 위한 GMM-KL프레임워크)

  • Kim, Jun-Young;Ko, Han-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.61-63
    • /
    • 2005
  • Retrieving an indoor scene reference image from a database using visual information is an important issue in robot navigation. Scene matching for a navigation robot is not easy because the input image taken during navigation is affinely distorted. We present a probabilistic framework for matching features in the input image against features in the database reference images to guarantee robust scene matching. By constructing this probabilistic scene matching framework, we obtain higher precision than the existing feature-to-feature matching scheme. To construct the framework, we represent each image as a Gaussian mixture model fitted with the Expectation-Maximization algorithm over SIFT (Scale Invariant Feature Transform) features.
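
The core of this framework, fitting a Gaussian density to an image's SIFT descriptors and scoring two images by symmetric KL divergence, can be sketched as follows. For brevity the mixture is collapsed to a single Gaussian per image, and the descriptors are random stand-ins rather than real SIFT vectors.

```python
import numpy as np

def fit_gaussian(features):
    """Fit one Gaussian (mean, regularized covariance) to a descriptor set."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def kl_gaussian(p, q):
    """Closed-form KL divergence between two multivariate Gaussians."""
    mu_p, cov_p = p
    mu_q, cov_q = q
    d = mu_p.shape[0]
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff - d
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def match_score(feats_a, feats_b):
    """Symmetric KL: lower means the two images more likely match."""
    ga, gb = fit_gaussian(feats_a), fit_gaussian(feats_b)
    return kl_gaussian(ga, gb) + kl_gaussian(gb, ga)

rng = np.random.default_rng(0)
scene = rng.normal(0.0, 1.0, (200, 4))           # stand-in descriptors
same = scene + rng.normal(0.0, 0.05, (200, 4))   # slightly distorted view
other = rng.normal(3.0, 1.0, (200, 4))           # a different scene
assert match_score(scene, same) < match_score(scene, other)
```

A full GMM with several components would replace `fit_gaussian` with EM over mixture weights, and the KL between mixtures would then be approximated rather than computed in closed form.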

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.388-394
    • /
    • 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup. We assume that elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon counting elemental image set to reconstruct the original 3D scene. To evaluate photon counting object recognition performance, normalized correlation peaks between reconstructed 3D scenes are calculated while the total number of image channels in the integral imaging system is changed, for both varied and fixed total numbers of photons in the reconstructed sectional image. It is quantitatively shown that the recognition performance of the photon counting integral imaging (PCII) system can approach that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that guarantees recognition performance comparable with a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
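
The Poisson photon-limited detection model and the normalized correlation score used above can be illustrated with a small simulation; the "scene" here is random noise rather than a real elemental image set, and the photon budgets are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_count_image(irradiance, n_photons):
    """Simulate photon-limited detection: counts at each pixel are Poisson
    with a rate proportional to the normalized irradiance."""
    rate = n_photons * irradiance / irradiance.sum()
    return rng.poisson(rate)

def normalized_correlation(a, b):
    """Normalized correlation peak between two (flattened) images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scene = rng.random((32, 32))                 # stand-in gray-scale scene
few = photon_count_image(scene, 500)         # severely photon-starved
many = photon_count_image(scene, 500000)     # generous photon budget

# With more photons, the photon-counting image correlates more strongly
# with the underlying gray-scale scene -- the trend behind the paper's
# threshold on the number of viewing channels.
assert normalized_correlation(scene, many) > normalized_correlation(scene, few)
```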

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo;Park, Sungjae;Ha, Sooji;Kim, Hyoungkwan
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.333-335
    • /
    • 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. The mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices in that information extracted from images could capture environmental context.
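
The mean shift step can be sketched on toy gray values. The bandwidth and the three-class scene (hole, ground, building) below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mean_shift_modes(points, bandwidth, iters=30):
    """Shift each point toward the mean of its neighbours (flat kernel)
    until it settles on a density mode; points sharing a mode form one
    segmented region."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            mask = np.linalg.norm(points - p, axis=1) < bandwidth
            shifted[i] = points[mask].mean(axis=0)
    return shifted

# Toy "construction site" gray values: dark hole, mid ground, bright building.
pixels = np.array([[10.], [12.], [14.], [100.], [104.], [108.], [200.], [205.]])
modes = mean_shift_modes(pixels, bandwidth=30.0)
labels = np.unique(np.round(modes, 0), axis=0)
assert len(labels) == 3   # three regions: hole, ground, building
```

On real images the same idea runs over joint (color, position) vectors, so spatially coherent regions, not just similar gray levels, emerge.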

Object Recognition for Markerless Augmented Reality Embodiment (마커 없는 증강 현실 구현을 위한 물체인식)

  • Paul, Anjan Kumar;Lee, Hyung-Jin;Kim, Young-Bum;Islam, Mohammad Khairul;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.13 no.1
    • /
    • pp.126-133
    • /
    • 2009
  • In this paper, we propose an object recognition technique for implementing markerless augmented reality. The Scale Invariant Feature Transform (SIFT) is used to find local features in object images. These features are invariant to scale, rotation, and translation, and partially invariant to illumination changes. The extracted features are distinctive and are matched against image features in the scene. If the trained image is properly matched, the object is expected to be found in the scene. In this paper, an object is found in a scene by matching template images generated from the first frame of the scene. Experimental results of object recognition for four kinds of objects show that the proposed technique performs well.
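
A minimal sketch of SIFT-style descriptor matching with Lowe's distance-ratio test follows; the 4-D descriptors below are hypothetical stand-ins for real 128-D SIFT vectors.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe-style matching: accept a template feature's match only if its
    nearest neighbour in the scene is clearly closer than the second
    nearest (distance-ratio test), which rejects ambiguous features."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Hypothetical descriptors: template features 0 and 1 reappear in the scene
# with small noise; feature 2 has two near-identical candidates and should
# be rejected as ambiguous.
template = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], float)
scene = np.array([[1.02, 0, 0, 0], [0, 0.98, 0, 0],
                  [0, 0, 1.01, 0], [0, 0, 0.99, 0]], float)
matches = ratio_test_matches(template, scene)
assert (0, 0) in matches and (1, 1) in matches
assert all(i != 2 for i, _ in matches)   # ambiguous feature rejected
```

Enough surviving matches then indicate the template object is present, which is the decision the abstract describes.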

Density Change Adaptive Congestive Scene Recognition Network

  • Jun-Hee Kim;Dae-Seok Lee;Suk-Ho Lee
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.147-153
    • /
    • 2023
  • In recent times, the absence of effective crowd management has led to numerous stampede incidents in crowded places. A crucial component for enhancing on-site crowd management effectiveness is the use of crowd counting technology. Current approaches to analyzing congested scenes have evolved beyond simple crowd counting, which outputs the number of people in the targeted image, toward density map estimation. This development aligns with the demands of real-life applications, as the same number of people can exhibit vastly different crowd distributions, so solely counting the crowd is no longer sufficient. CSRNet stands out as one representative method within this advanced category of approaches. In this paper, we propose a crowd counting network that adapts to changes in the density of people in the scene, addressing the performance degradation observed in the existing CSRNet (Congested Scene Recognition Network) when density changes. To overcome this weakness of CSRNet, we introduce a system that takes the image's information as input and adjusts the output of CSRNet based on features extracted from the image. This improves the algorithm's adaptability to changes in density, supplementing the shortcomings identified in the original CSRNet.
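
The density-map formulation, where the crowd count is the integral of the predicted map, and a scalar adjustment of the backbone output can be sketched as below. The adjustment here is a deliberate simplification of the paper's feature-driven correction: a single hypothetical scale factor standing in for whatever the image-level branch would predict.

```python
import numpy as np

def count_from_density(density_map):
    """Crowd count is the integral (sum) of the predicted density map."""
    return float(density_map.sum())

def adaptive_count(density_map, scale):
    """Hypothetical adjustment: a scalar predicted from image-level features
    rescales the backbone's density output to compensate for density shift."""
    return count_from_density(scale * density_map)

# A uniform density map whose mass sums to 50 people.
density = np.full((8, 8), 50.0 / 64.0)
assert abs(count_from_density(density) - 50.0) < 1e-9

# If the backbone under-counts dense scenes by 20%, a learned scale of 1.25
# would restore the estimate.
assert abs(adaptive_count(density, 1.25) - 62.5) < 1e-9
```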

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.674-679
    • /
    • 2006
  • In this paper a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is the capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information, and guarantees robust segmentation of the hand under various illumination conditions and scene content. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.

Design and Implementation of the Perception Mechanism for the Agent in the Virtual World (가상 세계 거주자의 지각 메커니즘 설계 및 구현)

  • Park, Jae-Woo;Jung, Geun-Jae;Park, Jong-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.8
    • /
    • pp.1-13
    • /
    • 2011
  • In order to create an intelligent autonomous agent in a virtual world, we need a sophisticated design for perception, recognition, judgement, and behavior. We develop the perception and recognition functions for such an autonomous agent. Our perception mechanism identifies lines based on differences in color, the primitive visible data, and exploits those lines to grasp shapes and regions in the scene. We develop an inference algorithm that can infer the original shape from a damaged or partially hidden shape using its characteristics from the ontology, in order to intelligently recognize the perceived shape. Several individually recognized 2D shapes and their spatial relations form 3D shapes, and those 3D shapes in turn constitute a scene. Each 3D shape occupies its respective region, and an agent analyzes the associated objects and relevant scenes to recognize things and phenomena. We also develop a mechanism by which an agent uses this recognition function to accumulate and use its knowledge of the scene in a historical context. We implement these functions for an example situation to demonstrate their sophistication and realism.

Fast Object Recognition using Local Energy Propagation from Combination of Salient Line Groups (직선 조합의 에너지 전파를 이용한 고속 물체인식)

  • 강동중
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2000.10a
    • /
    • pp.311-311
    • /
    • 2000
  • We propose a DP-based formulation for matching line patterns by defining a robust and stable geometric representation based on conceptual organization. The endpoint proximity and collinearity of image lines, the two main conceptual organization groups, are useful cues for matching the model shape in the scene. For endpoint proximity, we detect junctions from image lines. We then search for junction groups using geometric constraints between the junctions. A junction chain similar to the model chain is searched for in the scene based on local comparison. A dynamic-programming-based search algorithm reduces the time complexity of the search for the model chain in the scene. Our system can find a reasonable matching even when severely distorted objects exist in the scene. We demonstrate the feasibility of the DP-based matching method using both synthetic and real images.
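
The DP search over junction chains can be sketched as a classic alignment recurrence. The junction-angle costs and gap penalty below are illustrative assumptions, not the paper's exact energy terms.

```python
def dp_chain_match(model, scene, gap=10.0):
    """Dynamic-programming alignment of a model junction chain against a
    scene chain. The recurrence mixes a local dissimilarity for matching
    two junctions with gap penalties for skipping one, so the best
    alignment is found without enumerating all candidate chains."""
    m, n = len(model), len(scene)
    cost = [[0.0] * (n + 1) for _ in range(m + 1)]
    cost[0] = [j * gap for j in range(n + 1)]
    for i in range(1, m + 1):
        cost[i][0] = i * gap
        for j in range(1, n + 1):
            local = abs(model[i - 1] - scene[j - 1])  # e.g. junction-angle difference
            cost[i][j] = min(cost[i - 1][j - 1] + local,  # match junctions
                             cost[i - 1][j] + gap,        # skip a model junction
                             cost[i][j - 1] + gap)        # skip a scene junction
    return cost[m][n]

model_chain = [90.0, 45.0, 120.0]        # hypothetical junction angles
good_scene = [88.0, 90.0, 46.0, 121.0]   # model chain plus one clutter junction
bad_scene = [10.0, 10.0, 10.0]
assert dp_chain_match(model_chain, good_scene) < dp_chain_match(model_chain, bad_scene)
```

The O(mn) table is what replaces the exponential search over all junction combinations, and the gap moves are what keep the match reasonable under clutter and distortion.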

Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.125-130
    • /
    • 1997
  • In this paper we present an algorithm to generate new views of a scene, starting from images taken by weakly calibrated cameras. Errors in 3D scene reconstruction are usually reflected in the quality of the newly generated scene, so we seek a direct method for reprojection. In this paper, we use the knowledge of dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. We borrow ideas from the object recognition literature and extend them significantly to solve the reprojection problem. Unlike epipolar line intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The theory of reprojection is combined with hardware-based rendering to achieve fast rendering. We demonstrate our results of novel view generation from stereo pairs for arbitrary locations of the virtual camera.
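
The reprojection idea, affine coordinates staying invariant under the view mapping, can be sketched in the pure-affine special case. The paper's full method handles weakly calibrated perspective views with five matched points; this toy uses a three-point basis under an exact 2D affine map.

```python
import numpy as np

def affine_coords(p, basis):
    """Express point p in the affine frame of three basis points:
    p = b0 + a*(b1 - b0) + c*(b2 - b0); returns (a, c)."""
    b0, b1, b2 = basis
    M = np.column_stack([b1 - b0, b2 - b0])
    return np.linalg.solve(M, p - b0)

def reproject(coords, basis_new):
    """Transfer: reuse the same affine coordinates with the basis points
    as they appear in the new view."""
    b0, b1, b2 = basis_new
    return b0 + coords[0] * (b1 - b0) + coords[1] * (b2 - b0)

basis = [np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])]
p = np.array([0.3, 0.5])
coords = affine_coords(p, basis)

# New view: the whole scene undergoes an affine map x -> A x + t.
A = np.array([[1.2, 0.1], [-0.2, 0.9]])
t = np.array([5.0, -2.0])
basis_new = [A @ b + t for b in basis]
p_new = reproject(coords, basis_new)
assert np.allclose(p_new, A @ p + t)   # affine coordinates are view-invariant
```

Only the basis points need to be located in the virtual view; every other matched point transfers through its stored coordinates, which is why reprojection avoids explicit 3D reconstruction.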