• Title/Summary/Keyword: Scene Understanding

Object's orientation and motion for scene understanding

  • Sakai, Y.; Kitazawa, M.; Okuno, Y.
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings / 1993.10b / pp.271-276 / 1993
  • This paper presents a methodology for understanding scenes that contain moving objects, within the framework of the notion of concepts. First, the conceptual understanding of an object that is an element of a scene is described. Then, how to determine the direction in which that object is heading is discussed. Finally, the proposed methodology for conceptually understanding the motion of an object is described, making use of this knowledge of direction.
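
The direction-of-heading step described above can be illustrated with a minimal sketch that is not taken from the paper: track an object's binary mask across two frames and quantize the centroid displacement into a symbolic compass direction. The mask inputs and the eight direction labels are assumptions made purely for illustration.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (x, y) of a binary object mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def heading(mask_t0: np.ndarray, mask_t1: np.ndarray) -> str:
    """Quantize the centroid displacement into one of eight symbolic directions."""
    dx, dy = centroid(mask_t1) - centroid(mask_t0)
    angle = np.degrees(np.arctan2(-dy, dx)) % 360  # image y-axis points down
    labels = ["east", "northeast", "north", "northwest",
              "west", "southwest", "south", "southeast"]
    return labels[int(((angle + 22.5) % 360) // 45)]
```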

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo; Park, Sungjae; Ha, Sooji; Kim, Hyoungkwan
    • International Conference on Construction Engineering and Project Management / 2015.10a / pp.333-335 / 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. The mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices in that the information extracted from images captures environmental context.
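
The abstract names the mean shift image segmentation algorithm as the region-separation step. Below is a minimal sketch of that step using OpenCV's pyrMeanShiftFiltering followed by per-level connected components; the smoothing radii, the gray-level quantization, and the omission of the building/ground/hole classifier are all simplifications, not the paper's implementation.

```python
import cv2
import numpy as np

def mean_shift_regions(image_bgr: np.ndarray,
                       spatial_radius: int = 21,
                       color_radius: int = 25,
                       levels: int = 8) -> np.ndarray:
    """Mean-shift color smoothing followed by connected-component labeling.

    Returns an int32 label map in which each connected patch of similar color
    gets its own id. Classifying each region as building, ground, or hole
    would be a separate step (e.g. a classifier on per-region features).
    """
    filtered = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    quantized = (gray.astype(np.int32) * levels) // 256  # coarse bins 0..levels-1

    labels = np.zeros(gray.shape, dtype=np.int32)
    next_id = 1
    for level in range(levels):
        mask = (quantized == level).astype(np.uint8)
        n, comp = cv2.connectedComponents(mask)
        labels[comp > 0] = comp[comp > 0] + (next_id - 1)
        next_id += n - 1  # component 0 is the background of this level's mask
    return labels
```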

Graph-based Segmentation for Scene Understanding of an Autonomous Vehicle in Urban Environments (무인 자동차의 주변 환경 인식을 위한 도시 환경에서의 그래프 기반 물체 분할 방법)

  • Seo, Bo Gil; Choe, Yungeun; Roh, Hyun Chul; Chung, Myung Jin
    • The Journal of Korea Robotics Society / v.9 no.1 / pp.1-10 / 2014
  • In recent years, 3D mapping techniques for urban environments, using mobile robots equipped with multiple sensors to recognize the robot's surroundings, have been studied actively. However, a map generated by simply integrating multiple sensors' data gives robots only spatial information. To obtain semantic knowledge from the map that can help an autonomous mobile robot, the robot has to convert low-level map representations into higher-level ones containing semantic knowledge of a scene. Given a 3D point cloud of an urban scene, this research proposes a method for autonomous mobile robots to recognize objects effectively using a 3D graph model. The proposed method is decomposed into three steps: sequential range data acquisition, normal vector estimation, and incremental graph-based segmentation. The method guarantees both real-time performance and accuracy in recognizing objects in real urban environments, and it can provide plentiful data for classifying the objects. To evaluate the performance of the proposed method, computation time and object recognition rate are analyzed. Experimental results show that the proposed method is efficient in understanding the semantic knowledge of an urban environment.
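
As a rough illustration of the normal-vector-estimation and graph-based-segmentation steps listed above (not the authors' incremental algorithm), the sketch below estimates per-point normals from k nearest neighbors via SVD and then merges neighboring points whose normals agree, using a union-find structure; k and the angle threshold are placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 10) -> np.ndarray:
    """Per-point normal = direction of least variance of the k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[-1]  # singular vector with the smallest singular value
    return normals

def segment_by_normals(points: np.ndarray, k: int = 10,
                       angle_thresh_deg: float = 15.0) -> np.ndarray:
    """Union-find merge of neighboring points whose normals are nearly parallel."""
    normals = estimate_normals(points, k)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    parent = np.arange(len(points))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    for i, nb in enumerate(idx):
        for j in nb[1:]:  # skip the point itself
            if abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(points))])
```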

Object Motion Analysis and Interpretation in Video

  • Song, Dan; Cho, Mi-Young; Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.694-696 / 2004
  • As video capabilities have become more sophisticated, object motion analysis and interpretation has become a fundamental task for computer vision understanding. Toward that understanding, we first apply a sum of absolute difference (SAD) algorithm, computed over the scene, to motion detection. We then focus on representing the moving objects in the scene using spatio-temporal relations. The video can thus be explained comprehensively from both aspects: the relations among moving objects and the intervals of video events.
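
A minimal sketch of the sum-of-absolute-difference (SAD) motion-detection step mentioned in the abstract: blocks whose mean absolute difference between consecutive grayscale frames exceeds a threshold are flagged as moving. The block size and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def sad_motion_mask(prev_gray: np.ndarray, curr_gray: np.ndarray,
                    block: int = 16, thresh: float = 12.0) -> np.ndarray:
    """Return a boolean grid marking blocks whose per-pixel SAD between
    consecutive grayscale frames exceeds `thresh` on average."""
    h, w = prev_gray.shape
    gh, gw = h // block, w // block
    diff = np.abs(curr_gray.astype(np.float32) - prev_gray.astype(np.float32))
    # Sum absolute differences over each block, then normalize by block area.
    blocks = diff[:gh * block, :gw * block].reshape(gh, block, gw, block)
    sad = blocks.sum(axis=(1, 3)) / (block * block)
    return sad > thresh
```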

PC networked parallel processing system for figures and letters

  • Kitazawa, M.; Sakai, Y.
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings / 1993.10b / pp.277-282 / 1993
  • In understanding concepts, there are two aspects: image and language. The point discussed in this paper is what is fundamental in finding proper relations between objects in a scene, so that the meaning of the whole scene can be represented properly through experience in image and language. It is assumed that one of the objects in a scene has letters as objects inside its contour. Because the present system can deal with both figures and letters in a scene, this assumption makes it easy for the system to infer the context of a scene. Several personal computers on a LAN are used, and they process items in parallel.

Towards Size of Scene in Auditory Scene Analysis: A Systematic Review

  • Kwak, Chanbeom; Han, Woojae
    • Journal of Audiology & Otology / v.24 no.1 / pp.1-9 / 2020
  • Auditory scene analysis is defined as a listener's ability to segregate a meaningful message from meaningless background noise in a listening environment. To gain a better understanding of auditory perception in terms of message integration and segregation ability among concurrent signals, we aimed to systematically review the size of auditory scenes among individuals. A total of seven electronic databases were searched from 2000 to the present with related key terms. Using our inclusion criteria, 4,507 articles were classified according to four sequential steps: identification, screening, eligibility, and inclusion. Following study selection, the quality of the four included articles was evaluated using the CAMARADES checklist. In general, the studies concluded that the size of the auditory scene increased as the number of sound sources increased; however, when the number of sources was five or higher, the listener's auditory scene analysis reached its maximum capability. Unfortunately, the study-quality scores were not very high, and the number of articles available for calculating mean effect size and statistical significance was insufficient to draw significant conclusions. We suggest that further studies use designs and materials that consider realistic listening environments, to deepen understanding of the nature of auditory scene analysis in various groups.

A Study on 3D Game Image Creation Methods (3D 게임영상 작성법에 관한 연구)

  • Lee, Dong-Lyeor
    • Journal of Game and Entertainment / v.1 no.1 / pp.31-38 / 2005
  • Among the concepts used in game development, a correct production workflow and an understanding of image production are what allow a better game to be made. This study centers on understanding 3D game image production, with attention to images that support good control and play of the actual game. The cut scenes inserted at openings and at event moments are produced in this way. Although a 3D game image used as a special-effects image differs from a theatrical film, it is a graphic element that must be considered during game production, and handling it correctly gives the player a reason to become immersed in the game.

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk; Choe, Yun-Geun; Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.92-100 / 2012
  • A map of a complex environment can be generated using a robot carrying sensors. However, representing the environment directly from integrated sensor data conveys only spatial existence. In order to execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects in an urban environment, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on-the-fly as it is being acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm is efficient at quickly understanding the semantic knowledge of a dynamic urban environment.
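
Of the five steps listed, ground detection and elimination is the simplest to sketch. The snippet below uses a RANSAC plane fit, which is a common choice for this step but not necessarily the authors' own simple method; the iteration count and distance threshold are placeholders.

```python
import numpy as np

def ransac_ground_plane(points: np.ndarray, iters: int = 200,
                        dist_thresh: float = 0.15, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC and
    return (non_ground_points, ground_inlier_mask)."""
    rng = rng or np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[~best_mask], best_mask
```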

Detection of Video Scene Boundaries based on the Local and Global Context Information (지역 컨텍스트 및 전역 컨텍스트 정보를 이용한 비디오 장면 경계 검출)

  • Kang, Hang-Bong
    • Journal of KIISE: Computing Practices and Letters / v.8 no.6 / pp.778-786 / 2002
  • Scene boundary detection is important for understanding the semantic structure of video data. However, it is more difficult than shot change detection because it requires a good understanding of the semantics in the video. In this paper, we propose a new approach to scene segmentation using contextual information in video data. The contextual information is divided into two categories: local and global. Local contextual information refers to the foreground regions, the background, and shot activity. Global contextual information refers to a video shot's environment or its relationship with other shots; the coherence, interaction, and tempo of video shots are computed as global contextual information. Using the proposed contextual information, we detect scene boundaries in three consecutive steps: linking, verification, and adjusting. We evaluated the proposed approach on TV dramas and movies, and the detection accuracy of scene boundaries is over 80%.
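
The shot coherence used as global contextual information can be sketched as histogram intersection between key frames of neighboring shots, with a greedy linking pass that starts a new scene when coherence drops below a threshold. This is an illustrative simplification of the linking step, not the paper's exact formulation; the key-frame inputs and the threshold are assumptions.

```python
import numpy as np

def color_histogram(frame_rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized joint RGB histogram of a shot's key frame."""
    hist, _ = np.histogramdd(frame_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def link_shots_into_scenes(key_frames, coherence_thresh: float = 0.5):
    """Greedy linking: start a new scene whenever a shot's histogram
    intersection with the previous shot's key frame drops below the threshold."""
    hists = [color_histogram(f) for f in key_frames]
    scene_ids = [0]
    for prev, curr in zip(hists, hists[1:]):
        coherence = np.minimum(prev, curr).sum()  # histogram intersection in [0, 1]
        scene_ids.append(scene_ids[-1] + (coherence < coherence_thresh))
    return scene_ids
```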