• Title/Abstract/Keywords: scene understanding

Search results: 108 (processing time: 0.02 s)

Object's orientation and motion for scene understanding

  • Sakai, Y.;Kitazawa, M.;Okuno, Y.
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1993년도 한국자동제어학술회의논문집(국제학술편); Seoul National University, Seoul; 20-22 Oct. 1993 / pp.271-276 / 1993
  • This paper presents a methodology for understanding scenes that contain moving objects, within the framework of the notion of concepts. First, understanding an object that is an element of a scene by conceptualizing it is described. Then, how to determine the direction in which that object is heading is discussed. Finally, the proposed methodology for conceptually understanding the motion of an object is described, making use of this knowledge of direction.
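
One question the abstract raises is how to determine the direction in which an object is heading. As a minimal editor's sketch (not the paper's method), the heading can be estimated from the displacement of the object's centroid between two frames; the function name and example values below are illustrative assumptions.

```python
import numpy as np

def heading_direction(centroid_prev, centroid_curr):
    """Estimate the heading of a moving object from two successive
    centroid positions (x, y). Returns the angle in degrees, measured
    counter-clockwise from the positive x-axis, or None if the object
    did not move."""
    dx, dy = np.subtract(centroid_curr, centroid_prev)
    if dx == 0 and dy == 0:
        return None  # stationary object: no heading
    return float(np.degrees(np.arctan2(dy, dx)))

# Example: an object moving up and to the right heads at roughly 45 degrees.
print(heading_direction((10.0, 5.0), (13.0, 8.0)))
```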

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo;Park, Sungjae;Ha, Sooji;Kim, Hyoungkwan
    • 국제학술발표논문집 / The 6th International Conference on Construction Engineering and Project Management / pp.333-335 / 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. The mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices in that the information extracted from images could capture the environmental context of a site.
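
As an editor's sketch of the mean shift segmentation step mentioned above (not the authors' implementation), OpenCV's pyrMeanShiftFiltering can produce the roughly homogeneous regions that a later classifier would label as building, ground, or hole; the file name, window radii, and cluster count below are assumed values.

```python
import cv2
import numpy as np

# Load a construction-site image (the path is a placeholder).
image = cv2.imread("site.jpg")
assert image is not None, "replace 'site.jpg' with a real image path"

# Mean shift filtering smooths the image into roughly homogeneous color
# regions; 21 is the spatial window radius, 30 the color window radius.
# Both values are illustrative, not tuned for the paper's data.
filtered = cv2.pyrMeanShiftFiltering(image, 21, 30)

# Group the filtered pixels into a small number of regions by quantizing
# their colors; each region label would then be classified separately
# (e.g., as building, ground, or hole).
pixels = filtered.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, _ = cv2.kmeans(pixels, 3, None, criteria, 5,
                          cv2.KMEANS_RANDOM_CENTERS)
segments = labels.reshape(image.shape[:2])
print(segments.shape, np.unique(segments))
```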

무인 자동차의 주변 환경 인식을 위한 도시 환경에서의 그래프 기반 물체 분할 방법 (Graph-based Segmentation for Scene Understanding of an Autonomous Vehicle in Urban Environments)

  • 서보길;최윤근;노현철;정명진
    • 로봇학회논문지 / Vol. 9, No. 1 / pp.1-10 / 2014
  • In recent years, 3D mapping of urban environments by mobile robots equipped with multiple sensors has been studied actively as a way for a robot to recognize its surroundings. However, a map generated by simply integrating multi-sensor data gives the robot only spatial information. To obtain semantic knowledge that can help an autonomous mobile robot, the robot has to convert such low-level map representations into higher-level ones containing semantic knowledge of the scene. Given a 3D point cloud of an urban scene, this research proposes a method for recognizing objects effectively using a 3D graph model for autonomous mobile robots. The proposed method consists of three steps: sequential range data acquisition, normal vector estimation, and incremental graph-based segmentation. The method provides both real-time performance and accuracy in recognizing objects in real urban environments, and it yields rich data for classifying the objects. To evaluate the performance of the proposed method, computation time and object recognition rate are analyzed. Experimental results show that the proposed method is efficient at extracting semantic knowledge of an urban environment.
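
A minimal editor's sketch of the normal vector estimation step named above (not the authors' code): surface normals of a 3D point cloud can be estimated by PCA over each point's k nearest neighbours, after which points with similar normals could be merged by a graph-based segmentation. The neighbourhood size k and the toy data are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate a unit surface normal for every point of an (N, 3)
    point cloud as the eigenvector with the smallest eigenvalue of
    the covariance of its k nearest neighbours (PCA)."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]  # direction of smallest variance
    return normals

# Toy usage: normals of a nearly flat patch should point roughly along +/- z.
pts = np.random.rand(500, 3) * [10, 10, 0.01]
print(estimate_normals(pts)[:3])
```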

Object Motion Analysis and Interpretation in Video

  • Song, Dan;Cho, Mi-Young;Kim, Pan-Koo
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2004년도 가을 학술발표논문집 Vol.31 No.2 (2) / pp.694-696 / 2004
  • As video capabilities have become more sophisticated, object motion analysis and interpretation has become a fundamental task for computer vision understanding. For that purpose, we first apply a sum-of-absolute-differences algorithm to scene-based motion detection. We then focus on representing the moving objects in the scene using spatio-temporal relations. The video can thus be described comprehensively from both aspects: the relations between moving objects and the intervals of video events.
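
The sum-of-absolute-differences (SAD) motion detection referred to above can be illustrated, as an editor's sketch, by comparing consecutive grayscale frames block by block; the block size and threshold are assumed values, not taken from the paper.

```python
import numpy as np

def sad_motion_blocks(prev_frame, curr_frame, block=16, threshold=500):
    """Return a boolean grid marking blocks whose sum of absolute
    differences between two grayscale frames exceeds a threshold,
    i.e. blocks in which motion is detected."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    h, w = diff.shape
    rows, cols = h // block, w // block
    motion = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            sad = diff[r*block:(r+1)*block, c*block:(c+1)*block].sum()
            motion[r, c] = sad > threshold
    return motion

# Toy usage with two synthetic 64x64 frames.
f0 = np.zeros((64, 64), dtype=np.uint8)
f1 = f0.copy()
f1[10:30, 10:30] = 200  # a bright "object" appears in the second frame
print(sad_motion_blocks(f0, f1))
```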

PC networked parallel processing system for figures and letters

  • Kitazawa, M.;Sakai, Y.
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1993년도 한국자동제어학술회의논문집(국제학술편); Seoul National University, Seoul; 20-22 Oct. 1993 / pp.277-282 / 1993
  • In understanding concepts, there are two aspects: image and language. This paper discusses what is fundamental to finding proper relations between the objects in a scene so that the meaning of the whole scene can be represented properly through experience in both image and language. It is assumed that one of the objects in a scene contains letters as objects inside its contour. Because the present system can handle both figures and letters in a scene, this assumption makes it easy for the system to infer the context of a scene. Several personal computers on a LAN are used, and they process items in parallel.
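
The paper distributes figure and letter items across several PCs on a LAN; purely as an editor's analogue on a single machine (not the authors' system), the same idea of processing items in parallel can be sketched with Python's multiprocessing module. The process_item function and the item list are hypothetical placeholders.

```python
from multiprocessing import Pool

def process_item(item):
    """Placeholder for per-item work, such as recognizing a figure or
    the letters inside its contour."""
    kind, payload = item
    return f"processed {kind}: {payload}"

if __name__ == "__main__":
    items = [("figure", "circle"), ("letters", "STOP"), ("figure", "arrow")]
    with Pool(processes=3) as pool:  # one worker process per item here
        for result in pool.map(process_item, items):
            print(result)
```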

Towards Size of Scene in Auditory Scene Analysis: A Systematic Review

  • Kwak, Chanbeom;Han, Woojae
    • Journal of Audiology & Otology / Vol. 24, No. 1 / pp.1-9 / 2020
  • Auditory scene analysis is defined as a listener's ability to segregate a meaningful message from meaningless background noise in a listening environment. To better understand auditory perception in terms of the ability to integrate and segregate messages among concurrent signals, we aimed to systematically review the size of the auditory scene among individuals. A total of seven electronic databases were searched from 2000 to the present using related key terms. Applying our inclusion criteria, 4,507 articles were classified through four sequential steps: identification, screening, eligibility, and inclusion. Following study selection, the quality of the four included articles was evaluated using the CAMARADES checklist. In general, the studies concluded that the size of the auditory scene increased as the number of sound sources increased; however, when the number of sources was five or higher, the listener's auditory scene analysis reached its maximum capability. Unfortunately, the study-quality scores were not very high, and the number of articles available for calculating mean effect size and statistical significance was insufficient to draw firm conclusions. We suggest that further studies use designs and materials that reflect realistic listening environments to gain a deeper understanding of the nature of auditory scene analysis in various groups.

3D 게임영상 작성법에 관한 연구 (Research about a game image 3D versification)

  • 이동열
    • 게임&엔터테인먼트 논문지 / Vol. 1, No. 1 / pp.31-38 / 2005
  • Among the many processes involved in game development, an accurate grasp of the production workflow and an understanding of production itself are expected to lead to more accurate game production. This study focuses on understanding the exact production pipeline for the video footage that motivates a game, and on understanding 3D game image production. In actual game products, footage such as the opening movie shown when the game is launched and the cut scenes inserted at events is generated in this way. Although distinct from the game itself, the use of 3D game imagery in special-effects footage for theatrical films is a graphical aspect that should be considered when producing a game. This is expected to give game players a clearer motivation to become immersed in the game.

무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법 (Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners)

  • 안승욱;최윤근;정명진
    • 로봇학회논문지 / Vol. 7, No. 2 / pp.92-100 / 2012
  • A map of a complex environment can be generated using a robot carrying sensors. However, an environment representation obtained by directly integrating sensor data expresses only spatial occupancy. In order to execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system consists of five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in an urban environment, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on the fly as it is being acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm is efficient at quickly extracting semantic knowledge of a dynamic urban environment.
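
One of the five steps listed above is ground detection and elimination. As a crude editor's sketch (not the paper's method), ground points can be dropped with a simple height threshold relative to the lowest point in the scan; the threshold value and the toy data are assumptions.

```python
import numpy as np

def remove_ground(points, ground_height=0.3):
    """Split an (N, 3) point cloud (x, y, z with z up, in metres) into
    non-ground and ground points using a height threshold measured
    from the lowest point in the scan."""
    z0 = points[:, 2].min()
    is_ground = points[:, 2] < z0 + ground_height
    return points[~is_ground], points[is_ground]

# Toy usage: a flat patch of ground plus a "post" sticking up out of it.
ground = np.c_[np.random.rand(200, 2) * 10, np.random.rand(200) * 0.05]
post = np.c_[np.full((20, 2), 5.0), np.linspace(0.5, 2.5, 20)]
objects, ground_pts = remove_ground(np.vstack([ground, post]))
print(objects.shape, ground_pts.shape)  # the post survives, the ground is removed
```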

지역 컨텍스트 및 전역 컨텍스트 정보를 이용한 비디오 장면 경계 검출 (Detection of Video Scene Boundaries based on the Local and Global Context Information)

  • 강행봉
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 8, No. 6 / pp.778-786 / 2002
  • Scene boundary detection plays a very important role in understanding the semantic structure of video data. However, it is considerably more difficult than shot boundary detection because a scene must be extracted as a semantically coherent unit. In this paper, in order to exploit the semantic information present in video data, we propose a method that extracts local and global context information from video shots and detects scene boundaries based on it. The local context information of a video shot is the context contained in the shot itself, defined in terms of the foreground objects, the background, and motion information. The global context information is the various kinds of context arising from the relationships between a given shot and its neighboring shots, defined in terms of inter-shot similarity, interaction, and shot-duration patterns. Based on this context information, scenes are detected through a three-step process of linking, link verification, and adjustment. Applying the proposed method to TV dramas and movies yielded a detection accuracy of over 80%.
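
One ingredient of the global context described above is the similarity between neighbouring shots. As an editor's stand-in (not the paper's exact measure), the sketch below links two shots into the same scene when the histogram intersection of their key frames exceeds a threshold; the bin count and threshold are assumed values.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Normalized per-channel color histogram of an (H, W, 3) uint8 frame."""
    hist = [np.histogram(frame[..., ch], bins=bins, range=(0, 256))[0]
            for ch in range(3)]
    hist = np.concatenate(hist).astype(np.float64)
    return hist / hist.sum()

def shots_linked(keyframe_a, keyframe_b, threshold=0.7):
    """Link two shots into the same scene when the histogram intersection
    of their key frames exceeds the threshold."""
    ha, hb = color_histogram(keyframe_a), color_histogram(keyframe_b)
    return np.minimum(ha, hb).sum() > threshold

# Toy usage: two similar dark frames link; a dark and a bright frame do not.
a = np.full((90, 160, 3), 40, dtype=np.uint8)
b = np.full((90, 160, 3), 45, dtype=np.uint8)
c = np.full((90, 160, 3), 220, dtype=np.uint8)
print(shots_linked(a, b), shots_linked(a, c))  # True False
```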