• Title/Summary/Keyword: Scene labeling


Development of a rotation angle estimation algorithm of HMD using feature points extraction (특징점 추출을 통한 HMD 회전각측정 알고리즘 개발)

  • Ro, Young-Shick;Kim, Chul-Hee;Yun, Won-Jun;Yoon, Yoo-Kyoung
    • Proceedings of the IEEK Conference / 2009.05a / pp.360-362 / 2009
  • In this paper, we study the real-time azimuthal measurement of an HMD (Head Mounted Display) using feature point detection to control a tele-operated vision system on a mobile robot. To give a sense of presence to the tele-operator, we use an HMD to display the remote scene, measure the rotation angles of the HMD in real time, and transmit the measured angles to the mobile robot controller to synchronize the pan-tilt angles of the remote camera with the HMD. We propose an algorithm for real-time estimation of the HMD rotation angles using feature points extracted from a PC camera image.
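
A minimal sketch of the rotation-estimation idea described above, assuming matched feature points between two frames are already available (the point data and function name below are hypothetical illustrations, not the paper's implementation):

```python
import math

def estimate_rotation_angle(pts_prev, pts_curr):
    """Estimate the in-plane rotation angle (degrees) between two sets of
    matched feature points with a 2-D least-squares (Kabsch-style) fit."""
    n = len(pts_prev)
    # Centre both point sets so translation drops out.
    cx_p = sum(x for x, _ in pts_prev) / n
    cy_p = sum(y for _, y in pts_prev) / n
    cx_c = sum(x for x, _ in pts_curr) / n
    cy_c = sum(y for _, y in pts_curr) / n
    # Accumulate cross-covariance terms; the optimal angle is atan2(b, a).
    a = b = 0.0
    for (xp, yp), (xc, yc) in zip(pts_prev, pts_curr):
        xp, yp = xp - cx_p, yp - cy_p
        xc, yc = xc - cx_c, yc - cy_c
        a += xp * xc + yp * yc
        b += xp * yc - yp * xc
    return math.degrees(math.atan2(b, a))

# Synthetic check: points rotated by 30 degrees about the origin.
theta = math.radians(30)
prev_pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
curr_pts = [(x * math.cos(theta) - y * math.sin(theta),
             x * math.sin(theta) + y * math.cos(theta)) for x, y in prev_pts]
angle = estimate_rotation_angle(prev_pts, curr_pts)
```

In practice the matched points would come from a feature detector run on consecutive PC-camera frames; the least-squares fit makes the angle estimate robust to small per-point noise.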


Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System (적응형 헤드 램프 컨트롤을 위한 야간 차량 인식)

  • Kim, Hyun-Koo;Jung, Ho-Youl;Park, Ju H.
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.1 / pp.8-15 / 2011
  • This paper presents an effective method for detecting vehicles in front of a camera-assisted car during nighttime driving. The proposed method detects vehicles by finding vehicle headlights and taillights using image segmentation and clustering techniques. First, to effectively extract the spotlights of interest, a pre-processing step based on a camera lens filter and a labeling method is applied to road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses a light-tracking method and locates vehicle lighting patterns. For evaluation, the method was implemented on a Da-vinci 7437 DSP board with a visible-light mono-camera and tested on urban and rural roads. In these tests, classification performance was above 89% precision and 94% recall, evaluated in a real-time environment.
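
The spotlight-extraction and labeling step can be sketched as a simple threshold-and-label pass; this is an illustrative stand-in (pure Python, hypothetical pixel values), not the paper's DSP implementation:

```python
from collections import deque

def label_bright_spots(img, thresh):
    """Threshold a grayscale image and label 4-connected bright blobs,
    returning one (min_row, min_col, max_row, max_col) box per blob."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if img[r][c] >= thresh and not seen[r][c]:
                # BFS flood fill to collect one connected blob.
                q = deque([(r, c)])
                seen[r][c] = True
                rs, cs = [], []
                while q:
                    y, x = q.popleft()
                    rs.append(y)
                    cs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and img[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((min(rs), min(cs), max(rs), max(cs)))
    return boxes

# Two bright lamps on a dark 5x8 frame.
frame = [
    [0,   0,   0, 0, 0, 0,   0, 0],
    [0, 200, 210, 0, 0, 0, 220, 0],
    [0, 205,   0, 0, 0, 0, 230, 0],
    [0,   0,   0, 0, 0, 0,   0, 0],
    [0,   0,   0, 0, 0, 0,   0, 0],
]
spots = label_bright_spots(frame, 180)
```

Each resulting box would then feed the grouping stage, which pairs lamps into vehicle lighting patterns.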

A Real-time Motion Object Detection based on Neighbor Foreground Pixel Propagation Algorithm (주변 전경 픽셀 전파 알고리즘 기반 실시간 이동 객체 검출)

  • Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.9-16 / 2010
  • Moving object detection identifies foreground objects that differ from the background scene in each incoming image frame and is an essential ingredient in image processing applications such as intelligent visual surveillance, HCI, and object-based video compression. Most previous object detection algorithms are computationally heavy, making it difficult to achieve real-time multi-channel moving object detection on a workstation, or even one-channel real-time detection on an embedded system. Foreground mask correction, necessary for more precise object detection, is usually accomplished with morphological operations such as opening and closing. Morphological operations are not computationally cheap and, moreover, are difficult to run simultaneously with the subsequent connected component labeling routine, since they require quite a different type of processing. In this paper, we first devise a fast and precise foreground mask correction algorithm, "Neighbor Foreground Pixel Propagation" (NFPP), which utilizes the neighbor pixel checking employed in connected component labeling. Next, we propose a novel moving object detection method based on NFPP, in which the connected component labeling routine can be executed simultaneously with the foreground mask correction. Experiments verify that the proposed method achieves more precise object detection and more than 4 times faster processing per frame on the test videos than the previous moving object detection method using morphological operations.
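
A toy sketch of the neighbor-checking idea behind mask correction (this simplified hole-filling rule is an assumption for illustration, not the authors' exact NFPP algorithm):

```python
def correct_mask_by_neighbor_propagation(mask, min_neighbors=5):
    """Fill small holes in a binary foreground mask by promoting any
    background pixel whose 8-neighborhood contains at least
    `min_neighbors` foreground pixels. The neighbor checks are the same
    kind used by connected component labeling, which is what lets the
    two passes share work (the point of NFPP)."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if mask[r][c] == 0:
                fg = sum(mask[r + dy][c + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
                if fg >= min_neighbors:
                    out[r][c] = 1
    return out

# A foreground ring with a one-pixel hole in the middle.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
fixed = correct_mask_by_neighbor_propagation(mask)
```

The hole at the center is filled while isolated background stays untouched, mimicking what morphological closing would achieve but using only per-pixel neighbor counts.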

A Study of Null Instantiated Frame Element Resolution for Construction of Dialog-Level FrameNet (대화 수준 FrameNet 구축을 위한 생략된 프레임 논항 복원 연구)

  • Noh, Youngbin;Heo, Cheolhun;Hahm, Younggyun;Jeong, Yoosung;Choi, Key-Sun
    • Annual Conference on Human and Language Technology / 2020.10a / pp.227-232 / 2020
  • This paper describes the process and results of annotating FrameNet, a Semantic Role Labeling resource, on drama scripts, a semi-spoken corpus. We extend the annotation scope of the frame and frame-element structure from a single sentence to scene-level script units consisting of multiple utterances, restoring frame elements omitted within a sentence (null-instantiated frame elements) from other utterances in the scene-level script. We applied an automatic frame analyzer to the Korean and English scripts of the same drama; selected annotators then performed target-lexeme suitability evaluation, frame suitability evaluation, and null-instantiated frame element restoration, and we present comparisons and examples of the automatically annotated scripts versus the annotator-revised scripts. In the target-lexeme suitability evaluation of the 2,641 automatic annotations in total (1,200 Korean, 1,461 English), the annotators removed 190 unsuitable target lexemes in Korean (15.83%) and 226 in English (15.47%). In the frame suitability evaluation, the suitability of the automatically assigned frames was assessed, and new frames were assigned to 622 Korean lexemes (61.68%) and 473 English lexemes (38.22%). After restoring null-instantiated frame elements, the average number of annotated frame elements increased from 0.780 to 2.519 for Korean and from 1.290 to 2.253 for English.


Identification of 5-Jung-color and 5-Kan-color In Video (비디오에서 오정색과 오간색 식별)

  • Shin, Seong-Yoon;Pyo, Seong-Bae
    • Journal of the Korea Society of Computer and Information / v.15 no.1 / pp.103-109 / 2010
  • Color has been used as a formative language since the beginning of human activity, and it is present in everything the human eye can see. In this paper, we identify Korean traditional color harmony in key frames extracted by scene change detection. Traditional colors are classified into 5-Jung-colors and 5-Kan-colors, and we determine whether they harmonize. Red, blue, yellow, black, and white are identified as the 5-Jung-colors, and pink, blue, purple, sulfur, and green as the 5-Kan-colors. First, we extract edges using the Canny algorithm. Then, we label and cluster the colors around the edges. Finally, we identify the traditional colors using an identification method for traditional color harmony. The proposed approach is validated through experiments.
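
The color-identification step can be illustrated as a nearest-palette classification; the RGB values assigned to each traditional color below are hypothetical placeholders, not the paper's calibrated definitions:

```python
def classify_jung_color(rgb):
    """Assign an RGB triple to the nearest of the five cardinal
    (5-Jung) colors by squared Euclidean distance in RGB space.
    Palette values here are illustrative assumptions."""
    palette = {
        "red":    (255, 0, 0),
        "blue":   (0, 0, 255),
        "yellow": (255, 255, 0),
        "black":  (0, 0, 0),
        "white":  (255, 255, 255),
    }

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(palette, key=lambda name: dist2(rgb, palette[name]))

# A reddish cluster color maps to the cardinal red.
label = classify_jung_color((250, 10, 20))
```

In the paper's pipeline such a classifier would run on the clustered colors found around Canny edges, before the harmony rules are applied.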

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.3 / pp.847-852 / 2010
  • Text in natural images is a varied and important image feature; therefore, detecting, extracting, and recognizing text is studied as an important research area. Recently, many applications in various fields have been developed based on mobile phone camera technology. We detect edge components from the gray-scale image, locate the boundaries of text regions by local standard deviation, and obtain connected components using Euclidean distance in RGB color space. The detected edges and connected components are labeled, and a bounding box is obtained for each region. Candidate text regions are selected with heuristic rules for text. Detected candidate text regions are merged to generate single candidate text regions, and text regions are then detected by verifying the candidates using similarity of adjacency and similarity between candidate text regions. Experimental results show an improved text region detection rate achieved by the complementary use of edge and color connected components.
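
The color-quantization step used before grouping pixels into connected components can be sketched as uniform per-channel bucketing (the bucket count and values below are illustrative assumptions, not the paper's parameters):

```python
def quantize_color(rgb, levels=4):
    """Uniformly quantize each RGB channel into `levels` buckets and
    return the bucket's center value, so visually similar pixels
    collapse onto the same quantized color before connected-component
    grouping."""
    step = 256 // levels
    return tuple(min((c // step) * step + step // 2, 255) for c in rgb)

# Two visually similar reds fall into the same quantized bucket.
a = quantize_color((200, 30, 40))
b = quantize_color((210, 50, 60))
```

Reducing the palette this way makes the subsequent Euclidean-distance grouping in RGB space far cheaper and more stable against sensor noise.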

Development of an Emergency Situation Detection Algorithm Using a Vehicle Dash Cam (차량 단말기 기반 돌발상황 검지 알고리즘 개발)

  • Sanghyun Lee;Jinyoung Kim;Jongmin Noh;Hwanpil Lee;Soomok Lee;Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.4 / pp.97-113 / 2023
  • Swift and appropriate responses to emergency situations, such as objects falling on the road, bring convenience to road users and effectively reduce secondary traffic accidents. In Korea, current intelligent transportation system (ITS)-based detection systems for emergency road situations rely mainly on loop detectors and CCTV cameras, which only capture road data within the detection range of the equipment. Therefore, a new detection method is needed to identify emergency situations in spatially shaded areas that existing ITS detection systems cannot reach. In this study, we propose a ResNet-based algorithm that detects and classifies emergency situations from vehicle camera footage. We collected front-view driving videos recorded on Korean highways, labeled each video by defining the type of emergency, and trained the proposed algorithm on the data.
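
The defining component of a ResNet-style classifier is the residual skip connection; below is a toy 1-D stand-in (with hypothetical weights) for the convolutional residual blocks such an image classifier stacks, not the study's actual network:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Minimal ResNet-style residual block: two linear transforms with a
    ReLU in between, plus the identity skip connection added before the
    final activation. Real ResNets use convolutions instead of these
    toy matrix multiplies."""
    relu = lambda v: np.maximum(v, 0.0)
    return relu(x + w2 @ relu(w1 @ x))

# Tiny worked example with identity-like weights.
x = np.array([1.0, -2.0, 0.5])
w1 = np.eye(3)
w2 = np.eye(3) * 0.5
y = residual_block(x, w1, w2)
```

The skip connection (`x + ...`) is what lets gradients flow through many stacked blocks, which is why deep ResNets remain trainable for tasks like dash-cam emergency classification.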