• Title/Summary/Keyword: Camera tracking


Recognition of Car License Plates Using Difference Operator and ART2 Algorithm (차 연산과 ART2 알고리즘을 이용한 차량 번호판 통합 인식)

  • Kim, Kwang-Baek; Kim, Seong-Hoon; Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.11 / pp.2277-2282 / 2009
  • In this paper, we propose a license plate recognition method that combines morphological features, difference operators, and the ART2 algorithm. First, edges are extracted from a camera-acquired vehicle image using difference operators, and the edge image is binarized with a block binarization method. To extract the license plate area, noise regions are removed by applying the morphological features of both new and existing plate types to an 8-directional edge tracking algorithm in the binarized image. Mean binarization and mini-max binarization are then applied to the extracted plate area to remove noise based on the morphological features of the individual plate elements, and each character is extracted and grouped using a labeling algorithm. The extracted characters (letters and numbers) are recognized after training with the ART2 algorithm. To evaluate extraction and recognition performance, 200 license plate images (100 green-type and 100 white-type) were used in experiments, and the results show that the proposed method is effective.
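The first two steps of the pipeline above (difference-operator edge extraction and mean binarization) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the block binarization, 8-directional tracking, and ART2 stages are omitted.

```python
import numpy as np

def difference_edges(gray):
    """Approximate edge magnitude with horizontal/vertical difference operators."""
    g = gray.astype(int)
    dx = np.abs(np.diff(g, axis=1))   # horizontal neighbour differences
    dy = np.abs(np.diff(g, axis=0))   # vertical neighbour differences
    edges = np.zeros_like(g)
    edges[:, :-1] += dx
    edges[:-1, :] += dy
    return edges

def mean_binarize(img):
    """Binarize using the image mean as threshold (one of the abstract's methods)."""
    return (img > img.mean()).astype(np.uint8)

# toy example: a bright vertical bar on a dark background
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 2:4] = 200
edges = difference_edges(img)     # strong responses at the bar's borders
binary = mean_binarize(img)       # the bar survives thresholding
```

The difference operator responds only where adjacent pixel values change, which is why plate borders and character strokes stand out in the edge image.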

Hand-Gesture Recognition Using Concentric-Circle Expanding and Tracing Algorithm (동심원 확장 및 추적 알고리즘을 이용한 손동작 인식)

  • Hwang, Dong-Hyun; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.3 / pp.636-642 / 2017
  • In this paper, we propose a hand-gesture recognition algorithm based on concentric-circle expansion and tracing. The algorithm determines the region of interest in a web-camera image through preprocessing and extracts hand-gesture features such as the number of stretched fingers, the fingertips and finger bases, and the angles between fingers, which can serve as an intuitive method for human-computer interaction. By referencing only the pixels on the concentric circles, the algorithm also reduces computational complexity compared with a raster-scan method. Experimental results show that nine hand gestures are recognized with an average accuracy of 90.7% and an average execution time of 78 ms. The algorithm is confirmed as a feasible input method for virtual reality, augmented reality, mixed reality, and perceptual human-computer interfaces.
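The core trick described above, sampling only the pixels on a circle around the palm and counting foreground runs, can be sketched as follows. This is an assumed reconstruction from the abstract: counting 0-to-1 transitions along one circle gives the number of fingers that circle crosses.

```python
import numpy as np

def crossings_on_circle(mask, center, radius, samples=360):
    """Count foreground runs crossed by a circle around `center` in a binary mask.

    Sampling only circle pixels (instead of raster-scanning the whole image)
    is the complexity reduction the abstract describes."""
    cy, cx = center
    angles = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    vals = mask[ys, xs]
    # number of 0 -> 1 transitions along the circle = number of limbs crossed
    return int(np.sum((vals == 1) & (np.roll(vals, 1) == 0)))

# toy mask: a vertical bar through the center crosses the circle twice
mask = np.zeros((21, 21), dtype=np.uint8)
mask[:, 9:12] = 1
n = crossings_on_circle(mask, (10, 10), 6)
```

Expanding the radius and repeating the count at each step is what distinguishes fingertips (runs that disappear as the circle grows) from the wrist.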

Development of a Forest Fire Tracking and GIS Mapping Base on Live Streaming (실시간 영상 기반 산불 추적 및 매핑기법 개발)

  • Cho, In-Je; Kim, Gyou-Beom; Park, Beom-Sun
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.123-127 / 2020
  • To obtain overall fire-line information for medium and large forest fires at night, a ground control system was developed that determines whether a forest fire has occurred from real-time video and plots the fire location on a map using the drone's position, the camera's angle information, and altitude data, reducing the time otherwise spent matching recorded video after a mission is completed. To verify the reliability of the developed function, the error distance of the camera's aiming position was measured at each flight altitude, and location information within the reliable range was displayed on the map. Because the developed function allows multiple forest-fire locations to be identified in real time, it is expected that overall fire-line information for planning forest fire suppression measures can be obtained more quickly.
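The geometry behind mapping a fire from drone position, camera angle, and altitude can be illustrated with a flat-ground projection. This is a simplified stand-in for the paper's method: the names and the flat-terrain assumption are mine, whereas the paper uses altitude information on the map.

```python
import math

def ground_target(drone_north_m, drone_east_m, altitude_m, heading_deg, tilt_deg):
    """Project the camera aim point onto flat ground (local metric coordinates).

    tilt is measured down from horizontal; heading is clockwise from north.
    Flat terrain stands in for the map altitude data the paper uses."""
    if not 0 < tilt_deg <= 90:
        raise ValueError("camera must point below the horizon")
    # horizontal distance from the drone to where the optical axis meets the ground
    ground_range = altitude_m / math.tan(math.radians(tilt_deg))
    north = ground_range * math.cos(math.radians(heading_deg))
    east = ground_range * math.sin(math.radians(heading_deg))
    return drone_north_m + north, drone_east_m + east

# drone at the local origin, 100 m up, camera facing due east, tilted 45 degrees down
n, e = ground_target(0.0, 0.0, 100.0, 90.0, 45.0)
```

At a 45-degree tilt the ground range equals the altitude, which makes the aiming-error growth with flight altitude, measured in the paper's validation, easy to see.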

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon; Kim, Se-jin; Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.36-54 / 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. The system combines the characteristics of each sensor and is more robust to environmental change than a video detector alone; it is unaffected by day and night conditions and has lower maintenance cost than an inductive-loop traffic detector. This is accomplished by fusing radar-based object tracking information, video-based vehicle classification information, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, experiments were conducted for 6 hours over 5 days, during daytime and early evening, on a pedestrian-accessible road. According to the experimental results, the system achieves 88.7% classification accuracy and a 95.5% vehicle detection rate. If the system's parameters are optimized to adapt to changing experimental environments, it is expected to contribute to the advancement of ITS.

Auto-guiding Performance from IGRINS Test Observations (Immersion GRating INfrared Spectrograph)

  • Lee, Hye-In; Pak, Soojong; Le, Huynh Anh N.; Kang, Wonseok; Mace, Gregory; Pavel, Michael; Jaffe, Daniel T.; Lee, Jae-Joon; Kim, Hwihyun; Jeong, Ueejeong; Chun, Moo-Young; Park, Chan; Yuk, In-Soo; Kim, Kangmin
    • The Bulletin of The Korean Astronomical Society / v.39 no.2 / pp.92.1-92.1 / 2014
  • In astronomical spectroscopy, stable auto-guiding and accurate target centering are critical for achieving high observation efficiency and sensitivity. We developed instrument control software for the Immersion GRating INfrared Spectrograph (IGRINS), a high-resolution (R = 40,000) near-infrared slit spectrograph. IGRINS is currently installed on the McDonald 2.7 m telescope in Texas, USA, and had successful commissioning observations in March, May, and July of 2014. The role of the IGRINS slit-viewing camera (SVC) is to move the target onto the slit and to provide feedback on tracking offsets for auto-guiding. For a point source, we guide the telescope with the target on the slit; for an extended source, we guide on a star in the field, offset from the slit. Since the slit blocks the center of the point spread function, fitting a Gaussian to guide and center the target on the slit is challenging. We therefore developed several center-finding algorithms: 2D Gaussian fitting, 1D Gaussian fitting, and center balancing. In this presentation, we show the auto-guiding performance achieved with these algorithms.
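The centering problem above, finding the center of a stellar profile whose core is blocked by the slit, can be sketched with a masked flux-weighted centroid. The abstract does not define the "Center Balancing" method, so this is only an assumed illustration of the idea: slit pixels are excluded, and symmetry of the remaining wings recovers the center.

```python
import numpy as np

def balance_center(profile, slit_mask=None):
    """Flux-weighted centroid of a 1D stellar profile, ignoring slit-blocked pixels.

    A sketch of the masked-centering idea; the paper's actual Center Balancing
    algorithm is not specified in the abstract."""
    p = np.asarray(profile, dtype=float)
    if slit_mask is not None:
        p = np.where(slit_mask, 0.0, p)   # drop pixels hidden by the slit
    x = np.arange(p.size)
    return float((x * p).sum() / p.sum())

# Gaussian PSF centered at pixel 10, with the 3 central pixels blocked by the slit
x = np.arange(21)
profile = np.exp(-((x - 10.0) ** 2) / 8.0)
slit = (x >= 9) & (x <= 11)
center = balance_center(profile, slit)
```

Because the wings on either side of a symmetric slit carry equal flux, the masked centroid still lands on the true center; an off-center star breaks that balance, which is exactly the guiding error signal.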


Development of application for guidance and controller unit for low cost and small UAV missile based on smartphone (스마트폰을 활용한 소형 저가 유도탄 유도조종장치용 어플리케이션 개발)

  • Noh, Junghoon; Cho, Kyongkuk; Kim, Seongjun; Kim, Wonsop; Jeong, Jinseob; Sang, Jinwoo; Park, Chung-Woon; Gong, Minsik
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.7 / pp.610-617 / 2017
  • Recent weapon-system trends call for small, low-cost guided missiles that can track and strike enemy targets effectively. Controlling such a small drone-type weapon demands an integrated electronic device equipped not only with a wireless network interface, a high-resolution camera, and various sensors for target tracking and position and attitude control, but also with a high-performance processor that integrates and processes the sensor outputs in real time. In this paper, we propose an Android smartphone as that device and implement the guidance and control application for the missile. The performance of the implemented guidance and control application is then analyzed through simulation.

Development and Validation of a Measurement Technique for Interfacial Velocity in Liquid-gas Separated Flow Using IR-PTV (적외선 입자추적유속계를 이용한 액체-기체 분리유동 시 계면속도 측정기법 개발 및 검증)

  • Kim, Sangeun; Kim, Hyungdae
    • Transactions of the Korean Society of Mechanical Engineers B / v.39 no.7 / pp.549-555 / 2015
  • A technique for measuring the interfacial velocity of air-water separated flow by particle tracking velocimetry with an infrared camera (IR-PTV) was developed. Because infrared light in the 3-5 μm wavelength range barely penetrates water, IR-PTV selectively visualizes only the tracer particles within about 20 μm of the air-water interface. To validate the measurement accuracy of the IR-PTV technique, the interfacial velocity of the air-water separated flow was also measured using Styrofoam particles floating on the water. The interfacial velocities obtained by the two techniques agreed well, with errors below 5%. The experimental results obtained with the developed technique show that the interfacial velocity increases in proportion to the air velocity, likely because of the increased interfacial stress.
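The tracking step of any PTV method reduces to matching particle positions between two frames and dividing displacement by the frame interval. The sketch below uses simple nearest-neighbour matching as a stand-in; it is not the paper's implementation, and real trackers use more robust matching criteria.

```python
import numpy as np

def ptv_velocities(pts_a, pts_b, dt, max_disp):
    """Nearest-neighbour particle matching between two frames.

    Each particle in frame A is paired with the closest particle in frame B
    within max_disp pixels; velocity = displacement / dt."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    vels = []
    for p in pts_a:
        d = np.linalg.norm(pts_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:                 # reject implausible jumps
            vels.append((pts_b[j] - p) / dt)
    return np.array(vels)

# two particles, each drifting 1 unit to the right between frames 0.5 s apart
frame_a = [(0.0, 0.0), (5.0, 5.0)]
frame_b = [(1.0, 0.0), (6.0, 5.0)]
v = ptv_velocities(frame_a, frame_b, dt=0.5, max_disp=2.0)
```

The max_disp gate is what keeps a particle from being matched to a different particle's next position when seeding density is high.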

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi; Choi, Jong-Suk; Yoon, Sang-Suk; Choi, Mun-Taek; Kim, Mun-Sang; Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.7 / pp.737-743 / 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated tasks. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa; human beings likewise depend mainly on visual and auditory information in daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
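Sound source localization with a microphone array is commonly based on the time difference of arrival (TDOA) between microphone pairs. The sketch below shows the textbook far-field formula for a two-microphone pair; it is a generic illustration, not the paper's specific method.

```python
import math

def doa_from_tdoa(tdoa_s, mic_spacing_m, speed_of_sound=343.0):
    """Estimate direction of arrival from the time difference between two mics.

    Far-field assumption: sin(theta) = c * tdoa / d, with theta measured
    from the array broadside."""
    s = speed_of_sound * tdoa_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))     # clamp against measurement noise
    return math.degrees(math.asin(s))

# mics 0.343 m apart; the far mic hears the sound 0.5 ms later
angle = doa_from_tdoa(0.0005, 0.343)
```

Fusing this bearing with a face detection in the camera image, as the paper does, resolves the front-back ambiguity a single microphone pair cannot.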

Object Recognition Face Detection With 3D Imaging Parameters A Research on Measurement Technology (3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구)

  • Choi, Byung-Kwan; Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.53-62 / 2011
  • With the convergence of high-tech IT and the spread of smartphones, video object recognition has evolved from a specialized technology into one available on personal portable terminals, and face detection based on 3D object recognition has developed rapidly on top of intelligent video recognition technology. In this paper, face recognition image-processing technology is applied through an IP camera to locate facial features and identify human faces, and a technique for measuring face detection parameters is proposed. The contributions are: 1) a face-model-based face tracking technique was developed and applied; 2) with the developed PC-based algorithm, the basic parameters of a face can be tracked while measuring the CPU load of human-perception processing; and 3) the distance between the eyes and the gaze angle can be tracked in real time, which proved effective.

Development of A Framework for Robust Extraction of Regions Of Interest (환경 요인에 독립적인 관심 영역 추출을 위한 프레임워크의 개발)

  • Kim, Seong-Hoon; Lee, Kwang-Eui; Heo, Gyeong-Yong
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.49-57 / 2011
  • Extraction of regions of interest (ROIs) is the first and most important step in computer vision applications and affects the rest of the processing pipeline. However, ROI extraction is easily affected by the environment, such as illumination and camera characteristics, so many applications adopt problem-specific knowledge and/or post-processing to correct ROI extraction errors. In this paper, we propose a robust framework that can cope with environmental change and is independent of the rest of the process. The proposed framework uses a differential image and a color distribution to extract ROIs; the color distribution can be learned on-line, which makes the framework robust to environmental change. Moreover, the components of the framework are independent of each other, which makes the framework flexible and extensible. The usefulness of the proposed framework is demonstrated through hand-region extraction from an image sequence.
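The combination described above, a differential image gated by a learned color distribution, can be sketched as follows. The color model here is a hypothetical stand-in (a simple channel threshold); the framework's actual on-line learned distribution is not specified in the abstract.

```python
import numpy as np

def roi_mask(frame, background, color_model, diff_thresh=30):
    """Combine frame differencing with a per-pixel colour probability map.

    color_model: callable mapping an (H, W, 3) image to target probabilities
    in [0, 1]; in the framework this distribution is learned on-line."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    motion = diff > diff_thresh          # differential-image cue
    color = color_model(frame) > 0.5     # colour-distribution cue
    return motion & color                # ROI = both cues agree

# toy scene: a red 2x2 patch appears against an empty background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3, 0] = 200
background = np.zeros_like(frame)
red_model = lambda img: (img[..., 0] > 100).astype(float)  # stand-in colour model
mask = roi_mask(frame, background, red_model)
```

Requiring both cues to agree is what makes the two components separable: either the motion cue or the color model can be swapped out without touching the other.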