• Title/Summary/Keyword: Camera-based Recognition

Recognition and tracking system of moving objects based on artificial neural network and PWM control

  • Sugisaka, M.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1992.10b
    • /
    • pp.573-574
    • /
    • 1992
  • We developed a recognition and tracking system for moving objects. The system consists of one CCD video camera, two DC motors on horizontal and vertical axes with encoders, a pulse width modulation (PWM) driving unit, a 16-bit NEC 9801 microcomputer, and their interfaces. The system recognizes the shape and size of a moving object and tracks the object within a certain range of error. This paper presents a brief introduction to the recognition and tracking system developed in our laboratory.
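As a rough illustration of the PWM control idea described above, here is a minimal sketch in Python (not the authors' code; the gain and duty-cycle interface are assumptions) that maps the pixel error of a tracked object centroid to pan/tilt PWM duty cycles:

```python
# Minimal sketch: proportional tracking that converts the centroid's pixel
# error into PWM duty cycles for the pan (horizontal) and tilt (vertical)
# motors. The gain 'kp' and the -1..1 duty range are illustrative assumptions.

def track_step(cx, cy, frame_w, frame_h, kp=0.8):
    """Return (pan, tilt) PWM duty cycles from an object centroid (cx, cy)."""
    err_x = (cx - frame_w / 2) / (frame_w / 2)   # normalized horizontal error
    err_y = (cy - frame_h / 2) / (frame_h / 2)   # normalized vertical error
    pan = max(-1.0, min(1.0, kp * err_x))        # clamp to the PWM duty range
    tilt = max(-1.0, min(1.0, kp * err_y))
    return pan, tilt

pan, tilt = track_step(cx=412, cy=188, frame_w=640, frame_h=480)
print(f"pan duty {pan:+.2f}, tilt duty {tilt:+.2f}")   # pan +0.23, tilt -0.17
```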

Face Recognition Method Based on Local Binary Pattern using Depth Images (깊이 영상을 이용한 지역 이진 패턴 기반의 얼굴인식 방법)

  • Kwon, Soon Kak;Kim, Heung Jun;Lee, Dong Seok
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.6
    • /
    • pp.39-45
    • /
    • 2017
  • Conventional color-based face recognition methods are sensitive to illumination changes and are open to forgery and falsification, which makes them difficult to apply in various industrial fields. In this paper, we propose a face recognition method based on the local binary pattern (LBP) using depth images to solve this problem. A face detection method using depth information, together with feature extraction and matching methods for face recognition, is implemented; the simulation results show the recognition performance of the proposed method.
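The local binary pattern operator the paper builds on is standard; a minimal NumPy sketch (applying it to a depth map instead of a grayscale image, per the paper's idea) might look like this:

```python
# Minimal sketch: 8-neighbor LBP of a 2-D array, applied here to a depth map.
import numpy as np

def lbp(img):
    """Local binary pattern of a 2-D array (1-pixel border excluded)."""
    c = img[1:-1, 1:-1]                              # center pixels
    out = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # neighbors, clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out |= (nb >= c).astype(np.uint8) << bit     # set bit if neighbor >= center
    return out

depth = np.random.randint(500, 2000, (120, 160), dtype=np.uint16)  # fake depth map
hist = np.bincount(lbp(depth).ravel(), minlength=256)  # 256-bin LBP feature histogram
```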

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • Recognition systems for autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on making obstacles go undetected by lowering the confidence score of the object recognition model, but they have the limitation of working only against the targeted model. In the case of attacks on the sensor fusion stage, errors can cascade into the vision tasks after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the point cloud of the input LiDAR. Attack performance experiments with scaling of various sizes caused fusion errors averaging more than 77%.
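The gist of a scaling perturbation on a point cloud can be sketched in a few lines; the following is a loose illustration under assumed details (scaling about the centroid, a uniform factor), not the paper's actual attack pipeline:

```python
# Minimal sketch: scale the input LiDAR points by a small factor so that a
# calibration model such as LCCNet sees a geometrically inconsistent cloud.
import numpy as np

def scaling_attack(points, scale=1.05):
    """Scale an (N, 3) point cloud about its centroid by 'scale'."""
    centroid = points.mean(axis=0)
    return (points - centroid) * scale + centroid

cloud = np.random.uniform(-20.0, 20.0, (1024, 3))  # stand-in LiDAR frame
attacked = scaling_attack(cloud, scale=1.05)       # 5 % scaling perturbation
print(np.abs(attacked - cloud).max())              # largest point displacement
```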

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2341-2343
    • /
    • 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor floor is used as the navigation guide, and a binary image of the guide stripe captured by a CCD camera is processed. To detect the guideline quickly and exactly, we use a variable thresholding algorithm. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is handled by a controller driven by the guide-line angle error. The method is tested on a typical AGV with a single camera in a laboratory environment.
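The two processing steps named in the abstract, variable thresholding and guide-line angle extraction, can be sketched with OpenCV as follows (block size, gain, and the steering convention are assumptions, not the paper's parameters):

```python
# Minimal sketch: adaptive (variable) thresholding of the dark guide stripe,
# then a fitted line whose angle error drives a proportional steering command.
import cv2
import numpy as np

def steering_from_frame(gray, kp=0.02):
    """Return a steering command from a grayscale corridor image."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    ys, xs = np.nonzero(binary)                  # pixels of the dark stripe
    if len(xs) < 50:
        return 0.0                               # no line found: go straight
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angle_err = np.degrees(np.arctan2(vx, vy))   # deviation from vertical, deg
    return kp * angle_err                        # proportional steering output
```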

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.07a
    • /
    • pp.42-45
    • /
    • 2010
  • This paper proposes an intelligent video surveillance system for human object tracking. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, the object in the video signal is extracted using background subtraction. Then, the object region is examined to determine whether or not it is human. For this recognition, a region-based shape descriptor, the angular radial transform (ART) in MPEG-7, is used to learn and train the shapes of human bodies. When the object is judged to be human or otherwise worth investigating, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera tracks the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track a moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency with computer vision techniques.
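The first stage, background subtraction followed by candidate region extraction, is standard OpenCV territory; a minimal sketch is below (the ART shape descriptor and the PTZ camera protocol from the paper are omitted, and all parameters are assumptions):

```python
# Minimal sketch: MOG2 background subtraction, then bounding boxes of moving
# regions large enough to be worth classifying as human.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def extract_objects(frame, min_area=800):
    """Return bounding boxes of moving regions in a BGR video frame."""
    mask = subtractor.apply(frame)               # foreground mask for this frame
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]   # drop small noise blobs
```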

Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1591-1600
    • /
    • 2007
  • Camera position information derived from a 2D face image is very important for synchronizing a virtual 3D face model with the real face viewpoint, and it is also important for other uses such as human-computer interfaces (a face mouse), automatic camera control, etc. We present an algorithm to detect the human face region and mouth based on the characteristic color features of the face and mouth in the YCbCr color space. The algorithm constructs a mouth feature image based on the Cb and Cr values and uses a pattern-based method to detect the mouth position. We then use the geometric relationship between the mouth position and the face side boundary to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and the correct determination rate is high enough to apply it in practice.
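A crude version of the YCbCr mouth map can be sketched as follows; the thresholds are illustrative assumptions, not the paper's values (note that OpenCV orders the channels as Y, Cr, Cb):

```python
# Minimal sketch: lip pixels tend to have high Cr and comparatively low Cb,
# so a Cr/Cb ratio test gives a rough mouth/lip mask.
import cv2
import numpy as np

def mouth_mask(bgr):
    """Return a binary mask of likely mouth/lip pixels in a BGR face image."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    cr = cr.astype(np.float32)
    cb = cb.astype(np.float32) + 1e-6            # avoid division by zero
    ratio = cr / cb                              # lips: Cr clearly above Cb
    return ((ratio > 1.15) & (cr > 150)).astype(np.uint8) * 255
```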

Development of Gesture Recognition-Based 3D Serious Games (치매 예방을 위한 제스처 인식 기반 3D 기능성 게임 개발)

  • He, Guan-Feng;Park, Jin-Woong;Kang, Sun-Kyung;Jung, Sung-Tae
    • Journal of Korea Game Society
    • /
    • v.11 no.6
    • /
    • pp.103-113
    • /
    • 2011
  • In this paper, we propose gesture-recognition-based 3D serious games to prevent dementia. These games are designed to enhance the dementia-prevention effect by helping users increase brain usage and physical activity through whole-body gesture recognition. The existing cameras used for gesture recognition are limited in recognition ratio and operating range. For more stable recognition of body gestures, we detect users with a 3D depth camera, obtain their joint data, and analyze joint motions to recognize whole-body gestures. The game contents are designed to exercise memory, reasoning, calculation, and spatial recognition, targeting the atrophy of brain cells that is a major cause of dementia. Each user's game results are saved and analyzed to measure how their cognitive skills improve.
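Joint-based gesture rules of the kind the abstract describes reduce to simple geometric tests on skeleton coordinates; here is a loose sketch under assumed data (joint names, units, and the upward y axis are all assumptions, not any particular SDK's API):

```python
# Minimal sketch: a 'raise right hand' gesture recognized from depth-camera
# joint coordinates given as a dict of (x, y, z) positions in meters.
def is_right_hand_raised(joints, margin=0.10):
    """True if the right hand is at least 'margin' meters above the head."""
    _, head_y, _ = joints["head"]
    _, hand_y, _ = joints["hand_right"]
    return hand_y > head_y + margin      # y axis assumed to point upward

joints = {"head": (0.02, 1.55, 2.10), "hand_right": (0.28, 1.72, 2.05)}
print(is_right_hand_raised(joints))      # True: hand is 0.17 m above the head
```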

3D Image Processing for Recognition and Size Estimation of the Fruit of Plum(Japanese Apricot) (3D 영상을 활용한 매실 인식 및 크기 추정)

  • Jang, Eun-Chae;Park, Seong-Jin;Park, Woo-Jun;Bae, Yeonghwan;Kim, Hyuck-Joo
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.2
    • /
    • pp.130-139
    • /
    • 2021
  • In this study, the size of the fruit of the Japanese apricot (plum) was estimated with a plum recognition and size estimation program using 3D images, in order to control Eurytoma maslovskii, which causes the most damage to plums, in a timely manner. Night shooting was carried out with a Kinect 2.0 camera in 2018 and with a RealSense Depth Camera D415 in 2019. Based on the acquired images, a plum recognition and size estimation program consisting of four stages (image preprocessing, sizeable plum extraction, RGB and depth image matching, and plum size estimation) was implemented in MATLAB R2018a. Running the program on 10 images produced an average plum recognition rate of 61.9%, an average plum recognition error rate of 0.5%, and an average size measurement error rate of 3.6%. Continued development of this plum recognition and size estimation program is expected to enable accurate fruit size monitoring and timely control systems for Eurytoma maslovskii in the future.
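The size-estimation step follows from the pinhole-camera relation: real size = pixel size × depth / focal length. A minimal sketch (the focal length and example values are illustrative, not the paper's parameters):

```python
# Minimal sketch: convert an object's diameter in pixels to millimeters,
# given its depth and the camera's focal length in pixel units.
def real_diameter_mm(pixel_diameter, depth_mm, focal_px):
    """Pinhole-camera size estimate: size = pixels * depth / focal length."""
    return pixel_diameter * depth_mm / focal_px

# e.g. a plum spanning 40 px at 600 mm depth with a 1050 px focal length
print(f"{real_diameter_mm(40, 600, 1050):.1f} mm")   # ~22.9 mm
```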

Pattern Recognition Method Using Fuzzy Clustering and String Matching (퍼지 클러스터링과 스트링 매칭을 통합한 형상 인식법)

  • Nam, Won-Woo;Lee, Sang-Jo
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.17 no.11
    • /
    • pp.2711-2722
    • /
    • 1993
  • Most current 2-D object recognition systems are model-based. In such systems, representations of each object in a known set are precompiled and stored in a database of models, and later used to recognize the image of an object in each instance. In this thesis, the approach to 2-D object recognition treats an object boundary as a string of structural units and uses string matching to analyze the scenes. To reduce string matching time, the models are rebuilt by means of the fuzzy c-means clustering algorithm. In the experiments, images of objects were taken by a CCD camera at the initial position of a robot, and the models were constructed with the proposed algorithm. The image of an unknown object is then taken by the camera at a random position, and the unknown object is identified by comparison against the models. Finally, the amount of translation and rotation of the object from the initial position is computed.
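The matching stage comes down to comparing boundary strings; a minimal sketch of nearest-model classification with Levenshtein edit distance is below (the structural-unit alphabet and the model strings are hypothetical):

```python
# Minimal sketch: classic DP edit distance between boundary strings, then
# nearest-model classification of an unknown object's boundary string.
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

models = {"wrench": "LLCALSC", "bracket": "LALALCC"}   # hypothetical models
scene = "LLCALSA"                                      # unknown boundary string
print(min(models, key=lambda m: edit_distance(scene, models[m])))  # 'wrench'
```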

Development of Camera Calibration Technique Using Neural-Network (뉴럴네트워크를 이용한 카메라 보정기법 개발)

  • Jang, Young-Hee
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1997.10a
    • /
    • pp.225-229
    • /
    • 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering, that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and automatic inspection of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.
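The distortion model the abstract lists (radial, decentering, thin prism) is standard; a minimal sketch of the forward mapping, with illustrative coefficients, is below. A neural network for calibration would in effect learn the inverse of this mapping:

```python
# Minimal sketch: radial (k1, k2), decentering (p1, p2), and thin-prism
# (s1, s2) distortion displacing an ideal normalized image point (x, y).
def distort(x, y, k1, k2, p1, p2, s1, s2):
    """Apply radial + decentering + thin-prism distortion to a point."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y + s1 * r2
    dy = y * radial + 2 * p1 * x * y + p2 * (r2 + 2 * y * y) + s2 * r2
    return x + dx, y + dy

# illustrative coefficients only
print(distort(0.3, -0.2, k1=1e-1, k2=1e-3, p1=1e-3, p2=-1e-3, s1=5e-4, s2=0.0))
```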
