• Title/Summary/Keyword: Camera-based Recognition


A Beverage Can Recognition System Based on Deep Learning for the Visually Impaired (시각장애인을 위한 딥러닝 기반 음료수 캔 인식 시스템)

  • Lee Chanbee; Sim Suhyun; Kim Sunhee
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.1 / pp.119-127 / 2023
  • Recently, deep learning has been used to develop various assistive devices and services that help visually impaired people in their daily lives. This is because few products and facility guides are written in braille, and less than 10% of visually impaired people can read braille. In this paper, we propose a system that recognizes beverage cans in real time and announces the beverage name by sound for the convenience of the visually impaired. Five commercially available beverage cans were selected, and a CNN model and a YOLO model were designed to recognize them. After augmenting the image data, the models were trained. The accuracy of the proposed CNN model and YOLO model is 91.2% and 90.8%, respectively. For practical verification, a system was built by attaching a camera and a speaker to a Raspberry Pi, with the YOLO model deployed on the device. It was confirmed that beverage cans were recognized and announced by sound in real time in various environments.
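
The abstract above mentions augmenting the image data before training the CNN and YOLO models. A minimal sketch of what such augmentation might look like, assuming simple flips, rotations, and brightness shifts (the function and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def augment(image, rng):
    """Return a list of simple augmented variants of an H x W x 3 uint8 image."""
    variants = [image]
    variants.append(image[:, ::-1])                      # horizontal flip
    variants.append(np.rot90(image, k=1, axes=(0, 1)))   # 90-degree rotation
    # random brightness shift, clipped back to the valid uint8 range
    shift = rng.integers(-30, 31)
    bright = np.clip(image.astype(np.int16) + shift, 0, 255).astype(np.uint8)
    variants.append(bright)
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 48, 3), dtype=np.uint8)
augmented = augment(img, rng)
print(len(augmented))  # → 4
```

Each source image yields four training samples here; a real pipeline would typically combine many more transforms.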

Indoor Location Positioning System for Image Recognition based LBS (영상인식 기반의 위치기반서비스를 위한 실내위치인식 시스템)

  • Kim, Jong-Bae
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.49-62 / 2008
  • This paper proposes an indoor location positioning system for image recognition based LBS. The proposed system is a vision-based positioning system that implements augmented reality by overlaying location results on the user's view. It recognizes the user's location from images taken by a wearable mobile PC with a camera, using pattern matching and a pre-defined location model. The user's location is estimated by image sequence matching and marker detection, and then recognized using the location model. To detect markers in image sequences, the proposed system applies an adaptive thresholding method, and by using the location model for recognition, it obtains more accurate and efficient results. Experimental results show that the proposed system has both the quality and the performance to serve as an indoor location-based service (LBS) for visitors in various environments.
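
The marker detection above relies on adaptive thresholding. A rough NumPy sketch of one common variant, mean-based local thresholding (the window size and offset are my assumptions, not the paper's values):

```python
import numpy as np

def adaptive_threshold(gray, win=15, offset=5):
    """Binarize a grayscale image against its local mean, window by window.

    A pixel is foreground when it is at least `offset` darker than the mean
    of its `win` x `win` neighborhood (markers assumed dark on light).
    """
    h, w = gray.shape
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # an integral image lets each local mean be computed in O(1)
    integral = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = ys[:, None], ys[:, None] + win
    x0, x1 = xs[None, :], xs[None, :] + win
    sums = integral[y1, x1] - integral[y0, x1] - integral[y1, x0] + integral[y0, x0]
    local_mean = sums / (win * win)
    return (gray < local_mean - offset).astype(np.uint8)

gray = np.full((20, 20), 200, dtype=np.uint8)
gray[8:12, 8:12] = 50                      # a dark marker on a light background
binary = adaptive_threshold(gray, win=7, offset=10)
print(binary[10, 10], binary[0, 0])  # → 1 0
```

Because the threshold follows the local mean, the marker is still separated cleanly when the overall scene brightness varies, which is the point of using it over a single global threshold.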


Vehicle Headlight and Taillight Recognition in Nighttime using Low-Exposure Camera and Wavelet-based Random Forest (저노출 카메라와 웨이블릿 기반 랜덤 포레스트를 이용한 야간 자동차 전조등 및 후미등 인식)

  • Heo, Duyoung; Kim, Sang Jun; Kwak, Choong Sub; Nam, Jae-Yeal; Ko, Byoung Chul
    • Journal of Broadcast Engineering / v.22 no.3 / pp.282-294 / 2017
  • In this paper, we propose a novel intelligent headlight control (IHC) system that is robust to various road lights and to camera movement caused by vehicle driving. To detect candidate light blobs, the region of interest (ROI) is divided into a front ROI (FROI) and a back ROI (BROI) by considering the camera geometry, based on a perspective range estimation model. Then, light blobs such as vehicle headlights and taillights, reflected light, and the surrounding road lighting are segmented using two different adaptive thresholding methods. Among the segmented blobs, taillights are first detected using a redness check and a random forest classifier based on Haar-like features. For headlight and taillight classification, we use a random forest instead of the popular support vector machine or convolutional neural networks, to support fast learning and testing in real-life applications. Pairing is performed using predefined geometric rules, such as vertical coordinate similarity and association checks between blobs. The proposed algorithm was successfully applied to various nighttime driving sequences, and the results show that it outperforms recent related works.
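
The pairing step described above uses geometric rules such as vertical coordinate similarity. A simplified greedy sketch of that idea, assuming pixel thresholds of my own choosing (`max_dy`, `min_dx`, `max_dx` are illustrative):

```python
def pair_lights(blobs, max_dy=8, max_dx=300, min_dx=20):
    """Greedily pair light blobs (cx, cy) into candidate vehicles.

    Two blobs are paired when their vertical centers are close (max_dy) and
    their horizontal separation falls in a plausible vehicle-width range.
    """
    blobs = sorted(blobs, key=lambda b: b[0])  # scan left to right
    used, pairs = set(), []
    for i, (xi, yi) in enumerate(blobs):
        if i in used:
            continue
        for j in range(i + 1, len(blobs)):
            if j in used:
                continue
            xj, yj = blobs[j]
            if abs(yj - yi) <= max_dy and min_dx <= xj - xi <= max_dx:
                pairs.append(((xi, yi), (xj, yj)))
                used.update((i, j))
                break
    return pairs

pairs = pair_lights([(100, 50), (180, 52), (400, 200), (470, 198), (600, 90)])
print(pairs)  # two vehicle candidates; the lone blob at (600, 90) stays unpaired
```

A production system would add the association checks between frames that the abstract mentions; this only shows the per-frame geometric rule.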

Development of a Slope Condition Analysis System using IoT Sensors and AI Camera (IoT 센서와 AI 카메라를 융합한 급경사지 상태 분석 시스템 개발)

  • Seungjoo Lee; Kiyen Jeong; Taehoon Lee; YoungSeok Kim
    • Journal of the Korean Geosynthetics Society / v.23 no.2 / pp.43-52 / 2024
  • Recent abnormal climate conditions have increased the risk of slope collapses, which frequently cause significant loss of life and property because early prediction and warning dissemination are absent. In this paper, we develop a slope condition analysis system that uses IoT sensors and an AI-based camera to assess the condition of steep slopes. To develop the system, we designed the hardware and firmware of measurement sensors suited to the ground conditions of slopes, designed AI-based image analysis algorithms, and developed a prediction and warning solution. By integrating IoT sensor data with AI camera image analysis, we aimed to minimize sensor errors and ultimately enhance data reliability. We also evaluated accuracy (reliability) by applying the system to actual slopes. As a result, sensor measurement errors were kept within 0.1°, and the data transmission rate exceeded 95%. Moreover, the AI-based image analysis achieved nighttime recognition rates of over 99%, indicating excellent performance even in low-light conditions. We anticipate that this research can be applied to slope condition analysis and smart maintenance management across various Social Overhead Capital (SOC) facilities.
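
The system above turns tilt-sensor readings into predictions and warnings. A hypothetical sketch of threshold-based warning levels, keyed only to the 0.1° error band stated in the abstract (the level names and the remaining thresholds are invented for illustration):

```python
def warning_level(tilt_rate_deg_per_hour):
    """Map a slope tilt rate (degrees/hour) to a hypothetical alert level."""
    if tilt_rate_deg_per_hour < 0.1:   # below the 0.1-degree sensor error band
        return "normal"
    if tilt_rate_deg_per_hour < 0.5:
        return "watch"
    if tilt_rate_deg_per_hour < 2.0:
        return "warning"
    return "evacuate"

print(warning_level(0.05), warning_level(0.3), warning_level(5.0))  # → normal watch evacuate
```

Gating the lowest level on the sensor's own error band avoids raising alerts on readings that are indistinguishable from measurement noise.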

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong; Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.6C / pp.593-603 / 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple people and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean shift based method in which the tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then incorporated into a graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and shown to detect and track multiple people well while producing high-quality silhouettes. The system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
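
The tracker above characterizes objects as disparity-weighted color histograms. A minimal NumPy sketch of that representation (the hue range and bin count are assumptions on my part):

```python
import numpy as np

def disparity_weighted_histogram(hue, disparity, bins=18):
    """Hue histogram in which each pixel votes with its stereo disparity.

    Nearer pixels (larger disparity) contribute more weight, so the model
    favors the tracked person over the more distant background.
    """
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180),
                           weights=disparity.astype(np.float64))
    total = hist.sum()
    return hist / total if total > 0 else hist

# toy patch: a near red-ish region (hue 10, disparity 3) over a far
# blue-ish background (hue 100, disparity 1)
hue = np.array([[10, 10], [100, 100]])
disp = np.array([[3, 3], [1, 1]])
hist = disparity_weighted_histogram(hue, disp)
print(hist[1], hist[10])  # → 0.75 0.25
```

Although both hues cover the same number of pixels, the nearer region dominates the normalized histogram, which is the behavior a mean shift tracker exploits here.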

Enterprise Human Resource Management using Hybrid Recognition Technique (하이브리드 인식 기술을 이용한 전사적 인적자원관리)

  • Han, Jung-Soo; Lee, Jeong-Heon; Kim, Gui-Jung
    • Journal of Digital Convergence / v.10 no.10 / pp.333-338 / 2012
  • Human resource management is undergoing various changes driven by IT. Whereas traditional HRM relied on non-scientific methods such as group-level management, physical workplaces, fixed working hours, and personal contacts, current enterprise human resource management (e-HRM) differs greatly in its individual-level management, virtual workspaces (for example, smart work centers and working from home), flexible and elastic working hours, and computer-based statistical analysis. Accordingly, companies have introduced a variety of techniques, such as RFID cards and fingerprint time and attendance systems, to build more efficient and strategic human resource management systems. In this paper, a time and attendance and access control management system was developed using multiple cameras and 2D/3D face recognition technology for efficient enterprise human resource management. Existing 2D face recognition is sensitive to lighting and pose, yet the proposed system achieved a recognition rate of more than 90% even under these adverse conditions. In addition, because 3D face recognition is computationally complex, recognition speed was improved by running the 2D and 3D recognizers in parallel as a hybrid.
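
The hybrid scheme above runs 2D and 3D recognition in parallel. A toy sketch of that structure with placeholder recognizers (the recognizer stubs, identifiers, scores, and the agreement-based fusion rule are all illustrative, not from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_2d(face):           # placeholder: fast but lighting/pose sensitive
    return {"id": "emp_042", "score": 0.83}

def recognize_3d(face):           # placeholder: slower but pose robust
    return {"id": "emp_042", "score": 0.91}

def hybrid_recognize(face):
    """Run both recognizers concurrently and accept only agreeing results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f2d = pool.submit(recognize_2d, face)
        f3d = pool.submit(recognize_3d, face)
        a, b = f2d.result(), f3d.result()
    if a["id"] == b["id"]:
        return a["id"], (a["score"] + b["score"]) / 2
    return None, 0.0

ident, score = hybrid_recognize(object())
print(ident, round(score, 2))
```

Running both matchers concurrently means the hybrid's latency is bounded by the slower (3D) branch rather than the sum of the two, which is the speed benefit the abstract refers to.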

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min; Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method that uses a three-dimensional RGB-Depth Xtion2 camera and considers joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed methodology is to easily extract gait data, such as joint coordinates and dynamic features, with an RGB-D camera and automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured with the F1-Score, and the influence of the dynamic features was measured by comparison with a classifier model (JC) that does not consider them. As a result, the proposed classifier that considers dynamic characteristics (JCSpeed) showed an F1-Score about 8% higher than JC.
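
The method above derives dynamic features (velocity, acceleration) from joint coordinates. A small sketch of that feature extraction using finite differences (the padding choice and feature layout are my assumptions):

```python
import numpy as np

def dynamic_features(joints):
    """Stack joint positions with frame-to-frame velocity and acceleration.

    joints: (T, J) array of T frames by J flattened joint coordinates.
    Returns a (T, 3*J) feature array suitable for a 1D convolutional classifier.
    """
    velocity = np.diff(joints, axis=0, prepend=joints[:1])          # first difference
    acceleration = np.diff(velocity, axis=0, prepend=velocity[:1])  # second difference
    return np.concatenate([joints, velocity, acceleration], axis=1)

# toy gait sequence: two coordinates moving at constant velocity
gait = np.cumsum(np.ones((5, 2)), axis=0)
feats = dynamic_features(gait)
print(feats.shape)  # → (5, 6)
```

Prepending the first frame keeps the feature array the same length as the input sequence, so the classifier sees one feature row per frame.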

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System (심층학습 기반의 자동 객체 추적 및 핸디 모션 제어 드론 시스템 구현 및 검증)

  • Kim, Youngsoo; Lee, Junbeom; Lee, Chanyoung; Jeon, Hyeri; Kim, Seungpil
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.163-169 / 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and handy motion control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images from the drone's camera with deep learning algorithms consisting of YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms offer both higher target detection accuracy and higher processing speed than the conventional color-based CAMShift algorithm. In addition, to let an operator control the drone by hand from the ground control station, we classified handy motions and generated flight control commands through motion recognition using the YOLO algorithm. We confirmed that the deep learning-based target tracking and handy motion control system tracks targets stably and makes the drone easy to control.
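
The abstract maps recognized handy motions to flight control commands. A hypothetical sketch of such a mapping (the motion labels, command names, and confidence threshold are invented for illustration, not the paper's):

```python
# Hypothetical mapping from recognized handy-motion labels to flight commands.
FLIGHT_COMMANDS = {
    "palm_up": "takeoff",
    "palm_down": "land",
    "swipe_left": "yaw_left",
    "swipe_right": "yaw_right",
}

def to_command(motion_label, confidence, threshold=0.6):
    """Issue a flight command only for confident detections; otherwise hover."""
    if confidence < threshold:
        return "hover"
    return FLIGHT_COMMANDS.get(motion_label, "hover")

print(to_command("palm_up", 0.9), to_command("palm_up", 0.4))  # → takeoff hover
```

Defaulting to hover on low-confidence or unknown detections is a conservative choice: a misread gesture then leaves the drone stationary rather than moving it.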

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il; Lee, Ji-Hong; Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: one captures images from multiple cameras or a stereo camera, while the other uses a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper, we propose a method of reconstructing 3D hand poses from a 2D image sequence captured by a single camera, using Belief Propagation in a graphical model, and of recognizing a finger clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes holding the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract each finger's motion information, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, which showed a high recognition rate of 94.66% on 300 test samples.
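
The finger-motion information above is decoded with a hidden Markov model. A compact Viterbi decoding sketch on a toy two-state model (the states, observations, and probabilities are illustrative, not the paper's trained parameters):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path for a discrete-observation HMM."""
    T, S = len(obs), len(start)
    logp = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        scores = logp[t - 1][:, None] + np.log(trans)   # rows: from, cols: to
        back[t] = scores.argmax(axis=0)
        logp[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy model: states {0: idle, 1: clicking}, observations {0: still, 1: moving}
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 1, 1, 1, 0], start, trans, emit)
print(path)  # → [0, 0, 1, 1, 1, 0]
```

The sticky transition probabilities make the decoder prefer sustained "clicking" segments over frame-by-frame flicker, which is why an HMM suits click recognition better than thresholding each frame independently.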

A Study on the Design and Implementation of a Thermal Imaging Temperature Screening System for Monitoring the Risk of Infectious Diseases in Enclosed Indoor Spaces (밀폐공간 내 감염병 위험도 모니터링을 위한 열화상 온도 스크리닝 시스템 설계 및 구현에 대한 연구)

  • Jae-Young, Jung; You-Jin, Kim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.2 / pp.85-92 / 2023
  • Respiratory infections such as COVID-19 mainly spread within enclosed spaces. Abnormal symptoms of respiratory infectious diseases are judged from initial signs such as fever, cough, sneezing, and difficulty breathing, so constant monitoring of these early symptoms is required. In this paper, image matching correction was performed between an RGB camera module and a thermal imaging camera module, and the temperature response of the thermal camera was calibrated for the measurement environment using a blackbody. To detect the measurement target recommended by the standard, a deep learning-based object recognition algorithm and an inner-canthus recognition model were developed, and model accuracy was derived on a dataset of 100 participants. In addition, the error due to measurement distance was corrected with an object distance measurement from a Lidar module and a linear regression correction module. To evaluate the proposed model, an experimental environment consisting of a motor stage, the infrared thermography temperature screening system, and a blackbody was established; temperature measurements at distances varying from 1 m to 3.5 m showed an error within 0.28°C.
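
The abstract corrects distance-dependent temperature error using the Lidar range and linear regression. A sketch of that kind of correction with `np.polyfit` (the calibration readings below are invented for illustration only; real values would come from blackbody measurements):

```python
import numpy as np

# Hypothetical calibration: apparent blackbody readings (deg C) recorded at
# several Lidar-measured distances (m); thermal readings drop with distance.
distances = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
readings = np.array([36.5, 36.2, 35.9, 35.6, 35.3, 35.0])

# model the reading as a linear function of distance
slope, intercept = np.polyfit(distances, readings, 1)

def correct(reading, distance):
    """Shift a raw reading back to the 1 m reference using the fitted line."""
    expected_at_ref = slope * 1.0 + intercept
    return reading + (expected_at_ref - (slope * distance + intercept))

corrected = correct(35.0, 3.5)
print(round(corrected, 2))  # → 36.5
```

Once the slope is known, any reading taken at a Lidar-measured distance can be mapped back to the reference distance, which is what keeps the screening error small as subjects stand anywhere between 1 m and 3.5 m.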