• Title/Summary/Keyword: Computer Vision system

Camera Calibration Method for an Automotive Safety Driving System (자동차 안전운전 보조 시스템에 응용할 수 있는 카메라 캘리브레이션 방법)

  • Park, Jong-Seop;Kim, Gi-Seok;Roh, Soo-Jang;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.621-626 / 2015
  • This paper presents a camera calibration method for the lane detection and inter-vehicle distance estimation components of an automotive safety driving system. To implement lane detection and vision-based inter-vehicle distance estimation on embedded navigation or black-box systems, computation time and algorithm complexity must be considered. The calibration process estimates the horizon, the position of the car's hood, and the lane width in order to extract a region of interest (ROI) from the input image sequences. The precision of the calibration method is critical to both lane detection and inter-vehicle distance estimation. The proposed calibration method consists of three main steps: 1) determining the horizon area; 2) estimating the car's hood area; and 3) estimating the initial lane width. Various experimental results show the effectiveness of the proposed method.
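
The calibration output is essentially two image rows, horizon and hood, that bound the road ROI. As a rough sketch of that idea only (not the authors' algorithm), the snippet below guesses both rows from peaks in a vertical-gradient row profile; the file name and the peak heuristic are assumptions.

```python
# Hypothetical sketch: crop a road ROI between an estimated horizon row and
# an estimated hood row. Illustrative only, not the paper's method.
import cv2
import numpy as np

def estimate_roi(gray):
    """Return (horizon_row, hood_row) for a grayscale road image.

    Assumption: both the horizon and the hood edge show up as strong
    horizontal structures, i.e. peaks in the vertical-gradient row profile.
    """
    sobel = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))
    row_energy = sobel.sum(axis=1)          # gradient energy per image row
    h = gray.shape[0]
    horizon = int(np.argmax(row_energy[: h // 2]))          # upper half
    hood = h // 2 + int(np.argmax(row_energy[h // 2 :]))    # lower half
    return horizon, hood

img = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)  # assumed sample frame
top, bottom = estimate_roi(img)
roi = img[top:bottom, :]   # lane detection would run only on this band
```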

On-site Performance Evaluation of a Vision-based Displacement Measurement System (영상 기반 변위 계측장치의 현장 적용 성능 평가)

  • Cho, Soojin;Sim, Sung-Han;Kim, Eunsung
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.9 / pp.5854-5860 / 2014
  • The on-site performance of a vision-based displacement measurement system (VDMS) was evaluated through a field test on a bridge. The VDMS used in this study is composed of a camera, a marker, a frame grabber, and a laptop. The system measures displacement by attaching a marker at the measurement location on the structure, capturing images of the marker at a fixed rate, and processing the image series with a planar homography technique. The developed system was first validated in a laboratory test on a small-scale building structure. The VDMS was then employed in a field test on a railroad bridge with a KTX train running under various conditions. The on-site performance was evaluated by comparing the displacement obtained by the VDMS with that measured by a laser Doppler vibrometer (LDV), an expensive and accurate displacement measurement device.
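
The displacement computation rests on a planar homography between the marker's image and its physical plane. Below is a minimal sketch of that idea, assuming a 100 mm square marker and already-detected corner pixels; the point values and the corner-detection step are illustrative, not from the paper.

```python
# Sketch of the planar-homography idea behind a VDMS: map marker pixels to
# the marker's physical plane, then read displacement off in millimetres.
import cv2
import numpy as np

# Known physical corner coordinates of the marker (mm), and the same four
# corners as detected in one video frame (pixels). Values are assumed.
marker_mm = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
corners_px = np.float32([[412, 230], [538, 233], [535, 359], [409, 356]])

H, _ = cv2.findHomography(corners_px, marker_mm)

def to_plane(pt_px):
    """Project a pixel point onto the marker plane (mm)."""
    p = H @ np.array([pt_px[0], pt_px[1], 1.0])
    return p[:2] / p[2]

ref = to_plane((473, 294))    # marker centre in the reference frame
cur = to_plane((476, 301))    # marker centre in a later frame
displacement = cur - ref      # structural displacement in mm
```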

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence / v.9 no.1 / pp.65-70 / 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departures and traffic-light misrecognition in a vision sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data are collected using a vision sensor-based model car. From the collected data, two datasets are constructed: the original data and the data with outliers removed. With camera image data as input and speed and steering data as output, an end-to-end model is trained, and the reliability of the trained model is verified. The trained end-to-end model is then applied to the model car to predict the steering angle from image data. Driving results with the model car show that the model trained on outlier-removed data outperforms the model trained on the original data.
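
The abstract does not give the outlier-filtering rule, so the sketch below shows one common choice, a z-score cut on the steering labels, purely as an assumed illustration of the data-cleaning step before end-to-end training.

```python
# Hedged sketch of the outlier-removal step: drop samples whose steering
# label deviates strongly from the dataset mean. Threshold is an assumption.
import numpy as np

def remove_outliers(images, steering, z_thresh=3.0):
    """Keep only samples whose steering angle lies within z_thresh
    standard deviations of the mean steering angle."""
    steering = np.asarray(steering, dtype=np.float32)
    z = np.abs(steering - steering.mean()) / (steering.std() + 1e-8)
    keep = z < z_thresh
    return [im for im, k in zip(images, keep) if k], steering[keep]
```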

Design of OpenCV based Finger Recognition System using binary processing and histogram graph

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong
    • Journal of the Korea Society of Computer and Information / v.21 no.2 / pp.17-23 / 2016
  • An NUI (natural user interface) is a motion-based interface that lets a user control a device with body movements, without HID devices such as a mouse and keyboard. In this paper, we use a Pi Camera and sensors connected to a Raspberry Pi, a small embedded board. Using OpenCV algorithms optimized for image recognition and computer vision, we implement an NUI device that is more human-friendly and intuitive than traditional HID equipment. Motion is detected by frame-comparison operations, and we propose a more advanced recognition system that fuses motion sensors connected to the Raspberry Pi.
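
As a hedged illustration of "binary processing and histogram graph", the sketch below binarizes a hand image with Otsu thresholding and counts finger-like peaks in the column histogram; the threshold choice and peak rule are assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: binarize the hand region, sum white pixels per
# column, and count runs of tall columns as raised-finger candidates.
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")                    # assumed sample image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

hist = (binary // 255).sum(axis=0)        # white-pixel count per column
above = hist > 0.5 * hist.max()           # columns tall enough to be fingers
# Each rising edge in the boolean profile starts one finger candidate.
fingers = int(np.count_nonzero(np.diff(above.astype(int)) == 1))
print("finger candidates:", fingers)
```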

Intelligent Shoes for Detecting Blind Falls Using the Internet of Things

  • Ahmad Abusukhon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2377-2398 / 2023
  • In our daily lives, we engage in a variety of tasks that rely on our senses, such as seeing. Blindness is the absence of the sense of vision. According to the World Health Organization, 2.2 billion people worldwide suffer from various forms of vision impairment. Unfortunately, blind people face a variety of indoor and outdoor challenges on a daily basis, limiting their mobility and preventing them from engaging in other activities. Blind people are very vulnerable to a variety of hazards, including falls; various barriers, such as stairs, can cause a fall. The Internet of Things (IoT) is used to track falls and send a warning message to the blind person's caretakers. One gap in previous works is that they were unable to differentiate between true and false falls. Treating false falls as true falls sends many false alarms to the caretakers, who may then reject the IoT system. To bridge this gap, this paper proposes an intelligent shoe that precisely distinguishes between false and true falls based on three sensors: a load scale sensor, a light sensor, and a flex sensor. The proposed IoT system is tested in an indoor environment on various fall scenarios using four machine learning models. The results show an accuracy of 0.96. Compared to the state of the art, our system is simpler and more accurate, since it avoids sending false alarms to the caretakers.
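
The classification step could look roughly like the following sketch: a model trained on the three sensor readings to separate true from false falls. The sample readings and the choice of a random forest are illustrative assumptions; the paper compares four models.

```python
# Minimal sketch: classify (load, light, flex) readings as true/false fall.
from sklearn.ensemble import RandomForestClassifier

# Each row: [load_kg, light_level, flex_bend]; label 1 = true fall.
# All values below are made up for illustration.
X = [[0.0, 820, 310],   # no weight on shoe, bright, strongly bent -> fall
     [62.0, 90, 40],    # full weight, shoe dark, nearly straight  -> no fall
     [0.0, 100, 35],    # shoe lifted normally while walking       -> no fall
     [0.0, 800, 290]]
y = [1, 0, 0, 1]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.0, 790, 300]]))   # classify a new sensor reading
```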

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / v.8 no.1 / pp.207-212 / 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient lighting attuned to occupants' moods and rhythms in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features with advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) lighting devices and controls the illumination of the lighting devices assigned to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, together with a networked video surveillance solution. The results were verified with a custom automatic lighting control daemon application integrated with OpenCV-based computer vision methods that predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results from the daemon system are analyzed and used for real-time development of a lighting control strategy.
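
A rough sketch of the camera-to-luminaire loop, using OpenCV's stock HOG person detector for occupancy and mean frame brightness as a daylight proxy; both cues and the resulting control rule are assumptions standing in for the paper's computer vision methods.

```python
# Hedged sketch: derive a lighting command from one surveillance frame.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("office_cam.jpg")              # assumed camera frame
boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
occupied = len(boxes) > 0

# Mean frame brightness as a crude daylight proxy (assumption).
daylight = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean() / 255.0
level = int(100 * (1.0 - daylight)) if occupied else 0
print(f"set IoT luminaire to {level}% illumination")
```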

Histogram Based Hand Recognition System for Augmented Reality (증강현실을 위한 히스토그램 기반의 손 인식 시스템)

  • Ko, Min-Su;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.7 / pp.1564-1572 / 2011
  • In this paper, we propose a new histogram-based hand recognition algorithm for augmented reality. A hand recognition system enables useful interaction between a user and a computer. However, vision-based hand gesture recognition is difficult because of viewing-angle dependency and the complexity of human hand shapes. The hand recognition system proposed in this paper is based on features derived from hand geometry and consists of two steps: in the first step, the hand region is extracted from the image captured by a camera, and in the second step, hand gestures are recognized. We first extract the hand region by removing the background and using skin color information, then recognize the hand shape by locating hand feature points with a histogram of the extracted hand region. Finally, we build an augmented reality system that controls a 3D object with the recognized hand gesture. Experimental results show that the proposed algorithm achieves more than 91% hand recognition accuracy with low computational cost.
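
The first step, hand-region extraction from skin color, might look like the following sketch, assuming a commonly used YCrCb skin range; the paper's actual color model and histogram analysis are not specified here.

```python
# Sketch of skin-color hand segmentation plus the histograms that would
# feed the feature-point step. Color bounds are a common assumption.
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")                        # assumed input
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)              # largest skin blob
mask = np.zeros(skin.shape, np.uint8)
cv2.drawContours(mask, [hand], -1, 255, -1)

# Row/column histograms of the hand mask would feed feature-point search.
rows, cols = mask.sum(axis=1), mask.sum(axis=0)
```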

A Real-time Augmented Reality System using Hand Geometric Characteristics based on Computer Vision (손의 기하학적인 특성을 적용한 실시간 비전 기반 증강현실 시스템)

  • Choi, Hee-Sun;Jung, Da-Un;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.323-335 / 2012
  • In this paper, we propose an AR (augmented reality) system based on computer vision that uses the user's bare hand. To register a virtual object on the real input image, it is important to detect and track the correct feature points. Marker-based AR systems are stable, but they cannot register the virtual object once the marker leaves the camera's field of view, and they tend to constrain how users can control the virtual object. Our system instead detects fingertips as fiducial features using an adaptive ellipse fitting method that considers the geometric characteristics of the hand. It registers the virtual object stably by tracking fingertip movement and determining the shortest distance from the palm center. We verified that fingertip detection accuracy exceeds 82.0%, and that fingertip ordering and tracking show errors of only 1.8% and 2.0%, respectively. We show that this system can replace marker systems by effectively tracking the camera projection matrix for stable augmentation of virtual objects.
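
Two of the geometric cues such systems rely on, the palm center and fingertip candidates, can be sketched as below; the distance-transform and convex-hull heuristics are stand-ins, and the adaptive ellipse fitting itself is only indicated, not reproduced.

```python
# Sketch: palm centre from a distance transform, fingertip candidates as
# convex-hull points farthest from it. Illustrative heuristics only.
import cv2
import numpy as np

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)   # binary hand mask
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
palm = np.unravel_index(np.argmax(dist), dist.shape)[::-1]  # (x, y)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)
hull = cv2.convexHull(cnt).reshape(-1, 2)

# Rank hull points by distance from the palm centre; the top ones are
# fingertip candidates that an ellipse fit would then refine.
d = np.linalg.norm(hull - np.array(palm), axis=1)
fingertips = hull[np.argsort(d)[-5:]]
```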

An Efficient Deep Learning Based Image Recognition Service System Using AWS Lambda Serverless Computing Technology (AWS Lambda Serverless Computing 기술을 활용한 효율적인 딥러닝 기반 이미지 인식 서비스 시스템)

  • Lee, Hyunchul;Lee, Sungmin;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering / v.9 no.6 / pp.177-186 / 2020
  • Recent advances in deep learning have improved image recognition performance in the field of computer vision, and serverless computing is emerging as a next-generation cloud computing technology for event-based cloud application development and services. Attempts to use deep learning and serverless computing together in real-world image recognition services are increasing. This paper therefore describes how to develop an efficient deep learning-based image recognition service system using serverless computing technology. The proposed system uses AWS Lambda to serve a large neural network model to users at low cost. We also show that a serverless system hosting a large neural network model can be built effectively by addressing the shortcomings of AWS Lambda: cold start time and capacity limitations. Through experiments, we confirmed that the proposed system, built on AWS Lambda serverless computing, serves large neural network models efficiently, solving the processing time and capacity limitations while reducing cost.
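
One standard mitigation for Lambda cold starts, caching the model in a module-level variable so warm containers reuse it, is sketched below; the handler shape follows AWS Lambda's Python convention, while the model itself is a stub, since the paper's model and storage layout are not given here.

```python
# Hedged sketch of a Lambda handler that loads its model once per container.
import base64
import json

class StubModel:                       # stands in for the large CNN
    def predict(self, img_bytes):
        return "cat" if len(img_bytes) % 2 else "dog"

MODEL = None                           # cached across warm invocations

def lambda_handler(event, context):
    global MODEL
    if MODEL is None:                  # cold start: load once per container
        MODEL = StubModel()
    img_bytes = base64.b64decode(event["body"])
    return {"statusCode": 200,
            "body": json.dumps({"label": MODEL.predict(img_bytes)})}
```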

Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model (탑-뷰 변환과 빔-레이 모델을 이용한 영상기반 보행 안내 시스템)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.93-102 / 2011
  • This paper presents a walking guidance system for blind pedestrians in outdoor environments using just a single camera. Unlike many existing travel-aid systems that rely on stereo vision, the proposed system obtains the necessary information about the road environment from a single camera fixed at the user's belly. To achieve this, a top-view image of the road is used, on which obstacles are detected by first extracting local extreme points and then verifying them with a polar edge histogram. Meanwhile, user motion is estimated using optical flow in an area close to the user. Based on the information extracted from the image domain, an audio message generation scheme delivers guidance instructions to the blind user via synthetic voice. Experiments with several sidewalk video clips show that the proposed walking guidance system provides useful guidance instructions in typical sidewalk environments.
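
The top-view transform at the heart of such a system is a single perspective warp. A minimal sketch follows, assuming an illustrative road trapezoid; in practice the four source points depend on the camera mounting and calibration.

```python
# Sketch of the top-view (bird's-eye) transform: map a road trapezoid in the
# camera image to a rectangle, where obstacle geometry is easier to measure.
import cv2
import numpy as np

frame = cv2.imread("sidewalk.jpg")                 # assumed input frame
h, w = frame.shape[:2]

src = np.float32([[w*0.30, h*0.60], [w*0.70, h*0.60],   # far-left, far-right
                  [w*0.95, h*0.95], [w*0.05, h*0.95]])  # near-right, near-left
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)
top_view = cv2.warpPerspective(frame, M, (w, h))   # obstacle detection runs here
```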