• Title/Summary/Keyword: camera vision

Search results: 1,386 (processing time: 0.027 seconds)

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.4
    • /
    • pp.358-365
    • /
    • 2017
  • In this paper, we propose an object recognition method that uses image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method for responding in real time to abnormal situations or external environmental changes affecting a work object, using the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies the Otsu thresholding technique, separating the V channel, which contains intensity information, and the S channel of the HSV color space, from which the background is easy to separate. With this method, when the work object is displaced or rotated by external factors, its position is calculated and provided to the industrial manipulator.
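The Otsu step described above can be sketched with a minimal numpy-only illustration (not the authors' implementation): it finds the threshold on a single HSV channel that maximizes between-class variance, then separates foreground from background.

```python
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    """Return the Otsu threshold (0-255) that maximizes between-class variance."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0.0, 0.0
    for t in range(256):
        w_bg += hist[t]                 # background pixel count at threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic S channel: dark background (~20-39) and a bright object (~190-209).
rng = np.random.default_rng(0)
s = np.concatenate([rng.integers(20, 40, 5000),
                    rng.integers(190, 210, 5000)]).astype(np.uint8).reshape(100, 100)
t = otsu_threshold(s)
mask = s > t   # foreground/background separation
```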

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 (Rehabilitation Welfare)
    • /
    • v.21 no.3
    • /
    • pp.129-146
    • /
    • 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to greater danger than sighted people because they cannot become aware of the situation quickly. Conventional fire-detection methods such as smoke detectors are slow and unreliable because they typically rely on chemical sensors to detect fire particles. By using a vision sensor instead, fire can be detected much faster, as our experiments show. Previous studies have applied various image-processing and machine-learning techniques to detect fire, but these usually perform poorly because they require hand-crafted features that do not generalize well across scenarios. Recent advances in deep learning make it possible to address this problem with a deep learning-based object detector that detects fire in images from security cameras; deep learning approaches learn features automatically and therefore tend to generalize well to varied scenes. To maximize performance, we applied recent computer-vision technology, the YOLO detector, to this task. Considering the trade-off between recall and model complexity, we introduce two convolutional neural networks of slightly different complexity that detect fire at different recall rates. Both models detect fire at 99% average precision; one achieves 76% recall at 30 FPS, while the other achieves 61% recall at 50 FPS. We also compare the models' memory consumption and demonstrate their robustness by testing on various real-world scenarios.
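The recall-versus-speed trade-off can be made concrete with the standard precision/recall definitions; the counts below are hypothetical, chosen only to mirror the 76%/61% recall figures reported above.

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts on a 100-fire test set for the two detector variants:
fast_model = precision_recall(tp=61, fp=1, fn=39)   # lighter model, 50 FPS
deep_model = precision_recall(tp=76, fp=1, fn=24)   # heavier model, 30 FPS
```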

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.7
    • /
    • pp.1015-1026
    • /
    • 2019
  • Objective: The objective of this study was to evaluate three degrees of white striping (WS), addressing both their automatic assessment and consumer acceptance. WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and was verified by consumer acceptance and purchase-intent tests. Methods: The samples for image analysis were classified by trained specialists into severity degrees according to visual and firmness aspects. Samples were photographed with a digital camera, and 25 features were extracted from the images. ML algorithms were applied to induce a model capable of classifying the samples into the three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate on normal (NORM) sample predictions. Conclusion: The proposed system proved to be fast and accurate for classifying WS samples. The sensory acceptance analysis showed that WS myopathy negatively affects the tenderness of grilled broiler breast fillets, while the appearance of the raw samples influenced their purchase-intention scores.
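The information-gain ranking of attributes mentioned in the results can be illustrated with a minimal numpy sketch; the single feature and three-class labels below are toy data, not the study's 25 image features.

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    """Gain of splitting a numeric image feature at `threshold`."""
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    h_split = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_split

# Toy data: one texture-like feature, three WS severity classes
# (0 = NORM, 1 = moderate, 2 = severe).
feature = np.array([0.10, 0.20, 0.15, 0.50, 0.55, 0.60, 0.90, 0.95, 1.00])
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
gain = information_gain(feature, labels, threshold=0.3)
```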

Development of Mask-RCNN Based Axle Control Violation Detection Method for Enforcement on Overload Trucks (과적 화물차 단속을 위한 Mask-RCNN기반 축조작 검지 기술 개발)

  • Park, Hyun suk;Cho, Yong sung;Kim, Young Nam;Kim, Jin pyung
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.5
    • /
    • pp.57-66
    • /
    • 2022
  • The Road Management Administration cracks down on overloaded vehicles by installing low-speed or high-speed WIM systems at tollgates and on the main lines of expressways. In recent years, however, drivers have increasingly evaded this enforcement by illegally manipulating the variable axle of an overloaded truck: when entering the overloaded-vehicle checkpoint, all axles are lowered so that the vehicle passes normally, and on the main road the variable axle is illegally lifted again, pushing the axle load beyond 10 tons. This study therefore developed a technology that detects the state of a truck's variable axle from roadside camera images while the truck is driving. In particular, the technology provides a basis for enforcement against vehicles that lift the variable axle after leaving the checkpoint, by linking the vehicle to the information recorded at the checkpoint. Fundamentally, the tires of the vehicle are recognized with the Mask R-CNN algorithm, the recognized tires are virtually aligned before and after the checkpoint, and the height difference between them is measured to determine whether the variable axle was lifted after the vehicle left the checkpoint.
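The final geometric step can be sketched as below, assuming the tire bounding boxes have already been obtained from Mask R-CNN and matched axle-by-axle between the two viewpoints; the boxes and pixel tolerance are hypothetical.

```python
import numpy as np

def lifted_axles(boxes_before, boxes_after, tol=10.0):
    """Compare the bottom edge (max y, pixels) of each matched tire box
    before and after the checkpoint; a tire whose bottom edge rises by
    more than `tol` pixels is flagged as a lifted variable axle."""
    bottoms_before = np.array([y2 for (_, _, _, y2) in boxes_before])
    bottoms_after = np.array([y2 for (_, _, _, y2) in boxes_after])
    rise = bottoms_before - bottoms_after   # positive = tire moved up in the image
    return rise > tol

# Hypothetical (x1, y1, x2, y2) boxes for three axles; the last one is lifted.
before = [(100, 300, 160, 400), (300, 300, 360, 400), (500, 300, 560, 400)]
after = [(100, 300, 160, 400), (300, 300, 360, 400), (500, 260, 560, 360)]
flags = lifted_axles(before, after)
```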

Methodology for Vehicle Trajectory Detection Using Long Distance Image Tracking (원거리 차량 추적 감지 방법)

  • Oh, Ju-Taek;Min, Joon-Young;Heo, Byung-Do
    • International Journal of Highway Engineering
    • /
    • v.10 no.2
    • /
    • pp.159-166
    • /
    • 2008
  • Video image processing systems (VIPS) offer numerous benefits to transportation models and applications, due to their ability to monitor traffic in real time. VIPS based on a wide-area detection algorithm provide traffic parameters such as flow and velocity as well as occupancy and density. However, most current commercial VIPS utilize a tripwire detection algorithm that examines image intensity changes in the detection regions to indicate vehicle presence and passage, i.e., they do not identify individual vehicles as unique targets. If VIPS are developed to track individual vehicles and thus trace vehicle trajectories, many existing transportation models will benefit from more detailed information of individual vehicles. Furthermore, additional information obtained from the vehicle trajectories will improve incident detection by identifying lane change maneuvers and acceleration/deceleration patterns. However, unlike human vision, VIPS cameras have difficulty in recognizing vehicle movements over a detection zone longer than 100 meters. Over such a distance, the camera operators need to zoom in to recognize objects. As a result, vehicle tracking with a single camera is limited to detection zones under 100m. This paper develops a methodology capable of monitoring individual vehicle trajectories based on image processing. To improve traffic flow surveillance, a long distance tracking algorithm for use over 200m is developed with multi-closed circuit television (CCTV) cameras. The algorithm is capable of recognizing individual vehicle maneuvers and increasing the effectiveness of incident detection.
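The multi-camera hand-off idea can be sketched as follows, assuming each CCTV camera reports a vehicle's longitudinal position within its own detection zone and the zone offsets along the road are known; all names and numbers are illustrative.

```python
def merge_trajectories(traj_cam1, traj_cam2, offset_m=100.0):
    """Hand a vehicle track off between two CCTV zones: camera 2's local
    longitudinal positions are shifted by its zone offset along the road,
    then the two segments are merged and ordered by timestamp."""
    merged = [(t, x) for t, x in traj_cam1]
    merged += [(t, x + offset_m) for t, x in traj_cam2]
    merged.sort(key=lambda p: p[0])
    return merged

# Hypothetical (time s, position m) samples from two adjacent 100 m zones.
cam1 = [(0.0, 0.0), (2.0, 40.0), (4.0, 80.0)]
cam2 = [(5.0, 0.0), (7.0, 40.0), (9.0, 80.0)]
track = merge_trajectories(cam1, cam2)
```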


Robust 3-D Motion Estimation Based on Stereo Vision and Kalman Filtering (스테레오 시각과 Kalman 필터링을 이용한 강인한 3차원 운동추정)

  • Kye, Young-Cheol (계영철)
    • Journal of Broadcast Engineering
    • /
    • v.1 no.2
    • /
    • pp.176-187
    • /
    • 1996
  • This paper deals with the accurate estimation of the 3-D pose (position and orientation) of a moving object with reference to the world frame (or robot base frame), based on a sequence of stereo images taken by cameras mounted on the end-effector of a robot manipulator. This work is an extension of the previous work [1]. Emphasis is given to 3-D pose estimation relative to the world (or robot base) frame in the presence not only of measurement noise in the 2-D images [1] but also of camera position errors due to random noise in the joint angles of the robot manipulator. To this end, a new set of discrete linear Kalman filter equations is derived, based on the following: 1) the orientation error of the object frame due to measurement noise in the 2-D images is modeled with reference to the camera frame by analyzing the noise propagation through 3-D reconstruction; 2) an extended Jacobian matrix is formulated by combining the result of 1) with the orientation error of the end-effector frame due to joint angle errors, through robot differential kinematics; and 3) the rotational motion of the object, which is nonlinear in nature, is linearized based on quaternions. Motion parameters are computed from the estimated quaternions using the iterated least-squares method. Simulation results show a significant reduction of estimation errors and demonstrate accurate convergence of the estimated motion parameters to the true values.
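A minimal discrete linear Kalman filter predict/update cycle in numpy, shown on a toy 1-D constant-velocity model rather than the paper's full pose/quaternion state; all matrices and noise levels below are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter."""
    # Predict through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy model: state [position, velocity], only position is measured.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                         # small process noise
R = np.array([[0.25]])                       # measurement noise (std 0.5)
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(1)
for k in range(1, 30):
    z = np.array([k * 1.0 + 0.5 * rng.standard_normal()])  # noisy position
    x, P = kalman_step(x, P, z, F, H, Q, R)
```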


Improved CS-RANSAC Algorithm Using K-Means Clustering (K-Means 클러스터링을 적용한 향상된 CS-RANSAC 알고리즘)

  • Ko, Seunghyun;Yoon, Ui-Nyoung;Alikhanov, Jumabek;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.6
    • /
    • pp.315-320
    • /
    • 2017
  • Efficiently estimating the correct pose of augmented objects in the live camera view is one of the most important problems in image tracking. In computer vision, a homography is used for camera pose estimation in markerless augmented reality systems. To estimate the homography, features such as SURF are extracted from the images and matched, and the homography is estimated from these matches. The RANSAC algorithm is widely used for this purpose, and the DCS-RANSAC algorithm improves performance by dynamically applying constraints based on the Constraint Satisfaction Problem. In DCS-RANSAC, however, the image groups are defined manually from the pattern of feature distribution, so the algorithm cannot classify an input image whose feature-distribution pattern it does not recognize, which reduces its performance. To address this problem, we propose the KCS-RANSAC algorithm, which uses K-means clustering within CS-RANSAC to cluster images automatically by their pattern of feature distribution and applies constraints to each image group. Experimental results show that our KCS-RANSAC algorithm outperforms DCS-RANSAC in terms of speed, accuracy, and inlier rate.
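The clustering step of KCS-RANSAC can be sketched with a plain K-means over per-image feature-distribution descriptors; this numpy version and the quadrant-histogram descriptor are illustrative assumptions, not the paper's exact features.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: assign each descriptor to its nearest centroid,
    recompute centroids, and repeat until convergence."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical per-image descriptors: fraction of SURF keypoints falling in
# each image quadrant (4-D), so images with a similar spread group together.
X = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.65, 0.15, 0.10, 0.10],
              [0.10, 0.10, 0.10, 0.70],
              [0.10, 0.15, 0.10, 0.65]])
labels, _ = kmeans(X, k=2)
```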

Projective Reconstruction Method for 3D modeling from Un-calibrated Image Sequence (비교정 영상 시퀀스로부터 3차원 모델링을 위한 프로젝티브 재구성 방법)

  • Hong Hyun-Ki;Jung Yoon-Yong;Hwang Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.113-120
    • /
    • 2005
  • 3D reconstruction of scene structure from un-calibrated image sequences has long been one of the central problems in computer vision. For 3D reconstruction in Euclidean space, projective reconstruction, whose methods are classified into merging and factorization, is needed as a preceding step. By computing all camera projection matrices and structure at the same time, the factorization method suffers less from error accumulation than merging. However, factorization is hard to apply precisely to long sequences because it assumes that all correspondences remain visible in every view from the first frame to the last. This paper presents a new projective reconstruction method for recovering 3D structure over long sequences. We break the full sequence into sub-sequences using a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points across the frame. The projective reconstructions of all sub-sequences are then registered into a common coordinate frame for a complete description of the scene. Experimental results show that the proposed method recovers more precise 3D structure than the merging method.
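The sub-sequence splitting criterion can be sketched as below; the homography here is given (a pure translation) and the thresholds are hypothetical, since the paper's exact quantitative measure also weighs the spatial distribution of the matches.

```python
import numpy as np

def homography_error(H, pts1, pts2):
    """Mean transfer error (pixels) of mapping pts1 through H onto pts2."""
    ones = np.ones((len(pts1), 1))
    proj = np.hstack([pts1, ones]) @ H.T      # homogeneous projection
    proj = proj[:, :2] / proj[:, 2:3]         # back to inhomogeneous coords
    return float(np.linalg.norm(proj - pts2, axis=1).mean())

def should_split(n_matches, err, min_matches=50, max_err=3.0):
    """Start a new sub-sequence when frames share too few correspondences
    or the inter-frame homography fits them poorly."""
    return n_matches < min_matches or err > max_err

# Hypothetical pure-translation homography and matches that obey it exactly.
H = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
pts1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pts2 = pts1 + np.array([5.0, -2.0])
err = homography_error(H, pts1, pts2)
```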

Development of a Low-cost Monocular PSD Motion Capture System with Two Active Markers at Fixed Distance (일정간격의 두 능동마커를 이용한 저가형 단안 PSD 모션캡쳐 시스템 개발)

  • Seo, Pyeong-Won;Kim, Yu-Geon;Han, Chang-Ho;Ryu, Young-Kee;Oh, Choon-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.2
    • /
    • pp.61-71
    • /
    • 2009
  • In this paper, we propose a low-cost, compact motion capture system that enables motion games on the PS2 (PlayStation 2). The motion capture systems currently used in film production and game development are expensive and bulky, while motion games using common USB cameras are slow and limited to two-dimensional recognition. A PSD (position sensitive detector) sensor, in contrast, is both fast and inexpensive. In recent years, 3D motion capture systems using 2D PSD optic sensors have been developed: a multi-PSD system based on stereo vision and a single-PSD system based on optical principles. Both have drawbacks for motion games, however. The multi-PSD system is expensive and complicated because it uses two or more PSD cameras, and for the single-PSD system it is difficult to make markers with omnidirectionally equal intensity. In this research, we propose a new approach that solves these problems: 3D coordinates can be measured when the intensities of the two separated markers are equal. We built a system based on this approach and evaluated its performance. As a result, we developed a motion capture system that is single-camera, low-cost, fast, compact, wide-angle, and well suited to motion games. The developed system is expected to be useful in animation, movies, and games.
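The core monocular range relation behind the fixed-distance two-marker idea is the pinhole proportion below; it assumes the marker baseline is roughly parallel to the image plane, and all numbers are illustrative.

```python
def marker_depth(pixel_sep: float, marker_sep_m: float, focal_px: float) -> float:
    """Pinhole estimate: two markers a known distance apart project to a
    pixel separation inversely proportional to their depth, so
    z = f * D / d (f in pixels, D in meters, d in pixels)."""
    return focal_px * marker_sep_m / pixel_sep

# Hypothetical: markers 0.2 m apart, focal length 800 px, imaged 80 px apart.
z = marker_depth(pixel_sep=80.0, marker_sep_m=0.2, focal_px=800.0)
```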

IoT Based Intelligent Position and Posture Control of Home Wellness Robots (홈 웰니스 로봇의 사물인터넷 기반 지능형 자기 위치 및 자세 제어)

  • Lee, Byoungsu;Hyun, Chang-Ho;Kim, Seungwoo
    • Journal of IKEEE
    • /
    • v.18 no.4
    • /
    • pp.636-644
    • /
    • 2014
  • This paper technically implements the sensing platform for a home wellness robot. First, the self-localization technique is based on a smart home, objects in the home environment, and the IoT (Internet of Things) connection between them and the home wellness robot. RF tags are installed in the smart home, and absolute coordinate information is acquired by an object equipped with an RF reader. Bluetooth communication between the object and the home wellness robot then delivers this absolute coordinate information to the robot. After that, the robot determines its relative coordinates, and thus its self-localization, through its stereo camera. Second, this paper proposes a vision-sensor-based fuzzy control method for the robot's approach to an object. Using the stereo camera mounted on the robot's face, depth information to the object is extracted; the angle between the object and the robot is then obtained by calculating the warped angle relative to the center of the image. The obtained information is written to a look-up table used for attitude control while approaching the object. Experiments with the home wellness robot in the smart home environment confirm the performance of the proposed self-localization and posture-control methods.
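For a pinhole camera, the warped-angle computation described above reduces to the sketch below; the image width, focal length, and object column are hypothetical.

```python
import math

def heading_error(u_obj: float, cx: float, focal_px: float) -> float:
    """Angle (rad) between the robot's optical axis and the object, from the
    object's horizontal image coordinate relative to the image center:
    positive means the object is to the right of center."""
    return math.atan2(u_obj - cx, focal_px)

# Hypothetical 640-px-wide image (center cx = 320), focal length 500 px,
# object detected at column 420.
angle = heading_error(420.0, cx=320.0, focal_px=500.0)
```

The robot would then command a turn proportional to this angle (or via the look-up table) until the object is centered, i.e. until the angle approaches zero.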