• Title/Summary/Keyword: vision camera


Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement are among the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: one captures images from multiple cameras or a stereo camera, the other from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method of reconstructing 3D hand poses from a 2D input image sequence captured by a single camera by means of Belief Propagation in a graphical model, and of recognizing a finger clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes carrying the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the information on each finger's motion, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and the result showed a high recognition rate of 94.66% on 300 test samples.
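The Belief Propagation step above can be sketched as sum-product message passing on a chain of hand joints. The state space, potentials, and chain topology below are illustrative stand-ins, not the paper's actual model:

```python
# Sum-product belief propagation on a chain of hand-joint nodes.
# States, unary evidence, and the pairwise smoothness table are toy values.

def chain_bp(unary, pairwise):
    """unary: list of dicts state -> potential, one per node;
    pairwise: dict (state_i, state_j) -> potential, shared by all edges.
    Returns normalized marginal beliefs per node."""
    n = len(unary)
    states = list(unary[0].keys())
    # Forward messages: fwd[i] is the message arriving at node i from the left
    fwd = [{s: 1.0 for s in states}]
    for i in range(1, n):
        fwd.append({t: sum(unary[i - 1][s] * fwd[i - 1][s] * pairwise[(s, t)]
                           for s in states) for t in states})
    # Backward messages: bwd[i] arrives at node i from the right
    bwd = [None] * n
    bwd[n - 1] = {s: 1.0 for s in states}
    for i in range(n - 2, -1, -1):
        bwd[i] = {s: sum(unary[i + 1][t] * bwd[i + 1][t] * pairwise[(s, t)]
                         for t in states) for s in states}
    # Belief at each node: unary evidence times both incoming messages
    beliefs = []
    for i in range(n):
        b = {s: unary[i][s] * fwd[i][s] * bwd[i][s] for s in states}
        z = sum(b.values())
        beliefs.append({s: v / z for s, v in b.items()})
    return beliefs
```

With strong "bent" evidence at the fingertip node, the smoothness potential pulls the neighboring joints' beliefs toward "bent" as well, which is the behavior the tracker relies on.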

The Obstacle Avoidance Algorithm of Mobile Robot using Line Histogram Intensity (Line Histogram Intensity를 이용한 이동로봇의 장애물 회피 알고리즘)

  • 류한성;최중경;구본민;박무열;방만식
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1365-1373
    • /
    • 2002
  • In this paper, we present two types of vision algorithm for obstacle avoidance by a mobile robot equipped with a CCD camera. These are simple algorithms that compare grey levels in the input images. The mobile robot depends on image processing and movement commands from a host PC. We have studied a self-controlled mobile robot system with a CCD camera, consisting of a digital signal processor, step motors, an RF module, and the CCD camera. A wireless RF module transmits movement commands between the robot and the host PC. The robot moves straight ahead until it recognizes an obstacle in the input image, which is preprocessed by edge detection, conversion, and thresholding; it then avoids the obstacle recognized by line histogram intensity. The host PC measures the intensity waveform along lines spaced every 20 pixels, where each histogram consists of the (x, y) pixel values; for example, the first line histogram intensity wave runs from (0, 0) to (0, 197) and the last from (280, 0) to (280, 197). From these waveforms we identify uniform and nonuniform wave regions; the extent of the uniform wave corresponds to the obstacle region. We believe this algorithm is very useful for obstacle avoidance by a mobile robot.
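The line-histogram-intensity idea above can be sketched as sampling scan lines every 20 pixels and classifying each line's grey-level profile as uniform or nonuniform. The image layout, variance test, and threshold are assumptions for illustration, not the paper's exact procedure:

```python
# Sample scan lines every 20 pixels and flag lines whose grey-level
# profile is uniform (low variance), following the abstract's rule that
# the uniform wave region corresponds to the obstacle.

def line_profiles(image, step=20):
    """image: 2D list of grey levels. Returns (row_index, profile) pairs."""
    return [(y, image[y]) for y in range(0, len(image), step)]

def is_uniform(profile, tol=5.0):
    """Uniform if the grey-level variance along the line is below tol."""
    mean = sum(profile) / len(profile)
    var = sum((p - mean) ** 2 for p in profile) / len(profile)
    return var < tol

def obstacle_rows(image, step=20, tol=5.0):
    """Rows whose sampled profile is uniform, i.e. obstacle candidates."""
    return [y for y, prof in line_profiles(image, step) if is_uniform(prof, tol)]
```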

On-the-go Nitrogen Sensing and Fertilizer Control for Site-specific Crop Management

  • Kim, Y.;Reid, J.F.;Han, S.
    • Agricultural and Biosystems Engineering
    • /
    • v.7 no.1
    • /
    • pp.18-26
    • /
    • 2006
  • In-field site-specific nitrogen (N) management increases crop yield, reduces N application to minimize the risk of nitrate contamination of ground water, and thus reduces farming cost. Real-time N sensing and fertilization are required for efficient N management. An 'on-the-go' site-specific N management system was developed and evaluated for supplemental N application to corn (Zea mays L.). This real-time N sensing and fertilization system monitored and assessed N fertilization needs using a vision-based spectral sensor and applied the appropriate variable N rate according to the N deficiency level estimated from the spectral signature of crop canopies. Sensor inputs included ambient illumination, camera parameters, and image histograms of three spectral regions (red, green, and near-infrared). The real-time sensor-based supplemental N treatment improved crop N status and increased yield in most plots. The largest yield increase was achieved in plots with a low initial N treatment combined with supplemental variable-rate application. Plots where N was applied latest in the season showed a reduced impact of supplemental N. For plots with no supplemental N application, yield increased gradually with initial N treatment, but any N application of more than 101 kg/ha had minimal impact on yield.

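The sensing-to-rate-control loop described above can be sketched as a spectral index compared against a well-fertilized reference. The index formula, sufficiency scaling, and rate cap are illustrative assumptions, not the paper's calibration:

```python
# Map mean red/NIR channel intensities to a variable N rate by comparing
# a normalized-difference index against a well-fertilized reference strip.
# All constants here are illustrative, not the paper's values.

def ndvi_like(red, nir):
    """Normalized difference of NIR and red mean intensities."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def n_rate(index, reference_index, max_rate_kg_ha=101.0):
    """Scale the supplemental N rate by the shortfall relative to the
    reference; the 101 kg/ha cap borrows the abstract's reported ceiling
    beyond which extra N had minimal yield impact (an assumption here)."""
    sufficiency = min(index / reference_index, 1.0)
    return round(max_rate_kg_ha * (1.0 - sufficiency), 1)
```

A canopy reading at 75% of the reference index would receive about a quarter of the maximum rate; a canopy at or above the reference receives none.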

Artificial Vision : Electrical Stimulation of the Visual Cortex (뇌세포의 전기자극에 의한 맹인의 시감각 회복에 관한 연구)

  • Cha, Ki-Chul
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1991 no.05
    • /
    • pp.28-30
    • /
    • 1991
  • A visual prosthesis for the blind based upon electrical stimulation of the visual cortex requires the development of an array of electrodes. To establish design specifications for such an electrode array, we have conducted psychophysical experiments with normally sighted subjects wearing a portable 'phosphene' simulator. The simulator consists of a small video camera, a monitor masked by an opaque perforated film, and optical lenses. The visual angle subtended by the masked monitor is $1.7^{\circ}$ or less. We measured visual acuity and reading rate as a function of the number of pixels and their spacing. Our results indicate that a phosphene image produced by 600 electrodes implanted in a $1\;cm^2$ area near the foveal projection on the visual cortex should provide a limited but useful visual sense for the profoundly blind.

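The phosphene simulation described above can be sketched as averaging the camera image over a coarse electrode grid and switching each site on or off. The 24 x 25 grid (600 sites, matching the abstract's electrode count) and the on/off threshold are assumptions:

```python
# Reduce a grey-level image to a grid of binary phosphenes: one block
# average per electrode site, thresholded to "perceived dot" or "dark".

def phosphene_map(image, rows=24, cols=25, threshold=128):
    """image: 2D list of grey levels. Returns rows x cols 0/1 grid."""
    h, w = len(image), len(image[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Block of source pixels feeding this electrode site
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(1 if sum(block) / len(block) >= threshold else 0)
        grid.append(row)
    return grid
```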

Lane Detection Algorithm for Night-time Digital Image Based on Distribution Feature of Boundary Pixels

  • You, Feng;Zhang, Ronghui;Zhong, Lingshu;Wang, Haiwei;Xu, Jianmin
    • Journal of the Optical Society of Korea
    • /
    • v.17 no.2
    • /
    • pp.188-199
    • /
    • 2013
  • This paper presents a novel algorithm for detecting the lane markers painted on a road at night. First, the proposed algorithm applies neighborhood average filtering, an 8-directional Sobel operator, and threshold segmentation based on Otsu's method to raw lane images taken from a digital CCD camera. Secondly, combining the intensity map and the gradient map, we analyze the distribution features of pixels on lane boundaries at nighttime and construct 4 feature sets for these points, which supply sufficient data about the lane boundaries to detect lane markers much more robustly. Then, a search in multiple directions (horizontal, vertical, and diagonal) is conducted to eliminate noise points on the lane boundaries. An adapted Hough transform is utilized to obtain the feature parameters of the lane edges. The proposed algorithm not only significantly improves detection performance for the lane markers, but also requires less computational power. Finally, the algorithm is shown to be reliable and robust for lane detection in a nighttime scenario.
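Of the preprocessing steps above, Otsu's threshold selection is the one with a compact closed form: choose the grey level that maximizes the between-class variance of the resulting foreground/background split. A minimal pure-Python sketch (the paper's actual implementation is not shown):

```python
# Otsu's method: scan all 256 candidate thresholds and keep the one that
# maximizes the between-class variance w_b * w_f * (m_b - m_f)^2.

def otsu_threshold(pixels):
    """pixels: flat iterable of 8-bit grey levels. Returns the threshold."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(hist) and sum(hist)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count
    sum_b = 0.0  # background intensity sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                      # background mean
        m_f = (total_sum - sum_b) / w_f        # foreground mean
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a night-time lane image the bright painted markers form the high mode of a bimodal histogram, so the selected threshold falls in the valley between road surface and marker intensities.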

Incremental displacement estimation of structures using paired structured light

  • Jeon, Haemin;Shin, Jae-Uk;Myung, Hyun
    • Smart Structures and Systems
    • /
    • v.9 no.3
    • /
    • pp.273-286
    • /
    • 2012
  • As civil structures are exposed to various external loads, it is essential to continuously assess the structural condition, especially the structural displacement. Therefore, a visually servoed paired structured light system was proposed in a previous study. The proposed system is composed of two sides facing each other, each with a camera, a screen, and one or two lasers controlled by a 2-DOF manipulator. The 6-DOF displacement can be calculated from the positions of the three projected laser beams and the rotation angles of the manipulators. In the estimation process, one of the well-known iterative methods, such as Newton-Raphson or the extended Kalman filter (EKF), was used for each measurement. Although the proposed system with the aforementioned algorithms estimates the displacement with high accuracy, it takes a relatively long computation time. Therefore, an incremental displacement estimation (IDE) algorithm, which updates the previously estimated displacement based on the difference between the previous and current observed data, is newly proposed. To validate the performance of the proposed algorithm, simulations and experiments were performed. The results show that the proposed algorithm significantly reduces the computation time at the same level of accuracy compared to the EKF with multiple iterations.
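The incremental idea above can be sketched as a single Newton-style correction seeded with the previous estimate, rather than iterating to convergence at every measurement. The scalar observation model below is an illustrative stand-in for the paper's laser-beam geometry:

```python
# One-step incremental update: correct the previous displacement estimate
# by the new observation residual scaled by the model's local slope.

def incremental_update(x_prev, z_new, h, dh):
    """x_prev: previous displacement estimate; z_new: new observation;
    h: observation model x -> z; dh: its derivative. One Newton step."""
    residual = z_new - h(x_prev)
    return x_prev + residual / dh(x_prev)
```

Because structural displacement changes slowly between frames, the previous estimate is already close to the new solution, so one step recovers most of the accuracy of a fully iterated solver at a fraction of the cost, which mirrors the computation-time saving reported above.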

Automated quality characterization of 3D printed bone scaffolds

  • Tseng, Tzu-Liang Bill;Chilukuri, Aditya;Park, Sang C.;Kwon, Yongjin James
    • Journal of Computational Design and Engineering
    • /
    • v.1 no.3
    • /
    • pp.194-201
    • /
    • 2014
  • Optimization of design is an important step in obtaining tissue engineering scaffolds with appropriate shapes and inner micro-structures. Scaffolds of different shapes and sizes are modeled using UGS NX 6.0 software with variable pore sizes. The quality issue we are concerned with is the scaffold porosity, which is mainly caused by fabrication inaccuracies. Bone scaffolds are usually characterized using a scanning electron microscope, but this study presents a new automated inspection and classification technique. Due to the large number of pores and their size variations, manual inspection of the fabricated scaffolds tends to be error-prone and costly. Manual inspection also raises the chance of contamination. Thus, non-contact, precise inspection is preferred. In this study, the critical dimensions are automatically measured by the vision camera. The measured data are analyzed to classify the quality characteristics. The automated inspection and classification techniques developed in this study are expected to improve the quality of the fabricated scaffolds and reduce the overall cost of manufacturing.
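One metric such a vision system could compute is porosity as the pore-pixel fraction of a thresholded scaffold image, compared against the designed value. The threshold, tolerance, and pass/fail rule below are illustrative assumptions, not the study's procedure:

```python
# Measure porosity as the fraction of dark (pore) pixels in a grey-level
# image, then classify the scaffold against its design porosity.

def porosity(image, pore_threshold=100):
    """image: 2D grey-level list; pixels darker than the threshold count
    as pore area."""
    pixels = [p for row in image for p in row]
    pores = sum(1 for p in pixels if p < pore_threshold)
    return pores / len(pixels)

def classify(measured, designed, tol=0.05):
    """Pass if the measured porosity is within tol of the design value."""
    return "pass" if abs(measured - designed) <= tol else "fail"
```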

Development of Multi-functional Laser Pointer Mouse Through Image Processing (영상처리를 통한 다기능 레이저 포인터 마우스 개발)

  • Kim, Yeong-Woo;Kim, Sung-Min;Shin, Jin;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.11
    • /
    • pp.1168-1172
    • /
    • 2011
  • Beam projectors are popularly used for presentations. To draw attention to a local area of the beam projector display, a laser pointer is used together with a pointing device (mouse). A simple wireless presenter offers only limited pointing-device functions, such as "go to next slide" or "back to previous slide", in a specific application (Microsoft PowerPoint) over a wireless channel; thus, it is inconvenient to perform other tasks, e.g., program execution or maximizing/minimizing a window, that are normally invoked by clicking mouse buttons. The main objective of this paper is to implement a multi-functional laser-pointer mouse that has the same functions as a computer mouse. To obtain the position of the laser spot on the projector display, image processing to extract the laser spot from the camera image is required. In addition, we propose a transformation of the spot position into computer display coordinates so that mouse functions can be executed on the computer display.
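The coordinate transformation above can be sketched as follows, assuming the projected screen's bounds have been located in the camera image. For brevity the projected quad is approximated as an axis-aligned rectangle; a real setup would use a full four-point homography to handle keystone distortion:

```python
# Map a detected laser-spot pixel from camera coordinates to computer
# display coordinates via normalized position inside the projected screen.

def spot_to_display(spot, screen_bounds, display_size):
    """spot: (x, y) in camera pixels; screen_bounds: (left, top, right,
    bottom) of the projected screen in camera pixels; display_size: (W, H)
    of the computer display. Returns display-pixel coordinates."""
    left, top, right, bottom = screen_bounds
    u = (spot[0] - left) / (right - left)   # normalized horizontal position
    v = (spot[1] - top) / (bottom - top)    # normalized vertical position
    W, H = display_size
    return round(u * W), round(v * H)
```

The returned coordinates can then drive ordinary mouse events (move, click) through the operating system's input API.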

Deep Learning Based Real-Time Painting Surface Inspection Algorithm for Autonomous Inspection Drone

  • Chang, Hyung-young;Han, Seung-ryong;Lim, Heon-young
    • Corrosion Science and Technology
    • /
    • v.18 no.6
    • /
    • pp.253-257
    • /
    • 2019
  • A deep learning based real-time painting surface inspection algorithm is proposed herein, designed for developing an autonomous inspection drone. Painting surface inspection is usually conducted manually. However, manual inspection is limited in obtaining accurate data for correct judgement of the surface because of human error and the deviation of individual inspectors' experience. The best method to replace manual surface inspection is vision-based inspection with a camera, using various image processing algorithms. Nevertheless, visual inspection is difficult to apply to surface inspection due to the diverse appearance of materials, hues, and lighting effects. To overcome these technical limitations, a deep learning-based pattern recognition algorithm is proposed, specialized for painting surface inspections. The proposed algorithm runs in real time on the embedded board mounted on an autonomous inspection drone. The inspection results are stored in a database and used for training the deep learning algorithm to improve its performance. Various experiments for the pre-inspection of painting processes were performed to verify the real-time performance of the proposed deep learning algorithm.
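The inspect/store/retrain cycle described above can be sketched as a skeleton in which each frame is classified on-board and the result logged for later retraining. The classifier is a stub scoring function and the database an in-memory list; all names are illustrative, and a real system would slot a trained network into `classify_frame`:

```python
# Skeleton of the drone's inspect -> store -> retrain loop. The scoring
# function is a toy stand-in for the deep learning model.

def classify_frame(frame, threshold=0.5):
    """Stub: score a painting-surface frame (flat grey-level list)."""
    defect_score = sum(frame) / (255.0 * len(frame))  # toy proxy score
    label = "defect" if defect_score > threshold else "ok"
    return label, defect_score

class InspectionLog:
    """Stand-in for the results database used to retrain the model."""
    def __init__(self):
        self.records = []

    def store(self, frame, label, score):
        self.records.append({"frame": frame, "label": label, "score": score})

    def training_batch(self):
        """Labeled pairs to feed back into model training."""
        return [(r["frame"], r["label"]) for r in self.records]

def inspect(frames, log):
    """Classify every captured frame and log it; return the labels."""
    labels = []
    for frame in frames:
        label, score = classify_frame(frame)
        log.store(frame, label, score)
        labels.append(label)
    return labels
```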

Real-Time Eye Detection and Tracking Under Various Light Conditions

  • Park Ho Sik;Nam Kee Hwan;Seol Jeung Bo;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.862-866
    • /
    • 2004
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.

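The bright-pupil cue above can be sketched as frame differencing: with the IR illuminator on-axis the pupil appears bright, off-axis it appears dark, so subtracting the two frames leaves pupil candidates. The frame pairing and threshold below are assumptions, and the SVM and mean-shift stages are omitted:

```python
# Find pupil candidates by differencing the on-axis (bright-pupil) and
# off-axis (dark-pupil) IR frames and thresholding the difference.

def pupil_candidates(bright, dark, threshold=60):
    """bright, dark: 2D grey-level lists of equal size.
    Returns (x, y) pixels whose on/off-axis difference exceeds threshold."""
    hits = []
    for y, (brow, drow) in enumerate(zip(bright, dark)):
        for x, (b, d) in enumerate(zip(brow, drow)):
            if b - d > threshold:
                hits.append((x, y))
    return hits
```

In the full pipeline, each candidate region would then be verified by the appearance-based classifier before being handed to the tracker.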