• Title/Summary/Keyword: camera vision


Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.6 / pp.91-98 / 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm consisting of a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we use a mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can deal with camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) the shape-based method has an advantage over distinctive feature-point-based methods such as SIFT in underwater environments with varying turbidity; 4) we provide a quantitative comparison of our method with several well-known methods. The results are quite promising for the map-based underwater SLAM task that is the goal of our research.
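
As an illustration of the tracking phase described above, the following minimal sketch uses OpenCV's histogram back-projection and mean-shift tracker. The paper's tracker is driven by the Bhattacharyya coefficient; `cv2.meanShift` over a back-projected color histogram is a closely related but not identical formulation, and the video file name and initial window are hypothetical.

```python
# Tracking-phase sketch: back-projection + mean-shift (not the authors' exact method).
import cv2

cap = cv2.VideoCapture("underwater.avi")          # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                     # assumed initial target window

# Build a hue histogram of the target region (the tracking model).
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, window, term_crit)   # shift window to the density mode
    x, y, w, h = window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                      # Esc to quit
        break
```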

Face Classification Using Cascade Facial Detection and Convolutional Neural Network (Cascade 안면 검출기와 컨볼루셔널 신경망을 이용한 얼굴 분류)

  • Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.70-75 / 2016
  • Nowadays, there is much research on recognizing human faces using machine vision, the technology of classification and analysis by machines that see as human eyes do. In this paper, we propose an algorithm for classifying human faces using such a machine vision system. The algorithm consists of a Convolutional Neural Network and a cascade face detector, and we used it to classify the faces of subjects. To train the face classification algorithm, 2,000, 3,000, and 4,000 images of each subject were used, and the Convolutional Neural Network was trained for 10 and 20 iterations. About 6,000 images were then classified to evaluate effectiveness. We also implemented a system that classifies the faces of subjects in real time using a USB camera.
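
The detection front end of such a pipeline can be sketched as below: a Haar cascade finds the face, and the cropped patch is what would be passed to the CNN classifier. The CNN itself (architecture and training) is not reproduced, the 32x32 input size is an assumption, and the cascade file is OpenCV's bundled frontal-face model rather than the paper's own detector.

```python
# Cascade face detection sketch; the cropped face would feed the CNN classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                          # USB camera, as in the paper
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (32, 32))   # assumed CNN input size
        # label = cnn.predict(face[None, ..., None])          # hypothetical CNN inference step
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:
        break
```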

Measurement of Dynamic Characteristics on Structure using Non-marker Vision-based Displacement Measurement System (비마커 영상기반 변위계측 시스템을 이용한 구조물의 동특성 측정)

  • Choi, Insub;Kim, JunHee
    • Journal of the Computational Structural Engineering Institute of Korea / v.29 no.4 / pp.301-308 / 2016
  • In this study, a novel method referred to as the non-marker vision-based displacement measuring system (NVDMS) is introduced to measure structural displacement. There are two distinct differences between the proposed NVDMS and the existing vision-based displacement measuring system (VDMS). First, the NVDMS extracts the pixel coordinates of the structure using a feature point rather than a marker. Second, in the NVDMS the scaling factor that converts the feature-point coordinates from pixel values to physical values is calculated from the external conditions between the camera and the structure, namely distance, angle, and focal length, whereas the scaling factor for the VDMS is calculated from the geometry of the marker. A free-vibration test on a three-story scale model was conducted to assess the reliability of the displacement data obtained from the NVDMS by comparing them with reference data from a laser displacement sensor (LDS), and the dynamic characteristics were then measured from the displacement data. The NVDMS accurately measures the dynamic displacement of the structure without a marker, and the dynamic characteristics obtained from it show high reliability.
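
The pixel-to-physical conversion such a system relies on can be sketched with the standard pinhole relation, using the same inputs the abstract names (distance, angle, focal length). This is the textbook relation under those assumptions, not necessarily the paper's exact scaling-factor formula; the numerical values are hypothetical.

```python
# Pinhole-model scaling factor: physical length represented by one pixel at the target plane.
import math

def scaling_factor(distance_m, focal_length_mm, pixel_pitch_um, tilt_deg=0.0):
    """Millimetres of object-space motion per pixel of image-space motion."""
    f_m = focal_length_mm / 1000.0
    pitch_m = pixel_pitch_um / 1e6
    gsd = pitch_m * distance_m / f_m                 # ground sample distance, metres per pixel
    return gsd * 1000.0 / math.cos(math.radians(tilt_deg))   # oblique viewing stretches the scale

# Hypothetical setup: camera 5 m away, 50 mm lens, 4.8 um pixels, 10 degree tilt.
scale = scaling_factor(5.0, 50.0, 4.8, 10.0)
pixel_displacement = 12.3                            # tracked feature motion in pixels
print(f"displacement = {pixel_displacement * scale:.2f} mm")
```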

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.601-607 / 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. With this classification method, we expect that an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Eye tracking is widely used to increase user immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there has been no breakthrough selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images that included intentional and natural blinks from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil-size variation after eye opening. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: The experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking environment. With the same SVM classifier, the detection accuracy in a non-wearable camera environment was 92.9%. Conclusion: By combining the two features with an SVM, we implemented an accurate selection interaction method for vision-based eye tracking. Application: The results of this research may help improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
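
The classification step maps directly onto a two-feature SVM, sketched below with scikit-learn. The feature values and labels are made-up placeholders, not the study's data; only the structure (eye-closed duration plus pupil-size variation, binary label for intentional vs. natural) follows the abstract.

```python
# Two-feature SVM sketch for intentional vs. natural blink classification.
import numpy as np
from sklearn.svm import SVC

# Each row: [closed_duration_ms, pupil_size_variation]; label 1 = intentional blink.
X = np.array([[120, 0.05], [150, 0.07], [400, 0.30],
              [450, 0.35], [130, 0.06], [500, 0.40]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = SVC(kernel="linear").fit(X, y)

# Classify a new blink event from its two measured features.
print(clf.predict([[420, 0.28]]))                 # -> [1], i.e. intentional
```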

Development of Kid Height Measurement Application based on Image using Computer Vision (컴퓨터 비전을 이용한 이미지 기반 아이 키 측정 애플리케이션 개발)

  • Yun, Da-Yeong;Moon, Mi-Kyeong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.1 / pp.117-124 / 2021
  • Among growth disorders, short stature can be improved through rapid diagnosis and treatment, so early detection is important. Steady height measurement is recommended both for early detection of short stature and for checking a child's growth, but existing measurement methods suffer from time and space limitations, cost, and difficulty in keeping records. In this paper, we propose an image-based child height measurement application that uses computer vision on smartphones, devices that are highly accessible to most people. In images taken with a smartphone camera, the child's height is measured using algorithms from OpenCV, a computer vision library, and the measured heights are presented on screen as a comparison graph against the standard height by gender and age and as a list by date, making it possible to check the child's growth over time. With the proposed method, height can be measured anytime and anywhere without time, space, or cost constraints, which is expected to help the early detection of short stature and other disorders through steady measurement and monitoring of the growth process.
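
The abstract does not detail the OpenCV pipeline, so the sketch below shows only one common way such an image-based measurement can work: a reference object of known height in the same image plane gives a cm-per-pixel scale, which converts the child's pixel height to centimetres. The image file, thresholding scheme, and the assumption that the two largest contours are the reference and the child are all illustrative.

```python
# Illustrative scale-from-reference height measurement (not the app's documented pipeline).
import cv2

img = cv2.imread("kid_photo.jpg")                  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Assume the two largest contours are the reference object and the child.
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
heights_px = sorted(cv2.boundingRect(c)[3] for c in contours)

REFERENCE_CM = 30.0                                # known height of the reference object
cm_per_px = REFERENCE_CM / heights_px[0]
print(f"estimated height: {heights_px[1] * cm_per_px:.1f} cm")
```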

Intelligent Monitoring System for Solitary Senior Citizens with Vision-Based Security Architecture (영상보안 구조 기반의 지능형 독거노인 모니터링 시스템)

  • Kim, Soohee;Jeong, Youngwoo;Jeong, Yue Ri;Lee, Seung Eun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.639-641 / 2022
  • With the aging of the population, many studies on monitoring systems for senior citizens living alone are under way. In general, a monitoring system provides its service by processing vision data, sensor readings, and measured values on a server. A design that considers data security is essential because such server-based architectures carry a risk of data leakage. In this paper, we propose an intelligent monitoring system for solitary senior citizens with a vision-based security architecture. The proposed system protects privacy and ensures high security through an architecture that blocks communication between the camera module and the server by employing an edge AI module. The edge AI module was designed in Verilog HDL and verified by implementation on a Field Programmable Gate Array (FPGA). We tested the proposed system on 5,144 frames of data and demonstrated that a danger-detection signal is generated correctly when no human motion is detected for a certain period.
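
The paper's detection logic runs in hardware (Verilog on an FPGA); purely as a behavioural illustration of the danger-signal rule, the Python sketch below raises an alert when no motion has been seen for a configured number of frames. Frame differencing stands in for the edge AI module, and both thresholds are assumptions.

```python
# Behavioural sketch (software stand-in for the FPGA edge module) of the no-motion alert.
import cv2

NO_MOTION_LIMIT = 300                      # frames without motion before the alert (assumed)
MOTION_THRESHOLD = 5000                    # changed-pixel count treated as motion (assumed)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
still_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    moving_pixels = int((diff > 25).sum())   # pixels that changed noticeably since last frame
    prev = gray

    still_frames = 0 if moving_pixels > MOTION_THRESHOLD else still_frames + 1
    if still_frames >= NO_MOTION_LIMIT:
        print("DANGER: no motion detected for the configured period")
        still_frames = 0
```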


Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using Colorimetric MOF-Based Gas Sensor Array (비색 MOF 가스센서 어레이 기반 고정밀 질환 VOCs 바이오마커 검출을 위한 머신비전 플랫폼)

  • Junyeong Lee;Seungyun Oh;Dongmin Kim;Young Wung Kim;Jungseok Heo;Dae-Sik Lee
    • Journal of Sensor Science and Technology / v.33 no.2 / pp.112-116 / 2024
  • Gas-sensor technology for volatile organic compound (VOC) biomarker detection offers significant advantages for noninvasive diagnostics, including rapid response time and low operational costs, and shows promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentrations through changes in color, present additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and consistency in detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for colorimetric metal-organic framework (MOF) gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm identifies the color-change region of interest (ROI) in the captured images and monitors the color trends. Performance was evaluated in experiments with four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for noninvasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool.
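
The color-trend monitoring step can be illustrated as below: track the mean RGB value of a fixed ROI across frames from the camera module. The platform's actual ROI detection and its calibration against gas concentration are not reproduced; the video source and ROI box are hypothetical.

```python
# ROI color-trend monitoring sketch for a colorimetric sensor spot.
import cv2

cap = cv2.VideoCapture("sensor_array.mp4")         # hypothetical recording of the sensor jig
x, y, w, h = 100, 80, 40, 40                       # assumed location of one sensor spot

trend = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y:y + h, x:x + w]
    b, g, r = cv2.mean(roi)[:3]                    # mean color of the spot in this frame
    trend.append((r, g, b))

# A sustained shift in one channel over time indicates a colorimetric response to the gas.
print(trend[:5])
```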

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection locates the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is set above a monitor and the user moves (rotates and/or translates) his or her face to gaze at different positions on the monitor. To detect the gaze position, we automatically locate the facial region and facial features (both eyes, nostrils, and lip corners) in the 2D camera images. From the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) the face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. In our experiments on a 19-inch monitor, the error between the computed gaze positions and the true ones is about 2.01 inches (RMS).
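
The final step the abstract describes, taking the gaze direction as the normal of the plane through the moved 3D feature points, reduces to a cross product and a ray-plane intersection, sketched below. The three points are hypothetical stand-ins for the estimated facial features, and the monitor is assumed to lie in the plane z = 0.

```python
# Gaze direction as the facial-plane normal, intersected with the monitor plane.
import numpy as np

p1 = np.array([0.0, 0.0, 60.0])      # e.g. left eye (cm, camera coordinates; hypothetical)
p2 = np.array([6.5, 0.0, 60.0])      # e.g. right eye
p3 = np.array([3.2, -5.0, 58.0])     # e.g. lip midpoint

normal = np.cross(p2 - p1, p3 - p1)
normal /= np.linalg.norm(normal)      # unit normal of the facial plane = gaze direction

# Intersect the gaze ray from the face centroid with the monitor plane z = 0.
centroid = (p1 + p2 + p3) / 3.0
t = -centroid[2] / normal[2]
gaze_point = centroid + t * normal
print(gaze_point[:2])                 # (x, y) gaze position on the monitor plane
```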

Investigation on the Applicability of Defocus Blur Variations to Depth Calculation Using Target Sheet Images Captured by a DSLR Camera

  • Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.109-121 / 2020
  • Depth calculation of objects in a scene from images is one of the most studied processes in image processing, computer vision, and photogrammetry. Conventionally, depth is calculated using a pair of overlapping images captured from different viewpoints, but there have also been studies that calculate depth from a single image. Theoretically, it is possible to calculate depth from the diameter of the circle of confusion (CoC) caused by defocus under the assumption of a thin-lens model. This study therefore aims to verify the validity of the thin-lens model for calculating depth from the amount of edge blur, which corresponds to the radius of the CoC. A commercially available DSLR (Digital Single Lens Reflex) camera was used to capture a set of target sheets with different edge contrasts. To find the pattern of edge-blur variation against varying combinations of FD (focusing distance) and OD (object distance), the camera was set to a series of FDs and target-sheet images were captured at varying ODs under each FD. The edge blur and edge displacement were then estimated from edge slope profiles using a brute-force method. The experimental results show that the edge blur observed in the target images deviated from the theoretical amounts derived under the thin-lens assumption, but that it can still be used to calculate depth from a single image under conditions similar to the limited ones tested, in which the relationship between FD and OD is evident.
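
The thin-lens relation the study tests is the textbook CoC formula, sketched below for reference: for a lens of focal length f and f-number N focused at distance FD, an object at distance OD produces a blur circle of diameter A·f·|OD − FD| / (OD·(FD − f)), with A = f/N. The numerical setup is hypothetical; the paper measures how far real DSLR blur departs from this amount.

```python
# Textbook thin-lens circle-of-confusion diameter (illustrative, not the paper's measured data).
def coc_diameter_mm(focal_mm, f_number, fd_mm, od_mm):
    aperture = focal_mm / f_number                 # aperture diameter A = f / N
    return aperture * focal_mm * abs(od_mm - fd_mm) / (od_mm * (fd_mm - focal_mm))

# Hypothetical setup: 50 mm lens at f/2.8, focused at 1.5 m, object at 2.0 m.
c = coc_diameter_mm(50.0, 2.8, 1500.0, 2000.0)
print(f"CoC diameter = {c:.4f} mm")                # divide by pixel pitch to get blur in pixels
```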

Development of Automatic Hole Position Measurement System using the CCD-camera (CCD-카메라를 이용한 홀 변위 자동측정시스템 개발)

  • 김병규;최재영;강희준;노영식
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.127-130 / 2004
  • An automatic hole measuring system has been developed for the quality control of industrial products. The measurement device, attached to an industrial robot, allows X-Y movement due to contact forces between a hole and its circular cone, with a measurement accuracy of about 0.04 mm. This plate movement was previously measured by a two-LVDT sensor system, but such a system is limited by high cost, measurement precision, and sensitivity to the environment, so this paper discusses a vision system with a CCD camera for the same purpose. The device basically consists of two links jointed with hinge pins, which allow free in-plane movement of the touch probe attached to the second link; the links are returned to the home position automatically by spring plungers after each measurement, ready for the next one. The surface of the touch probe carries a circular white mark for camera recognition, and the system detects and reports the center coordinate of the captured mark image through image processing. Its measurement accuracy has been shown to be about ±0.01 mm over more than 200 repeated trials. This technique demonstrates the advantage of indirect image capture using a cone-shaped touch probe for various symmetrically shaped holes, such as tapped and chamfered holes. As a result, we attained our objective in terms of accuracy, economic efficiency, and functionality.
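
The mark-detection step can be sketched as below: threshold the bright circular mark on the touch probe and take the centroid of its contour as the center coordinate. The jig geometry and the calibration from pixels to millimetres are not reproduced; the image file and threshold value are assumptions.

```python
# Circular white-mark center detection sketch (centroid of the largest bright blob).
import cv2

img = cv2.imread("probe_view.png", cv2.IMREAD_GRAYSCALE)     # hypothetical camera image
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)    # white mark is bright
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

mark = max(contours, key=cv2.contourArea)                     # assume the largest blob is the mark
m = cv2.moments(mark)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]             # centroid in pixel coordinates
print(f"mark center: ({cx:.2f}, {cy:.2f}) px")
```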
