• Title/Summary/Keyword: camera vision

A Design and Implementation of Yoga Exercise Program Using Azure Kinect

  • Park, Jong Hoon;Sim, Dae Han;Jun, Young Pyo;Lee, Hongrae
    • Journal of the Korea Society of Computer and Information / v.26 no.6 / pp.37-46 / 2021
  • In this paper, we designed and implemented a program that measures yoga postures and judges their accuracy using Azure Kinect. The program measures all of the user's joint positions through the Azure Kinect camera and sensors. The measured joint values are used to determine accuracy in two ways: the joint angle is computed from the joint data using trigonometry and the Pythagorean theorem, and the joint values are also converted into relative position values. The computed values are compared with the joint angles and relative position values of the target posture to determine accuracy. The Azure Kinect camera view is laid out on screen so that users can check their posture, and the program gives feedback on posture accuracy to help users improve.
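
As an illustration of the angle-based accuracy check described above, here is a minimal sketch (not the authors' code) of computing a joint angle from three tracked joint positions using the same trigonometric idea; the joint names and coordinates are hypothetical.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by segments b->a and b->c.

    a, b, c are 3D joint positions, e.g. shoulder, elbow, wrist as
    returned by a body-tracking SDK such as Azure Kinect's."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# A nearly straight arm should give an elbow angle close to 180 degrees.
shoulder, elbow, wrist = [0, 0, 0], [0.3, 0, 0], [0.6, 0.01, 0]
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```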

Distortion Removal and False Positive Filtering for Camera-based Object Position Estimation (카메라 기반 객체의 위치인식을 위한 왜곡제거 및 오검출 필터링 기법)

  • Sil Jin;Jimin Song;Jiho Choi;Yongsik Jin;Jae Jin Jeong;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.1-8 / 2024
  • Robotic arms have been widely utilized in various labor-intensive industries such as manufacturing, agriculture, and food services, contributing to increased productivity. In the development of industrial robotic arms, camera sensors have many advantages due to their cost-effectiveness and small size. However, estimating object positions is a challenging problem, and it critically affects the robustness of object-manipulation functions. This paper proposes a method for estimating the 3D positions of objects and applies it to a pick-and-place task. A deep learning model is utilized to detect 2D bounding boxes in the image plane, and the pinhole camera model is employed to compute the object positions. To improve the robustness of measuring the 3D positions of objects, we analyze the effect of lens distortion and introduce a false-positive filtering process. Experiments were conducted on a real-world scenario, moving medicine bottles with a camera-based manipulator. Experimental results demonstrated that distortion removal and false-positive filtering effectively improve the position-estimation precision and the manipulation success rate.
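
A minimal sketch of the pinhole back-projection step the paper relies on: given a detected bounding-box centre and a depth estimate, recover the camera-frame 3D position. The intrinsic values below are hypothetical, and in practice the pixel would first be undistorted (e.g. with OpenCV's cv2.undistortPoints), matching the paper's distortion-removal step.

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth (m) to camera-frame
    XYZ using the pinhole model; fx, fy, cx, cy are the calibrated
    intrinsics. The pixel is assumed to be already undistorted."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics and a detected bounding-box centre 0.8 m away.
fx = fy = 600.0
cx, cy = 320.0, 240.0
print(pixel_to_camera_xyz(400.0, 260.0, 0.8, fx, fy, cx, cy))
```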

Human Legs Motion Estimation by using a Single Camera and a Planar Mirror (단일 카메라와 평면거울을 이용한 하지 운동 자세 추정)

  • Lee, Seok-Jun;Lee, Sung-Soo;Kang, Sun-Ho;Jung, Soon-Ki
    • Journal of KIISE: Computing Practices and Letters / v.16 no.11 / pp.1131-1135 / 2010
  • This paper presents a method to capture the posture of the human lower limbs in 3D space using a single camera and a planar mirror. The system estimates the pose of the camera facing the mirror using four coplanar IR markers attached to the mirror. The training space is then set up based on the relationship between the mirror and the camera. When a patient steps on the weight board, the system obtains the relative position between the patient's feet. The markers are attached to the sides of both legs, so some markers are invisible to the camera due to self-occlusion. The reflections of the markers in the mirror partially resolve this problem within a single-camera system. The 3D positions of the markers are estimated using the geometric information of the camera in the training space. Finally, the system estimates and visualizes the posture and motion of both legs based on the 3D marker positions.
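
The key geometric idea, recovering an occluded marker from its mirror image, amounts to reflecting a 3D point across the mirror plane. A minimal sketch follows, with a hypothetical mirror plane and marker position rather than the paper's setup.

```python
import numpy as np

def reflect_across_plane(p, n, d):
    """Reflect a 3D point p across the plane n.x + d = 0.
    A marker observed in the mirror is the mirror-image of the real
    marker, so applying this reflection to the position triangulated
    from the mirror view recovers the occluded marker's true position."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    p = np.asarray(p, dtype=float)
    return p - 2.0 * (np.dot(n, p) + d) * n

# Hypothetical mirror plane x = 2 (unit normal +x, d = -2) and a marker
# position obtained from its reflection.
print(reflect_across_plane([2.5, 0.3, 1.0], [1.0, 0.0, 0.0], -2.0))  # -> [1.5 0.3 1. ]
```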

Resolution improvement of a CMOS vision chip for edge detection by separating photo-sensing and edge detection circuits (수광 회로와 윤곽 검출 회로의 분리를 통한 윤곽 검출용 시각칩의 해상도 향상)

  • Kong, Jae-Sung;Suh, Sung-Ho;Kim, Sang-Heon;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology / v.15 no.2 / pp.112-119 / 2006
  • The resolution of an image sensor is a critical parameter to improve. It is hard to improve the resolution of a retina-inspired CMOS vision chip for edge detection that uses a resistive network, because the vision chip contains additional circuits, such as the resistive network and processing circuits, compared with general image sensors such as the CMOS image sensor (CIS). In this paper, we solved the low-resolution problem by separating the photo-sensing and signal-processing circuits. This type of vision chip suffers from low operation speed because the signal-processing circuits are shared by a row of photo-sensors; this low-speed problem was solved by using a reset decoder. A vision chip for edge detection with a 128 × 128 pixel array was designed and fabricated using 0.35 μm 2-poly 4-metal CMOS technology. The fabricated chip was integrated with an optical lens as a camera system and tested with real images. Using this chip, we achieved edge images sufficient for real applications.
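
As a rough software analogue of the chip's retina-style processing (an illustration of the centre-surround idea, not a model of the circuit), smoothing plays the role of the resistive network's spatial averaging and a Laplacian of the smoothed image gives an edge response. The input file name and threshold below are hypothetical.

```python
import cv2
import numpy as np

# Smooth, then take a Laplacian: a centre-surround edge response
# loosely analogous to the chip's resistive-network output.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
response = cv2.Laplacian(smoothed, cv2.CV_64F)

# Threshold the response magnitude to get a binary edge map.
edge_map = (np.abs(response) > 10).astype(np.uint8) * 255
cv2.imwrite("edges.png", edge_map)
```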

Vision Inspection Method Development which Improves Accuracy By using Power-Law Transformation and Histogram Specification (멱함수 변환과 히스토그램 지정을 사용하여 정확도를 향상시킨 Vision 검사 방법 개발)

  • Huh, Kyung-Moo;Park, Se-Hyuk;Kang, Su-Min
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.5 / pp.11-17 / 2007
  • The appearance inspection of various electronic products and parts has traditionally been performed by human eyesight. However, inspection by eyesight cannot produce uniform results, because the outcome varies with the inspector's physical and mental condition. Machine vision inspection systems are therefore now used in many appearance-inspection fields in place of human checkers. However, machine vision results vary with workplace illumination. In this paper, we therefore used a power-law transformation and histogram specification to improve vision-inspection accuracy. With these two algorithms, we could increase the exactness of vision inspection and prevent the system errors that arise from the physical and mental condition of a human checker. The system was developed using only a PC, a CCD camera, and Visual C++, so it can be deployed in general workplaces.
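
A minimal sketch of the two preprocessing steps named in the paper, a power-law (gamma) transform and histogram specification for 8-bit grayscale images; this illustrates the standard techniques, not the authors' Visual C++ implementation, and the file names are hypothetical.

```python
import cv2
import numpy as np

def power_law(img, gamma):
    """Power-law (gamma) transform for 8-bit images: s = 255*(r/255)**gamma."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return cv2.LUT(img, lut)

def histogram_specification(img, ref):
    """Map img's grey-level distribution onto ref's by matching CDFs."""
    src_cdf = np.cumsum(cv2.calcHist([img], [0], None, [256], [0, 256]).ravel())
    ref_cdf = np.cumsum(cv2.calcHist([ref], [0], None, [256], [0, 256]).ravel())
    src_cdf, ref_cdf = src_cdf / src_cdf[-1], ref_cdf / ref_cdf[-1]
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return cv2.LUT(img, lut)

# Hypothetical usage: normalize a captured part image against a
# reference image taken under the desired illumination.
captured = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
normalized = histogram_specification(power_law(captured, 0.8), reference)
```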

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes a monocular vision-based formation-flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are measured from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision-processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation-flight technology was carried out using the X-Plane flight simulator. The formation-flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies by user command, the follower tracks the leader using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without it, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
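
A minimal sketch of the measurement pipeline described above: KLT tracking of leader-aircraft feature points between frames, then pose from 2D-3D correspondences. The paper uses POSIT; modern OpenCV no longer ships the legacy cvPOSIT API, so solvePnP (EPnP) is used here as a comparable pose-from-point-correspondences substitute. The frame files, intrinsics, and airframe points are all hypothetical.

```python
import cv2
import numpy as np

# Detect corners on the leader and track them into the next frame (KLT).
prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
tracked = p1[status.ravel() == 1].reshape(-1, 2)

# Known 3D feature points on the leader airframe (m), paired here with
# the first four tracked image points; real code needs proper matching.
model_pts = np.array([[0, 0, 0], [4.0, 0, 0], [0, 6.0, 0], [2.0, 3.0, -0.8]])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
ok, rvec, tvec = cv2.solvePnP(model_pts, tracked[:4], K, None,
                              flags=cv2.SOLVEPNP_EPNP)
print("leader relative position:", tvec.ravel())
```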

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Owing to recent improvements in computer processing speed and image-processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead-reckoning approaches. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based algorithm with the existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirms that position accuracy improves by about 4% when vision information is added, compared with the existing GNSS/on-board-sensor-based positioning algorithm.
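
The core of a stereo distance estimate like the one described above is the standard rectified-stereo relation Z = fB/d. A minimal sketch with hypothetical focal length, baseline, and disparity values:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation (m);
    disparity_px: horizontal pixel offset of the object between views."""
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 700 px focal length, 0.3 m baseline, 21 px disparity.
print(f"distance: {stereo_depth(21.0, 700.0, 0.3):.2f} m")  # -> 10.00 m
```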

Developing an Occupants Count Methodology in Buildings Using Virtual Lines of Interest in a Multi-Camera Network (다중 카메라 네트워크 가상의 관심선(Line of Interest)을 활용한 건물 내 재실자 인원 계수 방법론 개발)

  • Chun, Hwikyung;Park, Chanhyuk;Chi, Seokho;Roh, Myungil;Susilawati, Connie
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.5 / pp.667-674 / 2023
  • In the event of a disaster within a building, the prompt and efficient evacuation and rescue of occupants becomes the foremost priority in minimizing casualties. For such rescue operations, it is essential to know the distribution of individuals within the building. Nevertheless, responders primarily depend on accounts from people such as building owners or security staff, along with basic data such as floor dimensions and maximum capacity. Accurate determination of the number of occupants therefore holds paramount significance in reducing uncertainty at the site and facilitating effective rescue activities during the golden hour. This research introduces a methodology that employs computer vision algorithms to count occupants at distinct building locations based on images captured by multiple installed CCTV cameras. The counting methodology consists of three stages: (1) establishing virtual Lines of Interest (LOI) for each camera to construct a multi-camera network environment, (2) detecting and tracking people within the monitored area using deep learning, and (3) aggregating counts across the multi-camera network. The proposed methodology was validated through experiments conducted in a five-story building, achieving an average accuracy of 89.9%, an average MAE of 0.178, and an RMSE of 0.339; the advantages of using multiple cameras for occupant counting are also explained. The paper shows the potential of the proposed methodology for more effective and timely disaster management through common surveillance systems by providing prompt occupancy information.
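
A minimal sketch of the virtual Line of Interest logic in stage (1): the signed side of a tracked centroid relative to the LOI is monitored, and a sign change updates the count. The line endpoints and centroid positions below are hypothetical, and the detection/tracking of stage (2) is assumed to happen upstream.

```python
import numpy as np

def side_of_line(pt, a, b):
    """Signed side of point pt relative to the directed LOI from a to b
    (cross-product test: positive on one side, negative on the other)."""
    return np.sign((b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0]))

def update_count(prev_pt, curr_pt, a, b, count):
    """Update the occupancy count when a tracked centroid crosses the LOI:
    negative-to-positive is treated as entering, the reverse as leaving."""
    s0, s1 = side_of_line(prev_pt, a, b), side_of_line(curr_pt, a, b)
    if s0 < 0 <= s1:
        return count + 1  # entering
    if s0 > 0 >= s1:
        return count - 1  # leaving
    return count

# Hypothetical vertical LOI across a doorway in a 640x480 camera view,
# and one person's tracked centroid moving across it.
a, b = (100, 0), (100, 480)
print(update_count((110, 200), (90, 205), a, b, 0))  # -> 1 (one person entered)
```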

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.12C / pp.1239-1249 / 2003
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen where a user is looking. Previous gaze-detection systems use a wide-view camera, which can capture the user's whole face. However, the image resolution of such a camera is too low, and the fine movements of the user's eyes cannot be detected exactly. We therefore implement a gaze-detection system with both a wide-view camera and a narrow-view camera. To follow the position of the user's eye as the face moves, the narrow-view camera provides auto-focusing and auto pan/tilt based on the detected 3D facial feature positions. Experimentally, we can obtain the facial and eye-gaze position on a monitor; the gaze-position accuracy between the computed and real positions is about 3.1 cm RMS error when facial movements are permitted and 3.57 cm when both facial and eye movements are permitted. The processing time is short enough for a real-time system (below 30 ms on a Pentium IV 1.8 GHz).
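
A minimal sketch of the auto pan/tilt idea: given a 3D eye position estimated via the wide-view camera, compute the pan and tilt angles that aim the narrow-view camera at it. The coordinate convention and eye position are hypothetical, not taken from the paper.

```python
import numpy as np

def pan_tilt_to_target(xyz):
    """Pan/tilt angles (degrees) that aim the narrow-view camera at a 3D
    target given in its coordinate frame (x right, y down, z forward),
    e.g. an eye position detected via the wide-view camera."""
    x, y, z = xyz
    pan = np.degrees(np.arctan2(x, z))                  # rotate left/right
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))   # rotate up/down
    return pan, tilt

# Hypothetical eye position: 0.10 m right, 0.05 m above, 0.60 m ahead.
print(pan_tilt_to_target((0.10, -0.05, 0.60)))
```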

Exploration of temperature effect on videogrammetric technique for displacement monitoring

  • Zhou, Hua-Fei;Lu, Lin-Jun;Li, Zhao-Yi;Ni, Yi-Qing
    • Smart Structures and Systems / v.25 no.2 / pp.135-153 / 2020
  • There has been sustained interest in non-contact structural displacement measurement by means of videogrammetric techniques. One of the major remaining concerns is the spurious image drift induced by temperature variation. This study therefore investigates the temperature effect on videogrammetric measurement, focusing on the mechanism behind the effect and the elimination of the temperature-caused measurement error. 2D videogrammetric measurement tests under monotonic or cyclic temperature variation are first performed. Features of the measurement error and the causal relationship between temperature variation and measurement error are then studied. The variation of the digital camera's temperature is identified as the main cause of measurement error, and an excellent linear relationship between them is revealed. Camera parameters are then extracted from the mapping between the world coordinates and pixel coordinates of the calibration targets. The coordinates of the principal point and the focal lengths show variations well correlated with the temperature variation, and the measurement error is attributed mainly to the variation of the principal-point coordinates. An approach for eliminating the temperature-caused measurement error is finally proposed: correlation models between camera parameters and temperature are formulated, so camera parameters under different temperature conditions can be predicted and the camera projection matrix updated accordingly. By reconstructing the world coordinates with the updated projection matrix, the temperature-caused measurement error is eliminated, and the proposed approach achieves satisfactory performance.
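
A minimal sketch of the correction idea described above: fit linear models relating camera parameters (here the principal point, which the paper identifies as the dominant contributor) to camera temperature, then rebuild the intrinsic matrix at the measured temperature before reconstructing world coordinates. The calibration readings, coefficients, and focal lengths below are hypothetical.

```python
import numpy as np

# Hypothetical calibration data: principal-point coordinates observed
# at several camera temperatures.
temps = np.array([10.0, 20.0, 30.0, 40.0])       # camera temperature (deg C)
cx_obs = np.array([320.1, 320.9, 321.8, 322.6])  # principal point x (px)
cy_obs = np.array([240.0, 240.4, 240.9, 241.3])  # principal point y (px)

# Linear correlation models: parameter = slope * temperature + intercept.
cx_fit = np.polyfit(temps, cx_obs, 1)
cy_fit = np.polyfit(temps, cy_obs, 1)

def intrinsics_at(temp_c, fx=700.0, fy=700.0):
    """Temperature-compensated intrinsic matrix, predicting the
    principal point from the fitted linear models."""
    cx = np.polyval(cx_fit, temp_c)
    cy = np.polyval(cy_fit, temp_c)
    return np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])

# Update the projection at the current camera temperature before
# reconstructing world coordinates.
print(intrinsics_at(25.0))
```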