• Title/Summary/Keyword: 자세각 추정 (posture/attitude angle estimation)

Application for Workout and Diet Assistant using Image Processing and Machine Learning Skills (영상처리 및 머신러닝 기술을 이용하는 운동 및 식단 보조 애플리케이션)

  • Chi-Ho Lee;Dong-Hyun Kim;Seung-Ho Choi;In-Woong Hwang;Kyung-Sook Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.83-88
    • /
    • 2023
  • In this paper, we developed a workout and diet assistance application to meet the growing demand for such support services driven by the increase in the home-training population. The application analyzes the user's workout posture in real time through the camera and guides the user toward the correct posture with guide lines and voice feedback (a joint-angle check of the kind this guidance relies on is sketched below). It also classifies the foods in captured photos, estimates the amount of each food, and calculates nutritional information such as calories. The nutritional calculations run on a server, which transmits the results back to the application for visual presentation to the user. Workout results and nutritional information are also saved and organized by date for later review.
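
The paper does not publish code, but real-time posture guidance of this kind typically reduces to comparing joint angles computed from 2D keypoints against a guide value. A minimal sketch, assuming a pose estimator (e.g., MediaPipe or OpenPose) that returns named 2D keypoints; the joint names, target angle, and tolerance below are illustrative, not values from the paper:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D keypoints a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def posture_feedback(keypoints, target=90.0, tol=10.0):
    """Compare the knee angle against a guide value and return a cue.

    `keypoints` maps joint names to (x, y) pixel coordinates, as a
    pose estimator would provide (hypothetical key names).
    """
    angle = joint_angle(keypoints["hip"], keypoints["knee"], keypoints["ankle"])
    if abs(angle - target) <= tol:
        return angle, "Good posture"
    return angle, "Bend your knee more" if angle > target else "Straighten up"
```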

Vision-based Mobile Robot Localization and Mapping using a Fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.256-262
    • /
    • 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot; these features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling and wall regions of the calibrated image (this undistortion and segmentation step is sketched below). During initial map building, features are calculated for each segmented region and stored in a map database. Features are then calculated continuously for sequential input images and matched to the map; when features are not matched, they are added to the map. This matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched against the existing map to estimate its real position, and the map database is updated at the same time. With the proposed method, map building takes under 2 minutes for a 50 m² region, the positioning accuracy is ±13 cm, and the error in the robot's positioning angle is ±3 degrees.
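
The preprocessing described above, fisheye undistortion followed by region segmentation with a convex hull, can be sketched with OpenCV. The intrinsics, distortion coefficients, and file name below are placeholders, not values from the paper:

```python
import cv2
import numpy as np

# Intrinsics K and fisheye distortion D would come from a prior
# calibration (e.g., cv2.fisheye.calibrate); placeholder values here.
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])

img = cv2.imread("ceiling.png")              # hypothetical input frame
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)

# Segment the bright ceiling region and take its convex hull,
# mirroring the labeling + convex-hull step described above.
gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))
```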

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance; these features are used in the map-building and localization process. As preprocessing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling region from the wall region. During initial map building, features are calculated for the segmented regions and stored in a map database. Features are continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map (a feature-matching sketch follows below). Localization is performed simultaneously with feature matching during map building: when features match the existing map, the robot's position is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes, with a positioning accuracy of ±13 cm and an average robot heading error of ±3 degrees.
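
A sketch of the match-then-extend map update described above, using ORB as a readily available stand-in for the paper's scale-invariant features (the paper predates ORB; the file names are hypothetical):

```python
import cv2

# Hypothetical ceiling images: one already in the map, one current frame.
map_img = cv2.imread("map_view.png", cv2.IMREAD_GRAYSCALE)
cur_img = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_map, des_map = orb.detectAndCompute(map_img, None)
kp_cur, des_cur = orb.detectAndCompute(cur_img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_map, des_cur), key=lambda m: m.distance)

# Current-frame features with no match would be appended to the map,
# mirroring the update step described in the abstract.
matched = {m.trainIdx for m in matches}
new_features = [kp for i, kp in enumerate(kp_cur) if i not in matched]
```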

Estimation of Genetic Parameters for Linear Type and Conformation Traits in Hanwoo Cows (한우 암소의 선형 및 외모심사형질에 대한 유전모수 추정)

  • Lee, Ki-Hwan;Koo, Yang-Mo;Kim, Jung-Il;Song, Chi-Eun;Jeoung, Yeoung-Ho;Noh, Jae-Kwang;Ha, Yu-Na;Cha, Dae-Hyeop;Son, Ji-Hyun;Park, Byong-Ho;Lee, Jae-Gu;Lee, Jung-Gyu;Lee, Ji-Hong;Do, Chang-Hee;Choi, Tae-Jeong
    • Journal of agriculture & life science
    • /
    • v.51 no.6
    • /
    • pp.89-105
    • /
    • 2017
  • This study used 32,312 records of 17 linear type traits and 10 conformation traits (including final scores) of Hanwoo cows from the KAIA (Korea Animal Improvement Association) ('09~'10), with 60,556 animals in the pedigree file. Traits included stature, body length, strength, body depth, angularity, shank thickness, rump angle, rump length, pin bone width, thigh thickness, udder volume, teat length, teat placement, foot angle, hock angle, rear leg back view, body balance, breed characteristic, head development, forequarter quality, back line, rump, thigh development, udder development, leg line, and final score. Genetic and residual (co)variances were estimated using bi-trait pairwise analyses with the EM-REML algorithm; the standard formulas that turn these components into the reported estimates are sketched below. Herd-year-classifier, year at classification, and calving stage were treated as fixed effects, with classification month as a covariate. The heritability estimates ranged from 0.03 (teat placement) to 0.42 (body length). Rump length had the highest positive genetic correlation with pin bone width (0.96). Moreover, stature, body length, strength, and body depth had high positive genetic correlations with rump length, pin bone width, and thigh thickness (0.81-0.94). Stature, body length, strength, body depth, rump length, pin bone width, and thigh thickness were also highly positively genetically correlated with one another.
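
The variance components themselves come from the EM-REML fit (not reproduced here), but the standard definitions of narrow-sense heritability and genetic correlation that convert those components into the reported estimates are simple:

```python
def heritability(var_additive, var_residual):
    """Narrow-sense heritability: h^2 = sigma_A^2 / (sigma_A^2 + sigma_E^2)."""
    return var_additive / (var_additive + var_residual)

def genetic_correlation(cov_a_xy, var_a_x, var_a_y):
    """Genetic correlation: r_g = cov_A(x, y) / sqrt(var_A(x) * var_A(y))."""
    return cov_a_xy / (var_a_x * var_a_y) ** 0.5
```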

Design and Implementation of Pedestrian Position Information System in GPS-disabled Area (GPS 수신불가 지역에서의 보행자 위치정보시스템의 설계 및 구현)

  • Kwak, Hwy-Kuen;Park, Sang-Hoon;Lee, Choon-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.9
    • /
    • pp.4131-4138
    • /
    • 2012
  • In this paper, we propose a Pedestrian Position Information System (PPIS) for GPS-disabled areas using low-cost inertial sensors. The proposed scheme estimates the pedestrian's attitude and heading angle and detects steps (the standard dead-reckoning position update this implies is sketched below). The estimation error caused by the inertial sensors is mitigated by using additional sensors. We implemented a portable hardware module to evaluate the performance of the proposed system. In experiments inside a building, the position estimation error was measured at approximately 2.4%.
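
The step-and-heading scheme described above implies the standard pedestrian dead-reckoning update. A minimal sketch; the 0.7 m default step length is illustrative, not a value from the paper:

```python
import math

def pdr_update(x, y, heading_deg, step_length=0.7):
    """Advance the position estimate by one detected step.

    Standard pedestrian dead reckoning: move `step_length` meters
    along the current heading (degrees clockwise from north).
    """
    x += step_length * math.sin(math.radians(heading_deg))
    y += step_length * math.cos(math.radians(heading_deg))
    return x, y

# Example: three steps heading due east from the origin.
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_update(*pos, heading_deg=90.0)
```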

Defects Length Measurement Using an Estimation Algorithm of the Camera Orientation and an Inclination Angle of a Laser Slit Beam (레이저 슬릿 빔의 경사각과 카메라 자세 추정 알고리듬을 이용한 벽면결함 길이측정)

  • Kim, Young-Hwang;Yoon, Ji-Sup;Kang, E-Sok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.1
    • /
    • pp.37-45
    • /
    • 2002
  • A method of measuring the length of defects on a wall and reconstructing the defect image is proposed, based on an estimation algorithm for the camera orientation that uses the inclination angle of a laser slit beam. The algorithm for estimating the horizontally inclined angle of the CCD camera adopts a 3-dimensional coordinate transformation of the image plane on which both the laser beam and the original defect image exist. The estimation equation is obtained from the beam projected on the wall, and its parameters are determined experimentally. With this algorithm, the original defect image can be reconstructed as an image normal to the wall (a simplified rotation-homography analogue of this rectification is sketched below). A series of experiments shows that defects are measured within a 0.5% error bound of the real defect size for horizontally inclined angles of up to 30 degrees. The proposed algorithm reconstructs an image taken at an arbitrary horizontally inclined angle as an image normal to the wall, enabling accurate measurement of defect lengths with a single camera and a laser slit beam.
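
The paper's wall-normal reconstruction is specific to its laser-slit geometry, but a simplified analogue, frontalizing an image taken at a known horizontal pan angle via the rotation-induced homography H = K * R_y(theta) * K^-1, can be sketched with OpenCV:

```python
import cv2
import numpy as np

def rectify_to_wall_normal(img, K, pan_deg):
    """Warp an image taken at horizontal pan angle `pan_deg` toward a
    wall-normal view using the rotation-induced homography.

    This is a simplified stand-in for the paper's reconstruction, not
    its actual estimation equation; K is the 3x3 camera intrinsics.
    """
    t = np.radians(pan_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    H = K @ R @ np.linalg.inv(K)
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))
```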

The Estimation of Craniovertebral Angle using Wearable Sensor for Monitoring of Neck Posture in Real-Time (실시간 목 자세 모니터링을 위한 웨어러블 센서를 이용한 두개척추각 추정)

  • Lee, Jaehyun;Chee, Youngjoon
    • Journal of Biomedical Engineering Research
    • /
    • v.39 no.6
    • /
    • pp.278-283
    • /
    • 2018
  • Nowadays, many people suffer from neck pain due to forward head posture (FHP) and text neck (TN). The craniovertebral angle (CVA) is used in clinics to assess the severity of FHP and TN, but it is difficult to monitor neck posture with the CVA in daily life. We propose a new method that uses the cervical flexion angle (CFA) obtained from a wearable sensor to monitor neck posture in daily life (a tilt-angle computation of this kind is sketched below). Fifteen participants were asked to pose with FHP and TN, and the CFA from the wearable sensor was compared with the CVA observed by a 3D motion camera system to analyze their correlation. The coefficients of determination between CFA and CVA were 0.80 for TN, 0.57 for FHP, and 0.69 for TN and FHP combined. In monitoring of neck posture during 20 minutes of laptop use, the wearable sensor estimated the CVA with a mean squared error of 2.1 degrees.
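
A flexion angle of this kind is typically derived from the gravity vector measured by a worn accelerometer and then mapped to the CVA with a fitted linear model. A sketch under those assumptions; the regression coefficients below are placeholders, not the paper's fit:

```python
import math

def cervical_flexion_angle(ax, ay, az):
    """Tilt of the sensor's x-axis relative to gravity (degrees), a
    common way to derive a flexion angle from accelerometer readings."""
    return math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))

def estimate_cva(cfa, slope=-0.8, intercept=55.0):
    """Map CFA to CVA with a linear fit; slope and intercept are
    hypothetical placeholders, not the paper's regression values."""
    return slope * cfa + intercept
```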

Investigation of image preprocessing and face covering influences on motion recognition by a 2D human pose estimation algorithm (모션 인식을 위한 2D 자세 추정 알고리듬의 이미지 전처리 및 얼굴 가림에 대한 영향도 분석)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.285-291
    • /
    • 2020
  • In manufacturing, humans are being replaced by robots, but expert skills remain difficult to convert into data and thus difficult to transfer to industrial robots. One approach is visual motion recognition, but physical features may be judged differently depending on the image data. This study aimed to improve the accuracy of vision methods for estimating human posture. Three OpenPose models were applied: MPII, COCO, and COCO+foot. To identify the effects of face-covering accessories and image preprocessing on the convolutional neural network (CNN), the presence or absence of accessories, image size, and filtering were set as the parameters affecting posture identification. For each parameter, image data were applied to the three models, and the errors between actual and predicted values, as well as the percentage of correct keypoints (PCK, computed as sketched below), were calculated. The COCO+foot model showed the lowest sensitivity to all three parameters. A reduction in image size of up to 50% (from 3024×4032 to 1512×2016 pixels) was considered acceptable. Emboss filtering combined with MPII gave the best results (error under 60 pixels).
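
PCK has a standard definition: a predicted keypoint counts as correct when it lies within a distance threshold of the ground truth. Thresholds are often normalized by head or torso size; a fixed pixel threshold is used here for simplicity:

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints.

    `pred` and `gt` are (N, 2) arrays of keypoint coordinates; a
    keypoint is correct if its Euclidean error is <= `threshold` pixels.
    """
    dists = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    return float(np.mean(dists <= threshold))
```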

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor-intensive, as every frame must be painted. A human is one of the most challenging objects to convert at high quality: the body is an articulated figure with many degrees of freedom (DOF), and various styles of clothing, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human at every frame of a monocular video. First, a 3D template model is matched to a person in the video with a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. For effective handling of complex silhouettes and appearances, we introduce a partial depth propagation method based on color segmentation to preserve detail in the results (a simplified version of this propagation is sketched below). Comparison with depth maps painted by experienced artists shows that our method produces viable depth maps of humans in monocular videos efficiently.
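
A simplified take on segment-based depth propagation, assuming a precomputed color-segmentation label map and a sparse depth map with NaN where depth is unknown. The paper's method is more elaborate; this sketch only averages the known samples within each segment:

```python
import numpy as np

def propagate_depth(segments, sparse_depth):
    """Fill a dense depth map by spreading sparse samples within segments.

    `segments`: (H, W) integer label map from any color segmentation.
    `sparse_depth`: (H, W) float array, NaN where depth is unknown.
    """
    dense = np.full_like(sparse_depth, np.nan, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        known = sparse_depth[mask]
        known = known[~np.isnan(known)]
        if known.size:                      # average the segment's samples
            dense[mask] = known.mean()
    return dense
```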

Display of Irradiation Location of Ultrasonic Beauty Device Using AR Scheme (증강현실 기법을 이용한 초음파 미용기의 조사 위치 표시)

  • Kang, Moon-Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.9
    • /
    • pp.25-31
    • /
    • 2020
  • In this study, for the safe use of a portable ultrasonic skin-beauty device, an Android app was developed that shows the irradiation locations of focused ultrasound to the user through augmented reality (AR), enabling stable self-treatment; its utility was assessed through testing. While the user treats their face with the device, the user's face and the ultrasonic irradiation location on it are detected in real time with a smartphone camera. The irradiation location is then marked on the face image and shown to the user so that excessive ultrasound is not applied to the same area during treatment. To this end, ML-Kit is used to detect the user's facial landmarks in real time, and these are compared with a reference face model to estimate the pose of the face, such as its rotation and movement (a common landmark-to-pose computation is sketched below). An LED mounted on the ultrasonic irradiation part of the device is lit during irradiation; its light is detected to locate the ultrasonic irradiation position on the smartphone screen, and the irradiation position is registered and displayed on the face image based on the estimated face pose. Each task in the app is implemented with threads and timers, and all tasks execute within 75 ms. Test results showed that registering and displaying 120 ultrasound irradiation positions took less than 25 ms, and the display accuracy was within 20 mm when the face did not rotate significantly.
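
Comparing detected landmarks against a reference face model to recover the head pose is commonly done with a PnP solve. A sketch assuming six ML-Kit-style 2D landmarks and a generic 3D face model; the model coordinates and intrinsics below are illustrative, not values from the paper:

```python
import cv2
import numpy as np

# Generic 3D reference landmarks (millimeters, nose tip at the origin).
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3, 32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
])

def estimate_face_pose(image_points, frame_size):
    """Recover face rotation/translation from six 2D landmarks.

    `image_points`: (6, 2) float array of detected pixel coordinates,
    ordered as in MODEL_POINTS; `frame_size`: (height, width).
    """
    h, w = frame_size
    K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=float)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec
```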