

Analysis of 3D Accuracy According to Determination of Calibration Initial Value in Close-Range Digital Photogrammetry Using VLBI Antenna and Mobile Phone Camera (VLBI 안테나와 모바일폰 카메라를 활용한 근접수치사진측량의 캘리브레이션 초기값 결정에 따른 3차원 정확도 분석)

  • Kim, Hyuk Gi;Yun, Hong Sik;Cho, Jae Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.1 / pp.31-43 / 2015
  • This study aimed to perform camera calibration on the VLBI antenna at the Space Geodetic Observation Center in Sejong City using a low-cost digital camera embedded in a mobile phone, in order to determine the three-dimensional position coordinates of the antenna from stereo images. Initial values for the camera calibration were obtained with the Direct Linear Transformation (DLT) algorithm and with the commercial digital photogrammetry system PhotoModeler Scanner® ver. 6.0, respectively. The calibration results from each set of initial values were compared after a bundle adjustment with the nonlinear collinearity condition equations. Although the two methods produced significantly different initial values, the final calibration was consistent regardless of which method supplied them. Furthermore, the three-dimensional coordinates of feature points on the VLBI antenna were calculated using the calibrations from both methods and compared with reference coordinates obtained from a total station. Both methods yielded the same standard deviations of X = 0.004±0.010 m, Y = 0.001±0.015 m, and Z = 0.009±0.017 m, demonstrating centimeter-level accuracy. From this result, we conclude that a mobile phone camera opens the way for a variety of image-processing studies, such as 3D reconstruction from captured images.
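The DLT step in this abstract solves a linear least-squares system for 11 transformation parameters from known 3D-2D point pairs. A minimal sketch of that step (the function names and the L12 = 1 normalization are illustrative assumptions, not the authors' code):

```python
import numpy as np

def dlt_initial_values(object_pts, image_pts):
    """Estimate the 11 DLT parameters from >= 6 non-coplanar 3D-2D pairs.

    object_pts: (N, 3) world coordinates; image_pts: (N, 2) pixel coordinates.
    Each pair contributes two linear equations; L12 is normalized to 1.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt_project(L, pt):
    """Reproject a 3D point with the estimated DLT parameters."""
    X, Y, Z = pt
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den)
```

Parameters recovered this way serve only as initial values; the abstract's final calibration refines them by bundle adjustment with the collinearity equations.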

Regional Projection Histogram Matching and Linear Regression based Video Stabilization for a Moving Vehicle (영역별 수직 투영 히스토그램 매칭 및 선형 회귀모델 기반의 차량 운행 영상의 안정화 기술 개발)

  • Heo, Yu-Jung;Choi, Min-Kook;Lee, Hyun-Gyu;Lee, Sang-Chul
    • Journal of Broadcast Engineering / v.19 no.6 / pp.798-809 / 2014
  • Video stabilization removes unexpected shaky and irregular motion from a video and is often used as preprocessing for robust feature tracking and matching. Typical video stabilization algorithms are designed to compensate for motion in surveillance video or in outdoor recordings captured by a hand-held camera. However, since vehicle video contains rapid changes of motion and local features, typical stabilization algorithms cannot be applied to it directly. In this paper, we propose a novel approach that compensates for shaky and irregular motion in vehicle video using a linear regression model and vertical projection histogram matching. Toward this goal, we perform vertical projection histogram matching on each sub-region of an input frame and then build a linear regression model that extracts vertical translation and rotation parameters from the estimated regional vertical movement vectors. Multiple binarization with sub-region analysis for generating the linear regression model is effective in typical recording environments where rapid changes of motion and local features occur. We demonstrated the effectiveness of our approach on blackbox videos and showed that the linear regression model achieves robust estimation of the motion parameters and generates stabilized video in a fully automatic manner.
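The regional matching and regression steps can be sketched as follows: estimate a vertical shift per horizontal sub-region by matching vertical projection profiles, then fit shift = a·x + b across regions, where b approximates the vertical translation and a the rotation. A grayscale, SSD-based sketch (the region count, search range, and function names are assumptions, and the paper's multiple-binarization step is omitted for brevity):

```python
import numpy as np

def vertical_profile(region):
    # Vertical projection histogram: mean intensity of each row.
    return region.mean(axis=1)

def regional_shift(prev, curr, max_shift=10):
    """Vertical shift (rows, downward positive) best aligning curr to prev."""
    p, c = vertical_profile(prev), vertical_profile(curr)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(len(p), len(p) - s)
        err = np.mean((p[lo:hi] - c[lo + s:hi + s]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

def stabilization_params(prev_frame, curr_frame, n_regions=8):
    """Fit shift = a*x + b over regional shifts; a ~ rotation, b ~ translation."""
    h, w = prev_frame.shape
    xs, shifts = [], []
    for i in range(n_regions):
        x0, x1 = i * w // n_regions, (i + 1) * w // n_regions
        xs.append((x0 + x1) / 2.0)
        shifts.append(regional_shift(prev_frame[:, x0:x1], curr_frame[:, x0:x1]))
    a, b = np.polyfit(xs, shifts, 1)
    return a, b
```

For a pure vertical translation the fitted slope is near zero and the intercept recovers the shift; a rotation shows up as a nonzero slope across the regions.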

Evaluation of PPG signals regarding to video attributes of smart-phone camera (스마트폰 카메라의 영상 속성에 따른 맥파 신호 평가)

  • Lee, Haena;Kim, Minhee;Whang, MinCheol;Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.4 / pp.917-924 / 2015
  • In this study, we investigate how the attributes of video captured by a smartphone's built-in camera affect the quality of the PPG signal. The video attributes considered were bitrate, resolution, and flash. For each condition, we measured the change in the red value of the video image and calculated the PPI (pulse-to-pulse interval) to extract the pulse wave signal. Twenty subjects participated in the experiment, which comprised 18 tasks. The PPG signal was measured simultaneously for two minutes with a PPG sensor on the middle finger and the smartphone camera on the forefinger of the right hand. Correlation analysis yielded the highest correlation (83%, p=0.01) for the condition with 640×480 resolution, 5000 kbps bitrate, and flash on. These results can serve as a useful guide for signal quality in pulse measurement systems that use a smartphone's built-in camera.
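The pipeline described, averaging the red channel per frame and converting the resulting waveform's peaks into pulse-to-pulse intervals, can be sketched in a few lines (a minimal sketch with a naive peak detector; function names and the peak rule are illustrative assumptions):

```python
import numpy as np

def red_signal(frames):
    """Mean red-channel value per frame (frames: sequence of HxWx3 RGB arrays)."""
    return np.array([f[:, :, 0].mean() for f in frames])

def pulse_intervals(signal, fps):
    """Pulse-to-pulse intervals (PPI, in ms) from a PPG-like signal.

    A sample counts as a peak when it exceeds both neighbours and the mean.
    """
    s = np.asarray(signal, float) - np.mean(signal)
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > 0]
    return np.diff(peaks) * 1000.0 / fps
```

A real implementation would band-pass the signal first; the point here is only the red-value-to-PPI chain the abstract describes.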

Observation of Ignition Characteristics of Coals with Different Moisture Content in Laminar Flow Reactor (층류 반응기를 이용한 수분함량에 따른 석탄 휘발분의 점화 특성에 관한 연구)

  • Kim, Jae-Dong;Jung, Sung-Jae;Kim, Gyu-Bo;Chang, Young-June;Song, Ju-Hun;Jeon, Chung-Hwan
    • Transactions of the Korean Society of Mechanical Engineers B / v.35 no.5 / pp.451-457 / 2011
  • The main objective of this study is to investigate the variation in the ignition characteristics of coals as a function of moisture content in a laminar flow reactor (LFR) equipped with a fuel-moisture micro-supplier designed by the Pusan Clean Coal Center. The volatile ignition position and time were observed experimentally when pulverized coal with moisture was fed into the LFR under burning conditions similar to those at the exit of the pulverizer and in a real boiler. The reaction-zone temperature along the centerline of the reactor was measured with a 70-μm R-type thermocouple. For each moisture content, the volatile ignition position was determined from an average of 15 to 20 images captured by a CCD camera using a proprietary image-processing technique. The reaction zone shrank proportionally as the moisture content increased. As the moisture content increased, the volatile ignition positions were 2.92, 3.36, 3.96, and 4.65 mm, corresponding to ignition times of 1.46, 1.68, 2.00, and 2.33 ms, respectively; that is, the ignition position and time increased exponentially. We also calculated the ignition-delay time derived from adiabatic thermal explosion theory; it showed a trend similar to that of the experimental data.
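As a quick consistency check, the reported position/time pairs imply an essentially constant particle velocity of about 2 m/s (position over time, with mm/ms equal to m/s), which is why position and time scale together:

```python
# Ignition positions (mm) and times (ms) reported above for increasing moisture.
positions_mm = [2.92, 3.36, 3.96, 4.65]
times_ms = [1.46, 1.68, 2.00, 2.33]

# mm/ms equals m/s, so each ratio is the implied particle velocity.
velocities_m_s = [x / t for x, t in zip(positions_mm, times_ms)]
```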

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important, since it determines the quality of the generated virtual view. Much related work enhances depth by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed at both color cameras, we obtain the two side views' depth from the center depth using the 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply the joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system with multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
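A joint bilateral filter weights each depth sample by both spatial distance and color similarity in the guide image, so depth edges snap to color edges. A minimal single-channel-guide sketch (parameter values and names are illustrative assumptions; the paper's implementation is multi-threaded and operates on full-color guides):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `depth` with spatial weights times range weights taken from
    the `guide` image (grayscale in [0, 1] here for brevity)."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: how similar the guide pixel is to the center.
            rng_w = np.exp(-(gwin - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

Because the range weight collapses across a color edge, a depth discontinuity aligned with a guide edge is preserved rather than blurred, which is exactly the boundary-mismatch reduction the abstract targets.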

The Effects of Multi Joint-Joint Position Sense Training Using Functional Task on Joint Position Sense, Balance, Walking Ability in Patients With Post-Stroke Hemiplegia (기능적 과제를 통한 다관절 관절위치감각 훈련이 뇌졸중 환자의 관절위치감각, 균형, 보행능력에 미치는 효과)

  • Ko, Kyoung-hee;Choi, Jong-duk;Kim, Mi-sun
    • Physical Therapy Korea / v.22 no.3 / pp.33-40 / 2015
  • The purpose of this study was to investigate the effect of multi joint-joint position sense (MJ-JPS) training on joint position sense, balance, and gait ability in stroke patients. A total of 18 stroke patients participated and were randomly allocated into two groups: an experimental group and a control group. Participants in the experimental group received MJ-JPS training (10 min) plus conventional treatment (20 min), whereas participants in the control group received only conventional treatment (30 min). Both groups trained five times per week for six weeks. MJ-JPS is a training method for increasing proprioception in the lower extremities by positioning them at a target location in space. MJ-JPS performance was captured on video, and the ImageJ program was used to calculate the error distance. Balance ability was measured with the Timed Up and Go (TUG) test and the Berg Balance Scale (BBS). Gait ability was measured with a 10 m walking test (10MWT) and by climbing four flights of stairs. The Shapiro-Wilk test was used to assess normality. Within-group differences were analyzed with the paired t-test and between-group differences with the independent t-test. The experimental group showed a significant decrease in error distance (MJ-JPS) compared to the control group (p<.05). Both groups improved significantly on the BBS and 10MWT (p<.05). The experimental group also improved significantly on the TUG and stair-climbing tasks (p<.05), whereas the control group's changes on those two tasks were not significant (p>.05). Between groups, the pre-post changes differed significantly for MJ-JPS and stair climbing (p<.05), but not for TUG, BBS, or 10MWT (p>.05). We suggest that the MJ-JPS training proposed in this study be used as an intervention to help improve the functional activity of the lower extremities in stroke patients.
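The paired (within-group) and independent (between-group) t statistics used in this analysis reduce to short formulas; a minimal numpy sketch (the data in the test below are purely illustrative, not the study's measurements):

```python
import numpy as np

def paired_t(pre, post):
    """Paired t statistic for a within-group pre/post comparison."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

def independent_t(a, b):
    """Two-sample t statistic (pooled variance) for a between-group comparison."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
```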

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a core technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market for immersive 360-degree video content is projected to grow from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal channels such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright-filtering technology to prevent such illegal distribution. The technical difficulty with immersive 360-degree video is that it requires ultra-high-quality pictures and that a single frame merges images captured by two or more cameras, which creates distortion regions. There are also limitations such as the increase in feature-point data due to the ultra-high definition and the resulting processing-speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object-identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points from the identified object information. Compared with the previously proposed method of extracting feature points from the stitching area of immersive content, the proposed technique shows an excellent performance gain.
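The paper's region selection relies on deep-learning object identification; as an illustrative stand-in only, the sketch below masks out the heavily distorted polar bands of an equirectangular frame and keeps candidate feature points inside the remaining identification area (all names and the fixed-band rule are assumptions):

```python
import numpy as np

def identification_mask(height, width, polar_fraction=0.2):
    """Keep the central latitude band of an equirectangular frame and drop
    the top/bottom bands, where stitching distortion is most severe."""
    mask = np.zeros((height, width), dtype=bool)
    band = int(height * polar_fraction)
    mask[band:height - band, :] = True
    return mask

def masked_points(points, mask):
    """Filter candidate feature points given as (row, col) pairs."""
    return [(y, x) for y, x in points if mask[y, x]]
```

Restricting extraction to such areas is one way to limit both the distortion-region problem and the feature-point data volume the abstract mentions.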

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.4 / pp.77-84 / 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition, and when the face is captured at a distance the image quality is further degraded by blurring and noise. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; then candidate costs are selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate long-distance capture conditions. The results show that the decision-level fusion scheme provides better results than any single classifier.
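The normalization and fusion-rule stages described here can be sketched directly on a (classifiers × classes) cost matrix; a minimal version with the three rules named in the abstract (the cost-validation stage is omitted, and all names are illustrative):

```python
import numpy as np

def normalize_costs(costs):
    """Scale each classifier's class costs into [0, 1] (lower = better match)."""
    c = np.asarray(costs, float)
    cmin = c.min(axis=1, keepdims=True)
    rng = c.max(axis=1, keepdims=True) - cmin
    return (c - cmin) / np.where(rng == 0, 1, rng)

def fuse(costs, rule="average"):
    """Fuse an (n_classifiers, n_classes) cost matrix; return the winning class."""
    c = normalize_costs(costs)
    if rule == "minimum":
        return int(c.min(axis=0).argmin())
    if rule == "average":
        return int(c.mean(axis=0).argmin())
    if rule == "vote":  # majority vote over each classifier's best class
        return int(np.bincount(c.argmin(axis=1)).argmax())
    raise ValueError(rule)
```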

A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung;Lee, Jeong-woo;Ko, Kang-Ho;Kim, Tae-Min;Park, Kyu-Bag;Park, Jung Rae;Kim, Ji-Hun;Choi, Doo-Sun;Lim, Dong-Wook
    • Design & Manufacturing / v.14 no.1 / pp.8-14 / 2020
  • In general, automotive parts on an assembly line are mounted automatically by robots. At such production sites, quality problems often arise, such as misalignment of the parts to be assembled with the vehicle body (doors, trunks, roofs, etc.) or collisions between assembly robots and components. To address these problems, part quality is inspected manually with mechanical jigs outside the automated production line. Machine vision is the most widely used automotive inspection technology; it covers surface inspection such as mounting-hole spacing, defect detection, and body-panel dents and bends, and it is also used for guiding, providing position information to the robot controller so that the robot's path can be adjusted to improve process productivity and manufacturing flexibility. The most difficult measurement task is to characterize the surface and the relative position between parts from images of the measured part as it enters the field of view of a camera mounted on the side or top of the part. One problem for machine vision devices on automobile production lines is that lighting conditions inside the factory change severely with the weather seen through the plant's exterior windows: morning versus evening, rainy versus sunny days. In addition, since the body parts are made of steel sheet, light reflection is very strong, so even a small lighting change greatly alters the quality of the captured image. In this study, the gap and step between the car body and the door are acquired by a measuring device that combines a laser slit light source and an LED pattern light source. The result is transferred to the articulated assembly robot, which adjusts the angle and step so that the parts are assembled at the optimal position.
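Optical triangulation recovers depth from the imaged offset of a projected laser spot. A minimal sketch for the simplest geometry, a laser mounted parallel to the camera axis (the geometry and all numbers are illustrative assumptions; the system described above combines a laser slit with an LED pattern source):

```python
def triangulate_distance(u_px, focal_px, baseline_m):
    """Distance (m) to a laser spot for a laser parallel to the optical axis.

    u_px: spot offset from the principal point in pixels;
    focal_px: focal length in pixels; baseline_m: laser-camera baseline in m.
    The similar-triangles relation gives z = f * b / u.
    """
    return focal_px * baseline_m / u_px

def door_gap_step(dist_body_m, dist_door_m):
    """Step between body and door surfaces as a depth difference."""
    return abs(dist_body_m - dist_door_m)
```

With distances measured on both sides of the seam, the gap and step follow from the depth profile across the laser slit.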

Albedo Based Fake Face Detection (빛의 반사량 측정을 통한 가면 착용 위변조 얼굴 검출)

  • Kim, Young-Shin;Na, Jae-Keun;Yoon, Sung-Beak;Yi, June-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.139-146 / 2008
  • Masked fake face detection using ordinary visible images is a formidable task when the mask is accurately made with special makeup. Considering recent advances in special-makeup technology, a reliable method for detecting masked fake faces is essential to a complete face recognition system. This research proposes a masked fake face detection method that exploits the reflectance disparity due to object material and surface color. First, we show that measuring albedo can be simplified to measuring radiance when a practical face recognition system is deployed in a user-cooperative environment; this allows albedo to be obtained directly from the grey values of the captured image. Second, we find that 850 nm infrared light effectively discriminates between facial skin and mask material through this reflectance disparity, while 650 nm visible light is known to be suitable for distinguishing the facial skin colors of different ethnic groups. We therefore use a 2D feature vector consisting of radiance measurements under 850 nm and 650 nm illumination. Facial skin and mask material show linearly separable distributions in this feature space. By employing FIB, we achieved 97.8% accuracy in fake face detection. Our method is applicable to faces of different skin colors and can easily be incorporated into commercial face recognition systems.
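Since the two materials are linearly separable in the (850 nm, 650 nm) radiance plane, a simple linear decision function suffices in principle. A least-squares sketch on made-up radiance values (illustrative only; the radiance numbers, names, and classifier choice here are assumptions, not the paper's data or method):

```python
import numpy as np

def fit_linear_boundary(X, y):
    """Least-squares linear decision function f(x) = w . [x0, x1, 1].

    X: (N, 2) radiance pairs (850 nm, 650 nm); y: labels in {-1, +1}
    (+1 = skin, -1 = mask material). sign(f) gives the predicted class.
    """
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def classify(w, x):
    """Classify one (850 nm, 650 nm) radiance pair by the sign of f(x)."""
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else -1
```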