• Title/Summary/Keyword: single camera

Search Results: 776

Simple Camera-based Evaluation System for Lower Limb Alignment during Pedalling (자전거 페달링 시 하지 정렬 평가를 위한 영상 시스템 개발)

  • Oh, Ho-Sang;Choi, Jin-Seung;Kang, Dong-Won;Seo, Jeong-Woo;Bae, Jae-Hyuk;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics / v.22 no.1 / pp.123-129 / 2012
  • A simple camera-based system for evaluating lower limb alignment, intended as part of an automated cycle-fitting system, was developed and verified in this study. The imaging system can quantitatively evaluate lower limb alignment during pedaling using a general camcorder and a single marker attached to the knee. A threshold-based marker detection algorithm was proposed. An experiment compared the trajectory data produced by the developed system's marker detection algorithm with trajectory data from a 3-D motion capture system. Results showed that the average error between trajectories was 2.33 mm (0.92 %) in the vertical direction and 0.62 mm (1.86 %) in the medio-lateral direction, with significant correlation between the two measurements (r = 0.9996 in the vertical direction and r = 0.9975 in the medio-lateral direction). It can be concluded that the developed imaging system can be applied to evaluate lower limb alignment, an important factor in dynamic bicycle fitting.
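
The threshold-based marker detection described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `threshold` is a hypothetical tuning parameter, and a single bright (e.g. reflective) marker against a darker background is assumed:

```python
import numpy as np

def detect_marker(frame, threshold=200):
    """Centroid (row, col) of pixels brighter than `threshold`, or None."""
    mask = frame >= threshold
    if not mask.any():
        return None                     # marker not visible in this frame
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic 100x100 frame with a bright 5x5 "marker" centered at (40, 60)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[38:43, 58:63] = 255
print(detect_marker(frame))  # → (40.0, 60.0)
```

Running this per video frame yields the knee-marker trajectory that the paper compares against the 3-D motion capture reference.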

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung;Kim, Ki-Hong
    • Journal of Digital Convergence / v.12 no.11 / pp.357-364 / 2014
  • AR content flickers while camera images are updated if the tracked markers move quickly. Conventional methods that employ image-based markers and SLAM algorithms for object tracking do not allow more than two objects to be tracked simultaneously and to interact with each other in the same camera scene. In this paper, an improved SLAM-type algorithm for tracking dynamic objects is proposed and investigated to solve this problem. To this end, two virtual cameras are used for one physical camera, which lets the two tracked objects interact with each other: the two objects are perceived separately even though a single physical camera captures them. Mobile robots used as the dynamic objects are synchronized with virtual robots in well-designed content, demonstrating the usefulness of applying individual tracking of multiple moving objects to projection-based augmented visualization.
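
The two-virtual-cameras idea can be illustrated with a toy sketch. All names here are hypothetical and the per-view "tracker" is a mere brightness centroid (the paper's actual tracker is a SLAM-type algorithm): one physical frame is split into two sub-views, and each object is tracked independently in its own view.

```python
import numpy as np

def split_virtual_cameras(frame, boundary):
    """Split one physical frame into two 'virtual camera' sub-views."""
    return frame[:, :boundary], frame[:, boundary:]

def track_bright_object(view):
    """Toy per-view tracker: centroid of the brightest pixels."""
    rows, cols = np.nonzero(view >= view.max())
    return float(rows.mean()), float(cols.mean())

# Two objects in one physical frame, one per half
frame = np.zeros((60, 120), dtype=np.uint8)
frame[10, 20] = 255                      # object A in the left half
frame[50, 90] = 200                      # object B in the right half
left, right = split_virtual_cameras(frame, boundary=60)
print(track_bright_object(left), track_bright_object(right))
```

Note that the right view reports coordinates local to its own sub-image (column 30, not 90); each virtual camera behaves as an independent sensor.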

Where to spot: individual identification of leopard cats (Prionailurus bengalensis euptilurus) in South Korea

  • Park, Heebok;Lim, Anya;Choi, Tae-Young;Baek, Seung-Yoon;Song, Eui-Geun;Park, Yung Chul
    • Journal of Ecology and Environment / v.43 no.4 / pp.385-389 / 2019
  • Knowledge of abundance, or population size, is fundamental to wildlife conservation and management. Camera trapping, combined with capture-recapture methods, has been applied extensively to estimate the abundance and density of individually identifiable animals, as it is non-invasive, effective for surveying wide-ranging, elusive, or nocturnal species, operable in inhospitable environments, and low in labor. We assessed the possibility of using coat patterns from images to identify individual leopard cats (Prionailurus bengalensis), a Class II endangered species in South Korea. We analyzed leopard cat images taken with a Digital Single-Lens Reflex camera (high resolution, 18 Mpx) and camera traps (low resolution, 3.1 Mpx) using HotSpotter, an image-matching algorithm. HotSpotter top-ranked an image of the same individual against the reference leopard cat image 100% of the time by matching facial and ventral parts, confirming that the facial and ventral fur patterns of the Amur leopard cat are reliable matching points for identifying an individual. We anticipate that these results will be useful to researchers studying the behavior or population parameter estimates of Amur leopard cats based on capture-recapture models.
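
A HotSpotter-style ranking, in which catalogued individuals are ordered by how many local descriptors match the query image, can be sketched as follows. This is a generic mutual-nearest-neighbour matcher run on synthetic descriptors, not HotSpotter itself, and the individual names are invented:

```python
import numpy as np

def match_count(query_desc, ref_desc):
    """Number of mutual nearest-neighbour descriptor matches."""
    d = np.linalg.norm(query_desc[:, None, :] - ref_desc[None, :, :], axis=2)
    q2r = d.argmin(axis=1)   # best reference descriptor per query descriptor
    r2q = d.argmin(axis=0)   # best query descriptor per reference descriptor
    return int(sum(r2q[j] == i for i, j in enumerate(q2r)))

def rank_individuals(query_desc, catalog):
    """Rank catalogued individuals by match count, best first."""
    return sorted(catalog, reverse=True,
                  key=lambda name: match_count(query_desc, catalog[name]))

# Synthetic descriptors: "cat_A" is the same individual as the query
# (small re-detection noise); "cat_B" is an unrelated individual.
rng = np.random.default_rng(0)
true_desc = rng.normal(size=(20, 32))
query = true_desc + 0.01 * rng.normal(size=(20, 32))
catalog = {"cat_A": true_desc + 0.01 * rng.normal(size=(20, 32)),
           "cat_B": rng.normal(size=(20, 32))}
print(rank_individuals(query, catalog))  # → ['cat_A', 'cat_B']
```

The paper's "top-ranked 100%" result corresponds to the correct individual appearing first in this ranking for every query image.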

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous approaches rely only on image-processing algorithms, so they require much processing time and impose many constraints. In this work, we implement gaze detection as a computer vision system built around a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3-D positions of those features based on 3-D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3-D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimental results show that, for the facial and eye gaze positions obtained on a monitor, the accuracy between the computed positions and the real ones is about 4.2 cm RMS error.
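
The final facial-gaze step, taking the gaze direction along the normal of the plane determined by the reconstructed 3-D feature points, reduces to a cross product. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3-D facial feature points."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    return n / np.linalg.norm(n)

# Three coplanar points in the z = 0 plane -> normal along the z-axis
print(face_plane_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # → [0. 0. 1.]
```

Intersecting a ray along this normal (from, say, the centroid of the features) with the monitor plane then yields the facial gaze point on the screen.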

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye contact image, we capture a pair of color and depth videos and separate the single foreground user from the background. Since the raw depth data contains several types of noise, we apply a joint bilateral filtering method, followed by a discontinuity-adaptive depth filter on the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system efficiently realizes eye contact, providing realistic telepresence.
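
The joint bilateral filtering step on the raw depth map can be sketched as below. This is a plain CPU reference implementation with hypothetical parameter values, not the authors' GPU code: spatial weights come from pixel distance, while range weights come from the aligned guide (color/intensity) image, so depth is denoised without blurring across object boundaries visible in the guide.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a noisy depth map using edges taken from an aligned guide image."""
    h, w = depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode='edge')
    pad_g = np.pad(guide.astype(np.float64), radius, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight from the GUIDE image, not the noisy depth itself
            range_w = np.exp(-((win_g - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
            weight = spatial * range_w
            out[y, x] = (weight * win_d).sum() / weight.sum()
    return out

# Noisy step-edge depth map; the guide image shares the same object boundary
rng = np.random.default_rng(1)
guide = np.zeros((16, 16)); guide[:, 8:] = 255.0
clean = np.zeros((16, 16)); clean[:, 8:] = 100.0
noisy = clean + rng.normal(scale=3.0, size=clean.shape)
smooth = joint_bilateral_filter(noisy, guide)
```

Because the guide values differ sharply across the boundary, pixels on the far side receive near-zero weight and the depth edge survives the smoothing.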

Automatic Combination & Assembly System for Phone Camera Lens Module (폰 카메라 렌즈모듈 자동 조합시스템 개발)

  • Song, Jun Yeob;Ha, Tae Ho;Lee, Chang Woo;Kim, Dong Hoon;Jeon, Jong
    • Transactions of the Korean Society of Mechanical Engineers A / v.38 no.2 / pp.219-225 / 2014
  • An automatic combination and assembly system for phone-camera lens modules was developed. The system controls the relative orientation of the individual lenses making up the lens module during assembly. Conventional systems assemble a lens module from eight assembly units; the developed system halves this number by combining each lens and a spacer into a single assembly unit. The number of transfer stages for sequential assembly is also minimized without increasing the assembly time. As a result, high productivity is achieved with a footprint only about 25 % of that of a conventional assembly system. The system features a modular design so it can cope with rapid market changes: only a few components, such as the picker and guide, need to be replaced to change to a new assembly model.

Robust Stereo Matching under Radiometric Change based on Weighted Local Descriptor (광량 변화에 강건한 가중치 국부 기술자 기반의 스테레오 정합)

  • Koo, Jamin;Kim, Yong-Ho;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.164-174 / 2015
  • In real scenarios, radiometric changes frequently occur during stereo image acquisition, whether using multiple cameras with different parameters or moving a single camera under changing illumination. Conventional stereo matching algorithms have difficulty finding correct corresponding points because they assume that corresponding pixels have similar color values. In this paper, we present a new method based on a local descriptor reflecting intensity, gradient, and texture information. Furthermore, an adaptive, entropy-based weight for the local descriptor is applied to estimate correct corresponding points under radiometric variation. The proposed method is tested on Middlebury datasets with radiometric changes and compared with state-of-the-art algorithms. Experimental results show that the proposed scheme outperforms the comparison algorithms, with around 5% less matching error on average.
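
The entropy-based adaptive weighting can be illustrated with a simplified two-term blend. The abstract does not specify the exact weighting rule, so the direction of the blend and the `max_entropy` normaliser below are assumptions for illustration: low-entropy (textureless, illumination-sensitive) windows lean on the gradient cost, high-entropy windows on the intensity cost.

```python
import numpy as np

def window_entropy(window, bins=8):
    """Shannon entropy (bits) of a local intensity window, values in 0..255."""
    hist, _ = np.histogram(window, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def combined_cost(intensity_cost, gradient_cost, window, max_entropy=3.0):
    """Blend two matching costs by the local entropy of the window."""
    w = min(window_entropy(window) / max_entropy, 1.0)
    return w * intensity_cost + (1.0 - w) * gradient_cost

flat = np.full((5, 5), 100)                # textureless: trust the gradient term
textured = np.arange(256).reshape(16, 16)  # high entropy: trust the intensity term
print(combined_cost(1.0, 2.0, flat), combined_cost(1.0, 2.0, textured))  # → 2.0 1.0
```

The paper's full descriptor additionally includes a texture term; this sketch only shows how an entropy-driven weight shifts trust between cost components.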

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.256-257 / 2022
  • In this paper, we introduce a method for extracting three-dimensional feature points through the movement of a single mobile device. Using a monocular camera, 2-D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed on feature points: feature points and descriptors are acquired and matched, the disparity of the matched points is calculated, and depth values are generated. The 3-D feature points are updated as the camera moves, and the feature points are reset on scene changes using scene-change detection. Through this process, an average of 73.5% additional storage space can be saved in the key-point database. Applying the proposed algorithm to the depth ground-truth values and RGB images of the TUM dataset, we confirmed an average distance difference of 26.88 mm compared with the 3-D feature point results.

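
The disparity-to-depth step in the abstract follows standard stereo triangulation: with the baseline estimated from the phone's movement and the camera's focal length in pixels, depth = f · B / d for each matched feature. A minimal sketch with hypothetical camera numbers:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth (mm) from stereo disparity (px); inf where invalid."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> point at infinity
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth

# Hypothetical values: f = 500 px, baseline = 100 mm
print(depth_from_disparity([10.0, 25.0], 500.0, 100.0))  # → [5000. 2000.]
```

Larger disparity means a closer point; as the device moves, new disparities re-triangulate and update the stored 3-D feature points.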

Object Tracking for Elimination using LOD Edge Maps Generated from Canny Edge Maps (캐니 에지 맵을 LOD로 변환한 맵을 이용하여 객체 소거를 위한 추적)

  • Jang, Young-Dae;Park, Ji-Hun
    • Annual Conference of KIPS / 2007.05a / pp.333-336 / 2007
  • We propose a simple method for tracking a non-parameterized subject contour in a single video stream with a moving camera and changing background, and then present a method to eliminate the tracked contour object by replacing it with background taken from other frames. Our method consists of two parts: first, we track the object using LOD (level-of-detail) Canny edge maps; then we generate the background of each image frame and replace the tracked object in a scene with a background image from another frame that is not occluded by the tracked object. The tracking is based on LOD-modified Canny edge maps and graph-based routing operations on those maps. To reduce side effects from irrelevant edges, basic tracking starts from strong Canny edges generated from large image-intensity gradients of the input image, and more edge pixels are added along the LOD hierarchy. LOD Canny edge pixels become nodes in the routing, and the LOD values of adjacent edge pixels determine the routing costs between nodes. We find the best route following Canny edge pixels, favoring stronger ones; accurate tracking comes from reducing the influence of irrelevant edges by selecting the stronger edge pixels, thereby relying on the current frame's edge pixels as much as possible. The approach is based on computing camera motion. Our experimental results show that the method works well for moderate camera movement with small object-shape changes.
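
The LOD edge-map hierarchy can be sketched with a sequence of decreasing edge thresholds. To stay self-contained, this sketch thresholds raw gradient magnitude rather than running Canny per level (as the paper does), so it only illustrates the core idea: level 0 keeps the strongest edges, and each lower threshold adds weaker edge pixels.

```python
import numpy as np

def lod_edge_maps(image, thresholds=(100.0, 50.0, 15.0)):
    """Nested hierarchy of edge maps from strongest to weakest threshold."""
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)                 # gradient magnitude per pixel
    return [mag >= t for t in thresholds]  # each level is a superset of the last

# Strong step edge around column 6 and a weak feature at column 2
img = np.zeros((8, 12)); img[:, 6:] = 255.0; img[:, 2] = 40.0
maps = lod_edge_maps(img)
print([int(m.sum()) for m in maps])  # → [16, 16, 32]
```

In the paper these per-level edge pixels become graph nodes, with routing costs derived from the LOD values of adjacent pixels, so the tracker prefers routes through stronger edges.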

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol;Lee, Dong-Chang;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In the field of computer vision, image mosaicking is a common method for effectively extending the restricted field of view of a camera by combining a set of separate images into a single seamless image. Feature-point-based image mosaicking has recently been a focus of research because the geometric transformation can be estimated simply, regardless of the distortions and intensity differences generated by camera motion across consecutive images. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination and in images with similar intensities. Accordingly, to solve these problems, this paper proposes an image mosaicking method based on feature points that uses the color information of the images. Essentially, the digital values acquired by a digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values that depend on surface reflectance, and are invariant to the chromaticity of various illuminants, are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth ColorChecker under simulated illuminations, which also compares the proposed method with the conventional SIFT algorithm. The matching accuracy of feature points extracted with the proposed method is increased, and image mosaicking using color information is achieved.
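
As a rough illustration of an illumination-invariant color value (the paper's narrow-band virtual-camera transform is not reproduced here, so this is a generic stand-in): dividing each channel by the geometric mean of the channels before taking logs cancels a global illumination scale factor, assuming strictly positive RGB values.

```python
import numpy as np

def log_chromaticity(rgb):
    """Log-chromaticity values, invariant to a global illumination scale."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gm = rgb.prod(axis=-1, keepdims=True) ** (1.0 / 3.0)  # geometric mean
    return np.log(rgb / gm)   # scaling rgb by s scales gm by s too -> cancels

patch = np.array([[0.4, 0.3, 0.2]])
dimmed = 0.5 * patch          # same surface under half the illumination
print(np.allclose(log_chromaticity(patch), log_chromaticity(dimmed)))  # → True
```

Feature points extracted from such values match stably across illumination changes, which is the property the paper verifies with the ColorChecker test.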