• Title/Summary/Keyword: 2D vision


Design of Optimized RBFNNs based on Night Vision Face Recognition Simulator Using the $(2D)^2$ PCA Algorithm ((2D)2 PCA알고리즘을 이용한 최적 RBFNNs 기반 나이트비전 얼굴인식 시뮬레이터 설계)

  • Jang, Byoung-Hee;Kim, Hyun-Ki;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.1-6
    • /
    • 2014
  • In this study, we propose an optimized RBFNNs-based night vision face recognition simulator with the aid of the $(2D)^2$ PCA algorithm. Images acquired with a CCD camera at night have low brightness, which makes face recognition difficult, so a night vision camera is used to acquire images at night. The Ada-Boost algorithm is used to detect face regions and reject non-face regions, and histogram equalization is applied to minimize image distortion. The resulting high-dimensional images are reduced to low-dimensional features using the $(2D)^2$ PCA algorithm. Face recognition is then performed with a polynomial-based RBFNNs classifier whose essential design parameters are optimized by means of Differential Evolution (DE). The performance of the optimized $(2D)^2$ PCA-based RBFNNs is evaluated on the night vision face recognition system using IC&CI Lab data.
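
A minimal NumPy sketch of the bidirectional $(2D)^2$ PCA projection described in the abstract above; the feature size and variable names are illustrative assumptions, and the RBFNNs classifier and DE optimization are not shown.

```python
# Sketch of (2D)^2 PCA feature extraction: project each image matrix from both
# the row and the column direction. Feature size (d_rows x d_cols) is an assumption.
import numpy as np

def two_dim_squared_pca(images, d_cols=10, d_rows=10):
    """images: array of shape (N, m, n) holding N face images of size m x n."""
    mean = images.mean(axis=0)
    centered = images - mean

    # Row-direction image covariance (n x n) and column-direction covariance (m x m)
    G = np.einsum('kmi,kmj->ij', centered, centered) / len(images)
    H = np.einsum('kim,kjm->ij', centered, centered) / len(images)

    # Leading eigenvectors give the two projection matrices
    _, Xvec = np.linalg.eigh(G)
    _, Zvec = np.linalg.eigh(H)
    X = Xvec[:, ::-1][:, :d_cols]          # n x d_cols
    Z = Zvec[:, ::-1][:, :d_rows]          # m x d_rows

    # Bidirectional projection: each image reduces to a d_rows x d_cols feature matrix
    features = np.einsum('ij,kjl,lm->kim', Z.T, images, X)
    return features, Z, X
```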

Change of Phoria and Subjective Symptoms after Watching 2D and 3D Image (2D와 3D 영상 시청 후 나타난 사위도 및 자각증상의 변화)

  • Kim, Dong-Su;Lee, Wook-Jin;Kim, Jae-Do;Yu, Dong-Sik;Jeong, Eui Tae;Son, Jeong-Sik
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.17 no.2
    • /
    • pp.185-194
    • /
    • 2012
  • Purpose: Changes in phoria and subjective asthenopia before and after viewing were compared for a 2D image and two types of 3D images, and the results are presented as a reference for 3D image viewing and production. Methods: Phoria was measured before and after watching 2D, 3D-FPR, and 3D-SG images for 30 minutes in 41 university students aged 20-30 years (26 male, 15 female). Paired t-tests were applied to the before/after measurements, and the Pearson correlation between the change in phoria and subjective symptoms assessed by questionnaire was evaluated for each image type. Results: Immediately after watching the 2D image, exophoria increased by 0.5 $\Delta$ at distance and near, but the change was not statistically significant. Immediately after watching the 3D images, exophoria increased significantly by 1.0~1.5 $\Delta$ at distance and 1.5~2.0 $\Delta$ at near compared with before viewing; the change at near was about 0.5 $\Delta$ larger than at distance. The difference between the 3D-FPR and 3D-SG formats was less than 0.5 $\Delta$, i.e., practically negligible. Eye strain was greater after the 3D images than after the 2D image, with no difference between the two 3D formats. The Pearson correlation showed that eye strain increased as exophoria increased. Conclusions: Watching 3D images increases eye strain compared with watching 2D images, and exophoria tends to increase accordingly.
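
A small SciPy sketch of the statistical comparison described in the abstract above (paired t-test on before/after phoria and Pearson correlation with eye strain); the function and argument names are illustrative, not taken from the paper.

```python
# Paired comparison of phoria before vs. after viewing, and correlation between
# the phoria change and an eye-strain questionnaire score. One value per subject.
import numpy as np
from scipy import stats

def analyze_viewing_effect(phoria_before, phoria_after, eye_strain):
    t_stat, p_paired = stats.ttest_rel(phoria_after, phoria_before)
    change = np.asarray(phoria_after) - np.asarray(phoria_before)
    r, p_corr = stats.pearsonr(change, eye_strain)
    return {'t': t_stat, 'p_paired': p_paired, 'r': r, 'p_corr': p_corr}
```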

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry (영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법)

  • Lee, Yong-Joo;Kim, Do-Wan;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.5 no.2
    • /
    • pp.46-54
    • /
    • 2015
  • Vision-based tracking has been proposed as a means to efficiently track a large number of construction resources operating on a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theories. Detecting and tracking multiple objects require an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient way of entity matching for tracking construction workers. The proposed method basically uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in a camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair of a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
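
A minimal sketch of the epipolar-line matching step described in the abstract above, assuming the fundamental matrix F relating the two fixed camera views is already known (e.g., from calibration); the function name and pixel tolerance are assumptions.

```python
# For a worker detected in view 1, find the detected entity in view 2 that lies
# closest to the corresponding epipolar line l = F p (line a*x + b*y + c = 0).
import numpy as np

def match_by_epipolar_distance(pt_view1, candidates_view2, F, max_dist=10.0):
    p = np.array([pt_view1[0], pt_view1[1], 1.0])          # homogeneous pixel coords
    a, b, c = F @ p                                         # epipolar line in view 2
    norm = np.hypot(a, b)
    best_i, best_d = None, max_dist
    for i, (x, y) in enumerate(candidates_view2):
        d = abs(a * x + b * y + c) / norm                   # point-to-line distance
        if d < best_d:
            best_i, best_d = i, d
    return best_i                                           # None if nothing is close
```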

Development of Laser Vision Sensor with Multi-line for High Speed Lap Joint Welding

  • Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society
    • /
    • v.2 no.2
    • /
    • pp.57-60
    • /
    • 2002
  • Generally, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, and a specially designed hardware system has to be used. However, if multiple lines are used instead of a single line, multiple range profiles can be generated from one image. Even at a fixed 30 fps, the amount of 2D range data generated increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed using a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.
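
An illustrative sketch, not the authors' implementation, of extracting several laser-stripe profiles from a single camera frame so that one image yields as many range profiles as projected lines; the peak-detection parameters are assumptions.

```python
# In each image column, locate the brightest pixel of each projected laser line.
# Sub-pixel refinement and triangulation to range values are omitted for brevity.
import numpy as np
from scipy.signal import find_peaks

def extract_stripe_rows(frame, n_lines, min_intensity=80):
    """frame: 2-D grayscale image; returns an (n_lines, width) array of stripe rows."""
    h, w = frame.shape
    rows = np.full((n_lines, w), np.nan)
    for col in range(w):
        peaks, _ = find_peaks(frame[:, col].astype(float),
                              height=min_intensity, distance=5)
        for k, r in enumerate(peaks[:n_lines]):
            rows[k, col] = r
    return rows
```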


A Stereo-Vision System for 3D Position Recognition of Cow Teats on Robot Milking System (로봇 착유시스템의 3차원 유두위치인식을 위한 스테레오비젼 시스템)

  • Kim, Woong;Min, Byeong-Ro;Lee, Dea-Weon
    • Journal of Biosystems Engineering
    • /
    • v.32 no.1 s.120
    • /
    • pp.44-49
    • /
    • 2007
  • A stereo vision system was developed for a robot milking system (RMS) using two monochromatic cameras. An algorithm based on inverse perspective transformation was developed to acquire 3-D information for all teats. To verify the performance of the algorithm in the stereo vision system, indoor tests were carried out using a test board and model teats. A real cow and a model cow were used to measure distance errors. The maximum distance errors for the test board, model teats, and real teats were 0.5 mm, 4.9 mm, and 6 mm, respectively. The average distance errors for the model teats and real teats were 2.9 mm and 4.43 mm, respectively. Therefore, it was concluded that the algorithm is sufficiently accurate for application to the RMS.
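
A simplified sketch of recovering a 3-D position from a matched pair of image points under a parallel-camera disparity model; the paper's inverse perspective transformation is not reproduced here, and the parameter names are assumptions.

```python
# Textbook stereo triangulation for two parallel cameras with focal length f (pixels)
# and baseline b (mm); pixel coordinates are measured from each principal point.
import numpy as np

def triangulate_parallel(x_left, y_left, x_right, f_px, baseline_mm):
    disparity = x_left - x_right            # assumed non-zero for a valid match
    Z = f_px * baseline_mm / disparity      # depth along the optical axis
    X = x_left * Z / f_px
    Y = y_left * Z / f_px
    return np.array([X, Y, Z])              # millimetres, left-camera frame
```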

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry - intelligent and autonomous processing of surface reconstruction - has not yet been achieved. Object recognition requires a robust shape description of objects, yet most shape descriptors are designed for 2D image data. Therefore, such descriptors have to be extended to deal with 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud data segmentation. The geometric characteristics of various roof types are well described, which will eventually serve as a basis for object modeling. The segmentation accuracy for the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
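
A hedged illustration of one plausible reading of a 3-D chain code: quantizing the direction between consecutive points to the 26 unit offsets of a voxel neighbourhood. This is not the paper's exact descriptor or its hierarchical segmentation scheme.

```python
# Assign each step along an ordered 3-D point sequence the index of the most
# closely aligned 26-neighbourhood direction.
import numpy as np
from itertools import product

DIRECTIONS = np.array([d for d in product((-1, 0, 1), repeat=3) if any(d)], float)
UNIT_DIRS = DIRECTIONS / np.linalg.norm(DIRECTIONS, axis=1, keepdims=True)

def chain_code_3d(points):
    codes = []
    for p, q in zip(points[:-1], points[1:]):
        step = np.asarray(q, float) - np.asarray(p, float)
        step /= np.linalg.norm(step)
        codes.append(int(np.argmax(UNIT_DIRS @ step)))
    return codes
```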

An Analysis of 2-D Bluff Bodies Flows by Multi-Vision PIV (Multi-Vision PIV에 의한 2차원 단순물체의 유동장 해석)

  • Song, K.T.;Lee, H.;Kim, Y.T.;Lee, Y.H.
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.26 no.5
    • /
    • pp.573-580
    • /
    • 2002
  • Animation and time-resolved analysis of the wake characteristics of 2-D bluff body flows were examined by applying multi-vision PIV to square cylinders (three angles of attack: $0^{\circ}$, $30^{\circ}$ and $45^{\circ}$) and circular cylinders (three rotating speeds: 0 rpm, 76 rpm, 153 rpm) submerged in a circulating water channel $(Re=10^4)$. The macroscopic shedding patterns and their dominant frequencies were discussed in terms of instantaneous velocity, vorticity, and turbulent quantities such as turbulent intensity, turbulent kinetic energy, and the three Reynolds stresses. In particular, the time-averaged turbulent intensity distributions showed 'islands' whose peak magnitudes were always concentrated in small regions behind the bodies without noticeable spatial migration, in all cases. The dominant frequencies of the turbulent quantities in the wake regions were twice those of the velocity and vorticity.
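
A minimal NumPy sketch of the post-processed quantities named in the abstract above (vorticity, Reynolds stresses, turbulent kinetic energy, turbulent intensity), computed from a time series of PIV velocity fields on a uniform grid; the field names and grid spacing are assumptions.

```python
# u, v: arrays of shape (n_frames, ny, nx) holding instantaneous velocity fields.
import numpy as np

def piv_statistics(u, v, dx, dy):
    u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)
    u_f, v_f = u - u_mean, v - v_mean                       # fluctuating parts

    # instantaneous vorticity: dv/dx - du/dy via central differences
    vorticity = np.gradient(v, dx, axis=2) - np.gradient(u, dy, axis=1)

    # Reynolds stresses, turbulent kinetic energy and turbulent intensity (2-D)
    uu, vv, uv = (u_f**2).mean(axis=0), (v_f**2).mean(axis=0), (u_f*v_f).mean(axis=0)
    tke = 0.5 * (uu + vv)
    turb_intensity = np.sqrt(0.5 * (uu + vv)) / np.hypot(u_mean, v_mean)
    return vorticity, uu, vv, uv, tke, turb_intensity
```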

Improvement effect of Functional Myopia by Using of Vision Training Device(OTUS) (Vision Training Device(OTUS)적용에 따른 기능성 근시의 개선 효과)

  • Park, Sung-Yong;Yoon, Yeong-Dae;Kim, Deok-Hun;Lee, Dong-Hee
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.2
    • /
    • pp.147-154
    • /
    • 2020
  • This study concerns the development of an ICT-based wearable device for vision recovery that can improve functional myopia through accommodation training. The Vision Training Device (OTUS) is a head-mounted wearable device that naturally stimulates the contraction and relaxation of the ciliary muscles of the eye. Users can conduct customized vision training based on personal vision information stored through the device. In the experiment, the improvement of symptoms produced by accommodation training was compared and analyzed for two groups (16 in a comparison group and 16 in an accommodation training group) after functional myopia was induced. The results showed that functional myopia improved by an average of 0.44 D ± 0.35 (p<0.05) in the accommodation training group compared with the comparison group. This study demonstrated the effectiveness of the vision training device (OTUS) on functional myopia, but further clinical trials are judged necessary to verify the possibility of long-term control of functional myopia.

The Study on Effects of After Vision Training for Elementary School Children in Muan (무안군 소재 초등학생들의 시훈련 효과에 관한 연구)

  • Jang, Jung Un;Kim, In Suk
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.18 no.2
    • /
    • pp.137-142
    • /
    • 2013
  • Purpose: This study investigated the current status of visual acuity among elementary school students in Muan-gun and analyzed the improvement in visual function after vision training for students with insufficiency of accommodation or vergence. Methods: Subjective refraction, objective refraction, and binocular function were examined for 335 elementary school children from year 1 to year 6 living in the Muan area, and 47 students with symptoms of binocular dysfunction were selected among them. Binocular vision function was analyzed and compared before and after vision training (VT). Results: Most subjects had more problems with near point of convergence (NPC) than with accommodation. After vision training, the subjects' mean NPC improved by about 5.93 cm, from $11.57{\pm}1.850$ cm before VT to $5.66{\pm}0.965$ cm after VT. Positive fusional vergence at near after VT was $19.64{\pm}3.66$ $\Delta$, about twice the near phoria. Accommodative amplitude improved from $10.02{\pm}2.566$ D before VT to $12.30{\pm}1.397$ D after VT, which is similar to the mean expected accommodative amplitude at 11.27 years of age. Conclusions: Among the insufficiencies of accommodation and vergence, NPC in particular was improved, and accommodative facility and other ocular functions were also improved. Therefore, vision training is considered very effective for recovering from visual function problems.

Development of ${\mu}BGA$ Solder Ball Inspection Algorithm (${\mu}BGA$ 납볼 검사 알고리즘 개발)

  • 박종욱;양진세;최태영
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.139-142
    • /
    • 2000
  • $\mu$BGA (Ball Grid Array) is growing in response to a great demand for smaller and lighter packages for use in laptops, mobile phones, and other evolving products. However, it is not easy to find defects by human visual inspection because of the very small dimensions. From this point of view, we are interested in developing a vision-based automated inspection algorithm. For this, a 2D view of the $\mu$BGA is first obtained under a special blue illumination. Second, a rotation-invariant 2D inspection algorithm is developed. Finally, a 3D inspection algorithm is proposed for the case of a stereo vision system. Simulation results show that 3D defects which are not easy to find with the 2D algorithm can be detected by the proposed inspection algorithm.
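
An illustrative sketch of a simple 2-D ball-grid check in the spirit of the abstract above: solder balls appear as bright blobs under the blue illumination, and each detected blob is compared with the nominal grid; the thresholding approach, tolerances, and OpenCV usage are assumptions, not the paper's algorithm.

```python
# Detect bright blobs in an 8-bit grayscale image and flag nominal ball positions
# with no nearby blob of plausible area as missing or badly displaced balls.
import numpy as np
import cv2

def inspect_bga_2d(gray, nominal_centers, pitch_px, area_range=(50, 400)):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    balls = [(c, s[cv2.CC_STAT_AREA]) for c, s in zip(centroids[1:], stats[1:])
             if area_range[0] <= s[cv2.CC_STAT_AREA] <= area_range[1]]
    defects = []
    for nominal in np.asarray(nominal_centers, float):
        dists = [np.hypot(*(c - nominal)) for c, _ in balls] or [np.inf]
        if min(dists) > 0.25 * pitch_px:
            defects.append(tuple(nominal))
    return defects
```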
