• Title/Summary/Keyword: single vision

Search Results: 406

Effects of Myopia Alleviation Lenses in accordance with Parents' Refractive Errors (부모의 굴절이상에 따른 근시완화렌즈 효과)

  • Cho, Yoon Chul;Kang, JoongGu;Leem, Hyun Sung
    • The Korean Journal of Vision Science
    • /
    • v.20 no.4
    • /
    • pp.569-577
    • /
    • 2018
  • Purpose : This study examined how effective MyoVision lenses, MC lenses, and single vision lenses were in each wearer group according to the parents' myopia status. Methods : The study observed the change in spherical equivalent among customers of an optical shop in Incheon Metropolitan City who visited between January 2010 and December 2016: 152 eyes wearing MyoVision lenses, 86 eyes wearing MC lenses, and 270 eyes wearing single vision lenses. Changes in the mean values for MyoVision, MC, and single vision lenses over one year were analyzed using SPSS ver. 18; within-group differences were compared with the paired t-test, and between-group differences with one-way ANOVA (post-hoc: Bonferroni). Results : Group-to-group comparisons showed that MyoVision and MC lenses inhibited myopia progression more than single vision lenses. In particular, MyoVision and MC lenses showed different alleviation effects depending on the parents' refractive status. When both parents had normal refraction, the difference in change between MyoVision and single vision lenses was -0.35±0.05 D. When only the father had a refractive error, MC lenses were -0.36±0.14 D more effective than single vision lenses. When only the mother had a refractive error, the mean difference between MyoVision and single vision lenses was -0.37±0.06 D, and that between MC and single vision lenses was -0.38±0.08 D. When both parents had refractive errors, the mean differences were -0.28±0.07 D and -0.31±0.07 D, respectively. Conclusion : MyoVision and MC lenses showed no myopia-alleviating effect in within-group comparisons, but both slowed myopia progression relative to single vision lenses in between-group comparisons.
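
The statistical comparisons the abstract names (paired t-test within groups, one-way ANOVA between groups) can be sketched in a few lines. The refraction-change values below are invented for illustration and are not the study's data:

```python
# Paired t-statistic (within-group change) and one-way ANOVA F-statistic
# (between-group comparison), as used in the study. Data are hypothetical.
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """t-statistic for paired samples (e.g. spherical equivalent before/after)."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def one_way_anova_f(*groups):
    """F-statistic for a one-way ANOVA across several lens groups."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical one-year progression values (D) for three lens groups
myovision = [-0.25, -0.30, -0.20, -0.28, -0.22]
mc_lens = [-0.27, -0.33, -0.24, -0.30, -0.26]
single = [-0.55, -0.60, -0.52, -0.58, -0.62]
print(one_way_anova_f(myovision, mc_lens, single))
```

A significant F would then be followed by the Bonferroni post-hoc pairwise tests, as the study describes.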

Robot vision interface (로보트와 Vision System Interface)

  • 김선일;여인택;박찬웅
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1987.10b
    • /
    • pp.101-104
    • /
    • 1987
  • This paper presents a robot-vision system which consists of a robot, a vision system, a single-board computer, and an IBM-PC. The IBM-PC-based system has great flexibility in expansion for vision system interfacing; easy human interfacing and strong computing ability are further benefits of this system. Interfacing between each component was carried out, and the calibration between the two coordinate systems is studied. The robot language for the robot-vision system was written in the C language, and users can also write job programs in C, in which the robot- and vision-related functions reside in the library.

  • PDF
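
The calibration between the two coordinate systems mentioned in the abstract can be sketched as a least-squares fit of a 2D rigid transform (rotation plus translation) from a few point correspondences. The point values here are illustrative, not the paper's:

```python
# Fit a 2D rigid transform mapping camera-frame points onto robot-frame
# points from correspondences (a planar Kabsch-style estimate). Illustrative
# sketch only; the paper's actual calibration procedure may differ.
from math import atan2, cos, sin

def calibrate_2d(cam_pts, robot_pts):
    """Least-squares rotation angle and translation from point pairs."""
    n = len(cam_pts)
    cx = sum(p[0] for p in cam_pts) / n
    cy = sum(p[1] for p in cam_pts) / n
    rx = sum(q[0] for q in robot_pts) / n
    ry = sum(q[1] for q in robot_pts) / n
    # Cross-covariance terms of the centered point sets
    s_cos = sum((p[0] - cx) * (q[0] - rx) + (p[1] - cy) * (q[1] - ry)
                for p, q in zip(cam_pts, robot_pts))
    s_sin = sum((p[0] - cx) * (q[1] - ry) - (p[1] - cy) * (q[0] - rx)
                for p, q in zip(cam_pts, robot_pts))
    theta = atan2(s_sin, s_cos)
    tx = rx - (cx * cos(theta) - cy * sin(theta))
    ty = ry - (cx * sin(theta) + cy * cos(theta))
    return theta, tx, ty

def cam_to_robot(p, theta, tx, ty):
    """Apply the calibrated transform to a camera-frame point."""
    return (p[0] * cos(theta) - p[1] * sin(theta) + tx,
            p[1] * cos(theta) + p[0] * sin(theta) + ty)
```

Once calibrated, `cam_to_robot` converts any vision-detected point into robot coordinates for motion commands.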

Single-neuron PID Type control method for a MM-LDM with vision system (ICCAS 2003)

  • Kim, Young-Lyul;Eom, Ki-Hwan;Lim, Joong-Kyu;Son, Dong-Seol;Chung, Sung-Boo;Lee, Hyun-Kwan
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2003.10a
    • /
    • pp.598-602
    • /
    • 2003
  • In this paper, we propose a method to control the position of an LDM (Linear DC Motor) using a vision system. The proposed method is composed of a vision system for position detection and a main computer that calculates the PID control output, which is delivered to an 8051 actuator circuit over serial communication. To confirm the usefulness of the proposed method, we carried out position-control experiments on a small LDM using, as the vision system, a CCD camera capable of 30 frames/sec.

  • PDF
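
The control loop described in the abstract can be sketched as a discrete PID update driven at the camera frame rate. The gains, setpoint, and first-order plant below are invented for illustration and are not from the paper:

```python
# Discrete PID controller (parallel form). The vision system would supply
# `measured` each frame; the returned command would go to the 8051 board.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """One control step at period dt; returns the actuator command."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy closed loop at 30 Hz (the camera frame rate reported in the abstract)
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1 / 30)
pos = 0.0
for _ in range(300):              # 10 seconds of simulated control
    u = pid.update(10.0, pos)     # hypothetical target position: 10 mm
    pos += u * (1 / 30)           # toy plant: velocity follows the command
print(round(pos, 2))
```

The single-neuron variant in the paper's title would adapt these gains online; the fixed-gain loop above only shows the basic update.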

Near Visual Performance of Multifocal Contact Lenses in University Students (대학생에서 멀티포컬 소프트콘택트렌즈의 근거리 시기능 유용성)

  • Jong, Woo-Cheol;Kim, Soo-Hyun;Kim, Jai-Min
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.16 no.1
    • /
    • pp.51-60
    • /
    • 2011
  • Purpose: This study investigated visual performance and subjective satisfaction with multifocal soft contact lenses during near work in university students. Methods: In a cross-over study design, 26 students (6 male, 20 female) who had no ocular disorder and at least 20/20 (1.0) binocular vision were fitted with single-vision lenses (SofLens™ 59, Bausch + Lomb Co., USA) or multifocal lenses (SofLens Multifocal, Bausch + Lomb Co., USA). After 2 weeks, visual performance assessments included visual acuity, stereoacuity, and contrast sensitivity function at distance and near. The near point of accommodation, accommodative facility, near point of convergence, vergence facility, and near range of clear vision were also examined. Students' satisfaction and preference were measured using survey questionnaires. Results: Subjects maintained at least 20/20 binocular vision with both multifocal and single-vision lenses at distance and near. There was no difference between multifocal and single-vision lenses in stereoacuity, contrast sensitivity function, or vergence facility at far and near. The near point of accommodation, accommodative facility, near point of convergence, and near range of clear vision were better with multifocal lenses than with single-vision lenses. On the survey questionnaires, subjects reported that they preferred and were satisfied with multifocal lenses for near work, and single-vision lenses for distance work. Conclusions: The majority of university students preferred multifocal to single-vision lenses because multifocal lenses provided better visual performance during near work. This study suggests that multifocal lenses are helpful for young adults engaged in prolonged near work.

Self-Localization of Mobile Robot Using Single Camera (단일 카메라를 이용한 이동로봇의 자기 위치 추정)

  • 김명호;이쾌희
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2000.10a
    • /
    • pp.404-404
    • /
    • 2000
  • This paper presents a single-vision-based self-localization method in a corridor environment. We use the Hough transform to find parallel lines and vertical lines, take their cross points as feature points, and calculate the relative distance from the mobile robot to these points. To match the environment map to the feature points, a search window is defined, and self-localization is performed by the matching procedure. Experimental results show the suitability of this method.

  • PDF
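
The feature-point step in the abstract — intersecting lines found by the Hough transform — reduces to solving two line equations in (rho, theta) form, where each line satisfies rho = x·cos(theta) + y·sin(theta). A minimal sketch with illustrative parameters:

```python
# Cross point of two lines in Hough (rho, theta) parameterization, used as
# a corridor feature point. Parameters below are illustrative only.
from math import cos, sin

def hough_intersection(line1, line2):
    """Solve the 2x2 system by Cramer's rule; None if nearly parallel."""
    r1, t1 = line1
    r2, t2 = line2
    det = cos(t1) * sin(t2) - sin(t1) * cos(t2)
    if abs(det) < 1e-9:
        return None  # parallel lines have no single cross point
    x = (r1 * sin(t2) - r2 * sin(t1)) / det
    y = (r2 * cos(t1) - r1 * cos(t2)) / det
    return x, y
```

For example, the vertical line x = 5 is (rho=5, theta=0) and the horizontal line y = 3 is (rho=3, theta=π/2); their cross point is (5, 3). In practice the (rho, theta) pairs would come from an accumulator-based Hough transform over the edge image.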

The Automated Measurement of Tool Wear using Computer Vision (컴퓨터 비젼에 의한 공구마모의 자동계측)

  • Song, Jun-Yeop;Lee, Jae-Jong;Park, Hwa-Yeong
    • 한국기계연구소 소보
    • /
    • s.19
    • /
    • pp.69-79
    • /
    • 1989
  • Cutting-tool life monitoring is a critical element in designing unmanned machining systems. This paper describes a tool wear measurement system using computer vision which repeatedly measures the flank and crater wear of a single-point cutting tool. This direct tool wear measurement method is based on an interactive procedure utilizing an image processor and multi-vision sensors. The measurement software calculates 7 parameters to characterize flank and crater wear. Performance tests revealed that the computer vision technique provides precise, absolute tool-wear quantification and reduces human measurement errors.

  • PDF
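
Two of the kind of wear parameters such software typically computes are the maximum and mean flank-wear depth over a binarized wear-land image. This is a hedged sketch, not the paper's actual parameter set; the tiny image and pixel scale are made up:

```python
# Maximum and mean flank-wear depth from a binary wear image (1 = wear
# pixel): sum each column's wear pixels and scale to millimetres.
def flank_wear(binary_img, mm_per_px):
    """Per-column wear depth -> (VB_max, VB_mean) in mm."""
    cols = len(binary_img[0])
    depths = [sum(row[c] for row in binary_img) for c in range(cols)]
    vb_max = max(depths) * mm_per_px
    vb_mean = sum(depths) / cols * mm_per_px
    return vb_max, vb_mean

img = [
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]
print(flank_wear(img, 0.01))  # per-column depths: 1, 3, 2, 0 pixels
```

A real system would first segment the wear land from the grey-level image before this counting step.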

The Multipass Joint Tracking System by Vision Sensor (비전센서를 이용한 다층 용접선 추적 시스템)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.5
    • /
    • pp.14-23
    • /
    • 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only to correct pre-taught welding paths in a single pass. In this paper, however, multipass tracking, rather than single-pass tracking only, is performed with both a conventional seam-tracking algorithm and a newly developed one, and the tracking performances of the two algorithms are compared in multipass tracking. As a result, the conventional seam-tracking algorithm showed tracking performance superior to the developed one in multipass welding.

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.232-238
    • /
    • 2020
  • In this paper, we study on aerial objects detection and position estimation algorithm for the safety of UAV that flight in BVLOS. We use the vision sensor and LiDAR to detect objects. We use YOLOv2 architecture based on CNN to detect objects on a 2D image. Additionally we use a clustering method to detect objects on point cloud data acquired from LiDAR. When a single sensor used, detection rate can be degraded in a specific situation depending on the characteristics of sensor. If the result of the detection algorithm using a single sensor is absent or false, we need to complement the detection accuracy. In order to complement the accuracy of detection algorithm based on a single sensor, we use the Kalman filter. And we fused the results of a single sensor to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and distance measured to LiDAR. We verified the performance of proposed fusion algorithm by performing the simulation using the Gazebo simulator.
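
The fusion idea in the abstract — one sensor covering for the other's missed detections inside a Kalman filter — can be sketched with a scalar random-walk filter that applies the camera and LiDAR corrections sequentially. All noise values and measurements below are invented:

```python
# Scalar Kalman filter fusing camera and LiDAR position measurements.
# A dropout in either sensor simply skips that correction; the other
# sensor's update still constrains the estimate. Sketch only.
class Fusion1D:
    def __init__(self, q=0.1, r_cam=1.0, r_lidar=0.25):
        self.q = q              # process noise (random-walk motion model)
        self.r_cam = r_cam      # camera measurement variance (assumed)
        self.r_lidar = r_lidar  # LiDAR measurement variance (assumed)
        self.x = None           # fused position estimate
        self.p = None           # estimate variance

    def step(self, z_cam=None, z_lidar=None):
        if self.x is not None:
            self.p += self.q            # predict: uncertainty grows per frame
        for z, r in ((z_cam, self.r_cam), (z_lidar, self.r_lidar)):
            if z is None:
                continue                # sensor dropout: skip its update
            if self.x is None:
                self.x, self.p = z, r   # initialize from the first detection
                continue
            k = self.p / (self.p + r)   # Kalman gain
            self.x += k * (z - self.x)
            self.p *= 1 - k
        return self.x

f = Fusion1D()
for _ in range(20):
    est = f.step(z_cam=5.2, z_lidar=4.9)
print(round(est, 2))
```

Because the LiDAR variance is smaller here, the fused estimate settles closer to the LiDAR reading; the paper's full method additionally lifts the fused pixel/range pair into a 3D position.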

A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.8
    • /
    • pp.633-642
    • /
    • 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of the preceding vehicle. In particular, we studied how to compensate for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models corresponding to a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently associates both the radar and vision measurements into a single state estimate. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
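
The two-step correction idea can be sketched as follows: within one update cycle, the radar measurement corrects both coordinates, and the vision measurement then refines only the lateral coordinate, where radar is weak. The noise values and measurements are invented, and the independent per-axis scalar filters are a deliberate simplification of the paper's IMM-PDAF framework:

```python
# Two-step Kalman correction: radar updates both axes, then vision refines
# the lateral axis. Diagonal (per-axis) covariances for simplicity.
class TwoStepFilter:
    def __init__(self):
        self.x, self.y = 0.0, 0.0        # longitudinal, lateral position
        self.px, self.py = 25.0, 25.0    # per-axis variances
        self.q = 0.2                     # process noise per cycle (assumed)
        self.r_radar_x, self.r_radar_y = 0.25, 4.0  # radar: poor laterally
        self.r_vision_y = 0.25                      # vision: good laterally

    @staticmethod
    def _scalar_update(est, var, z, r):
        k = var / (var + r)              # Kalman gain for one axis
        return est + k * (z - est), var * (1 - k)

    def update(self, radar_xy, vision_y=None):
        self.px += self.q
        self.py += self.q
        # Step 1: radar corrects both axes
        self.x, self.px = self._scalar_update(self.x, self.px,
                                              radar_xy[0], self.r_radar_x)
        self.y, self.py = self._scalar_update(self.y, self.py,
                                              radar_xy[1], self.r_radar_y)
        # Step 2: vision refines the lateral axis (lateral-compensation mode)
        if vision_y is not None:
            self.y, self.py = self._scalar_update(self.y, self.py,
                                                  vision_y, self.r_vision_y)
        return self.x, self.y
```

With a laterally biased radar track and an accurate vision track, the fused lateral estimate settles near the vision value; the paper's IMM layer would additionally blend this mode with a radar-only mode when vision is unavailable.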