• Title/Summary/Keyword: Visual target


The Effects of the Relative Legibility of Optotypes on Corrected Visual Acuity (시표의 유형에 따른 상대가독성이 교정시력에 미치는 영향)

  • Ha, Na-Ri;Choi, Jang-Ho;Kim, Hyun Jung
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.20 no.2
    • /
    • pp.177-186
    • /
    • 2015
  • Purpose: The purpose of this study was to investigate whether the relative legibility of optotypes affects corrected visual acuity in visual acuity testing. Methods: After measuring the relative legibility of the Landolt ring, Arabic number, and alphabet targets, each presented as a single-letter target to 24 subjects with no ocular disease or history of ocular surgery, the relative legibility of the 0.8, 1.0, and 1.25 rows of vision was compared across seven types of chart. The corrected visual acuity for each target type was then measured in 60 myopic subjects using the binocular MPMVA test (#7A). Results: Among the three target types, the Landolt ring target had the worst relative legibility, with a legible distance of 98.97±4.57 cm, and the alphabet target the best, with a legible distance of 108.42±3.46 cm. Under otherwise identical conditions, relative legibility did not differ by chart type or visual acuity level within a row of vision. In the 1.0 and 1.25 rows of vision, the difference in relative legibility between the Landolt ring and alphabet targets was statistically significant, at -0.07±0.06 (p=0.02) and -0.06±0.06 (p=0.04) respectively. In myopes, the difference in corrected visual acuity between the Landolt ring and Arabic number targets was statistically significant, at -0.04±0.02 (p=0.02), and was especially marked in low myopia. Conclusions: Measuring visual acuity with different optotypes can introduce errors in the best-vision measurement, because relative legibility differs by target type even at the same visual acuity level within a row of vision.

Visual tracking based Discriminative Correlation Filter Using Target Separation and Detection

  • Lee, Jun-Haeng
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.55-61
    • /
    • 2017
  • In this paper, we propose a novel tracking method using target separation and detection, based on the discriminative correlation filter (DCF), which has been studied extensively in recent years. Retainability is one of the most important properties of tracking, and several factors degrade it. In particular, fast target motion and occlusion occur frequently in image sequences and can cause the target to be lost, so that tracking cannot be retained. To maintain robust tracking, this paper uses target separation so that normal tracking continues even when part of the target is occluded. When the target leaves the tracking range because of full occlusion or fast motion, a detection algorithm is executed to find the target's new location. A variety of experiments on various image data sets were conducted, and the proposed algorithm outperformed conventional algorithms under fast motion and occlusion of the target.
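
The single-channel MOSSE filter is the simplest member of the DCF family this paper builds on. The sketch below is not the paper's own implementation; it only illustrates the Fourier-domain ridge-regression training and peak detection that underlie DCF trackers:

```python
import numpy as np

def train_dcf(patch, target_response, lam=1e-2):
    """Learn a correlation filter in the Fourier domain (ridge regression):
    Hc = (G * conj(F)) / (F * conj(F) + lam), the single-channel MOSSE form.
    `target_response` is usually a Gaussian peaked at the target center."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(filt, patch):
    """Correlate a new patch with the filter; the peak of the response map
    gives the target's new location (row, col)."""
    response = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)
```

A shifted copy of the training patch moves the response peak by the same shift, which is what lets the tracker follow motion between frames.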

The Implementation of the Realtime Visual Tracking of Moving Target by using Kalman Filter (칼만필터를 이용한 이동 목표물의 실시간 시각추적의 구현)

  • 임양남;방두열;이성철
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.04a
    • /
    • pp.254-258
    • /
    • 1996
  • In this paper, we propose a real-time visual tracking system for a moving 2-D target using an extended Kalman filter algorithm. A target marker attached to the tip of a SCARA robot hand is recognized in each image frame from a CCD camera, and the target's position is extracted frame by frame. After a target entering any position in the field of view is detected, it is tracked and kept at the center of the target window, so the moving object can be tracked across frames. The experimental results show the effectiveness of the Kalman filter algorithm for real-time tracking: the filter's estimated state predicts the position of the moving object, minimizing the image-processing area and reducing the effect of image quantization noise.
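
The predict/update cycle described above can be sketched for one image axis with a constant-velocity model; the noise values and the search-window idea below are illustrative, not the paper's parameters:

```python
class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter (run one per image axis).
    State x = [position, velocity]; only position is measured."""

    def __init__(self, q=1e-2, r=1.0):
        self.x = [0.0, 0.0]                 # state estimate
        self.p = [[1e3, 0.0], [0.0, 1e3]]   # covariance (large = uncertain)
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + dt * v, v]
        p = self.p
        # P = F P F^T + Q for F = [[1, dt], [0, 1]], Q = diag(q, q)
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        self.p = [[p00, p01], [p10, p11]]
        return self.x[0]   # predicted position: center the search window here

    def update(self, z):
        # innovation and Kalman gain for measurement model H = [1, 0]
        s = self.p[0][0] + self.r
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s
        y = z - self.x[0]
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
```

Restricting image processing to a small window around the predicted position is what makes this approach real-time, as the abstract notes.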


Comparison of Vertical and Horizontal Eye Movement Times in the Selection of Visual Targets by an Eye Input Device

  • Hong, Seung Kweon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.1
    • /
    • pp.19-27
    • /
    • 2015
  • Objective: The aim of this study is to investigate how well eye movement times in visual target selection tasks performed with an eye input device follow Fitts' Law, and to compare vertical and horizontal eye movement times. Background: Manual pointing typically fits the Fitts' Law model very well, but whether eye movement times with an eye input device can also be described by Fitts' Law has been debated, and more empirical studies are needed to resolve the question; this study is one such empirical study. Many researchers have also reported that the direction of movement affects movement times in manual pointing, so a second question is whether the direction of eye movement likewise affects eye movement times. Method: Cursor movement times in visual target selection tasks were collected for both input devices, with two target layouts: for vertical movements the cursor started at the top of the monitor with targets at the bottom, and for horizontal movements the cursor started at the right with targets at the left. Results: Although eye movement time was described by Fitts' Law, the error rate was high and the correlation relatively low (R² = 0.80 for horizontal and R² = 0.66 for vertical movements) compared with manual movement. Manual movement times did not differ significantly with movement direction, but eye movement times did. Conclusion: Eye movement times in the selection of visual targets with an eye-gaze input device can be described and predicted by Fitts' Law, and they differ significantly according to the direction of eye movement. Application: These results may help in understanding eye movement times in visual target selection tasks with eye input devices.
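
Fitts' Law as used above, MT = a + b·log₂(D/W + 1), can be fitted to observed (distance, width, time) triples by ordinary least squares; the coefficients below are placeholders, not the study's values:

```python
import math

def fitts_mt(distance, width, a=0.0, b=0.1):
    """Predict movement time with Fitts' Law: MT = a + b * ID,
    where ID = log2(D/W + 1) is the Shannon-form index of difficulty."""
    return a + b * math.log2(distance / width + 1)

def fit_fitts(points):
    """Least-squares fit of (distance, width, MT) triples to MT = a + b*ID.
    Returns (a, b, r_squared); R^2 is the correlation quality the study reports."""
    ids = [math.log2(d / w + 1) for d, w, _ in points]
    mts = [mt for _, _, mt in points]
    n = len(points)
    mean_id, mean_mt = sum(ids) / n, sum(mts) / n
    sxx = sum((x - mean_id) ** 2 for x in ids)
    sxy = sum((x - mean_id) * (y - mean_mt) for x, y in zip(ids, mts))
    b = sxy / sxx
    a = mean_mt - b * mean_id
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(ids, mts))
    ss_tot = sum((y - mean_mt) ** 2 for y in mts)
    return a, b, 1 - ss_res / ss_tot
```

Fitting vertical and horizontal trials separately, as the study does, yields one (a, b, R²) triple per direction for comparison.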

Visual Search Models for Multiple Targets and Optimal Stopping Time (다수표적의 시각적 탐색을 위한 탐색능력 모델과 최적 탐색정지 시점)

  • Hong, Seung-Kweon;Park, Seikwon;Ryu, Seung Wan
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.29 no.2
    • /
    • pp.165-171
    • /
    • 2003
  • Visual search in an unstructured search field is a fruitful area for computational modeling. Search models that describe the relationship between search time and the probability of target detection have been used to predict human search performance and to provide ideal goals for search training. Until recently, however, most models focused on detecting a single target, although in practice a search field often contains multiple targets, and search models for multiple targets may differ from single-target models. This study proposes a random search model for multiple targets, generalizing the random search model for a single target, the most typical search model. To test the model, human search data were collected and compared with its predictions; the model predicted human performance in multiple-target visual search well. The paper also proposes how to determine the optimal stopping time in multiple-target search.
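
The single-target random search model is usually written P(t) = 1 − e^(−t/τ). A minimal sketch of one multiple-target generalization (independent targets) and a marginal-value stopping rule follows; the paper's exact criterion is not reproduced here, so treat the stopping rule as an illustrative assumption:

```python
import math

def p_detect_single(t, tau):
    """Random search model: probability that one target is found by time t."""
    return 1.0 - math.exp(-t / tau)

def expected_detections(t, n, tau):
    """Assuming n independently detected targets: expected number found by time t."""
    return n * p_detect_single(t, tau)

def optimal_stopping_time(n, tau, value_per_target, cost_per_second):
    """Stop when the marginal gain of one more instant of search,
    d/dt [n*V*(1 - e^(-t/tau))] = (n*V/tau) * e^(-t/tau),
    falls to the cost rate c, giving t* = tau * ln(n*V / (c*tau))."""
    rate0 = n * value_per_target / tau
    if rate0 <= cost_per_second:
        return 0.0   # not worth starting the search
    return tau * math.log(rate0 / cost_per_second)
```

Because the marginal detection rate decays exponentially, the rule naturally searches longer when more targets (or more valuable ones) remain.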

Image-based Visual Servoing Through Range and Feature Point Uncertainty Estimation of a Target for a Manipulator (목표물의 거리 및 특징점 불확실성 추정을 통한 매니퓰레이터의 영상기반 비주얼 서보잉)

  • Lee, Sanghyob;Jeong, Seongchan;Hong, Young-Dae;Chwa, Dongkyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.6
    • /
    • pp.403-410
    • /
    • 2016
  • This paper proposes a robust image-based visual servoing scheme using a nonlinear observer for a monocular eye-in-hand manipulator. The proposed control method is divided into a range estimation phase and a target-tracking phase. In the range estimation phase, the range from the camera to the target is estimated under a non-moving-target condition to resolve the uncertainty of the interaction matrix. Then, in the target-tracking phase, the feature point uncertainty caused by the unknown motion of the target is estimated, and the feature point errors converge sufficiently close to zero through compensation for this uncertainty.
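
The interaction-matrix uncertainty the paper addresses comes from the unknown depth Z in the classical point-feature IBVS law v = −λL⁺e. The sketch below simply takes Z as given, whereas the paper estimates it with a nonlinear observer; it shows only the standard structure, not the paper's scheme:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point (x, y)
    at depth Z: maps the 6-DOF camera twist to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z,  1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_twist(features, depths, errors, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ e, stacking one 2x6 block
    per feature. `errors` holds per-feature (current - desired) vectors."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = np.concatenate(errors)
    return -gain * np.linalg.pinv(L) @ e
```

An error in Z scales the translational columns of L, which is exactly why the range estimation phase precedes tracking.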

A Multi-category Task for Bitrate Interval Prediction with the Target Perceptual Quality

  • Yang, Zhenwei;Shen, Liquan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4476-4491
    • /
    • 2021
  • Video service providers face user network problems when transmitting video streams, and they strive to provide users with superior video quality within a limited bitrate budget, so the target bitrate range of a video must be determined accurately under different quality requirements. Several schemes have recently been proposed to meet this requirement, but they do not take visual influence into account. In this paper, we propose a new multi-category model that uses machine learning to accurately predict the target bitrate range for a target visual quality. First, a dataset is constructed for training the multi-category models; quality score ladders and the corresponding bitrate-interval categories are defined in the dataset. Second, several types of spatial-temporal features related to the VMAF evaluation metric and to visual factors are extracted and processed statistically for classification. Finally, bitrate prediction models trained on the dataset with a RandomForest classifier can accurately predict the target bitrate of input videos at the target video quality. The model's classification accuracy reaches 0.705, and video encoded at the bitrate predicted by the model achieves the target perceptual quality.
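
The paper trains a RandomForest on VMAF-related spatial-temporal features. As a much-simplified illustration of the bitrate-interval-category idea only, here is a toy quality-ladder lookup; every number and name in it is hypothetical, not taken from the paper:

```python
# Hypothetical quality ladder: (minimum VMAF score, bitrate interval in kbps).
QUALITY_LADDER = [
    (95, (6000, 8000)),
    (90, (3500, 6000)),
    (80, (1800, 3500)),
    (70, (900, 1800)),
    (0,  (300, 900)),
]

def bitrate_interval(target_vmaf, complexity=1.0):
    """Map a target VMAF score to a bitrate-interval category, scaling the
    interval by a spatial-temporal complexity factor (busier content needs
    more bits for the same perceptual quality)."""
    for threshold, (lo, hi) in QUALITY_LADDER:
        if target_vmaf >= threshold:
            return (int(lo * complexity), int(hi * complexity))
    return QUALITY_LADDER[-1][1]
```

The learned classifier in the paper effectively replaces this fixed lookup with per-video decision boundaries over the extracted features.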

Effects of target types and retinal eccentricity on visual search (시각탐색에서 표적 유형과 망막 이심율 효과)

  • 신현정;권오영
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.3
    • /
    • pp.1-11
    • /
    • 2003
  • Two experiments were conducted to investigate the effects of target type and retinal eccentricity on target search while the target and background stimuli were either static or moving; a visual search task was used in both experiments. Retinal eccentricity was determined by five concentric circles increasing in steps of 1.6, and the target differed from the background stimuli in either orientation (orientation target) or a distinctive feature (feature target). In Experiment 1, where both the target and background stimuli were presented statically, an interaction between retinal eccentricity and target type was found: search time for the orientation target was not affected by retinal eccentricity, whereas that for the feature target increased as eccentricity increased. In Experiment 2, where all stimuli were moving, the interaction was also found, but for a different reason: in the moving condition, search time for the orientation target decreased consistently as eccentricity increased, while that for the feature target was unaffected. The implications and limitations of these results are discussed with respect to real-world situations such as driving cars or flying airplanes.


Uncalibrated Visual Servoing through the Efficient Estimation of the Image Jacobian for Large Residual

  • Kim, Gon-Woo
    • Journal of Electrical Engineering and Technology
    • /
    • v.8 no.2
    • /
    • pp.385-392
    • /
    • 2013
  • An uncalibrated visual servo control method for tracking a target is presented. We define the robot-positioning problem as an unconstrained optimization problem that minimizes the image error between the target feature and the robot end-effector feature, and we propose a method that finds the residual term for more precise modeling using secant approximation. The composite image Jacobian is estimated by an appropriate method for the eye-to-hand configuration without knowledge of the kinematic structure, the imaging geometry, or the intrinsic parameters of the camera, and the estimation is independent of the motion of the target feature. An algorithm that regulates the joint velocity for safety and stability is presented using a cost function, and adaptive regulation of visibility constraints is proposed using an adaptive parameter.
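
A standard secant (Broyden) rank-one update is the usual way to estimate an image Jacobian without camera calibration; whether it matches this paper's exact estimator, which adds a residual term for large-residual cases, is not stated here, so treat this as a generic sketch of the base technique:

```python
import numpy as np

def secant_jacobian_update(J, dq, de, eps=1e-9):
    """Broyden (secant) rank-one update of the composite image Jacobian:
        J_{k+1} = J_k + ((de - J_k dq) dq^T) / (dq^T dq)
    where dq is the joint displacement and de the observed image-error change.
    After the update, J_{k+1} satisfies the secant condition J_{k+1} dq = de."""
    denom = float(dq @ dq)
    if denom < eps:
        return J                 # no motion: nothing to learn from this step
    residual = de - J @ dq       # how badly the current estimate predicted de
    return J + np.outer(residual, dq) / denom
```

Each update costs only a rank-one correction, which is why this family of estimators suits online, uncalibrated servoing.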

Traded control of telerobot system with an autonomous visual sensor feedback (자율적인 시각 센서 피드백 기능을 갖는 원격 로보트 시스템교환 제어)

  • 김주곤;차동혁;김승호
• Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회:학술대회논문집)
    • /
    • 1996.10b
    • /
    • pp.940-943
    • /
    • 1996
  • In teleoperation, the human operator generally controls the slave arm while watching a monitor image obtained from a camera installed in the working environment. Because only a 2-D image is visible on the monitor, the operator lacks depth information and cannot work with high accuracy. In this paper, we propose a traded control method that uses a visual sensor to solve this problem; with the proposed algorithm, a teleoperation system can be controlled with precision. Both the human operator's command and an autonomous visual-sensor feedback command are given to the slave arm in order to bring the current image features into coincidence with the target image features. When the slave arm is far from the target position, the operator can judge the difference between the desired and current image features well, but the computed visual-sensor command has large errors; when the slave arm is near the target position, the situation is reversed. With this visual-sensor feedback, the operator does not need to resolve the fine differences between the desired and current image features, and the proposed method achieves higher accuracy than methods without sensor feedback. The effectiveness of the proposed control method is verified through a series of experiments.
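
The distance-dependent trade between operator and sensor commands described above can be sketched as a simple blend; the thresholds, the linear schedule, and all names here are illustrative assumptions, not the paper's method:

```python
def traded_command(human_cmd, sensor_cmd, feature_error, near=5.0, far=50.0):
    """Blend operator and visual-sensor commands by image-feature-error magnitude:
    far from the target, trust the human (who judges gross differences well);
    near it, trust the sensor feedback (which is accurate at small errors).
    Thresholds `near`/`far` and the linear ramp are illustrative, not from the paper."""
    if feature_error >= far:
        alpha = 1.0          # pure teleoperation
    elif feature_error <= near:
        alpha = 0.0          # pure visual servoing
    else:
        alpha = (feature_error - near) / (far - near)   # linear hand-over
    return [alpha * h + (1.0 - alpha) * s
            for h, s in zip(human_cmd, sensor_cmd)]
```

The hand-over mirrors the abstract's observation that operator judgment and sensor accuracy dominate in opposite regimes of the approach.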
