• Title/Summary/Keyword: Visual Distance


Visual Perception of Garment Surface Appearance

  • Fan, Jintu; Liu, Fu
    • Science of Emotion and Sensibility, v.5 no.3, pp.1-10, 2002
  • This paper concerns the relationship between the visual perception of the degree of pucker or wrinkles of garment surfaces and the geometrical parameters of those surfaces. Four potentially relevant parameters of the surface profile are considered, namely the variance ($\sigma^2$), the cutting frequency ($F_c$), the effective disparity curvature ($D_{ce}$, defined as the average disparity curvature of the wrinkled surface over the eyeball distance of the observer) and the frequency component of the disparity curvature ($D_{cf}$). Based on experiments using garment seams having varying degrees of pucker (i.e. wrinkles along a seam line), it was found that, while the logarithm of each of these four parameters has a strong linear relationship with the visually perceived degree of wrinkles, following the Weber-Fechner law, the effective disparity curvature ($D_{ce}$) and the frequency component of the disparity curvature ($D_{cf}$) appeared to have the stronger relationships with visual perception. This finding agrees with the suggestion by Rogers & Cagenello that the human visual system may compute disparity curvature in discriminating curved surfaces. It also suggests an objective method of measuring the degree of surface wrinkles.
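
The Weber-Fechner relationship described above (perceived magnitude linear in the logarithm of the physical parameter) can be fitted by ordinary least squares on log-transformed data. The sketch below uses hypothetical pucker grades and an illustrative surface parameter, not the paper's measurements:

```python
import numpy as np

def weber_fechner_fit(param_values, perceived_grades):
    """Fit perceived grade = a * log(parameter) + b (Weber-Fechner
    form); return slope, intercept, and R^2 of the fit."""
    x = np.log(np.asarray(param_values, dtype=float))
    y = np.asarray(perceived_grades, dtype=float)
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return a, b, r2

# Hypothetical data: subjective pucker grades vs. a surface parameter.
params = [0.1, 0.3, 1.0, 3.0, 10.0]
grades = [1.2, 2.1, 3.0, 4.1, 4.9]
a, b, r2 = weber_fechner_fit(params, grades)
print(f"slope={a:.2f}, R^2={r2:.3f}")
```

A high R² on log-transformed abscissa is what the paper reports for all four parameters.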


Development and Test of the Remote Operator Visual Support System Based on Virtual Environment (가상환경기반 원격작업자 시각지원시스템 개발 및 시험)

  • Song, T.G.; Park, B.S.; Choi, K.H.; Lee, S.H.
    • Korean Journal of Computational Design and Engineering, v.13 no.6, pp.429-439, 2008
  • With a remote-operated manipulator system, the situation at a remote site is rendered to the operator as a visualized image, so the operator can quickly grasp the situation and control the slave manipulator by operating a master input device based on that visual information. In this study, a remote operator visual support system (ROVSS) was developed to support the viewing of a remote operator performing a remote task effectively. A visual support model based on a virtual environment was also built and used to meet the needs of this study. The framework for the system was created with the Windows API on a PC and the library of a 3D graphic simulation tool, ENVISION. To validate the system, an operation test environment for a limited operating site was constructed using an experimental robot operation. A 3D virtual environment was designed to provide accurate information about the rotation of the robot manipulator and the location and distance of the operation tool through real-time synchronization. To assess the efficiency of the visual support, we conducted experiments with four methods: direct view, camera view, virtual view, and camera view plus virtual view. The experimental results show that camera view plus virtual view is about 30% more efficient than camera view alone.

The Impact of Emotion on Focused Attention in a Flanker Task (수반자극과제에서 정서가 초점주의에 미치는 영향)

  • Park, Tae-Jin; Park, Sun-Hee
    • Korean Journal of Cognitive Science, v.22 no.4, pp.385-404, 2011
  • We examined how emotional background stimuli influence focused attention in a flanker task. An IAPS picture was presented for 1,000 ms in advance, and then a target and two flanker letters were presented against the IAPS picture for 200 ms (Experiment 1). The flanking stimuli were presented simultaneously on the left and right sides of the target stimulus at a distance of $0.5^{\circ}$, $1^{\circ}$, or $1.5^{\circ}$ of visual angle. We investigated the flanker compatibility effect: identification of the target is faster when it is flanked by identical (compatible) stimuli than when it is flanked by different (incompatible) stimuli. Results of Experiment 1 revealed that the flanker compatibility effect depended not only on the distance of the flankers but also on the valence of the background IAPS picture. Positive and neutral pictures showed a distance effect, the compatibility effect decreasing as distance increased, while negative pictures showed no distance effect. Positive and neutral pictures showed compatibility effects at all distances, but negative pictures showed no compatibility effect at the $1.5^{\circ}$ distance. In Experiment 2, the SOA (stimulus onset asynchrony) between the picture and the flanker-task stimuli was manipulated. The flanking stimuli were presented simultaneously on the left and right sides of the target at a distance of either $0.5^{\circ}$ or $1.5^{\circ}$ of visual angle. The results of Experiment 2 showed that the flanker compatibility effect depends on SOA. At the long SOA (2,800 ms), negative pictures showed no distance effect, but positive and neutral pictures did. All valence conditions showed compatibility effects at the $0.5^{\circ}$ distance, but not at the $1.5^{\circ}$ distance. At the short SOA (100 ms), all valence conditions showed a distance effect, and showed compatibility effects except for negative pictures at the $1.5^{\circ}$ distance. These findings suggest that the scope of visual attention narrows when viewing negative emotional stimuli and broadens when viewing positive emotional stimuli; the narrowed scope under negative emotion lasts longer, while the broadened scope under positive emotion is shorter-lived.
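
The compatibility effect analyzed above is the mean reaction-time difference between incompatible and compatible trials at each flanker distance. A minimal sketch with hypothetical reaction times (not the study's data):

```python
import statistics

def compatibility_effect(rt_compatible, rt_incompatible):
    """Flanker compatibility effect: mean RT difference (ms)
    between incompatible and compatible trials."""
    return statistics.mean(rt_incompatible) - statistics.mean(rt_compatible)

# Hypothetical reaction times (ms) at two flanker distances (deg).
rts = {
    0.5: ([520, 540, 510], [580, 600, 590]),  # (compatible, incompatible)
    1.5: ([515, 525, 530], [540, 550, 545]),
}
for deg, (comp, incomp) in rts.items():
    print(f"{deg} deg: effect = {compatibility_effect(comp, incomp):.1f} ms")
```

A smaller effect at the larger distance is the distance effect the abstract describes.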


The Effects of Trunk Movement and Ground Reaction Force during Sit to Stand Using Visual Feedback (시각 되먹임을 이용한 앉은 자세에서 일어서기 시 몸통의 동작과 지면 반발력에 미치는 영향)

  • Koh, Yeong-Geon; Oh, Tae-Young; Lee, Jae-Ho
    • Journal of The Korean Society of Integrative Medicine, v.11 no.2, pp.207-219, 2023
  • Purpose : This study investigated changes in trunk movement and ground reaction force during the sit-to-stand motion using visual feedback. Methods : Fifteen adults (average age: 23.53±1.77 years) participated. Infrared reflective markers were attached to the body of each participant for motion analysis, and the participants performed the sit-to-stand motion while wearing a hat fitted with a laser pointer, which provided visual feedback. First, the sit-to-stand action was repeated three times without visual feedback, followed by a three-minute break. Next, the laser pointer attached to each hat was projected onto a whiteboard located 5 m in front of the chair on which the participant sat; a baseline was set, and the participants performed the stand-up movement three times under this condition. Visual feedback was provided to the participants to prevent the laser pointer from crossing the set baseline. During each stand-up movement, the positions of the reflective markers were recorded in real time using an infrared motion-analysis camera. Trunk movement and ground reaction force were extracted from the recorded data and analyzed according to the presence or absence of visual feedback. Results : With visual feedback during the sit-to-stand movement, the range of motion of the trunk and hip joints decreased, whereas that of the knee and ankle joints increased in the sagittal plane. The rotation angle of the trunk in the horizontal plane decreased. The left-right movement speed of the center of pressure increased, the pressing force decreased, and the forward-backward movement speed of the trunk decreased. Conclusion : The results suggest that the efficiency and stability of the stand-up movement increase when visual feedback is provided.

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib; Hamel, Tarek
    • International Journal of Control, Automation, and Systems, v.5 no.1, pp.51-60, 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) for monitoring of structures and maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial position to a final position along a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation, where the depth is the desired distance separating the camera from the target.
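
The homography matrix the controller relies on can be estimated from point correspondences on the planar target. The sketch below is the standard Direct Linear Transform (DLT), not the paper's control law; the point values are illustrative:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src (homogeneous coordinates) from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

# Four corners of a planar target and their images under a pure translation.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]
H = estimate_homography(src, dst)
p = H @ np.array([0.5, 0.5, 1.0])
print(p[:2] / p[2])  # the centre maps to (2.5, 3.5)
```

In the paper, decomposing such an H (together with the adaptive depth estimate) drives the backstepping controller.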

Visual servoing of robot manipulator by fuzzy membership function based neural network (퍼지 신경망에 의한 로보트의 시각구동)

  • Kim, Tae-Won; Suh, Il-Hong; Cho, Young-Jo
    • Proceedings of the Institute of Control, Robotics and Systems Conference, 1992.10a, pp.874-879, 1992
  • It is shown that there exists a nonlinear mapping which transforms image features and their changes into the desired camera motion without measurement of the relative distance between the camera and the part, and that this nonlinear mapping can eliminate several difficulties encountered when using the inverse of the feature Jacobian, as in the usual feature-based visual feedback controls. Instead of analytically deriving the closed form of such a nonlinear mapping, a fuzzy membership function (FMF) based neural network is proposed to approximate it; the structure of the proposed network is similar to that of a radial basis function neural network, which is known to be very useful for function approximation. The proposed FMF network is trained to track moving parts in the whole workspace along the line of sight. For the effective implementation of the proposed FMF networks, an image feature selection process is investigated, and the required fuzzy membership functions are designed. Finally, several numerical examples illustrate the validity of the proposed visual servoing method.
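
The radial-basis-function structure the abstract compares the FMF network to can be sketched briefly: Gaussian basis functions centered on a grid, with output weights fitted by least squares. This is a generic RBF approximation under assumed centers and widths, not the paper's FMF network:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial-basis design matrix (one column per center)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Approximate y = sin(x) with a small RBF network via least squares.
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)
centers = np.linspace(0, 2 * np.pi, 10)   # assumed basis centers
Phi = rbf_design(x, centers, width=0.7)   # assumed basis width
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.max(np.abs(Phi @ w - y))
print(f"max fit error: {err:.4f}")
```

An FMF network replaces the Gaussian bases with fuzzy membership functions but is trained in a similar spirit.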


Development of Automatic Design Program for Measuring Master Gear using Visual Lisp (Visual Lisp을 이용한 측정용 마스터기어 자동설계 프로그램 개발)

  • Kim, Young-Nam; Lee, Seong-Su
    • Proceedings of the Korean Society of Machine Tool Engineers Conference, 2000.10a, pp.169-174, 2000
  • This paper is about the automatic design of measuring master gears. Master gears are usually thought of as gears of extreme accuracy, but are better defined as gages that check the meshing action of production gears. This is usually not recognized, because most mechanical gages are associated with static measurements rather than having the form of machine elements used in a functional check involving machine motion. This paper provides an interface that allows beginners to design easily and quickly. Adding and modifying data is easy, and design lead time is reduced even for users who know little about the program, since it is developed with Visual Lisp and DCL.


Resolution-enhanced Reconstruction of 3D Object Using Depth-reversed Elemental Images for Partially Occluded Object Recognition

  • Wei, Tan-Chun; Shin, Dong-Hak; Lee, Byung-Gook
    • Journal of the Optical Society of Korea, v.13 no.1, pp.139-145, 2009
  • Computational integral imaging (CII) is a new method for 3D imaging and visualization. However, it suffers from seriously degraded quality of the reconstructed image as the distance of the reconstructed image plane increases. In this paper, to overcome this problem, we propose a CII method based on a smart pixel mapping (SPM) technique for partially occluded 3D object recognition, in which the object to be recognized is located far from the lenslet array. In SPM-based CII, SPM moves a far 3D object toward the lenslet array and thereby improves the quality of the reconstructed image. To show the usefulness of the proposed method, we carry out experiments on occluded objects and present the experimental results.

Shot Group and Representative Shot Frame Detection using Similarity-based Clustering

  • Lee, Gye-Sung
    • Journal of the Korea Society of Computer and Information, v.21 no.9, pp.37-43, 2016
  • This paper introduces a method for video shot group detection needed for efficient management and summarization of video. The proposed method detects shots based on low-level visual properties and performs temporal and spatial clustering based on the visual similarity of neighboring shots. Shot groups created by temporal clustering are further clustered into smaller groups with respect to visual similarity. A set of representative shot frames is selected from each cluster of the smaller groups representing a scene. Shots excluded from temporal clustering are also clustered into groups, from which representative shot frames are selected. A number of video clips were collected and used to evaluate the accuracy of shot group detection. The method achieved 91% accuracy in shot group detection, and the number of representative shot frames was reduced to 1/3 of the total shot frames. The experiment also shows an inverse relationship between accuracy and compression rate.
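
The temporal clustering step described above can be illustrated with a minimal sketch: consecutive shots whose visual similarity (here, cosine similarity of color-histogram vectors, an assumed low-level feature) stays above a threshold are merged into one group. This is a simplified illustration, not the paper's exact algorithm:

```python
import numpy as np

def cluster_shots(histograms, threshold=0.9):
    """Group consecutive shots whose cosine similarity of
    feature histograms stays above the threshold."""
    groups = [[0]]
    for i in range(1, len(histograms)):
        a, b = histograms[i - 1], histograms[i]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim >= threshold:
            groups[-1].append(i)   # same scene: extend current group
        else:
            groups.append([i])     # visual break: start a new group
    return groups

# Toy histograms: shots 0-1 look alike, shot 2 differs sharply.
hists = [np.array([8.0, 1, 1]), np.array([7.0, 2, 1]), np.array([1.0, 1, 8])]
print(cluster_shots(hists))  # [[0, 1], [2]]
```

A representative frame would then be picked from each group, e.g. the one closest to the group's mean histogram.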

Implementation of Sound Source Localization Based on Audio-visual Information for Humanoid Robots (휴모노이드 로봇을 위한 시청각 정보 기반 음원 정위 시스템 구현)

  • Park, Jeong-Ok; Na, Seung-You; Kim, Jin-Young
    • Speech Sciences, v.11 no.4, pp.29-42, 2004
  • This paper presents an implementation of real-time speaker localization using audio-visual information. Four channels of microphone signals are processed to detect vertical as well as horizontal speaker positions. First, short-time average magnitude difference function (AMDF) signals are used to determine whether the microphone signals are human voices. Then the orientation and distance of the sound sources are obtained from the interaural time difference. Finally, visual information from a camera refines the angles to the speaker. Experimental results with the real-time localization system show that performance improves to 99.6%, compared to 88.8% when only audio information is used.
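
The AMDF used for voice detection is simple to compute: for each lag, average the absolute difference between the frame and its shifted copy; periodic (voiced) signals produce deep dips at multiples of the pitch period. A minimal sketch with a synthetic tone (the sampling rate and frequency are illustrative, not the paper's setup):

```python
import numpy as np

def amdf(frame, max_lag):
    """Short-time average magnitude difference function:
    low values at lags matching the signal's period."""
    n = len(frame)
    return np.array([np.mean(np.abs(frame[: n - k] - frame[k:]))
                     for k in range(1, max_lag + 1)])

# A 100 Hz tone sampled at 8 kHz has a period of 80 samples,
# so the AMDF reaches its minimum near lag 80.
fs, f0 = 8000, 100
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * f0 * t)
d = amdf(frame, max_lag=120)
print("best lag:", int(np.argmin(d)) + 1)  # 80
```

An unvoiced or noise frame shows no such dip, which is what lets AMDF separate voice from non-voice before the time-difference localization step.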
