• Title/Summary/Keyword: position and orientation

Search Results: 739

Ergonomic Recommendation for Optimum Positions and Warning Foreperiod of Auditory Signals in Human-Machine Interface

  • Lee, Fion C.H.;Chan, Alan H.S.
    • Industrial Engineering and Management Systems / v.6 no.1 / pp.40-48 / 2007
  • This study investigated the optimum positions and warning foreperiod for auditory signals with an experiment on spatial stimulus-response (S-R) compatibility effects. The auditory signals were presented at front-right, front-left, rear-right, and rear-left positions relative to the subjects, whose reaction times and accuracies under different spatial mapping conditions were examined. The results showed a significant spatial S-R compatibility effect: faster and more accurate responses were obtained in the transversely and longitudinally compatible condition, while the worst performance was found when spatial S-R compatibility did not exist in either orientation. The transverse compatibility effect was also found to be significantly stronger than the longitudinal compatibility effect. The effect of signal position was significant, and post hoc tests suggested that the emergency warning alarm should be placed at the front-right position for right-handed users. The warning foreperiod prior to signal presentation was shown to influence reaction time, and a foreperiod of 3 s was found to be optimal for the two-choice auditory reaction task.

A Study on the Photo-realistic 3D City Modeling Using the Omnidirectional Image and Digital Maps (전 방향 이미지와 디지털 맵을 활용한 3차원 실사 도시모델 생성 기법 연구)

  • Kim, Hyungki;Kang, Yuna;Han, Soonhung
    • Korean Journal of Computational Design and Engineering / v.19 no.3 / pp.253-262 / 2014
  • A 3D city model, which consists of 3D building models together with their geospatial position and orientation, is becoming a valuable resource in virtual reality, navigation systems, civil engineering, and other fields. The purpose of this research is to propose a new framework for generating a 3D city model that satisfies visual and physical requirements in a ground-oriented simulation system. At the same time, the framework should meet the demands of automatic creation and cost-effectiveness, which facilitate the usability of the proposed approach. To this end, we suggest a framework that leverages a mobile mapping system, which automatically gathers high-resolution images and supplementary sensor information such as the position and direction of each image. To resolve the problems caused by sensor noise and a large number of occlusions, fusion with digital map data is used. This paper describes the overall framework with its major processes and the recommended or required techniques for each processing step.

DESIGN AND ANALYSIS FOR THE SPECIAL SERIAL MANIPULATOR

  • Kim, Woo-Sub;Park, Jae-Hong;Kim, Jung-Ha
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1396-1401 / 2004
  • In recent years, robots have been used widely in industrial fields, and their use has expanded as a result of continuous research and development toward higher speed and miniaturization. The goal of this paper is to design a special serial manipulator through an understanding of the structure, mobility, and analysis of serial manipulators, and then to control the position and orientation of the end-effector with respect to time. In general, the structure of an industrial robot consists of several links connected in series by various types of joints, typically revolute and prismatic joints. The movement of these joints is determined by inverse kinematic analysis (a minimal sketch follows this entry). Compared to the complicated structure of parallel and hybrid robots, an open-loop system has the characteristic that each link is independent and is easily controlled by an AC servomotor, which places the robot end-effector at the target point with the desired speed and power under a position control algorithm. The robot end-effector should trace the given trajectory within the appropriate time, and the trajectory of a 3D end-effector model built with OpenGL can be displayed simultaneously by the monitoring program.

  • PDF
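The paper's own kinematics code is not reproduced in this listing. As an illustration of the inverse kinematic analysis mentioned in the abstract, the following is a minimal Python sketch for a planar 2-link arm, a hypothetical simplification rather than the authors' special serial manipulator; the link lengths and target point are arbitrary assumptions.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Given a target end-effector position (x, y) and link lengths l1, l2,
    return the joint angles (theta1, theta2) of the elbow-down solution.
    """
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; reject unreachable targets.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target is outside the reachable workspace")
    theta2 = math.acos(c2)                      # elbow-down branch
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Example: place the end-effector at (1.2, 0.5) with unit-length links.
print(two_link_ik(1.2, 0.5, 1.0, 1.0))
```

A real serial manipulator with more joints generally needs the full joint structure and a numerical solver, but the closed-form 2-link case shows the basic mapping from end-effector position to joint angles.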

Design and Experimental Report for the Special 3-D.O.F. Robot Manipulator

  • Moon, Dong-Hee;Lee, Woon-Sung;Kim, Jung-Ha
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.2000-2003 / 2003
  • In recent years, robots have been used widely in industrial fields, and their use has expanded as a result of continuous research and development toward higher speed and miniaturization. The goal of this paper is to design a serial manipulator through kinematic analysis and to control the position and orientation of the end-effector with respect to time. In general, the structure of an industrial robot consists of several links connected in series by various types of joints, typically revolute and prismatic joints. The movement of these joints is determined by inverse kinematic analysis. Compared to the complicated structure of parallel and hybrid robots, an open-loop system has the characteristic that each link is independent and is easily controlled. An AC servo motor is used to place the robot end-effector at the target point with the desired speed and power under a position control algorithm. The robot end-effector should trace the given trajectory within the appropriate time (a timing sketch follows this entry), and the trajectory of the end-effector can be displayed on the monitor of a general personal computer through an OpenGL program.

  • PDF
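The abstract does not say how the trajectory is timed. Purely as an illustration of tracing a trajectory "within the appropriate time", here is a minimal cubic time-scaling sketch for one joint with zero velocity at both ends; the joint values, duration, and sampling rate are assumptions, not the authors' trajectory generator.

```python
def cubic_time_scaling(q0, qf, T, t):
    """Cubic point-to-point trajectory with zero start/end velocity.

    Returns the interpolated joint value at time t in [0, T] between
    a start value q0 and a final value qf.
    """
    s = t / T
    # Smooth 3rd-order blend: s(0)=0, s(1)=1, s'(0)=s'(1)=0.
    blend = 3.0 * s ** 2 - 2.0 * s ** 3
    return q0 + (qf - q0) * blend

# Sample one joint moving from 0 to 1.57 rad over 2 seconds at 10 Hz.
samples = [cubic_time_scaling(0.0, 1.57, 2.0, 0.1 * k) for k in range(21)]
```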

Guidance of Mobile Robot for Inspection of Pipe (파이프 내부검사를 위한 이동로봇의 유도방법)

  • 정규원
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.04a / pp.480-485 / 2002
  • The purpose of this paper is the development of a guidance algorithm for a mobile robot that is used to acquire position and state information about pipe defects such as cracks, damage, and through holes. The data used by the algorithm are range data obtained from a range sensor based on an optical triangulation method. The sensor, which consists of a laser slit beam (LSB) and a CCD camera, measures the 3D profile of the pipe's inner surface. After the range sensor is mounted on the robot, the robot is put into a pipe. While the camera and LSB sensor head are rotated about the robot axis, the laser slit beam is projected onto the inner surface of the pipe and the CCD camera captures the image. From the images, range data are obtained with respect to the sensor coordinate frame through a series of image-processing steps and application of the sensor matrix (a triangulation sketch follows this entry). After the data are transformed into the robot coordinate frame, the position and orientation of the robot are obtained in order to guide it. In addition, by analyzing the data, the 3D shape of the pipe is reconstructed and numerical data for the pipe defects can be found. These data will be used for pipe maintenance and service.

  • PDF
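The paper's sensor matrix and calibration are not reproduced here. The sketch below only illustrates the generic geometry of laser-slit triangulation: the viewing ray through a stripe pixel is intersected with the known laser plane. The intrinsic matrix K and the plane parameters are assumed to come from a prior calibration, and all names are placeholders.

```python
import numpy as np

def slit_beam_range(pixel, K, plane_n, plane_d):
    """Triangulate one 3D point on the laser plane from an image pixel.

    pixel   : (u, v) image coordinates of a laser-stripe point
    K       : 3x3 camera intrinsic matrix
    plane_n : unit normal of the laser-slit plane in camera coordinates
    plane_d : plane offset so that plane_n . X = plane_d on the plane
    Returns the 3D point (sensor frame) where the viewing ray through
    the pixel intersects the laser plane.
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction
    t = plane_d / float(plane_n @ ray)               # ray-plane intersection
    return t * ray                                   # 3D point on the stripe
```

Transforming the returned point into the robot frame would then be a single homogeneous transform using the sensor mounting pose.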

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae;Sung, Chang-Hun;Roh, Hyun-Chul;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • In order for a service robot to perform various tasks, autonomous navigation techniques such as localization, mapping, and path planning are basically required. Localization (estimation of the robot's pose) is a fundamental ability for a service robot to navigate autonomously. In this paper, we propose a new system for point-pattern-matching-based visual global localization using spot lightings in the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition. A single camera looking upward (called a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lightings are detected and tracked through the image sequence. To detect more spot lightings, we choose a wide-FOV lens, which inevitably introduces serious image distortion. However, by applying the distortion correction only to the positions of the spot lightings rather than to all image pixels, we can decrease the processing time. Then, using point pattern matching and least-squares estimation, we can obtain the precise position and orientation of the mobile robot (a pose-estimation sketch follows this entry). Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.

  • PDF
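The point-pattern-matching step itself is not shown, but once observed ceiling lights have been matched to map lights, the least-squares pose mentioned in the abstract can be recovered in closed form. The sketch below uses the standard SVD-based (Kabsch) solution for a 2D rigid transform; it is a generic stand-in, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def estimate_pose_2d(map_pts, obs_pts):
    """Least-squares 2D rigid transform (R, t) aligning observed ceiling-light
    points to their matched map points: map ≈ R @ obs + t.

    map_pts, obs_pts : (N, 2) arrays of matched point pairs
    Returns the 2x2 rotation matrix and 2-vector translation, i.e. the robot
    pose relative to the map up to the sensor mounting offset.
    """
    mu_m, mu_o = map_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_o
    return R, t
```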

Design of a 6-DOF Parallel Haptic Hand Controller Consisting of 5-Bar Linkages and Gimbal Mechanisms (5절링크와 짐벌기구로 구성된 병렬형 6자유도 햅틱 핸드컨트롤러의 설계)

  • Ryu, Dong-Seok;Sohn, Won-Sun;Song, Jae-Bok
    • Transactions of the Korean Society of Mechanical Engineers A / v.27 no.1 / pp.18-25 / 2003
  • A haptic hand controller (HHC) operated by the user's hand can receive information on the position and orientation of the hand and display the force and moment generated in the virtual environment to the hand. In this paper, a 3-DOF hand controller is first presented, in which all the actuators are mounted on the fixed base by combining a 5-bar linkage and a gimbal mechanism. The 6-DOF HHC is then designed by connecting two of these 3-DOF devices through a handle consisting of a screw and nut. Analysis using a performance index is carried out to determine the dimensions of the device. The HHC control system consists of a high-level controller for kinematic and static analysis and a low-level controller for position sensing and motor control. As a simple application, the HHC is used as a user interface to control a mobile robot in a virtual environment.
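The abstract does not specify which performance index was used to determine the dimensions. One common choice in haptic-device design is Yoshikawa's manipulability index computed from the device Jacobian, sketched below purely as an illustration of how such an index is evaluated.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability index w = sqrt(det(J @ J.T)).

    J is the device Jacobian at a given configuration; a larger w means the
    mechanism is farther from a singular configuration there.
    """
    return np.sqrt(np.linalg.det(J @ J.T))
```

In practice the index would be evaluated over a grid of workspace configurations for each candidate set of link dimensions, and the minimum or average value compared across candidates.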

AR-based Tangible Interaction Using a Finger Fixture for Digital Handheld Products (손가락 고정구를 이용한 휴대용 전자제품의 증강현실기반 감각형 상호작용)

  • Park, Hyung-Jun;Moon, Hee-Cheol
    • Korean Journal of Computational Design and Engineering / v.16 no.1 / pp.1-10 / 2011
  • In this paper, we propose AR-based tangible interaction using a finger fixture for virtual evaluation of digital handheld products. To realize tangible interaction between a user and a product in a computer-vision-based AR environment, we use two types of tangible objects: a product-type object and a finger fixture. The product-type object is used to acquire the position and orientation of the product, and the finger fixture is used to recognize the position of a fingertip. The two objects are fabricated by rapid prototyping (RP) technology, and AR markers are attached to them. The finger fixture is designed to satisfy various requirements, with the ultimate goal that a user wearing the finger fixture on his or her index finger can create HMI events by touching specified regions (buttons or sliders) of the product-type object with the fingertip. By assessing the accuracy of the proposed interaction, we found that it can be applied to a wide variety of digital handheld products whose button size is not less than 6 mm. After performing design evaluations of several handheld products using the proposed AR-based tangible interaction, we received highly encouraging feedback from users, since the proposed interaction is intuitive and tangible enough to provide a feeling similar to manipulating real products with the hands.
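The paper's event-generation logic is not given in this listing. The following is a minimal sketch, with assumed names and thresholds, of how a fingertip position measured in camera coordinates could be expressed in the product-object frame and tested against a square button region; the 3 mm half size corresponds to the 6 mm minimum button size reported above, while the touch tolerance is an arbitrary assumption.

```python
import numpy as np

def fingertip_in_product_frame(T_cam_product, p_cam_fingertip):
    """Express the fingertip position (from the finger-fixture marker) in the
    coordinate frame of the product-type object.

    T_cam_product   : 4x4 pose of the product marker in camera coordinates
    p_cam_fingertip : fingertip position in camera coordinates (3-vector)
    """
    p_h = np.append(p_cam_fingertip, 1.0)
    return (np.linalg.inv(T_cam_product) @ p_h)[:3]

def hit_button(p_product, center, half_size=3.0, touch_tol=2.0):
    """Return True (fire an HMI event) if the fingertip lies over a square
    button of the given half size (mm) and close enough to its surface."""
    dx, dy, dz = p_product - center
    return abs(dx) <= half_size and abs(dy) <= half_size and abs(dz) <= touch_tol
```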

Improvement of Smartphone Interface Using AR Marker (AR 마커를 이용한 스마트폰 인터페이스의 개선)

  • Kang, Yun-A;Han, Soon-Hung
    • Korean Journal of Computational Design and Engineering / v.16 no.5 / pp.361-369 / 2011
  • As smartphones have come into wide use recently, they have become increasingly popular not only among young people but among middle-aged people as well. Most smartphones use capacitive full touch screens, so touch commands are made with the fingers, unlike the PDAs of the past that used touch pens. In this case, a significant portion of the smartphone's screen is blocked by the finger, so the area around the touching finger cannot be seen, and precise control of small buttons such as a QWERTY keyboard becomes difficult. To solve this problem, this research proposes a method of using simple AR markers to improve the smartphone interface. A sticker-form marker is attached to the fingernail and placed in front of the smartphone camera. The camera image of the marker is then analyzed to determine the orientation of the marker, which is interpreted as onRelease() or onPress() of a mouse depending on the marker's angle of rotation, while its position is used as the position of the mouse cursor. This method can enable the click, double-click, and drag-and-drop used on PCs as well as the touch, slide, and long-touch input used on smartphones. Through this research, smartphone input can be made more precise and simple, showing the possibility of a new concept of smartphone interface.
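The rotation threshold and event handling are not specified in the abstract; the snippet below is only a hypothetical sketch of mapping the fingernail marker's rotation angle to onPress()/onRelease() events, with the 20-degree threshold an assumption.

```python
PRESS_ANGLE_DEG = 20.0   # hypothetical threshold; the paper's value is not given

def classify_marker_event(rotation_deg, was_pressed):
    """Map the fingernail marker's rotation angle to mouse-like events.

    Tilting the finger past the threshold is treated as onPress(); returning
    below it is treated as onRelease().  The marker's image position is used
    separately as the cursor position.
    """
    pressed = abs(rotation_deg) >= PRESS_ANGLE_DEG
    if pressed and not was_pressed:
        return "onPress", pressed
    if not pressed and was_pressed:
        return "onRelease", pressed
    return None, pressed
```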

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal / v.37 no.4 / pp.766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the motion ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of the target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
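Calibration and tracking details are not given in the abstract. As a generic illustration of the split it describes, the sketch below triangulates the position of the point of interest from two exo-camera views using linear (DLT) triangulation and stacks it with an ego-camera orientation into a single 4x4 pose; the projection matrices and the rotation estimate are assumed inputs, not the paper's actual pipeline.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two exo-camera views.

    P1, P2 : 3x4 projection matrices of the two exo-cameras
    x1, x2 : (u, v) pixel observations of the point of interest
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]      # null vector of A (homogeneous point)
    return X[:3] / X[3]

def compose_pose(R_ego, p_exo):
    """Stack the ego-camera orientation and the exo-triangulated position
    into a single 4x4 pose of the point of interest."""
    T = np.eye(4)
    T[:3, :3] = R_ego
    T[:3, 3] = p_exo
    return T
```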