• Title/Summary/Keyword: Vision Technology


Development and Validation of a Vision-Based Needling Training System for Acupuncture on a Phantom Model

  • Trong Hieu Luu;Hoang-Long Cao;Duy Duc Pham;Le Trung Chanh Tran;Tom Verstraten
    • Journal of Acupuncture Research / v.40 no.1 / pp.44-52 / 2023
  • Background: Previous studies have investigated technology-aided needling training systems for acupuncture on phantom models using various measurement techniques. In this study, we developed and validated a vision-based needling training system (noncontact measurement) and compared its training effectiveness with that of the traditional training method. Methods: Needle displacements during manipulation were analyzed using OpenCV to derive three parameters, i.e., needle insertion speed, needle insertion angle (needle tip direction), and needle insertion length. The system was validated in a laboratory setting and in a needling training course. The performances of the novices (students) before and after training were compared with those of the experts. The technology-aided training method was also compared with the traditional training method. Results: Before the training, a significant difference in needle insertion speed was found between experts and novices. After the training, the novices approached the speed of the experts. Both training methods improved the insertion speed of the novices after 10 training sessions; however, the technology-aided training group already showed improvement after five training sessions. Students and teachers showed positive attitudes toward the system. Conclusion: The results suggest that the technology-aided method using computer vision has similar training effectiveness to the traditional one and can potentially be used to speed up needling training.
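The three parameters named in the abstract (insertion speed, angle, and length) can all be derived from the tracked needle-tip trajectory. The sketch below is not the authors' code: it assumes the tip has already been localized per frame (e.g., with OpenCV), and `needle_metrics`, the frame rate, and the mm-per-pixel calibration factor are illustrative assumptions.

```python
import numpy as np

def needle_metrics(tip_xy, fps, mm_per_px):
    """Derive insertion speed, angle, and length from tracked tip positions.

    tip_xy    : (N, 2) array of needle-tip pixel coordinates, one per frame
    fps       : camera frame rate (frames per second)
    mm_per_px : calibration factor from a known reference in the scene
    """
    tip = np.asarray(tip_xy, dtype=float)
    disp = tip[-1] - tip[0]                       # net tip displacement (px)
    length_mm = np.linalg.norm(disp) * mm_per_px  # needle insertion length
    duration_s = (len(tip) - 1) / fps
    speed_mm_s = length_mm / duration_s           # mean insertion speed
    # tip direction relative to the image x-axis (phantom surface)
    angle_deg = np.degrees(np.arctan2(disp[1], disp[0]))
    return speed_mm_s, angle_deg, length_mm
```

A per-session comparison against expert baselines would then reduce to comparing these three numbers, which matches the evaluation described in the abstract.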

Coordination of dual arm robot using 3-D vision sensor

  • Yoshioka, Izuru;Taguchi, Nobuyoshi;Yeol, Beak-Ju;Wang, Honbo;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1995.10a / pp.400-403 / 1995
  • A robot system is proposed to realize the coordinated motion of a two-arm robot. With a 3-D vision sensor, precise coordinated motions can be realized. Using a sophisticated IC chip, real-time image processing can be executed with a simple circuit.


Monocular 3D Vision Unit for Correct Depth Perception by Accommodation

  • Hosomi, Takashi;Sakamoto, Kunio;Nomura, Shusaku;Hirotomi, Tetsuya;Shiwaku, Kuninori;Hirakawa, Masahito
    • Korean Information Display Society: Conference Proceedings / 2009.10a / pp.1334-1337 / 2009
  • The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth correctly.


Trends in Biomimetic Vision Sensor Technology

  • Lee, Tae-Jae;Park, Yun-Jae;Koo, Kyo-In;Seo, Jong-Mo;Cho, Dong-Il Dan
    • Journal of Institute of Control, Robotics and Systems / v.21 no.12 / pp.1178-1184 / 2015
  • In conventional robotics, charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) cameras have been utilized for acquiring vision information. These devices have problems, such as narrow optic angles and inefficiencies in visual information processing. Recently, biomimetic vision sensors for robotic applications have been receiving much attention. These sensors are more efficient than conventional vision sensors in terms of the optic angle, power consumption, dynamic range, and redundancy suppression. This paper presents recent research trends on biomimetic vision sensors and discusses future directions.

Visualizations of Relational Capital for Shared Vision

  • Russell, Martha G.;Still, Kaisa;Huhtamaki, Jukka;Rubens, Neil
    • World Technopolis Review / v.5 no.1 / pp.47-60 / 2016
  • In today's digital, non-linear global business environment, innovation initiatives are influenced by inter-organizational, political, economic, environmental, and technological systems, as well as by decisions made individually by key actors in these systems. Network-based structures emerge from social linkages and collaborations among various actors, creating innovation ecosystems: complex adaptive systems in which entities co-create value. A shared vision of value co-creation allows people operating individually to arrive together at the same future. Yet relationships are difficult to see, continually changing, and challenging to manage. The Innovation Ecosystem Transformation Framework construct includes three core components to make innovation relationships visible and to articulate networks of relational capital for the wellbeing, sustainability, and business success of innovation ecosystems: data-driven visualizations, storytelling, and shared vision. Access to relational data facilitates building evidence-based visualizations. This has dramatically altered the way leaders can use data-driven analysis to develop insights and provide the ongoing feedback needed to orchestrate relational capital and build a shared vision for high-quality decisions about innovation. Enabled by a shared vision, relational capital can guide decisions that catalyze, support, and sustain an ecosystemic milieu conducive to innovation for business growth.

Smart Vision Sensor for Satellite Video Surveillance Sensor Network

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small-aperture terminals with small smart vision sensors is proposed. Events such as forest fire, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements of high confidence, high hardware endurance, seamless communication, and easy maintenance. To satisfy these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated as a smart vision-sensor-based ultra-small-aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and performance were verified, and practicality was confirmed, through computer simulation and a vision-sensor prototype test.
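The abstract does not disclose the analysis algorithms themselves, but in-field event detection of the kind described (e.g., intruder movement) is commonly bootstrapped from frame differencing. The sketch below is a minimal illustration of that idea, not the paper's embedded algorithm; `detect_motion` and its thresholds are hypothetical.

```python
import numpy as np

def detect_motion(prev, curr, thresh=25, min_area=50):
    """Flag an event when enough pixels change between consecutive frames.

    prev, curr : grayscale frames as 2-D uint8 arrays of equal shape
    thresh     : per-pixel intensity change considered significant
    min_area   : number of changed pixels required to raise an alarm
    """
    diff = np.abs(curr.astype(int) - prev.astype(int))  # signed-safe difference
    changed = int((diff > thresh).sum())                # count changed pixels
    return changed >= min_area, changed
```

In a deployed system, the `min_area` gate is one simple way to suppress the false alarms the abstract mentions, since sensor noise rarely changes a large connected region at once.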

A Study on Visual Feedback Control of a Dual Arm Robot with Eight Joints

  • Lee, Woo-Song;Kim, Hong-Rae;Kim, Young-Tae;Jung, Dong-Yean;Han, Sung-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.610-615 / 2005
  • Visual servoing is the fusion of results from many elemental areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. In this paper, we present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without prior knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method on a dual-arm robot made by Samsung Electronics Co., Ltd.
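The image Jacobian mentioned in the abstract has a well-known closed form for a monocular point feature, and image-based visual servoing then reduces to a pseudoinverse control law. The sketch below uses that classical monocular interaction matrix rather than the paper's stereo formulation, assuming normalized image coordinates, known feature depths Z, and an illustrative gain; all names are assumptions for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z, f=1.0):
    """Classical image Jacobian of one point feature (x, y) at depth Z.

    Maps the camera twist (vx, vy, vz, wx, wy, wz) to image velocity.
    """
    return np.array([
        [-f / Z, 0.0,    x / Z, x * y / f, -(f + x * x / f),  y],
        [0.0,   -f / Z,  y / Z, f + y * y / f, -x * y / f,   -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5, f=1.0):
    """One image-based visual servoing step: v = -lam * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z, f)
                   for (x, y), Z in zip(features, depths)])
    err = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ err  # commanded camera twist (6,)
```

One appeal of the stereo variant described in the abstract is precisely that the depths Z need not be assumed: they can be triangulated, so the Jacobian is exact away from the goal as well.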


Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology / v.28 no.2 / pp.94-100 / 2019
  • Drivable area detection is a major task in advanced driver assistance systems. For drivable area detection, several studies have proposed vision-sensor-based approaches. However, conventional drivable area detection methods that use vision sensors are not suitable for environments with changes in road elevation. In addition, if the boundary between the road and vegetation is not clear, judging a vegetation area as a drivable area becomes a problem. Therefore, this study proposes an accurate method of detecting drivable areas in environments in which road elevations change and vegetation exists. Experimental results show that when compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. In addition, when the proposed vegetation area removal method is applied, the average accuracy and recall are further improved by 6.43%p and 9.68%p, respectively.

Development of the Noise Elimination Algorithm of Stereo-Vision Images for 3D Terrain Modeling

  • Yoo, Hyun-Seok;Kim, Young-Suk;Han, Seung-Woo
    • Korean Journal of Construction Engineering and Management / v.10 no.2 / pp.145-154 / 2009
  • For developing automated equipment in construction, a key issue is developing 3D modeling technology that can automatically recognize environmental objects. For the development of the "Intelligent Excavating System" (IES), research on real-time 3D terrain modeling technology has been underway in Korea since 2006, and a stereo vision system was selected as the optimum technology. However, performance tests in various earthmoving environments showed that the 3D images obtained by stereo vision contained considerable noise. Therefore, in this study, a noise elimination algorithm for stereo-vision images for 3D terrain modeling was developed to remove the noise that is inevitably generated in stereo image matching. The results of this study are expected to be applicable to developing automated equipment used in field environments.
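The abstract does not detail the noise elimination algorithm, but isolated spikes from stereo mismatches in a depth or terrain map are commonly suppressed with a small median filter, which serves here as an illustrative baseline rather than the authors' method. `median_filter3` is an assumed name; edge padding is an assumed boundary choice.

```python
import numpy as np

def median_filter3(depth):
    """3x3 median filter: a simple suppressor of isolated stereo-matching spikes.

    depth : 2-D array, e.g. a disparity or terrain-height map
    """
    padded = np.pad(depth, 1, mode="edge")  # replicate borders
    h, w = depth.shape
    # stack the nine shifted views of the 3x3 neighborhood, then take the median
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

A median (rather than a mean) is the usual choice because a single wildly wrong disparity value shifts a neighborhood mean but leaves the neighborhood median untouched, which is exactly the failure mode of stereo matching noise.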

Resolution improvement of a CMOS vision chip for edge detection by separating photo-sensing and edge detection circuits

  • Kong, Jae-Sung;Suh, Sung-Ho;Kim, Sang-Heon;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology / v.15 no.2 / pp.112-119 / 2006
  • The resolution of an image sensor is a very significant parameter to improve. It is hard to improve the resolution of a CMOS vision chip for edge detection based on a biological retina using a resistive network, because the vision chip contains additional circuits, such as the resistive network and processing circuits, compared with general image sensors such as the CMOS image sensor (CIS). In this paper, we solved the problem of low resolution by separating the photo-sensing and signal-processing circuits. This type of vision chip suffers from low operation speed because the signal-processing circuits must be shared by a row of photo-sensors; this low-speed problem was solved by using a reset decoder. A vision chip for edge detection with a 128 × 128 pixel array was designed and fabricated using 0.35 μm 2-poly 4-metal CMOS technology. The fabricated chip was integrated with an optical lens as a camera system and tested with real images. With this chip, we achieved edge images sufficient for real applications.
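The retina-inspired edge detection that such resistive-network chips perform amounts to a center-surround difference: each pixel's output is its value minus a local average computed by the network. The sketch below is a software analog of that operation for intuition only, not a model of the chip's circuit; `center_surround` and the 4-neighbor average are illustrative assumptions.

```python
import numpy as np

def center_surround(img):
    """Center-surround response: a software analog of a resistive-network retina.

    Returns pixel value minus the mean of its four neighbors, so uniform
    regions map to zero and intensity edges produce a nonzero response.
    """
    p = np.pad(img.astype(float), 1, mode="edge")  # replicate borders
    surround = (p[:-2, 1:-1] + p[2:, 1:-1] +       # up + down neighbors
                p[1:-1, :-2] + p[1:-1, 2:]) / 4.0  # left + right neighbors
    return img - surround
```

This is also why uniform illumination changes matter little to such chips: any constant offset appears in both the center and the surround and cancels in the difference.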