• Title/Abstract/Keywords: vision recognition

Search Results: 1,048

Vision-Based Robot Manipulator for Grasping Objects (물체 잡기를 위한 비전 기반의 로봇 메뉴플레이터)

  • Baek, Young-Min;Ahn, Ho-Seok;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.331-333
    • /
    • 2007
  • The robot manipulator is one of the key components in the service robot field. Until now, there has been a great deal of research on robot manipulators that imitate human functions by recognizing and grasping objects. In this paper, we present a robot arm based on an object recognition vision system. We implemented closed-loop control that uses feedback from visual information, and used a sonar sensor to improve accuracy. We placed a web camera on top of the hand to recognize objects. We also present some vision-based manipulation issues and the features of our system.

  • PDF

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2341-2343
    • /
    • 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor is used as the navigation guide, and a binary image of the guide stripe captured by a CCD camera is processed. To detect the guideline quickly and accurately, we use a variable thresholding algorithm. This low-cost line-tracking system runs efficiently on PC-based real-time vision processing. Steering control is performed by a controller driven by the guide-line angle error. The method is tested on a typical AGV with a single camera in a laboratory environment.

  • PDF
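The variable-thresholding step described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `block` and `offset` parameters are assumptions chosen for the demo.

```python
import numpy as np

def variable_threshold(gray, block=5, offset=10):
    """Mark dark guide-line pixels using a per-pixel threshold taken
    from the local mean of a block x block neighborhood, so the
    binarization adapts to uneven corridor lighting.
    Returns a boolean mask of guide-line pixels."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            # A pixel clearly darker than its neighborhood is guide line.
            mask[y, x] = gray[y, x] < local_mean - offset
    return mask

# Synthetic corridor image: bright floor with a lighting gradient
# and a dark guide stripe in columns 18..21.
img = np.full((20, 40), 200.0)
img += np.linspace(0, 40, 40)   # uneven illumination across the floor
img[:, 18:22] -= 120            # dark guide stripe
line_mask = variable_threshold(img)
```

The guide-line angle error for steering could then be derived from the column centroids of `line_mask` row by row.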

Intelligent Pattern Recognition Algorithms based on Dust, Vision and Activity Sensors for User Unusual Event Detection

  • Song, Jung-Eun;Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.8
    • /
    • pp.95-103
    • /
    • 2019
  • According to Statistics Korea, in 2017 the ten leading causes of death included cardiac disease and self-injury. With these conditions, urgent assistance is required when a person does not move for a certain period of time. We propose an unusual-event detection algorithm that identifies abnormal user behaviors using dust, vision, and activity sensors in the home. Vision sensors detect personalized activity behaviors within CCTV range in the house. The pattern algorithm using the dust sensors classifies user movements and dust-generating daily behaviors in indoor areas. The accelerometer in a smartphone is suitable for identifying the activity behaviors of mobile users. We evaluated the proposed pattern algorithms and their fusion in several scenarios.
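As a toy illustration of the smartphone-accelerometer component, inactivity could be flagged when the acceleration magnitude barely varies over a window. The sampling rate, window length, and threshold below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def inactivity_flags(acc, fs=50, win_s=2.0, std_thresh=0.05):
    """Split a stream of 3-axis accelerometer samples (N x 3, in g)
    into fixed-size windows and flag windows whose magnitude standard
    deviation is below a threshold, i.e. the user is not moving."""
    mag = np.linalg.norm(acc, axis=1)
    win = int(fs * win_s)
    n_win = len(mag) // win
    return np.array([mag[i * win:(i + 1) * win].std() < std_thresh
                     for i in range(n_win)])

# 2 s at rest (pure gravity) followed by 2 s of movement.
rng = np.random.default_rng(1)
still = np.tile([0.0, 0.0, 1.0], (100, 1))
moving = still + rng.normal(0, 0.2, (100, 3))
flags = inactivity_flags(np.vstack([still, moving]))
```

A long run of consecutive flagged windows would then trigger the unusual-event alert.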

A Study on Rotational Alignment Algorithm for Improving Character Recognition (문자 인식 향상을 위한 회전 정렬 알고리즘에 관한 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.11
    • /
    • pp.79-84
    • /
    • 2019
  • Video-image-based technology is being used in various fields and continues to develop. The demand for vision system technology that analyzes and discriminates image objects acquired through cameras is rapidly increasing. Image processing is one of the core technologies of vision systems and is used for defect inspection in semiconductor manufacturing and for object recognition tasks such as reading the numbers and symbols on tire surfaces. Research into license plate recognition is also ongoing, and objects must be recognized quickly and accurately. In this paper, we propose a recognition model for inclined objects, such as numbers or symbols marked on a surface, that checks the tilt angle of the object in the input video image and then rotationally aligns it. The proposed model extracts the object region based on a contour algorithm, calculates the angle of the object, and then performs object recognition on the rotationally aligned image. Future research should study template matching through machine learning.
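A minimal sketch of the angle-then-align idea described above, using the second moments of the object region as a stand-in for the paper's contour-based angle computation:

```python
import numpy as np

def object_angle_deg(mask):
    """Estimate the tilt of a binary object region as the direction of
    its principal axis (eigenvector of the pixel-coordinate covariance
    with the largest eigenvalue), in degrees modulo 180."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals, evecs = np.linalg.eigh(np.cov(pts))
    major = evecs[:, np.argmax(evals)]      # longest-axis direction
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Rasterize a 60 x 16 rectangle tilted by 30 degrees and recover the tilt.
theta = np.radians(30)
yy, xx = np.mgrid[0:100, 0:100]
xr = np.cos(theta) * (xx - 50) + np.sin(theta) * (yy - 50)
yr = -np.sin(theta) * (xx - 50) + np.cos(theta) * (yy - 50)
mask = (np.abs(xr) <= 30) & (np.abs(yr) <= 8)
angle = object_angle_deg(mask)
```

Recognition would then run on the image rotated by the negative of this angle.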

Quantitative evaluation of transfer learning for image recognition AI of robot vision (로봇 비전의 영상 인식 AI를 위한 전이학습 정량 평가)

  • Jae-Hak Jeong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.909-914
    • /
    • 2024
  • This study suggests a quantitative evaluation of transfer learning, which is widely used in various AI fields, including image recognition for robot vision. Quantitative and qualitative analyses of results obtained by applying transfer learning are commonly presented, but transfer learning itself is rarely evaluated. Therefore, this study proposes a quantitative evaluation of transfer learning itself based on MNIST, a handwritten-digit database. For the reference network, the change in recognition accuracy is tracked according to the depth of the frozen layers and the ratio of transfer-learning data to pre-training data. It is observed that when freezing up to the first layer and using a transfer-learning data ratio of more than 3%, a recognition accuracy above 90% can be stably maintained. The proposed quantitative evaluation method can be used to implement transfer learning optimized for the network structure and type of data, and will expand the use of robot vision and image-analysis AI in various environments.
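The frozen-layer mechanism the study evaluates can be illustrated with a deliberately tiny numpy network (this is not the paper's MNIST network; the sizes and learning rate are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# W1 plays the role of a pre-trained feature layer that transfer
# learning freezes; only the task head W2 is updated on the (small)
# transfer data set.
W1 = rng.normal(scale=0.5, size=(4, 8))   # "pre-trained", frozen
W2 = rng.normal(scale=0.5, size=(8, 2))   # task head, trainable

X = rng.normal(size=(32, 4))
y = np.eye(2)[rng.integers(0, 2, size=32)]

W1_before = W1.copy()
for _ in range(100):
    h = np.tanh(X @ W1)                   # frozen features
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Softmax cross-entropy gradient flows only into the unfrozen head.
    W2 -= 0.5 * h.T @ (p - y) / len(X)
```

The study's sweep corresponds to varying how many layers stay in the "W1" role and how much data feeds the update loop.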

A Study on Development and Application of Real Time Vision Algorithm for Inspection Process Automation (검사공정 자동화를 위한 실시간 비전알고리즘 개발 및 응용에 관한 연구)

  • Back, Seung-Hak;Hwang, Won-Jun;Shin, Haeng-Bong;Choi, Young-Sik;Park, Dae-Yeong
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.19 no.1
    • /
    • pp.42-49
    • /
    • 2016
  • This study proposes a robot vision system based on non-contact inspection technology for fault inspection of welding states and part shapes. The main focus is the real-time implementation of automatic inspection of machined parts while the robot is moving. For this purpose, an automatic test instrument inspects precision components designated by the vision system, and pattern recognition technologies for vision inspection distinguish good parts from bad ones based on their machining status and appearance. To realize a real-time integrated automation system for the precision-parts manufacturing process, a robot vision system with an integrated system controller was designed and its reliability verified through experiments. The robot vision technology presented here for non-contact inspection of precision machine parts is useful for factory automation.

A completely non-contact recognition system for bridge unit influence line using portable cameras and computer vision

  • Dong, Chuan-Zhi;Bas, Selcuk;Catbas, F. Necati
    • Smart Structures and Systems
    • /
    • v.24 no.5
    • /
    • pp.617-630
    • /
    • 2019
  • Currently, most vision-based structural identification research focuses either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation. Structural condition assessment at the global level using only the vision-based structural output cannot give a normalized response irrespective of the type and/or load configurations of the vehicles. Combining the vision-based structural input and the structural output from non-contact sensors overcomes this disadvantage while reducing cost, time, and labor, including cable wiring work. In conventional traffic monitoring, traffic closure is sometimes essential for bridge structures, which may cause other severe problems such as traffic jams and accidents. In this study, a completely non-contact structural identification system is proposed, mainly targeting the identification of the bridge unit influence line (UIL) under operational traffic. Both the structural input (vehicle location information) and output (displacement responses) are obtained using only cameras and computer vision techniques. Multiple cameras are synchronized by audio signal pattern recognition. The proposed system is verified with a laboratory experiment on a scaled bridge model under a small moving truck load and a field application on a campus footbridge under a moving golf cart load. The UILs are successfully identified in both cases. The pedestrian loads are also estimated with the extracted UIL, and the predicted pedestrian weights are observed to be within acceptable ranges.

A Method for Improving Object Recognition Using Pattern Recognition Filtering (패턴인식 필터링을 적용한 물체인식 성능 향상 기법)

  • Park, JinLyul;Lee, SeungGi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.122-129
    • /
    • 2016
  • There has been much research on object recognition in computer vision. The SURF (Speeded Up Robust Features) algorithm, based on feature detection, is faster and more accurate than others. However, it can make errors due to feature-point mismatching when extracting feature points. To increase the success rate of object recognition, we created an object recognition system based on the SURF and RANSAC (Random Sample Consensus) algorithms and propose pattern recognition filtering. We also present experimental results showing the enhanced success rate of object recognition.
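The RANSAC filtering stage can be sketched with a deliberately simple motion model (pure translation, one match per hypothesis); a real matcher would typically estimate a homography, and SURF itself is not reproduced here.

```python
import numpy as np

def ransac_filter(src, dst, iters=200, tol=2.0, seed=0):
    """Keep only the feature matches consistent with the best
    translation hypothesis, discarding mismatched feature points.
    src, dst: N x 2 arrays of putatively matched point coordinates."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                       # 1-point hypothesis
        err = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():                # keep best consensus
            best = inliers
    return best

# 20 true matches shifted by (5, -3), plus 8 gross mismatches far away.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, (28, 2))
dst = src + np.array([5.0, -3.0])
dst[20:] = rng.uniform(200, 300, (8, 2))
keep = ransac_filter(src, dst)
```

The surviving matches would then feed the recognition decision, which is the role the paper's pattern recognition filtering plays after SURF matching.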

Vision-Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.6
    • /
    • pp.768-775
    • /
    • 2005
  • Since sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speech-oriented people and sign-language-oriented people. Automated sign-language recognition may resolve these problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There have been research activities on gesture and posture recognition using glove-based devices, but these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling, and the use of colored patches or gloves causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand-region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features with a genetic algorithm, our system achieved accuracy matching that of systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language, achieving better than 93% accuracy.

  • PDF
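The two-dimensional grid analysis mentioned above can be sketched as follows. The grid size and templates are illustrative; the skin-detection step is assumed to have already produced the binary hand mask, and the paper's genetic-algorithm feature weighting is omitted.

```python
import numpy as np

def grid_features(mask, grid=4):
    """Divide the binary hand mask into grid x grid cells and use each
    cell's fill ratio as the posture feature vector; ratios make the
    descriptor tolerant to the overall hand size."""
    h, w = mask.shape
    return np.array([mask[r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid].mean()
                     for r in range(grid) for c in range(grid)])

def classify_posture(mask, templates):
    """Nearest-template classification over the grid features."""
    f = grid_features(mask)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))

# Two synthetic "postures" and a slightly perturbed probe of the first.
a = np.zeros((32, 32)); a[:, :16] = 1.0      # posture A: left half filled
b = np.zeros((32, 32)); b[:16, :] = 1.0      # posture B: top half filled
templates = {"A": grid_features(a), "B": grid_features(b)}
probe = a.copy(); probe[0, 20] = 1.0
```

In the paper's system, each cell's contribution would additionally be scaled by a GA-optimized weight before the distance comparison.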

Test bed for autonomous controlled space robot (우주로봇 자율제어 테스트 베드)

  • 최종현;백윤수;박종오
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.1828-1831
    • /
    • 1997
  • This paper, in order to approximately represent robot motion in space, deals with an algorithm for 2-D position recognition of a space robot, target, and obstacle using a vision system. It also presents algorithms for precise distance measurement and calibration using a laser displacement system, for trajectory selection that optimizes movement toward the object, and for robot locomotion with an air-thrust valve. Software synthesizing these algorithms helps the operator grasp the situation with certainty and perform the job without difficulty.

  • PDF