• Title/Summary/Keyword: Computer Vision system


A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.4103-4117
    • /
    • 2014
  • Image processing and computer vision algorithms are gaining increasing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of more flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. In order to operate in real time, the various vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances the system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
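The EKF fusion at the heart of this abstract can be illustrated with a minimal predict/update cycle. This is a generic linear Kalman step, not the paper's actual multi-camera measurement model; the toy 2-D position state and all noise values are assumptions for illustration.

```python
import numpy as np

def ekf_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a (linearized) Kalman filter.

    x: state estimate (e.g. robot pose), P: its covariance,
    z: stacked feature measurements from the cameras,
    F/Q: motion model and its noise, H/R: measurement model and its noise.
    """
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the (stacked multi-camera) measurement.
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-D pose (x, y) with a direct position measurement.
x = np.zeros(2); P = np.eye(2)
F = np.eye(2); Q = 0.01 * np.eye(2)
H = np.eye(2); R = 0.1 * np.eye(2)
x, P = ekf_step(x, P, np.array([1.0, 0.5]), F, Q, H, R)
```

With multiple non-overlapping cameras, each camera contributes rows to `z` and `H`; parallelism can then be introduced per camera or per feature, which is the level the paper's algorithmic skeletons target.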

Measurement and control of weld pool using vision system (시각장치를 이용한 용융지의 계측과 제어)

  • 박주용;황선효
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1986.10a
    • /
    • pp.527-529
    • /
    • 1986
  • The measurement and control system for the weld pool comprises optical devices, an image processor, a personal computer, and a welding machine. Combinations of ND and infrared filters were used to block the intense arc light and to obtain a clearer image of the weld pool. A smoothing operation and conversion to binary data were performed to eliminate noise and to reduce processing time. A simple feedback control algorithm was developed, and the weld pool size is controlled through the welding current, which is adjusted automatically by the personal computer.
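The two preprocessing steps the abstract names (smoothing, then binary conversion) can be sketched with plain numpy. The 3x3 box filter and the threshold value here are illustrative choices, not taken from the paper.

```python
import numpy as np

def smooth_and_binarize(img, threshold=128):
    """3x3 mean smoothing followed by binary thresholding.

    img: 2-D uint8 array (a grey-level weld pool image);
    the threshold value is illustrative, not from the paper.
    """
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    # 3x3 box filter: average each pixel with its 8 neighbours.
    smoothed = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    # Binary conversion: bright weld pool -> 1, background -> 0.
    return (smoothed >= threshold).astype(np.uint8)

noisy = np.full((8, 8), 200, dtype=np.uint8)
noisy[4, 4] = 0                      # a single dark noise pixel
binary = smooth_and_binarize(noisy)
```

Smoothing first means isolated noise pixels are averaged away before thresholding, so they do not punch holes in the binary weld pool region.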


Survey: The Tabletop Display Techniques for Collaborative Interaction (협력적인 상호작용을 위한 테이블-탑 디스플레이 기술 동향)

  • Kim, Song-Gook;Lee, Chil-Woo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.616-621
    • /
    • 2006
  • Recently, vision-based research on user attention and action awareness has been actively pursued for human-computer interaction. Among such work, various applications of tabletop display systems are being developed around touch sensing techniques and co-located, collaborative work. Whereas earlier systems supported only a single user, current systems support multiple users. Collaborative work and interaction among the four elements (human, computer, displayed objects, physical objects) that constitute the ultimate goal of tabletop displays are therefore becoming realizable. In general, tabletop display systems are designed around four key aspects: 1) multi-touch interaction using bare hands; 2) support for collaborative work and simultaneous user interaction; 3) direct touch interaction; 4) use of physical objects as interaction tools. In this paper, we present a critical analysis of the state of the art in advanced multi-touch sensing techniques for tabletop display systems, organized into four categories: vision-based methods, non-vision-based methods, top-down projection systems, and rear projection systems. We also discuss open problems and practical applications in this research field.


Quality Inspection and Sorting in Eggs by Machine Vision

  • Cho, Han-Keun;Yang Kwon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.834-841
    • /
    • 1996
  • Egg production in Korea is becoming automated on large-scale farms. Although many operations in egg production have been automated, shell defects such as cracks are still regarded as a critical problem. A computer vision system was built to generate images of a single, stationary egg. This system includes a CCD camera, a frame grabber board, a personal computer (IBM PC AT 486), and an incandescent back-lighting system. Image processing algorithms were developed to inspect the egg shell and to sort eggs. The gray level and area of dark spots in the egg image were used as criteria to detect holes in the egg, and the area and roundness of dark spots were used to detect cracks. For a sample of 300 eggs, this system was able to correctly analyze an egg for the presence of a defect 97.5% of the time. The weights of eggs were found to be linear in both the projected area and the perimeter of the egg viewed from above, and those two values were used as criteria to sort eggs. Grading accuracy was found to be 96.7% compared with results from weighing on an electronic scale.
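The defect criteria mentioned here (area and roundness of dark spots) can be sketched from a binary mask. The dark threshold is illustrative, and roundness is taken as 4*pi*area/perimeter^2 (1.0 for a disc), a common definition; the paper does not state its exact formula.

```python
import numpy as np

def dark_spot_features(gray, dark_threshold=60):
    """Area and roundness of the dark region in an egg image.

    gray: 2-D array of grey levels; dark_threshold is illustrative.
    Roundness = 4*pi*area / perimeter**2 is an assumed definition.
    """
    mask = gray < dark_threshold
    area = int(mask.sum())
    # Crude perimeter: mask pixels with at least one background 4-neighbour.
    padded = np.pad(mask, 1)
    neighbours = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    perimeter = int((mask & (neighbours < 4)).sum())
    roundness = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return area, roundness

img = np.full((20, 20), 200)
img[5:15, 5:15] = 0                 # 10x10 dark patch standing in for a defect
area, roundness = dark_spot_features(img)
```

A compact hole scores high on roundness while an elongated crack scores low, which is presumably why the paper uses both area and roundness to separate the two defect types.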


A Real-time Vehicle Localization Algorithm for Autonomous Parking System (자율 주차 시스템을 위한 실시간 차량 추출 알고리즘)

  • Hahn, Jong-Woo;Choi, Young-Kyu
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.2
    • /
    • pp.31-38
    • /
    • 2011
  • This paper introduces a video-based traffic monitoring system for detecting vehicles and obstacles on the road. To segment moving objects from the image sequence, we adopt a background subtraction algorithm based on local binary patterns (LBP). LBP-based texture analysis techniques have recently become popular tools for various machine vision applications such as face recognition and object classification. In this paper, we adopt an extension of LBP, called the Diagonal LBP (DLBP), to handle the background subtraction problem arising in vision-based autonomous parking systems. It reduces the code length of LBP by half and lowers the computational complexity drastically. Edge-based shadow removal and a blob merging procedure are also applied to the foreground blobs, and a pose estimation technique is utilized to calculate the position and heading angle of the moving object precisely. Experimental results show that our system works well for real-time vehicle localization and tracking applications.
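One plausible reading of the half-length code idea is to compare each neighbour with its diagonally opposite one instead of comparing all eight neighbours with the centre, yielding a 4-bit code. This sketch follows that reading; the exact DLBP definition in the cited paper may differ.

```python
import numpy as np

def diagonal_lbp(patch):
    """4-bit Diagonal LBP code for a 3x3 patch.

    Classic LBP compares all 8 neighbours with the centre (8 bits);
    here each neighbour is compared with its diagonally opposite one,
    halving the code length to 4 bits.
    """
    # Neighbours clockwise from top-left.
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        # Bit i: is neighbour i at least as bright as its opposite (i+4)?
        code |= (ring[i] >= ring[i + 4]) << i
    return code

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
code = diagonal_lbp(patch)
```

For background subtraction, a per-pixel histogram of such codes over a time window models the background texture; a pixel whose current code distribution deviates from its model is flagged as foreground.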

Plant Species Identification based on Plant Leaf Using Computer Vision and Machine Learning Techniques

  • Kaur, Surleen;Kaur, Prabhpreet
    • Journal of Multimedia Information System
    • /
    • v.6 no.2
    • /
    • pp.49-60
    • /
    • 2019
  • Plants are crucial for life on Earth. There is a wide variety of plant species, and the number grows every year. Species knowledge is a necessity for various groups of society, such as foresters, farmers, environmentalists, and educators, which makes species identification an interdisciplinary interest. It requires expert knowledge, however, and becomes a tedious and challenging task for non-experts who have little or no knowledge of the typical botanical terms. The advancements in machine learning and computer vision can help make this task comparatively easier. No system yet exists that can identify all plant species, but some efforts have been made, and in this study we make such an attempt. Plant identification usually involves four steps: image acquisition, pre-processing, feature extraction, and classification. In this study, images from the Swedish leaf dataset have been used, which contains 1,125 images of 15 different species. Pre-processing was performed with a Gaussian filtering mechanism, and then texture and color features were extracted. Finally, classification was done using a multiclass support vector machine, which achieved an accuracy of 93.26%, which we aim to enhance further.
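The extract-features-then-classify pipeline can be sketched end to end. The per-channel mean/std descriptor and the nearest-centroid classifier below are deliberately simple stand-ins for the paper's colour/texture features and multiclass SVM; the synthetic "species" data is invented for the demo.

```python
import numpy as np

def color_texture_features(img):
    """Per-channel mean and standard deviation as a simple
    colour/texture descriptor (a stand-in for the paper's features)."""
    chans = img.reshape(-1, img.shape[-1]).astype(np.float64)
    return np.concatenate([chans.mean(axis=0), chans.std(axis=0)])

class NearestCentroid:
    """Minimal classifier standing in for the multiclass SVM."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels])
        return self

    def predict(self, x):
        d = np.linalg.norm(self.centroids - x, axis=1)
        return self.labels[int(np.argmin(d))]

# Two synthetic "species": dark leaves vs. bright leaves.
rng = np.random.default_rng(0)
dark = [color_texture_features(rng.integers(0, 100, (8, 8, 3))) for _ in range(5)]
bright = [color_texture_features(rng.integers(150, 255, (8, 8, 3))) for _ in range(5)]
clf = NearestCentroid().fit(dark + bright, [0] * 5 + [1] * 5)
pred = clf.predict(color_texture_features(rng.integers(0, 100, (8, 8, 3))))
```

Swapping in an SVM (e.g. scikit-learn's `SVC`) and richer texture features changes only the two stand-in pieces; the four-step pipeline structure stays the same.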

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.12C
    • /
    • pp.1175-1180
    • /
    • 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems utilize a simpler method for hand detection such as background subtractions with assumed static observation conditions and those methods are not robust against camera motions, illumination changes, and so on. Therefore, we propose a statistical method to recognize and detect hand regions in images using geometrical structures. Also, Our hand tracking system employs multiple cameras to reduce occlusion problems and non-synchronous multiple observations enhance system scalability. In this experiment, the proposed method has recognition rate of $99.28\%$ that shows more improved $3.91\%$ than the conventional appearance method.

Development and Validation of a Vision-Based Needling Training System for Acupuncture on a Phantom Model

  • Trong Hieu Luu;Hoang-Long Cao;Duy Duc Pham;Le Trung Chanh Tran;Tom Verstraten
    • Journal of Acupuncture Research
    • /
    • v.40 no.1
    • /
    • pp.44-52
    • /
    • 2023
  • Background: Previous studies have investigated technology-aided needling training systems for acupuncture on phantom models using various measurement techniques. In this study, we developed and validated a vision-based needling training system (non-contact measurement) and compared its training effectiveness with that of the traditional training method. Methods: Needle displacements during manipulation were analyzed using OpenCV to derive three parameters, i.e., needle insertion speed, needle insertion angle (needle tip direction), and needle insertion length. The system was validated in a laboratory setting and in a needling training course. The performances of the novices (students) before and after training were compared with those of the experts, and the technology-aided training method was compared with the traditional training method. Results: Before the training, a significant difference in needle insertion speed was found between experts and novices. After the training, the novices approached the speed of the experts. Both training methods improved the insertion speed of the novices after 10 training sessions; however, the technology-aided training group already showed improvement after five training sessions. Students and teachers showed positive attitudes toward the system. Conclusion: The results suggest that the technology-aided method using computer vision has training effectiveness similar to the traditional one and can potentially be used to speed up needling training.
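Given tracked needle-tip positions over time, the three parameters the abstract names reduce to simple geometry. This is a simplified take assuming 2-D tip coordinates in millimetres; the paper's OpenCV tracking and exact definitions may differ.

```python
import numpy as np

def needle_parameters(positions, timestamps):
    """Insertion speed, tip-direction angle, and insertion length
    from tracked needle-tip positions (x, y in mm).
    """
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    disp = p[-1] - p[0]
    length = float(np.linalg.norm(disp))                     # insertion length (mm)
    speed = length / (t[-1] - t[0])                          # mean speed (mm/s)
    angle = float(np.degrees(np.arctan2(disp[1], disp[0])))  # tip direction (deg)
    return speed, angle, length

# Tip moves 30 mm straight along the y axis in 0.5 s.
speed, angle, length = needle_parameters([(0, 0), (0, 15), (0, 30)],
                                         [0.0, 0.25, 0.5])
```

Comparing these per-insertion numbers between novices and experts is what lets the system quantify training progress session by session.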

Computer Simulation for Gradual Yellowing of Aged Lens and Its Application for Test Devices

  • Kim, Bog G.;Han, Jeong-Won;Park, Soo-Been
    • Journal of the Optical Society of Korea
    • /
    • v.17 no.4
    • /
    • pp.344-349
    • /
    • 2013
  • This paper proposes a simulation algorithm to assess the gradual yellowing of vision in the elderly, which refers to the predominance of yellowness in their vision due to aging of the ocular optic media. The algorithm employs the spectral transmittance of a yellow filter to represent the color appearance perceived by elderly people with yellow vision, and models the changes in the color space through the change in the light spectrum caused by the yellow filter effect. The spectral reflectivity data of 1269 Munsell matte color chips were used as reference data. Under the standard conditions of a D65 illuminant and the 1964 CIE 10° observer, the spectra of the 1269 Munsell colors were processed through the yellow filter effect to simulate yellow vision. Various degrees of yellow vision were modeled according to the transmittance percentage of the yellow filter. The color differences before and after the yellow filter effect were calculated using the DE2000 formula, and color pairs were selected based on the color difference function. These color pairs are distinguishable by normal vision, but the color difference diminishes as the degree of yellow vision increases. Assuming an 80% yellow vision effect, 17 color pairs out of (1269×1268)/2 pairs were selected, and for a 90% yellow vision effect, only 3 color pairs were selected. The results of this study can be utilized in diagnosis systems for gradual yellow vision, producing various types of test charts from the selected color pairs.
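The pair-selection logic (distinguishable before the filter, indistinguishable after) can be sketched as follows. This is a heavily simplified illustration: plain RGB-like triplets with Euclidean distance stand in for the paper's full spectral pipeline and DE2000 color-difference formula, and both thresholds and colors are invented.

```python
import numpy as np

def yellow_filter_pairs(colors, transmittance, visible=2.3, hidden=1.0):
    """Select colour pairs distinguishable in normal vision but not
    after a simulated yellow filter.

    colors: (N, 3) array of RGB-like triplets; transmittance: per-channel
    yellow-filter transmittance (blue strongly attenuated).  Euclidean
    distance stands in for the DE2000 formula used in the paper.
    """
    filtered = colors * transmittance          # yellow filter effect
    pairs = []
    for i in range(len(colors)):
        for j in range(i + 1, len(colors)):
            before = np.linalg.norm(colors[i] - colors[j])
            after = np.linalg.norm(filtered[i] - filtered[j])
            if before > visible and after < hidden:
                pairs.append((i, j))
    return pairs

colors = np.array([[10.0, 10.0, 30.0],   # bluish
                   [10.0, 10.0, 33.0],   # slightly more blue
                   [30.0, 10.0, 10.0]])  # reddish
# Strong yellow filter: blue channel almost fully absorbed.
pairs = yellow_filter_pairs(colors, np.array([1.0, 1.0, 0.1]))
```

Only the two bluish chips collapse together under the filter, mirroring how the paper's selected pairs become test-chart candidates for diagnosing yellow vision.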

Image Processing Algorithm for Weight Estimation of Dairy Cattle (젖소 체중추정을 위한 영상처리 알고리즘)

  • Seo, Kwang-Wook;Kim, Hyeon-Tae;Lee, Dae-Weon;Yoon, Yong-Cheol;Choi, Dong-Yoon
    • Journal of Biosystems Engineering
    • /
    • v.36 no.1
    • /
    • pp.48-57
    • /
    • 2011
  • A computer vision system was designed and constructed to measure the weight of dairy cattle. Its development integrated image capture, image preprocessing, an image processing algorithm, and control into one program. Experiments were conducted in two ways, with a model dairy cow and with real dairy cattle. The first experiment, with the model cow, used an indoor vision system built to measure the model in the laboratory. The second experiment, with real dairy cattle, used an outdoor vision system built to measure 229 cows in cattle facilities. This vision system proved reliable in a performance test with 15 real cows in the cattle facilities. Indirect weight estimation by four methods was conducted using the same image processing system used to measure body parameters. The error of the transformation equation based on chest girth was 30%; this error was attributed to accumulated error from manual measurement, so it was not appropriate to estimate cow weight with a transformation equation calculated from pixel values of the chest girth. Weight estimated by a multiple regression equation from top- and side-view images had a relatively small error of 5%. When weight was estimated indirectly from the image surface area in pixels of the top and side views, the maximum error was 11.7%. When weight was estimated from image volume, the maximum weight error was 57 kg; in general, the weight error was within 30 kg, with a maximum error of 10.7%. Of the four methods, the volume transformation method had the minimum weight error, 21.8 kg.
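The best-performing idea here, a multiple regression from top- and side-view measurements, can be sketched with a least-squares fit. The linear model form and all calibration numbers below are synthetic assumptions, not the paper's coefficients.

```python
import numpy as np

def fit_weight_model(top_areas, side_areas, weights):
    """Least-squares fit of weight ~ a*top_area + b*side_area + c,
    mirroring the multiple-regression approach from top- and
    side-view image areas (in pixels)."""
    X = np.column_stack([top_areas, side_areas, np.ones(len(weights))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(weights, float), rcond=None)
    return coef

def predict_weight(coef, top_area, side_area):
    return coef[0] * top_area + coef[1] * side_area + coef[2]

# Synthetic calibration data generated as weight = 0.2*top + 0.1*side + 50.
top = np.array([1000.0, 1200.0, 1400.0, 1600.0])
side = np.array([800.0, 900.0, 1100.0, 1300.0])
w = 0.2 * top + 0.1 * side + 50
coef = fit_weight_model(top, side, w)
pred = predict_weight(coef, 1300.0, 1000.0)
```

In practice the coefficients would be fitted against scale-measured weights of the 229 cows, after which weight follows from two camera views alone.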