• Title/Summary/Keyword: camera image

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.4 / pp.358-365 / 2017
  • In this paper, we propose an object recognition method using image information to improve the efficiency of visual servoing for industrial manipulators. This is an image-processing method that responds in real time to abnormal situations or to external environmental changes in a work object by utilizing the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies Otsu thresholding to the V channel, which contains the color information, and the S channel, from which the background is easy to separate, of the HSV color space. Through this study, when the work object is not placed in the correct position or is twisted because of external factors, its position is calculated and provided to the industrial manipulator.

A Method for Effective Homography Estimation Applying a Depth Image-Based Filter (깊이 영상 기반 필터를 적용한 효과적인 호모그래피 추정 방법)

  • Joo, Yong-Joon;Hong, Myung-Duk;Yoon, Ui-Nyoung;Go, Seung-Hyun;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering / v.8 no.2 / pp.61-66 / 2019
  • Augmented reality is a technology that makes a virtual object appear as if it exists in reality by compositing it in real time with the image captured by a camera. To augment a virtual object on an object existing in reality, the homography of the images is utilized to estimate the position and orientation of the object. The homography can be estimated by applying the RANSAC algorithm to the feature points of the images, but this approach cannot estimate an accurate homography when there are many feature points in the background. In this paper, we propose a method to filter out background feature points when the object is near and the background is relatively far away. We first classify the depth image into a relatively near region and a distant region using Otsu's method, and improve homography estimation performance by filtering out the feature points in the relatively distant area. In experiments, the processing time was shortened by 71.7% compared to the conventional homography estimation method, the number of iterations of the RANSAC algorithm was reduced by 69.4%, and the inlier rate was increased by 16.9%.
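The background-filtering idea can be sketched as follows: split the depth image with Otsu's method and discard feature points that land on the far region before homography estimation. This is a minimal NumPy sketch under the assumption that larger depth values mean "closer" (disparity-style); the toy scene and keypoints are invented.

```python
import numpy as np

def split_depth_otsu(depth: np.ndarray) -> np.ndarray:
    """Binary mask that is True for the 'near' region of an 8-bit depth map."""
    hist = np.bincount(depth.ravel(), minlength=256).astype(np.float64) / depth.size
    omega = np.cumsum(hist)
    mu = np.cumsum(hist * np.arange(256))
    denom = omega * (1 - omega)
    denom[denom == 0] = np.finfo(float).eps
    t = int(np.argmax((mu[-1] * omega - mu) ** 2 / denom))
    return depth > t  # assumes larger values = closer (disparity-style map)

def filter_keypoints(keypoints: np.ndarray, near_mask: np.ndarray) -> np.ndarray:
    """Keep only (x, y) keypoints that fall on the near (object) region."""
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    return keypoints[near_mask[ys, xs]]

# Toy scene: left half far (low values), right half near (high values).
depth = np.zeros((100, 100), dtype=np.uint8)
depth[:, :50] = 30    # distant background
depth[:, 50:] = 200   # object near the camera
near = split_depth_otsu(depth)

kps = np.array([[10, 10], [70, 20], [40, 80], [90, 90]], dtype=float)
kept = filter_keypoints(kps, near)
# Only the keypoints on the near half survive; these would then be passed
# to the RANSAC step (e.g. cv2.findHomography) for homography estimation.
```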

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences / v.32 no.7 / pp.1015-1026 / 2019
  • Objective: The objective of this study was to evaluate three degrees of white striping (WS), addressing both their automatic assessment and consumer acceptance. WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and was complemented by consumer acceptance and purchase-intent tests. Methods: The samples for image analysis were classified by trained specialists into severity degrees according to visual and firmness aspects. Images were obtained with a digital camera, and 25 features were extracted from them. ML algorithms were applied to induce a model capable of classifying the samples into the three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information-gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate on normal (NORM) sample predictions. Conclusion: The proposed system proved to be adequate (fast and accurate) for the classification of WS samples. The sensory analysis of acceptance showed that WS myopathy negatively affects the tenderness of the broiler breast fillets when grilled, while the appearance attribute of the raw samples influenced the purchase-intention scores of raw samples.
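The feature ranking mentioned in the Results uses information gain; a minimal, pure-Python version of that metric is sketched below on hypothetical discretized features (the real study used 25 image features and three WS severity classes).

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(class; feature) = H(class) - sum_v p(v) * H(class | feature = v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Hypothetical discretized texture feature vs. WS severity class.
labels = ["NORM", "NORM", "MOD", "MOD", "SEV", "SEV"]
feat_a = ["low",  "low",  "mid", "mid", "high", "high"]  # perfectly informative
feat_b = ["x",    "y",    "x",   "y",   "x",    "y"]     # uninformative

ig_a = information_gain(feat_a, labels)  # equals H(labels) = log2(3)
ig_b = information_gain(feat_b, labels)  # ~0: feature tells nothing about class
```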

Development of Surface Velocity Measurement Technique without Reference Points Using UAV Image (드론 정사영상을 이용한 무참조점 표면유속 산정 기법 개발)

  • Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure / v.8 no.1 / pp.22-31 / 2021
  • Surface image velocimetry (SIV) is a noncontact velocimetry technique based on images. Recently, studies have been conducted on surface velocity measurements using drones to measure a wide range of velocities and discharges. However, when measuring the surface velocity using a drone, reference points must be included in the image for image correction and the calculation of the ground sample distance, which limits the flight altitude and shooting area of the drone. A technique for calculating the surface velocity that does not require reference points must be developed to maximize spatial freedom, which is the advantage of velocity measurements using drone images. In this study, a technique for calculating the surface velocity that uses only the drone position and the specifications of the drone-mounted camera, without reference points, was developed. To verify the developed surface velocity calculation technique, surface velocities were calculated at the Andong River Experiment Center and then measured with a FlowTracker. The surface velocities measured by conventional SIV using reference points and those calculated by the developed SIV method without reference points were compared. The results confirmed an average difference of approximately 4.70% from the velocity obtained by the conventional SIV and approximately 4.60% from the velocity measured by FlowTracker. The proposed technique can accurately measure the surface velocity using a drone regardless of the flight altitude, shooting area, and analysis area.
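The core of a reference-point-free conversion from pixels to ground distance is the ground sample distance (GSD) derived from the camera model. The sketch below uses the standard nadir-camera GSD formula with hypothetical drone and camera values; the paper's exact formulation may differ.

```python
def ground_sample_distance(altitude_m: float, sensor_width_mm: float,
                           focal_length_mm: float, image_width_px: int) -> float:
    """Ground distance covered by one pixel (m/px) for a nadir-pointing camera."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def surface_velocity(pixel_displacement: float, gsd_m_per_px: float,
                     frame_interval_s: float) -> float:
    """Convert an image-space displacement (px per frame) to m/s."""
    return pixel_displacement * gsd_m_per_px / frame_interval_s

# Hypothetical drone position and camera specifications (not from the paper).
gsd = ground_sample_distance(altitude_m=50.0, sensor_width_mm=13.2,
                             focal_length_mm=8.8, image_width_px=3840)
# gsd = 0.01953125 m per pixel
v = surface_velocity(pixel_displacement=2.0, gsd_m_per_px=gsd,
                     frame_interval_s=1 / 30)
```

Because only the drone's altitude and the camera specifications appear in the formula, no ground reference points are needed, which is the spatial freedom the paper aims for.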

A Study on the Comparison of Detected Vein Images by NIR LED Quantity of Vein Detector (정맥검출기의 NIR LED 수량에 따른 검출된 정맥 이미지 비교에 관한 연구)

  • Jae-Hyun, Jo;Jin-Hyoung, Jeong;Seung-Hun, Kim;Sang-Sik, Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.6 / pp.485-491 / 2022
  • Intravenous (IV) injection is the most frequent invasive procedure for inpatients and is widely used to administer parenteral nutrition and blood products; more than 1 billion peripheral catheter insertions, blood collections, and other IV procedures are performed per year. IV injection is a difficult procedure that should be performed only by nurses trained in it, and failure can lead to thrombosis, hematoma, or nerve damage. Accordingly, studies on auxiliary equipment that visualizes the vein structure of the back of the hand or the arm are being published to reduce errors during IV injection. This study examines the performance difference according to the number of LEDs irradiating the 850 nm wavelength band in a vein detector that visualizes veins during IV injection. The detector irradiates the skin, acquires images through CCD and CMOS camera lenses fitted with NIR filters, sharpens the acquired images using image-processing algorithms, and projects the sharpened images back onto the skin. Four LED PCBs were produced; each PCB was attached in turn to the front end of the vein detector to capture vein images, and a performance-comparison questionnaire based on the captured images was created for performance evaluation. The survey was conducted with 20 nurses working at K Hospital.
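The abstract does not name the sharpening algorithm; as one plausible stand-in, the sketch below applies unsharp masking (adding back the high-frequency residual) in NumPy to a toy NIR-like ramp image. Both the algorithm choice and the toy image are assumptions for illustration.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img: np.ndarray, amount: float = 1.0, k: int = 3) -> np.ndarray:
    """Sharpen by adding back the high-frequency residual (img - blur)."""
    blurred = box_blur(img, k)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Toy NIR frame: a soft horizontal ramp standing in for a vein boundary.
img = np.tile(np.linspace(40, 200, 16), (16, 1)).astype(np.uint8)
out = unsharp_mask(img, amount=1.5)
```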

A of Radiation Field with a Developed EPID

  • Y.H. Ji;Lee, D.H.;Lee, D.H.;Y.K. Oh;Kim, Y.J.;C.K. Cho;Kim, M.S.;H.J. Yoo;K.M. Yang
    • Proceedings of the Korean Society of Medical Physics Conference / 2003.09a / pp.67-67 / 2003
  • It is crucial to minimize the setup errors of a high-energy cancer treatment machine in order to perform precise radiation therapy. Usually, port film has been used for verifying such errors. The Korea Cancer Center Hospital (KCCH) has manufactured a digital electronic portal imaging device (EPID) system to verify treatment-machine errors as a quality assurance (QA) tool. The EPID consisted of a metal/fluorescent screen, a 45° mirror, a camera, and an image grabber, and could display the portal image in near real time. KIRAMS also made an acrylic phantom with 1 mm wide lead lines for light/radiation field congruence verification, and a reference-point phantom for use as an isocenter on the portal image. We acquired portal images of a 10×10 cm field with this phantom using the EPID and portal film, rotating the treatment head by 0.3°, 0.6°, and 0.9°. To check field size, we acquired portal images with 18×18 cm, 19×19 cm, and 20×20 cm field sizes at collimator angles of 0° and 0.5° individually. We performed a flatness comparison by displaying the line intensity from the EPID and film images. A 0.6° shift of the collimator angle was easily observed by edge detection of the irradiated field on the EPID image; a difference as small as one pixel (0.76 mm) could be detected. We also measured the field size by finding the optimal threshold value, finding the isocenter, finding the field edge, and gauging the distance between the isocenter and the edge. This video-based EPID system could be used as a QA tool for checking field size, light/radiation congruence, and flatness.
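The field-size measurement described (threshold, find the field edge, gauge the distance) can be sketched as a one-row profile measurement using the 0.76 mm pixel pitch quoted in the abstract; the portal image here is synthetic and the function names are invented.

```python
import numpy as np

def field_width_mm(portal: np.ndarray, threshold: float,
                   pixel_mm: float = 0.76) -> float:
    """Estimate the irradiated-field width along the row through the image center.

    Thresholds the portal image, finds the left/right field edges on the
    central row, and converts the pixel distance to millimetres.
    """
    h, _ = portal.shape
    row = portal[h // 2] > threshold      # binary profile through the center
    cols = np.flatnonzero(row)
    if cols.size == 0:
        return 0.0
    return float((cols[-1] - cols[0] + 1) * pixel_mm)

# Toy portal image: a bright 100-pixel-wide field on a dark background.
img = np.zeros((200, 200))
img[50:150, 60:160] = 1000.0
width = field_width_mm(img, threshold=500.0)  # 100 px * 0.76 mm/px
```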

Data Augmentation for Tomato Detection and Pose Estimation (토마토 위치 및 자세 추정을 위한 데이터 증대기법)

  • Jang, Minho;Hwang, Youngbae
    • Journal of Broadcast Engineering / v.27 no.1 / pp.44-55 / 2022
  • In order to automatically provide information on fruits in agriculture-related broadcasting content, instance segmentation of the target fruits is required, and information on the 3D pose of each fruit can also be used meaningfully. This paper presents research that provides such information about tomatoes in video content. A large amount of data is required to learn instance segmentation, but sufficient training data is difficult to obtain, so the training data is generated through a data augmentation technique based on a small number of real images. Compared to the result using only the real images, detection performance improves when learning from synthesized images created by separating the foreground and background. Learning from images augmented with conventional image pre-processing techniques yielded even higher performance than the foreground/background-separated synthetic images. To estimate the pose from the object detection result, a point cloud is obtained using an RGB-D camera; cylinder fitting based on least-squares minimization is then performed, and the tomato pose is estimated from the axial direction of the cylinder. Various experiments show that detection, instance segmentation, and cylinder fitting of the target object work effectively.
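The foreground/background synthesis described above can be sketched as a copy-paste compositing step that also produces the per-pixel mask needed for instance-segmentation training. All images and the mask below are synthetic placeholders, not the paper's data.

```python
import numpy as np

def composite(fg, mask, bg, top, left):
    """Paste a masked foreground crop onto a background at (top, left).

    fg : h x w x 3 uint8 crop; mask : h x w boolean foreground mask;
    bg : H x W x 3 uint8 background. Returns the composited image plus the
    translated mask (the per-pixel label for instance-segmentation training).
    """
    out = bg.copy()
    h, w = fg.shape[:2]
    region = out[top:top + h, left:left + w]
    region[mask] = fg[mask]               # writes through the view into out
    new_mask = np.zeros(bg.shape[:2], dtype=bool)
    new_mask[top:top + h, left:left + w] = mask
    return out, new_mask

rng = np.random.default_rng(1)
bg = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)

# Hypothetical 16x16 "tomato" crop with a circular mask.
fg = np.zeros((16, 16, 3), dtype=np.uint8)
fg[..., 0] = 220  # mostly red
yy, xx = np.mgrid[:16, :16]
mask = (yy - 8) ** 2 + (xx - 8) ** 2 <= 49

aug, aug_mask = composite(fg, mask, bg, top=10, left=20)
```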

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering / v.27 no.1 / pp.3-12 / 2022
  • Image deblurring aims to remove blur that can be generated while shooting pictures by the movement of objects, camera shake, defocus, and so forth. With the rise in popularity of smartphones, carrying portable digital cameras daily has become common, so image deblurring techniques have recently become more significant. Image deblurring was originally studied with traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on accuracy, so their speed makes them difficult to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference process, increasing the model's performance without any significant effect on its speed, and its speed without any significant effect on its performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution, and shows excellent performance relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
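PSNR, one of the two metrics reported, can be computed directly; the sketch below is the standard definition applied to a synthetic image pair (SSIM, the other metric, needs a windowed computation and is omitted here).

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray,
         max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic pair: a "sharp" frame and a lightly corrupted version of it.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(sharp.astype(np.int16) + rng.integers(-5, 6, size=(64, 64)),
                0, 255).astype(np.uint8)

score = psnr(sharp, noisy)
```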

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • The quality control of coarse aggregate, a main ingredient of concrete, is currently carried out by statistical process control (SPC) through sampling. We build toward a smart factory for manufacturing innovation by replacing the current sieve analysis with an inspection of coarse aggregates based on images acquired through a camera. First, the acquired images are preprocessed, and HED (Holistically-Nested Edge Detection), a filter learned by deep learning, segments each object. After analyzing each aggregate by image-processing the segmentation result, the fineness modulus and the aggregate shape rate are determined. The quality of the aggregate examined through the video was compared with that obtained by sieve analysis, and the accuracy of the algorithm was more than 90%. Furthermore, while the aggregate shape rate could not be examined by conventional methods, the method in this paper also allows its measurement. The aggregate shape rate was verified against the lengths of models, showing a difference of ±4.5%; when measuring the length of the aggregate, the algorithm result and the actual length showed a ±6% difference. Analyzing actual three-dimensional objects in a two-dimensional video introduces differences from the actual data, which requires further research.
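The fineness modulus derived from the segmented aggregates follows the standard definition: the sum of the cumulative percentages retained on the standard sieve stack, divided by 100. The sieve values below are hypothetical, not the paper's data.

```python
def fineness_modulus(cumulative_retained_pct) -> float:
    """Fineness modulus: sum of cumulative % retained on standard sieves / 100."""
    return sum(cumulative_retained_pct) / 100.0

# Hypothetical cumulative % retained on a standard sieve stack
# (e.g. 75, 37.5, 19, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15 mm).
cumulative = [0, 0, 10, 45, 90, 95, 97, 98, 99, 100]
fm = fineness_modulus(cumulative)  # 634 / 100 = 6.34
```

A larger modulus means a coarser gradation; comparing this value against the sieve-analysis result is how the paper's accuracy figure is obtained.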

Research and improvement of image analysis and bar code and QR recognition technology for the development of visually impaired applications (시각장애인 애플리케이션 개발을 위한 이미지 분석과 바코드, QR 인식 기술의 연구 및 개선)

  • MinSeok Cho;MinKi Yoon;MinSu Seo;YoungHoon Hwang;Hyun Woo;WonWhoi Huh
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.861-866 / 2023
  • Individuals with visual impairments face difficulties in accessing accurate information about medical services and medications, making it challenging for them to take medications correctly. While there are healthcare laws addressing this issue, there is a lack of standardized solutions, and not all over-the-counter medications are covered. Therefore, we have designed a mobile application that uses image recognition, barcode scanning, and QR code recognition to provide guidance on how to take over-the-counter medications, filling the existing gaps for visually impaired individuals. Currently available applications allow visually impaired users to access information about medications, but they still require the user to remember which specific medication they are taking, which poses a significant challenge. In this research, we optimize the camera capture environment and the user interface (UI) and user experience (UX) screens for image recognition, ensuring greater accessibility and convenience for visually impaired individuals. By implementing our findings in the application, we aim to help visually impaired individuals learn the correct methods for taking over-the-counter medications.
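One way to "optimize the camera capture environment" in such an app is a cheap pre-check that rejects frames that are too dark, too bright, or too flat before handing them to a barcode/QR decoder, so the user can be prompted (e.g. by voice) to re-aim the camera. The heuristic and its thresholds below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def frame_ok_for_decoding(gray: np.ndarray,
                          min_mean: float = 60.0,
                          max_mean: float = 200.0,
                          min_rms_contrast: float = 20.0) -> bool:
    """Heuristic pre-check before running a barcode/QR decoder on a frame.

    Rejects frames whose mean luminance is out of range (under/overexposed)
    or whose RMS contrast is too low (flat, unlikely to contain a code).
    """
    mean = float(gray.mean())
    rms_contrast = float(gray.std())
    return min_mean <= mean <= max_mean and rms_contrast >= min_rms_contrast

# A flat, underexposed frame vs. a high-contrast checkerboard-like frame.
dark = np.full((32, 32), 20, dtype=np.uint8)
checker = (np.indices((32, 32)).sum(axis=0) % 2 * 255).astype(np.uint8)
```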