• Title/Summary/Keyword: vision-based technology


Development of Robot Vision Technology for Real-Time Recognition of Model of 3D Parts (3D 부품모델 실시간 인식을 위한 로봇 비전기술 개발)

  • Shim, Byoung-Kyun;Choi, Kyung-Sun;Jang, Sung-Cheol;Ahn, Yong-Suk;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence / v.16 no.4 / pp.113-117 / 2013
  • This paper describes a new character-recognition technology, based on pattern recognition, for non-contact optical inspection of lens slant and precision parts, covering the external form of lenses and electronic parts; through performance verification, the developed system can detect defects. A tolerance for surface defects (scratches) is entered directly as a standard specification and stored as reference reflectance data. The measured reflectance data are then compared with the standard reflectance data, and the resulting error is used to classify a product as defective when it exceeds the specified tolerance and as normal when it lies within it. The developed system can measure down to a single pixel, where one pixel corresponds to $37\,{\mu}m \times 37\,{\mu}m$ ($0.1369 \times 10^{-4}\,mm^2$), and fine measurement with an accuracy of $1.5 \times 10^{-4}\,mm$ was achieved; performance and reliability were verified through experiments.
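The pass/fail rule the abstract describes — compare measured reflectance against stored reference data and flag parts whose error exceeds an operator-entered tolerance — can be sketched as follows. All names, signal values, and the tolerance are illustrative, not taken from the paper; only the 37 µm pixel pitch comes from the abstract.

```python
# Hedged sketch of tolerance-based defect classification:
# a part is flagged defective when any per-pixel deviation between the
# measured and reference reflectance data exceeds the entered tolerance.

def classify_part(measured, reference, tolerance):
    """Return 'defective' if any per-pixel error exceeds tolerance."""
    errors = [abs(m - r) for m, r in zip(measured, reference)]
    return "defective" if max(errors) > tolerance else "normal"

# One pixel covers 37 um x 37 um, so a feature of n pixels spans
# roughly n * 0.037 mm along each image axis.
PIXEL_MM = 0.037

reference = [0.80, 0.82, 0.81, 0.79]
good_scan = [0.80, 0.81, 0.80, 0.79]
bad_scan  = [0.80, 0.60, 0.81, 0.79]   # deep scratch at pixel 1

print(classify_part(good_scan, reference, tolerance=0.05))  # normal
print(classify_part(bad_scan, reference, tolerance=0.05))   # defective
```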

Estimating a Range of Lane Departure Allowance based on Road Alignment in an Autonomous Driving Vehicle (자율주행 차량의 도로 평면선형 기반 차로이탈 허용 범위 산정)

  • Kim, Youngmin;Kim, Hyoungsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.15 no.4 / pp.81-90 / 2016
  • As an autonomous driving vehicle (AV) needs to cope with external road conditions by itself, its perception of the road environment should be better than that of a human driver. A vision sensor, one of the AV sensors, performs lane detection to perceive the road environment for safe vehicle steering, which relates to defining the vehicle heading and preventing lane departure. Performance standards for a vision sensor in an ADAS (Advanced Driver Assistance System) focus on the function of 'driver assistance', not on the perception of an 'independent situation', so the performance requirements for a vision sensor in an AV may differ from those in an ADAS. Assuming that an AV keeps its previous steering after a lane-detection failure, this study calculated the lane departure distance in a curved section between an AV following the curved road alignment and one continuing straight ahead. We analysed lane departure distance and time with respect to the allowable lane-detection malfunction of an AV vision sensor. The results show that an AV would encounter a critical lane departure situation if its vision sensor loses lane detection for more than 1 second. Therefore, it is concluded that the performance standards for an AV should cover more severe lane departure situations than those of an ADAS.
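The geometry behind the study's scenario — a vehicle that keeps driving straight while the road curves away — can be sketched with a circular-arc model: after travelling x = v·t straight, the lateral offset from a curve of radius R is d = R − √(R² − x²). This is a simplification, not the paper's exact model, and the speed, curve radius, and lane/vehicle widths below are assumed values chosen only to illustrate why roughly 1 second of lost lane detection becomes critical.

```python
import math

# Simplified departure model (assumed, not the paper's): an AV that loses
# lane detection and drives straight while the road curves with radius R
# drifts laterally by d = R - sqrt(R^2 - (v*t)^2) after t seconds.

def lateral_departure(v_mps, radius_m, t_s):
    x = v_mps * t_s                       # distance travelled straight
    return radius_m - math.sqrt(radius_m ** 2 - x ** 2)

v = 100 / 3.6               # assumed speed: 100 km/h in m/s
R = 440.0                   # assumed curve radius in metres
margin = (3.5 - 1.8) / 2    # assumed 3.5 m lane, 1.8 m vehicle width

for t in (0.5, 1.0, 1.5):
    d = lateral_departure(v, R, t)
    print(f"t={t:.1f}s  departure={d:.2f} m  {'CRITICAL' if d > margin else 'ok'}")
```

With these assumptions the offset stays inside the lateral margin at 0.5 s but exceeds it at 1.0 s, matching the qualitative conclusion of the abstract.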

Application of deep learning technique for battery lead tab welding error detection (배터리 리드탭 압흔 오류 검출의 딥러닝 기법 적용)

  • Kim, YunHo;Kim, ByeongMan
    • Journal of Korea Society of Industrial Information Systems / v.27 no.2 / pp.71-82 / 2022
  • In order to replace the sampling tensile test of products produced in the tab welding process, one of the automotive battery manufacturing processes, vision inspection systems are currently being developed and used. However, vision inspection suffers from inspection position errors and the cost of correcting them. To solve these problems, deep learning technology has recently been applied, and as one such case this paper examines the usefulness of applying Faster R-CNN, one of the deep learning technologies, to existing product inspection. Images acquired through the existing vision inspection machine are used as training data and trained using the Faster R-CNN ResNet101 V1 1024x1024 model. The results of the conventional vision test and the Faster R-CNN test are compared and analyzed against test standards of 0% non-detection and 10% over-detection. The non-detection rate is 34.5% in the conventional vision test and 0% with Faster R-CNN; the over-detection rate is 100% in the conventional vision test and 6.9% with Faster R-CNN. These results confirm that deep learning technology is very useful for detecting welding errors in automotive battery lead tabs.
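The two acceptance metrics used above can be made concrete: non-detection is the fraction of defective parts the inspector passes, over-detection the fraction of good parts it rejects. The sample counts below are assumptions chosen so the rates reproduce the reported percentages (29 samples per class makes 10/29 ≈ 34.5% and 2/29 ≈ 6.9%); the abstract does not state the actual sample sizes.

```python
# Sketch of the non-detection / over-detection rates used as the
# test standard. Counts are illustrative assumptions, not the paper's data.

def inspection_rates(defective_total, defective_missed,
                     good_total, good_rejected):
    non_detection = defective_missed / defective_total
    over_detection = good_rejected / good_total
    return non_detection, over_detection

nd, od = inspection_rates(29, 10, 29, 29)   # conventional vision test
print(f"conventional: non-detection {nd:.1%}, over-detection {od:.1%}")

nd, od = inspection_rates(29, 0, 29, 2)     # Faster R-CNN test
print(f"faster r-cnn: non-detection {nd:.1%}, over-detection {od:.1%}")
```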

A Study on an Intelligent GIS Monitoring System using Preventive Diagnostic Technology (예방진단기술을 이용한 지능형 GIS 감시시스템에 관한 연구)

  • Park, Kee-Young;Lee, Jong-Ha;Cho, Sook-Jin;Choi, Hyung-Ki;Jung, Eui-Bung
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.244-251 / 2014
  • In this study, we give a detailed account of the normal and abnormal states of GIS (Gas Insulated Switchgear) using preventive diagnostic technology, based on the analysis and diagnosis of data stored by an intelligent GIS monitoring system. The waveform of GIS sound is similar to noise and is systematically generated by discharge and its corona sound. Therefore, in this paper, normal and abnormal GIS sounds are classified: we discriminate between the normal and abnormal cases using the level crossing rate (LCR) and the spectrogram energy rate.
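The level crossing rate named above is a simple time-domain feature: the fraction of adjacent sample pairs that straddle a chosen level. A minimal sketch, with synthetic signals standing in for recorded GIS sound (the frequencies and amplitudes are invented for illustration):

```python
import math

# Level crossing rate: count sign changes of (sample - level) between
# consecutive samples, normalised by the number of sample pairs.
def level_crossing_rate(signal, level=0.0):
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if (a - level) * (b - level) < 0
    )
    return crossings / (len(signal) - 1)

# Synthetic stand-ins: a slow low-amplitude hum vs. a fast
# discharge-like oscillation (both assumed, not measured GIS sound).
n = 1000
steady_hum = [0.1 * math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
discharge  = [math.sin(2 * math.pi * 120 * i / n) for i in range(n)]

print(f"steady hum LCR:     {level_crossing_rate(steady_hum):.3f}")
print(f"discharge-like LCR: {level_crossing_rate(discharge):.3f}")
```

A higher-frequency, discharge-like signal crosses the level far more often per sample, which is the separation the feature exploits.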

The Study on Development and Effectiveness of Web-based Cyber Class Contents for University Student's Career Education (대학생의 진로교육을 위한 웹기반 사이버강의 콘텐츠 개발 및 효과검증)

  • Han, Mi-Hee
    • Journal of Information Technology Applications and Management / v.23 no.2 / pp.225-238 / 2016
  • The purpose of this study is to develop web-based cyber class contents and to verify the program's effectiveness in reinforcing career education. The participants in the experiment were students of Namseoul University who took the cyber class 'Self management and creation of vision' in the second semester of 2015, for fifty to sixty minutes a week. The control group comprised students who took the classes 'Theory and practice of school violence prevention' and 'Youth education theory'. To verify the effectiveness of the career education, a t-test for homogeneity was performed on the effect variables of career identity and career decision, and the result was confirmed with a paired t-test. The results suggest that the experimental group shows significant improvement over the control group in career identity and career decision level. Therefore, we conclude that the web-based cyber course is effective for content development and career education, and we anticipate the continuous development and activation of effective web-based cyber education for university students.

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11c / pp.715-721 / 2000
  • In this research, a rule- and neural-net-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for color machine vision based quality evaluation of beef. The major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin part from the sectional image of a beef rib is the crucial first step. Since its boundary is not clear and very difficult to trace, a neural network model was developed to isolate the loin part from the entire input image. At the network training stage, normalized color image data were used. The model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (selected from $11\times11$ masks within the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and the neural-net-based boundary extraction algorithm were implemented on beef images and the results were analyzed.
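The first step the abstract describes — a binary model reference from the R channel — amounts to thresholding the red values so the strongly red lean tissue separates from fat and background. A minimal sketch; the tiny "image" and the threshold are invented for illustration and carry no relation to the paper's data:

```python
# Hedged sketch: binarise the R (red) channel of an image so lean tissue
# (high red values) maps to 1 and fat/background maps to 0.

def binarize_red_channel(red, threshold=128):
    """red: 2-D list of R-channel values in 0..255 -> 2-D 0/1 mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in red]

# Toy 3x4 R-channel patch: left columns are pale (fat), right are red (lean).
red_channel = [
    [ 30,  40, 200, 210],
    [ 35, 190, 220, 215],
    [ 25, 180, 205,  50],
]
mask = binarize_red_channel(red_channel)
for row in mask:
    print(row)
```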


Physical interpretation of concrete crack images from feature estimation and classification

  • Koh, Eunbyul;Jin, Seung-Seop;Kim, Robin Eunju
    • Smart Structures and Systems / v.30 no.4 / pp.385-395 / 2022
  • Detecting cracks on a concrete structure is crucial for structural maintenance, a crack being an indicator of possible damage. Conventional crack detection methods, which include visual inspection and non-destructive equipment, are typically limited to a small region and require time-consuming processes. Recently, to reduce human intervention in inspections, various researchers have pursued computer vision-based crack analyses. One class is filter-based methods, which transform the image to detect crack edges; the other uses deep-learning algorithms. For example, convolutional neural networks have shown high precision in identifying cracks in an image. However, when the objective is to classify not only the existence of cracks but also their types, only a few studies have been reported, limiting their practical use. Thus, the presented study develops an image processing procedure that detects cracks and classifies crack types: whether the image contains crazing, a single crack, or multiple cracks. The properties and steps of the algorithm were developed using field-obtained images. Subsequently, the algorithm was validated on an additional 227 images obtained from an open database, on which it showed an average accuracy of 92.8%. In summary, the developed algorithm can precisely classify crazing-type images, while some single-crack images may be misclassified as multiple cracks, yielding conservative results. The successful results of the presented study show the potential of vision-based technologies for providing crack information with reduced human intervention.
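One natural cue for separating a single crack from multiple cracks is the number of connected components in the binary crack mask. The sketch below is an assumed illustration of that idea (4-connected flood fill on toy masks), not the paper's actual feature set:

```python
# Hedged sketch: count 4-connected components in a binary crack mask.
# One component suggests a single crack; several suggest multiple cracks.

def count_components(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                     # new component found
                stack = [(r, c)]               # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

single = [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]]          # one vertical crack
multiple = [[1, 0, 0],
            [0, 0, 1],
            [1, 0, 1]]        # three separate crack fragments
print(count_components(single))    # 1
print(count_components(multiple))  # 3
```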

A vision-based system for dynamic displacement measurement of long-span bridges: algorithm and verification

  • Ye, X.W.;Ni, Y.Q.;Wai, T.T.;Wong, K.Y.;Zhang, X.M.;Xu, F.
    • Smart Structures and Systems / v.12 no.3_4 / pp.363-379 / 2013
  • Dynamic displacement of structures is an important index for in-service structural condition and behavior assessment, but accurate measurement of structural displacement for large-scale civil structures such as long-span bridges remains a challenging task. In this paper, a vision-based dynamic displacement measurement system using digital image processing technology is developed, featuring non-contact, long-distance, and high-precision structural displacement measurement. The hardware of this system is mainly composed of a high-resolution industrial CCD (charge-coupled-device) digital camera and an extended-range zoom lens. By continuously tracing and identifying a target on the structure, the structural displacement is derived through cross-correlation analysis between the predefined pattern and the captured digital images with the aid of a pattern-matching algorithm. To validate the developed system, MTS tests of sinusoidal motions under different vibration frequencies and amplitudes and shaking table tests with different excitations (the El-Centro earthquake wave and a sinusoidal motion) are carried out. Additionally, in-situ verification experiments are performed to measure the mid-span vertical displacement of the suspension Tsing Ma Bridge under operational conditions and of the cable-stayed Stonecutters Bridge during loading tests. The obtained results show that the developed system exhibits an excellent capability for real-time measurement of structural displacement and can serve as a good complement to traditional sensors.
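The core of such pattern matching is sliding a predefined template over the captured data and taking the offset with the highest normalized cross-correlation as the target position. A 1-D stdlib sketch of that idea follows; real systems correlate 2-D image patches, and the signal and template values here are invented:

```python
import math

# Normalized cross-correlation between two equal-length sequences.
def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Slide the template over the signal; the best-scoring offset is the
# estimated target position, from which displacement is derived.
def best_match_offset(signal, template):
    n = len(template)
    scores = [ncc(signal[i:i + n], template)
              for i in range(len(signal) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [0, 1, 3, 1, 0]                   # predefined target pattern
signal = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]      # pattern now sits at offset 3
print(best_match_offset(signal, template))   # 3
```

Tracking this best-match offset frame by frame, then scaling pixels to physical units, yields the displacement time history the system measures.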

Guidance Line Extraction for Autonomous Weeding robot based-on Rice Morphology Characteristic in Wet Paddy (논 잡초 방제용 자율주행 로봇을 위한 벼의 형태학적 특징 기반의 주행기준선 추출)

  • Choi, Keun Ha;Han, Sang Kwon;Han, Sang Hoon;Park, Kwang-Ho;Kim, Kyung-Soo;Kim, Soohyun
    • The Journal of Korea Robotics Society / v.9 no.3 / pp.147-153 / 2014
  • In this paper, we propose a new guidance-line extraction algorithm for an autonomous weeding robot based on an infrared vision sensor in wet paddy. Finding the central point or area of a rice row is the critical step in guidance-line extraction. To improve the accuracy of the guidance line, we use the morphological characteristic of rice that the leaves converge toward the central area of the rice row. Using the Hough transform, we represented the curved leaves as combinations of segmented straight lines in a binary image that had been skeletonized and segmented into objects. The slope of the guidance line was obtained by averaging the slopes of all segmented lines. The initial point of the guidance line was determined as the column with the maximum accumulated white-pixel count in the binary image rotated by the guidance-line slope in the opposite direction. We also verified the accuracy of the proposed algorithm through experiments in a real wet paddy.
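The slope-averaging step can be sketched directly: given the straight segments produced by the Hough transform, the guidance-line direction is the mean of their slopes. The segment endpoints below are invented examples of leaf segments converging toward a row centre; they are not data from the paper:

```python
import math

# Average the angles of Hough line segments to get the guidance-line slope.
def average_slope_deg(segments):
    """segments: list of ((x1, y1), (x2, y2)) endpoint pairs."""
    angles = [math.atan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in segments]
    return math.degrees(sum(angles) / len(angles))

# Invented leaf segments leaning toward the rice-row centre line.
segments = [((0, 0), (10, 20)),   # ~63.4 deg
            ((2, 0), (10, 18)),   # ~66.0 deg
            ((4, 0), (12, 22))]   # ~70.0 deg
print(f"guidance-line slope: {average_slope_deg(segments):.1f} deg")
```

Rotating the binary image by the negative of this angle makes the row vertical, so the column with the largest accumulated white-pixel count gives the initial point, as the abstract describes.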

Vision-Based High Accuracy Vehicle Positioning Technology (비전 기반 고정밀 차량 측위 기술)

  • Jo, Sang-Il;Lee, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.12 / pp.1950-1958 / 2016
  • Today, techniques for precisely positioning vehicles are very important in C-ITS (Cooperative Intelligent Transport Systems), self-driving cars, and other transportation-related information technology. Though the most popular technology for vehicle positioning is GPS, its accuracy is not reliable because of the large delay caused by the multipath effect, which is unsuitable for real-time traffic applications. Therefore, in this paper, we propose a vision-based high-accuracy vehicle positioning technique. In the first step of the proposed algorithm, the ROI is set for the road area and vehicles are detected. Then, the center and four corner points of the vehicles found on the road are determined. Lastly, these points are converted into an aerial-view map using a homography matrix. Performance analysis shows that this technique has high accuracy, with an average error of less than about 20 cm and a maximum error not exceeding 44.72 cm. In addition, the algorithm is confirmed to be fast enough for real-time positioning at 22-25 FPS.
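The final mapping step works by multiplying each image point (in homogeneous coordinates) by a 3x3 homography matrix H and dividing by the resulting w component. A minimal sketch; the matrix below is an arbitrary example, not a calibrated road-to-aerial-view homography:

```python
# Hedged sketch of applying a 3x3 homography H to an image point:
# [xh, yh, w]^T = H * [x, y, 1]^T, then perspective-divide by w.

def apply_homography(H, point):
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)    # perspective divide into aerial-view coords

# Arbitrary example matrix (assumed, not from the paper).
H = [[1.0, 0.2,   5.0],
     [0.0, 1.5,   2.0],
     [0.0, 0.001, 1.0]]

center = (320, 240)            # detected vehicle centre in the image
print(apply_homography(H, center))
```

In the proposed pipeline the vehicle's centre and four corner points would each be mapped this way, yielding positions on the aerial-view map.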