• Title/Summary/Keyword: camera vision

Search results: 1,386

A Study on Optical Condition and preprocessing for Input Image Improvement of Dented and Raised Characters of Rubber Tires (고무타이어 문자열 입력영상 개선을 위한 전처리와 광학조건에 관한 연구)

  • 류한성;최중경;권정혁;구본민;박무열
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.1
    • /
    • pp.124-132
    • /
    • 2002
  • In this paper, we present a vision algorithm and preprocessing method for improving input images of the dented and raised characters on the sidewall of tires. We define the optical condition relating the reflection coefficient and reflectance through a physical vector calculation, and recognize the engraved characters using computer vision techniques. Tire images have almost the same grey levels for the characters and the background, and the tire surface reflects little light, so it is very difficult to segment the characters from the background. Moreover, one side of the character string is raised and the other is dented, so the captured images vary with the angle of the camera and the illumination. We found that optimal input images are obtained when the angle between the camera and the illumination is within 90°. In addition, we used complex filtering with low-pass and high-pass band filters to obtain clearer input images. Finally, we define equations for the reflection coefficient and reflectance. In this way, we obtained tire images well suited for pattern recognition.
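
The paper does not give its filter parameters; as a minimal numpy sketch of the low-pass/high-pass combination it describes (the kernel size, sigma, and the 2.0 boost factor are assumptions), one could write:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    # naive 'same' convolution with edge padding
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(img):
    low = convolve2d(img, gaussian_kernel())  # low-pass: suppress noise
    high = img - low                          # high-pass: edges of the engraving
    return low + 2.0 * high                   # boost high-frequency detail
```

On a flat region the result is unchanged, while across the raised/dented edges the high-pass term increases contrast.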

A Study on the Compensation Methods of Object Recognition Errors for Using Intelligent Recognition Model in Sports Games (스포츠 경기에서 지능인식모델을 이용하기 위한 대상체 인식오류 보상방법에 관한 연구)

  • Han, Junsu;Kim, Jongwon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.5
    • /
    • pp.537-542
    • /
    • 2021
  • This paper improves the ability to recognize fast-moving objects with the YOLO (You Only Look Once) deep learning model in an image-based object recognition environment, and studies a method of collecting semantic data through post-processing. Two recognition errors of the model were identified: objects that go unrecognized because of the difference between the camera frame rate and the speed of the moving object, and objects that are misrecognized because a similar object exists in the environment adjacent to the target. The proposed data collection method minimizes these errors by compensating for unrecognized and misrecognized objects, applying vision processing to the error causes that can occur in images acquired from sports (tennis games), which represent realistic similar environments. Research on the collection method and processing structure improved the effectiveness of secondary data collection. By applying the data collection method proposed in this study, ordinary people can collect and manage data to improve their health and athletic performance in the sports and health industry through simple smartphone camera footage.
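
The abstract does not specify its compensation algorithm; one plausible sketch for the unrecognized-frame case, assuming detections are tracked as per-frame centers and gaps are filled by linear interpolation between the nearest detected frames:

```python
def fill_missed_detections(track):
    """track: list of (x, y) detection centers per frame, with None where
    the detector missed the fast-moving object; fill each gap by linear
    interpolation between the nearest detected frames."""
    filled = list(track)
    known = [i for i, p in enumerate(filled) if p is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            x = filled[a][0] + t * (filled[b][0] - filled[a][0])
            y = filled[a][1] + t * (filled[b][1] - filled[a][1])
            filled[i] = (x, y)
    return filled
```

Misrecognitions could be handled the same way, by rejecting detections far from the interpolated trajectory.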

Metaverse Augmented Reality Research Trends Using Topic Modeling Methodology (토픽 모델링 기법을 활용한 메타버스 증강현실 연구 동향 분석)

  • An, Jaeyoung;Shim, Soyun;Yun, Haejung
    • Knowledge Management Research
    • /
    • v.23 no.2
    • /
    • pp.123-142
    • /
    • 2022
  • The non-face-to-face environment accelerated by COVID-19 has sped up the spread of digital virtual ecosystems and the metaverse. For the metaverse to be sustainable, digital twins compatible with the real world are key, and the critical technology for them is AR (Augmented Reality). In this study, we examine research trends in AR and propose directions for future AR research. We conducted LDA-based topic modeling on 11,049 abstracts of domestic and foreign AR-related papers published from 2009 to March 2022, and then examined comprehensive AR research trends, compared domestic and foreign research trends, and analyzed trends before and after the metaverse concept became popular. As a result, eleven topics were derived from AR-related research: device, network communication, surgery, digital twin, education, serious game, camera/vision, color application, therapy, location accuracy, and interface design. After the metaverse became popular, six topics were derived: camera/vision, training, digital twin, surgery, interaction performance, and network communication. Through this study, we expect to encourage active research on metaverse AR with convergent characteristics across multidisciplinary fields and to provide useful implications for practitioners.

Smart window coloring control automation system based on image analysis using a Raspberry Pi camera (라즈베리파이 카메라를 활용한 이미지 분석 기반 스마트 윈도우 착색 조절 자동화 시스템)

  • Min-Sang Kim;Hyeon-Sik Ahn;Seong-Min Lim;Eun-Jeong Jang;Na-Kyung Lee;Jun-Hyeok Heo;In-Gu Kang;Ji-Hyeon Kwon;Jun-Young Lee;Ha-Young Kim;Dong-Su Kim;Jong-Ho Yoon;Yoonseuk Choi
    • Journal of IKEEE
    • /
    • v.28 no.1
    • /
    • pp.90-96
    • /
    • 2024
  • In this paper, we propose an automated system that uses a Raspberry Pi camera and a function generator to analyze the luminance in an image and then applies a voltage based on this analysis to control light transmission by coloring smart windows. The luminance meters conventionally used to measure luminance are expensive and require unnecessary movement by the user, making them difficult to use in everyday life. In contrast, analyzing the luminance of a captured photograph with the Python Open Source Computer Vision Library (OpenCV) is inexpensive and portable, so it can easily be applied in daily life. The system was used to detect window luminance in an environment where smart windows were installed. Based on the brightness of the image, the coloring of the smart window is adjusted to reduce the brightness of the window, allowing occupants to maintain a comfortable viewing environment.
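
A minimal sketch of the analyze-then-drive loop, assuming a Rec. 709 luma estimate in plain numpy (the paper uses OpenCV on the Pi) and a linear brightness-to-voltage mapping with hypothetical `lum_target` and `v_max` values:

```python
import numpy as np

def mean_luminance(rgb):
    """Rec. 709 luma of an RGB image array (H, W, 3), values 0-255."""
    w = np.array([0.2126, 0.7152, 0.0722])
    return float((rgb.astype(float) @ w).mean())

def coloring_voltage(lum, lum_target=120.0, v_max=5.0):
    """Map brightness in excess of the target to a tinting voltage in
    [0, v_max]; a linear mapping chosen here for illustration."""
    excess = max(0.0, lum - lum_target)
    return min(v_max, v_max * excess / (255.0 - lum_target))
```

In the actual system the resulting value would be sent to the function generator driving the smart window.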

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.349-355
    • /
    • 2015
  • Underwater robots generally perform tasks better than humans under certain underwater constraints such as high pressure and limited light. To properly carry out diagnosis in an underwater environment using a remotely operated vehicle (ROV), it is important that the vehicle autonomously maintains its own position and orientation, so as to avoid additional control effort. In this paper, we propose an efficient method to assist operation under the various disturbances acting on an ROV used for the diagnosis of underwater structures. A conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. We therefore suggest an efficient method for fusing the vision sensor and the AHRS, using the amount of blur in the image as the criterion. To evaluate the amount of blur, we adopt two methods: quantifying the high-frequency components using power spectral density analysis of the 2D discrete Fourier transform of the image, and identifying the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and the blur estimation methods against changes in lighting and distance, and verify through experiments that the blur estimation method based on cepstrum analysis shows better performance.
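
Of the two blur measures, the spectral one can be sketched as a high-frequency power ratio; the 0.25 radial cutoff is an assumption, and the paper's exact quantification (and its cepstrum-based alternative) is not reproduced here:

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Share of spectral power above a normalized radial frequency
    cutoff; sharper images score higher, blurred ones lower."""
    F = np.fft.fftshift(np.fft.fft2(img))
    P = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized radius
    return P[r > cutoff].sum() / P.sum()
```

A fusion rule could then down-weight the vision measurement whenever this ratio drops below some threshold, falling back on the AHRS.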

Pallet Measurement Method for Automatic Pallet Engaging in Real-Time (자동 화물처리를 위한 실시간 팔레트 측정 방법)

  • Byun, Sung-Min;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.2
    • /
    • pp.171-181
    • /
    • 2011
  • A vision-based method for determining the position and orientation of pallets is presented in this paper, which guides autonomous forklifts to engage pallets automatically. The method uses a single camera mounted on the fork carriage instead of the two cameras of the stereo vision setup conventionally used for positioning objects in 3D space. An image back-projection technique for determining the orientation of a pallet without any fiducial marks is suggested, which projects two feature lines on the front plane of the pallet backward onto a virtual plane that can be rotated around a given axis in 3D space. We show that the rotation angle of the virtual plane at which the back-projected feature lines become parallel describes the orientation of the pallet front plane. The position of the pallet is then determined from the ratio between the distance separating the back-projected feature lines and their real distance on the pallet front plane. Through tests on real pallet images, we found that the proposed method is practically applicable to real environments in real time.
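
A toy numpy simulation of the back-projection principle (not the paper's implementation; camera geometry, plane distance, and the 1° search grid are all assumptions): two parallel lines on a pallet front yawed by 20° are imaged through a pinhole camera, and the virtual-plane angle that makes their back-projections parallel again recovers the yaw.

```python
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(pts, f=1.0):
    """Pinhole projection of 3D points (camera at origin, +Z forward)."""
    return np.array([[f * x / z, f * y / z] for x, y, z in pts])

def back_project(uv, phi, d=5.0, f=1.0):
    """Intersect image rays with a candidate plane rotated by phi about
    the Y axis through (0, 0, d); return in-plane coordinates."""
    R = rot_y(phi)
    n, p0 = R @ [0, 0, 1], np.array([0, 0, d])
    out = []
    for u, v in uv:
        ray = np.array([u, v, f])
        t = (n @ p0) / (n @ ray)      # ray-plane intersection parameter
        out.append(R.T @ (t * ray - p0))
    return np.array(out)

def slope(pts):
    (x0, y0, _), (x1, y1, _) = pts
    return (y1 - y0) / (x1 - x0)

# Two parallel feature lines on the pallet front plane, true yaw 20 deg
theta, d = np.radians(20), 5.0
line = lambda y: [rot_y(theta) @ [x, y, 0] + [0, 0, d] for x in (-1.0, 1.0)]
uv = project(line(-0.4) + line(0.4))

# Search for the virtual-plane angle that makes the lines parallel again
angles = np.radians(np.arange(-45, 46))
err = [abs(slope(back_project(uv[:2], a)) - slope(back_project(uv[2:], a)))
       for a in angles]
best = np.degrees(angles[int(np.argmin(err))])
```

At the true yaw the rays land exactly on the physical plane, so the back-projected lines are parallel and the error vanishes.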

Histogram Based Hand Recognition System for Augmented Reality (증강현실을 위한 히스토그램 기반의 손 인식 시스템)

  • Ko, Min-Su;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.7
    • /
    • pp.1564-1572
    • /
    • 2011
  • In this paper, we propose a new histogram-based hand recognition algorithm for augmented reality. A hand recognition system enables useful interaction between a user and a computer. However, vision-based hand gesture recognition is difficult because of its viewing-angle dependency, which arises from the complexity of the human hand shape. The hand recognition system proposed in this paper is based on features derived from hand geometry and consists of two steps: first, the hand region is extracted from the image captured by a camera, and second, hand gestures are recognized. We extract the hand region by deleting the background and using skin color information, then recognize the hand shape by determining hand feature points from the histogram of the extracted hand region. Finally, we build an augmented reality system that controls a 3D object with the recognized hand gesture. Experimental results show that the proposed algorithm achieves more than 91% accuracy for hand recognition with low computational cost.
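
A minimal sketch of the first step, assuming a common RGB skin-color heuristic (not the paper's color model) and row/column histograms of the mask to bound the hand region:

```python
import numpy as np

def skin_mask(rgb):
    """Very rough RGB skin-color rule, a widely used heuristic:
    R > 95, G > 40, B > 20, with red dominant."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def hand_bounding_box(mask):
    """Use row/column projections (histograms) of the binary mask
    to bound the hand region; None if no skin pixels found."""
    rows, cols = mask.any(axis=1), mask.any(axis=0)
    if not rows.any():
        return None
    r, c = np.where(rows)[0], np.where(cols)[0]
    return r[0], r[-1], c[0], c[-1]
```

The paper's feature points for gesture classification would then be taken from histograms within this region.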

Automatic Leather Quality Inspection and Grading System by Leather Texture Analysis (텍스쳐 분석에 의한 피혁 등급 판정 및 자동 선별시스템에의 응용)

  • 권장우;김명재;길경석
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.4
    • /
    • pp.451-458
    • /
    • 2004
  • Leather quality inspection by the naked eye is known to be unreliable because of biological factors such as accumulated fatigue, optical illusions, and other physiological effects. It is therefore necessary to automate leather quality inspection using computer vision techniques. In this paper, we present an automatic leather quality classification system that extracts information from the leather surface. Leather is usually graded by properties such as texture density and the type and distribution of defects. The presented algorithm analyzes leather information such as texture density and defects from grey-level images obtained with a digital camera. The texture density is computed from the distribution area, width, and height of the Fourier spectrum magnitude, and the defect information of the leather surface is obtained from the histogram distribution of pixels in windows taken from the preprocessed images. This information over the entire hide can serve as a standard for grading leather quality. The proposed machine vision inspection system can also be applied to other fields to replace human visual inspection.
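
The windowed defect check can be sketched as flagging windows whose grey-level statistics deviate from the whole image; the window size and the 3-sigma threshold are assumptions, not the paper's values:

```python
import numpy as np

def defect_windows(gray, win=8, k=3.0):
    """Flag non-overlapping windows whose mean grey level deviates from
    the global mean by more than k global standard deviations."""
    mu, sd = gray.mean(), gray.std()
    flags = []
    h, w = gray.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            if abs(gray[i:i + win, j:j + win].mean() - mu) > k * sd:
                flags.append((i, j))
    return flags
```

The analogous texture-density measure would be computed from the spread of the 2D Fourier spectrum magnitude rather than from pixel histograms.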


A Study on the BGA Package Measurement using Noise Reduction Filters (잡음제거 필터를 이용한 BGA 패키지 측정에 관한 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.11
    • /
    • pp.15-20
    • /
    • 2017
  • Recently, with the development of the IT industry, interest in computer convergence technology has been increasing in various fields. In the semiconductor field in particular, vision systems that combine a camera and a computer are often used to inspect semiconductor devices for defects in the production process. Various systems have been studied to remove noise, which is a major cause of degradation in the processing of image data. In this paper, we aim to detect defects in BGA (Ball Grid Array) package devices by recognizing them in advance during mass production. We propose a measurement system using the Gaussian, median, and average filters that are widely used for reducing noise in image data. Applying the proposed system to the BGA package manufacturing process makes it possible to judge whether a device is defective, and productivity is expected to improve.
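
Two of the three compared filters can be sketched in plain numpy (3x3 neighborhoods built from shifted copies, edges wrapping); this illustrates why the median filter suits impulse-like inspection noise better than averaging, without claiming it is the paper's implementation:

```python
import numpy as np

def _neighborhood3(img):
    # stack the 9 shifted copies that form each pixel's 3x3 window
    return np.stack([np.roll(np.roll(img, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def median3(img):
    """3x3 median filter: removes isolated impulse (salt) noise."""
    return np.median(_neighborhood3(img), axis=0)

def mean3(img):
    """3x3 average filter: smears impulse noise into neighbors."""
    return _neighborhood3(img).mean(axis=0)
```

On a flat region with a single bright outlier, the median restores the region exactly while the average leaves a residual blob.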

Design and Implementation of OPC UA-based Collaborative Robot Guard System Using Sensor and Camera Vision (센서 및 카메라 비전을 활용한 OPC UA 기반 협동로봇 가드 시스템의 설계 및 구현)

  • Kim, Jeehyeong;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.47-55
    • /
    • 2019
  • Robots are creating new markets and new forms of cooperation in line with the shift in the manufacturing paradigm. Collaborative robots, which are easier to manage than existing industrial robots, are in growing demand across industries to improve productivity and to take over human tasks. However, accidents during on-site cooperation with industrial robots are frequent and threaten the safety of operators. A collaborative robot guard system is therefore needed that ensures operator safety in industrial environments and communicates reliably about possible robot actions. By monitoring the robot's working radius with sensors and computer vision, accidents can be prevented and risk reduced. We propose a collaborative robot guard system based on OPC UA, an international communication protocol for a variety of industrial production equipment, that uses ultrasonic sensors together with a CNN (Convolutional Neural Network) for video analysis. In the proposed system, the robot evaluates situations that are unsafe for workers and controls its behavior accordingly.
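
The guard decision can be sketched as combining the two inputs the paper names, an ultrasonic distance reading and a CNN detection flag, into a speed command; the thresholds and command names below are hypothetical, and writing the result to an OPC UA node is left abstract:

```python
def guard_command(distance_cm, person_detected,
                  warn_cm=150.0, stop_cm=60.0):
    """Combine an ultrasonic distance reading with a CNN person-detection
    flag into a robot speed command ('run', 'slow', or 'stop'); the
    result would be published to the robot via an OPC UA node."""
    if person_detected and distance_cm <= stop_cm:
        return "stop"   # person inside the safety radius
    if person_detected and distance_cm <= warn_cm:
        return "slow"   # person approaching the work area
    return "run"        # no person nearby: full speed
```

Requiring both signals to agree before stopping reduces false halts from sensor noise, at the cost of depending on the CNN's recall.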