• Title/Abstract/Keyword: Embedded vision system

109 results found (processing time: 0.025 s)

자동차 차체부품 CO2용접설비 전수검사용 비전시스템 개발 (Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts)

  • 김주영;김민규
    • 센서학회지, Vol. 33 No. 3, pp.179-184, 2024
  • In the automotive industry, welding is a fundamental joining technique used to join components such as steel sheets, molds, and automobile parts. However, accurate inspection is required to verify the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained using a 2D vision camera embedded in a lighting system, from which a portion of the bead is successfully extracted after image processing. In this process, the soot-removal algorithm, which adopts adaptive local gamma correction and gray color coordinates, plays an important role in accurate weld-bead detection. Using this automatic recognition system, geometric parameters of the weld bead, such as its length, width, angle, and defect size, can also be measured. Finally, by comparing the obtained data with industrial standards, we can determine whether the weld bead is at an acceptable level.
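The soot-removal step above rests on adaptive local gamma correction. The sketch below shows one plausible form of that idea on a plain grayscale array; the window size and the linear mapping from local brightness to gamma are illustrative assumptions, not values taken from the paper:

```python
# Sketch of adaptive local gamma correction for soot suppression.
# Window size and the brightness-to-gamma mapping are assumptions.

def local_mean(img, y, x, r=1):
    """Mean intensity in a (2r+1)x(2r+1) window clamped at the border."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def adaptive_gamma(img, gamma_dark=0.5, gamma_bright=1.5):
    """Brighten dark (sooty) regions, compress already-bright ones.

    A dark neighbourhood yields gamma < 1 (boost), a bright one gamma > 1,
    interpolated linearly in between.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m = local_mean(img, y, x) / 255.0      # 0..1 local brightness
            g = gamma_dark + (gamma_bright - gamma_dark) * m
            out[y][x] = round(255 * (img[y][x] / 255.0) ** g)
    return out

# A dark, soot-like patch is lifted toward the visible range.
soot = [[30, 32, 31], [29, 200, 33], [31, 30, 28]]
corrected = adaptive_gamma(soot)
```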

지능형 사건 처리를 강조한 협업 감시 시스템 (Emphasizing Intelligent Event Processing Cooperative Surveillance System)

  • 윤태호;송유승
    • 대한임베디드공학회논문지, Vol. 7 No. 6, pp.339-343, 2012
  • Security and monitoring systems have many applications and are commonly used for detection, warning, alarms, and similar tasks. As networking technology advances, user requirements grow higher. An intelligent, cooperative surveillance system is proposed to meet current user demands and improve performance. This paper focuses on implementation issues for the embedded intelligent surveillance system. To cover a wide area, a cooperative function is implemented and connected by wireless sensor network technology. In addition, many sensors are employed in the surveillance system to reduce errors and improve the detection probability. The proposed surveillance system is composed of a vision sensor (camera), a mic-array sensor, a PIR sensor, and others. Between the sensors, data is transferred by the IEEE 802.11s or Zigbee protocol. We deployed a private network for the sensors and multiple gateways for better data throughput. The developed system targets traffic accident detection and alarms, but its application can easily be changed by simply replacing the software algorithm in a DSP chip.

A Vision-Based Collision Warning System by Surrounding Vehicles Detection

  • Wu, Bing-Fei;Chen, Ying-Han;Kao, Chih-Chun;Li, Yen-Feng;Chen, Chao-Jung
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 6 No. 4, pp.1203-1222, 2012
  • To provide active notification and enhance drivers' awareness of their surroundings, a vision-based collision warning system that detects and monitors surrounding vehicles is proposed in this paper. The main objective is to prevent possible vehicle collisions by monitoring the status of surrounding vehicles, including the distance to the other vehicles in front, behind, and on the left and right sides. In addition, the proposed system collects and integrates this information to provide advisory warnings to drivers. To offer correct notification, a vehicle detection algorithm based on edge and morphological features is also presented. The proposed system has been implemented on embedded systems and evaluated on real roads under various lighting and weather conditions. The experimental results indicate that the vehicle detection ratios were higher than 97% in the daytime, which is appropriate for real-road applications.
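The detector above combines edge features with morphology. A common morphological step in such pipelines is closing (dilation followed by erosion), which consolidates fragmented edge responses into solid candidate blobs; the 3x3 structuring element below is an illustrative assumption, not the paper's stated choice:

```python
# Morphological closing on a binary edge map, sketched with plain lists.

def dilate(img):
    """3x3 dilation: a pixel turns on if any pixel in its 3x3 window is on."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 erosion: a pixel stays on only if its whole 3x3 window is on."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def close_gaps(img):
    """Morphological closing: fill small holes between edge fragments."""
    return erode(dilate(img))

# The one-pixel break in the horizontal edge run is bridged by closing.
# (Window clamping at the borders also lets the run grow slightly at its ends.)
edge = [[0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 1, 1, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0]]
closed = close_gaps(edge)
```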

연결 성분 분류를 이용한 PCB 결함 검출 (PCB Defects Detection using Connected Component Classification)

  • 정민철
    • 반도체디스플레이기술학회지, Vol. 10 No. 1, pp.113-118, 2011
  • This paper proposes computer vision inspection algorithms for PCB defects that arise in a manufacturing process. The proposed method can detect open and short circuits on a bare PCB without using any reference images. It performs adaptive thresholding on the ROI (Region of Interest) of a target image and median filtering to remove noise, and then analyzes the connected components of the binary image. In this paper, the connected components of a circuit pattern are classified into six types. The proposed method classifies the connected components of the target image into these six types and determines that an unclassified component is a defect of the circuit. Analyzing the original target image detects open circuits, while analyzing the complement image finds short circuits. The machine vision inspection system is implemented in C on an embedded Linux system for high-speed real-time image processing. Experimental results show that the proposed algorithms are quite successful.
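The connected-component step that the method relies on can be sketched with an iterative flood fill; the six-type pattern classification itself is specific to the paper and is not reproduced here:

```python
# Label 4-connected foreground regions of a binary circuit image.
# Analysing the complement image, as the abstract notes, reveals shorts.
from collections import deque

def label_components(img):
    """Return (labels, count) for 4-connected components of nonzero pixels."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not labels[sy][sx]:
                count += 1
                q = deque([(sy, sx)])
                labels[sy][sx] = count
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count

def complement(img):
    """Invert the binary image; its components expose short-circuit defects."""
    return [[0 if v else 1 for v in row] for row in img]

# Two separate traces: an unexpected extra component would signal an open.
trace = [[1, 1, 0, 1],
         [0, 1, 0, 1],
         [0, 0, 0, 1]]
labels, n = label_components(trace)   # n == 2
```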

임베디드 시스템을 위한 고속의 손동작 인식 알고리즘 (Fast Hand-Gesture Recognition Algorithm For Embedded System)

  • 황동현;장경식
    • 한국정보통신학회논문지, Vol. 21 No. 7, pp.1349-1354, 2017
  • This paper proposes a fast hand-gesture recognition algorithm suitable for embedded systems. Existing hand-gesture recognition algorithms are difficult to apply to low-performance platforms such as embedded systems and mobile devices because contour tracing, which extracts every point along the hand's outline, is computationally expensive. Instead of contour tracing, the proposed algorithm estimates an abstracted finger outline by applying concentric-circle tracing, then extracts features from it and classifies the gesture. The proposed algorithm achieves an average recognition rate of 95% with an average execution time of 1.29 ms, up to 44% faster than existing contour-tracing algorithms, which confirms its applicability to low-performance platforms such as embedded systems and mobile devices.
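The concentric-circle tracing idea can be sketched as follows: instead of tracing the full contour, sample one circle around the palm centre and count background-to-foreground transitions; each run of hand pixels that the circle crosses corresponds to one finger (or the wrist). The radius and sample count below are illustrative assumptions:

```python
# Count how many foreground runs a sampling circle crosses in a binary
# hand mask -- a cheap proxy for finger counting without contour tracing.
import math

def crossings_on_circle(img, cy, cx, radius, samples=72):
    """Count background-to-foreground transitions along one circle."""
    h, w = len(img), len(img[0])
    vals = []
    for k in range(samples):
        a = 2 * math.pi * k / samples
        y = int(round(cy + radius * math.sin(a)))
        x = int(round(cx + radius * math.cos(a)))
        vals.append(1 if 0 <= y < h and 0 <= x < w and img[y][x] else 0)
    # Rising edges, comparing each sample with its circular predecessor.
    return sum(1 for k in range(samples) if vals[k] and not vals[k - 1])

# A single vertical "finger" crosses a radius-3 circle around (4, 4) twice.
hand = [[1 if x == 4 else 0 for x in range(9)] for _ in range(9)]
crossings = crossings_on_circle(hand, 4, 4, 3)
```

In a full pipeline, several concentric radii would be traced and the per-radius run counts combined into a feature vector for gesture classification.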

지능형 자동차의 적응형 제어를 위한 차선인식 (Lane Detection for Adaptive Control of Autonomous Vehicle)

  • 김현구;주영환;이종훈;박용완;정호열
    • 대한임베디드공학회논문지, Vol. 4 No. 4, pp.180-189, 2009
  • Currently, most automobile companies are interested in research on intelligent autonomous vehicles, mainly focused on intelligent driver assistance and driver replacement. Developing an autonomous vehicle requires both lateral and longitudinal control. This paper presents a lateral and longitudinal control system for an autonomous vehicle that has only a mono-vision camera. For lane detection, we present a new lane detection algorithm using a clothoid parabolic road model. The proposed algorithm is compared with three other methods, the virtual line method, the gradient method, and the Hough transform method, in terms of lane detection ratio. For adaptive control, we apply vanishing point estimation to fuzzy control. To improve the handling and stability of the vehicle, the modeling errors between the steering angle and the predicted vanishing point are minimized. We therefore established fuzzy rules with membership functions for the inputs (vanishing point and differential vanishing point) and the output (steering angle). For experiments, we developed a 1/8-scale robot of an actual vehicle, equipped with a mono-vision system, and tested it on a 400-meter athletics track. Through these tests, we show that the proposed method achieves a detection rate above 98% under normal conditions and also performs well in clear, foggy, and rainy weather compared with the virtual line, gradient, and Hough transform methods.
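Near the vehicle, a clothoid road model is commonly approximated by a parabola x = a·y² + b·y + c in image coordinates. The sketch below fits that parabola to detected lane points with ordinary least squares, solving the 3x3 normal equations directly; the fitting procedure is an illustrative assumption, not the paper's exact method:

```python
# Least-squares fit of a parabolic lane model to (y, x) lane points.

def fit_parabola(points):
    """Fit x = a*y**2 + b*y + c to (y, x) points; return [a, b, c]."""
    # Normal equations A^T A * coef = A^T x for the design row [y^2, y, 1].
    s = [sum(y ** k for y, _ in points) for k in range(5)]      # sums of y^k
    t = [sum(x * y ** k for y, x in points) for k in range(3)]  # sums of x*y^k
    m = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Points sampled from x = 0.1*y^2 + 2*y + 5 are recovered.
pts = [(y, 0.1 * y ** 2 + 2 * y + 5) for y in range(6)]
a, b, c = fit_parabola(pts)
```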

다각형 용기의 결함 검사 시스템 개발 (Development of Defect Inspection System for Polygonal Containers)

  • 윤석문;이승호
    • 전기전자학회논문지, Vol. 25 No. 3, pp.485-492, 2021
  • This paper proposes the development of a defect inspection system for polygonal containers. The embedded board consists of a main unit, a communication unit, and an input/output unit. The main unit is the primary computing element; the operating system ported to it drives the embedded board and controls external communication as well as the I/O for sensing and control. The communication unit configures the trigger of the image-capture camera and the drive settings of the control devices. The input/output unit converts the electrical signals of the field-installed sensors and control switches into digital form, passes them to the main module, and drives an external stepper motor. In the input circuits that receive pulse inputs for operation modes and the like, a photocoupler is placed on each input port to minimize interference from external noise. To evaluate the accuracy of the proposed inspection system objectively, it should be compared with other machine-vision inspection systems, but this is impossible because no machine-vision inspection system for polygonal containers currently exists. Instead, the operation timing was measured with an oscilloscope, confirming that waveforms such as Test Time, One Angle Pulse Value, One Pulse Time, Camera Trigger Pulse, and the BLU brightness control signal are output correctly.

FPGA based HW/SW co-design for vision based real-time position measurement of an UAV

  • Kim, Young Sik;Kim, Jeong Ho;Han, Dong In;Lee, Mi Hyun;Park, Ji Hoon;Lee, Dae Woo
    • International Journal of Aeronautical and Space Sciences, Vol. 17 No. 2, pp.232-239, 2016
  • Recently, the need for formation flight has grown as a way to increase the efficiency and mission success rate of UAVs (Unmanned Aerial Vehicles). In general, GPS (Global Positioning System) is used to obtain the relative position of the leader with respect to the follower in formation flight. However, it cannot be used in environments where GPS jamming may occur or communication is impossible. Therefore, in this study, monocular vision is used to measure relative position. General PC-based vision processing systems are larger than embedded systems and hard to install on small vehicles, so an FPGA-based processing board is used to keep the system small and compact. The processing system is divided into two blocks: PL (Programmable Logic) and PS (Processing System). The PL consists of many parallel logic arrays, can handle large amounts of data quickly, and is designed in hardware. The PS consists of a conventional processing unit, such as an ARM processor, on which the sequential processing algorithm runs. Consequently, the HW/SW co-designed FPGA system processes input images and measures the relative 3D position of the leader, showing an RMSE accuracy of 0.42 cm to 0.51 cm.
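A standard way to recover relative 3D position from a single camera, sketched below under the assumption of a marker of known physical size and a calibrated pinhole camera (the paper's exact measurement model may differ), is to infer depth from apparent size and then lateral offsets from the pixel offset at that depth:

```python
# Monocular relative position from a known-size marker (pinhole model).

def relative_position(u, v, pixel_width, marker_width_m, focal_px, cx, cy):
    """Return (X, Y, Z) in metres for a marker centred at pixel (u, v).

    Depth from similar triangles: Z = f * W / w, then lateral offsets
    X = (u - cx) * Z / f and Y = (v - cy) * Z / f.
    """
    z = focal_px * marker_width_m / pixel_width
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# A 0.5 m-wide marker imaged 100 px wide by an f = 800 px camera lies 4 m away.
pos = relative_position(u=720, v=300, pixel_width=100,
                        marker_width_m=0.5, focal_px=800, cx=640, cy=360)
```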

AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발 (Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras)

  • 진영석;전형철;신영남;현유진
    • 대한임베디드공학회논문지, Vol. 13 No. 4, pp.169-178, 2018
  • Currently, various sensors are used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision- and radar-based sensor fusion has become a popular approach. A typical fusion method has the vision sensor recognize targets within ROIs (Regions Of Interest) generated by radar sensors. Because AVM (Around View Monitor) cameras, with their wide-angle lenses, have limited detection performance at near distances and around the edges of the field of view, exact ROI extraction from the radar sensor is very important for high-performance fusion of AVM cameras and radar sensors. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a data logging system for multiple radars together with AVM cameras. We also designed radar post-processing algorithms to extract exact ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
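The radar-to-camera ROI extraction that such fusion depends on can be sketched as follows: a radar target given as (range, azimuth) is placed in camera coordinates and projected with a pinhole model to an image rectangle sized for a nominal vehicle. The co-located, aligned-sensor assumption and the nominal 2 m target width are simplifications; the paper works with calibrated wide-angle AVM cameras:

```python
# Project a radar (range, azimuth) target to a pixel ROI (pinhole model).
import math

def radar_to_roi(range_m, azimuth_rad, focal_px, cx, cy,
                 target_width_m=2.0, target_height_m=1.5):
    """Return an (left, top, right, bottom) image ROI for a radar target."""
    # Camera frame: Z forward, X right; radar and camera assumed co-located.
    z = range_m * math.cos(azimuth_rad)
    x = range_m * math.sin(azimuth_rad)
    u = cx + focal_px * x / z                 # pixel centre of the target
    half_w = focal_px * target_width_m / (2 * z)
    half_h = focal_px * target_height_m / (2 * z)
    return (u - half_w, cy - half_h, u + half_w, cy + half_h)

# A target 10 m straight ahead maps to a ROI centred on the principal point.
roi = radar_to_roi(10.0, 0.0, focal_px=800, cx=640, cy=360)
```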

비전 기반 주간 LED 교통 신호등 인식 및 신호등 패턴 판단에 관한 연구 (Vision based Traffic Light Detection and Recognition Methods for Daytime LED Traffic Light)

  • 김현구;박주현;정호열
    • 대한임베디드공학회논문지, Vol. 9 No. 3, pp.145-150, 2014
  • This paper presents an effective vision-based method for LED traffic light detection in the daytime. First, the proposed method calculates horizontal coordinates to set a region of interest (ROI) on the input image sequence. Second, it uses color segmentation to extract the regions of green and red traffic lights. Next, to distinguish traffic lights from noise, a shape filter and Haar-like feature values are used. Finally, a weighted temporal delay filter is applied to remove the blinking effect of LED traffic lights, and the state and weight of the traffic light detection are used to classify the traffic light type. For the experiments, the proposed method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural road videos. The average detection rate of traffic lights is 94.50%, the average recognition rate of the traffic light type is 90.24%, and the average computing time of the proposed method is 11 ms.
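The weighted temporal delay filter used to suppress LED blinking can be sketched as a confidence accumulator: per-frame detections raise a score, misses decay it, and the reported state only flips when the score crosses a threshold. The gain, decay, and threshold values below are illustrative assumptions:

```python
# Debounce flickering per-frame LED detections with a weighted score.

def filter_detections(frames, gain=0.4, decay=0.2, threshold=0.5):
    """Map raw per-frame detections (0/1) to a debounced on/off state."""
    score, states = 0.0, []
    for det in frames:
        score = min(1.0, score + gain) if det else max(0.0, score - decay)
        states.append(1 if score >= threshold else 0)
    return states

# An LED that flickers off for single frames stays "on" after lock-on,
# and the state releases only after several consecutive misses.
raw = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0]
stable = filter_detections(raw)
```

Because the decay is slower than the gain, single-frame dropouts caused by LED PWM flicker are bridged, while a light that truly disappears is released after a short delay.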