• Title/Summary/Keyword: camera vision


Machine vision applications in automated scrap-separating research (머신비젼 시스템을 이용(利用)한 스크랩 자동선별(自動選別) 연구(硏究))

  • Kim, Chan-Wook;Lee, Seung-Hyun;Kim, Hang-gu
    • Proceedings of the Korean Institute of Resources Recycling Conference
    • /
    • 2005.05a
    • /
    • pp.57-61
    • /
    • 2005
  • In this study, a machine vision inspection system based on color recognition was designed and developed to automatically sort out specified materials, such as Cu scraps or other non-ferrous metal scraps, mixed in Fe scraps. The system consists of a CCD camera, light sources, a frame grabber, conveying devices, and an air-nozzle ejector, and is program-controlled by an image-processing algorithm. The ejector is operated through an I/O interface communicating with a hardware controller. Sorting experiments show that the efficiency of separating Cu scraps from mixed Fe scraps is around 90% at a conveying speed of 15 m/min, demonstrating the system's effectiveness. The system is therefore expected to be commercialized in shredder firms once high-speed automated sorting is realized.

  • PDF
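The color-recognition decision at the heart of this sorter can be sketched as follows. This is only an illustrative sketch: the RGB thresholds, region representation, and function names are hypothetical assumptions, not values from the paper.

```python
# Hypothetical sketch of the color-recognition step: classify a segmented
# scrap region as copper or iron from its mean RGB color. The thresholds
# below are illustrative assumptions, not the paper's calibrated values.

def mean_color(pixels):
    """Average the (R, G, B) tuples of a segmented scrap region."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def classify_scrap(pixels):
    """Copper appears reddish-orange; steel appears gray (R ~ G ~ B)."""
    r, g, b = mean_color(pixels)
    if r > 120 and r > 1.3 * b:   # strong red dominance -> copper
        return "Cu"
    return "Fe"

def eject_signal(label):
    """The air-nozzle ejector fires only for the target (Cu) scraps."""
    return label == "Cu"
```

In the described system, a true `eject_signal` would be sent over the I/O interface to the hardware controller driving the air nozzle.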

The Weldability Estimation for the Purpose of Real-Time Inspection and Control (실시간 검사 및 제어를 목적으로 한 용접성 평가)

  • Lee, Jeong-Ick
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.9 no.3
    • /
    • pp.605-610
    • /
    • 2008
  • Welded parts can show unsatisfactory surface quality caused by weld defects. To check these defects effectively and without time loss, a weldability estimation system is urgently needed for assessing the quality of whole specimens. In this study, raw data on welded specimen profiles are first captured by a laser vision camera and processed with vision algorithms to estimate qualitative defects. At the same time, to detect quantitative defects, whole-specimen weldability is estimated by multi-feature pattern recognition, a kind of fuzzy pattern recognition. For user-friendliness, the weldability estimation results are presented as per-profile views, final reports, and visual graphics, so that the user can easily judge weldability. Applied to welding fabrication, these techniques contribute to on-line weldability estimation.
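Multi-feature fuzzy pattern recognition of this kind can be sketched as combining per-feature fuzzy memberships into one score. The feature names, membership shapes, and limits below are hypothetical illustrations, not the paper's parameters.

```python
# Illustrative sketch of multi-feature fuzzy scoring for weldability,
# assuming triangular membership functions. Feature names and limits
# are hypothetical, not taken from the paper.

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership: 1 at `peak`, 0 outside [lo, hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def weldability_score(features):
    """Aggregate per-feature memberships (here: their minimum)."""
    memberships = [
        triangular(features["bead_width"], 4.0, 6.0, 8.0),    # mm
        triangular(features["bead_height"], 1.0, 2.0, 3.0),   # mm
        triangular(features["undercut"], -0.5, 0.0, 0.5),     # mm
    ]
    return min(memberships)

# A profile near all peaks scores high; one defect drives the score to 0.
good = weldability_score({"bead_width": 6.0, "bead_height": 2.0, "undercut": 0.0})
bad = weldability_score({"bead_width": 9.0, "bead_height": 2.0, "undercut": 0.0})
```

Taking the minimum makes the score conservative: a single out-of-range feature marks the profile as defective regardless of the others.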

The Method to Reduce the Driving Time of Gantry (겐트리 구동시간의 단축 방법)

  • Kim, Soon Ho;Kim, Chi Su
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.11
    • /
    • pp.405-410
    • /
    • 2018
  • When more parts are mounted in the same amount of time by surface-mount equipment, total output and hence productivity increase. In this paper, we propose a method to reduce the gantry drive time from component pick-up to placement in order to improve the productivity of surface-mount equipment. The approach is to keep the gantry at the maximum possible velocity while it passes in front of the camera during vision inspection. We developed drive-time calculation algorithms for three vision-inspection schemes, stop-motion, fly1-motion, and fly2-motion, then calculated and compared their driving times. As a result, the fly1-motion method shortened the drive time by 13% and the fly2-motion method by 18% compared with the stop-motion method.
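The trade-off among the three schemes can be sketched with a toy time model. The distances, speeds, and inspection times below are hypothetical assumptions used only to illustrate why on-the-fly inspection saves time; they are not the paper's algorithms or measurements.

```python
# Simplified sketch of the three drive-time models compared in the paper.
# All numbers are hypothetical; the point is the ordering fly2 < fly1 < stop.

def stop_motion(distance, v_max, t_inspect):
    """Gantry stops in front of the camera for the vision inspection."""
    return distance / v_max + t_inspect

def fly_motion(distance, v_pass, v_max):
    """Gantry is imaged on the fly while crossing the camera zone at v_pass."""
    camera_zone = 0.05          # m travelled while over the camera
    return (distance - camera_zone) / v_max + camera_zone / v_pass

travel = 0.5                    # m, pick-up-to-placement distance
t_stop = stop_motion(travel, v_max=1.0, t_inspect=0.2)
t_fly1 = fly_motion(travel, v_pass=0.5, v_max=1.0)   # reduced-speed pass
t_fly2 = fly_motion(travel, v_pass=1.0, v_max=1.0)   # full-speed pass
```

With these toy numbers the fly-motion variants cut the stop-and-inspect dwell, mirroring the 13% and 18% savings reported.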

2-Axis Cartesian Coordinate Robot Optimization for Air Hockey Game (에어 하키 게임을 위한 2축 직교 좌표 로봇 최적화)

  • Kim, Hui-yeon;Lee, Won-jae;Yu, Yun Seop;Kim, Nam-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.436-438
    • /
    • 2019
  • An air hockey robot is a machine vision system that lets a user play against a machine that tracks the hockey ball through a camera. The position of the ball is detected from its color information using the OpenCV library. The system senses the ball's position, predicts its trajectory, and sends the result to an ARM Cortex-M board, which controls a 2-axis Cartesian coordinate robot to play the air hockey game. Depending on its strategy, the robot can operate in defensive, offensive, or combined defensive-offensive mode. In this paper, we describe the vision and trajectory-prediction system and propose a new method to control the two-axis orthogonal robot in an air hockey game.

  • PDF
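The trajectory-prediction step can be sketched as linear extrapolation with wall reflection: given two tracked ball positions, extrapolate to the robot's defense line and fold the lateral coordinate at the table walls. The table width and function names are hypothetical assumptions, not details from the paper.

```python
# Hedged sketch of puck/ball trajectory prediction with specular wall
# bounces. Table dimensions are illustrative assumptions.

TABLE_WIDTH = 1.0   # m; the y coordinate ranges over [0, TABLE_WIDTH]

def predict_intercept(p0, p1, x_defense):
    """Linearly extrapolate the ball from p0 -> p1 to x = x_defense,
    folding the y coordinate at the walls (specular reflection)."""
    (x0, y0), (x1, y1) = p0, p1
    if x1 == x0:
        return None                      # ball not moving along x
    slope = (y1 - y0) / (x1 - x0)
    y = y0 + slope * (x_defense - x0)    # unbounded straight line
    # Fold y back into [0, TABLE_WIDTH] to model wall bounces.
    period = 2 * TABLE_WIDTH
    y %= period
    if y > TABLE_WIDTH:
        y = period - y
    return y
```

The predicted intercept would then be sent to the Cortex-M board as the target position for the robot's mallet axis.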

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.17-22
    • /
    • 2022
  • In this paper, we design a lightweight embedded device that can support intelligent edge computing and show that the device quickly detects objects in images input from a camera in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (Artificial Intelligence) technology is increasingly required for the widespread adoption of intelligent vision recognition systems. Offloading computation from an image-acquisition device to a nearby edge device enables fast service with fewer network and system resources than AI services performed in the cloud. In addition, the approach is expected to be safely applicable to various industries, as it reduces the attack surface exposed to hacking attacks and minimizes the disclosure of sensitive data.

End-to-End Autonomous Driving System Using Outlier Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departure and traffic-light misrecognition in a vision-sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data were collected using a vision-sensor-based model car, and two training sets were composed: the original data and the data with outliers removed. With camera images as input and speed and steering as output, an end-to-end model was trained on each set, and the reliability of the trained models was verified. The trained end-to-end model was then applied to the model car to predict steering angles from image data. The driving results show that the model trained on outlier-removed data improves on the model trained on the original data.
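The outlier-removal step can be sketched as a simple statistical filter over the steering labels before training. The z-score criterion and threshold below are a hypothetical choice for illustration; the paper does not specify its filtering rule in this abstract.

```python
# Minimal sketch of pre-training outlier removal: drop frames whose
# steering angle deviates strongly from the dataset mean. The z-score
# threshold is an illustrative assumption.

from statistics import mean, stdev

def remove_outliers(samples, z_thresh=2.0):
    """samples: list of (image_id, steering_angle) pairs. Keep samples
    whose angle lies within z_thresh standard deviations of the mean."""
    angles = [a for _, a in samples]
    mu, sigma = mean(angles), stdev(angles)
    if sigma == 0:
        return list(samples)
    return [(i, a) for i, a in samples if abs(a - mu) <= z_thresh * sigma]
```

The cleaned list would then replace the raw frames as the training set for the end-to-end model.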

Image Processing Software Development for Detection of Oyster Hinge Lines (굴의 힌지 선 감지를 위한 영상처리 소프트웨어의 개발)

  • So, J.D.;Wheaton, Fred W.
    • Journal of Biosystems Engineering
    • /
    • v.22 no.2
    • /
    • pp.237-246
    • /
    • 1997
  • Shucking an oyster (removing the meat from the shell) requires severing the muscle attachments to the two shell valves and the hinge. Described here is the computer vision software needed to locate the oyster hinge line so it can be severed automatically, one step in the development of an automated oyster shucker. Oysters are first prepared by washing and trimming a small shell piece off the hinge end to expose the outer hinge surface. A computer vision system employing a color video camera then grabs an image of the hinge end of the oyster shell, and this image is processed by the computer in software. The software is a combination of commercially available and custom-written routines that locate the oyster hinge. It uses four feature variables, circularity, rectangularity, aspect ratio, and Euclidean distance, to distinguish the hinge object from other dark-colored objects on the hinge end of the oyster. Several techniques, including shrink-expand, thresholding, and others, were used to obtain an image that could be reliably and efficiently processed to locate the oyster hinge line.

  • PDF
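The four-feature discrimination step can be sketched as computing shape descriptors for each candidate blob and testing them against acceptance ranges. The blob representation and the ranges below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch of the four-feature hinge discrimination, assuming a candidate
# blob is summarized by its area, perimeter, bounding box, and centroid.
# Acceptance ranges are hypothetical.

import math

def features(area, perimeter, w, h, centroid, expected_centroid):
    circularity = 4 * math.pi * area / perimeter ** 2   # 1.0 for a circle
    rectangularity = area / (w * h)                     # blob vs bounding box
    aspect_ratio = max(w, h) / min(w, h)
    distance = math.dist(centroid, expected_centroid)   # Euclidean distance
    return circularity, rectangularity, aspect_ratio, distance

def is_hinge(area, perimeter, w, h, centroid, expected_centroid):
    c, r, a, d = features(area, perimeter, w, h, centroid, expected_centroid)
    return 0.3 <= c <= 1.1 and r >= 0.4 and a <= 3.0 and d <= 20.0
```

A compact, roughly round blob near the expected hinge location passes all four tests; elongated or distant dark objects fail on aspect ratio, circularity, or distance.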

Comparing State Representation Techniques for Reinforcement Learning in Autonomous Driving (자율주행 차량 시뮬레이션에서의 강화학습을 위한 상태표현 성능 비교)

  • Jihwan Ahn;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.109-123
    • /
    • 2024
  • Research into vision-based end-to-end autonomous driving systems utilizing deep learning and reinforcement learning has been steadily increasing. These systems typically encode continuous and high-dimensional vehicle states, such as location, velocity, orientation, and sensor data, into latent features, which are then decoded into a vehicular control policy. The complexity of urban driving environments necessitates the use of state representation learning through networks like Variational Autoencoders (VAEs) or Convolutional Neural Networks (CNNs). This paper analyzes the impact of different image state encoding methods on reinforcement learning performance in autonomous driving. Experiments were conducted in the CARLA simulator using RGB images and semantically segmented images captured by the vehicle's front camera. These images were encoded using VAE and Vision Transformer (ViT) networks. The study examines how these networks influence the agents' learning outcomes and experimentally demonstrates the role of each state representation technique in enhancing the learning efficiency and decision-making capabilities of autonomous driving systems.

3D Visualization and Work Status Analysis of Construction Site Objects

  • Junghoon Kim;Insoo Jeong;Seungmo Lim;Jeongbin Hwang;Seokho Chi
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.447-454
    • /
    • 2024
  • Construction site monitoring is pivotal for overseeing project progress to ensure that projects are completed as planned, within budget, and in compliance with applicable laws and safety standards. Additionally, it seeks to improve operational efficiency for better project execution. To achieve this, many researchers have utilized computer vision technologies to conduct automatic site monitoring and analyze the operational status of equipment. However, most existing studies estimate real-world 3D information (e.g., object tracking, work status analysis) based only on 2D pixel-based information of images. This approach presents a substantial challenge in the dynamic environments of construction sites, necessitating the manual recalibration of analytical rules and thresholds based on the specific placement and the field of view of cameras. To address these challenges, this study introduces a novel method for 3D visualization and status analysis of construction site objects using 3D reconstruction technology. This method enables the analysis of equipment's operational status by acquiring 3D spatial information of equipment from single-camera images, utilizing the Sam-Track model for object segmentation and the One-2-3-45 model for 3D reconstruction. The framework consists of three main processes: (i) single image-based 3D reconstruction, (ii) 3D visualization, and (iii) work status analysis. Experimental results from a construction site video demonstrated the method's feasibility and satisfactory performance, achieving high accuracy in status analysis for excavators (93.33%) and dump trucks (98.33%). This research provides a more consistent method for analyzing working status, making it suitable for practical field applications and offering new directions for research in vision-based 3D information analysis. Future studies will apply this method to longer videos and diverse construction sites, comparing its performance with existing 2D pixel-based methods.
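The work-status analysis stage can be sketched as labeling windows of an equipment object's reconstructed 3D trajectory by displacement. The window size, threshold, and function names are hypothetical illustrations; the paper's actual rules are not given in this abstract.

```python
# Hedged sketch of work-status analysis from reconstructed 3D positions:
# label each fixed-length window of an object's centroid track as
# "working" or "idle" by how far the centroid moved. Parameters are
# illustrative assumptions.

import math

def work_status(centroids, window=5, min_disp=0.5):
    """centroids: list of (x, y, z) positions sampled per frame.
    Returns one status label per complete, non-overlapping window."""
    labels = []
    for start in range(0, len(centroids) - window + 1, window):
        a = centroids[start]
        b = centroids[start + window - 1]
        disp = math.dist(a, b)          # metres moved over the window
        labels.append("working" if disp >= min_disp else "idle")
    return labels
```

Because the positions come from 3D reconstruction rather than 2D pixels, a threshold in metres holds regardless of camera placement, which is the consistency advantage the study emphasizes.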

CV-Based Mobile Application to Enhance Real-time Safety Monitoring of Ladder Activities

  • Muhammad Sibtain Abbas;Nasrullah Khan;Syed Farhan Alam Zaidi;Rahat Hussain;Aqsa Sabir;Doyeop Lee;Chansik Park
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.1057-1064
    • /
    • 2024
  • The construction industry has witnessed a concerning rise in ladder-related accidents, necessitating stricter safety measures. Recent statistics highlight a substantial number of accidents occurring while using ladders, underscoring the urgent need for preventive measures. While prior research has explored computer-vision-based automatic monitoring of specific aspects such as ladder stability with and without outriggers, worker height, and helmet usage, this study extends existing frameworks by introducing a rule set for co-workers. The research methodology involves training a YOLOv5 model on a comprehensive dataset to detect both the worker on the ladder and the presence of co-workers in real time. The aim is to integrate the detector smoothly into a mobile application serving as a portable real-time monitoring tool for safety managers. This mobile application functions as a general safety tool, considering not only conventional risk factors but also verifying the presence of a co-worker when a worker reaches a specified height. The application offers users an intuitive interface, utilizing the device's camera to identify and verify the presence of co-workers during ladder activities. By combining computer vision technology with mobile applications, this study presents an innovative approach to ladder safety that prioritizes real-time, on-site co-worker verification, thereby significantly reducing the risk of accidents in construction environments. With an overall mean average precision (mAP) of 97.5 percent, the trained model demonstrates its effectiveness in detecting unsafe worker behavior in a construction environment.
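The co-worker rule set layered on top of the detector can be sketched as a simple check over the detected classes. The class names and height threshold below are hypothetical assumptions, not the study's actual labels or parameters.

```python
# Hypothetical sketch of the co-worker rule: raise an alert when a worker
# on a ladder is above a height threshold and no co-worker is detected.
# Class names and the threshold are illustrative assumptions.

HEIGHT_THRESHOLD_M = 2.0

def ladder_alerts(detections, worker_height_m):
    """detections: list of class-name strings from the detector,
    e.g. ["worker_on_ladder", "co_worker"]. Returns alert messages."""
    alerts = []
    on_ladder = "worker_on_ladder" in detections
    has_coworker = "co_worker" in detections
    if on_ladder and worker_height_m >= HEIGHT_THRESHOLD_M and not has_coworker:
        alerts.append(
            "Worker above %.1f m without a co-worker" % HEIGHT_THRESHOLD_M
        )
    return alerts
```

In the described mobile application, such a rule would run on each frame's detections and surface the alert to the safety manager in real time.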