• Title/Summary/Keyword: OpenCV-based Python


Control Technology Based on the Finger Recognition of Robot Cleaners (손가락 인식을 기반으로 한 로봇청소기 제어기술)

  • Yoo, Hyang-Joon;Mok, Seung-Su;Kim, Jun-Seo;Baek, Ji-A;Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.139-146 / 2020
  • The disadvantage of a general robot cleaner is that it works only on a designated route, so it cannot clean places outside that route. In this study, a direction control methodology based on finger recognition technology was therefore studied, letting the cleaner search places other than the designated route and compensating for this shortcoming of existing cleaners. A Raspberry Pi was used as the main controller, and OpenCV was used to recognize the number of fingers shown. To verify the validity of the proposed methodology, the finger recognition algorithm was implemented in Python; with a Logitech C922 camera, the recognition success rate was 100% at 90 cm and 70% at 110 cm.

Smart window coloring control automation system based on image analysis using a Raspberry Pi camera (라즈베리파이 카메라를 활용한 이미지 분석 기반 스마트 윈도우 착색 조절 자동화 시스템)

  • Min-Sang Kim;Hyeon-Sik Ahn;Seong-Min Lim;Eun-Jeong Jang;Na-Kyung Lee;Jun-Hyeok Heo;In-Gu Kang;Ji-Hyeon Kwon;Jun-Young Lee;Ha-Young Kim;Dong-Su Kim;Jong-Ho Yoon;Yoonseuk Choi
    • Journal of IKEEE / v.28 no.1 / pp.90-96 / 2024
  • In this paper, we propose an automated system that uses a Raspberry Pi camera and a function generator to analyze luminance in an image and then applies a voltage, based on that analysis, to control light transmission by coloring smart windows. The luminance meters conventionally used for this measurement are expensive and force unnecessary movement on the user, making them difficult to use in daily life. In contrast, analyzing the luminance of a captured photograph with the Python Open Source Computer Vision Library (OpenCV) is inexpensive and portable, so it can easily be applied in practice. The system was deployed in an environment with smart windows to detect window luminance: based on the brightness of the image, the coloring of the smart window is adjusted to reduce the window's brightness, allowing occupants to maintain a comfortable viewing environment.
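The core of such a pipeline is a luminance estimate from the camera image and a mapping from that luminance to a coloring voltage. A hedged sketch, using the Rec. 601 luma weights that OpenCV's BGR-to-gray conversion applies (the voltage range and thresholds below are hypothetical placeholders, not values from the paper):

```python
import numpy as np

def mean_luminance(bgr):
    """Mean Rec. 601 luma of a BGR image array (same weights as cv2 BGR2GRAY)."""
    b = bgr[..., 0].astype(float)
    g = bgr[..., 1].astype(float)
    r = bgr[..., 2].astype(float)
    return float(np.mean(0.114 * b + 0.587 * g + 0.299 * r))

def tint_voltage(lum, v_min=0.0, v_max=3.0, lum_start=120.0, lum_full=240.0):
    """Map luminance to a coloring voltage: a brighter window gets a stronger tint."""
    frac = np.clip((lum - lum_start) / (lum_full - lum_start), 0.0, 1.0)
    return float(v_min + frac * (v_max - v_min))
```

The resulting voltage would then be sent to the function generator driving the smart window.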

The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning (2개의 비전 센서 및 딥 러닝을 이용한 도로 속도 표지판 인식, 자동차 조향 및 속도제어 방법론)

  • Kim, In-Sung;Seo, Jin-Woo;Ha, Dae-Wan;Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.699-708 / 2021
  • In this paper, steering control and speed control algorithms are presented for autonomous driving based on two vision sensors and road speed sign boards. The speed control algorithm recognizes the speed sign in the image from vision sensor B using TensorFlow, a deep learning framework provided by Google, and then makes the car follow the recognized speed. At the same time, the steering angle control algorithm detects lanes by analyzing the road images transmitted from vision sensor A in real time, calculates the steering angle, and controls the front axle through PWM so that the vehicle tracks the lane. To verify the effectiveness of the proposed steering and speed control algorithms, a prototype car was built based on Python, a Raspberry Pi, and OpenCV. Its accuracy was then confirmed by testing various steering and speed control scenarios on a track produced for the tests.
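The steering step above (lane offset → steering angle → PWM) can be sketched as follows. This is a generic illustration, not the paper's implementation: the lane positions are assumed to come from a lane detector, and the servo duty-cycle mapping uses typical hobby-servo values (7.5% center at 50 Hz) as placeholder assumptions.

```python
def steering_from_lanes(left_x, right_x, frame_width, max_angle_deg=30.0):
    """Steering angle proportional to the lane-center offset from the frame center."""
    lane_center = (left_x + right_x) / 2.0
    offset = lane_center - frame_width / 2.0
    # Normalize the offset to [-1, 1] and scale to the steering range.
    norm = max(-1.0, min(1.0, offset / (frame_width / 2.0)))
    return norm * max_angle_deg

def angle_to_duty(angle_deg, max_angle_deg=30.0, center_duty=7.5, span=2.5):
    """Map a steering angle to a PWM duty cycle around the servo's center position."""
    return center_duty + (angle_deg / max_angle_deg) * span
```

On a Raspberry Pi, the duty cycle would feed a PWM pin driving the front-axle servo each frame.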

Database Generation and Management System for Small-pixelized Airborne Target Recognition (미소 픽셀을 갖는 비행 객체 인식을 위한 데이터베이스 구축 및 관리시스템 연구)

  • Lee, Hoseop;Shin, Heemin;Shim, David Hyunchul;Cho, Sungwook
    • Journal of Aerospace System Engineering / v.16 no.5 / pp.70-77 / 2022
  • This paper proposes a database generation and management system for small-pixelized airborne target recognition. The proposed system has five main features: 1) image extraction from in-flight test video frames, 2) automatic image archiving, 3) image data labeling and metadata annotation, 4) virtual image data generation based on color channel conversion and seamless cloning, and 5) HOG/LBP-based augmentation of tiny-pixelized target image data. The proposed framework is built on Python and PyQt5 and has an interface that includes OpenCV. Using video files collected from flight tests, an image dataset for airborne target recognition is generated with the proposed system and its inputs.
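The labeling and metadata annotation step (feature 3) amounts to emitting a structured record per extracted frame. The paper does not specify its schema, so the sketch below uses an entirely hypothetical one, just to show the shape of such a record:

```python
import json

def make_annotation(image_path, boxes, labels, meta):
    """Build one per-image annotation record (hypothetical schema).

    boxes  -- iterable of (x, y, w, h) pixel boxes around targets
    labels -- class name for each box
    meta   -- free-form flight metadata (sortie, camera, timestamp, ...)
    """
    return {
        "image": image_path,
        "objects": [{"bbox": list(b), "label": l} for b, l in zip(boxes, labels)],
        "meta": meta,
    }
```

Serializing each record with `json.dumps` gives an archive that the management UI can index and filter.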

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.813-826 / 2022
  • AI-based image processing technologies have been widely studied in various fields. However, the lighter the board, the harder it is to slim down an image processing algorithm, because of the large amount of computation involved. In this paper, we propose a deep learning method for object recognition on lightweight embedded boards. A deep neural network architecture that performs semantic segmentation with a relatively small amount of computation first determines the candidate area. After masking that area, a more accurate deep learning algorithm performs object detection with improved accuracy, using the efficient neural network (ENet) and You Only Look Once (YOLO), enabling real-time object recognition on lightweight embedded boards. This research is expected to be useful for autonomous driving applications, which must be much lighter and cheaper than existing object recognition approaches.
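The masking step that connects the two networks is simple: pixels the cheap segmentation stage did not flag are zeroed out, so the heavier detector only spends work on the candidate region. A minimal sketch of that hand-off (array shapes assumed, not taken from the paper):

```python
import numpy as np

def mask_roi(frame, seg_mask):
    """Zero out pixels outside the segmented candidate region.

    frame    -- H x W x C image from the camera
    seg_mask -- H x W mask from the lightweight segmentation network
                (nonzero = candidate object pixel)
    """
    keep = (seg_mask[..., None] > 0).astype(frame.dtype)  # broadcast over channels
    return frame * keep
```

The masked frame is then passed to the detection network; everything outside the mask is constant black, which cuts the useful search space.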

Alarm program through image processing based on Machine Learning (ML 기반의 영상처리를 통한 알람 프로그램)

  • Kim, Deok-Min;Chung, Hyun-Woo;Park, Goo-Man
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.304-307 / 2021
  • Various research and development efforts are underway to make ML (machine learning) technology practical and usable for general users. In particular, the processing speed of personal computers and mobile devices has increased markedly in recent years, bringing ML ever closer to everyday life. Among the many tools and libraries currently offering ML solutions and applications, we used 'Mediapipe', developed and released by Google. Mediapipe currently supports development on 'Android', 'iOS', 'C++', 'Python', 'JS', and 'Coral', with support for more environments planned. Based on the Mediapipe framework, our team researched and developed an alarm program that offers convenience to general users through machine-learning-based image processing. Mediapipe detects the body as landmarks; we used the scikit-learn machine learning library to train and model specific postures, which are then used as trigger conditions for specific functions of the alarm program. scikit-learn is readily available in development-environment packages such as Anaconda, which bundles the libraries commonly used with Python for data analysis, plotting, and similar tasks. Accordingly, in building this ML-based image-processing alarm program, we used the tkinter GUI toolkit included by default with Python, together with OpenCV, a programming library for real-time computer vision originally developed by Intel, and several other components to construct the environment.
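The posture-classification step described above, turning Mediapipe body landmarks into features and classifying them with a trained model, can be sketched without the Mediapipe or scikit-learn runtimes. The feature extraction below mirrors the common practice of centering landmark coordinates; the nearest-centroid classifier is a deliberately simple stand-in for the scikit-learn model the authors trained, not their actual pipeline:

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Flatten (x, y, z) landmarks into one feature row, centered on their mean
    for crude translation invariance."""
    pts = np.asarray(landmarks, dtype=float)
    return (pts - pts.mean(axis=0)).ravel()

class NearestCentroidPose:
    """Toy stand-in for a scikit-learn classifier: label a pose by the nearest
    class centroid in feature space."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, l in zip(X, y) if l == c], axis=0)
             for c in self.labels_]
        )
        return self

    def predict_one(self, x):
        d = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(d))]
```

In the real program, the predicted label (e.g. "sitting upright") would gate an alarm condition in the tkinter UI.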


Development of a Face Detection and Recognition System Using a RaspberryPi (라즈베리파이를 이용한 얼굴검출 및 인식 시스템 개발)

  • Kim, Kang-Chul;Wei, Hai-tong
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.5 / pp.859-864 / 2017
  • IoT is an emerging technology leading the 4th industrial revolution and has been widely used in industry and in the home to improve quality of life. In this paper, an IoT-based face detection and recognition system for a smart elevator is developed. A Haar cascade classifier is used for face detection, and a proposed PCA algorithm, written in Python, is implemented in the face recognition system to reduce execution time while calculating the eigenfaces. SVM or the Euclidean metric is used to recognize the faces found by the detection stage. The proposed system runs on a Raspberry Pi 3. 200 sample images from the ORL face database are used for training and 200 for testing. The simulation results show a recognition rate over 93% for PP+EU and over 96% for PP+SVM. The execution times of the proposed PCA and the conventional PCA are 0.11 s and 1.1 s respectively, so the proposed PCA is roughly ten times faster. The proposed system is suitable for an elevator monitoring system, a real-time home security system, etc.
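The eigenface recognition path described above (PCA over flattened face images, projection, then a Euclidean match for the "EU" variant) can be sketched generically. This is the textbook eigenface recipe, not the paper's optimized PCA:

```python
import numpy as np

def fit_eigenfaces(X, k):
    """PCA via SVD on mean-centered flattened face images.

    X -- n_faces x n_pixels matrix of training faces
    Returns the mean face and the top-k eigenfaces (principal components).
    """
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, eigenfaces):
    """Coordinates of one flattened face in eigenface space."""
    return eigenfaces @ (x - mean)

def match_euclidean(x_proj, gallery_proj):
    """Index of the closest enrolled face by Euclidean distance (the 'EU' metric)."""
    return int(np.argmin(np.linalg.norm(gallery_proj - x_proj, axis=1)))
```

Face crops from the Haar cascade detector would be resized, flattened, projected, and matched against the enrolled gallery; an SVM trained on the same projections gives the "SVM" variant.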

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.4 / pp.27-39 / 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation, including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study explores an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis, and images of each sample were acquired in a controlled studio environment with a smartphone camera. Color distribution ratios based on the RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced with Digital Image Processing (DIP) techniques and trained under nine distinct conditions to evaluate its robustness and accuracy. The model achieved over 80% accuracy in classifying images of the pre-sorted soil samples, as validated by the confusion matrix and F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By using such an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
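The color distribution ratio feature mentioned above reduces, in its simplest form, to each channel's share of the image's total intensity. A hedged sketch of that computation (the exact ratio definition in the paper may differ; the channel order here is assumed to be RGB):

```python
import numpy as np

def rgb_ratios(img):
    """Per-channel share of total intensity in an RGB image array (H x W x 3),
    a simple color distribution ratio for characterizing soil sample color."""
    totals = img.reshape(-1, 3).astype(float).sum(axis=0)
    s = totals.sum()
    return totals / s if s > 0 else np.zeros(3)
```

These three ratios per image can serve as a compact summary feature alongside the CNN's learned representation.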