• Title/Summary/Keyword: OpenCV (Open Source Computer Vision)


A Prototype for Stereo Vision Systems using OpenCV (OpenCV를 사용한 스테레오 비전 시스템의 프로토타입 구현)

  • Yi, Jong-Su;Jung, Sae-Am;Kim, Jun-Seong
    • Proceedings of the IEEK Conference / 2008.06a / pp.763-764 / 2008
  • Sensing is an important part of a smart home system. Vision sensors are passive devices and are therefore not sensitive to noise. In this paper, we implement a prototype for stereo vision systems using OpenCV, an open source computer vision library originally developed by Intel Corporation. The prototype will be used to compare the performance of various stereo algorithms and to develop a stereo vision smart camera.


OpenCV-based Autonomous Vehicle (OpenCV 기반 자율 주행 자동차)

  • Lee, Jin-Woo;Hong, Dong-sun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.538-539 / 2018
  • This paper summarizes the implementation of lane recognition using OpenCV, one of the open source computer vision libraries. The Linux-based operating system Raspbian (r18.03.13) was installed on the ARM processor-based Raspberry Pi 3 board, and a Raspberry Pi Camera was used for image processing. To realize lane recognition, the Canny edge detection and Hough transform algorithms implemented in the OpenCV library were used, and the RANSAC algorithm was applied to prevent the vanishing point from shaking and to detect only the desired straight lines. In addition, a DC motor and a servo motor were controlled so that the vehicle would drive along the detected lane.


Visual Cell OOK Modulation : A Case Study of MIMO CamCom (시각 셀 OOK 변조 : MIMO CamCom 연구 사례)

  • Le, Nam-Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.9 / pp.781-786 / 2013
  • Multiplexing information over parallel data channels, based on the RF MIMO concept, makes it possible to achieve considerable data rates over large transmission ranges with just a single transmitting element. Visual multiplexing MIMO techniques send independent bit streams using the multiple elements of a light transmitter array, and recording over a group of camera pixels can further enhance the data rates. The proposed system combines computer vision algorithms for tracking with OOK cell frame modulation. The LED array is controlled to transmit messages as digital information using ON-OFF signaling (ON = bit 1, OFF = bit 0). A camera captures image frames of the array, which are then individually processed and sequentially decoded to retrieve the data. To demodulate the transmitted data, a motion tracking algorithm is implemented in OpenCV (Open Source Computer Vision library) to classify the transmission pattern. One of the main advantages of the proposed architecture is that Computer Vision (CV) based image analysis can be used to spatially separate signals and remove interference from ambient light. This points to future challenges and opportunities for mobile communication networking research.
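
The OOK demodulation step, reducing each tracked LED cell to one intensity sample per frame and thresholding it into a bit, can be sketched as below. The sample values and noise model are illustrative stand-ins for real camera measurements.

```python
import numpy as np

# Simulated per-frame mean intensities of one LED cell region:
# ON frames are bright, OFF frames are dark, plus sensor noise.
bits_sent = [1, 0, 1, 1, 0, 0, 1, 0]
rng = np.random.default_rng(1)
samples = np.array([200 if b else 40 for b in bits_sent], dtype=float)
samples += rng.normal(0, 5, samples.shape)

# OOK demodulation: threshold each cell sample at the overall midpoint.
threshold = samples.mean()
bits_received = (samples > threshold).astype(int).tolist()
print(bits_received)
```

The CV-based spatial separation the abstract highlights means each LED cell gets its own sample stream like this one, so neighbouring cells and ambient light do not corrupt the threshold decision.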

Segmentation of underwater images using morphology for deep learning (딥러닝을 위한 모폴로지를 이용한 수중 영상의 세그먼테이션)

  • Ji-Eun Lee;Chul-Won Lee;Seok-Joon Park;Jea-Beom Shin;Hyun-Gi Jung
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.370-376 / 2023
  • In underwater images, the shape of a target is difficult to distinguish due to underwater noise and low resolution. In addition, underwater images require pre-processing before they can serve as input to deep learning, and segmentation must be performed first. Even after pre-processing the target may remain unclear, so the detection and identification performance of deep learning may not be high. It is therefore necessary to isolate and clarify the target. In this study, we confirm the importance of target shadows in underwater images, detect objects and acquire target areas from their shadows, and generate data containing only the shapes of targets and shadows, without the underwater background. We present a process for converting the shadow image into a 3-mode image in which the target is white, the shadow is black, and the background is gray. This provides an image that is clearly pre-processed and easily discriminated as an input for deep learning. In addition, when the image processing was implemented with the Open Source Computer Vision (OpenCV) library, the processing speed was suitable for real-time processing.

Computer Vision Platform Design with MEAN Stack Basis (MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계)

  • Hong, Seonhack;Cho, Kyungsoon;Yun, Jinseob
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.1-9 / 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open source platform. We experimented with face recognition and with logging temperature and humidity sensor data over WiFi on the Raspberry Pi 2, and fabricated the platform enclosure directly with 3D printing. We used the face recognition capability of OpenCV through the haarcascade feature extraction machine learning algorithm, and extended the platform with Bluetooth wireless communication to interface with Android mobile devices. We thus implemented a vision platform that identifies face recognition characteristics by scanning with the Pi camera while gathering temperature and humidity sensor data in an IoT environment, and built the platform housing with 3D printing technology. We chose MongoDB to improve the performance of the vision platform, because working with MongoDB is more akin to working with objects in a programming language than with a traditional database. In future work, we intend to enhance the performance of the vision platform with cloud functionality.

CNN-based Online Sign Language Translation Counseling System (CNN기반의 온라인 수어통역 상담 시스템에 관한 연구)

  • Park, Won-Cheol;Park, Koo-Rack
    • Journal of Convergence for Information Technology / v.11 no.5 / pp.17-22 / 2021
  • It is difficult for the hearing impaired to use counseling services without sign language interpretation, and because of the shortage of sign language interpreters, connecting to one takes a long time or is often not possible at all. Therefore, in this paper, we propose a system that captures sign language as images using OpenCV and a CNN (Convolutional Neural Network), recognizes the sign language motion, converts its meaning into textual data, and provides it to users. The counselor can conduct counseling by reading the stored sign language translation contents, so consultation is possible without a professional sign language interpreter, reducing the burden of waiting for one. If the proposed system is applied to counseling services for the hearing impaired, it is expected to improve the effectiveness of counseling and to promote academic research on counseling for the hearing impaired.

Smart window coloring control automation system based on image analysis using a Raspberry Pi camera (라즈베리파이 카메라를 활용한 이미지 분석 기반 스마트 윈도우 착색 조절 자동화 시스템)

  • Min-Sang Kim;Hyeon-Sik Ahn;Seong-Min Lim;Eun-Jeong Jang;Na-Kyung Lee;Jun-Hyeok Heo;In-Gu Kang;Ji-Hyeon Kwon;Jun-Young Lee;Ha-Young Kim;Dong-Su Kim;Jong-Ho Yoon;Yoonseuk Choi
    • Journal of IKEEE / v.28 no.1 / pp.90-96 / 2024
  • In this paper, we propose an automated system that uses a Raspberry Pi camera and a function generator to analyze the luminance in an image and then applies a voltage, based on this analysis, to control light transmission by coloring smart windows. Existing luminance meters are expensive and require unnecessary movement by the user, making them difficult to use in everyday settings. In contrast, analyzing the luminance of a captured photograph with the Python Open Source Computer Vision Library (OpenCV) is inexpensive and portable, so it can easily be applied in real life. The system was used to detect window luminance in an environment where smart windows were installed. Based on the brightness of the image, the coloring of the smart window is adjusted to reduce the brightness of the window, allowing occupants to maintain a comfortable viewing environment.

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / v.8 no.1 / pp.207-212 / 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, providing energy efficient, mood-supporting lighting in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve occupants' motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, the indoor surveillance camera video streams are used to predict daylight levels, occupancy, and occupant-specific emotional features with advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) enabled lighting devices and controls the illumination of the lighting devices relevant to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, together with a networked video surveillance solution. The experimental results were verified with a custom automatic lighting control demo application integrated with OpenCV framework based computer vision methods to predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results obtained from the demo system are analyzed and used for the real-time development of a lighting system control strategy.

A Study on the Measurement of Morphological properties of Coarse-grained Bottom Sediment using Image processing (이미지분석을 이용한 조립질 하상 토사의 형상학적 특성 측정 연구)

  • Kim, Dong-Ho;Kim, Sun-Sin;Hong, Jae-Seok;Ryu, Hong-Ryul;Hwang, Kyu-Nam
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.279-279 / 2022
  • Thanks to rapid advances in hardware and software, image analysis techniques are now used extensively across many research fields, including medicine, biology, geography, and materials engineering. Image analysis is a highly effective method because it can easily quantify the morphological properties, including grain size, of large amounts of sediment. The current standard for sand particle-size analysis is the reliable sieve test (KSF2302), but it involves a cumbersome procedure and takes a long time. Moreover, because particle shape becomes harder to measure directly as grains become finer, image analysis methods have recently been attempted. In this study, we acquired images of coarse-grained bottom sediment of 75 ㎛ and larger, and developed a program that automatically measures morphological properties of the particles such as major and minor axis length, area, perimeter, nominal diameter, and aspect ratio. The program uses OpenCV (Open Source Computer Vision), a library specialized for image analysis. The analysis procedure consists of image acquisition, geometric correction, noise removal, object extraction, and shape factor measurement. During image acquisition, a back light was attached under the panel to remove shadows cast by the sample. For geometric correction a perspective transform was applied; for noise removal, morphological operations were used, and the watershed algorithm was applied to separate clumps caused by overlapping particles. Finally, the outlines of the objects were extracted to compute various particle properties (major axis, minor axis, perimeter, area, nominal diameter, aspect ratio), which were presented as distributions. The image-analysis-based method proposed in this study for measuring the morphological properties of sediment is expected to provide diverse information on bottom sediment more efficiently in terms of time and cost.


Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu;Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association / v.49 no.6 / pp.469-480 / 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone, which can measure river surface velocity using the phone's built-in sensors and processors. First, the SIV system determines the location of the site using the phone's GPS. It also measures the angles (pitch and roll) of the device with its orientation sensors, to determine the coordinate transform from real-world coordinates to image coordinates. The only parameter to be entered is the height of the phone above the water surface. After setup, the phone's camera takes a series of images. With the help of OpenCV, an open source computer vision library, we split the video into frames and analyzed them to obtain the water-surface velocity field. The image processing algorithm, similar to the traditional STIV (Spatio-Temporal Image Velocimeter), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous velocity field (a 1-second-averaged velocity field) once every 11 seconds, and averaging these instantaneous measurements over a sufficient period gives a mean velocity field. A series of tests performed in an experimental flume showed that the measurement system is highly effective and convenient. Compared with measurements by a traditional propeller velocimeter, the system's results showed a maximum error of 13.9 % and an average error of less than 10 %.
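
The correlation step at the core of STIV can be sketched in one dimension: the same surface-texture line sampled one frame apart, with the cross-correlation peak giving the displacement per frame. The camera frame rate and pixel scale below are assumed values, not the paper's calibration.

```python
import numpy as np

# Surface texture along one search line, and the same line one frame
# later, displaced 7 px downstream by the flow.
rng = np.random.default_rng(2)
line_t0 = rng.normal(size=200)
shift_px = 7
line_t1 = np.roll(line_t0, shift_px)

# Cross-correlation; the lag of the peak is the displacement per frame.
corr = np.correlate(line_t1, line_t0, mode="full")
lag = int(np.argmax(corr)) - (len(line_t0) - 1)

fps, px_per_m = 30.0, 1000.0        # assumed camera rate and image scale
velocity = lag * fps / px_per_m     # metres per second
print(lag, velocity)
```

Repeating this over many search lines and frame pairs, then averaging, yields the 1-second instantaneous velocity field the abstract describes.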