• Title/Summary/Keyword: OpenCV (Open Source Computer Vision)

Search Results: 57

A New CSR-DCF Tracking Algorithm based on Faster RCNN Detection Model and CSRT Tracker for Drone Data

  • Farhodov, Xurshid;Kwon, Oh-Heum;Moon, Kwang-Seok;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.12
    • /
    • pp.1415-1429
    • /
    • 2019
  • Object tracking is nowadays becoming one of the most challenging tasks in the computer vision field. The CSR-DCF (channel spatial reliability-discriminative correlation filter) tracking algorithm, proposed on a recent tracking benchmark, achieves state-of-the-art performance by adding channel and spatial reliability concepts to DCF tracking and providing a novel learning algorithm for their efficient and seamless integration into the filter update and the tracking process, using only two simple standard features, HoG and Color Names. However, there are cases where this method cannot track properly, such as overlapping, occlusion, motion blur, appearance change, environmental variation, and so on. To overcome such complications, a modified version of the CSR-DCF algorithm is proposed that integrates deep-learning-based object detection with the CSRT tracker implemented in the OpenCV library. Based on a comparison of object detection methods, Faster RCNN (Region-based Convolutional Neural Network) was chosen as the detection model for its high efficiency and speed, and was combined with the CSRT tracker, demonstrating outstanding real-time detection and tracking performance. The results indicate that integrating the trained object detection model with the tracking algorithm gives better outcomes than using the tracking algorithm or filter alone.
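The detect-then-track hand-off this abstract describes can be sketched in a few lines. The stub classes below are illustrative placeholders for Faster RCNN and OpenCV's CSRT tracker (the real tracker is created via `cv2.TrackerCSRT_create()` in builds that include it); the confidence threshold and box values are assumptions, not the paper's settings.

```python
class StubDetector:
    """Stands in for Faster RCNN: returns a (box, score) for a frame."""
    def detect(self, frame):
        return (10, 10, 20, 20), 0.9

class StubTracker:
    """Stands in for CSRT: tracks a box and reports a confidence."""
    def __init__(self):
        self.box = None
    def init(self, frame, box):
        self.box = box
    def update(self, frame):
        # A real tracker estimates a new box; here we just keep the old one
        # and report low confidence to exercise the re-detection path.
        return self.box, 0.4

def track_with_redetection(frames, detector, tracker, conf_thresh=0.5):
    """Run the tracker frame by frame; re-run the detector whenever
    tracker confidence drops (occlusion, motion blur, drift)."""
    box, _ = detector.detect(frames[0])
    tracker.init(frames[0], box)
    boxes = []
    for frame in frames[1:]:
        box, conf = tracker.update(frame)
        if conf < conf_thresh:
            box, _ = detector.detect(frame)  # fall back to detection
            tracker.init(frame, box)         # re-initialise the tracker
        boxes.append(box)
    return boxes
```

With the stubs above, every update reports low confidence, so the loop re-detects on each frame, which is exactly the failure-recovery behavior the abstract attributes to the combined model.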

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.1
    • /
    • pp.207-212
    • /
    • 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, aiming to provide energy-efficient, mood- and rhythm-motivating lighting for people in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, the indoor surveillance camera video streams are used to predict daylight, occupancy, and occupancy-specific emotional features using advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to internet of things (IoT) featured lighting devices and controls the illumination of the lighting devices relevant to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-featured open-source controller in the network, along with a networked video surveillance solution. The experimental results were verified with a custom-made automatic lighting control demo application integrating OpenCV-framework-based computer vision methods to predict the human-centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results received from the demo system were analyzed and used for the real-time development of a lighting system control strategy.
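One step of the pipeline above can be sketched simply: estimate ambient daylight from a surveillance frame and map it to an LED dim level. The luma weights are the standard Rec. 601 coefficients; the inverse mapping and the sample frames are illustrative assumptions, not the paper's control law.

```python
def frame_luminance(frame):
    """Mean Rec. 601 luma of an RGB frame given as nested [(R, G, B)] rows."""
    total, count = 0.0, 0
    for row in frame:
        for r, g, b in row:
            total += 0.299 * r + 0.587 * g + 0.114 * b
            count += 1
    return total / count

def dim_level(luma, full=255.0):
    """Brighter rooms need less artificial light: a simple inverse mapping
    from mean luma to a 0-100 % dimming command (illustrative)."""
    return round(100 * (1 - min(luma, full) / full))

bright = [[(200, 200, 200)] * 4] * 3   # well-lit scene -> low dim level
dark   = [[(20, 20, 20)] * 4] * 3      # dim scene -> high dim level
```

In the real system this command would be sent to the IoT controller; here it is just returned so the mapping can be inspected.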

Development and Validation of a Vision-Based Needling Training System for Acupuncture on a Phantom Model

  • Trong Hieu Luu;Hoang-Long Cao;Duy Duc Pham;Le Trung Chanh Tran;Tom Verstraten
    • Journal of Acupuncture Research
    • /
    • v.40 no.1
    • /
    • pp.44-52
    • /
    • 2023
  • Background: Previous studies have investigated technology-aided needling training systems for acupuncture on phantom models using various measurement techniques. In this study, we developed and validated a vision-based needling training system (noncontact measurement) and compared its training effectiveness with that of the traditional training method. Methods: Needle displacements during manipulation were analyzed using OpenCV to derive three parameters, i.e., needle insertion speed, needle insertion angle (needle tip direction), and needle insertion length. The system was validated in a laboratory setting and in a needling training course. The performances of the novices (students) before and after training were compared with those of the experts. The technology-aided training method was also compared with the traditional training method. Results: Before the training, a significant difference in needle insertion speed was found between experts and novices. After the training, the novices approached the speed of the experts. Both training methods improved the insertion speed of the novices after 10 training sessions; however, the technology-aided training group already showed improvement after five sessions. Students and teachers showed positive attitudes toward the system. Conclusion: The results suggest that the technology-aided method using computer vision has training effectiveness similar to the traditional one and can potentially be used to speed up needling training.
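The three parameters named in the Methods section can be derived from tracked needle-tip pixel positions per frame. The sketch below assumes an illustrative frame rate and pixel-to-millimeter scale; the abstract does not state the actual calibration values.

```python
import math

def needle_metrics(p0, p1, fps=30.0, mm_per_px=0.1):
    """Speed (mm/s), tip-direction angle (degrees vs. image x-axis), and
    displacement length (mm) between two consecutive tip positions
    p0, p1 = (x, y) in pixels. fps and mm_per_px are assumed values."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length_mm = math.hypot(dx, dy) * mm_per_px
    speed = length_mm * fps                    # one inter-frame interval
    angle = math.degrees(math.atan2(dy, dx))   # needle tip direction
    return speed, angle, length_mm
```

Averaging these per-frame values over a full insertion would give the session-level statistics that the study compares between novices and experts.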

Mapping of Real-Time 3D Object Movement

  • Tengis, Tserendondog;Batmunkh, Amar
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.7 no.2
    • /
    • pp.1-8
    • /
    • 2015
  • Tracking an object in 3D space in real time is a significant task in domains ranging from autonomous robots to smart vehicles. Traditional methods use specific data acquisition equipment such as radars and lasers. Contemporary advances in computer technology have accelerated image processing, enabling three-dimensional stereo vision to be used for localizing and tracking objects in space. This paper describes a system for tracking the three-dimensional motion of an object using color information in real time. We create stereo images using a pair of simple web cameras, and raw object-position data are collected under realistic noisy conditions. The system has been tested using OpenCV and Matlab, and the results of the experiments are presented here.
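The core stereo step implied above can be sketched with the pinhole model: once the colored object's centroid is found in the left and right images of a rectified pair, the horizontal disparity gives depth, Z = fB/d. The focal length, baseline, and principal-point values below are illustrative assumptions.

```python
def stereo_depth(x_left, x_right, focal_px=700.0, baseline_m=0.1):
    """Depth Z = f * B / d for a rectified stereo pair (pinhole model).
    focal_px and baseline_m are assumed calibration values."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object must appear further right in the left image")
    return focal_px * baseline_m / disparity

def reproject(x, y, z, focal_px=700.0, cx=320.0, cy=240.0):
    """Back-project pixel (x, y) at depth z to 3D camera coordinates."""
    return ((x - cx) * z / focal_px, (y - cy) * z / focal_px, z)
```

Running this per frame on the color-tracked centroids yields the stream of 3D positions that the paper maps over time; OpenCV's `cv2.triangulatePoints` performs the general (non-rectified) version of this computation.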

Control of Camera Mounting on Unmanned Aerial Vehicle and Image Processing (무인 항공기 탑재 카메라 제어 및 영상처리)

  • Kim, Kwang-Jin;Ahn, Yong-Nam;Song, Yong-Kyu
    • Journal of Aerospace System Engineering
    • /
    • v.3 no.4
    • /
    • pp.11-18
    • /
    • 2009
  • This paper is about EO sensor module control based on image processing. The main purpose of this research is to acquire the latitude and longitude of a target located on the ground by using image processing. For image processing, OpenCV, a computer vision library originally developed by Intel, is employed, and an ATmega128 is used for EO sensor module control. The task also involves realization of the control programs and acquisition of the sensor view angle for the position of the target.
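The geometry behind the stated goal can be sketched under a flat-ground assumption: a target seen at a known depression angle from a known altitude lies at a computable horizontal offset from the aircraft. This is only the basic trigonometric relation, not the paper's full geolocation procedure, and the values are illustrative.

```python
import math

def ground_offset(altitude_m, depression_deg):
    """Horizontal distance from the UAV's ground point to a target seen
    at depression_deg below the horizontal, assuming flat ground."""
    return altitude_m / math.tan(math.radians(depression_deg))
```

Combining this offset with the UAV's heading and GPS position would then give the target's latitude and longitude.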


A study on Real-time Graphic User Interface for Hidden Target Segmentation (은닉표적의 분할을 위한 실시간 Graphic User Interface 구현에 관한 연구)

  • Yeom, Seokwon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.17 no.2
    • /
    • pp.67-70
    • /
    • 2016
  • This paper discusses a graphic user interface (GUI) for concealed target segmentation. A human subject hiding a metal gun is captured by a passive millimeter wave (MMW) imaging system operating in the 8 mm wavelength regime. The MMW image is analyzed by multi-level segmentation to segment and identify a concealed weapon under clothing. The histogram of the passive MMW image is modeled with a Gaussian mixture distribution. LBG vector quantization (VQ) and expectation-maximization (EM) algorithms are sequentially applied to segment the body and the object area. In the experiment, the GUI is implemented with the MFC (Microsoft Foundation Class) and OpenCV (Open Source Computer Vision) libraries and tested in real time, showing the efficiency of the system.
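The multi-level idea above can be sketched with a plain 1-D clustering of pixel intensities, here Lloyd's k-means standing in for the paper's LBG VQ + EM refinement: cluster the intensities into levels, then label each pixel by its nearest level. The intensity values and seed centers are illustrative.

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalar intensities; `centers` seeds the levels
    (e.g. background / body / object)."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

def segment(values, centers):
    """Assign each intensity the index of its nearest cluster center."""
    return [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            for v in values]
```

In the paper the cluster statistics are instead modeled as Gaussian mixture components and refined by EM, which weighs distances by the estimated variances rather than using raw absolute distance as here.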

Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu;Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.6
    • /
    • pp.469-480
    • /
    • 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone, which can measure river surface velocity using its built-in sensors and processors. First, the SIV system determines the location of the site using the phone's GPS. It also measures the angles (pitch and roll) of the device using its orientation sensors to determine the coordinate transform from real-world coordinates to image coordinates. The only parameter to be entered is the height of the phone above the water surface. After setup, the phone's camera takes a series of images. With the help of OpenCV, an open-source computer vision library, we split the video into frames and analyzed the image frames to obtain the water surface velocity field. The image processing algorithm, similar to the traditional STIV (Spatio-Temporal Image Velocimeter), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous (1-second-averaged) velocity field once every 11 seconds. By averaging these instantaneous measurements over a sufficient amount of time, an average velocity field can be obtained. A series of tests performed in an experimental flume showed that the measurement system developed was highly effective and convenient. Compared with measurements from a traditional propeller velocimeter, the system's results showed a maximum error of 13.9% and an average error of less than 10%.
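The correlation idea behind the STIV-style analysis can be sketched in one dimension: the same surface pattern passes two points spaced along the flow, and the time lag that maximizes the cross-correlation of their intensity series gives the travel time, hence the velocity. The signals, spacing, and frame rate below are illustrative.

```python
def best_lag(upstream, downstream, max_lag):
    """Lag (in frames) maximizing the correlation of the two series."""
    def corr(lag):
        return sum(upstream[i] * downstream[i + lag]
                   for i in range(len(upstream) - lag))
    return max(range(1, max_lag + 1), key=corr)

def surface_velocity(upstream, downstream, spacing_m, fps, max_lag=10):
    """Velocity = point spacing / (lag frames / frame rate)."""
    lag = best_lag(upstream, downstream, max_lag)
    return spacing_m * fps / lag
```

The real system performs this correlation over full spatio-temporal image strips rather than two scalar series, but the lag-to-velocity conversion is the same.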

A Study on the Measurement of Morphological properties of Coarse-grained Bottom Sediment using Image processing (이미지분석을 이용한 조립질 하상 토사의 형상학적 특성 측정 연구)

  • Kim, Dong-Ho;Kim, Sun-Sin;Hong, Jae-Seok;Ryu, Hong-Ryul;Hawng, Kyu-Nam
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.279-279
    • /
    • 2022
  • Recently, image analysis techniques have been used extensively across numerous research fields such as medicine, biology, geography, and materials engineering, owing to rapid advances in hardware and software. Image analysis is considered a highly effective method because it can easily quantify the morphological properties, including grain size, of large quantities of sediment. Current grain-size analysis methods for sand include the reliable sieve test (KS F 2302), but they involve cumbersome procedures and take considerable time. In addition, since direct measurement of particle shape becomes more difficult as grains become finer, image-analysis-based methods have recently been attempted. In this study, images of coarse-grained bottom sediment larger than 75 ㎛ were acquired, and a program was developed that automatically measures morphological characteristics of the particles such as major- and minor-axis lengths, area, perimeter, nominal diameter, and aspect ratio. The program was built on OpenCV (Open Source Computer Vision), a library specialized for image analysis. The image-analysis procedure consists of image acquisition, geometric correction, noise removal, object extraction, and shape-factor measurement. During image acquisition, a back light was attached beneath the panel to remove shadows cast by the samples. Geometric correction applied a perspective transform, and noise removal applied morphological operations together with the watershed algorithm to separate particles clumped together by overlap. Finally, the outlines of the objects were extracted to compute various particle properties (major axis, minor axis, perimeter, area, nominal diameter, and aspect ratio), which were presented as distributions. The image-analysis-based method for measuring the morphological properties of sediment proposed in this study is expected to provide diverse information on bottom sediment more efficiently in terms of time and cost.
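The shape-factor step above can be sketched for a single particle outline given as an ordered polygon (the kind of contour `cv2.findContours` returns): area by the shoelace formula, perimeter by summed edge lengths, nominal diameter as the equal-area circle's diameter, and a simple bounding-box aspect ratio. The square contour used below is illustrative.

```python
import math

def shape_factors(contour):
    """contour: list of (x, y) vertices in order around the particle."""
    n = len(contour)
    area = abs(sum(contour[i][0] * contour[(i + 1) % n][1]
                   - contour[(i + 1) % n][0] * contour[i][1]
                   for i in range(n))) / 2            # shoelace formula
    perimeter = sum(math.dist(contour[i], contour[(i + 1) % n])
                    for i in range(n))
    nominal_d = 2 * math.sqrt(area / math.pi)         # equal-area circle
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    spans = sorted([max(xs) - min(xs), max(ys) - min(ys)])
    aspect = spans[1] / spans[0] if spans[0] else float("inf")
    return {"area": area, "perimeter": perimeter,
            "nominal_diameter": nominal_d, "aspect_ratio": aspect}
```

The OpenCV equivalents are `cv2.contourArea`, `cv2.arcLength`, and `cv2.minAreaRect` (whose rotated box gives a better major/minor axis estimate than the axis-aligned spans used in this sketch).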


Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.4
    • /
    • pp.27-39
    • /
    • 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study aims to explore an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis and the images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model has achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
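The color-feature step described above, each channel's share of the total RGB sum, can be sketched directly; such per-channel ratios could feed the CNN alongside the raw images. The pixel values below are illustrative.

```python
def rgb_ratios(pixels):
    """pixels: iterable of (r, g, b) tuples. Returns each channel's
    share of the total RGB sum across the image."""
    sums = [0, 0, 0]
    for r, g, b in pixels:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    total = sum(sums)
    return tuple(s / total for s in sums)
```

On real images the same quantity falls out of a per-channel sum over the NumPy array that OpenCV's `cv2.imread` returns (noting that OpenCV loads channels in BGR order).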

Position Detection and Gathering Swimming Control of Fish Robot Using Color Detection Algorithm (색상 검출 알고리즘을 활용한 물고기로봇의 위치인식과 군집 유영제어)

  • Akbar, Muhammad;Shin, Kyoo Jae
    • Annual Conference of KIPS
    • /
    • 2016.10a
    • /
    • pp.510-513
    • /
    • 2016
  • Detecting an object in image processing is a substantial task, but it depends on the object itself and the environment. An object can be detected either by its shape or by its color. Color is an essential feature for pattern recognition and computer vision; it is attractive because of its simplicity, its robustness to scale changes, and its usefulness for detecting object positions. Generally, the color of an object depends on the characteristics of the perceiving eye and brain; physically, objects can be said to have color because of the light leaving their surfaces. We conducted experiments in an aquarium fish tank, where fish robots of different colors mimic the natural swimming of fish. Unfortunately, in the underwater medium, colors are modified by attenuation, which makes them difficult to identify for moving objects. We treat the fish motion as a moving object and find its coordinates at every instant in the aquarium to detect the position of the fish robot using OpenCV color detection. In this paper, we propose to identify the position of each fish robot by its color and to use the position data to make the fish robots gather at one point in the fish tank through serial communication using an RF module. This was verified by a performance test of fish-robot position detection.
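The color-position step above can be sketched without the library: threshold pixels to the robot's color band and take the centroid of the matching pixels, mirroring what `cv2.inRange` followed by `cv2.moments` does in the real system. The color bounds and test image are illustrative; a real underwater setup would tune the bounds (typically in HSV space) for attenuation.

```python
def color_centroid(image, lower, upper):
    """image: 2-D grid of (r, g, b) tuples. Returns the (x, y) centroid
    of pixels whose every channel lies within [lower, upper] per channel,
    or None when no pixel matches."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            if all(lo <= c <= hi for c, lo, hi in zip(px, lower, upper)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Each robot gets its own color band, so running this once per band yields one position per robot; those positions are what the gathering controller would send over the RF link.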