• Title/Summary/Keyword: RGB camera

Search results: 320

Intrusion Detection Algorithm based on Motion Information in Video Sequence (비디오 시퀀스에서 움직임 정보를 이용한 침입탐지 알고리즘)

  • Kim, Alla;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology / v.14 no.2 / pp.284-288 / 2010
  • Video surveillance is widely used in establishing societal security networks. In this paper, intrusion detection based on visual information acquired by a static camera is proposed. The proposed approach uses a background model constructed by an approximated median filter (AMF) to find foreground candidates, and detected objects are characterized by analyzing motion information. Motion detection is determined by the relative size of the 2D object in RGB space; finally, the threshold value for detecting objects is determined heuristically. Experimental results showed that intrusion detection performs better when the spatio-temporal candidate information changes abruptly.
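
A minimal sketch of the approximated-median-filter background model and foreground thresholding described above, simplified to grayscale; the video path and the threshold value are placeholders, not the paper's heuristic choice:

```python
# AMF background modeling sketch: the background estimate moves one gray
# level toward each new frame, and foreground candidates are pixels that
# deviate from it by more than a threshold.
import cv2
import numpy as np

THRESHOLD = 30  # hypothetical stand-in for the paper's heuristic threshold

cap = cv2.VideoCapture("surveillance.avi")  # assumed input video
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.int16)
    # AMF update: step each background pixel toward the current frame.
    background += np.sign(gray - background)
    # Foreground candidate: pixels far from the background model.
    foreground = (np.abs(gray - background) > THRESHOLD).astype(np.uint8) * 255
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```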

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1856-1869 / 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured-feature framework. Dense depth images are first captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion information, and compute the centroid of each activity through a chain-coding mechanism and centroid point extraction. For body skin joint features, we estimate human body skin color to identify body parts (i.e., head, hands, and feet) from which joint point information is extracted. These joint points are further processed in a feature extraction step that includes distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
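
The final recognition stage uses self-organizing maps; below is a minimal NumPy SOM sketch under an assumed grid size and training schedule, with random stand-in vectors rather than the paper's skin-joint features:

```python
# Minimal self-organizing map: each sample pulls its best-matching unit and
# that unit's grid neighbors toward itself, with shrinking rate and radius.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=50, lr0=0.5, sigma0=2.0):
    n_units = grid[0] * grid[1]
    w = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, for the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # squared grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))              # neighborhood weights
            w += lr * h[:, None] * (x - w)
    return w

features = rng.normal(size=(200, 10))  # stand-in joint-feature vectors
som_weights = train_som(features)
```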

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the CNN output is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
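
A hedged sketch of the pose-regression idea in PyTorch: a small CNN over an assumed 4-channel RGB + range input that regresses a translation and a unit quaternion. The architecture and channel layout are illustrative, not the authors' exact network:

```python
# Pose-regression CNN sketch: convolutional trunk, then two linear heads
# for translation (x, y, z) and rotation (normalized quaternion).
import torch
import torch.nn as nn

class PoseNetLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_t = nn.Linear(128, 3)  # translation head
        self.fc_q = nn.Linear(128, 4)  # rotation head (quaternion)

    def forward(self, x):
        h = self.features(x).flatten(1)
        t = self.fc_t(h)
        q = self.fc_q(h)
        return t, q / q.norm(dim=1, keepdim=True)  # enforce unit quaternion

x = torch.randn(2, 4, 224, 224)  # batch of stacked RGB + range images
t, q = PoseNetLike()(x)
```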

All-In-One Observing Software for Small Telescope

  • Han, Jimin;Pak, Soojong;Ji, Tae-Geun;Lee, Hye-In;Byeon, Seoyeon;Ahn, Hojae;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society / v.43 no.2 / pp.57.2-57.2 / 2018
  • In astronomical observation, sequential device control and real-time data processing are important for maximizing observing efficiency. We have developed a series of automatic observing software packages (KAOS, KHU Automatic Observing Software), e.g. KAOS30 for the 30 inch telescope at the McDonald Observatory and KAOS76 for the 76 cm telescope at the KHAO. The series consists of four packages: the DAP (Data Acquisition Package) for CCD camera control, the TCP (Telescope Control Package) for telescope control, the AFP (Auto Focus Package) for focusing, and the SMP (Script Mode Package) for automation of sequences. In this poster, we introduce KAOS10, which is being developed to control a small telescope with an aperture of around 10 cm. The hardware components are the QHY8pro CCD, the QHY5-II CMOS, the iOptron CEM 25 mount, and the Stellarvue SV102ED telescope. The devices are controlled on the ASCOM Platform. In addition to the previous packages (DAP, SMP, TCP), KAOS10 has a QLP (Quick Look Package) and an astrometry function in the TCP. The QHY8pro CCD has an RGB Bayer matrix, and the QLP transforms RGB images into BVR images in real time. The astrometry function in the TCP adjusts the telescope position by comparing the image with a star catalog. In the future, we expect KAOS10 to be used in research on transient objects such as variable stars.
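
A sketch of what the QLP step might look like: demosaicing the Bayer frame to RGB, then applying a linear RGB-to-BVR transform. The Bayer pattern and the matrix coefficients below are placeholders; a real transform would have to be calibrated against standard stars:

```python
# Quick-look sketch: Bayer demosaic followed by a hypothetical linear map
# from camera RGB to approximate Johnson-Cousins B, V, R bands.
import cv2
import numpy as np

# Stand-in raw frame; a real QHY8pro frame would be read from the camera.
raw = np.random.randint(0, 65535, (480, 640), dtype=np.uint16)
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)  # Bayer pattern assumed RGGB

# Hypothetical 3x3 RGB -> BVR matrix (rows: B, V, R; columns: R, G, B).
M = np.array([[0.1, 0.3, 1.0],
              [0.1, 1.0, 0.2],
              [1.0, 0.2, 0.0]])
bvr = (rgb.reshape(-1, 3).astype(np.float64) @ M.T).reshape(rgb.shape)
```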

Dog Activities Recognition System using Dog-centered Cropped Images (반려견에 초점을 맞춰 추출하는 영상 기반의 행동 탐지 시스템)

  • Othmane Atif;Jonguk Lee;Daihee Park;Yongwha Chung
    • Annual Conference of KIPS / 2023.05a / pp.615-617 / 2023
  • In recent years, the growing popularity of dogs, owing to the benefits they bring their owners, has contributed to an increase in the number of dogs being raised. Owners are responsible for ensuring their dogs' health and safety, but it is challenging for them to continuously monitor their dogs' activities, which is important for understanding and guaranteeing their wellbeing. In this work, we introduce a camera-based monitoring system that helps owners automatically monitor their dogs' activities. The system receives sequences of RGB images and uses YOLOv7 to detect the dog's presence, then applies post-processing to perform dog-centered image cropping on each input sequence. Optical flow is extracted from each sequence, and both the RGB and flow sequences are input to a two-stream EfficientNet to extract their respective features. Finally, the features are concatenated, and a bi-directional LSTM is used to retrieve temporal features and recognize the activity. The experiments show that our system achieves good performance, with the F-1 score exceeding 0.90 for all activities and reaching 0.963 on average.
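
A sketch of two steps from this pipeline, dog-centered cropping from a detector box and dense optical flow between consecutive crops, with YOLOv7 abstracted away as a supplier of bounding boxes:

```python
# Dog-centered cropping + Farneback optical flow sketch. Box coordinates
# and frame sizes are assumptions; a detector would supply the boxes.
import cv2
import numpy as np

def center_crop(frame, box, size=224):
    """Crop a size x size window centered on the detected dog box."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    h, w = frame.shape[:2]
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

def flow_between(prev_crop, next_crop):
    """Dense optical flow between two consecutive RGB crops."""
    g1 = cv2.cvtColor(prev_crop, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(next_crop, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

# Stand-in frames and boxes to exercise the two steps.
f1 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
f2 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
c1, c2 = center_crop(f1, (100, 80, 300, 260)), center_crop(f2, (104, 82, 304, 262))
flow = flow_between(c1, c2)  # (224, 224, 2) dx/dy field
```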

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting dangerous behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. With two-dimensional data alone, the recognition rate for behavior in three-dimensional space was low, while the other methods suffered from complicated equipment configurations and expensive additional hardware. In this paper, we propose a method for recognizing human behavior using only RGB information from CCTV images, without additional equipment. First, a skeleton extraction algorithm is applied to extract the points of joints and body parts. We then apply transformation equations to produce feature vectors, including displacement vectors and relational vectors, and train an RNN model on the continuous vector data. Applying the trained model to various datasets and measuring recognition accuracy confirms that 2D information alone achieves performance similar to that of existing algorithms using 3D information.
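
A sketch of the two vector types named above, displacement vectors between consecutive frames and relational vectors between joint pairs; the 15-joint layout and the pair choices are assumptions, not the paper's definitions:

```python
# Skeleton feature sketch: per-joint motion across frames plus
# within-frame vectors between selected joint pairs.
import numpy as np

def displacement_vectors(joints_t, joints_prev):
    """Per-joint motion between consecutive frames; joints are (N, 2) arrays."""
    return joints_t - joints_prev

def relational_vectors(joints, pairs=((0, 1), (1, 2), (2, 3))):
    """Vectors between selected joint pairs in one frame (hypothetical pairs)."""
    return np.stack([joints[j] - joints[i] for i, j in pairs])

# Example with random 15-joint 2D skeletons for two consecutive frames;
# the concatenated vector would be one timestep of the RNN input sequence.
prev, cur = np.random.rand(15, 2), np.random.rand(15, 2)
feat = np.concatenate([displacement_vectors(cur, prev).ravel(),
                       relational_vectors(cur).ravel()])
```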

Night Time Leading Vehicle Detection Using Statistical Feature Based SVM (통계적 특징 기반 SVM을 이용한 야간 전방 차량 검출 기법)

  • Joung, Jung-Eun;Kim, Hyun-Koo;Park, Ju-Hyun;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.4 / pp.163-172 / 2012
  • A driver assistance system is critical to improving the convenience and safety of driving. Several systems have already been commercialized, such as adaptive cruise control and forward collision warning. Efficient vehicle detection is very important to improving such driver assistance systems. Most existing vehicle detection systems are based on radar, which measures the distance between the host and leading (or oncoming) vehicles under various weather conditions. However, radar requires high deployment cost and becomes overly complex when many vehicles are present. Camera-based vehicle detection is a good alternative because of its low cost and simple implementation. In general, nighttime vehicle detection is more complicated than daytime detection, because vehicle features such as outline and color are much harder to distinguish in dim environments. This paper proposes a method for detecting vehicles at night by analyzing the captured color space while reducing the influence of reflections and other light sources in the images. Four color spaces, namely RGB, YCbCr, normalized RGB, and Ruta-RGB, are compared and evaluated. A suboptimal threshold value is determined by the Otsu algorithm and applied to extract candidate taillights of leading vehicles. Statistical features such as mean, variance, skewness, kurtosis, and entropy are extracted from the candidate regions and used as the feature vector for an SVM (Support Vector Machine) classifier. Our simulation results show that the proposed statistical-feature-based SVM provides relatively high leading-vehicle detection performance at various distances in varying nighttime environments.
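
A sketch of the thresholding and classification stages: Otsu on a single channel to find taillight candidates, the five statistical features listed above, and an SVM. Region extraction is simplified to one channel, and the training data here is random stand-in data:

```python
# Otsu candidate extraction + statistical features + SVM classifier sketch.
import cv2
import numpy as np
from scipy.stats import skew, kurtosis, entropy
from sklearn.svm import SVC

def candidate_mask(bgr):
    """Otsu threshold on the red channel to find bright taillight candidates."""
    red = bgr[:, :, 2]
    _, mask = cv2.threshold(red, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def region_features(patch):
    """Mean, variance, skewness, kurtosis, and entropy of a candidate patch."""
    p = patch.astype(np.float64).ravel()
    hist, _ = np.histogram(p, bins=32, density=True)
    return [p.mean(), p.var(), skew(p), kurtosis(p), entropy(hist + 1e-12)]

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in frame
mask = candidate_mask(frame)
feats = region_features(frame[:50, :50, 2])  # features of one candidate region

# Hypothetical training data: feature vectors with vehicle/non-vehicle labels.
X = np.random.rand(40, 5)
y = np.random.randint(0, 2, 40)
clf = SVC(kernel="rbf").fit(X, y)
```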

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.4 / pp.27-39 / 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study aims to explore an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis and the images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model has achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
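
A minimal sketch of the RGB distribution analysis described above, computing per-channel intensity ratios with OpenCV; the random image stands in for a studio photograph of a sieved soil sample:

```python
# Per-channel RGB ratio sketch for one soil image.
import cv2
import numpy as np

# A real sample would be loaded with cv2.imread("soil_sample.jpg") (BGR order);
# a random stand-in keeps the sketch self-contained.
img = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)

b, g, r = cv2.split(img.astype(np.float64))
total = b.sum() + g.sum() + r.sum()
ratios = {"R": r.sum() / total, "G": g.sum() / total, "B": b.sum() / total}
print(ratios)  # color distribution ratios used as analysis features
```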

Illuminant Color Estimation Method Using Valuable Pixels (중요 화소들을 이용한 광원의 색 추정 방법)

  • Kim, Young-Woo;Lee, Moon-Hyun;Park, Jong-Il
    • Journal of Broadcast Engineering / v.18 no.1 / pp.21-30 / 2013
  • An unknown light source poses a challenge for most image processing tasks, so the color of the light source must be estimated in order to compensate for color changes. Estimating the color of the light source requires an additional assumption; we therefore assume a color distribution that depends on the light source. If pixels that do not satisfy the assumption are used, the estimate becomes inaccurate. The most popular color distribution assumption is the Grey-World Assumption (GWA): in each scene, the surface reflectance averages to gray (an achromatic color) over the entire image. In this paper, we analyze the characteristics of the camera response function and the effect of the Grey-World Assumption on pixel values and chromaticity, based on the inherent characteristics of the light source. We also propose a novel method that detects the pixels important for estimating the color of the light source. Our method first weights pixels that satisfy the assumption, then applies a modified max-RGB method to the weighted pixels: the maximum-weighted pixels in the column and row directions of each channel are detected. The performance of our method is verified through demonstrations in several real scenes; the proposed method estimates the color of the light source more accurately than previous methods.
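
For context, a sketch of the two classic baseline estimators this paper builds on, Grey-World and max-RGB; the paper's pixel weighting and modified max-RGB are not reproduced here:

```python
# Baseline illuminant-color estimators: Grey-World (per-channel mean)
# and max-RGB (per-channel maximum), each normalized to a unit vector.
import numpy as np

def grey_world(rgb):
    """Illuminant estimate under the Grey-World Assumption."""
    e = rgb.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def max_rgb(rgb):
    """Illuminant estimate as the per-channel maximum response."""
    e = rgb.reshape(-1, 3).max(axis=0)
    return e / np.linalg.norm(e)

img = np.random.rand(480, 640, 3)  # stand-in linear RGB image
print(grey_world(img), max_rgb(img))
```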

Assessment of Fire-Damaged Mortar using Color image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min;Lee, Byung-Do;Yoo, Sung-Hun;Ham, Nam-Hyuk;Roh, Young-Sook
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.23 no.3 / pp.83-91 / 2019
  • The purpose of this study is to assess fire-damaged concrete structures using a digital camera and image processing software. To simulate fire damage, mortar and paste samples with W/C = 0.5 (normal strength) and 0.3 (high strength) were heated in an electric furnace from 100°C to 1000°C. The paste was processed into a powder to measure CIELAB chromaticity, and the samples were photographed with a digital camera. The RGB chromaticity was measured with color intensity analyzer software. At a heating temperature of 400°C, the residual compressive strength of the W/C = 0.5 and 0.3 samples was 87.2% and 86.7%, respectively. Above 500°C, however, strength dropped sharply, with residual compressive strengths of 55.2% and 51.9%. At 700°C or higher, W/C = 0.5 and W/C = 0.3 retain only 26.3% and 27.8% of their strength, so the durability of the structure can no longer be secured. The L*a*b* color analysis shows that b* increases rapidly after 700°C, indicating that the yellow component becomes strong. Further, the RGB analysis found that the histogram kurtosis and frequency of red and green increase after 700°C, meaning the number of red and green pixels increases. Therefore, it appears possible to estimate the degree of damage by checking the change in yellow (b* or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
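
A sketch of the two chromaticity measurements used above: reading b* from an OpenCV CIELAB conversion and computing red/green histogram kurtosis. The random image stands in for a photograph of a heated mortar sample:

```python
# CIELAB b* and RGB histogram kurtosis sketch for fire-damage assessment.
import cv2
import numpy as np
from scipy.stats import kurtosis

img = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)  # stand-in photo

lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
b_star = lab[:, :, 2].astype(np.float64) - 128  # 8-bit OpenCV stores b* offset by 128
print("mean b*:", b_star.mean())  # rises with the yellow component after heating

red, green = img[:, :, 2].ravel(), img[:, :, 1].ravel()
print("kurtosis R:", kurtosis(red), "kurtosis G:", kurtosis(green))
```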