• Title/Summary/Keyword: camera model (카메라 모델)


Fire-Flame Detection Using Fuzzy Logic (퍼지 로직을 이용한 화재 불꽃 감지)

  • Hwang, Hyun-Jae;Ko, Byoung-Chul
    • The KIPS Transactions:PartB / v.16B no.6 / pp.463-470 / 2009
  • In this paper, we propose an advanced fire-flame detection algorithm that uses camera images for better performance than previous sensor-based systems, which are limited to small areas. Previous camera-based approaches also depended on many heuristic thresholds or required additional computation time. To solve these problems, we use statistical values and divide the image into blocks to reduce processing time. First, candidate flame regions are detected in the captured image by a background model and fire-color models of the flame. Probability models are then formed from the change of luminance, the wavelet transform, and the change of motion along the time axis, and these are used as membership functions in fuzzy logic. Finally, the result function is produced by defuzzification, and the probability of a fire flame is estimated. The proposed system shows better performance when compared with Toreyin's method, which performs well among existing algorithms.
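
A minimal sketch of the fuzzy-logic stage described above, written in Python/NumPy. The abstract does not give the actual membership functions, block features, or rule base, so the triangular shapes, feature ranges, and centroid defuzzification below are illustrative assumptions rather than the paper's models.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def flame_probability(lum_change, wavelet_energy, motion_change):
    """Fuzzy flame probability for one image block (all inputs scaled to [0, 1])."""
    # Degree to which each block feature looks 'flame-like' (assumed shapes).
    mu_lum = tri(lum_change, 0.2, 0.7, 1.0)
    mu_wav = tri(wavelet_energy, 0.2, 0.7, 1.0)
    mu_mot = tri(motion_change, 0.2, 0.7, 1.0)

    # Single illustrative rule: flame-ness is the conjunction (min) of the three.
    strength = min(mu_lum, mu_wav, mu_mot)

    # Centroid defuzzification over an output 'fire' set clipped at the rule strength.
    y = np.linspace(0.0, 1.0, 101)
    fire_set = np.clip((y - 0.3) / 0.7, 0.0, 1.0)   # increasing ramp toward 'fire'
    out = np.minimum(fire_set, strength)
    return float((y * out).sum() / (out.sum() + 1e-9))

print(flame_probability(0.8, 0.75, 0.6))   # block that looks like flame
print(flame_probability(0.1, 0.05, 0.2))   # block that does not
```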

Robust Estimation of Hand Poses Based on Learning (학습을 이용한 손 자세의 강인한 추정)

  • Kim, Sul-Ho;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1528-1534 / 2019
  • Recently, the popularization of 3D depth cameras has created new research directions beyond work on RGB images, but estimating human hand pose is still considered a difficult problem. In this paper, we propose a robust method for estimating human hand pose from various input 3D depth images using a learning algorithm. The proposed approach first generates a skeleton-based hand model and then aligns the generated hand model with three-dimensional point-cloud data. Using a random-forest-based learning algorithm, the hand pose is then robustly estimated from the aligned hand model. Experimental results show that the proposed hierarchical approach yields robust and fast estimates of human hand posture from input depth images captured in various indoor and outdoor environments.
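
Since the abstract does not detail the features extracted from the aligned hand model or the forest configuration, the following is only a hedged sketch of the random-forest regression step, using scikit-learn with random stand-in data: depth-derived feature vectors mapped to 21 three-dimensional joint positions (both the feature layout and the joint count are assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative only: random stand-ins for depth-derived features and hand poses.
rng = np.random.default_rng(0)
n_samples, n_features, n_joint_params = 500, 64, 3 * 21   # 21 joints, (x, y, z)

X_train = rng.normal(size=(n_samples, n_features))       # e.g. block depth statistics
y_train = rng.normal(size=(n_samples, n_joint_params))   # e.g. joint coordinates

# Multi-output random forest: one regressor predicts all joint parameters at once.
forest = RandomForestRegressor(n_estimators=100, max_depth=12, random_state=0)
forest.fit(X_train, y_train)

X_new = rng.normal(size=(1, n_features))
pose = forest.predict(X_new).reshape(21, 3)               # 21 (x, y, z) joint estimates
print(pose.shape)
```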

Face Detection Method based Fusion RetinaNet using RGB-D Image (RGB-D 영상을 이용한 Fusion RetinaNet 기반 얼굴 검출 방법)

  • Nam, Eun-Jeong;Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.519-525 / 2022
  • Face detection, the task of locating a person's face in an image, is used as a preprocessing or core step in various image-processing applications. Neural-network models, which have recently performed well with the development of deep learning, depend on 2D images, so when an image is degraded, for example by poor camera quality or poor focus on the face, the face may not be detected properly. In this paper, we propose a face detection method that additionally uses depth information to reduce this dependence on 2D images. The proposed model was trained after generating and preprocessing depth information in advance for a face detection dataset; as a result, the FRN model reached 89.16% accuracy, about 1.2 percentage points better than the RetinaNet model, which reached 87.95%.
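
The abstract does not describe how the FRN model combines the two modalities, so the following PyTorch sketch only illustrates a generic two-stream RGB-D fusion pattern (separate RGB and depth convolutions, concatenated feature maps, a small detection head); the layer sizes and head are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyFusionBackbone(nn.Module):
    """Toy two-stream backbone: separate RGB and depth convolutions whose feature
    maps are concatenated before a detection head. A generic RGB-D fusion pattern,
    not the paper's actual FRN architecture."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_stream = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fused features feed a (here, single-level) detection head.
        self.head = nn.Conv2d(64, 4 + 1, 1)   # 4 box offsets + 1 face score per cell

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.head(f)

model = TinyFusionBackbone()
rgb = torch.randn(2, 3, 128, 128)
depth = torch.randn(2, 1, 128, 128)
print(model(rgb, depth).shape)   # torch.Size([2, 5, 32, 32])
```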

CNN3D-Based Bus Passenger Prediction Model Using Skeleton Keypoints (Skeleton Keypoints를 활용한 CNN3D 기반의 버스 승객 승하차 예측모델)

  • Jang, Jin;Kim, Soo Hyung
    • Smart Media Journal / v.11 no.3 / pp.90-101 / 2022
  • Buses are a popular means of transportation, so thorough preparation is needed for passenger safety management. However, existing safety systems are insufficient; for example, in 2018 a fatal accident occurred when a bus departed without recognizing an elderly passenger approaching to board. There are safety systems that prevent pinching accidents through sensors on the rear-door steps, but they do not prevent accidents that occur while passengers are boarding or alighting, as in the case above. If the boarding and alighting intentions of bus passengers could be predicted, it would help in developing a safety system to prevent such accidents, yet studies that predict these intentions are scarce. Therefore, in this paper, we propose a 1×1 CNN3D-based boarding/alighting intention prediction model that uses passengers' skeleton keypoints, extracted with UDP-Pose from the images of a camera mounted on the bus. The proposed model shows approximately 1~2% higher accuracy than RNN and LSTM models in predicting passengers' boarding and alighting intentions.
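
As a hedged illustration of the model family named in the abstract, the sketch below applies 1×1 3D convolutions over a sequence of skeleton keypoints in PyTorch; the tensor layout (channels = x, y, confidence; 16 frames; 17 joints) and the layer sizes are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class KeypointConv3D(nn.Module):
    """Toy 3D-CNN classifier over a sequence of skeleton keypoints. The input
    layout (batch, channels=x/y/conf, frames, joints, 1) and the 1x1x1 kernels
    are illustrative assumptions about the paper's model."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, n_classes)   # e.g. board / alight

    def forward(self, x):
        z = self.net(x).flatten(1)
        return self.fc(z)

model = KeypointConv3D()
# 8 clips, 3 channels (x, y, confidence), 16 frames, 17 joints.
clips = torch.randn(8, 3, 16, 17, 1)
print(model(clips).shape)   # torch.Size([8, 2])
```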

Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network (합성곱 신경망을 적용한 Optical Camera Communication 시스템 성능 분석)

  • Jong-In Kim;Hyun-Sun Park;Jung-Hyun Kim
    • Smart Media Journal / v.12 no.3 / pp.49-59 / 2023
  • Optical Camera Communication (OCC), regarded as a next-generation wireless communication technology, is currently under extensive research. The performance of OCC is affected by the communication environment, and various strategies are being studied to improve it. Among them, the most prominent is applying convolutional neural networks (CNNs) to the OCC receiver using deep learning. However, in most studies the CNN is used only to detect the transmitter. In this paper, we experiment with applying a convolutional neural network not only to transmitter detection but also to the receiver (Rx) demodulation system. We hypothesize that, since the data images of an OCC system are relatively simple to classify compared with other image datasets, most CNN models will achieve high accuracy. To test this hypothesis, we designed and implemented an OCC system to collect data and applied the data to 12 different CNN models. The experimental results show that not only high-capacity CNN models with many parameters but also lightweight CNN models achieved an accuracy of over 99%. This confirms the feasibility of running the OCC system in real time on mobile devices such as smartphones.
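
The 12 CNN models and the dataset format are not described in the abstract, so the following is only a minimal sketch of the receiver-side idea: a lightweight CNN that classifies a cropped transmitter image into one of N symbols. The crop size and symbol count are assumptions.

```python
import torch
import torch.nn as nn

class OCCDemodulator(nn.Module):
    """Minimal CNN demodulator sketch: classifies a cropped LED-array image
    into one of n_symbols. Input size and symbol count are assumptions."""

    def __init__(self, n_symbols=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_symbols)

    def forward(self, x):                 # x: (batch, 3, 32, 32) crops
        return self.classifier(self.features(x).flatten(1))

model = OCCDemodulator()
crops = torch.randn(4, 3, 32, 32)        # crops around the detected transmitter
symbols = model(crops).argmax(dim=1)     # predicted symbol index per crop
print(symbols)
```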

Performance Comparison for Exercise Motion Classification Using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Nam Rye Son;Min A Jung
    • Smart Media Journal / v.12 no.7 / pp.59-67 / 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been actively conducted. In particular, OpenPose, open-source software developed by CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying the four exercise movements most commonly performed by gym users - Squat, Walk, Wave, and Fall-down - using OpenPose-based deep learning models, a DNN and a CNN. The training data are collected by capturing users' movements from recorded videos and real-time camera capture. The collected dataset is preprocessed using OpenPose, and the preprocessed dataset is then used to train the proposed DNN and CNN models for exercise-movement classification. The prediction errors of the proposed models are evaluated using MSE, RMSE, and MAE. The evaluation results show that the proposed DNN model outperforms the proposed CNN model.
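
A hedged sketch of the DNN variant: a fully connected classifier over flattened OpenPose keypoints, assuming the BODY_25 keypoint set (25 joints × x, y) and the four classes listed in the abstract; the layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Fully connected classifier over a flattened OpenPose keypoint vector.
# 25 BODY_25 joints x (x, y) and four classes follow the abstract; hidden sizes are assumed.
dnn = nn.Sequential(
    nn.Linear(25 * 2, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 4),
)

keypoints = torch.randn(32, 50)     # a batch of normalized keypoint vectors
logits = dnn(keypoints)
pred = logits.argmax(dim=1)         # 0=Squat, 1=Walk, 2=Wave, 3=Fall-down (example mapping)
print(pred.shape)
```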

Visual Tracking Technique Based on Projective Modular Active Shape Model (투영적 모듈화 능동 형태 모델에 기반한 영상 추적 기법)

  • Kim, Won
    • Journal of the Korea Society for Simulation / v.18 no.2 / pp.77-89 / 2009
  • Visual tracking is an essential technique in many fields of modern society. Although contour tracking is attractive for its fast performance using the target's external contour information, it sometimes fails to track the target's motion because it is affected by surrounding edges near the target and by weak edges on the target boundary. To overcome these weaknesses, this work suggests that PDMs (point distribution models) can be obtained by generating virtual 6-DOF motions of a mobile robot equipped with a CCD camera, and that an image tracking system robust to local minima around the target can be configured by constructing the Active Shape Model in a modular fashion. To show the effectiveness of the proposed method, experiments are performed on an image stream obtained by a real mobile robot, and better performance is confirmed by comparing the experimental results with those of other major tracking techniques.
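
The projective and modular aspects of the proposed model are specific to the paper, but the underlying point distribution model (PDM) is standard: principal component analysis over aligned contour point sets, which here could be rendered from virtual 6-DOF camera motions. A minimal NumPy sketch of that generic PDM construction follows (the ellipse training shapes are placeholders):

```python
import numpy as np

def build_pdm(shapes, n_modes=5):
    """Build a point distribution model from aligned contours.

    shapes: (n_samples, 2 * n_points) array of flattened (x, y) contour points.
    Returns the mean shape, the principal deformation modes, and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal components of the shape covariance give the deformation modes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes], (s[:n_modes] ** 2) / (len(shapes) - 1)

def synthesize(mean, modes, b):
    """Generate a shape from mode weights b (clipped in practice to +/- 3 std devs)."""
    return mean + b @ modes

# Placeholder training data: 100 noisy ellipse contours with 30 points each.
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
shapes = np.stack([
    np.concatenate([np.cos(t) * (1 + 0.1 * np.random.randn()),
                    np.sin(t) * (0.5 + 0.1 * np.random.randn())])
    for _ in range(100)
])
mean, modes, variances = build_pdm(shapes)
print(modes.shape, variances[:3])
```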

Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Kim, Seung-Yong;Hwang, In-Cheol;Kim, Dong-Sik
    • Proceedings of the Korean Society of Disaster Information Conference / 2023.11a / pp.278-281 / 2023
  • Purpose: This study aims to improve the recognition rate of APC (Auto People Counting), which, when a disaster occurs in a disaster-vulnerable facility such as a nursing hospital, accurately identifies the occupants remaining inside and provides that information to response agencies such as the fire service. Currently, when a disaster occurs, response agencies determine the status of occupants by arriving at the scene and asking building staff directly. This information may be inaccurate, which broadens the response agencies' workload and can endanger the safety of rescuers. APC automatically counts people entering and leaving a building and provides real-time information on the remaining occupants, so the status of people needing rescue can be determined accurately during a disaster. The specific goal of this study is to select the optimal artificial-intelligence algorithm so that APC can count entries and exits more accurately. Method: To improve the algorithm that recognizes images of people entering and leaving through the cameras of APC units installed and operating in actual disaster-vulnerable facilities, baseline modeling was performed with CNN models. The performance of various algorithms was analyzed, the top seven candidates were selected, and the best-performing algorithm was chosen using transfer-learning models. Results: Examining the precision and recall of the DenseNet201 and ResNet152V2 models, which showed the best time and performance, both achieved 100% accuracy on all labels, with DenseNet201 performing better overall. Conclusion: The optimal algorithm applicable to APC was selected from among various AI algorithms; improving APC's recognition rate in this way will allow the status of people needing rescue to be identified accurately during a disaster, enabling rapid and safe rescue operations. This is expected to contribute not only to the safe rescue of occupants but also to the safety of the rescuers performing the operation. Further research on algorithm analysis and training is needed so that the number of people entering and leaving disaster-vulnerable facilities can be determined accurately in various disaster conditions such as smoke.

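A hedged sketch of the transfer-learning step named in the abstract, using a pretrained DenseNet201 backbone from torchvision with its classifier replaced; the two entry/exit classes and the frozen-backbone choice are assumptions, and the study's actual training setup is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained DenseNet201 backbone with a new head for the APC entry/exit classes.
# The number of classes and the frozen-backbone choice are assumptions.
backbone = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
for p in backbone.parameters():          # freeze the ImageNet features
    p.requires_grad = False
backbone.classifier = nn.Linear(backbone.classifier.in_features, 2)  # e.g. entering / leaving

images = torch.randn(4, 3, 224, 224)     # person crops from the APC camera
logits = backbone(images)
print(logits.shape)                      # torch.Size([4, 2])
```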

Mobile App for Detecting Canine Skin Diseases Using U-Net Image Segmentation (U-Net 기반 이미지 분할 및 병변 영역 식별을 활용한 반려견 피부질환 검출 모바일 앱)

  • Bo Kyeong Kim;Jae Yeon Byun;Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems / v.29 no.4 / pp.25-34 / 2024
  • This paper presents the development of a mobile application that detects and identifies canine skin diseases by training a deep learning-based U-Net model to infer the presence and location of skin lesions from images. U-Net, primarily used in medical imaging for image segmentation, is effective in distinguishing specific regions of an image in a polygonal form, making it suitable for identifying lesion areas in dogs. In this study, six major canine skin diseases were defined as classes, and the U-Net model was trained to differentiate among them. The model was then implemented in a mobile app, allowing users to perform lesion analysis and prediction through simple camera shots, with the results provided directly to the user. This enables pet owners to monitor the health of their pets and obtain information that aids in early diagnosis. By providing a quick and accurate diagnostic tool for pet health management through deep learning, this study emphasizes the significance of developing an easily accessible service for home use.
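
As a hedged illustration of the segmentation model described above, the sketch below defines a toy single-level U-Net in PyTorch that outputs a per-pixel class map over background plus six disease classes; the depth, channel widths, and input size are simplified assumptions, not the app's deployed model.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net with one down/up level, predicting a per-pixel class map over
    background + 6 skin-disease classes (per the abstract). Sizes are assumptions."""

    def __init__(self, n_classes=7):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        e = self.enc(x)
        d = self.down(e)
        u = self.up(d)
        # Skip connection: concatenate encoder features with the upsampled ones.
        return self.dec(torch.cat([u, e], dim=1))

model = TinyUNet()
photo = torch.randn(1, 3, 256, 256)   # a camera shot of the skin area
mask = model(photo).argmax(dim=1)     # per-pixel predicted class
print(mask.shape)                     # torch.Size([1, 256, 256])
```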

The Hand Region Acquisition System for Gesture-based Interface (제스처 기반 인터페이스를 위한 손영역 획득 시스템)

  • 양선옥;고일주;최형일
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.4 / pp.43-52 / 1998
  • We extract a hand region using color information, an important feature that human vision uses to distinguish objects. Because pixel values in images change with the luminance and the light source, it is difficult to extract a hand region exactly without prior knowledge. We generate a hand skin-color model in a learning stage and then extract a hand region from images using this model. We also use a Kalman filter to account for changes of the pixel values in the hand skin-color model; the Kalman filter additionally restricts the search area for extracting the hand region in the next frame. The validity of the proposed method is demonstrated by implementing the hand-region acquisition module.

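A hedged NumPy sketch of the two ideas in the abstract: pixels are classified against a learned skin-color model, and a Kalman filter on the region centroid predicts where to search in the next frame. The Gaussian color model, its parameters, and the constant-velocity state are illustrative assumptions.

```python
import numpy as np

def skin_mask(frame, mean, inv_cov, thresh=9.0):
    """Mark pixels whose Mahalanobis distance to the skin-color mean is small."""
    diff = frame.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < thresh).reshape(frame.shape[:2])

# Constant-velocity Kalman filter over the centroid state [x, y, vx, vy].
F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q, R = np.eye(4) * 1e-2, np.eye(2) * 1.0
x, P = np.zeros(4), np.eye(4) * 10.0

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q          # predict the next centroid
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # correct with the measured centroid
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Fake frames: the predicted centroid x[:2] would define the next search window.
rng = np.random.default_rng(0)
skin_mean, skin_inv_cov = np.array([180., 120., 100.]), np.eye(3) / 400.0
for t in range(5):
    frame = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    mask = skin_mask(frame, skin_mean, skin_inv_cov)
    ys, xs = np.nonzero(mask)
    if len(xs):                            # measured centroid of skin-colored pixels
        x, P = kalman_step(x, P, np.array([xs.mean(), ys.mean()]))
print(x[:2])                               # predicted hand-region center
```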