• Title/Summary/Keyword: Audio-Vision Fusion


Intelligent User Pattern Recognition based on Vision, Audio and Activity for Abnormal Event Detections of Single Households

  • Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information, v.24 no.5, pp.59-66, 2019
  • According to KT telecommunication statistics, people stay inside their homes an average of 11.9 hours a day, and according to NSC statistics in the United States, people of all ages are injured at home for a variety of reasons. In this research, we investigate an abnormal event detection algorithm that classifies infrequently occurring behaviors in daily life, such as accidents and health emergencies. We propose a fusion method that combines three classification algorithms, based on vision, audio, and activity patterns, to detect unusual user events (a sketch of the fusion step follows this entry). The vision pattern algorithm identifies people and objects in video data collected through home CCTV. The audio and activity pattern algorithms classify user sounds and movements using data collected from built-in smartphone sensors in the home. We evaluated the individual pattern algorithms and the fusion method on multiple scenarios.
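
A minimal sketch of the decision-level (late) fusion step this abstract describes: each pattern algorithm emits a probability that the current event is abnormal, and a weighted average is thresholded. The weights, threshold, and class probabilities below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: weighted late fusion of vision, audio, and activity
# classifiers. Weights and threshold are assumed for illustration.
from dataclasses import dataclass

@dataclass
class ModalityScore:
    name: str
    prob_abnormal: float  # classifier's probability the event is abnormal
    weight: float         # assumed reliability weight for this modality

def fuse(scores: list[ModalityScore], threshold: float = 0.5) -> bool:
    """Flag an abnormal event when the weighted-average probability
    across modalities exceeds the threshold."""
    total = sum(s.weight for s in scores)
    combined = sum(s.prob_abnormal * s.weight for s in scores) / total
    return combined >= threshold

# Example: vision is fairly sure, audio is uncertain, activity agrees.
print(fuse([ModalityScore("vision", 0.82, 0.5),
            ModalityScore("audio", 0.40, 0.2),
            ModalityScore("activity", 0.71, 0.3)]))  # True
```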

Audio-Visual Fusion for Sound Source Localization and Improved Attention

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.35 no.7, pp.737-743, 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings likewise depend mainly on visual and auditory information in their daily lives. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection (a sketch of this attention loop follows).
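
A minimal sketch of the attention loop this abstract describes: estimate the sound source azimuth (here from a two-microphone time difference of arrival, a common approach the paper does not necessarily use), rotate toward it if it lies outside the camera's field of view, and otherwise confirm a person with face detection. All constants and function names are assumptions.

```python
# Hedged sketch: a sound-direction estimate steers attention; face detection
# confirms a person. Geometry and thresholds are illustrative assumptions.
import math

CAMERA_FOV_DEG = 60.0  # assumed horizontal field of view

def sound_azimuth_deg(tdoa_s: float, mic_spacing_m: float = 0.2,
                      speed_of_sound: float = 343.0) -> float:
    """Azimuth from the time difference of arrival between two microphones
    (far-field assumption)."""
    x = max(-1.0, min(1.0, tdoa_s * speed_of_sound / mic_spacing_m))
    return math.degrees(math.asin(x))

def attend(tdoa_s: float, detect_faces) -> str:
    azimuth = sound_azimuth_deg(tdoa_s)
    if abs(azimuth) > CAMERA_FOV_DEG / 2:
        return f"rotate {azimuth:+.1f} deg toward sound, then re-check"
    faces = detect_faces()  # e.g., an injected OpenCV face detector
    return "person confirmed" if faces else "sound only; keep searching"

# A source about 20 deg off-center is inside the assumed FOV, so faces decide:
print(attend(0.0002, detect_faces=lambda: [(120, 80, 64, 64)]))
```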

Intelligent Abnormal Event Detection Algorithm for Single Households at Home via Daily Audio and Vision Patterns

  • Jung, Juho;Ahn, Junho
    • Journal of Internet Computing and Services, v.20 no.1, pp.77-86, 2019
  • As the number of single-person households increases, it becomes difficult for a person living alone to call for help when severely injured at home. This paper detects abnormal events in which a member of a single-person household is seriously injured at home. It proposes a vision detection algorithm that analyzes and recognizes patterns in video collected from home CCTV, and an audio detection algorithm that analyzes and recognizes patterns in the sounds occurring in the household, collected via smartphone. Each algorithm on its own has shortcomings and struggles to detect situations such as serious injuries over a wide area, so we propose a fusion method that effectively combines the two (one such combination is sketched below). We evaluated the detection performance of each individual algorithm and the precise detection performance of the proposed fusion method.
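
One way to read the complementarity argument above is as OR-style decision fusion: audio covers the whole house while CCTV has blind spots, so the fused detector fires when either modality is confident on its own, or when both agree moderately. The thresholds are illustrative assumptions, not the paper's rule.

```python
# Hedged sketch: complementary OR-fusion of a vision and an audio detector.
HIGH, MODERATE = 0.85, 0.55  # assumed confidence thresholds

def fused_alert(vision_conf: float, audio_conf: float) -> bool:
    if vision_conf >= HIGH or audio_conf >= HIGH:
        return True  # one modality alone is decisive
    return vision_conf >= MODERATE and audio_conf >= MODERATE  # agreement

# A fall heard clearly but out of camera view still raises an alert:
print(fused_alert(vision_conf=0.10, audio_conf=0.90))  # True
```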

Deep Learning-Based User Emergency Event Detection Algorithms Fusing Vision, Audio, Activity and Dust Sensors

  • Jung, Ju-ho;Lee, Do-hyun;Kim, Seong-su;Ahn, Jun-ho
    • Journal of Internet Computing and Services, v.21 no.5, pp.109-118, 2020
  • Recently, people have been spending much of their time at home because of various diseases. It is difficult for a member of a single-person household who is injured at home, or infected with a disease and in need of care, to ask others for help. This study proposes an algorithm that detects emergency events: situations in the home, such as injuries or infections, in which a single-person household needs outside help. It proposes vision pattern detection using home CCTV, audio pattern detection using artificial intelligence speakers, activity pattern detection using smartphone acceleration sensors, and dust pattern detection using air purifiers. For homes where CCTV is difficult to use because of security concerns, it proposes a fusion method that combines only the audio, activity, and dust pattern sensors (sketched below). For each algorithm, data were collected through YouTube and experiments to measure accuracy.
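
A minimal sketch of the fallback described above: when home CCTV cannot be used for security or privacy reasons, fuse only the remaining sensors and renormalize their weights. The weight values and threshold are assumptions for illustration.

```python
# Hedged sketch: weighted fusion that drops the vision modality when CCTV
# is unavailable. Weights and threshold are assumed, not the paper's.
ASSUMED_WEIGHTS = {"vision": 0.4, "audio": 0.25, "activity": 0.2, "dust": 0.15}

def fuse_available(scores: dict[str, float], cctv_allowed: bool,
                   threshold: float = 0.5) -> bool:
    """scores maps sensor name -> probability of an emergency event."""
    active = {k: v for k, v in scores.items()
              if k in ASSUMED_WEIGHTS and (cctv_allowed or k != "vision")}
    total = sum(ASSUMED_WEIGHTS[k] for k in active)
    combined = sum(v * ASSUMED_WEIGHTS[k] for k, v in active.items()) / total
    return combined >= threshold

readings = {"vision": 0.9, "audio": 0.7, "activity": 0.8, "dust": 0.3}
print(fuse_available(readings, cctv_allowed=False))  # audio+activity+dust only
```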

Abnormal Behavior Pattern Identifications of One-person Households using Audio, Vision, and Dust Sensors

  • Kim, Si-won;Ahn, Jun-ho
    • Journal of Internet Computing and Services, v.20 no.6, pp.95-103, 2019
  • The number of one-person households has grown steadily in recent years, and a notable number of lonely, unnoticed deaths are reported across age groups. We propose an unusual event detection method that may help reduce the number of people who die alone and remain undiscovered for a long period of time. The method identifies abnormal user behavior in daily life using vision pattern, audio pattern, and dust pattern algorithms. Each pattern algorithm on its own cannot detect the user once they leave its coverage area, so we use a fusion method to improve on the accuracy of the individual pattern algorithms (one possible combiner is sketched below) and evaluated the technique on multiple user behavior patterns in indoor areas.
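
The simplest combiner consistent with the coverage argument above is majority voting over the three per-sensor decisions; the paper evaluates its own fusion method, and this particular voting rule is only an assumed illustration.

```python
# Hedged sketch: 2-of-3 majority vote over vision, audio, and dust decisions.
def majority_vote(vision: bool, audio: bool, dust: bool) -> bool:
    return sum([vision, audio, dust]) >= 2  # at least two sensors agree

# A user outside camera coverage can still be flagged by audio + dust:
print(majority_vote(vision=False, audio=True, dust=True))  # True
```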

CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.3, pp.1689-1701, 2019
  • In recent years, personal videos have been shared online thanks to the popularity of portable devices such as smartphones and action cameras, and a recent report predicted that 80% of Internet traffic would be video content by the year 2021. Several studies have been conducted on detecting the main events in a video in order to manage videos at large scale, and they show fairly good performance in certain genres. However, the methods used in previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how keyframes are extracted from the video. We therefore selected frame segments that can represent the video, considering the characteristics of personal videos. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module (the pipeline shape is sketched below). The proposed method achieved 78.4% mAP in experiments using LSVC data.
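
A minimal sketch of the pipeline shape this abstract describes: per-segment feature vectors from four modalities are concatenated and passed through a recurrent model to produce a video-level event score. The feature dimensions, number of segments, LSTM fusion head, and class count are assumptions, not the paper's architecture.

```python
# Hedged sketch (PyTorch): concatenate per-segment object/location/food/audio
# features and fuse them over time with an LSTM. All sizes are assumed.
import torch
import torch.nn as nn

NUM_SEGMENTS, FEAT_DIM = 8, 512  # assumed: 8 segments, 512-d per modality

class SegmentFusion(nn.Module):
    def __init__(self, num_classes: int = 500):  # assumed class count
        super().__init__()
        self.lstm = nn.LSTM(input_size=4 * FEAT_DIM, hidden_size=256,
                            batch_first=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, obj, loc, food, audio):
        x = torch.cat([obj, loc, food, audio], dim=-1)  # (B, T, 4*FEAT_DIM)
        _, (h, _) = self.lstm(x)                        # last hidden state
        return self.head(h[-1])                         # (B, num_classes)

# Dummy per-segment features standing in for CNN outputs:
feats = [torch.randn(2, NUM_SEGMENTS, FEAT_DIM) for _ in range(4)]
print(SegmentFusion()(*feats).shape)  # torch.Size([2, 500])
```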

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering, v.23 no.3, pp.351-360, 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals, consisting of image, landmark, and audio, in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed (a sketch of this conversion follows), and a CNN-LSTM network based on that model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those emotions. Finally, so-called emotion adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. On the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
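
A minimal sketch of the 1D-to-2D landmark conversion idea: rasterize a list of normalized facial landmark coordinates into a small image so a CNN (and, per the paper, a CNN-LSTM across frames) can consume it. The image size and the simple point rasterization are assumptions about the conversion model.

```python
# Hedged sketch: rasterize (x, y) facial landmarks into a 2-D image.
import numpy as np

def landmarks_to_image(landmarks: np.ndarray, size: int = 64) -> np.ndarray:
    """landmarks: (N, 2) array of (x, y) in [0, 1]; returns (size, size)."""
    img = np.zeros((size, size), dtype=np.float32)
    xy = np.clip((landmarks * (size - 1)).astype(int), 0, size - 1)
    img[xy[:, 1], xy[:, 0]] = 1.0  # mark each landmark pixel
    return img

# 68 random normalized landmarks standing in for a detector's output:
frame_img = landmarks_to_image(np.random.rand(68, 2))
print(frame_img.shape, int(frame_img.sum()))  # (64, 64), up to 68 lit pixels
```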

Intelligent Abnormal Situation Event Detections for Smart Home Users Using Lidar, Vision, and Audio Sensors

  • Kim, Da-hyeon;Ahn, Jun-ho
    • Journal of Internet Computing and Services, v.22 no.3, pp.17-26, 2021
  • Recently, COVID-19 has spread, and time spent at home has increased under government quarantine guidelines such as recommendations to refrain from going out. As a result, the number of single-person households staying at home is also increasing, and single-person households are less likely than multi-person households to have an emergency noticed by the outside world. This study collects various situations occurring in the home with lidar, image, and voice sensors and analyzes the data from each sensor with its own algorithm. Using this method, we analyzed abnormal patterns such as emergency situations and conducted research to detect abnormal signs in people. We studied artificial intelligence algorithms that detect abnormalities for each sensor and measured the accuracy of anomaly detection per sensor. Furthermore, this work proposes a fusion method that complements the pros and cons of the sensors, based on experiments on each sensor's detectability in various situations (a sketch follows).
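
One way to encode the pros-and-cons complementarity measured above: weight each sensor's vote by its detectability for the situation it reports, and pick the best-supported situation. The reliability numbers below are invented for illustration; the paper measures its own per-sensor accuracies.

```python
# Hedged sketch: reliability-weighted arbitration across lidar, vision, audio.
RELIABILITY = {  # assumed detectability per (sensor, situation)
    ("lidar", "fall"): 0.90, ("vision", "fall"): 0.80, ("audio", "fall"): 0.60,
    ("lidar", "scream"): 0.05, ("vision", "scream"): 0.10, ("audio", "scream"): 0.95,
}

def arbitrate(votes: dict[str, str]) -> str:
    """votes maps sensor -> detected situation; return the situation with
    the highest summed reliability-weighted support."""
    support: dict[str, float] = {}
    for sensor, situation in votes.items():
        support[situation] = (support.get(situation, 0.0)
                              + RELIABILITY.get((sensor, situation), 0.0))
    return max(support, key=support.get)

print(arbitrate({"lidar": "fall", "vision": "fall", "audio": "scream"}))  # fall
```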

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People

  • Sunghyun Kang;Sehun Lee;Junho Ahn
    • Journal of Internet Computing and Services, v.24 no.6, pp.81-89, 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences through visual and audio aids, but it often runs into practical limitations due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that uses the low-cost video sensors integrated into smartphones to help blind people safely navigate their surroundings while walking. The proposed algorithm combines motion capture and object detection to detect moving people and the various obstacles encountered while walking. We employed the MediaPipe library for motion capture to model and detect surrounding pedestrians, and object detection algorithms to model and detect the various obstacles that can occur on sidewalks (the combination is sketched below). Through experimentation, we validated the performance of the artificial intelligence fusion algorithm, achieving an accuracy of 0.92, a precision of 0.91, a recall of 0.99, and an F1 score of 0.95. This research can help blind people navigate around obstacles such as bollards, shared scooters, and vehicles encountered while walking, thereby enhancing their mobility and safety.
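
A minimal sketch of the smartphone-video fusion this abstract describes: MediaPipe Pose flags nearby pedestrians while a separate object detector flags static obstacles, and either one triggers a warning. The detect_obstacles() function is a placeholder for the paper's object-detection model, which is not reproduced here.

```python
# Hedged sketch: MediaPipe pose detection fused with a placeholder obstacle
# detector; either positive result raises a walking alert.
import cv2               # pip install opencv-python
import mediapipe as mp   # pip install mediapipe
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=True)

def pedestrian_present(frame_bgr: np.ndarray) -> bool:
    """True if a person's pose landmarks are found in the frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    return pose.process(rgb).pose_landmarks is not None

def detect_obstacles(frame_bgr: np.ndarray) -> list[str]:
    """Placeholder for the paper's obstacle detector (bollards, scooters,
    vehicles); returns nothing in this sketch."""
    return []

def walking_alert(frame_bgr: np.ndarray) -> bool:
    return pedestrian_present(frame_bgr) or bool(detect_obstacles(frame_bgr))

print(walking_alert(np.zeros((480, 640, 3), dtype=np.uint8)))  # blank -> False
```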