Title/Summary/Keyword: Multi-Human Behavior

Search Results: 115

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning; Park, Jin-Ho; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.659-666 / 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person recognition environment, the complex behavior environment poses a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, the human detectors in the top-down framework mostly use two-stage target detection models, which slow inference down; the single-stage YOLOv3 target detection model is used instead, which effectively improves the running speed and the generalization of the model, and depthwise separable convolution further improves the speed of target detection and the model's ability to extract target proposal regions. Secondly, based on a feature pyramid network combined with contextual semantic information in the pose estimation model, the OHEM algorithm is used to solve the problem of difficult keypoint detection and improve the accuracy of multi-person pose estimation. Finally, the Euclidean distance between keypoints is used to measure the spatial similarity of postures in a frame and to eliminate redundant postures.
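
The last step of the abstract, using Euclidean distances between keypoints to detect and discard redundant postures, can be illustrated with a short sketch. This is a minimal illustration under assumed conventions (poses as (K, 2) keypoint arrays, a pixel-distance threshold, and greedy score-based suppression), not the authors' implementation.

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding keypoints.

    pose_a, pose_b: (K, 2) arrays of (x, y) keypoint coordinates.
    """
    return np.mean(np.linalg.norm(pose_a - pose_b, axis=1))

def suppress_redundant_poses(poses, scores, threshold=10.0):
    """Keep the higher-scoring pose whenever two poses are closer than
    `threshold` pixels on average (simple greedy suppression)."""
    order = np.argsort(scores)[::-1]          # best-scoring poses first
    kept = []
    for idx in order:
        if all(pose_distance(poses[idx], poses[k]) > threshold for k in kept):
            kept.append(idx)
    return [poses[i] for i in kept]

# Example: two nearly identical detections and one distinct pose.
p1 = np.random.rand(17, 2) * 100
p2 = p1 + np.random.rand(17, 2)               # near-duplicate of p1
p3 = np.random.rand(17, 2) * 100 + 200
deduped = suppress_redundant_poses([p1, p2, p3], scores=[0.9, 0.8, 0.95])
print(len(deduped))                            # typically 2
```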

Human Activity Recognition Based on 3D Residual Dense Network

  • Park, Jin-Ho; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1540-1551 / 2020
  • Aiming at the problem that existing human behavior recognition algorithms cannot fully utilize the multi-level spatio-temporal information of the network, a human behavior recognition algorithm based on a dense three-dimensional residual network is proposed. First, the proposed algorithm uses a three-dimensional residual dense block as the basic module of the network; the module extracts hierarchical features of human behavior through densely connected convolutional layers. Secondly, a local feature aggregation adaptive method is used to learn the local dense features of human behavior. Then, residual connections are applied to promote the flow of feature information and reduce the difficulty of training. Finally, multi-layer local feature extraction is realized by cascading multiple three-dimensional residual dense blocks, and a global feature aggregation adaptive method is used to learn the features of all network layers to realize human behavior recognition. Extensive experiments on the benchmark dataset KTH show that the recognition rate (top-1 accuracy) of the proposed algorithm reaches 93.52%, an improvement of 3.93 percentage points over the three-dimensional convolutional neural network (C3D) algorithm. The proposed framework has good robustness and transfer learning ability, and can effectively handle a variety of video behavior recognition tasks.
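
As a rough picture of the basic module described above, here is a minimal sketch of a three-dimensional residual dense block in PyTorch. The layer count, channel sizes, and the 1x1x1 fusion convolution are assumptions for illustration and do not reproduce the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock3D(nn.Module):
    """Densely connected 3D conv layers with a local residual connection."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth                      # dense connectivity: inputs accumulate
        # 1x1x1 conv fuses all concatenated features back to `channels`
        self.local_fusion = nn.Conv3d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        fused = self.local_fusion(torch.cat(features, dim=1))
        return x + fused                         # local residual connection

# A clip of 8 feature frames at 32x32 resolution (batch of 2, 64 channels).
clip = torch.randn(2, 64, 8, 32, 32)
print(ResidualDenseBlock3D()(clip).shape)        # torch.Size([2, 64, 8, 32, 32])
```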

A Study on Characteristics of Spatial Configuration and Human Evacuation Behavior in Multi-Plex Theater of Jinju (진주지역 복합영화관의 공간구조와 피난행태특성에 관한 연구)

  • Ahn, Eun Hee
    • Journal of the Korean Institute of Rural Architecture / v.7 no.3 / pp.93-100 / 2005
  • The purpose of this study is to identify the specific crowding areas that result from the diverse physical factors of multi-plex theaters during a fire evacuation, and to predict people's evacuation routes more accurately. To achieve this purpose, the architectural characteristics of three multi-plex theaters in Jinju were examined, and evacuation experiments using the computer simulation tool Simulex were carried out for each of these theaters. The conclusions of this study are as follows: (1) Crowding usually occurs at the crossing areas between the theater interior and the corridors, and the crowding rate depends on the number of crossing areas. (2) It is necessary to design escape routes that are also used in ordinary times, and egress route planning should be integrated into space programming at the early stage of building design.

A compliance control of telerobot using neural network (신경 회로망을 이용한 원격조작 로보트의 컴플라이언스 제어)

  • 차동혁; 박영수; 조형석
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1991.10a / pp.850-855 / 1991
  • In this paper, neural network-based compliance control of a telerobot is presented. This is a method that learns the compliance of human behavior and controls the telerobot using the learned compliance. The consistency of the human behavior is checked using Lipschitz's condition. The neural compliance model is composed of a multi-layered neural network that mimics the compliant motion of the human operator. The effectiveness of the proposed scheme is verified by a simulation study.
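
The consistency check mentioned in the abstract can be sketched as verifying a Lipschitz-type bound on recorded operator data. The scalar position/force format and the constant K below are assumptions chosen only to illustrate the condition |f(x1) - f(x2)| <= K |x1 - x2|.

```python
import numpy as np

def lipschitz_consistent(positions, forces, K):
    """Check |f(x1) - f(x2)| <= K * |x1 - x2| over all recorded sample pairs.

    positions, forces: 1-D arrays of scalar samples from the human operator.
    K: assumed Lipschitz constant (upper bound on the compliance stiffness).
    """
    dx = np.abs(positions[:, None] - positions[None, :])
    df = np.abs(forces[:, None] - forces[None, :])
    mask = dx > 1e-9                 # skip pairs with (near-)identical positions
    return bool(np.all(df[mask] <= K * dx[mask]))

# Example: a roughly linear (spring-like) response is consistent for K above the stiffness.
x = np.linspace(0.0, 0.05, 50)                   # displacements [m]
f = 200.0 * x + np.random.normal(0, 0.005, 50)   # ~200 N/m spring with small noise
print(lipschitz_consistent(x, f, K=250.0))       # typically True
print(lipschitz_consistent(x, f, K=50.0))        # False
```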

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun; Lee, Hui-Sung; Park, Jeong-Woo; Kim, Min-Gyu; Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.540-546 / 2008
  • In the near future, robots should be able to understand humans' emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, 93% of communication consists of the speaker's nonverbal communicative behavior, and bodily movements convey the quantity of emotion. The latest personal robots can interact with humans through multiple modalities such as facial expression, gesture, LED, sound, and sensors. However, a posture requires only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions require time information. Because synchronization among these modalities is a key problem, emotion expression needs a systematic approach. Moreover, at a low intensity of surprise the face can be expressed but the gesture cannot, because a gesture is not linear. It is therefore necessary to decide the emotional boundaries for effective robot behavior generation and for synchronization with other expressive modalities. If so, how can we define the emotional boundaries, and how can the modalities be synchronized with one another?
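
One simple way to picture the "emotional boundary" question raised above is a per-modality intensity threshold: below its boundary a modality stays inactive, above it the modality is expressed. The modalities and threshold values in this sketch are purely hypothetical.

```python
# Hypothetical per-modality activation boundaries on a 0..1 emotion-intensity scale.
# Facial expression scales almost linearly, so it activates early; gestures are
# non-linear and only make sense above a higher boundary, as the abstract notes.
BOUNDARIES = {
    "face": 0.1,
    "led": 0.2,
    "sound": 0.3,
    "gesture": 0.5,
}

def active_modalities(intensity):
    """Return the modalities whose boundary is exceeded at this intensity."""
    return [m for m, b in BOUNDARIES.items() if intensity >= b]

print(active_modalities(0.15))   # ['face']  -> low-intensity surprise: face only
print(active_modalities(0.8))    # all four modalities -> full multi-modal expression
```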

Multi-modal Sensor System and Database for Human Detection and Activity Learning of Robot in Outdoor (실외에서 로봇의 인간 탐지 및 행위 학습을 위한 멀티모달센서 시스템 및 데이터베이스 구축)

  • Uhm, Taeyoung; Park, Jeong-Woo; Lee, Jong-Deuk; Bae, Gi-Deok; Choi, Young-Ho
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1459-1466 / 2018
  • Robots that detect humans and recognize their actions are important for human-robot interaction, and many studies have been conducted on them. Recently, deep learning technology has advanced, and learning-based robot perception has become a major research area. Such studies require a database for learning and evaluating intelligent human perception. In this paper, we propose conditions for a multi-modal sensor-based image database that takes security tasks into account, based on an analysis of the image data needed to detect people in an outdoor environment and to recognize their behavior while the robot is operating.
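
As a sketch of what one record in such a multi-modal database might contain, the following dataclass is a hypothetical schema; the sensor fields, file-path layout, and label set are assumptions, not the authors' specification.

```python
from dataclasses import dataclass, field

@dataclass
class MultiModalFrame:
    """One synchronized capture from the robot's outdoor sensor suite (hypothetical)."""
    timestamp: float                 # seconds since recording start
    rgb_path: str                    # file path of the color image
    thermal_path: str                # file path of the thermal image
    lidar_path: str                  # file path of the 3-D point cloud
    person_boxes: list = field(default_factory=list)   # [(x, y, w, h), ...]
    activity_label: str = "unknown"  # e.g. "walking", "loitering", "intrusion"

frame = MultiModalFrame(
    timestamp=12.4,
    rgb_path="frames/000124_rgb.png",
    thermal_path="frames/000124_thermal.png",
    lidar_path="frames/000124_lidar.pcd",
    person_boxes=[(310, 220, 64, 128)],
    activity_label="walking",
)
print(frame.activity_label, len(frame.person_boxes))
```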

Prediction of Human Performance Time to Find Objects on Multi-display Monitors using ACT-R Cognitive Architecture

  • Oh, Hyungseok; Myung, Rohae; Kim, Sang-Hyeob; Jang, Eun-Hye; Park, Byoung-Jun
    • Journal of the Ergonomics Society of Korea / v.32 no.2 / pp.159-165 / 2013
  • Objective: The aim of this study was to predict human performance time in finding objects on multi-display monitors using the ACT-R cognitive architecture. Background: Display monitors are one of the representative interfaces for interaction between people and systems. Nowadays, the use of multi-display monitors is increasing, so it is necessary to study the interaction between users and systems on multi-display monitors. Method: A cognitive model based on the ACT-R cognitive architecture was developed for model-based evaluation on multi-display monitors. To develop the cognitive model, an experiment was first performed to extract the latency of the "where" system of ACT-R. Then, a menu selection experiment was performed to build a human performance model for finding objects on multi-display monitors. The cognitive model was validated by comparing the developed ACT-R model with empirical data. Results: No significant difference in performance time was found between the model and the empirical data. Conclusion: The ACT-R cognitive architecture can be extended to model human behavior in the search for objects on multi-display monitors. Application: This model can help predict performance time for model-based usability evaluation in multi-display work environments.
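
The validation step, comparing model-predicted and observed performance times, can be sketched with a paired t-test. The numbers below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-task performance times in seconds (NOT the study's data).
model_times     = np.array([2.1, 2.6, 3.0, 3.4, 2.8, 3.9, 4.2, 3.1])
empirical_times = np.array([2.3, 2.4, 3.2, 3.3, 3.0, 3.7, 4.4, 3.0])

# Paired t-test: "no significant difference" corresponds to p >= 0.05 here.
t_stat, p_value = stats.ttest_rel(model_times, empirical_times)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference between model and empirical times.")
```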

Human-Tracking Behavior of Mobile Robot Using Multi-Camera System in a Networked ISpace (공간지능화에서 다중카메라를 이용한 이동로봇의 인간추적행위)

  • Jin, Tae-Seok; Hashimoto, Hideki
    • The Journal of Korea Robotics Society / v.2 no.4 / pp.310-316 / 2007
  • This paper proposes a human-following behavior for a mobile robot, and an intelligent space (ISpace) is used to achieve this goal. An ISpace is a 3-D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to track a walking human as stably and precisely as possible using the distributed intelligent sensors. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for tracking the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and tracking the walking human with the mobile robot are presented.
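
The Kalman filtering step described above can be illustrated with a minimal constant-velocity filter on the human's planar position. The state layout, noise covariances, and time step are assumptions, not the authors' exact formulation.

```python
import numpy as np

dt = 0.1                                    # assumed camera sampling period [s]
# State: [x, y, vx, vy]; constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # cameras measure position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                        # assumed process noise
R = np.eye(2) * 0.25                        # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle: x is the state estimate, P its covariance, z an (x, y) measurement."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a human walking diagonally at about 1 m/s with noisy camera measurements.
x, P = np.zeros(4), np.eye(4)
for k in range(50):
    true_pos = np.array([0.1 * k, 0.1 * k])
    z = true_pos + np.random.normal(0, 0.5, size=2)
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```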

Human Activity Recognition using Multi-temporal Neural Networks (다중 시구간 신경회로망을 이용한 인간 행동 인식)

  • Lee, Hyun-Jin
    • Journal of Digital Contents Society / v.18 no.3 / pp.559-565 / 2017
  • Many studies have been conducted to recognize the motion state or behavior of a user from the acceleration sensor built into a smartphone. In this paper, we apply neural networks to the 3-axis acceleration data of a smartphone to recognize human behavior. There are performance issues in applying time-series data to neural networks. We propose multi-temporal neural networks, in which three neural networks with different time windows are trained for feature extraction and their outputs are used as the input to a new neural network. The proposed method showed better performance than other methods such as SVM, AdaBoost, and the IBk classifier on real acceleration data.
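
The multi-temporal idea, three feature-extraction networks over different window lengths feeding one combining network, can be sketched as follows. The window sizes, layer widths, and the use of the most recent samples per window are assumptions for illustration only.

```python
import torch
import torch.nn as nn

WINDOWS = [32, 64, 128]       # assumed window lengths (samples of 3-axis acceleration)

def make_extractor(window):
    """Small MLP that turns one flattened window (window x 3 axes) into a feature vector."""
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(window * 3, 64), nn.ReLU(),
                         nn.Linear(64, 32), nn.ReLU())

class MultiTemporalNet(nn.Module):
    """Three extractors with different time windows feeding a combining classifier."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.extractors = nn.ModuleList([make_extractor(w) for w in WINDOWS])
        self.classifier = nn.Sequential(nn.Linear(32 * len(WINDOWS), 64), nn.ReLU(),
                                        nn.Linear(64, num_classes))

    def forward(self, signal):
        # signal: (batch, T, 3) with T >= max(WINDOWS); each extractor sees its most recent window.
        feats = [ext(signal[:, -w:, :]) for ext, w in zip(self.extractors, WINDOWS)]
        return self.classifier(torch.cat(feats, dim=1))

accel = torch.randn(8, 128, 3)               # batch of 8 acceleration sequences
print(MultiTemporalNet()(accel).shape)       # torch.Size([8, 6])
```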