• Title/Summary/Keyword: object motion

Search Results: 1,044

Human Action Recognition Via Multi-modality Information

  • Gao, Zan;Song, Jian-Ming;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • Journal of Electrical Engineering and Technology
    • /
    • v.9 no.2
    • /
    • pp.739-748
    • /
    • 2014
  • In this paper, we propose pyramid appearance and global structure action descriptors computed on both RGB and depth motion history images (MHIs), together with a model-free method for human action recognition. The proposed algorithm first constructs a motion history image for each of the RGB and depth channels, with the depth information used to filter the RGB information. Different action descriptors are then extracted from the depth and RGB MHIs to represent the actions. Finally, a multi-modality collaborative representation and recognition model, in which the multi-modality information enters the objective function naturally so that information fusion and action recognition are performed jointly, is proposed to classify human actions. To demonstrate the superiority of the proposed method, we evaluate it on the MSR Action3D and DHA datasets, two well-known benchmarks for human action recognition. Large-scale experiments show that our descriptors are robust, stable, and efficient, and that they outperform state-of-the-art algorithms; furthermore, the combined descriptors perform considerably better than any single descriptor alone. Moreover, our proposed model outperforms state-of-the-art methods on both the MSR Action3D and DHA datasets.
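
The pipeline above rests on the motion history image (MHI), in which recently moving pixels appear brighter than older motion. Below is a minimal sketch of MHI construction from a frame sequence, applicable to either the RGB channel (converted to grayscale) or the depth channel; the decay duration and difference threshold are illustrative assumptions, not values from the paper.

```python
# Minimal MHI sketch: decay the history by one step per frame and stamp
# pixels where frame-to-frame change exceeds a threshold. Parameter values
# are assumptions for illustration.
import numpy as np

def update_mhi(mhi, prev_frame, curr_frame, duration=30, diff_thresh=25):
    """Return the updated motion history image for the current frame."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > diff_thresh
    mhi = np.maximum(mhi - 1, 0)   # older motion fades out
    mhi[moving] = duration         # fresh motion gets the maximum intensity
    return mhi

# usage: fold over consecutive grayscale (or depth) frames
# mhi = np.zeros_like(frames[0], dtype=np.int16)
# for prev, curr in zip(frames, frames[1:]):
#     mhi = update_mhi(mhi, prev, curr)
```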

Design and performance evaluation of deep learning-based unmanned medical systems for rehabilitation medical assistance (재활 의료 보조를 위한 딥러닝 기반 무인 의료 시스템의 설계 및 성능평가)

  • Choi, Donggyu;Jang, Jongwook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.12
    • /
    • pp.1949-1955
    • /
    • 2021
  • With the recent COVID-19 situation, countries are keenly aware of the need for medical personnel and supporting technologies. As society ages, the number of medical staff is actually decreasing; to address this, research is needed on replacing those parts of medical practice that do not require a doctor's high level of expertise. This paper describes and proposes an unmanned medical system that uses various deep learning-based image processing technologies to check recovery status in rehabilitation settings where medical staff would otherwise have to meet patients face to face. The proposed method replaces the manual measurements currently used for motion comparison, such as a protractor or drawing lines on a photograph. Because it runs in real time, it supports faster diagnosis, and because the degree to which a performed motion matches the reference can be quantified, it is easy for medical staff to obtain the information they need.
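
As an illustration of replacing a protractor-style measurement, the sketch below computes a joint angle from three 2D pose keypoints and checks it against a reference angle; the keypoint source, the chosen joints, and the tolerance are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: joint angle from three 2D keypoints and a match check.
# Keypoints could come from any pose estimator; the tolerance is assumed.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def matches_reference(angle, reference, tolerance_deg=10.0):
    """True if the measured angle is within the allowed deviation."""
    return abs(angle - reference) <= tolerance_deg

# usage: elbow flexion from (shoulder, elbow, wrist) keypoints
# print(joint_angle((0, 0), (1, 0), (1, 1)))  # -> 90.0
```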

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.290-293
    • /
    • 2021
  • In this paper, we implement a learning efficiency checking system that detects the user's study situation in order to encourage motivation and help improve concentration. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body from a real-time camera feed. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was used for image recognition. After the CNN detects the relevant features of the subject, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected, with push messages sent whenever the monitored actions are interrupted. In addition, each function can be launched from the GUI main screen, including a statistical graph computed from the collected data, a to-do list, and white noise playback. Through the learning efficiency checking system, various functions including data collection and analysis were provided to users.
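
A minimal sketch of the kind of check such a system might run is shown below: frame differencing decides whether the user is moving, and an alert is raised after a period of inactivity. The camera index, thresholds, timing, and the notify() stand-in are assumptions for illustration.

```python
# Minimal sketch: frame-difference motion check with an inactivity alert.
# Thresholds, timings, and the notify() hook are illustrative assumptions.
import time
import cv2

def notify(message):
    print("PUSH:", message)  # stand-in for a real push-message API

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
last_motion = time.time()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    if (diff > 25).sum() > 500:          # enough changed pixels to count as movement
        last_motion = time.time()
    elif time.time() - last_motion > 60:
        notify("No movement detected for 60 seconds")
        last_motion = time.time()        # avoid repeated alerts
    prev = gray
```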


Rotating Brush Strokes to Track Movement for Painterly Rendering (회화적 렌더링에서 움직임을 따라 회전하는 붓질 기법)

  • Han, Jeong-Hun;Gi, Hyeon-U;Kim, Hyo-Won;O, Gyeong-Su
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.426-432
    • /
    • 2008
  • We introduce a method for rendering a scene of 3D objects so that it looks as if an artist had painted it on a canvas with brush strokes. Painting is an art form that presents its subject with color and line on a 2D plane; following this definition, we apply brush strokes to billboards in screen space to obtain a 2D brushing effect. When the object or the camera moves, the stroke orientation must be rotated to preserve the orientation established in the first frame; if the strokes are not rotated, an undesirable shower-door effect appears in the scene. We present a stroke rotation method that keeps the orientation consistent under changes of viewing direction and rigid object animation. The stroke direction is computed from Horn's 2D similarity transform via a least-squares solution. We observed that the rotated strokes track the motion of the object and the view.
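
A least-squares 2D similarity fit of this kind can be sketched as follows: given corresponding projected points of an object in two consecutive frames, the closed-form solution gives the rotation angle to apply to the strokes. This is an illustrative reconstruction of such a fit, not the paper's exact formulation.

```python
# Minimal sketch: rotation angle of the least-squares 2D similarity transform
# mapping points p (previous frame) to points q (current frame).
import numpy as np

def similarity_rotation(p, q):
    """p, q: (N, 2) arrays of corresponding screen-space points."""
    p = p - p.mean(axis=0)   # remove translation
    q = q - q.mean(axis=0)
    num = np.sum(p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0])   # cross terms
    den = np.sum(p[:, 0] * q[:, 0] + p[:, 1] * q[:, 1])   # dot terms
    return np.arctan2(num, den)                            # angle in radians

# usage: rotate each billboard's brush-stroke texture by the estimated angle
# theta = similarity_rotation(prev_points, curr_points)
```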


User Detection and Main Body Parts Estimation using Inaccurate Depth Information and 2D Motion Information (정밀하지 않은 깊이정보와 2D움직임 정보를 이용한 사용자 검출과 주요 신체부위 추정)

  • Lee, Jae-Won;Hong, Sung-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.611-624
    • /
    • 2012
  • Gesture is the most intuitive means of communication apart from the voice, so there has been much research on controlling a computer with gesture input in place of a keyboard or mouse. In such research, user detection and main body part estimation are among the most important steps. In this paper, we propose a method for detecting the user and estimating the main body parts from inaccurate depth information, as a basis for pose estimation. The proposed user detection method combines 2D motion information with 3D depth information, which makes it robust to changes in lighting and to noise; it processes the 2D signals as 1D signals, which makes it suitable for real-time use, and it exploits object information from the previous frame, which makes it more accurate and robust. We also present a main body part estimation method that uses 2D contour information, 3D depth information, and tracking. Experiments show that the proposed user detection method is more robust than methods using only 2D information and detects the user accurately even with inaccurate depth information. The proposed body part estimation method also overcomes the limitations of relying only on 2D contour information, which cannot detect body parts in occluded regions, and of color information, which is sensitive to changes in illumination or environment.
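
The fusion of an imprecise depth mask with 2D motion information described above can be sketched roughly as below; the depth range, difference threshold, and morphological clean-up are assumed values for illustration, not the paper's settings.

```python
# Minimal sketch: fuse a coarse depth mask with a frame-difference mask to
# segment the user. Thresholds and kernel size are assumptions.
import cv2
import numpy as np

def user_mask(depth_mm, prev_gray, curr_gray,
              near_mm=500, far_mm=3000, diff_thresh=20):
    depth_mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    motion_mask = (cv2.absdiff(curr_gray, prev_gray) > diff_thresh).astype(np.uint8) * 255
    fused = cv2.bitwise_and(depth_mask, motion_mask)          # keep pixels where both cues agree
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)   # fill small holes
```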

Abnormal Behavior Detection Based on Adaptive Background Generation for Intelligent Video Analysis (지능형 비디오 분석을 위한 적응적 배경 생성 기반의 이상행위 검출)

  • Lee, Seoung-Won;Kim, Tae-Kyung;Yoo, Jang-Hee;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.111-121
    • /
    • 2011
  • Intelligent video analysis systems require techniques that can predict accidents and provide alarms to monitoring personnel. In this paper, we present an abnormal behavior analysis technique based on adaptive background generation. The abnormal behaviors considered include fence climbing, abandoned objects, fainting persons, and loitering persons. The proposed video analysis system consists of (i) a background generation module and (ii) an abnormal behavior analysis module. For robust background generation, the system updates static regions by detecting motion changes in each frame. In addition, noise and shadow removal steps were added to improve the accuracy of object detection. The abnormal behavior analysis module extracts object information such as centroid, silhouette, size, and trajectory; object behavior is then classified and analyzed against a priori specified scenarios, such as fence climbing, abandoning objects, fainting, and loitering. In the experiments, the proposed system was able to detect moving objects and analyze abnormal behavior in complex environments.
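
A minimal sketch of background updating restricted to static regions, in the spirit of the module described above, is given below; the learning rate and motion threshold are illustrative assumptions.

```python
# Minimal sketch: blend the current frame into the background only where no
# motion change is detected, and return the foreground mask. Parameters are
# assumed values, not the paper's.
import cv2
import numpy as np

def update_background(background, frame_gray, alpha=0.02, motion_thresh=25):
    diff = cv2.absdiff(frame_gray, background.astype(np.uint8))
    static = diff < motion_thresh                    # pixels without motion change
    background[static] = ((1 - alpha) * background[static]
                          + alpha * frame_gray[static])
    foreground = (diff >= motion_thresh).astype(np.uint8) * 255
    return background, foreground

# usage:
# background = first_gray_frame.astype(np.float32)
# background, fg = update_background(background, next_gray_frame)
```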

Weaving the realities with video in multi-media theatre centering on Schaubuhne's Hamlet and Lenea de Sombra's Amarillo (멀티미디어 공연에서 비디오를 활용한 리얼리티 구축하기 - 샤우뷔네의 <햄릿>과 리니아 드 솜브라의 <아마릴로>를 중심으로 -)

  • Choi, Young-Joo
    • Journal of Korean Theatre Studies Association
    • /
    • no.53
    • /
    • pp.167-202
    • /
    • 2014
  • When video composes the mise-en-scène of a performance, it reflects contemporary image culture, in which individuals join as creators through cell phones and computers that remediate earlier video technology. It is also closely related to the contemporary theatre culture in which the video art of the 1960s and 1970s was woven into performance theatre. Against this cultural background, theatre practitioners regarded a media-friendly mise-en-scène as an alternative when faced with a cultural landscape in which the linear representational narrative no longer corresponded to the present culture. Nonetheless, it cannot be ignored that video in performance theatre remediates its historical functions: to criticize social reality and to enrich aesthetic or emotional reality. I focus on how video in performance theatre can frame the object in the image by providing a real-time relay, emphasizing the situation within the frame, and strengthening reality by alluding to the object as a gesture. I therefore explore its two historical functions. First, video records the scene, communicates information, and raises the audience's critical awareness of the object. Second, video in performance theatre can redistribute perception through editing methods such as close-up, slow motion, multiple perspectives, montage and collage, and the transformation of the image, serving an aesthetic function. With these historical functions in mind, I analyze two productions, Schaubuhne's Hamlet and Lenea de Sombra's Amarillo, which were introduced to Korean audiences during the 2010 Seoul Theatre Olympics. Ostermeier is known to have taken real social reality as a text and made the play its context; here, he used video as a vehicle to penetrate social reality through the hero's perspective. It is also noteworthy that Ostermeier understood Hamlet's dilemma as the disposition of today's young generation, who delay action while immersed in image culture. His use of video in the piece also revitalized the aesthetic function of video through a hypermedial mode of perception. Amarillo combined documentary theatre methods with installation, physical theatre, and on-the-spot video relay, activating the aesthetic function through intermediality, the interacting relationship between the media. In this performance, video recorded and pursued the absent presence of real people who died or were lost in the desert; at the same time, it gave form to the emotional dimension of those people at the moment of their death, which would otherwise remain opaque or invisible. In conclusion, I find that video in contemporary performance theatre visualizes the rupture between media and performs their intermediality. It disturbs transparent immediacy in order to awaken the spectator's perception of the theatrical situation, to open its emotional and spiritual dimension, and to call the realities to mind, as in Schaubuhne's Hamlet and Lenea de Sombra's Amarillo.

A Study on Methods for Accelerating Sea Object Detection in Smart Aids to Navigation System (스마트 항로표지 시스템에서 해상 객체 감지 가속화를 위한 방법에 관한 연구)

  • Jeon, Ho-Seok;Song, Hyun-hak;Kwon, Ki-Won;Kim, Young-Jin;Im, Tae-Ho
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.47-58
    • /
    • 2022
  • In recent years, aids to navigation, which serve as the traffic lights of the sea, have been digitized and are developing beyond simple signaling to provide functions such as marine information collection, supervision, and control. For example, Busan Port in South Korea is leading the adoption of these advanced technologies by installing cameras on buoys and recording video to monitor maritime accidents. However, it is difficult for such systems to perform their main functions, because they require long-term battery operation and their management and maintenance are hampered by the marine environment. This study proposes a system that automatically reports maritime objects passing near buoys by analyzing image information. In existing sensor-based accident prevention systems, alarms are generated by a collision detection sensor; such a system can identify the cause of an accident but cannot fundamentally prevent accidents. To overcome these limitations, the proposed maritime object detection system is designed around the characteristics of the marine environment. Experiments demonstrate that the proposed system achieves a processing speed roughly five times faster than existing algorithms.

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로 한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin;Jeong, Si-Chang;Hwang, In-Gyeong;Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.58-65
    • /
    • 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8×8 DCT and 16×16 macroblock-based motion compensation, whereas those in intra-coded images are caused by the 8×8 DCT alone. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering with consideration of edge directions, using a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT); for this reason, blocking artifacts occur both at block boundaries and in block interiors. To remove both kinds of blocking artifacts more completely, the restored differential image must satisfy two constraints, namely directional discontinuities at block boundaries and in block interiors; these constraints are used to define the convex sets for restoring the differential images.
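
As a generic illustration of the projection-onto-convex-sets idea (not the paper's specific constraint sets), the sketch below alternates a smoothing step, standing in for the smoothness-constraint projection, with projection onto the box set of images whose pixels stay within a fixed bound of the decoded image. The bound, kernel size, and iteration count are assumptions.

```python
# Minimal POCS-style sketch for deblocking (illustrative only): alternate a
# smoothing step with a box-constraint (fidelity) projection around the
# decoded image. All parameters are assumed values.
import numpy as np
from scipy.ndimage import uniform_filter

def pocs_deblock(decoded, bound=4.0, iterations=10):
    x = decoded.astype(np.float64)
    for _ in range(iterations):
        x = uniform_filter(x, size=3)                      # smoothness step
        x = np.clip(x, decoded - bound, decoded + bound)   # projection onto the box set
    return np.clip(x, 0, 255).astype(np.uint8)
```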

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.6
    • /
    • pp.471-478
    • /
    • 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3D content and applications and the broadening range of computer users. This paper proposes a novel method for manipulating windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information from markers. To address these shortcomings, we propose a novel visual touchless interface. First, we detect the hand region using the hue channel of the HSV color space. The distance transform is then applied to locate the centroid of the hand, and the curvature of the hand contour is used to determine the positions of the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions, and the recognized gesture becomes a command for controlling a window. Because the method uses a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also possible because the method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified through an application built on the proposed interface.
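
A minimal sketch of the hue-based hand segmentation and distance-transform centroid step described above is given below, using OpenCV; the hue and saturation bounds are assumed skin-tone values, and the stereo-depth and fingertip-curvature steps are omitted.

```python
# Minimal sketch: hue-threshold hand segmentation and distance-transform
# centroid. The inRange bounds are assumed skin-tone values, not the paper's.
import cv2

def hand_centroid(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin-hue band
    mask = cv2.medianBlur(mask, 5)                          # suppress speckle noise
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)      # distance to background
    _, max_val, _, max_loc = cv2.minMaxLoc(dist)            # deepest interior point
    return (max_loc, max_val) if max_val > 0 else (None, 0.0)

# usage: centroid, palm_radius = hand_centroid(frame)
```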