• Title/Summary/Keyword: performance video


The Effect of Asynchronous Haptic and Video Feedback on Teleoperation and a Comment for Improving the Performance (비 동기화된 촉각과 영상 시간지연이 원격조종로봇에 미치는 영향과 성능 향상을 위한 조언)

  • Kim, Hyuk;Ryu, Jee-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.18 no.2 / pp.156-160 / 2012
  • In this paper, we investigate the effect of asynchronous haptic and video feedback on the performance of teleoperation. To analyze this effect, a tele-manipulation experiment was specially designed in which the operator moves square objects from one place to another using a master/slave telerobotic system. Task completion time and the total number of times the object was dropped were used to evaluate performance. A subjective study was conducted with 10 subjects under 16 different combinations of video and haptic feedback delay, while participants had no prior information about the amount of each delay. Initially, we assumed that synchronized haptic and video feedback would give the best performance. However, we found that accuracy increased when haptic and video feedback were synchronized, and that completion time decreased when the delay of one of the feedback channels (either haptic or video) was reduced. Another interesting finding from this experiment is that accuracy was even better when the haptic information arrived slightly earlier than the video information than when the two were synchronized.

CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1689-1701 / 2019
  • In recent years, personal videos have been shared online due to the popularity of portable devices such as smartphones and action cameras. A recent report predicted that 80% of Internet traffic would be video content by the year 2021. Several studies have been conducted on the detection of main video events in order to manage videos at large scale, and they show fairly good performance in certain genres. However, the methods used in previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how keyframes are extracted from the video. We therefore selected frame segments that can represent a video, considering the characteristics of personal videos. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments on the LSVC dataset.
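The segment-selection and multi-modal fusion pipeline described in this abstract can be sketched in a much-simplified form. The segment count, the feature dimensions, and the averaging-based late fusion below are illustrative assumptions only; the paper itself uses a CNN-based recurrent model and a learned fusion module:

```python
import numpy as np

def select_segments(num_frames, num_segments=5, seg_len=16):
    # Pick evenly spaced frame segments so the selection covers the
    # whole video rather than one genre-specific region.
    starts = np.linspace(0, num_frames - seg_len, num_segments).astype(int)
    return [(s, s + seg_len) for s in starts]

def fuse_features(object_f, place_f, audio_f):
    # Naive late fusion: average each modality over segments, then
    # concatenate into one representative vector per video.
    return np.concatenate([object_f.mean(axis=0),
                           place_f.mean(axis=0),
                           audio_f.mean(axis=0)])

segments = select_segments(num_frames=300, num_segments=5, seg_len=16)
rng = np.random.default_rng(0)
obj = rng.normal(size=(5, 128))  # per-segment object features (hypothetical dims)
plc = rng.normal(size=(5, 64))   # per-segment location features
aud = rng.normal(size=(5, 32))   # per-segment audio features
video_vec = fuse_features(obj, plc, aud)
print(len(segments), video_vec.shape)  # 5 (224,)
```

In the actual method, the per-segment features would be fed through a recurrent model before fusion; the averaging here only stands in for that step.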

The Effects of Video Programs of Cardiopulmonary Cerebral Resuscitation Education (동영상 심폐소생술 교육이 간호사의 심폐소생술 수행능력에 미치는 효과)

  • Byun, Gyu Ri;Park, Jeong Eun;Hong, Hae Sook
    • Journal of Korean Biological Nursing Science / v.17 no.1 / pp.19-27 / 2015
  • Purpose: The aim of this study was to identify the effect of video-based cardiopulmonary cerebral resuscitation (CPCR) education on the CPCR performance of nurses. Methods: The subjects were 64 nurses working in a university hospital. The nurses' CPCR performance was measured four times (a pre-test and post-tests immediately, 3 months, and 6 months after the intervention). Data were collected from February to August 2013. Results: There were significant differences in knowledge, attitude, self-efficacy, and performance between the groups at each measurement time. There were also significant group-by-time interactions in knowledge, self-efficacy, and performance, but not in attitude. The video-based CPCR intervention appears to be effective in improving knowledge, self-efficacy, and performance compared with the control group. Conclusion: Video-based CPCR education was an effective intervention for improving and retaining knowledge, attitude, self-efficacy, and performance. It also has the advantage of a self-learning effect for nurses doing shift work. Therefore, video-based CPCR programs should be utilized for continuing nursing education.

Effect of Sensory Integration Video Modeling on Self-initiation and Task Performance in Children with Intellectual Disability (감각통합활동 동영상이 지적장애아동의 자발성과 과제 수행율에 미치는 효과)

  • Hong, Mi-Young;Kim, Jin-Kyung;Kang, Dae-Hyuk
    • The Journal of the Korea Contents Association / v.11 no.5 / pp.260-269 / 2011
  • The aim of this research was to examine whether a sensory integration video modeling intervention would benefit children with intellectual disability. Four children with intellectual disability participated, and an A-B-A design was used. In the intervention phase, each child watched his or her own previously recorded 8-minute video, which was the independent variable in this study. The dependent variables were (1) self-initiation and task performance in the four sensory integration activities, and (2) performance time on the Grooved Pegboard. During the intervention period, each child participated in occupational therapy sessions twice a week. The results showed that sensory integration video modeling increased the participants' self-initiation and task performance, and these scores were maintained even after the intervention period. Grooved Pegboard performance time also decreased. The findings indicate that sensory integration video modeling may be an effective intervention for improving self-initiation and task performance and reducing inattentiveness in children with intellectual disability. Future research should consider the participants' level of cognition and sensory processing capabilities to further validate the effectiveness of sensory integration video modeling.

The Study on the Development of the Realtime HD(High Definition) Level Video Streaming Transmitter Supporting the Multi-platform (다중 플랫폼 지원 실시간 HD급 영상 전송기 개발에 관한 연구)

  • Lee, JaeHee;Seo, ChangJin
    • The Transactions of the Korean Institute of Electrical Engineers P / v.65 no.4 / pp.326-334 / 2016
  • In this paper, we develop and implement a real-time HD-level video streaming transmitter that operates on multiple platforms in any network and client environment, in contrast to existing live video streaming transmitters. We designed the transmitter around the TMS320DM386 video processor from Texas Instruments, ported Linux kernel 2.6.29, and implemented RTSP (Real Time Streaming Protocol)/RTP (Real-time Transport Protocol), HLS (HTTP Live Streaming), and RTMP (Real Time Messaging Protocol) so that the video stream can be delivered to multiple platforms of receiving devices (smartphones, tablet PCs, notebooks, etc.). To verify the performance of the developed transmitter, we built a test environment using a notebook, an iPad, and an Android phone, and analyzed the received video on each client display. The measured performance of the developed real-time HD video streaming transmitter is higher than that of existing products.

No-reference quality assessment of dynamic sports videos based on a spatiotemporal motion model

  • Kim, Hyoung-Gook;Shin, Seung-Su;Kim, Sang-Wook;Lee, Gi Yong
    • ETRI Journal / v.43 no.3 / pp.538-548 / 2021
  • This paper proposes an approach to improve the performance of no-reference video quality assessment for sports videos with dynamic motion scenes using an efficient spatiotemporal model. In the proposed method, we divide the video sequences into video blocks and apply a 3D shearlet transform that can efficiently extract primary spatiotemporal features to capture dynamic natural motion scene statistics from the incoming video blocks. The concatenation of a deep residual bidirectional gated recurrent neural network and logistic regression is used to learn the spatiotemporal correlation more robustly and predict the perceptual quality score. In addition, conditional video block-wise constraints are incorporated into the objective function to improve quality estimation performance for the entire video. The experimental results show that the proposed method extracts spatiotemporal motion information more effectively and predicts the video quality with higher accuracy than the conventional no-reference video quality assessment methods.

Recovery Corrupted Video Files using Time Information (시간 정보를 활용한 동영상 파일 복원 기법)

  • Na, Gihyun;Shim, Kyu-Sun;Byun, Jun-Seok;Kim, Eun-Soo;Lee, Joong
    • Journal of Korea Multimedia Society / v.18 no.12 / pp.1492-1500 / 2015
  • At recent crime scenes there is almost always at least one video capturing the scene, so video files recorded on storage media often provide important evidence, and criminals often attempt to destroy the storage holding such video. For this reason, recovering damaged or deleted video files is important for resolving criminal cases in digital forensics. A recent study recovers video files frame by frame, but its time efficiency is very poor when connecting the recovered frames. This paper proposes an improved frame-based recovery technique for damaged video files that exploits time information. We suggest a new connecting algorithm that orders video frames using the time information recorded in front of each frame. We also evaluate the method in terms of time, and the experimental results show that the proposed method improves performance.
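The timestamp-based frame connection idea can be illustrated with a minimal sketch. The `(timestamp, payload)` tuples below are a hypothetical simplification of the time information the paper reads from in front of each frame; the point is that sorting by timestamp reconnects scattered frames in O(n log n) instead of comparing frame pairs exhaustively:

```python
def connect_frames(frames):
    """Reassemble recovered frames by the timestamp stored ahead of each one.

    frames: list of (timestamp, payload) tuples carved from storage in
    arbitrary order. Sorting on the timestamp restores playback order
    without pairwise frame-content comparisons.
    """
    return [payload for ts, payload in sorted(frames, key=lambda f: f[0])]

recovered = [(3000, b"frame-C"), (1000, b"frame-A"), (2000, b"frame-B")]
print(connect_frames(recovered))  # [b'frame-A', b'frame-B', b'frame-C']
```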

Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.5 / pp.713-725 / 2014
  • Multi-view and 3D video technologies for next-generation video services are widely studied. These technologies can give users a realistic experience by supporting various views. Because the acquisition and transmission of a large number of views is costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for the development of fast 3D video encoders as well as new 3D video coding algorithms.

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.294-300 / 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip matching or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and retrieve them for video clip queries. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between the histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that differs significantly from the previous frames. Experimental results on color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.
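The Cauchy-function similarity between frame histograms can be sketched as follows. The abstract does not give the exact formulation, so the kernel form and the scale parameter `a` below are assumptions based on one common Cauchy-function definition; the heavy tails of this kernel are what make it more robust than, say, a Gaussian on the same distance:

```python
import numpy as np

def cauchy_similarity(h1, h2, a=1.0):
    # Cauchy-type kernel on the histogram difference: 1.0 for identical
    # histograms, decaying slowly (heavy-tailed) as they diverge.
    # (Assumed form; the paper's exact normalization is not given here.)
    d = np.linalg.norm(np.asarray(h1, dtype=float) - np.asarray(h2, dtype=float))
    return 1.0 / (1.0 + (d / a) ** 2)

h_prev = np.array([0.20, 0.30, 0.50])  # histogram of the previous frame
h_curr = np.array([0.25, 0.30, 0.45])  # histogram of the current frame
print(cauchy_similarity(h_prev, h_curr))  # close to 1 for similar histograms
```

A key frame would then be declared whenever this similarity drops below a threshold, marking a frame "significantly different from the previous frames."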


A Method for Video Placement on a Cluster of Video Servers Using Server and Network Loads (비디오 서버 클러스터 상에서의 서버 및 네트워크 부하를 고려한 비디오 배치 방법)

  • Kim, SangChul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.8 no.4 / pp.43-52 / 2008
  • This paper presents a problem definition of video placement and efficient methods for placing video data on a cluster of video servers. Video placement assigns each video replica to one of the servers, where the number and location of the servers are already determined. The rejection ratio of user requests is one of the most important user-perceived performance measures, so it has been used as a performance criterion in much research on VOD systems. The objective of our video placement is to achieve load balancing among servers while minimizing the total network load. In our experiments, the presented methods achieve a better rejection ratio of user requests than placement methods that consider only server load balancing or only network load minimization. We also observed that server load balancing is especially important in video placement. To our knowledge, little research has been published on video placement that considers server and network load together in a video server cluster environment.
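A greedy heuristic that combines the two objectives above might look like the following sketch. The cost model, the `alpha` weight, and all names are hypothetical and not the paper's formulation; the sketch only shows how server load and network cost can be traded off in a single placement score:

```python
def place_replicas(replicas, servers, net_cost, alpha=0.5):
    """Greedily place video replicas, trading off server load balance
    against total network load.

    replicas: list of (video_id, expected_demand) pairs.
    servers:  list of server ids.
    net_cost: per-unit network cost of serving from each server.
    alpha:    weight between load balancing (1.0) and network cost (0.0).
    (Hypothetical cost model for illustration only.)
    """
    load = {s: 0.0 for s in servers}
    placement = {}
    for vid, demand in sorted(replicas, key=lambda r: -r[1]):  # heaviest first
        s = min(servers, key=lambda s: alpha * (load[s] + demand)
                                       + (1 - alpha) * net_cost[s] * demand)
        placement[vid] = s
        load[s] += demand
    return placement, load

placement, load = place_replicas(
    [("v1", 5), ("v2", 3), ("v3", 2)], ["s1", "s2"], {"s1": 1.0, "s2": 1.2})
print(placement, load)  # both servers end up with load 5.0
```

Considering only one of the two terms (setting `alpha` to 1.0 or 0.0) reproduces the single-objective baselines the paper compares against.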
