• Title/Summary/Keyword: Video clips


User Interface Design and Rehabilitation Training Methods in Hand or Arm Rehabilitation Support System (손과 팔 재활 훈련 지원 시스템에서의 사용자 인터페이스 설계와 재활 훈련 방법)

  • Ha, Jin-Young;Lee, Jun-Ho;Choi, Sun-Hwa
    • Journal of Industrial Technology / v.31 no.A / pp.63-69 / 2011
  • A home-based rehabilitation system for patients with impaired hands or arms was developed. By using this system, patients can save the time and money spent on hospital visits. The system's interface is easy to manipulate. In this paper, we discuss a rehabilitation system that uses video recognition; the focus is on designing a convenient user interface and on rehabilitation training methods. The system consists of two screens: one for recording the user's information and the other for training. A first-time user inputs his or her information; the system then chooses a training method based on that information and records the training process automatically using video recognition. On the training screen, video clips of the training method and help messages are displayed for the user.

Shot Group and Representative Shot Frame Detection using Similarity-based Clustering

  • Lee, Gye-Sung
    • Journal of the Korea Society of Computer and Information / v.21 no.9 / pp.37-43 / 2016
  • This paper introduces a method for detecting video shot groups, which is needed for efficient management and summarization of video. The proposed method detects shots based on low-level visual properties and performs temporal and spatial clustering based on the visual similarity of neighboring shots. Shot groups created by temporal clustering are further clustered into smaller groups with respect to visual similarity, and a set of representative shot frames is selected from each cluster of the smaller groups representing a scene. Shots excluded from temporal clustering are also clustered into groups, from which representative shot frames are likewise selected. A number of video clips were collected and used to evaluate the accuracy of shot group detection. The method achieved 91% accuracy in shot group detection, and the number of representative shot frames was reduced to one third of the total shot frames. The experiment also shows an inverse relationship between accuracy and compression rate.
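The temporal clustering step this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the cosine similarity measure, the threshold value, and the choice of the middle shot as a group's representative are all assumptions.

```python
# Sketch: merge consecutive shots into a group while neighboring shots'
# low-level feature vectors stay similar, then keep one representative per group.
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def temporal_cluster(shot_features, threshold=0.9):
    """Group consecutive shots whose similarity to the previous shot
    meets the threshold; returns lists of shot indices."""
    groups, current = [], [0]
    for i in range(1, len(shot_features)):
        if cosine_sim(shot_features[i - 1], shot_features[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def representatives(groups):
    # Take the middle shot of each group as its representative frame.
    return [g[len(g) // 2] for g in groups]
```

With four toy shots whose features change abruptly after the second shot, the sketch yields two groups and one representative per group, mirroring the compression the paper reports.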

Single Pixel Compressive Camera for Fast Video Acquisition using Spatial Cluster Regularization

  • Peng, Yang;Liu, Yu;Lu, Kuiyan;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5481-5495 / 2018
  • Single pixel imaging technology has been developed for years; however, video acquisition with a single pixel camera is not a well-studied problem in computer vision. This work proposes a new scheme for a single pixel camera to acquire video data, together with a new regularization for a robust signal recovery algorithm. The method establishes a single pixel video compressive sensing scheme that reconstructs video clips in the spatial domain by recovering the differences between consecutive frames. Unlike traditional data acquisition methods that work in the transform domain, the proposed scheme reconstructs the video frames directly in the spatial domain. At the same time, a new regularization called spatial cluster is introduced to improve the performance of signal reconstruction; it derives from the observation that the nonzero coefficients tend to be clustered in the difference between consecutive video frames. We implemented an experimental platform to illustrate the effectiveness of the proposed algorithm. Numerous experiments show good performance of video acquisition and frame reconstruction on a single pixel camera.
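The frame-difference observation at the heart of this scheme can be illustrated with a small sketch, assuming a binary-mask sensing matrix: because the sensing operator is linear, the difference of two consecutive measurement vectors equals the measurement of the (sparse) frame difference, so recovery can target that difference directly in the spatial domain. The mask pattern, frame size, and seed here are illustrative, not the authors' setup.

```python
# Sketch: y_t = Phi * x_t, so y_2 - y_1 = Phi * (x_2 - x_1),
# and x_2 - x_1 is sparse when little changes between frames.
import random

def measure(phi, x):
    # Each row of phi is one binary mask of the single pixel camera.
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

random.seed(0)
n, m = 16, 8                       # 16-pixel frame, 8 compressive measurements
phi = [[random.choice([0, 1]) for _ in range(n)] for _ in range(m)]

frame1 = [1.0] * n
frame2 = list(frame1)
frame2[3] += 2.0                   # only one pixel changes between frames

y1, y2 = measure(phi, frame1), measure(phi, frame2)
diff_meas = [b - a for a, b in zip(y1, y2)]              # y_2 - y_1
direct = measure(phi, [b - a for a, b in zip(frame1, frame2)])  # Phi (x_2 - x_1)
```

The two difference vectors coincide exactly, which is why a sparse recovery algorithm (with the spatial-cluster regularizer) can be applied to the measurement differences alone.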

Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning (MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현)

  • YU, LI
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.897-904 / 2022
  • Sports video is a very important information resource. High-precision extraction of effective segments from sports video can better assist coaches in analyzing players' actions and enable users to appreciate hitting actions more intuitively. To address the shortcomings of current sports video clip extraction, such as strong subjectivity, heavy workload, and low efficiency, a classification method for sports video clips based on MobileNetV3 is proposed to save user time. Experiments evaluate the effectiveness of the segment extraction: among the extracted segments, the proportion of effective ones is 97.0%, indicating good extraction results that can lay the foundation for the subsequent construction of a badminton action metadata video dataset.
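The extraction step that would follow per-frame classification can be sketched as below. The abstract does not detail this stage, so treat it as a hypothetical post-processing pass: a MobileNetV3-style classifier labels each frame effective (1) or not (0), consecutive effective frames are merged into clips, and clips shorter than a minimum length are dropped. The labels and `min_len` are assumptions.

```python
# Sketch: turn a per-frame effective/ineffective label sequence into clips.
def extract_clips(labels, min_len=3):
    """Return (start, end) frame index pairs (end exclusive) for runs of
    1-labels at least min_len frames long."""
    clips, start = [], None
    for i, lab in enumerate(labels):
        if lab and start is None:
            start = i                      # a run of effective frames begins
        elif not lab and start is not None:
            if i - start >= min_len:       # keep only sufficiently long runs
                clips.append((start, i))
            start = None
    if start is not None and len(labels) - start >= min_len:
        clips.append((start, len(labels))) # run extends to the last frame
    return clips
```

The minimum-length filter is what suppresses spurious single-frame positives from the classifier, which is one plausible way the reported 97.0% effective proportion could be maintained.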

Designing Video-based Teacher Professional Development: Teachers' Meaning Making with a Video Annotation Tool

  • SO, Hyo-Jeong;LIM, Weiying;XIONG, Yao
    • Educational Technology International / v.17 no.1 / pp.87-116 / 2016
  • In this research, we designed a teacher professional development (PD) program in which a small group of mathematics teachers could share, reflect on, and discuss their pedagogical knowledge and practices for ICT-integrated lessons, using a video annotation tool called DIVER. The purposes of this paper are both micro and macro: to examine how the teachers engaged in the meaning-making process in a video-based PD program (micro), and to derive implications for designing effective video-based teacher PD programs toward a teacher community of practice (macro). To examine the teachers' meaning-making in the PD sessions, discourse data from a series of 10 meetings was segmented into idea units and coded to identify discourse patterns, focusing on (a) participation levels, (b) conversation topics, and (c) conversation depth. Regarding the affordances of DIVER, the discourse patterns of two meetings, before and after individual annotation with DIVER, were compared through qualitative vignette analysis. Overall, we found that the teacher discourse shifted its focus from surface features to deeper pedagogical issues as the PD sessions progressed. In particular, the annotation function in DIVER allowed the teachers to conduct descriptive analyses of video clips in a flexible manner, thereby preparing them cognitively to take interpretative and evaluative stances in face-to-face discussions with colleagues. In conclusion, drawing on our research experience, we discuss the possibilities and challenges of designing video-based teacher PD in a school context.

A Study on the Collaborative Authoring Tool based on Rights Information

  • Yi, Yeong-Hun;Choi, Chang-Ha;Cho, Seong-Hwan
    • Journal of the Korea Society of Computer and Information / v.21 no.3 / pp.17-23 / 2016
  • In this paper, we present a model of a collaborative authoring tool and its implementation results. In the 'collaborative authoring and automatic distribution system,' users can create eBooks by using partial works (primary and edited sources) on the basis of copyright information registered by the primary author. The collaborative authoring tool is part of the 'collaborative authoring and automatic distribution system' developed through research on 'the development of key technologies for social work protection and content mashup tools,' an R&D project funded by the Korea Copyright Commission since 2013. In this system, authors of primary sources for eBooks, such as images, audio clips, and video clips, can register them together with their copyright information; users can edit the primary sources to produce secondary sources and in turn register those secondary sources in the system; and users can create and distribute eBooks using the sources registered in the system.

Multimodal Approach for Summarizing and Indexing News Video

  • Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Young-Tae;Kang, Kyeong-Ok;Kim, Mun-Churl;Kim, Jin-Woong;Kim, Hyung-Myung
    • ETRI Journal / v.24 no.1 / pp.1-11 / 2002
  • A video summary abstracts the gist of an entire video and enables efficient access to the desired content. In this paper, we propose a novel method for summarizing news video based on multimodal analysis of the content. The proposed method exploits closed caption data to locate semantically meaningful highlights in a news video and uses speech signals in the audio stream to align the closed caption data with the video along a time-line. The detected highlights are then described using the MPEG-7 Summarization Description Scheme, which allows efficient browsing of the content through functionalities such as multi-level abstracts and navigation guidance. Multimodal search and retrieval are also supported within the proposed framework: by indexing the synchronized closed caption data, video clips become searchable with a text query. Intensive experiments with prototype systems are presented to demonstrate the validity and reliability of the proposed method in real applications.
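The text-query retrieval enabled by indexing synchronized closed captions can be sketched with a tiny inverted index. The captions, whitespace tokenization, and AND-semantics below are illustrative assumptions, not the paper's actual indexing scheme.

```python
# Sketch: index each highlight clip by the words of its aligned caption,
# then answer a text query with the ids of clips containing every query word.
def build_index(captions):
    """captions: {clip_id: caption string} -> inverted index word -> set of ids."""
    index = {}
    for clip_id, text in captions.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(clip_id)
    return index

def search(index, query):
    """Return ids of clips whose captions contain every query word."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    if not sets:
        return set()
    result = sets[0].copy()
    for s in sets[1:]:
        result &= s
    return result
```

Because captions are time-aligned to the video, each returned clip id maps straight back to a playable segment, which is what makes the clips "searchable by inputting a text query."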

Authoring of Dynamic Information in Augmented Reality Using Video Object Definition (비디오 객체 정의에 의한 동적 증강 정보 저작)

  • Nam, Yang-Hee;Lee, Seo-Jin
    • The Journal of the Korea Contents Association / v.13 no.6 / pp.1-8 / 2013
  • Inserting dynamic objects into augmented reality generally requires modeling or animation tools, a process that demands high expertise and involves considerable complexity. This paper proposes a video-object-based authoring method that enables augmentation with dynamic video objects without such a process. An integrated grab-cut and grow-cut method extracts the initial region of the target object from existing video clips, and a snap-cut method is then applied to track the object's boundaries across frames so as to augment the real world with continuous motion frames. Experiments show video cut-out and authoring results achieved with only a few menu selections and simple corrective sketches.
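Of the segmentation steps named here, grow-cut is the simplest to sketch: seed pixels carry a label (1 = object, 2 = background) and full strength, and on each pass a neighbor "attacks" a pixel, relabeling it when the neighbor's attenuated strength exceeds the pixel's own. The 1-D toy image, seeds, and linear attenuation below are illustrative assumptions; the paper integrates this with grab-cut and snap-cut steps not shown.

```python
# Sketch of the grow-cut cellular automaton on a grayscale image.
def grow_cut(image, labels, strengths, max_val=255.0, iters=20):
    """image/labels/strengths are 2-D lists of equal shape; labels 0 mean
    unlabeled. Returns the final label map."""
    h, w = len(image), len(image[0])
    for _ in range(iters):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx]:
                        # Attenuate the neighbor's strength by color difference.
                        g = 1.0 - abs(image[ny][nx] - image[y][x]) / max_val
                        attack = g * strengths[ny][nx]
                        if attack > strengths[y][x]:
                            labels[y][x] = labels[ny][nx]
                            strengths[y][x] = attack
                            changed = True
        if not changed:
            break
    return labels
```

On a four-pixel strip with an object seed on the dark end and a background seed on the bright end, the labels grow toward the intensity edge and stop there, which is the behavior the authoring tool relies on to strip the target region out of a frame.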

Extraction of User Preference for Video Stimuli Using EEG-Based User Responses

  • Moon, Jinyoung;Kim, Youngrae;Lee, Hyungjik;Bae, Changseok;Yoon, Wan Chul
    • ETRI Journal / v.35 no.6 / pp.1105-1114 / 2013
  • Owing to the large number of video programs available, a method is needed for efficiently accessing preferred videos through personalized video summaries and clips. Automatic recognition of user states while viewing a video is essential for extracting meaningful video segments. Although there have been many studies on emotion recognition using various user responses, electroencephalogram (EEG)-based research on preference recognition for videos is at a very early stage. This paper proposes classification models based on linear and nonlinear classifiers using EEG features of band power (BP) values and asymmetry scores for four preference classes. The quadratic-discriminant-analysis-based model using BP features achieves a classification accuracy of 97.39% (±0.73%), and the models based on the other nonlinear classifiers using BP features achieve accuracies of over 96%, which is superior to previous work that addressed only binary preference classification. These results show that the proposed approach is suitable for personalized video segmentation with high accuracy and classification power.
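The band power (BP) feature used as classifier input can be sketched as follows: power in a frequency band is read off the DFT of an EEG epoch, and an asymmetry score is the log-power difference between a right- and a left-hemisphere channel. The sampling rate, band edges, and channel pairing are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: band power from a naive DFT, and a log-power asymmetry score.
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes over bins with frequency in [f_lo, f_hi)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            x_k = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n))
            power += abs(x_k) ** 2
    return power

def asymmetry(right, left, fs, f_lo, f_hi):
    # Log-power difference between homologous right/left channels.
    return (math.log(band_power(right, fs, f_lo, f_hi))
            - math.log(band_power(left, fs, f_lo, f_hi)))
```

A pure 10 Hz tone sampled at 64 Hz, for example, puts essentially all of its power in the alpha band (8-13 Hz) and none in the beta band (13-30 Hz), which is the kind of per-band feature vector the classifiers would consume.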

Performance Evaluation of New Signatures for Video Copy Detection (비디오 복사방지를 위한 새로운 특징들의 성능평가)

  • 현기호
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.1 / pp.96-102 / 2003
  • Video copy detection is a complementary approach to watermarking. As opposed to watermarking, which relies on inserting a distinct pattern into the video stream, video copy detection techniques match content-based signatures to detect copies of a video. Existing content-based copy detection schemes have typically relied on image matching. This paper proposes two new sequence matching techniques for copy detection and compares their performance with existing color-based techniques. Motion-based, intensity-based, and color-based signatures are compared in the context of copy detection. Comparative experimental results on detecting copies of movie clips are reported.
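The sequence matching idea can be sketched as a sliding-window comparison: per-frame signatures (here, hypothetically, mean intensities) of a query clip are slid along the reference video's signature sequence, and the offset with the smallest mean absolute distance below a threshold is reported as a copy. The signature choice and threshold are assumptions for illustration, not the paper's proposed signatures.

```python
# Sketch: find where a query signature sequence best aligns with a reference.
def best_match(reference, query, max_dist=0.1):
    """Return (offset, distance) of the best alignment of query within
    reference, or None if no window matches within max_dist."""
    best = None
    for off in range(len(reference) - len(query) + 1):
        window = reference[off:off + len(query)]
        d = sum(abs(r - q) for r, q in zip(window, query)) / len(query)
        if best is None or d < best[1]:
            best = (off, d)
    return best if best and best[1] <= max_dist else None
```

Matching whole signature sequences rather than single frames is what distinguishes this from the image-matching schemes the abstract mentions: an accidental one-frame match cannot trigger a detection.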