• Title/Abstract/Keyword: video technology


Optimizing the Joint Source/Network Coding for Video Streaming over Multi-hop Wireless Networks

  • Cui, Huali; Qian, Depei; Zhang, Xingjun; You, Ilsun; Dong, Xiaoshe
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7 No. 4 / pp.800-818 / 2013
  • Supporting video streaming over multi-hop wireless networks is particularly challenging due to the time-varying and error-prone characteristics of the wireless channel. In this paper, we propose a joint optimization scheme for video streaming over multi-hop wireless networks. Our coding scheme, called Joint Source/Network Coding (JSNC), combines source coding and network coding to maximize video quality under limited wireless resources and coding constraints. JSNC segments the streaming data into generations at the source node and exploits intra-session coding at both the source and intermediate nodes. The generation size and the level of redundancy influence streaming performance significantly and must be chosen carefully. We formulate this as an optimization problem that minimizes the end-to-end distortion by jointly considering the generation size and the coding redundancy. Simulation results demonstrate that, with the appropriate generation size and coding redundancy, JSNC achieves optimal performance for video streaming over multi-hop wireless networks.
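The generation-size/redundancy tradeoff described in this abstract can be illustrated with a toy packet-loss model (hypothetical parameters, not the paper's actual formulation): with random linear coding over a large field, a generation of k source packets is decodable once any k of the k + r transmitted coded packets arrive.

```python
from math import comb

def decode_prob(gen_size: int, redundancy: int, p_loss: float) -> float:
    """Probability that a generation of `gen_size` source packets can be
    decoded when `gen_size + redundancy` coded packets are sent and each
    is lost independently with probability `p_loss` (illustrative model)."""
    n = gen_size + redundancy
    p = 1.0 - p_loss
    # With random linear coding over a large field, any `gen_size`
    # received packets suffice (w.h.p.) to decode the generation.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(gen_size, n + 1))

# More redundancy raises the delivery probability but costs bandwidth,
# which is exactly the tension the paper's optimization resolves.
for r in (0, 2, 4):
    print(r, round(decode_prob(16, r, 0.1), 3))
```

The optimization in the paper additionally accounts for the distortion cost of losing a whole generation, which this sketch omits.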

ROI-based Encoding Using Face Detection and Tracking for Mobile Video Telephony

  • 이유선; 김창희; 나태영; 임정연; 주영호; 김기문; 변재완; 김문철
    • Proceedings of the IEEK Conference / 2008 IEEK Summer Conference / pp.77-78 / 2008
  • With the advent of 3G mobile communication services, video telephony has become one of the major services. However, due to narrow channel bandwidth, current video telephony services have not yet reached a satisfactory level of quality. In this paper, we propose an ROI (Region-Of-Interest) based improvement of visual quality for video telephony services using the H.264|MPEG-4 Part 10 (AVC: Advanced Video Coding) codec. To this end, we propose a face detection and tracking method to define the ROI for AVC-based video telephony. Experimental results show that the proposed ROI-based method improves visual quality from both objective and subjective perspectives.

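One common way to realize ROI-based encoding with an AVC encoder is a per-macroblock quantization-parameter (QP) map that spends more bits inside the detected face region. A minimal sketch (the QP values and ROI coordinates here are hypothetical, not the authors' configuration):

```python
def qp_map(mb_cols, mb_rows, face_roi, base_qp=32, roi_delta=-6):
    """Build a per-macroblock QP map: a lower QP (finer quantization)
    inside the face ROI concentrates bits where viewers look.
    face_roi = (x0, y0, x1, y1) in macroblock coordinates, inclusive."""
    x0, y0, x1, y1 = face_roi
    return [[base_qp + (roi_delta if x0 <= x <= x1 and y0 <= y <= y1 else 0)
             for x in range(mb_cols)]
            for y in range(mb_rows)]

# QCIF luma (176x144) is an 11x9 grid of 16x16 macroblocks; the tracked
# face rectangle would be updated every frame by the detector.
qmap = qp_map(11, 9, (3, 2, 7, 6))
```

At a fixed bitrate the encoder's rate control would then trade the extra ROI bits against a coarser background, which is the subjective-quality gain the abstract reports.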

Predicting Learning Achievements with Indicators of Perceived Affordances Based on Different Levels of Content Complexity in Video-based Learning

  • Dasom KIM; Gyeoun JEONG
    • Educational Technology International / Vol. 25 No. 1 / pp.27-65 / 2024
  • The purpose of this study was to identify differences in learning patterns according to content complexity in video-based learning environments and to derive variables that have an important effect on learning achievement within particular learning contexts. To achieve these aims, we observed and collected data on learners' cognitive processes through perceived affordances, using behavioral logs and eye movements as specific indicators. These two types of reaction data were collected from 67 male and female university students who watched, through the video learning player, two learning videos classified according to their task complexity. The results showed that when the content complexity level was low, learners tended to navigate using other learners' digital logs, but when it was high, students tended to control the learning process and directly generate their own logs. In addition, using prediction models derived for each level of content complexity, we identified the important variables influencing learning achievement: in the low-complexity group these were related to video playback and annotation, whereas in the high-complexity group they were related to active navigation of the learning video. This study not only applied novel variables in the field of educational technology, but also attempted to provide qualitative observations on the learning process based on a quantitative approach.

Real-time Stabilization Method for Video Acquired by an Unmanned Aerial Vehicle

  • 조현태; 배효철; 김민욱; 윤경로
    • Journal of the Semiconductor & Display Technology / Vol. 13 No. 1 / pp.27-33 / 2014
  • Because UAVs are lightweight, the video they capture is strongly affected by the environment, wind in particular, and the resulting platform shake makes the footage unstable. The objective of this paper is to produce stabilized video by removing the shake from footage acquired by a UAV. The stabilizer estimates camera motion by computing the optical flow between successive frames. The estimated motion contains both intended movements and unintended shake; the unintended movements are eliminated by a smoothing process. Experimental results show that the proposed method performs almost as well as existing offline stabilizers. However, the camera motion estimation, i.e., the optical flow computation, becomes a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be applied to video acquired by UAVs as well as to shaky video from non-professional users, and to any other field that requires object tracking or accurate image analysis and representation.
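The smoothing step this abstract describes, separating intended camera motion from shake, can be sketched in a few lines. This assumes per-frame translations have already been estimated from optical flow and is only an illustration, not the authors' exact pipeline:

```python
import numpy as np

def smooth_path(dx: np.ndarray, radius: int = 5) -> np.ndarray:
    """Smooth a per-frame motion estimate (e.g. horizontal translation
    from optical flow) with a moving average; the difference between the
    smoothed and raw camera paths is the correction to warp each frame by."""
    path = np.cumsum(dx)                        # accumulated camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode="edge")  # avoid shrinkage at ends
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed - path                      # per-frame correction

# Jittery pan: intended motion of +1 px/frame plus random shake.
rng = np.random.default_rng(0)
dx = 1.0 + rng.normal(0, 3, size=120)
corr = smooth_path(dx)
```

Applying `corr` as a per-frame translation preserves the slow intended pan while suppressing the high-frequency jitter; a 2-D stabilizer would do the same independently for x, y, and rotation.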

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang; Mayuko Nishio
    • Smart Structures and Systems / Vol. 33 No. 4 / pp.263-279 / 2024
  • With the increasing diversity of internet media, available video data have become more convenient and abundant. Research based on such video data has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review paper summarizes recent developments and applications of video-based technology for structural nonlinearity extraction and damage evaluation. The most regularly used object-detection image and video databases are first summarized, followed by suggestions for obtaining video data of structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are proposed to address the weaknesses of current video-based nonlinear extraction technology, including nonlinear extraction from spatial data rather than only one-dimensional time series, real-time detection, and visualization.

Gradient Fusion Method for Night Video Enhancement

  • Rao, Yunbo; Zhang, Yuhong; Gou, Jianping
    • ETRI Journal / Vol. 35 No. 5 / pp.923-926 / 2013
  • To address the problem of night video enhancement, we propose a novel gradient-domain fusion method in which gradient-domain frames of the daytime background are fused with nighttime video frames. To verify the superiority of the proposed method, it is compared with conventional techniques; the output of our method is shown to offer enhanced visual quality.
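The core idea, combining daytime background gradients with the nighttime frame and integrating back, can be illustrated on a 1-D signal. This is a deliberate simplification of the paper's 2-D gradient-domain fusion, which would reconstruct via a Poisson solve rather than a cumulative sum:

```python
import numpy as np

def fuse_gradients_1d(night: np.ndarray, day: np.ndarray) -> np.ndarray:
    """Keep, at each position, whichever signal has the stronger gradient,
    then integrate back, anchored at the nighttime signal's first value.
    (In 2-D the integration step becomes a Poisson reconstruction.)"""
    g_night, g_day = np.diff(night), np.diff(day)
    fused = np.where(np.abs(g_day) > np.abs(g_night), g_day, g_night)
    return np.concatenate(([night[0]], night[0] + np.cumsum(fused)))

# A dark, flat night signal inherits edge structure from the day signal
# while keeping the nighttime brightness level as its anchor.
night = np.full(8, 10.0)
day = np.array([50.0, 50.0, 50.0, 200.0, 200.0, 200.0, 50.0, 50.0])
fused = fuse_gradients_1d(night, day)
```

The fused result keeps the night frame's illumination baseline but recovers the daytime edges, which is the visibility gain the abstract reports.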

Face Detection Based on Video Sequences

  • 안효창; 이상범
    • Journal of the Semiconductor & Display Technology / Vol. 7 No. 3 / pp.45-49 / 2008
  • Face detection and tracking technology for video sequences has advanced owing to the commercialization of teleconferencing, telecommunication, face-recognition front ends for surveillance systems, and video-phone applications. Complex backgrounds and color distortion under varying luminance conditions have hindered face recognition systems. In this paper, we present research on face recognition in video sequences. We extract the facial area using the luminance and chrominance components of the $YC_bC_r$ color space. After extracting the facial area, we apply an improved algorithm that combines PCA and LDA. The proposed algorithm achieves a 92% recognition rate, which is more accurate than previous methods based on PCA alone or on a simple combination of PCA and LDA.

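A minimal sketch of the PCA stage of such a pipeline (illustrative only; the paper's improved PCA+LDA combination is not reproduced here, and the face vectors below are random placeholders):

```python
import numpy as np

def pca_project(X: np.ndarray, k: int) -> np.ndarray:
    """Project row-vector face samples X (n_samples x n_pixels) onto the
    top-k principal components; LDA would then be applied in this reduced
    subspace to sharpen class separation between identities."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes,
    # already ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 64))     # 20 hypothetical face vectors
features = pca_project(faces, 5)      # compact features for matching
```

Recognition then reduces to nearest-neighbor matching in the low-dimensional feature space, which is far cheaper than comparing raw pixel vectors.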

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of the Korean Institute of Information Technology (English Edition) / Vol. 10 No. 2 / pp.125-138 / 2020
  • Due to the development of camera technology, the cost of producing time-lapse video has been reduced, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured over a long period at long intervals. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method for separating desired objects from unnecessary ones and removing the unnecessary elements. The characteristic we exploit is that unnecessary elements appear only intermittently in the captured images. The proposed method removes such noise by applying a codebook background-modeling algorithm. Experimental results show that the proposed method is simple and accurate in finding and removing unnecessary elements in time-lapse videos.
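The codebook idea, modeling each pixel's background as a set of observed intensity ranges so that intermittent outliers stand out, can be sketched as follows (a scalar-intensity simplification of the full codebook algorithm, with an arbitrary tolerance `eps`):

```python
def build_codebook(samples, eps=10):
    """Learn a per-pixel codebook: each codeword is an [lo, hi] intensity
    range; a sample extends the first matching codeword or starts a new one."""
    book = []
    for v in samples:
        for cw in book:
            if cw[0] - eps <= v <= cw[1] + eps:
                cw[0], cw[1] = min(cw[0], v), max(cw[1], v)
                break
        else:
            book.append([v, v])
    return book

def is_noise(book, v, eps=10):
    """A value matching no codeword is an intermittent element (e.g. an
    insect or a raindrop) to be replaced from the background model."""
    return all(not (cw[0] - eps <= v <= cw[1] + eps) for cw in book)

book = build_codebook([100, 102, 98, 101])   # a stable background pixel
```

In time-lapse footage the long capture interval makes transient objects match no codeword, so flagged pixels can simply be replaced by the modeled background value.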

Status and Development Direction of Virtual Reality Video Technology

  • 유묘의해; 정진헌
    • Journal of Digital Convergence / Vol. 19 No. 12 / pp.405-411 / 2021
  • Virtual reality is a new practical technology developed in the 20th century. Recently, with the continuous development and improvement of virtual reality (VR) technology, related industries have grown rapidly, and realistic video content produced with VR technology provides users with a better visual experience. VR also has excellent characteristics in terms of interaction and imagination, so a bright outlook can be expected in the field of video content production. This paper introduces the current types of VR video displays, the underlying technology, and the ways users watch VR video. In addition, it compares and analyzes the difference in resolution between past and current VR equipment and explores why resolution affects VR video. Finally, several directions for the future development of VR video are proposed that will provide convenience in people's daily lives.

Implementation of a Video Processing Module for an Integrated Modular Avionics System

  • 전은선; 강대일; 반창봉; 양승열
    • Journal of Advanced Navigation Technology / Vol. 18 No. 5 / pp.437-444 / 2014
  • In an integrated modular avionics (IMA) system, the functions of federated line replaceable units (LRUs) are provided by a single line replaceable module (LRM), and multiple LRMs are mounted in one cabinet. The video processing module (VPM) of the IMA core system is an LRM that acts as a bridge and gateway for the ARINC 818 avionics digital video bus (ADVB). ARINC 818 is a specification developed for high-bandwidth, low-latency, uncompressed digital video transmission. The FPGA IP core of the VPM performs ARINC 818-to-DVI and DVI-to-ARINC 818 conversion and provides video decoder and overlay functions. This paper describes the hardware implementation of the VPM and presents verification results for the VPM functions and the IP core performance.