• Title/Abstract/Keyword: Videos

1,523 search results

Digital Video Steganalysis Based on a Spatial Temporal Detector

  • Su, Yuting;Yu, Fan;Zhang, Chengqian
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 11, No. 1, pp. 360-373, 2017
  • This paper presents a novel digital video steganalysis scheme against spatial-domain video steganography, based on a spatial temporal detector (ST_D) that considers the spatial and temporal redundancies of video sequences simultaneously. Three descriptors are constructed on the XY, XT and YT planes, respectively, to depict the spatial and temporal relationships between the current pixel and its adjacent pixels. Because local motion intensity and texture complexity affect the histogram distributions of the three descriptors, each frame is segmented into non-overlapping 8×8 blocks for motion and texture analysis. Texture and motion factors then provide reasonable weights for the histograms of the three descriptors of each block. After weighted modulation, the histogram statistics of the three descriptors are concatenated into a single feature vector to build the global description of ST_D. The experimental results demonstrate the clear advantage of these features over those of the rich model (RM), the subtractive pixel adjacency model (SPAM) and the subtractive prediction error adjacency matrix (SPEAM), especially for compressed videos, which constitute most Internet videos.
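
The three-plane descriptor idea can be sketched as truncated-difference histograms. This is a toy illustration, not the authors' code: the per-block texture/motion weighting is omitted, and differences along the x, y and t axes stand in for the full XY/XT/YT plane descriptors.

```python
import numpy as np

def st_d_features(video, T=3):
    """Toy sketch of ST_D-style features: truncated pixel differences along
    the x, y and t axes (standing in for the XY, XT and YT plane descriptors),
    histogrammed and concatenated. The paper's per-block texture/motion
    weighting is omitted; all names here are illustrative."""
    v = video.astype(np.int32)          # video shaped (frames, height, width)
    feats = []
    for axis in (2, 1, 0):              # x, y, t neighbours
        d = np.clip(np.diff(v, axis=axis), -T, T)   # truncate residuals to [-T, T]
        h, _ = np.histogram(d, bins=np.arange(-T - 0.5, T + 1.5))
        feats.append(h / h.sum())       # normalized (2T+1)-bin histogram
    return np.concatenate(feats)
```

A steganalysis classifier would then be trained on these feature vectors for cover versus stego videos.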

A Method for Object Tracking Based on Background Stabilization

  • 정훈조;이동은
    • 디지털산업정보학회논문지, Vol. 14, No. 1, pp. 77-85, 2018
  • This paper proposes a robust digital video stabilization algorithm that extracts and tracks an object using phase-correlation-based motion correction. The proposed algorithm consists of background stabilization based on motion estimation, followed by extraction of the moving object. Motion vectors are estimated by computing the phase correlation between successive frames over eight sub-images located at the corners and edges of the video. The global motion vector is estimated from the multiple local motions of the sub-images, and the image is compensated accordingly. Because the previous frame and the compensated frame share the same background, the background motion can be subtracted and the moving objects in the video extracted. Computing phase correlation over the sub-images yields robust motion vectors that compensate for vibrations in all directions, including translation, rotation, and zooming in or out. Experimental results show that the proposed digital image stabilization algorithm provides continuously stabilized video while tracking object movements.
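
The core phase-correlation step can be sketched in a few lines. This is a minimal single-patch version, assuming grayscale frames; the algorithm above applies it to eight sub-images and fuses the resulting local motion vectors into a global one.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two equally sized frames
    via phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum sits at the shift."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)     # keep phase only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                        # map peak index to signed shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Applying this to each corner sub-image and, e.g., taking the median of the eight local vectors would give a simple global motion estimate for compensation.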

A Noisy Videos Background Subtraction Algorithm Based on Dictionary Learning

  • Xiao, Huaxin;Liu, Yu;Tan, Shuren;Duan, Jiang;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 6, pp. 1946-1963, 2014
  • Most background subtraction methods focus on dynamic and complex scenes without considering robustness against noise. This paper proposes a background subtraction algorithm based on dictionary learning and sparse coding for handling low-light conditions. The proposed method formulates background modeling as a sparse linear combination of atoms in the dictionary, and background subtraction as the difference between the sparse representations of the current frame and the background model. The assumption that the projection of the noise onto the dictionary is irregular and random guarantees the adaptability of the approach in heavily noisy scenes. Experimental results on both simulated heavy noise and realistic low-light conditions show the promising robustness of the proposed approach compared with competing methods.
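
The residual-based idea can be sketched as follows. This is an illustrative stand-in, not the paper's method: the "dictionary" here is simply the normalized background frames rather than a learned one, and a tiny orthogonal matching pursuit does the sparse coding.

```python
import numpy as np

def omp(D, y, k):
    """Tiny orthogonal matching pursuit: greedily pick k atoms, refit by
    least squares, and return the final residual."""
    resid, idx = y.astype(float), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    return resid

def foreground_mask(bg_frames, frame, k=1, thresh=25.0):
    """Foreground = pixels poorly explained by a sparse combination of
    background atoms. Using raw background frames as atoms is a stand-in
    for the learned dictionary in the paper."""
    D = np.stack([f.ravel() / np.linalg.norm(f) for f in bg_frames], axis=1)
    resid = omp(D, frame.ravel().astype(float), k)
    return np.abs(resid).reshape(frame.shape) > thresh
```

Because random noise projects weakly and irregularly onto the atoms, it mostly stays below the residual threshold, which is what gives the approach its noise robustness.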

Reduced Reference Quality Metric for Synthesized Virtual Views in 3DTV

  • Le, Thanh Ha;Long, Vuong Tung;Duong, Dinh Trieu;Jung, Seung-Won
    • ETRI Journal, Vol. 38, No. 6, pp. 1114-1123, 2016
  • Multi-view video plus depth (MVD) is widely used owing to its effectiveness in three-dimensional data representation. With MVD, color videos from only a limited number of real viewpoints are compressed and transmitted along with captured or estimated depth videos. Because synthesized views are generated from the decoded real views, their original reference views exist at neither the transmitter nor the receiver, so defining an efficient metric to evaluate the quality of synthesized images is challenging. We propose a novel reduced-reference quality metric. First, the effect of depth distortion on the quality of synthesized images is analyzed. We then exploit the high correlation between local depth distortions and the local color characteristics of the decoded depth and color images to obtain an efficient depth quality metric for each real view. Finally, the objective quality metric of the synthesized views is obtained by combining the depth quality metrics from all decoded real views. Experimental results show that the proposed metric correlates very well with full-reference image and video quality metrics.
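
The abstract does not specify the reduced-reference features, so the following is only a hypothetical illustration of the general shape of such a metric: suppose the transmitter sends 8×8 block means of the original depth as the reduced reference, and block-wise depth distortion is weighted by local color activity (gradient energy), since depth errors near color edges hurt view synthesis most.

```python
import numpy as np

def block_means(img, bs=8):
    # Mean of each non-overlapping bs x bs block
    h, w = img.shape
    return img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def view_depth_quality(depth_dec, ref_means, color_dec, bs=8):
    """Hypothetical reduced-reference depth metric for one real view:
    block-wise depth distortion (vs. transmitted block means) weighted by
    local color-gradient energy. Illustrative only, not the paper's formula."""
    dist = np.abs(block_means(depth_dec, bs) - ref_means)
    g0, g1 = np.gradient(color_dec.astype(float))
    act = block_means(np.hypot(g0, g1), bs)
    w = act / (act.sum() + 1e-12)          # color-activity weights, sum ~ 1
    return float((w * dist).sum())

def synthesized_view_quality(per_view_metrics):
    # Combine per-view depth metrics into one objective score (simple mean)
    return float(np.mean(per_view_metrics))
```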

A Study of Fire Detection Algorithm for Efficient 4D System

  • 조경우;왕기초;오창헌
    • 한국정보통신학회 학술대회논문집, 2013년도 추계학술대회, pp. 1003-1005, 2013
  • 4D technology refers to providing physical effects along with 3D or conventional video. Implementing it requires producing 4D metadata keyed to the video's playback time and frame data. This paper proposes a method that judges the temperature situation of a scene from the video's color information and provides the corresponding physical effect. The proposed method analyzes color information indicative of situations such as fire and triggers physical effects appropriate to the situation. With this method, 4D metadata for tactile experiences such as heater devices is expected to be produced automatically, without programmer intervention.
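
The color-based trigger can be sketched with a common flame-color heuristic (R > G > B with R above a threshold). The rule and the thresholds are illustrative stand-ins for the paper's color analysis, not its actual algorithm.

```python
import numpy as np

def flame_ratio(frame_rgb, r_min=180):
    """Fraction of pixels matching a simple flame-color rule
    (R > r_min and R > G > B) -- a common heuristic, used here as an
    illustrative stand-in for the paper's color analysis."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return ((r > r_min) & (r > g) & (g > b)).mean()

def heat_metadata(frames, on_ratio=0.05):
    """Per-frame 4D 'heater on/off' metadata: switch the effect on when
    enough of the frame looks like flame."""
    return [flame_ratio(f) > on_ratio for f in frames]
```

Running this over a decoded video would yield a time-aligned on/off track that a 4D playback system could map to a heater device.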


Storytelling and Social Networking: Why Luxury Brand Needs to Tell Its Story

  • Park, Min-Sook
    • Journal of Information Technology Applications and Management, Vol. 27, No. 5, pp. 69-80, 2020
  • Recently, luxury brands have begun selling their products to consumers through their own direct online channels. In online channels, a storytelling-based marketing strategy is needed because consumers lack direct product experience. Luxury brands therefore actively use social media to deliver stories that include their birth and growth. Unlike mass media, social media communicates with consumers more quickly and frequently and delivers the brand story naturally. This study classifies luxury-brand consumers into four groups based on their recognition of brand stories and their self-esteem, and analyzes the luxury consumption propensity of each group. It also draws strategic implications for effective SNS advertising by analyzing, for each of these groups, narrative transportation in SNS advertising, interest in the videos, and interest in the story. The analysis shows that consumption propensity differs among the four groups, as do responses to narrative videos on SNSs, such as narrative transportation, interest in the videos, and interest in brand stories.

The Abstraction Retrieval System of Cultural Videos using Scene Change Detection

  • 강오형;이지현;이양원
    • 정보처리학회논문지B, Vol. 12B, No. 7, pp. 761-766, 2005
  • This paper proposes a video model for building a cultural video database system. First, an efficient scene change detection technique is used to segment cultural videos into meaningful units for efficient indexing and retrieval. Because videos are large and require long playback times, viewers otherwise face the problem of having to watch the entire video. To solve this, an abstraction of the cultural video is extracted, saving viewers time and widening their choice of videos. The video abstraction summarizes the scenes in which important events occur, produced according to a set of abstraction-generation rules.
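
The scene change detection step can be sketched with the classic histogram-comparison method: a cut is declared where the color/intensity histograms of consecutive frames differ sharply. The bin count and threshold below are illustrative choices.

```python
import numpy as np

def shot_boundaries(frames, bins=16, thresh=0.4):
    """Detect cuts by comparing normalized gray-level histograms of
    consecutive frames (L1 distance); a boundary is declared where the
    distance exceeds the threshold."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > thresh]
```

An abstraction generator would then apply its rules to the resulting shots, keeping representative frames from shots that contain important events.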

Robust Extraction of Heartbeat Signals from Mobile Facial Videos

  • 로말리자쟝피에르;박한훈
    • 융합신호처리학회논문지, Vol. 20, No. 1, pp. 51-56, 2019
  • This paper proposes an improved heartbeat signal extraction method for BCG-based heart rate measurement in mobile environments. First, facial features and background features are tracked simultaneously in video of the user's face captured with a mobile camera, extracting a head motion signal from which the effect of hand tremor has been removed. A new method of computing the signal's periodicity is then proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method robustly extracts heartbeat signals from mobile facial videos and measures heart rate more accurately than existing methods, reducing measurement error by 3-4 bpm.
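
The pipeline can be sketched as: subtract the background (camera-shake) trajectory from the head trajectory, then find the dominant frequency in a plausible heart-rate band. A plain FFT peak stands in for the paper's improved periodicity measure; the band limits are assumptions.

```python
import numpy as np

def heart_rate_bpm(head_y, cam_y, fps):
    """Sketch: remove hand tremor by subtracting the background (camera)
    trajectory from the head trajectory, then take the dominant frequency
    in the 0.75-4 Hz band as the pulse. The paper's improved periodicity
    measure is replaced here by a simple FFT peak."""
    sig = np.asarray(head_y, float) - np.asarray(cam_y, float)
    sig -= sig.mean()
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)    # 45-240 bpm
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

In practice `head_y` and `cam_y` would come from feature tracking (e.g., optical flow on facial and background keypoints) over the mobile video.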

Indexing Considering Video Rating of Scenes in Video

  • 김영봉
    • 게임&엔터테인먼트 논문지, Vol. 2, No. 2, pp. 51-60, 2006
  • Streaming videos such as movies, dramas, and music videos have recently spread widely over the web. Existing streaming services are passive about restricting video access by user and apply a uniform restriction to the entire video. This study proposes a method that allows users of various ages to access a video while restricting access to specific scenes. To this end, a histogram technique first divides the video into scenes. Each scene is then assigned an access level based on its explicitness. Finally, video indexing records each scene's access level, and during playback, scenes above the viewer's access level are masked so that the objectionable content is hidden.
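
The playback-time masking step can be sketched from the per-scene index. The scene-tuple layout `(start_frame, end_frame_exclusive, rating)` is an illustrative data format, not the paper's.

```python
def visible_frames(scenes, viewer_rating):
    """Per-frame visibility flags from a per-scene rating index: frames in
    a scene rated above the viewer's level are masked (hidden). Scenes are
    (start_frame, end_frame_exclusive, rating) tuples; hypothetical layout."""
    flags = []
    for start, end, rating in scenes:
        flags.extend([rating <= viewer_rating] * (end - start))
    return flags
```

A player would consult these flags and overlay a mask on frames flagged as hidden instead of skipping them, keeping playback timing intact.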


An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security, Vol. 21, No. 8, pp. 87-96, 2021
  • The demand for e-learning through video lectures is rapidly increasing owing to its advantages over traditional learning methods, which has led to massive volumes of web-based lecture videos. Indexing and retrieving a lecture video, or a topic within one, is therefore an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work uses both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. Text on the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the audio text is extracted using Google Speech Recognition (GSR) technology. This hybrid approach generates indexing keywords from the merged transcripts of the video and audio extractors. Search within the indexed documents is optimized using Naïve Bayes (NB) classification and K-Means clustering: results are retrieved by searching only the relevant document cluster among the predefined categories rather than the whole lecture-video corpus. The work is carried out on a dataset built by assigning categories to lecture-video transcripts gathered from e-learning portals. Search performance is assessed by accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
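
The cluster-restricted search idea can be sketched as follows. This is a toy stand-in for the NB + K-Means models: documents are grouped by category, a centroid is precomputed per category, and a query is ranked only against the documents of its nearest category. Vectors would in practice come from TF-IDF over the merged slide/audio transcripts.

```python
import numpy as np

def build_index(doc_vecs, labels):
    """Group transcript vectors by category and precompute a centroid per
    category, so a query is matched against one cluster only -- the search
    optimization described above (toy stand-in for NB + K-Means)."""
    clusters = {}
    for v, lab in zip(doc_vecs, labels):
        clusters.setdefault(lab, []).append(np.asarray(v, float))
    centroids = {lab: np.mean(vs, axis=0) for lab, vs in clusters.items()}
    return clusters, centroids

def search(query_vec, clusters, centroids):
    """Route the query to the nearest category, then rank only that
    category's documents by distance."""
    q = np.asarray(query_vec, float)
    lab = min(centroids, key=lambda l: np.linalg.norm(q - centroids[l]))
    docs = clusters[lab]
    order = np.argsort([np.linalg.norm(q - d) for d in docs])
    return lab, [docs[i] for i in order]
```

Restricting ranking to one cluster is what trades a small risk of routing error for the reported reduction in search time over the full corpus.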