• Title/Summary/Keyword: Video representation


Human Iris Recognition using Wavelet Transform and Neural Network

  • Cho, Seong-Won;Kim, Jae-Min;Won, Jung-Woo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.3 no.2
    • /
    • pp.178-186
    • /
    • 2003
  • Recently, many researchers have been interested in biometric systems such as fingerprint, handwriting, key-stroke patterns and human iris. From the viewpoint of reliability and robustness, iris recognition is the most attractive biometric system. Moreover, the iris recognition system is a comfortable biometric system, since the video image of an eye can be taken at a distance. In this paper, we discuss human iris recognition, which is based on accurate iris localization, robust feature extraction, and Neural Network classification. The iris region is accurately localized in the eye image using a multiresolution active snake model. For the feature representation, the localized iris image is decomposed using wavelet transform based on dyadic Haar wavelet. Experimental results show the usefulness of wavelet transform in comparison to conventional Gabor transform. In addition, we present a new method for setting initial weight vectors in competitive learning. The proposed initialization method yields better accuracy than the conventional method.
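The dyadic Haar decomposition used above for iris feature representation can be sketched in a few lines. The following is an illustrative one-level 2D transform, not the paper's implementation; the function name and normalization are my own choices:

```python
import numpy as np

def haar_decompose(img):
    """One level of a dyadic 2D Haar wavelet transform.

    Splits an even-sized grayscale image into an approximation (LL)
    sub-band and detail (LH, HL, HH) sub-bands, the kind of
    decomposition applied to the localized iris image.
    """
    img = img.astype(float)
    # Pairwise averages and differences along rows, then columns.
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

On a constant image the three detail sub-bands come out zero, which is a quick sanity check that the averages/differences are wired correctly.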

Chaotic Features for Dynamic Textures Recognition with Group Sparsity Representation

  • Luo, Xinbin;Fu, Shan;Wang, Yong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4556-4572
    • /
    • 2015
  • Dynamic texture (DT) recognition is a challenging problem in numerous applications. In this study, we propose a new algorithm for DT recognition based on a group sparsity structure in conjunction with a chaotic feature vector. A bag-of-words model is used to represent each video as a histogram of chaotic feature vectors, which are proposed to capture the self-similarity property of pixel intensity series. The recognition problem is then cast as a group sparsity model, which can be efficiently optimized through the alternating direction method of multipliers. Experimental results show that the proposed method exhibits the best performance among several well-known DT modeling techniques.
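The bag-of-words step, representing a video as a histogram of quantized feature vectors, can be sketched as follows. The codebook is taken as a given input here; the paper builds its features from chaotic feature vectors, which are outside this sketch:

```python
import numpy as np

def bow_histogram(features, codebook):
    """Quantize feature vectors against a codebook and return a
    normalized bag-of-words histogram (one bin per codeword)."""
    # Euclidean distance from every feature to every codeword.
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)  # index of the nearest codeword
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalize so the histogram sums to 1
```

Each video then becomes a fixed-length vector regardless of its duration, which is what makes the subsequent sparse-coding formulation possible.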

Efficient 3D Scene Labeling using Object Detectors & Location Prior Maps (물체 탐지기와 위치 사전 확률 지도를 이용한 효율적인 3차원 장면 레이블링)

  • Kim, Joo-Hee;Kim, In-Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.11
    • /
    • pp.996-1002
    • /
    • 2015
  • In this paper, we present an effective system for 3D scene labeling of objects from RGB-D videos. Our system uses a Markov Random Field (MRF) over a voxel representation of the 3D scene. In order to estimate the correct label of each voxel, the probabilistic graphical model integrates scores from sliding window-based object detectors with scores from object location prior maps. Both the object detectors and the location prior maps are pre-trained on manually labeled RGB-D images. Additionally, the model incorporates geometric constraints between adjacent voxels into the label estimation. We show excellent experimental results on the RGB-D Scenes Dataset built by the University of Washington, in which each indoor scene contains tabletop objects.
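The unary part of such a model, combining detector scores with location priors per voxel, can be sketched as below. This is a simplified stand-in, not the paper's MRF: the full model adds pairwise terms between adjacent voxels, which are omitted here.

```python
import numpy as np

def label_voxels(detector_scores, prior_maps):
    """Per-voxel label estimate from unary terms only.

    detector_scores, prior_maps: arrays of shape (num_labels, X, Y, Z)
    holding per-label probabilities for each voxel.
    """
    eps = 1e-9  # guard against log(0)
    # Unary energy: negative log of detector score times location prior.
    unary = -np.log(detector_scores + eps) - np.log(prior_maps + eps)
    return unary.argmin(axis=0)  # lowest-energy label per voxel
```

With the pairwise smoothness terms added, the argmin per voxel would be replaced by a joint inference step (e.g. graph cuts or message passing) over the whole voxel grid.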

A Study of Video Coding Based on a Morphological Representation of Wavelet Data (웨이블릿 데이터의 형태적 표현을 적용한 동영상 코딩에 관한 연구)

  • 김혜경;오해석
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.10b
    • /
    • pp.541-543
    • /
    • 2000
  • The number of regions and the contour length are two fundamental constraints in segmentation-based motion-compensated video coding. The coding scheme proposed in this paper focuses on reducing the number of regions, on contour coding, and on compression of the displaced frame difference (DFD). One of the most important features of the proposed scheme is a spatio-temporal simplification algorithm based on morphological filters, with which an image can be partitioned into a small number of regions. Another key property of the scheme is a segmentation-map sampling technique, which reduces the contour length by about 50% in exchange for a very small reconstruction error. Experimental results show very small coding errors at high compression ratios.

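The trade-off behind segmentation-map sampling, a shorter contour description in exchange for a small reconstruction error, can be illustrated with a naive point-subsampling sketch. This is a deliberate simplification of the paper's technique; the function and the step size are my own:

```python
def subsample_contour(contour, step=2):
    """Keep every `step`-th contour point, roughly halving (for
    step=2) the contour description length at the cost of a small
    reconstruction error when the contour is re-interpolated."""
    if len(contour) <= 2:
        return list(contour)
    kept = contour[::step]
    if kept[-1] != contour[-1]:
        kept = kept + [contour[-1]]  # preserve the closing point
    return kept
```

A real coder would subsample adaptively so that high-curvature segments keep more points, but the length-versus-error trade is the same.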

Representation of Spatio-Temporal Relations for Understanding Object Motion in Video (비디오의 객체 움직임 이해를 위한 시공간 관계 표현)

  • Choi, Jun-Ho;Cho, Mi-Young;Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.883-885
    • /
    • 2005
  • One of the elements used for semantic understanding of video data is motion information about objects, which plays an important role in indexing and content-based retrieval of video data. In this paper, we present a spatio-temporal relation representation method for efficient object-based video retrieval and interpretation of motion in video. For object representation, a 3D mesh model of a polygon-based bounding volume is generated and used as the basic structure for the low-level attributes and motion of objects in the video. Moving objects are represented by considering their spatio-temporal and visual characteristics together: each vertex carries part of the visual features, the spatial characteristics and motion of an object are modeled as a volume trajectory, and operations are defined to express spatio-temporal relations between objects.

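A toy version of such a spatio-temporal operation can be sketched over per-frame bounding volumes. Axis-aligned boxes stand in here for the paper's mesh-based volumes, and the relation names are illustrative only:

```python
def boxes_overlap(a, b):
    """Axis-aligned 3D overlap test between two bounding volumes
    given as (xmin, ymin, zmin, xmax, ymax, zmax) tuples."""
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

def relation(traj_a, traj_b):
    """Classify two volume trajectories (lists of per-frame boxes)
    as 'meets' if they overlap in some frame, else 'disjoint'."""
    return ("meets"
            if any(boxes_overlap(a, b) for a, b in zip(traj_a, traj_b))
            else "disjoint")
```

Richer operations in the same spirit would also report *when* the relation holds, yielding the temporal side of the spatio-temporal representation.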

A Study on Efficient Image Processing and CAD-Vision System Interface (효율적인 화상자료 처리와 시각 시스템과 CAD시스템의 인터페이스에 관한 연구)

  • Park, Jin-Woo;Kim, Ki-Dong
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.18 no.2
    • /
    • pp.11-22
    • /
    • 1992
  • Up to now, most research on production automation has concentrated on local automation, e.g., CAD, CAM, and robotics. However, to achieve total automation it is necessary to link local modules such as CAD and CAM into a unified, integrated system. One such missing link is between CAD and the computer vision system, and this paper is an attempt to bridge that gap. We propose algorithms that carry out edge detection, thinning, and pruning on image data of manufactured parts, obtained from a video camera and transmitted to a computer. We also propose feature extraction and surface determination algorithms that extract information from the image data; this information is compatible with IGES CAD data. In addition, we suggest a methodology to reduce the search effort for CAD databases, based on a graph submatching algorithm over the GEFG (Generalized Edge Face Graph) representation of each part.

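The edge-detection stage of such a pipeline can be sketched with a Sobel gradient filter; thinning and pruning would then operate on the resulting binary map. The threshold and function name below are illustrative choices, not taken from the paper:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from Sobel gradient magnitudes.

    Border pixels are left as non-edges for simplicity.
    """
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (win * kx).sum()  # horizontal gradient
            gy[i, j] = (win * ky).sum()  # vertical gradient
    return np.hypot(gx, gy) > thresh
```

On a vertical intensity step, the map fires on the columns adjacent to the step and nowhere in the flat regions, which is the behavior the later thinning step relies on.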

Robust appearance feature learning using pixel-wise discrimination for visual tracking

  • Kim, Minji;Kim, Sungchan
    • ETRI Journal
    • /
    • v.41 no.4
    • /
    • pp.483-493
    • /
    • 2019
  • Considering the high dimensionality of video sequences, it is often challenging to acquire a dataset sufficient to train tracking models. From this perspective, we propose to revisit the idea of hand-crafted feature learning to avoid such a dataset requirement. The proposed tracking approach is composed of two phases, detection and tracking, applied according to how severely the appearance of a target changes. The detection phase addresses severe and rapid variations by learning a new appearance model that classifies pixels into foreground (target) and background. We further combine the raw pixel features of color intensity and spatial location with convolutional feature activations for robust target representation. The tracking phase tracks a target by searching for the frame regions that best agree, at the pixel level, with the model learned in the detection phase. Our two-phase approach results in efficient and accurate tracking, outperforming recent methods in various challenging cases of target appearance change.
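The per-pixel feature construction described above, raw color plus spatial location, can be sketched as follows. The paper additionally appends convolutional feature activations, which are omitted from this sketch:

```python
import numpy as np

def pixel_features(img):
    """Per-pixel feature vectors combining raw color intensity with
    normalized (x, y) location.

    img: array of shape (H, W, C); returns shape (H*W, C+2).
    """
    h, w, c = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize coordinates to [0, 1] so location and color are comparable.
    loc = np.stack([xs / max(w - 1, 1), ys / max(h - 1, 1)], axis=-1)
    feats = np.concatenate([img.astype(float), loc], axis=-1)
    return feats.reshape(-1, c + 2)  # one feature row per pixel
```

A foreground/background classifier trained on these rows then gives the pixel-wise discrimination that the tracking phase matches against.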

Trends and Prospects in Super-realistic Metaverse Visualization Technologies (초실감 메타버스 시각화 기술 동향과 전망)

  • W.S. Youm;C.W. Byun;C.M. Kang;K.J. Kim;Y.D. Kim;D.H. Ahn
    • Electronics and Telecommunications Trends
    • /
    • v.39 no.2
    • /
    • pp.24-32
    • /
    • 2024
  • Wearable metaverse devices have sparked enthusiasm as innovative virtual computing user interfaces by addressing a major source of user discomfort, namely motion-to-photon latency: the delay between a user's motion and the corresponding screen update. To enhance the realism and immersion of experiences on metaverse devices, the vergence-accommodation conflict in stereoscopic image representation must also be resolved. Ongoing research aims to address these challenges by adopting vari-focal, multifocal, and light field display technologies for stereoscopic imaging. We explore current research trends, with emphasis on multifocal stereoscopic imaging. Successful metaverse visualization services require the integration of stereoscopic image rendering modules and content encoding/decoding technologies tailored to these services. Additionally, real-time video processing is essential for these modules to process such content correctly and on time when implementing metaverse visualization services.

A Study on Yi Sang Representation in Media -Focusing on the movie <건축무한육면각체의 비밀> and the drama <이상 그 이상> (영상매체에 형상화 된 시인 '이상' 표상 연구 -영화 <건축무한육면각체의 비밀>, 드라마 <이상 그 이상>을 중심으로)

  • Son, Mi-young
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.4
    • /
    • pp.29-36
    • /
    • 2019
  • Yi Sang's poems and his portrait are used in a variety of video media. Depending on the characteristics of the medium and genre, the representation of the poet and his poems is selected and varied in different ways. In an era when literature communicates through various media, reviewing how a poet's portrait is shaped is also a process of reading what a text wants to convey to the public through a single figure. This study examines how representations of the poet Yi Sang are utilized in various visual media, and compares and analyzes how he is represented in each text, centering on the movie <건축무한육면각체의 비밀> and the drama <이상 그 이상>. In the movie, Yi Sang's poetry is used as a hidden puzzle: the film draws on the popularly known 'genius' representation to track down Yi Sang's secret, and accordingly represents him in a way that is faithful to the conventions of the thriller genre. By comparison, the drama re-imagines him as a young man of inner passion, reading the cynical attitude shown in his texts as a reflection of love for the nation and the times. These differing methods of figuration are worth noting in terms of the public perception of the literary figure 'Yi Sang' and the strategies behind each new attempt at his portrait.

Multimedia Network Teaching System based on SMIL (SMIL을 기반으로 한 멀티미디어 네트워크 교육시스템)

  • Yu, Lei;Cao, Ke-Rang;Bang, Jin-Suk;Cho, Tae-Beom;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.524-527
    • /
    • 2008
  • Recently, digital technology and the Internet have spread throughout the world, and with the development of multimedia processing and information and communication technology, demand for education over the Internet is increasing rapidly. Information can now be used easily, with fewer restrictions of time and space; at the same time, demands for integrating and representing several kinds of multimedia data, such as audio and video, have proliferated. Therefore, in 1998 the W3C presented an international standard, SMIL, to solve multimedia object representation and synchronization problems. Using SMIL, various multimedia elements can be integrated into a single multimedia document with a proper layout in space and time. With SMIL documents, a new kind of Internet radio broadcasting service can be created that delivers not only audio data but also text, images, and video. In the system presented in this paper, teachers can easily create multimedia courseware and broadcast their lectures live over the network, while students receive the teacher's audio-video stream along with screen captures of the teacher's computer. Moreover, students can communicate with the teacher simultaneously through text editor windows, and can also access the courseware on demand after class.

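The synchronization role SMIL plays here, grouping media objects into timing containers, can be illustrated by generating a minimal document with Python's standard XML library. The source file names are placeholders, and a real lecture document would carry more attributes (durations, regions, layout):

```python
import xml.etree.ElementTree as ET

def make_smil(audio_src, video_src):
    """Build a minimal SMIL document that plays an audio stream and
    a video stream in parallel (a <par> timing container)."""
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    par = ET.SubElement(body, "par")  # children of <par> play in parallel
    ET.SubElement(par, "audio", src=audio_src)
    ET.SubElement(par, "video", src=video_src)
    return ET.tostring(smil, encoding="unicode")
```

Sequential playback (e.g. one lecture segment after another) would use a `<seq>` container instead of `<par>`; combining the two is how SMIL expresses the space-time layout described in the abstract.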