• Title/Summary/Keyword: Video-conference


Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi; Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.709-713 / 2009
  • Recently, a great deal of research in the fields of Life-Log and sensor networks has applied data acquired from devices such as cameras and RFIDs to context-aware services. Because video and audio contain a far larger volume of information than other sensor data, a variety of analytical techniques has been proposed to recognize information from the raw data. However, these techniques generally rely on supervised learning, so updating a class or adding a new one has required manually re-watching huge amounts of media data to create supervised data. As a result, most applications could use only recognition functions based on fixed supervised data. We therefore propose a method of acquiring supervised data from a video sharing site on which users comment on individual video scenes; such sites are remarkably popular, so many comments are generated. In the first step of the method, words with a high utility value are extracted by filtering the comments on a video. Second, a set of time-series feature data is calculated by applying various feature-extraction functions to the media data. Finally, our learning system computes correlation coefficients between these two kinds of data and stores them in the system's database. By applying the stored correlation coefficients to new media data, other applications gain a recognition function built on the collective intelligence of Web comments. In addition, regularly acquiring and learning both media data and comments from the video sharing site enables flexible recognition that adapts to new objects while reducing manual work. As a result, the method recognizes not only the name of the object seen but also indirect information such as the impression it gives or the action taken toward it.

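Below is a minimal sketch of the correlation step described in the abstract above: given per-scene occurrence counts of one filtered comment word and a per-scene matrix of extracted media features, compute the Pearson correlation of the word with every feature. All names and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correlate_word_with_features(word_counts, feature_series):
    """word_counts: (n_scenes,) occurrences of one filtered comment word per scene.
    feature_series: (n_scenes, n_features) time series of extracted media features.
    Returns the Pearson correlation of the word with every feature column."""
    x = word_counts - word_counts.mean()
    F = feature_series - feature_series.mean(axis=0)
    denom = np.linalg.norm(x) * np.linalg.norm(F, axis=0)
    denom[denom == 0] = np.finfo(float).eps  # guard against constant columns
    return (F.T @ x) / denom

# toy example: 6 scenes, 3 hypothetical features; the word appears mostly late in the video
word_counts = np.array([0, 0, 1, 2, 3, 3], dtype=float)
features = np.random.rand(6, 3)
print(correlate_word_with_features(word_counts, features))
```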

Complexity and Performance Analysis of SVC(Scalable Video Coding) Encoder Models for T-DMB/AT-DMB Video Service (T-DMB/AT-DMB 비디오 서비스를 위한 스케일러블 부호화기 모델에 따른 복잡도 및 성능 분석)

  • Kim, Kyu-Seok; Kim, Pil-Joong; Kim, Jin-Soo; Lee, Si-Woong; Kim, Jae-Gon; Choi, Hae-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.123-126 / 2007
  • This paper presents an SVC (Scalable Video Coding) scheme that enables the AT-DMB (Advanced Terrestrial DMB) video service in the enhancement layer while keeping the current T-DMB video service in the base layer. However, the SVC encoder is very complicated to implement, so it is necessary to analyze the complexity and performance of different SVC encoder structures and coding parameters. In this paper, SVC coding parameters are tested through computer simulations, and based on these results, three types of SVC encoder models are compared in terms of complexity and performance.

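The comparison the abstract describes rests on two measurements per encoder model: objective quality and encoding complexity. The sketch below shows one conventional way to obtain them, PSNR between a reference and a reconstructed frame plus wall-clock encoding time; the encoder call is a hypothetical stand-in, and none of this comes from the paper's actual test setup.

```python
import time
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def time_encoder(encode_fn, frames):
    """Wall-clock seconds spent in one encoder model; encode_fn is a stand-in."""
    start = time.perf_counter()
    encode_fn(frames)
    return time.perf_counter() - start

# toy usage with a dummy "encoder" that just copies the frames
frames = [np.zeros((288, 352), dtype=np.uint8) for _ in range(30)]
print("encode time:", time_encoder(lambda f: [fr.copy() for fr in f], frames), "s")
print("psnr of identical frames:", psnr(frames[0], frames[0]))
```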

A network-adaptive SVC Streaming Architecture

  • Chen, Peng; Lim, Jeong-Yeon; Lee, Bum-Shik; Kim, Mun-Churl; Hahm, Sang-Jin; Kim, Byung-Sun; Lee, Keun-Sik; Park, Keun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.257-260 / 2006
  • In a video streaming environment, we must consider terminal and network characteristics such as display resolution, frame rate, computational resources, and network bandwidth. The JVT (Joint Video Team) of ISO/IEC MPEG and ITU-T VCEG is currently standardizing Scalable Video Coding (SVC), which represents a video bitstream in different scalable layers for flexible adaptation to terminal and network characteristics. This property is very useful in video streaming applications: from one fully scalable video, a bitstream can be extracted at a specific target spatial resolution, temporal frame rate, and quality level to match the requirements of terminals and networks. Moreover, the extraction process is fast and consumes few computational resources, so partial video bitstreams can be extracted online to accommodate changing network conditions. Exploiting these advantages of SVC, we design and implement a network-adaptive SVC streaming system with an SVC extractor and a streamer that extract an appropriate amount of the bitstream to meet the required target bitrates and spatial resolutions. The proposed SVC extraction allows flexible online switching from layer to layer in SVC bitstreams to cope with changes in network bandwidth, and the extraction is performed for every GOP. We present the implementation of our SVC streaming system together with experimental results.

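A minimal sketch of the GOP-wise adaptation idea described above: keep a table of extractable SVC layers with their cumulative bitrates and, for each GOP, pick the highest layer that fits the currently measured bandwidth. The layer table, bitrates, and bandwidth value are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class SvcLayer:
    spatial: str        # e.g. "QCIF", "CIF"
    fps: int            # temporal resolution
    bitrate_kbps: int   # cumulative bitrate of the bitstream up to this layer

LAYERS = [  # ordered from lowest to highest quality
    SvcLayer("QCIF", 15, 128),
    SvcLayer("CIF", 15, 384),
    SvcLayer("CIF", 30, 768),
]

def select_layer_for_gop(measured_bandwidth_kbps, layers=LAYERS):
    """Highest layer whose bitrate fits the measured bandwidth (fallback: lowest layer)."""
    feasible = [layer for layer in layers if layer.bitrate_kbps <= measured_bandwidth_kbps]
    return feasible[-1] if feasible else layers[0]

print(select_layer_for_gop(500))   # -> the CIF@15fps layer in this toy table
```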

GreedyUCB1 based Monte-Carlo Tree Search for General Video Game Playing Artificial Intelligence (일반 비디오 게임 플레이 인공지능을 위한 GreedyUCB1기반 몬테카를로 트리 탐색)

  • Park, Hyunsoo; Kim, HyunTae; Kim, KyungJoong
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.572-577 / 2015
  • Existing Artificial Intelligence (AI) systems have generally been designed for specific purposes, and their capabilities cover only specific problems. Artificial General Intelligence, by contrast, aims to solve new problems as well as those that are already known. Recently, General Video Game Playing, the game-AI counterpart of Artificial General Intelligence, has attracted considerable interest in the game AI community. Although it is concerned only with video games, designing a single AI capable of playing many different video games is not easy. In this paper, we propose a GreedyUCB1 algorithm and a rollout method for Monte-Carlo Tree Search game AI, formulated using knowledge obtained from game analysis. An AI that used our method was ranked fourth at the GVG-AI (General Video Game-Artificial Intelligence) competition of the IEEE CIG (Computational Intelligence in Games) conference in 2014.
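
For readers unfamiliar with the selection step of MCTS, the sketch below shows the standard UCB1 rule with an optional heuristic bonus term. The bonus is only an illustration of how game knowledge might bias selection; it is not necessarily how the paper's GreedyUCB1 is defined.

```python
import math

def ucb1_select(children, exploration=math.sqrt(2), heuristic_weight=0.0):
    """children: list of dicts with 'visits', 'total_reward' and optional 'heuristic' keys."""
    total_visits = sum(c["visits"] for c in children) or 1

    def score(c):
        if c["visits"] == 0:
            return float("inf")          # always expand unvisited children first
        exploit = c["total_reward"] / c["visits"]
        explore = exploration * math.sqrt(math.log(total_visits) / c["visits"])
        return exploit + explore + heuristic_weight * c.get("heuristic", 0.0)

    return max(children, key=score)

# toy usage: two children, one well explored, one promising according to the heuristic
children = [
    {"visits": 10, "total_reward": 6.0, "heuristic": 0.2},
    {"visits": 3, "total_reward": 2.5, "heuristic": 0.9},
]
print(ucb1_select(children, heuristic_weight=0.5))
```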

Influence of UHD(Ultra High Definition) Video Technology on the Documentary Production Process (UHD(Ultra High Definition)영상기술이 다큐멘터리 제작과정에 미치는 영향)

  • Jeon, Min-gyu; Choi, Won-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.513-515 / 2014
  • Ultra High Definition (hereafter UHD), a next-generation video technology, has emerged in response to the long-standing demand of viewers and creators for a greater sense of reality. As history shows, a change of medium brings changes both in content and in the point of view from which the world is observed, so the emergence of UHD can likewise be expected to call for its own production techniques. This paper therefore studies the changes that UHD video technology brings to the production process, focusing on the documentary among video content. It also analyzes how well UHD screen technology such as high resolution fits the documentary's purpose of conveying accuracy, and how documentary video production is changing. In addition, it examines issues expected to arise, such as the choice of shooting material, the heavier post-production work, and the backup of enormous amounts of data. From this, we derive the developmental effects that UHD video technology brings to documentary production, with the aim of contributing to content production that advances together with the technology.


A Research on the Method of Automatic Metadata Generation of Video Media for Improvement of Video Recommendation Service (영상 추천 서비스의 개선을 위한 영상 미디어의 메타데이터 자동생성 방법에 대한 연구)

  • You, Yeon-Hwi; Park, Hyo-Gyeong; Yong, Sung-Jung; Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.281-283 / 2021
  • The companies most often mentioned for recommendation services in the domestic OTT (over-the-top media service) market are YouTube and Netflix. YouTube began personalized recommendation in earnest in 2016 by introducing machine-learning algorithms that record and use users' viewing time, among various other signals. Netflix categorizes users by collecting information such as the videos a user selects, the time of day they watch, and the device they watch on, and groups people with similar viewing patterns together; it uses both the information collected from users and the tag information attached to each video. In this paper, we propose a method to improve video media recommendation by automatically generating the metadata of video media that has so far been written by hand.

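One plausible building block for generating tag-style metadata automatically from whatever text accompanies a video (title, description, subtitles) is TF-IDF keyword extraction, sketched below with scikit-learn. This is an illustrative stand-in under that assumption, not the method proposed in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_keywords(documents, top_k=5):
    """Return the top_k highest-TF-IDF terms of each document as candidate metadata tags."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(documents)
    terms = vectorizer.get_feature_names_out()
    tags = []
    for row in tfidf.toarray():
        ranked = row.argsort()[::-1][:top_k]
        tags.append([terms[i] for i in ranked if row[i] > 0])
    return tags

print(top_keywords([
    "a documentary about deep sea creatures and ocean exploration",
    "highlights from the championship basketball game",
]))
```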

A Video Conference System using Multimedia Data and Screen Sharing Methods (멀티미디어 데이터 및 화면 공유 기법을 활용한 화상 회의 시스템)

  • Ko Kwang-San; Jang Jung-Soo; Jung Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.5 / pp.1012-1018 / 2005
  • As the computer industry develops and the Internet spreads, video conference systems that let remote users exchange multimedia data such as voice in real time, in addition to simple text, have appeared and are used in various fields. However, most systems developed to date have the drawback of requiring expensive multipoint connection equipment or high-bandwidth network resources. In this paper, we therefore design and implement a video conference system that runs as software, without expensive connection equipment, and that provides a stable, high-quality multimedia and screen-sharing environment while using minimal network resources.

Design and Implementation for Multi-User Interface Video Conference System (다자간 화상회의 시스템의 설계 및 구현)

  • Joo, Heon-Sik; Lee, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.13 no.1 / pp.153-160 / 2008
  • This paper shows how to maximize data flow by using a weighted bipartite graph matching scheme. The scheme models data transmissions as edges and finds the maximum data flow between the server and the clients. The proposed weighted bipartite graph matching scheme is used to implement a multi-user video conference system. By sending the maximum amount of data to the server and having each client receive the maximum amount of data, discontinuities in the motion-image frames, bottlenecks, and broken images are prevented. Experiments show performance about two times better than that of the previous flow control.

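A minimal sketch of maximum-weight bipartite matching applied to stream assignment, in the spirit of the abstract above: rows are sender slots, columns are receiving clients, weights are assumed achievable throughputs, and SciPy's assignment solver finds the matching with the maximum total flow. The throughput numbers are toy values, not measured data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# toy throughput estimates (kbps): rows are sender slots, columns are clients
throughput_kbps = np.array([
    [900, 300, 450],
    [250, 800, 500],
    [400, 350, 700],
])

rows, cols = linear_sum_assignment(throughput_kbps, maximize=True)
for r, c in zip(rows, cols):
    print(f"slot {r} -> client {c}: {throughput_kbps[r, c]} kbps")
print("total matched throughput:", throughput_kbps[rows, cols].sum(), "kbps")
```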

Speaker Detection System for Video Conference (영상회의를 위한 화자 검출 시스템)

  • Lee, Byung-Sun; Ko, Sung-Won; Kwon, Heak-Bong
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.17 no.5 / pp.68-79 / 2003
  • In this paper, we propose a system that detects the current speaker in a multi-speaker video conference by using lip motion. First, the system detects each speaker's face and lip area using face color and shape information. Then, to detect the current speaker, it calculates the change in the lip area between the current frame and the previous frame. The system uses two CCD cameras: one is a general CCD camera, the other a PTZ camera controlled through an RS-232C serial port. The result is a system capable of detecting the face of the current speaker in a video feed with more than three people, regardless of the orientation of their faces. With this system, it takes only 4 to 5 seconds to zoom in on the speaker from the initial image. It is also a more efficient image-transmission system for applications such as video conferencing and Internet broadcasting, because it offers the face area at a resolution of 320×240 while also providing the whole background scene.
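
A minimal sketch of the frame-differencing idea in the abstract: the participant whose lip region changes most between consecutive frames is taken as the current speaker. Face and lip localization are assumed to have been done already; the inputs, threshold, and toy data are hypothetical.

```python
import numpy as np

def current_speaker(prev_lip_crops, curr_lip_crops, threshold=5.0):
    """Each argument is a list of grayscale lip-region arrays, one per participant.
    Returns the index of the most active mouth, or None if nobody exceeds the threshold."""
    scores = [np.mean(np.abs(curr.astype(float) - prev.astype(float)))
              for prev, curr in zip(prev_lip_crops, curr_lip_crops)]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None

# toy usage: only the second participant's lip region changes between frames
prev = [np.zeros((20, 30)), np.zeros((20, 30))]
curr = [np.zeros((20, 30)), np.full((20, 30), 40.0)]
print(current_speaker(prev, curr))   # -> 1
```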

Construction and Evaluation of Agent Knowledge for Improving Flexibility in Videoconference System (화상회의 시스템의 유연성 개선을 위한 에이전트 지식 구성 및 평가)

  • Lee Sung-Doke; Kang Sang-Gil
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.605-614 / 2005
  • In this paper, we present the design and implementation of agent knowledge and a QoS tuning methodology to improve the flexibility of an agent-based flexible video-conference system. To improve flexibility during video conferencing, we propose a new T-INTER (Tuning-INTER) architecture for the knowledge part of the video-conference manager (VCM) agent, in which an automatic QoS parameter tuning method is embedded. A flexible video-conference system built on the proposed architecture can cope with changes in the service quality required by users. The VCM agent cooperates with other agents through protocols and executes the automatic QoS parameter tuning task whenever needed. With the tuned parameters, the system can flexibly cope with internal and external changes, and the burden on users is reduced. The experimental section shows that the proposed system outperforms the existing system.
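
A minimal sketch of the kind of automatic QoS parameter tuning loop the abstract describes: nudge frame rate and encoder quality according to measured packet loss and spare bandwidth. The parameters, thresholds, and step sizes are illustrative assumptions, not values from the T-INTER architecture.

```python
from dataclasses import dataclass

@dataclass
class QosParams:
    frame_rate: int = 30   # frames per second
    quality: int = 80      # 0..100 encoder quality setting

def tune(params, packet_loss, spare_bandwidth_kbps):
    """One tuning step: back off under loss, recover slowly when there is headroom."""
    if packet_loss > 0.05:
        params.frame_rate = max(10, params.frame_rate - 5)
        params.quality = max(30, params.quality - 10)
    elif spare_bandwidth_kbps > 200:
        params.frame_rate = min(30, params.frame_rate + 2)
        params.quality = min(95, params.quality + 5)
    return params

print(tune(QosParams(), packet_loss=0.08, spare_bandwidth_kbps=0))
```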