• Title/Summary/Keyword: real-time broadcast


Design and Implementation of An MP4 File Streaming System over IP Networks (IP망을 통한 MP4 파일 스트리밍 시스템의 설계 및 구현)

  • 김현철;민승홍;서덕영;김규헌;김진웅
    • Journal of Broadcast Engineering
    • /
    • v.6 no.3
    • /
    • pp.205-214
    • /
    • 2001
  • In this paper, we present an MP4 file streaming system over IP networks. Using the proposed system, a user can access MP4 contents on the servers via IP networks and interact with the contents. The MP4 file format is designed to contain the media information of MPEG-4 and is object-oriented. The presented streaming server system consists of a GUI, a session manager, a splitter, an SL-packetizer, and a transmitter. In addition, we have implemented the client system based on the 2D player of the MPEG-4 reference software. The presented streaming system uses RTP for media data requiring real-time streaming, RTCP for QoS management, and TCP both for data such as the IOD (Initial Object Descriptor), OD (Object Descriptor), and BIFS (Binary Format for Scene), which must be transmitted for the streaming, and for data, such as still images and text, that can be transmitted entirely in a single packet.

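As a rough illustration of the transport split the first abstract describes (RTP for real-time media, RTCP for QoS feedback, TCP for descriptors and fully-packetizable data), the following Python sketch assigns a stream to a transport. The stream categories and the MTU_PAYLOAD threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch: choosing a transport per MPEG-4 stream type, loosely following the
# split described in the abstract above. Categories and threshold are assumptions.

MTU_PAYLOAD = 1400  # assumed maximum payload that fits in one packet

def choose_transport(stream_type, size_bytes=None):
    """Return 'RTP' or 'TCP' for a given stream type."""
    if stream_type in ("video", "audio"):
        return "RTP"                      # continuous media needs real-time delivery
    if stream_type in ("IOD", "OD", "BIFS"):
        return "TCP"                      # scene/object descriptors must arrive reliably
    if size_bytes is not None and size_bytes <= MTU_PAYLOAD:
        return "TCP"                      # still image / text that fits in one packet
    return "RTP"

if __name__ == "__main__":
    for st, sz in [("video", None), ("BIFS", None), ("still_image", 900)]:
        print(st, "->", choose_transport(st, sz))
```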

A differential image quantizer based on wavelet for low bit rate video coding (저비트율 동영상 부호화에 적합한 웨이블릿 기반의 차영상 양자화기)

  • 주수경;유지상
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.473-480
    • /
    • 2003
  • In this paper, we propose a new quadtree coding algorithm that improves the performance of the conventional one. The new algorithm can process frames of any standard size and reduces encoding and decoding time by decreasing the computational load. It also improves image quality compared with previous quantizers based on quadtree and zerotree structures. In order for the new algorithm to be applicable to a real video codec, we analyze the statistical characteristics of the coefficients of the differential image and add a function that handles images of arbitrary size, whereas the old algorithm processes them in block units. We further improve image quality by scaling the coefficient values of the differential image. Comparing the performance of the new algorithm with quadtree coding and SPIHT shows that the PSNR is improved and that the computational load in encoding and decoding is reduced.
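
A minimal sketch of the general idea behind this abstract: wavelet-transform the frame difference, scale the coefficients, and quantize them uniformly. It uses the PyWavelets package; the Haar wavelet, the scale factor, and the quantization step are illustrative assumptions, and the paper's quadtree-based quantizer is not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

def quantize_difference(prev_frame, curr_frame, scale=4.0, step=8):
    """Wavelet-transform the difference image, scale and uniformly quantize it.

    `scale` and `step` are illustrative; the paper's quadtree-based quantizer
    and coefficient-scaling rule are not reproduced here.
    """
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    coeffs = pywt.wavedec2(diff, "haar", level=2)        # 2-level 2D DWT
    arr, slices = pywt.coeffs_to_array(coeffs)           # flatten for quantization
    q = np.round(arr * scale / step)                     # scaled uniform quantization
    rec_arr = q * step / scale                           # dequantize
    rec_coeffs = pywt.array_to_coeffs(rec_arr, slices, output_format="wavedec2")
    rec_diff = pywt.waverec2(rec_coeffs, "haar")
    return q, prev_frame + rec_diff[: diff.shape[0], : diff.shape[1]]

if __name__ == "__main__":
    prev = np.zeros((64, 64), dtype=np.float32)
    curr = prev.copy()
    curr[16:32, 16:32] = 100.0
    q, rec = quantize_difference(prev, curr)
    print("nonzero coefficients:", int(np.count_nonzero(q)))
```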

A Study on Multi-resolution Screen based Conference Broadcasting Technology (멀티 해상도 스크린 기반의 컨퍼런스 중계방송 기술 연구)

  • Kim, Young-ae;Yang, Ji-hee;Park, Goo-man
    • Journal of Broadcast Engineering
    • /
    • v.23 no.2
    • /
    • pp.253-260
    • /
    • 2018
  • Personalized media broadcasting services can produce their own broadcasting contents on a variety of creative themes with nothing more than a transmission platform and devices that can capture the producers' video and audio, without the existing expensive equipment. In this paper, we develop and implement a new broadcasting system by applying this service framework to events such as seminars and academic conferences. Capture devices are installed in each conference room, and the integrated system transmits the streams to users, who can watch on multi-resolution screens such as smartphones, laptops, and tablet PCs. The system has the advantage of providing real-time streaming and VOD services as well as additional information related to the conference. It is expected to let attendees conveniently access this information on their own devices, thereby encouraging participation and providing a base technology for future research.
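
One small piece of such a multi-resolution relay is choosing a rendition that matches each viewer's screen. The sketch below picks the largest rendition that fits the device resolution; the rendition ladder and the selection rule are assumptions for illustration, not the paper's implementation.

```python
# Sketch: picking the rendition that best matches a viewer device's screen.
# The rendition ladder and selection rule are illustrative assumptions.

RENDITIONS = [(3840, 2160), (1920, 1080), (1280, 720), (640, 360)]  # high to low

def pick_rendition(device_width, device_height):
    """Return the largest rendition that does not exceed the device resolution."""
    for w, h in RENDITIONS:
        if w <= device_width and h <= device_height:
            return (w, h)
    return RENDITIONS[-1]                 # fall back to the smallest stream

if __name__ == "__main__":
    print(pick_rendition(1920, 1080))     # laptop -> (1920, 1080)
    print(pick_rendition(1080, 2340))     # portrait phone -> (640, 360) under this rule
```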

Efficient Inference of Image Objects using Semantic Segmentation (시멘틱 세그멘테이션을 활용한 이미지 오브젝트의 효율적인 영역 추론)

  • Lim, Heonyeong;Lee, Yurim;Jee, Minkyu;Go, Myunghyun;Kim, Hakdong;Kim, Wonil
    • Journal of Broadcast Engineering
    • /
    • v.24 no.1
    • /
    • pp.67-76
    • /
    • 2019
  • In this paper, we propose an efficient object classification method based on semantic segmentation for multi-labeled image data. In addition to pixel-level information and processing techniques such as color, contour, contrast, and saturation contained in image data, the detailed region in which each object is located is extracted as a meaningful unit, and experiments are conducted to reflect this result in the inference. We use a neural network that has been proven to perform well in image classification to determine which object is located where in image data containing objects of various classes. Based on this research, we aim to provide artificial intelligence services that can classify, in real time, the detailed regions of complex images containing various objects.
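
A minimal sketch of per-pixel region inference with an off-the-shelf semantic segmentation model (torchvision's DeepLabV3); this model, the preprocessing, and the example image path are assumptions standing in for the network used in the paper.

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Sketch: per-pixel class inference with a pretrained semantic segmentation
# model; a stand-in for the network used in the paper.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(image_path):
    """Return a (H, W) tensor of per-pixel class indices."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)           # add batch dimension
    with torch.no_grad():
        out = model(batch)["out"]                  # (1, num_classes, H, W) logits
    return out.argmax(dim=1).squeeze(0)            # per-pixel argmax -> class map

if __name__ == "__main__":
    classes = segment("example.jpg")               # path is illustrative
    print("detected class ids:", classes.unique().tolist())
```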

Deep Learning Based On-Device Augmented Reality System using Multiple Images (다중영상을 이용한 딥러닝 기반 온디바이스 증강현실 시스템)

  • Jeong, Taehyeon;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.341-350
    • /
    • 2022
  • In this paper, we propose a deep learning based on-device augmented reality (AR) system in which multiple input images are used to implement correct occlusion in a real environment. The proposed system is composed of three technical steps: camera pose estimation, depth estimation, and object augmentation. Each step employs various mobile frameworks to optimize the processing in the on-device environment. First, in the camera pose estimation stage, the massive computation involved in feature extraction is parallelized using OpenCL, a GPU parallelization framework. Next, in depth estimation, monocular and multiple-image-based depth inference is accelerated using the mobile deep learning framework TensorFlow Lite. Finally, object augmentation and occlusion handling are performed on the OpenGL ES mobile graphics framework. The proposed augmented reality system is implemented as an application in the Android environment. We evaluate the performance of the proposed system in terms of augmentation accuracy and processing time in mobile as well as PC environments.
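
The occlusion-handling step can be illustrated with a simple per-pixel depth test: a virtual pixel is drawn only where it is closer than the estimated scene depth. The NumPy sketch below shows that idea only; the actual system performs this on-device with OpenGL ES, and the synthetic depth values here are made up.

```python
import numpy as np

# Sketch: depth-test-based occlusion handling for AR compositing. NumPy stands
# in for the on-device OpenGL ES path; the depth maps below are synthetic.

def composite(camera_rgb, scene_depth, virtual_rgb, virtual_depth):
    """Overlay the virtual object only where it is closer than the real scene."""
    visible = virtual_depth < scene_depth            # per-pixel depth test
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]              # occluded pixels keep the camera image
    return out

if __name__ == "__main__":
    h, w = 4, 4
    camera = np.zeros((h, w, 3), dtype=np.uint8)
    scene_depth = np.full((h, w), 2.0)               # real surface 2 m away
    virtual = np.full((h, w, 3), 255, dtype=np.uint8)
    virtual_depth = np.full((h, w), 3.0)             # object behind the surface
    virtual_depth[:, :2] = 1.0                       # left half in front of it
    result = composite(camera, scene_depth, virtual, virtual_depth)
    print(result[0, 0], result[0, 3])                # visible vs. occluded pixel
```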

A Study on the Use of RDW Data in Virtual Environment (가상환경에서 방향전환보행 데이터 활용 연구)

  • Lim, Yangmi
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.629-637
    • /
    • 2022
  • This study is an experiment on the use of RDW (Redirected Walking) technology, which helps a user who moves while wearing an HMD in the real world barely notice the mismatch between movement in the limited physical space and in the extended virtual space. The RDW function provided in 3D-space authoring software such as Unity3D induces the user's redirection by slightly distorting the virtual space according to the user's point of view. However, if the RDW distortion rate is applied excessively, a sense of dissonance is highly likely to occur; in particular, it is easy to introduce errors that cause cybersickness. It is therefore important to obtain RDW data values for the virtual environment at which the user does not feel fatigue or cybersickness even after wearing the HMD for a long time. In this experiment, we tested whether the user's RDW was implemented safely and obtained item and obstacle arrangement data. The RDW data obtained from the experiment were used for item and obstacle placement in the virtual space.
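
A common way to induce redirection is a rotation gain that slightly amplifies the user's physical head turns in the virtual scene. The sketch below shows that manipulation only; the gain value is an illustrative assumption, not one of the experiment's measured parameters.

```python
# Sketch: a rotation-gain style redirection, one common RDW manipulation.
# The gain value is an illustrative assumption.

ROTATION_GAIN = 1.1   # virtual rotation applied per unit of physical rotation

def redirect_heading(virtual_heading_deg, physical_turn_deg):
    """Map a physical head turn to a (slightly amplified) virtual turn."""
    return (virtual_heading_deg + ROTATION_GAIN * physical_turn_deg) % 360.0

if __name__ == "__main__":
    heading = 0.0
    for turn in [10, 10, -5]:                    # degrees turned by the user
        heading = redirect_heading(heading, turn)
    # A 15-degree net physical rotation yields a 16.5-degree virtual rotation.
    print(round(heading, 2))
```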

MAC-Layer Error Control for Real-Time Broadcasting of MPEG-4 Scalable Video over 3G Networks (3G 네트워크에서 MPEG-4 스케일러블 비디오의 실시간 방송을 위한 실행시간 예측 기반 MAC계층 오류제어)

  • Kang, Kyungtae;Noh, Dong Kun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.63-71
    • /
    • 2014
  • We analyze the execution time of Reed-Solomon coding, the MAC-layer forward error correction scheme used in CDMA2000 1xEV-DO broadcast services, under different air-channel conditions. The results show that the time constraints of MPEG-4 cannot be guaranteed by Reed-Solomon decoding when the packet loss rate (PLR) is high, due to its long computation time on current hardware. To alleviate this problem, we propose three error control schemes. Our static scheme bypasses Reed-Solomon decoding at the mobile node to satisfy the MPEG-4 time constraint when the PLR exceeds a given boundary. Second, our dynamic scheme corrects errors in a best-effort manner within the time constraint, instead of giving up altogether when the PLR is high, which achieves a further quality improvement. Third, our video-aware dynamic scheme corrects errors in a similar way to the dynamic scheme, but in a priority-driven manner that makes the video appear smoother. Extensive simulation results show the effectiveness of our schemes compared with the original FEC scheme.
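
The decision logic of the three schemes, as described in the abstract, can be sketched as follows; the PLR boundary, the time budget, and the block priorities are illustrative assumptions, and the Reed-Solomon codec itself is not implemented.

```python
# Sketch: decision logic of the three MAC-layer error-control schemes described
# above. Threshold and budget values are illustrative assumptions.

PLR_BOUNDARY = 0.2        # assumed packet-loss-rate boundary for the static scheme
TIME_BUDGET_MS = 30.0     # assumed per-frame decoding deadline

def static_scheme(plr):
    """Bypass RS decoding entirely when the PLR exceeds the boundary."""
    return "decode" if plr <= PLR_BOUNDARY else "bypass"

def dynamic_scheme(blocks, time_per_block_ms):
    """Decode as many blocks as fit in the time budget (best effort)."""
    budget, decoded = TIME_BUDGET_MS, []
    for block in blocks:
        if budget < time_per_block_ms:
            break
        decoded.append(block)
        budget -= time_per_block_ms
    return decoded

def video_aware_dynamic_scheme(blocks, priority, time_per_block_ms):
    """Like the dynamic scheme, but decode high-priority blocks first."""
    ordered = [b for _, b in sorted(zip(priority, blocks), key=lambda p: p[0])]
    return dynamic_scheme(ordered, time_per_block_ms)

if __name__ == "__main__":
    print(static_scheme(0.35))                                    # 'bypass'
    print(dynamic_scheme(list(range(8)), time_per_block_ms=7.0))  # first 4 blocks
    print(video_aware_dynamic_scheme([0, 1, 2, 3], [2, 0, 1, 3], 10.0))
```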

MPEG2-TS to RTP Transformation and Application system (MPEG2-TS의 RTP 변환 및 적용 시스템)

  • Im, Sung-Jin;Kim, Ho-Kyom;Hong, Jin-Woo;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.643-645
    • /
    • 2010
  • Internet-based multimedia services such as IPTV are expanding with the development of technologies that support the convergence of broadcasting and telecommunications, and the demand for control technology is growing accordingly. In particular, for real-time TV broadcasting, multicast control technologies that support authentication and resource control, together with technologies that enhance the value of bidirectional services, are expected to be developed. Internet-based transmission systems deliver video content using RTP (Real-time Transport Protocol). The standardization body IETF (Internet Engineering Task Force) defines a separate RTP payload format for each audio and video format, and standardization of the "RTP Payload Format for SVC (Scalable Video Coding) Video" for scalable video content is currently in progress. In this paper, to improve the quality of broadcasting and telecommunication systems, we design and implement a system that converts existing MPEG2-TS content to RTP so that the upper application layer can adapt to the variety of contents and consumer devices, thereby enhancing end-to-end (ETE) QoS (Quality of Service).

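A common way to carry MPEG2-TS over RTP is to pack several fixed-size 188-byte TS packets into each RTP payload. The sketch below builds minimal RTP packets this way; the SSRC, the payload type, and the 7-packets-per-RTP grouping are conventional values used for illustration, not details taken from the paper.

```python
import struct
import time

# Sketch: wrapping fixed-size 188-byte MPEG2-TS packets into RTP packets.
# SSRC and payload type are illustrative; 7 TS packets per RTP payload
# (7 * 188 = 1316 bytes) fits a typical Ethernet MTU.

TS_PACKET_SIZE = 188
TS_PER_RTP = 7
PAYLOAD_TYPE = 33         # MP2T payload type
SSRC = 0x12345678

def build_rtp_packet(seq, timestamp, payload, marker=0):
    """Prepend a 12-byte RTP header (version 2, no CSRC) to the payload."""
    header = struct.pack("!BBHII",
                         0x80,                          # V=2, P=0, X=0, CC=0
                         (marker << 7) | PAYLOAD_TYPE,  # M bit + payload type
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         SSRC)
    return header + payload

def ts_to_rtp(ts_bytes):
    """Split a TS byte stream into RTP packets of up to 7 TS packets each."""
    packets, seq = [], 0
    chunk_size = TS_PACKET_SIZE * TS_PER_RTP
    for off in range(0, len(ts_bytes), chunk_size):
        chunk = ts_bytes[off:off + chunk_size]
        ts = int(time.time() * 90000) & 0xFFFFFFFF      # 90 kHz RTP clock
        packets.append(build_rtp_packet(seq, ts, chunk))
        seq += 1
    return packets

if __name__ == "__main__":
    dummy_ts = bytes(TS_PACKET_SIZE) * 20               # 20 null TS packets
    rtp = ts_to_rtp(dummy_ts)
    print(len(rtp), "RTP packets,", len(rtp[0]), "bytes in the first one")
```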

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.957-964
    • /
    • 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information of objects in real time, their output is noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information. However, these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling. We reduce the texture copying problem of the upsampled depth map by using an edge weighting term chosen according to the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
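
As a greatly simplified stand-in for the TGV-regularized upsampling, the sketch below performs edge-weighted smoothing of a nearest-neighbour-upsampled depth map, with a weight derived from the guidance image's gradients; the weighting function, iteration count, and fidelity parameter are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Sketch: edge-weighted smoothing as a simplified stand-in for TGV-regularized
# depth upsampling. The guidance-edge weight and parameters are illustrative.

def upsample_depth(low_depth, guide_gray, factor=4, iters=200, lam=0.1):
    """Upsample `low_depth` by `factor`, discouraging smoothing across guide edges."""
    depth = np.kron(low_depth, np.ones((factor, factor)))      # nearest-neighbour init
    # Edge weights from the high-resolution guidance image: small weight at strong edges.
    gy, gx = np.gradient(guide_gray.astype(np.float32))
    weight = np.exp(-(gx ** 2 + gy ** 2) / 10.0)
    data = depth.copy()                                         # data (fidelity) term
    for _ in range(iters):
        # Average of the four neighbours, modulated by the edge weight.
        nb = 0.25 * (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
                     np.roll(depth, 1, 1) + np.roll(depth, -1, 1))
        depth = (lam * data + weight * nb) / (lam + weight)
    return depth

if __name__ == "__main__":
    low = np.array([[1.0, 1.0], [1.0, 3.0]])
    guide = np.kron(np.array([[0, 0], [0, 255]]), np.ones((4, 4)))  # sharp edge in guide
    print(upsample_depth(low, guide, factor=4).round(2))
```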

An Efficient Parallelization Implementation of PU-level ME for Fast HEVC Encoding (고속 HEVC 부호화를 위한 효율적인 PU레벨 움직임예측 병렬화 구현)

  • Park, Soobin;Choi, Kiho;Park, Sang-Hyo;Jang, Euee Seon
    • Journal of Broadcast Engineering
    • /
    • v.18 no.2
    • /
    • pp.178-184
    • /
    • 2013
  • In this paper, we propose an efficient parallelization technique for PU-level motion estimation (ME) in the next-generation video coding standard, high efficiency video coding (HEVC), to reduce the time complexity of video encoding. It is difficult to encode video in real time because ME accounts for a significant portion of the complexity (about 80 percent of the encoder). To solve this problem, various techniques have been studied, among them parallelization, which must be carefully considered in algorithm-level ME design. In this regard, a merge estimation method using the merge estimation region (MER), which enables ME to be designed in parallel, has been proposed; however, MER-based parallel ME still has unresolved problems for an ideal implementation in the HEVC test model (HM). Therefore, we propose two strategies to implement stable parallel ME using the MER in HM. Experimental results show the effectiveness of the proposed methods; the encoding time with the proposed method is reduced by 25.64 percent on average compared with that of HM using sequential ME.
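
The core idea of MER-based parallel ME can be sketched by grouping PUs according to the MER they fall in and estimating each group concurrently; the MER size, the PU list, and the dummy ME routine below are illustrative assumptions, not the paper's two strategies.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: grouping prediction units (PUs) by their merge estimation region (MER)
# so PUs in different MERs can be motion-estimated concurrently. MER size,
# PU list, and the placeholder ME routine are illustrative assumptions.

MER_SIZE = 32   # assumed MER width/height in pixels

def mer_index(pu):
    """Return the MER a PU belongs to, based on its top-left position."""
    return (pu["x"] // MER_SIZE, pu["y"] // MER_SIZE)

def motion_estimate(pu):
    """Placeholder ME: a real encoder would search reference frames here."""
    return {"pu": (pu["x"], pu["y"]), "mv": (0, 0)}

def parallel_me(pus):
    """Run ME for the PUs of each MER group in a separate worker."""
    groups = {}
    for pu in pus:
        groups.setdefault(mer_index(pu), []).append(pu)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda g: [motion_estimate(p) for p in g],
                                groups.values()))
    return [r for group in results for r in group]

if __name__ == "__main__":
    pus = [{"x": x, "y": y} for x in (0, 16, 32) for y in (0, 32)]
    print(len(parallel_me(pus)), "PUs processed in",
          len({mer_index(p) for p in pus}), "MER groups")
```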