• Title/Summary/Keyword: codec

Search Results: 694

Design and Implementation of HDFS Data Encryption Scheme Using ARIA Algorithms on Hadoop (하둡 상에서 ARIA 알고리즘을 이용한 HDFS 데이터 암호화 기법의 설계 및 구현)

  • Song, Youngho; Shin, YoungSung; Chang, Jae-Woo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.2 / pp.33-40 / 2016
  • Due to the growth of social network services (SNS), big data has emerged, and Hadoop was developed as a distributed platform for analyzing it. Enterprises use Hadoop to analyze data containing users' sensitive information and utilize the results for marketing. Therefore, research on data encryption has been conducted to prevent the leakage of sensitive data stored in Hadoop. However, existing work supports only the AES algorithm, the international standard for data encryption, while the Korean government has adopted the ARIA algorithm as its standard. In this paper, we propose an HDFS data encryption scheme using the ARIA algorithm on Hadoop. First, the proposed scheme provides an HDFS block-splitting component that performs ARIA encryption and decryption under Hadoop's distributed computing environment. Second, it provides a variable-length data processing component that performs encryption and decryption by adding dummy data when the last block contains fewer than 128 bits. Finally, performance analysis shows that the proposed scheme can be used effectively for both text-string processing applications and scientific data analysis applications.
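The variable-length handling described above, padding the last block with dummy data when it is shorter than 128 bits, can be sketched as follows. This is a minimal sketch: the PKCS#7-style length marker and the helper names are assumptions, and the ARIA cipher itself is omitted.

```python
BLOCK_SIZE = 16  # ARIA, like AES, operates on 128-bit (16-byte) blocks

def pad_last_block(data: bytes, block_size: int = BLOCK_SIZE) -> bytes:
    """Append dummy bytes so the final block is a full 128 bits.
    Each dummy byte records how many were added (PKCS#7-style),
    so decryption can strip them again."""
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len] * pad_len)

def unpad_last_block(padded: bytes) -> bytes:
    """Strip the dummy bytes recorded by the last padding byte."""
    return padded[:-padded[-1]]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Split padded data into fixed-size blocks ready for block-cipher
    encryption (the cipher step itself is not shown here)."""
    padded = pad_last_block(data, block_size)
    return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]
```

Note that even data already aligned to 16 bytes receives a full padding block, so unpadding is always unambiguous.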

Video Compression Standard Prediction using Attention-based Bidirectional LSTM (어텐션 알고리듬 기반 양방향성 LSTM을 이용한 동영상의 압축 표준 예측)

  • Kim, Sangmin; Park, Bumjun; Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.5 / pp.870-878 / 2019
  • In this paper, we propose an attention-based BLSTM for predicting the video compression standard of a video. Recently in NLP, much research has used RNN structures to predict the next word in a sentence and to classify and translate sentences by their semantics, and the results have been commercialized as chatbots, AI speakers, translator applications, and so on. LSTM was designed to solve the vanishing-gradient problem in RNNs and is widely used in NLP. The proposed algorithm makes video compression standard prediction possible by applying BLSTM and an attention mechanism, which focuses on the most important word in a sentence, to the bitstream of a video rather than to a natural-language sentence.
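The attention step the abstract refers to, scoring each timestep of the BLSTM output and pooling them into one context vector, can be sketched in plain Python. This is a generic dot-product attention sketch, not the paper's exact formulation; the function names and the query vector are assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Dot-product attention over per-timestep BLSTM outputs: score each
    hidden state against a query vector, normalize the scores with
    softmax, and return the weighted sum as the context vector."""
    scores = [sum(h * q for h, q in zip(state, query)) for state in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * s[d] for w, s in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights
```

The weights show which part of the bitstream the classifier attends to, which is exactly the "most important word" intuition carried over from NLP.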

Supporting ROI transmission of 3D Point Cloud Data based on 3D Manifesto (3차원 Manifesto 기반 3D Point Cloud Data의 ROI 전송 지원 방안)

  • Im, Jiehon; Kim, Junsik; Rhyu, Sungryeul; Kim, Hoejung; Kim, Sang IL; Kim, Kyuheon
    • Journal of the Semiconductor & Display Technology / v.17 no.4 / pp.21-26 / 2018
  • Recently, the emergence of 3D cameras, 3D scanners, and various sensors including LiDAR is expected to enable applications such as AR, VR, and autonomous vehicles that deal with 3D data. In particular, 3D point cloud data, consisting of tens of thousands to hundreds of thousands of 3D points, grows rapidly in size compared with 2D data, so efficient encoding/decoding technology for smooth service within limited bandwidth, and service techniques that differentiate the region of interest from the surrounding area, are needed. In this paper, we propose a new quality parameter that considers the characteristics of 3D point clouds, instead of the quality control of the underlying video codec assumed in MPEG V-PCC (used for 3D point cloud compression); a 3D grid division method and representation for selectively transmitting 3D point clouds according to the user's region of interest; and a new 3D Manifesto. Using the proposed technique, bitstreams can be generated at various bitrates, and we confirm that the efficiency of the network, decoder, and renderer can be increased by transmitting selectively as needed.
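The 3D grid division and ROI selection described above can be sketched as follows. This is an illustrative sketch under assumed names; the paper's actual grid parameters and Manifesto signaling are not shown.

```python
def grid_index(point, cell_size):
    """Map an (x, y, z) point to the index of its 3D grid cell."""
    return tuple(int(c // cell_size) for c in point)

def divide_into_grid(points, cell_size):
    """Bucket every point of the cloud into its grid cell."""
    cells = {}
    for p in points:
        cells.setdefault(grid_index(p, cell_size), []).append(p)
    return cells

def select_roi(cells, roi_cells):
    """Keep only the cells intersecting the user's region of interest,
    so only those need to be transmitted at full quality."""
    return {idx: pts for idx, pts in cells.items() if idx in roi_cells}
```

Cells outside the ROI could then be dropped or sent at a coarser quality level, which is where a per-cell quality parameter comes in.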

A Study on the Criteria for Digitization of Records (기록의 디지털화 기준에 관한 연구)

  • Lim, Nayoung; Nam, Youngjoon
    • Journal of the Korean BIBLIA Society for library and Information Science / v.30 no.3 / pp.5-30 / 2019
  • The purpose of this study is to suggest improvements to the digitization criteria for records so that the contents and properties of original records can be faithfully reproduced, complementing the problems and deficiencies of "NAK 26:2018(v2.0) Digitization Criteria for Records". This study proposes technical improvements to be applied to the digitization process for records not produced as digital files, by comparing and analyzing the Korean criteria for the digitization of records with overseas digitization criteria, guidelines, and recommendations. In addition, the validity of this study was verified by interviewing experts from record-related institutions. As a result, a final set of improved criteria for the digitization of records is suggested, including the application of uncompressed or lossless codecs and appropriate values for resolution by record type, audio channels, frame rates, scan methods, and microform types.

Characteristic Analysis for Compression of Digital Hologram (디지털 홀로그램의 압축을 위한 특성 분석)

  • Kim, Jin-Kyum; Kim, Kyung-Jin; Kim, Woo-Suk; Lee, Yoon-Huck; Oh, Kwan-Jung; Kim, Jin-Woong; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.24 no.1 / pp.164-181 / 2019
  • This paper introduces the analysis and development of digital holographic codec technology to compress hologram data effectively. First, the generation methods and data characteristics of the hologram standard data sets provided by JPEG Pleno are introduced. We analyze energy compaction for each hologram generation method using the discrete wavelet transform and the discrete cosine transform. Quantization efficiency is analyzed for each generation method by applying uniform and non-uniform quantization. Through these transform and quantization experiments, we propose transform and quantization methods suited to each hologram generation method. Finally, holograms are compressed using standard codecs such as JPEG, JPEG2000, AVC/H.264, and HEVC/H.265, and the results are analyzed.
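The uniform versus non-uniform quantization comparison mentioned above can be illustrated with a small sketch. Mu-law companding stands in here as one common non-uniform scheme; the paper's actual quantizer design and parameters are not specified in the abstract, so everything below is an assumed example.

```python
import math

def uniform_quantize(values, step):
    """Uniform quantization: every interval of width `step` maps to one index."""
    return [round(v / step) for v in values]

def uniform_dequantize(indices, step):
    return [i * step for i in indices]

def mu_law_quantize(values, mu=255.0, half_levels=128):
    """Non-uniform quantization (mu-law companding as an example):
    values are assumed normalized to [-1, 1]; small amplitudes get
    finer quantization steps than large ones."""
    out = []
    for v in values:
        compressed = math.copysign(math.log1p(mu * abs(v)) / math.log1p(mu), v)
        out.append(round(compressed * half_levels))
    return out
```

For coefficient distributions concentrated near zero, as transform coefficients usually are, the non-uniform quantizer spends its levels where the data actually lives.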

Deep Learning based Raw Audio Signal Bandwidth Extension System (딥러닝 기반 음향 신호 대역 확장 시스템)

  • Kim, Yun-Su; Seok, Jong-Won
    • Journal of IKEEE / v.24 no.4 / pp.1122-1128 / 2020
  • Bandwidth extension refers to restoring a narrowband signal (NB), degraded in the encoding and decoding process by limited channel capacity or by the characteristics of the codec installed in a mobile communication device, and converting it into a wideband signal (WB). Bandwidth extension research has mainly focused on speech signals and on frequency-domain methods such as SBR (Spectral Band Replication) and IGF (Intelligent Gap Filling), which restore missing or damaged high bands based on complex feature-extraction processes. In this paper, we propose a model that outputs a bandwidth-extended signal based on an autoencoder, using residual connections of one-dimensional convolutional neural networks (CNNs); the bandwidth is extended by inputting a time-domain signal of fixed length without complicated pre-processing. In addition, it was confirmed that the damaged high band can be restored even when training on a dataset containing various types of sound sources, including music, not limited to speech.
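The residual connection of a 1-D convolution, the core building block the abstract names, can be sketched in pure Python. This is a single untrained block for illustration only; the paper's layer counts, kernel sizes, and learned weights are not given in the abstract.

```python
def conv1d_same(signal, kernel):
    """'Same'-padded 1-D convolution (zeros at the borders), so the
    output has the same length as the input."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def residual_block(signal, kernel):
    """y = x + Conv(x): the skip connection means the convolution only
    has to model the missing high-band detail, not the whole signal."""
    return [x + c for x, c in zip(signal, conv1d_same(signal, kernel))]
```

With an all-zero kernel the block is the identity, which is why residual networks are easy to train: the model starts from "pass the narrowband signal through" and learns only the correction.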

Selective Encryption and Decryption Method for IVC Codec (IVC 코덱을 위한 선택적 암호화 및 복호화 방법)

  • Lee, Min Ku; Kim, Kyu-Tae; Jang, Euee S.
    • Journal of Broadcast Engineering / v.25 no.6 / pp.1013-1016 / 2020
  • This paper presents a selective encryption and decryption method exploiting the start code of the IVC bitstream. Existing video encryption methods fall largely into two classes: the Naive Encryption Algorithm (NEA) and the Selective Encryption Algorithm (SEA). Since NEA encrypts the entire bitstream, it offers high security but at high computational cost. SEA improves encryption and decryption speed over NEA by encrypting only part of the bitstream, but its security is relatively low. The proposed method improves both the speed and the security of existing SEA by using the start code of the IVC bitstream. Experiments show that the proposed method reduces encryption time by 96% and decryption time by 98% on average compared to NEA.
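The general idea of start-code-driven selective encryption can be sketched as follows. This is an assumed illustration, not the paper's method: the three-byte start-code prefix, the fixed payload length, and the keystream XOR stand in for the real IVC syntax and cipher.

```python
START_CODE = b"\x00\x00\x01"  # assumed start-code prefix for illustration

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder keystream XOR; a real system would use a proper cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def selective_encrypt(bitstream: bytes, key: bytes, payload_len: int = 16) -> bytes:
    """Encrypt only `payload_len` bytes after each start code, leaving
    the start codes themselves searchable in the encrypted stream."""
    out = bytearray(bitstream)
    pos = bitstream.find(START_CODE)
    while pos != -1:
        begin = pos + len(START_CODE)
        end = min(begin + payload_len, len(bitstream))
        out[begin:end] = xor_cipher(bytes(out[begin:end]), key)
        pos = bitstream.find(START_CODE, end)
    return bytes(out)
```

Because XOR is its own inverse, the same routine decrypts; only a small fraction of the stream is touched, which is where the speed advantage over full-stream encryption comes from.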

Real-Time Copyright Security Scheme of Immersive Content based on HEVC (HEVC 기반의 실감형 콘텐츠 실시간 저작권 보호 기법)

  • Yun, Chang Seob; Jun, Jae Hyun; Kim, Sung Ho; Kim, Dae Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.1 / pp.27-34 / 2021
  • In this paper, we propose a copyright protection scheme for real-time streaming of HEVC (High Efficiency Video Coding)-based immersive content. Previous research uses encryption and modular operations for copyright pre-protection and post-protection, which causes delays for ultra-high-resolution video. The proposed scheme maximizes parallelism by using thread-pool-based DRM (Digital Rights Management) packaging restricted to HEVC's CABAC (Context-Adaptive Binary Arithmetic Coding) coder, together with GPU-based high-speed bit operations (XOR), enabling real-time copyright protection. Compared with previous research at three resolutions, PSNR was on average 8 times higher and processing speed differed by an average factor of 18. In addition, in robustness tests of the forensic mark, recompression attacks showed a 27-fold difference, while the filter and noise attacks, the largest and smallest cases, showed an 8-fold difference.
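The thread-pool parallelism over XOR bit operations described above can be sketched with Python's standard `concurrent.futures`. This is a CPU-side illustration of the chunked, parallel XOR idea only; the paper's GPU kernels and CABAC-aware packaging are not reproduced.

```python
from concurrent.futures import ThreadPoolExecutor

def xor_chunk(chunk: bytes, key: bytes) -> bytes:
    """XOR one chunk of the bitstream against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(chunk))

def parallel_xor(data: bytes, key: bytes, workers: int = 4,
                 chunk_size: int = 1 << 16) -> bytes:
    """Split the bitstream into chunks and XOR them in a thread pool.
    Each chunk restarts the keystream, so applying the same function
    twice with the same chunking restores the original data."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(lambda c: xor_chunk(c, key), chunks))
```

Chunking is what makes the operation embarrassingly parallel: every chunk is independent, so the same layout maps naturally onto GPU threads.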

SPIHT-based Subband Division Compression Method for High-resolution Image Compression (고해상도 영상 압축을 위한 SPIHT 기반의 부대역 분할 압축 방법)

  • Kim, Woosuk; Park, Byung-Seo; Oh, Kwan-Jung; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.2 / pp.198-206 / 2022
  • This paper proposes a method to solve problems that can occur when SPIHT (Set Partitioning in Hierarchical Trees) is used in a dedicated codec for compressing ultra-high-resolution complex holograms. Codec development for complex holograms can be divided largely into building dedicated compression methods and using anchor codecs such as HEVC and JPEG2000 with added post-processing techniques. Creating a dedicated compression method requires a separate transform tool to analyze the spatial characteristics of complex holograms. Subband-level zero-tree algorithms such as EZW and SPIHT have the problem that, when coding high-resolution images, subband information is not transmitted intact during bitstream control. This paper proposes dividing the wavelet subbands to solve this problem. By compressing each divided subband separately, information is kept uniform across the subbands. The proposed method showed better restoration results in terms of PSNR than the existing method.
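The subband division step can be sketched as a simple tiling of a 2-D coefficient array. This is an assumed illustration of the splitting idea only; the wavelet transform and the SPIHT coder applied to each tile are omitted, and the tiling geometry is a placeholder.

```python
def split_subband(subband, tiles_h, tiles_w):
    """Divide a 2-D wavelet subband (list of rows) into equal tiles so
    each tile can be zero-tree coded and rate-controlled independently,
    keeping information uniform across the whole subband."""
    h, w = len(subband), len(subband[0])
    th, tw = h // tiles_h, w // tiles_w
    return [[row[tx * tw:(tx + 1) * tw]
             for row in subband[ty * th:(ty + 1) * th]]
            for ty in range(tiles_h) for tx in range(tiles_w)]
```

Because each tile gets its own bit budget, truncating the bitstream degrades all regions evenly instead of dropping whole subbands.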

Compression of DNN Integer Weight using Video Encoder (비디오 인코더를 통한 딥러닝 모델의 정수 가중치 압축)

  • Kim, Seunghwan; Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.778-789 / 2021
  • Recently, various lightweighting methods have emerged for using Convolutional Neural Network (CNN) models on mobile devices. Weight quantization, which lowers the bit precision of weights, is a lightweighting method that enables a model to run with integer arithmetic in mobile environments where GPU acceleration is unavailable. It has already been applied to various models to reduce computational complexity and model size with a small loss of accuracy. Considering memory size and computing speed, as well as device storage and limited network environments, this paper proposes compressing the integer weights after quantization using a video codec. To verify the performance of the proposed method, experiments were conducted on VGG16, ResNet50, and ResNet18 models trained on the ImageNet and Places365 datasets. As a result, an accuracy loss of less than 2% and high compression efficiency were achieved across the various models. In addition, comparison with similar compression methods verified that the compression efficiency was more than doubled.
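The preprocessing step such a pipeline needs, quantizing float weights to 8-bit integers and laying them out as a picture a video encoder can compress, can be sketched as follows. The affine quantizer and the row-major frame packing are assumptions for illustration; the paper's actual mapping and the video encoding itself are not shown.

```python
def weights_to_frame(weights, width):
    """Affinely quantize float weights to 8-bit integers and arrange
    them into a 2-D 'frame' that a video encoder could treat as a
    grayscale picture. Returns the frame plus (lo, scale) needed to
    dequantize on the decoder side."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0      # avoid div-by-zero for constant weights
    q = [round((w - lo) / scale) for w in weights]
    q += [0] * (-len(q) % width)          # pad the last row to full width
    return [q[i:i + width] for i in range(0, len(q), width)], lo, scale
```

Neighboring weights in a trained layer are often correlated, which is what lets an intra-frame video codec find compressible structure in such a frame.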