• Title/Summary/Keyword: Semantic Compression


An Optimized Iterative Semantic Compression Algorithm And Parallel Processing for Large Scale Data

  • Jin, Ran; Chen, Gang; Tung, Anthony K.H.; Shou, Lidan; Ooi, Beng Chin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2761-2781 / 2018
  • With the continuous growth of data sizes and the widespread use of compression technology, data reduction has great research value and practical significance. Addressing the shortcomings of existing semantic compression algorithms, this paper builds on an analysis of the ItCompress algorithm and designs a bidirectional order-selection method based on interval partitioning, named the Optimized Iterative Semantic Compression Algorithm (Optimized ItCompress Algorithm). To further improve the speed of the algorithm, we propose a parallel optimized iterative semantic compression algorithm using the GPU (POICAG) and an optimized iterative semantic compression algorithm using Spark (DOICAS). Extensive experiments on four kinds of datasets verify the efficiency of the proposed algorithms.
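
The ItCompress baseline this abstract builds on can be pictured as a k-medoids-style iteration: pick representative rows, assign every row to the representative it matches best within a per-column tolerance, and store only the mismatching cells. The sketch below is a minimal illustration of that baseline idea, not the paper's optimized bidirectional interval-partitioning variant; the matching rule, the update step, and the encoding are simplifying assumptions.

```python
import random

def matches(value, rep_value, tolerance=0):
    """A cell 'matches' its representative if it is close enough."""
    if isinstance(value, (int, float)) and isinstance(rep_value, (int, float)):
        return abs(value - rep_value) <= tolerance
    return value == rep_value

def row_score(row, rep, tolerance):
    return sum(matches(v, rep[i], tolerance) for i, v in enumerate(row))

def assign(rows, reps, tolerance):
    return [max(range(len(reps)), key=lambda j: row_score(row, reps[j], tolerance))
            for row in rows]

def compress(rows, k, tolerance=0, iterations=5):
    reps = random.sample(rows, k)
    for _ in range(iterations):
        groups = {}
        for row, j in zip(rows, assign(rows, reps, tolerance)):
            groups.setdefault(j, []).append(row)
        # Update: each representative becomes the member row that best
        # covers its own group (a k-medoids-style step).
        for j, group in groups.items():
            reps[j] = max(group, key=lambda cand: sum(
                row_score(row, cand, tolerance) for row in group))
    # Encode: store (rep id, match bitmap, outlying cell values) per row.
    encoded = []
    for row, j in zip(rows, assign(rows, reps, tolerance)):
        bitmap = [matches(v, reps[j][i], tolerance) for i, v in enumerate(row)]
        encoded.append((j, bitmap, [v for v, m in zip(row, bitmap) if not m]))
    return reps, encoded
```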

Error Concealment Based on Semantic Prioritization with Hardware-Based Face Tracking

  • Lee, Jae-Beom; Park, Ju-Hyun; Lee, Hyuk-Jae; Lee, Woo-Chan
    • ETRI Journal / v.26 no.6 / pp.535-544 / 2004
  • With video compression standards such as MPEG-4, a transmission error occurs on a video-packet basis rather than a macroblock basis. In this context, we propose a semantic error-prioritization method that determines the size of a video packet based on the importance of its contents. Video packets are kept short for important areas, such as facial regions, in order to reduce the possibility of error accumulation. To facilitate the semantic error prioritization, an efficient hardware algorithm for face tracking is proposed. The increase in hardware complexity is minimal because the motion estimation engine is efficiently reused for face tracking. Experimental results demonstrate that the facial area is well protected by the proposed scheme.
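
The packet-sizing rule described above can be illustrated in a few lines of Python: packets covering face macroblocks are closed earlier, so a single lost packet over a face corrupts fewer macroblocks. This is a hedged sketch under assumed size limits, not the paper's MPEG-4 implementation.

```python
def packetize(macroblocks, is_face, normal_limit=1024, face_limit=256):
    """macroblocks: list of encoded MB byte strings; is_face: parallel bools."""
    packets, current, size = [], [], 0
    for mb, face in zip(macroblocks, is_face):
        limit = face_limit if face else normal_limit
        if current and size + len(mb) > limit:
            packets.append(b"".join(current))  # a resync marker would go here
            current, size = [], 0
        current.append(mb)
        size += len(mb)
    if current:
        packets.append(b"".join(current))
    return packets
```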


A Study on Residual U-Net for Semantic Segmentation based on Deep Learning

  • Shin, Seokyong; Lee, SangHun; Han, HyunHo
    • Journal of Digital Convergence / v.19 no.6 / pp.251-258 / 2021
  • In this paper, we propose an encoder-decoder model that uses residual learning to improve the accuracy of U-Net-based semantic segmentation. U-Net is a deep learning-based semantic segmentation method mainly used in applications such as autonomous driving and medical image analysis. The conventional U-Net loses features during the compression process because of its shallow encoder; this loss deprives the classifier of the context information needed to distinguish objects and reduces segmentation accuracy. To address this, the proposed method extracts context information efficiently through an encoder built on residual learning, which is effective in preventing the feature loss and gradient vanishing problems of the conventional U-Net. Furthermore, we reduce the number of down-sampling operations in the encoder to limit the loss of spatial information in the feature maps. In experiments on the Cityscapes dataset, the proposed method improved segmentation results by about 12% compared to the conventional U-Net.
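
For readers unfamiliar with residual learning in an encoder, a minimal PyTorch block of the kind the abstract describes looks like the following: convolutions with an identity shortcut so features (and gradients) survive compression. Channel sizes and layer counts are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualEncoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Usage: one block of a U-Net-style encoder, before a (reduced) downsampling step.
x = torch.randn(1, 64, 128, 128)
y = ResidualEncoderBlock(64, 128)(x)  # -> shape (1, 128, 128, 128)
```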

A Study on Digital Video Library Development for Semantic-Sensitive Retrieval

  • Jang, Sang-Hyun; Lim, Seok-Jong
    • Journal of Information Management / v.37 no.4 / pp.93-104 / 2006
  • With the advancement of the internet and video compression technology, demand for video has increased and a large quantity of user-created content (UCC) has been produced. Semantic-sensitive retrieval, and the construction of digital video libraries to support it, is therefore in greater demand than ever. However, automatically categorizing and labeling scenes in arbitrary video so that a desired scene can be found is extremely difficult. This study proposes a method to extract particular scenes and analyze video content, and presents experimental results for categorizing five kinds of sports news (soccer, baseball, golf, basketball, and volleyball).

A Digital Image Watermarking Using Region Segmentation

  • Park, Min-Chul; Han, Suk-Ki
    • Proceedings of the IEEK Conference / 2002.07b / pp.1260-1263 / 2002
  • This paper applies region segmentation from image processing and semantic importance from image analysis to digital image watermarking. The semantic importance of an object region, segmented by specific features, is determined according to the contents of the region. Here, face images are the targets of watermarking because of their increasing importance, frequency of use, and strong need for protection. A face region is detected and segmented as an object region, and encoded watermark information is embedded into that region. Experiments employing a masking and filtering method show that the proposed method remains useful even under heavy compression and image synthesis, as cases of copyright infringement.
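
To make the region-restricted idea concrete, the sketch below embeds watermark bits only inside a detected face box. Plain LSB substitution is used here purely as a stand-in to show the mechanism; the paper itself employs a masking-and-filtering method, which is more robust to compression.

```python
import numpy as np

def embed_watermark(image, face_box, bits):
    """image: HxW uint8 array; face_box: (top, left, height, width); bits: 0/1 list."""
    top, left, h, w = face_box
    out = image.copy()
    region = out[top:top + h, left:left + w].copy().reshape(-1)
    n = min(len(bits), region.size)
    # Overwrite the least-significant bit of each pixel in the face region only.
    region[:n] = (region[:n] & 0xFE) | np.asarray(bits[:n], dtype=np.uint8)
    out[top:top + h, left:left + w] = region.reshape(h, w)
    return out

def extract_watermark(image, face_box, n_bits):
    top, left, h, w = face_box
    return (image[top:top + h, left:left + w].reshape(-1)[:n_bits] & 1).tolist()
```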


Graph Compression by Identifying Recurring Subgraphs

  • Ahmed, Muhammad Ejaz; Lee, JeongHoon; Na, Inhyuk; Son, Sam; Han, Wook-Shin
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.816-819 / 2017
  • Current graph mining algorithms suffer from performance issues when querying patterns in increasingly massive network graphs. From our observation, however, most data graphs inherently contain recurring semantic subgraphs/substructures. Most graph mining algorithms treat these as independent subgraphs and perform computations on them redundantly, which degrades performance when processing massive graphs. In this paper, we propose an algorithm that exploits these inherent recurring subgraphs/substructures to reduce graph sizes, so that the redundant computations performed by traditional graph mining algorithms are reduced. Experimental results show that our graph compression approach achieves up to a 69% reduction in graph size on real datasets. Moreover, the time required to construct the compressed graphs is also reasonable.
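
The dictionary mechanism behind this kind of compression can be sketched on the simplest recurring substructure, a labeled "star" (a node plus its neighbors' labels): identical stars are stored once and each occurrence becomes a reference. Real systems canonicalize larger subgraphs; the signature choice here is an illustrative assumption, not the paper's algorithm.

```python
def compress_stars(labels, adj):
    """labels: node -> label; adj: node -> iterable of neighbor nodes."""
    patterns, pattern_ids, encoded = {}, {}, {}
    for node, nbrs in adj.items():
        # Canonical signature of this node's labeled neighborhood.
        sig = (labels[node], tuple(sorted(labels[n] for n in nbrs)))
        if sig not in pattern_ids:
            pattern_ids[sig] = len(pattern_ids)
            patterns[pattern_ids[sig]] = sig
        encoded[node] = pattern_ids[sig]
    return patterns, encoded  # decoding also needs the concrete neighbor ids

g_labels = {1: "A", 2: "B", 3: "B", 4: "A"}
g_adj = {1: [2, 3], 4: [2, 3], 2: [1, 4], 3: [1, 4]}
patterns, encoded = compress_stars(g_labels, g_adj)
# nodes 1 and 4 share pattern 0; nodes 2 and 3 share pattern 1
```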

A Study on the Extraction of the dynamic objects using temporal continuity and motion in the Video

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.115-121 / 2016
  • Extracting semantic objects from video has recently become an important problem, as such objects are useful for improving the performance of video compression and video retrieval. This thesis proposes an automatic method for extracting the moving objects of interest in a video. We define a moving object of interest as one that is relatively large in a frame image, occurs frequently within a scene, and has motion different from the camera motion. Moving objects of interest are determined through spatio-temporal continuity using the AMOS method and a motion histogram. In experiments with diverse scenes, the proposed method extracted almost all of the objects of interest selected by the user, but its precision was 69% because of over-extraction.
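
A rough sketch of the selection criteria above (large in the frame, recurring across the scene) using simple frame differencing follows; the thresholds, and frame differencing itself as the motion test, are assumptions standing in for the AMOS-based pipeline.

```python
import numpy as np

def candidate_mask(prev_frame, frame, diff_thresh=25):
    """Pixels that moved between consecutive grayscale frames."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh

def object_of_interest(frames, area_ratio=0.05, freq_ratio=0.5):
    hits, union = 0, np.zeros(frames[0].shape, dtype=bool)
    for prev, cur in zip(frames, frames[1:]):
        mask = candidate_mask(prev, cur)
        if mask.mean() >= area_ratio:      # "relatively large in a frame"
            hits += 1
            union |= mask
    if hits / max(len(frames) - 1, 1) >= freq_ratio:  # "occurs frequently"
        return union
    return None
```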

A study on the quality scalable coding of selected region

  • 김욱중; 이종원; 김성대
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.9A / pp.2325-2332 / 1998
  • In this paper, quality scalable coding of a selected region is presented. If a region is semantically more important than the others, the image compression scheme should be able to handle this regional semantic difference, because information loss in the region of interest is more severe. We propose quality scalable coding, together with a model for it, by introducing a quality scale parameter; this is a more extended and generalized image compression philosophy than conventional coding. As an implementation of the proposed quality scalable coding, an H.263-based scheme is presented. This scheme can control temporal and spatial quality efficiently and improves the reconstructed image quality of the region of interest.
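
The quality-scale idea can be reduced to a mapping from a regional quality factor to a per-macroblock quantization parameter, as in the sketch below. The exact mapping is an illustrative assumption rather than the paper's H.263-based scheme, though the QP range shown (1..31) is H.263's.

```python
def macroblock_qp(base_qp, in_roi, quality_scale=0.5):
    """quality_scale in (0, 1]: smaller -> finer quantization inside the ROI."""
    qp = base_qp * quality_scale if in_roi else base_qp
    return max(1, min(31, round(qp)))  # clamp to H.263's QP range 1..31

# ROI macroblocks at base_qp=20 get QP 10; background macroblocks stay at 20.
qps = [macroblock_qp(20, in_roi) for in_roi in (True, False, True)]
```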


Provenance Compression Scheme Considering RDF Graph Patterns

  • Bok, Kyoungsoo; Han, Jieun; Noh, Yeonwoo; Yook, Misun; Lim, Jongtae; Lee, Seok-Hee; Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.16 no.2 / pp.374-386 / 2016
  • Provenance is metadata that represents the history or lineage of data in collaborative storage environments. As provenance accrues over time, it can grow to several tens of times the size of the original data, so schemes for efficiently compressing huge amounts of provenance are required. In this paper, we propose a provenance compression scheme that considers RDF graph patterns. The proposed scheme represents provenance based on the standard PROV model and encodes it as numeric data through text encoding. We then compress the provenance and RDF data using graph patterns. Unlike conventional provenance compression techniques, we compress provenance with RDF documents on the semantic web in mind. To show the superiority of the proposed scheme, we compare it with an existing scheme in terms of compression ratio and processing time.
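
The text-encoding step is essentially dictionary encoding of RDF terms; a minimal sketch follows. The PROV terms in the example are hypothetical, and the subsequent graph-pattern compression stage is omitted here.

```python
def encode_triples(triples):
    """Map every distinct RDF term to a numeric id, turning each
    (subject, predicate, object) triple into a compact integer triple."""
    dictionary, ids, encoded = {}, [], []
    for s, p, o in triples:
        row = []
        for term in (s, p, o):
            if term not in dictionary:
                dictionary[term] = len(dictionary)
                ids.append(term)
            row.append(dictionary[term])
        encoded.append(tuple(row))
    return ids, encoded  # ids[i] decodes id i back to its term

prov = [("ex:report", "prov:wasGeneratedBy", "ex:run1"),
        ("ex:run1", "prov:used", "ex:dataset"),
        ("ex:report", "prov:wasAttributedTo", "ex:alice")]
ids, encoded = encode_triples(prov)  # e.g. encoded[0] == (0, 1, 2)
```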

Modified Pyramid Scene Parsing Network with Deep Learning based Multi Scale Attention

  • Kim, Jun-Hyeok; Lee, Sang-Hun; Han, Hyun-Ho
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.45-51 / 2021
  • With the development of deep learning, semantic segmentation methods are being studied in various fields, but segmentation accuracy drops in fields that require precision, such as medical image analysis. In this paper, we improve PSPNet, a deep learning-based segmentation method, to minimize the loss of features during semantic segmentation. Conventional deep learning-based segmentation methods lower the resolution and lose object features during feature extraction and compression; because of these losses, edge and internal object information disappears and segmentation accuracy falls. To solve these problems, we add the proposed multi-scale attention to the conventional PSPNet to prevent the loss of object features. A feature-refinement step applies the attention method to the conventional PPM module; by suppressing unnecessary feature information, edge and texture information is improved. The proposed method was trained on the Cityscapes dataset, with MIoU as the quantitative segmentation metric. In the experiments, segmentation accuracy improved by about 1.5% compared to the conventional PSPNet.
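
A PyTorch sketch of "attention applied to the PPM" is given below: each pooled branch of a pyramid pooling module is gated by a simple channel-attention weight before upsampling and concatenation. The branch sizes and the squeeze-and-excitation-style gate are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePPM(nn.Module):
    def __init__(self, in_ch, branch_ch=64, bins=(1, 2, 3, 6)):
        super().__init__()
        self.bins = bins
        self.convs = nn.ModuleList(nn.Conv2d(in_ch, branch_ch, 1) for _ in bins)
        # Channel gate per branch suppresses uninformative feature channels.
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Linear(branch_ch, branch_ch), nn.Sigmoid()) for _ in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        outs = [x]
        for bin_size, conv, gate in zip(self.bins, self.convs, self.gates):
            y = conv(F.adaptive_avg_pool2d(x, bin_size))
            weight = gate(y.mean(dim=(2, 3)))        # (N, branch_ch)
            y = y * weight[:, :, None, None]         # channel attention
            outs.append(F.interpolate(y, size=(h, w), mode="bilinear",
                                      align_corners=False))
        return torch.cat(outs, dim=1)

feat = torch.randn(1, 256, 32, 32)
fused = AttentivePPM(256)(feat)   # -> shape (1, 256 + 4*64, 32, 32)
```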