• Title/Summary/Keyword: Compress Map

Search Results: 15

Pyramid Feature Compression with Inter-Level Feature Restoration-Prediction Network (계층 간 특징 복원-예측 네트워크를 통한 피라미드 특징 압축)

  • Kim, Minsub;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.3
    • /
    • pp.283-294
    • /
    • 2022
  • The feature maps used in deep neural networks generally contain more data than the input image, so a compression rate higher than that of image compression is required to transmit them. This paper proposes a method for transmitting, at a high compression rate, the pyramid feature map used in networks with an FPN structure, which is robust to object size in deep-learning-based image processing. To compress the pyramid feature map efficiently, this paper proposes a structure that transmits only some levels of the pyramid, predicts the untransmitted levels from the transmitted ones through the proposed prediction network, and restores compression damage through the proposed reconstruction network. Measured by object detection mAP on the COCO 2017 Train images, the proposed method improved BD-rate on the rate-precision curve by 31.25% over compressing the feature map with VTM 12.0, and by 57.79% over compression with PCA and DeepCABAC.
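The inter-level idea — transmit only a coarse pyramid level and predict the finer ones at the decoder — can be illustrated with a toy sketch. The paper uses learned prediction and restoration networks; the nearest-neighbor upsampling and the level names below are illustrative stand-ins, not the actual method:

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbor 2x upsampling over a (C, H, W) feature map; a crude
    # stand-in for the paper's learned inter-level prediction network.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def predict_finer_levels(p5, num_levels=2):
    """Predict untransmitted finer pyramid levels (e.g. P4, P3) from a
    single transmitted coarse level P5, so only P5 needs to be coded."""
    levels, cur = [], p5
    for _ in range(num_levels):
        cur = upsample2x(cur)
        levels.append(cur)
    return levels  # [P4-like, P3-like]
```

With a 256-channel 16x16 P5, this yields 32x32 and 64x64 predictions; in the paper a reconstruction network would then repair both prediction and compression error.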

Object Detection Network Feature Map Compression using CompressAI (CompressAI 를 활용한 객체 검출 네트워크 피쳐 맵 압축)

  • Do, Jihoon;Lee, Jooyoung;Kim, Younhee;Choi, Jin Soo;Jeong, Se Yoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.7-9
    • /
    • 2021
  • This paper proposes a neural-network-based method for compressing feature maps extracted in the course of the object detection networks supported by Detectron2 [1]. To this end, the compression network of bmshj2018-hyperprior, one of the models in CompressAI [2], an open-source software package for neural-network-based image compression, was trained to compress the feature map extracted from the stem layer of the task network. In addition, a method is proposed for adjusting the interpolation size of the object detection network's input image so that the width and height of the compression network's input feature map become multiples of 64. The proposed neural-network-based feature map compression method shows a large performance gain over compressing the feature map with the recently finalized next-generation compression standard VVC (Versatile Video Coding, [3]), and performs comparably to the VCM anchor.
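The multiple-of-64 constraint on the feature map can be met by choosing the input interpolation size accordingly. A minimal sketch, assuming the stem layer downsamples by a factor of 4 (as in Detectron2's ResNet stem); the function name and defaults are illustrative assumptions:

```python
import math

def padded_input_size(h, w, stem_stride=4, multiple=64):
    # The stem feature map has size (h/stem_stride, w/stem_stride), so the
    # input must be resized to a multiple of stem_stride * multiple for the
    # feature map's width and height to be multiples of 64.
    unit = stem_stride * multiple
    return math.ceil(h / unit) * unit, math.ceil(w / unit) * unit
```

For example, a 480x640 input would be interpolated up to 512x768, giving a 128x192 stem feature map, both dimensions multiples of 64.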


A study on compression and decompression of hanguel and chinese character bit map font (한글 한자 비트 맵 폰트의 압축과 복원에 관한연구)

  • 조경윤
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.4
    • /
    • pp.63-71
    • /
    • 1996
  • In this paper, a variable-length block code for real-time compression and decompression of Hangul and Chinese character bitmap fonts is proposed. The proposed code shows a good compression ratio on complete-form Hangul Myoungjo and Godik style and Chinese Batang and Doddum style bitmap fonts. In addition, a compression and decompression ASIC was designed and simulated on CAD. A 0.8-micron CMOS sea of gates is used to implement the ASIC in about 5,200 gates; it requires only simple hardware and compresses and decompresses at up to 33 Mbit/s, which is well suited to real-time applications.
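The flavor of such a variable-length block code can be sketched as follows: all-zero and all-one blocks (common in a bitmap font's background and stroke interiors) get one-symbol codes, while mixed blocks are stored literally. This is an illustrative simplification, not the paper's exact code:

```python
def compress_font_row(bits, block=8):
    # bits: list of 0/1 pixels from one bitmap-font scanline.
    codes = []
    for i in range(0, len(bits), block):
        chunk = tuple(bits[i:i + block])
        if all(b == 0 for b in chunk):
            codes.append(("Z", len(chunk)))  # short code: all-background block
        elif all(b == 1 for b in chunk):
            codes.append(("O", len(chunk)))  # short code: all-stroke block
        else:
            codes.append(("L", chunk))       # literal: mixed block
    return codes

def decompress_font_row(codes):
    bits = []
    for tag, payload in codes:
        if tag == "Z":
            bits.extend([0] * payload)
        elif tag == "O":
            bits.extend([1] * payload)
        else:
            bits.extend(payload)
    return bits
```

Both directions are a single pass over fixed-size blocks, which is why this style of code maps well onto simple real-time hardware.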


Hybrid Polyline Simplification for GIS Vector Map Data Compression (GIS 벡터맵 데이터 압축을 위한 혼합형 폴리라인 단순화)

  • Im, Dae-Yeop;Jang, Bong-Joo;Lee, Suk-Hwan;Kwon, Seong-Geun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.4
    • /
    • pp.418-429
    • /
    • 2013
  • This paper presents a GIS vector map data compression scheme based on a hybrid polyline simplification method and SEC (spatial energy compaction). The proposed method extracts all layers that contain polylines in the GIS vector map and compresses all polylines in the extracted layers by hybrid polyline simplification and SEC based on the MAE (minimum area error) of each segment in the line. The proposed simplification and SEC increase the compression ratio while preserving shape quality. We analyze the visual quality and compression efficiency of the compressed map against the original GIS vector map. Experimental results verify that our method achieves higher compression efficiency and visual quality than conventional methods.
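The minimum-area-error intuition can be sketched with a Visvalingam-style pass: repeatedly drop the interior vertex whose triangle with its two neighbors has the smallest area. The paper's hybrid method and SEC coding are more involved; this shows only the area-error criterion:

```python
def triangle_area(a, b, c):
    # Area of the triangle formed by a vertex and its two neighbors --
    # the shape error introduced by removing the middle vertex.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_polyline(points, target):
    # Remove the interior vertex contributing the minimum area error,
    # repeating until only `target` vertices remain; endpoints are kept.
    pts = list(points)
    while len(pts) > target:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        del pts[1 + areas.index(min(areas))]
    return pts
```

A collinear middle point has zero area error, so it is removed first without changing the polyline's shape at all.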

Block Truncation Coding using Reduction Method of Chrominance Data for Color Image Compression (색차 데이터 축소 기법을 사용한 BTC (Block Truncation Coding) 컬러 이미지 압축)

  • Cho, Moon-Ki;Yoon, Yung-Sup
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.49 no.3
    • /
    • pp.30-36
    • /
    • 2012
  • Block truncation coding (BTC) is known as a simple and efficient image compression algorithm. In this paper, we propose the RMC-BTC algorithm (RMC: reduction method of chrominance data) for color image compression. To compress the chrominance data, in every BTC block the RMC-BTC coding represents the chrominance by its block average and reuses the luminance bit-map in place of a separate chrominance bit-map. Experimental results show the efficiency of the proposed algorithm in PSNR and compression ratio compared with the conventional BTC method.
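The underlying BTC step can be sketched for a single block: threshold at the block mean to obtain a bit-map, then reconstruct with two levels chosen to preserve the block's mean and standard deviation. (The paper's RMC variant additionally replaces per-pixel chrominance with a block average and reuses the luminance bit-map; that part is not shown here.)

```python
import numpy as np

def btc_encode(block):
    # Classic BTC: a bit-map from thresholding at the mean, plus two
    # reconstruction levels that preserve block mean and variance.
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                 # flat block: one level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)
```

Each block thus costs one bit per pixel plus two levels, which is what makes BTC attractive as a low-complexity baseline.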

Lossless Compression and Rendering of Multiple Layer Displacement Map (다층 변위 맵의 비손실 압축과 렌더링)

  • Chun, Young-Jae;Kim, Hae-Dong;Cho, Sung-Hyun
    • Journal of Korea Game Society
    • /
    • v.9 no.6
    • /
    • pp.171-178
    • /
    • 2009
  • Multiple layer displacement mapping methods can represent more complex and general geometries that single layer displacement mapping methods cannot, and provide realistic scenes for digital contents such as 3D games and movies at relatively low cost. However, as more layers are used for detail, more data space is wasted because lower layers hold less displacement data than higher layers. In this paper, we suggest a lossless compression and rendering method for a multiple layer displacement map. Since the map is compressed without data loss, the proposed method provides the same quality as rendering with the original multiple layer displacement map.
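One lossless way to avoid wasting space on sparsely filled layers is to store a per-texel layer count plus only the valid displacement values. This is a generic jagged-array packing sketch under that assumption, not the paper's specific format:

```python
import numpy as np

def pack_layers(dmap):
    # dmap: (H, W, L) displacement map; np.nan marks unused layer slots.
    # Assumes valid entries occupy the lowest slots of each texel.
    # Store only valid entries plus a per-texel count -- lossless, with
    # no space spent on empty slots in sparsely filled layers.
    mask = ~np.isnan(dmap)
    counts = mask.sum(axis=2)
    return counts, dmap[mask]          # valid values in texel order

def unpack_layers(counts, values, num_layers):
    h, w = counts.shape
    out = np.full((h, w, num_layers), np.nan)
    idx = 0
    for y in range(h):
        for x in range(w):
            c = counts[y, x]
            out[y, x, :c] = values[idx:idx + c]
            idx += c
    return out
```

Because nothing is discarded, unpacking reproduces the original map exactly, matching the paper's claim that rendering quality is identical to the uncompressed map.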


Effective Compression Technique for Secure Transmission and Storage of GIS Digital Map (GIS 디지털 맵의 안전한 전송 및 저장을 위한 효율적인 압축 기법)

  • Jang, Bong-Joo;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.2
    • /
    • pp.210-218
    • /
    • 2011
  • Generally, a GIS digital map is represented and transmitted in ASCII or binary data form. Of these, the binary form has been widely used in many GIS application fields for the transmission of mass map data. In this paper, we present a hierarchical compression technique for polyline and polygon components for effective storage and transmission of vector maps at various degrees of precision. These are the core geometric components that represent the main layers in a vector map. The proposed technique first performs energy compaction of all polyline and polygon components in the spatial domain for lossless compression of the detailed vector map, and compresses the integer parts and fraction parts of the 64-bit floating-point coordinates independently. Experimental results confirm that the proposed technique outperforms the conventional data compressors 7z, zip, rar, and gz.
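Splitting 64-bit coordinates into integer and fraction streams before compression can be sketched as below; the split is exactly invertible as long as the fraction is computed against the same floor, and the zlib back end here is only a stand-in for the paper's coder:

```python
import math
import struct
import zlib

def split_and_compress(coords):
    # Separate each 64-bit coordinate into an integer part (highly
    # redundant across a map layer) and a fraction part, then compress
    # the two streams independently.
    ints = [math.floor(c) for c in coords]
    fracs = [c - i for c, i in zip(coords, ints)]
    int_stream = zlib.compress(struct.pack(f"<{len(ints)}q", *ints))
    frac_stream = zlib.compress(struct.pack(f"<{len(fracs)}d", *fracs))
    return int_stream, frac_stream

def decompress_and_join(int_stream, frac_stream):
    raw_i = zlib.decompress(int_stream)
    raw_f = zlib.decompress(frac_stream)
    ints = struct.unpack(f"<{len(raw_i) // 8}q", raw_i)
    fracs = struct.unpack(f"<{len(raw_f) // 8}d", raw_f)
    return [i + f for i, f in zip(ints, fracs)]
```

The integer stream compresses well because neighboring vertices in a layer share most of their integer digits, while the fraction stream carries the fine detail.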

Implement of MapReduce-based Big Data Processing Scheme for Reducing Big Data Processing Delay Time and Store Data (빅데이터 처리시간 감소와 저장 효율성이 향상을 위한 맵리듀스 기반 빅데이터 처리 기법 구현)

  • Lee, Hyeopgeon;Kim, Young-Woon;Kim, Ki-Young
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.10
    • /
    • pp.13-19
    • /
    • 2018
  • MapReduce, Hadoop's essential core technology, is most commonly used to process big data on top of the Hadoop distributed file system. However, existing MapReduce-based big data processing techniques divide and store files in blocks predefined by the Hadoop distributed file system, wasting enormous infrastructure resources. Therefore, in this paper, we propose an efficient MapReduce-based big data processing scheme. The proposed method enhances the storage efficiency of a big data infrastructure by converting and compressing the data to be processed, in advance, into a format suitable for MapReduce processing. In addition, the proposed method solves the data processing delay that arises when the implementation focuses only on storage efficiency.
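The "convert and compress in advance" step can be sketched as a simple pre-processing pass: records go into a fixed-width binary layout that is cheap for mappers to parse, then the block is compressed before storage. The record layout and gzip back end are illustrative assumptions, not the paper's format:

```python
import gzip
import struct

def preconvert_records(records):
    # Convert (id, value) records into a fixed-width binary layout
    # suitable for mapper-side parsing, then compress before storage.
    payload = b"".join(struct.pack("<qd", rid, val) for rid, val in records)
    return gzip.compress(payload)

def read_records(blob):
    payload = gzip.decompress(blob)
    size = struct.calcsize("<qd")
    return [struct.unpack("<qd", payload[i:i + size])
            for i in range(0, len(payload), size)]
```

Fixed-width records let a mapper seek to any record without scanning for delimiters, which is part of why pre-conversion can reduce processing delay as well as storage.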

Distributed MIMO Systems Based on Quantize-Map-and-Forward (QMF) Relaying (양자화 전송 중계 기반 분산 다중 안테나 통신 시스템)

  • Hong, Bi;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.7
    • /
    • pp.404-412
    • /
    • 2014
  • Exploiting multiple antennas at mobile devices is difficult due to their limited size and power. In this paper, a distributed MIMO protocol that achieves the capacity of conventional MIMO systems is proposed and analyzed. For exploiting distributed MIMO features, the Quantize-Map-and-Forward (QMF) scheme shows better performance than the Amplify-and-Forward (AF) scheme. A protocol based on the multiple access channel (MAC) is also proposed to improve the multiplexing gain, and a sufficient condition on the number of slave nodes to achieve the gain of the MAC-based protocol is derived. Because the base station can support multiple clusters operating as distributed MIMO, the total cellular capacity can be greatly enhanced in proportion to the number of clusters.

Processing Method of Mass Small File Using Hadoop Platform (하둡 플랫폼을 이용한 대량의 스몰파일 처리방법)

  • Kim, Chang-Bok;Chung, Jae-Pil
    • Journal of Advanced Navigation Technology
    • /
    • v.18 no.4
    • /
    • pp.401-408
    • /
    • 2014
  • Hadoop is composed of the MapReduce programming model for distributed processing and the HDFS distributed file system. Hadoop is a suitable framework for big data processing, but processing masses of small files causes several problems: one mapper is created per file, and a large amount of memory is needed to store the per-file metadata. This paper compares and evaluates methods of processing masses of small files on the Hadoop platform. Processing general compression formats is inadequate because each file is handled by a single mapper regardless of data size. Processing with sequence files or Hadoop archive files removes the NameNode memory problem by compressing and combining the small files, and Hadoop archive files are faster than sequence files at combining small files. Processing with the CombineFileInputFormat class needs no prior combining of small files and reaches a speed similar to that of big data processing methods.
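The sequence-file idea, packing many small files into one container keyed by filename so the NameNode tracks a single file, can be sketched outside Hadoop like this (a toy length-prefixed container, not the actual SequenceFile wire format):

```python
import io
import struct

def write_container(files):
    # files: {name: bytes}. Pack all small files into one stream of
    # length-prefixed (name, payload) records, as a SequenceFile would,
    # so only one file's metadata lands on the NameNode.
    buf = io.BytesIO()
    for name, data in files.items():
        key = name.encode("utf-8")
        buf.write(struct.pack("<II", len(key), len(data)))
        buf.write(key)
        buf.write(data)
    return buf.getvalue()

def read_container(blob):
    files, off = {}, 0
    while off < len(blob):
        klen, vlen = struct.unpack_from("<II", blob, off)
        off += 8
        name = blob[off:off + klen].decode("utf-8"); off += klen
        files[name] = blob[off:off + vlen]; off += vlen
    return files
```

A single large container also lets one mapper stream through thousands of logical small files, instead of spawning one mapper per file.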