• Title/Summary/Keyword: block-adaptive


Adaptive Chroma Block Partitioning Method using Comparison of Similarity between Channels (채널 간 유사도 비교를 이용한 적응형 색차 블록 분할 방법)

  • Baek, A Ram;Choi, Sanggyu;Choi, Haechul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.06a
    • /
    • pp.260-261
    • /
    • 2018
  • MPEG and VCEG have formed the Joint Video Exploration Team (JVET) to develop next-generation video coding standard technology, conducting research aimed at higher coding efficiency than the current video standard, HEVC, and are currently in the Call for Proposals (CfP) stage. JEM (Joint Exploration Test Model), the common platform of JVET, replaces HEVC's quad-tree based block partitioning structure with QTBT (quad-tree plus binary-tree), which provides greater flexibility. As one way to increase intra coding efficiency, QTBT supports separate block partitioning structures for the luma and chroma signals. This approach has the drawback that redundant block partitioning signaling can occur when the partition shapes of the two channels are identical or similar. This paper therefore proposes an adaptive chroma block partitioning method for intra coding that compares the similarity between channels. Experimental results show that, compared with JEM 6.0, the proposed method achieves an average 0.28% Y BD-rate reduction on the CfE (Call for Evidence) sequences, along with an average 124.5% increase in encoding complexity.

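The entry above describes skipping or reusing chroma split signaling when the luma and chroma channels are sufficiently similar. Below is a minimal illustrative sketch of such a decision, not the authors' implementation: it assumes 4:2:0 sampling, a mean-absolute-difference similarity on normalized co-located blocks, and a hypothetical threshold.

```python
import numpy as np

def channel_similarity(luma_block: np.ndarray, chroma_block: np.ndarray) -> float:
    """Mean absolute difference between normalized co-located blocks (hypothetical metric)."""
    # Downsample luma to the chroma resolution (4:2:0 assumed) by 2x2 averaging.
    h, w = chroma_block.shape
    luma_ds = luma_block.reshape(h, 2, w, 2).mean(axis=(1, 3))
    # Remove DC/contrast differences so only the structural shape is compared.
    norm = lambda b: (b - b.mean()) / (b.std() + 1e-6)
    return float(np.mean(np.abs(norm(luma_ds) - norm(chroma_block))))

def reuse_luma_split_for_chroma(luma_block, chroma_block, threshold=0.5) -> bool:
    """If the channels look alike, reuse the luma partition and skip separate chroma split signaling."""
    return channel_similarity(luma_block, chroma_block) < threshold
```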

Optimizations for Mobile MIMO Relay Molecular Communication via Diffusion with Network Coding

  • Cheng, Zhen;Sun, Jie;Yan, Jun;Tu, Yuchun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.4
    • /
    • pp.1373-1391
    • /
    • 2022
  • We investigate a mobile multiple-input multiple-output (MIMO) molecular communication via diffusion (MCvD) system consisting of two source nodes, two destination nodes, and one relay node in a mobile three-dimensional channel. First, a combination of the decode-and-forward (DF) relaying protocol and a network coding (NC) scheme is implemented at the relay node. The adaptive thresholds at the relay node and destination nodes are obtained by the maximum a posteriori (MAP) probability detection method. Then the mathematical expressions for the average bit error probability (BEP) of this mobile MIMO MCvD system based on the DF and NC schemes are derived. Furthermore, in order to minimize the average BEP, we formulate an optimization problem whose variables include the ratio of the number of emitted molecules at the two source nodes and the initial position of the relay node. We put forward an iterative scheme based on the block coordinate descent algorithm that solves the optimization problem and obtains optimal values of the optimization variables simultaneously. Finally, the numerical results reveal that the proposed iterative method has good convergence behavior, and the average BEP performance of the system can be improved by performing the joint optimization.
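
The optimization described above alternates between the molecule-ratio variable and the relay position. A generic sketch of such a block coordinate descent loop is shown below; the objective is a placeholder surrogate, not the paper's derived BEP expression, and the bounds and tolerance are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def average_bep(molecule_ratio: float, relay_z: float) -> float:
    """Placeholder objective standing in for the paper's closed-form average BEP."""
    return (molecule_ratio - 1.2) ** 2 + 0.5 * (relay_z - 3.0) ** 2 + 0.01

def block_coordinate_descent(ratio0=1.0, relay0=0.0, iters=20, tol=1e-9):
    """Alternately optimize each variable while holding the other fixed."""
    ratio, relay = ratio0, relay0
    cur = prev = average_bep(ratio, relay)
    for _ in range(iters):
        ratio = minimize_scalar(lambda r: average_bep(r, relay),
                                bounds=(0.1, 10.0), method="bounded").x
        relay = minimize_scalar(lambda z: average_bep(ratio, z),
                                bounds=(-10.0, 10.0), method="bounded").x
        cur = average_bep(ratio, relay)
        if prev - cur < tol:          # stop once the objective no longer improves
            break
        prev = cur
    return ratio, relay, cur

print(block_coordinate_descent())     # converges to roughly (1.2, 3.0, 0.01)
```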

Adaptive low-resolution palmprint image recognition based on channel attention mechanism and modified deep residual network

  • Xu, Xuebin;Meng, Kan;Xing, Xiaomin;Chen, Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.757-770
    • /
    • 2022
  • Palmprint recognition has drawn increasing attention in the past decade due to its uniqueness and reliability. Traditional palmprint recognition methods usually use high-resolution images as the identification basis so that they can achieve relatively high precision. However, high-resolution images mean more computation cost in the recognition process, which usually cannot be afforded in mobile computing. Therefore, this paper proposes an improved low-resolution palmprint image recognition method based on residual networks. The main contributions include: 1) We introduce a channel attention mechanism to re-weight the extracted feature maps, paying more attention to the informative feature maps and suppressing the useless ones. 2) The proposed ResStage group structure divides the original residual block into three stages, and the signal characteristics are stabilized before each stage by a BN normalization operation to enhance the feature channels. Comparison experiments are conducted on a public dataset provided by the Hong Kong Polytechnic University. Experimental results show that the proposed method achieves a rank-1 accuracy of 98.17% when tested on low-resolution images at 12 dpi, clearly outperforming all the compared methods.
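
A channel attention module of the kind mentioned in contribution 1) is commonly realized as a squeeze-and-excitation style block. The sketch below shows that generic form, assuming a PyTorch setting; it is an illustration of the mechanism, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: re-weight feature maps by importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # informative channels amplified, useless ones suppressed

# Usage on a dummy feature map: 8 images, 64 channels, 32x32 spatial grid.
features = torch.randn(8, 64, 32, 32)
attended = ChannelAttention(64)(features)
```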

Adaptive High-order Variation De-noising Method for Edge Detection with Wavelet Coefficients

  • Chenghua Liu;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.412-434
    • /
    • 2023
  • This study discusses the high-order diffusion method in the wavelet domain. It aims to improve the edge protection capability of the high-order diffusion method using wavelet coefficients that can reflect image information. In the first step of the proposed diffusion method, wavelet packet decomposition, a more refined decomposition, is used to extract the texture and structure information of the image at different resolution levels. The high-frequency wavelet coefficients are then used to construct the edge detection function. Subsequently, because accurate wavelet coefficients can more accurately reflect the edges and details of the image, a scheme for recovering the wavelet coefficients is proposed by introducing the idea of a state weight. Finally, the edge detection function constructed from the modulus of the wavelet coefficients guides the high-order diffusion, and the denoised image is obtained. The experimental results show that the method presented in this study improves the denoising ability of the high-order diffusion model, and its edge protection index (SSIM) outperforms mainstream methods, including block matching and 3D collaborative filtering (BM3D) and deep-learning-based image processing methods. For images with rich textural details, the present method improves the clarity of the obtained images and the completeness of the edges, demonstrating its advantages in denoising and edge protection.
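
The edge detection (diffusivity) function built from the modulus of high-frequency wavelet coefficients can be illustrated with a Perona-Malik style formula. The sketch below makes that assumption explicit; the wavelet, the constant k, and the upsampling step are choices of this illustration, not the paper's.

```python
import numpy as np
import pywt

def edge_stopping_from_wavelets(image: np.ndarray, k: float = 0.1) -> np.ndarray:
    """Diffusivity g = 1 / (1 + (|W| / k)^2) built from the modulus of the
    first-level high-frequency wavelet coefficients (illustrative choice)."""
    _, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    modulus = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)   # local edge strength at half resolution
    # Upsample back to the image grid so the diffusivity can guide pixel-wise diffusion.
    modulus = np.kron(modulus, np.ones((2, 2)))[: image.shape[0], : image.shape[1]]
    return 1.0 / (1.0 + (modulus / k) ** 2)          # ~1 in flat areas (diffuse), ~0 near edges (protect)
```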

Neural Image Compression using Block based Adaptive Resizing (적응적 크기 조정을 이용한 블록 기반 신경망 이미지 부호화)

  • Park, Min Jeong;Kim, Yeongwoong;Kim, Donghyun;Lim, Sung Chang;Kim, Hui Yong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1199-1202
    • /
    • 2022
  • This paper proposes a block-based neural network image coding algorithm with adaptive resizing for neural network based image coding (NNIC), which has recently been an active research topic. The proposed method splits an image into multiple 2N×2N blocks and encodes each block with one of two resizing modes. In mode 1 (no resizing), the four N×N blocks that make up a 2N×2N block are each used as inputs to the NNIC encoder; in mode 2 (resizing), the 2N×2N block is downscaled to a single N×N block and used as the NNIC input. The mode is chosen so that the rate-distortion cost is smaller. Compared with block-based coding, the proposed algorithm shows a performance improvement of about -1.75% BDBR and about 0.073 dB BDSNR; compared with picture-based coding, it shows nearly the same performance, with about 0.57% BDBR and -0.029 dB BDSNR.

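The mode decision described above is a straightforward rate-distortion comparison between coding four N×N quadrants and coding one downscaled N×N block. A minimal sketch follows; the codec callable, the Lagrange multiplier, and the averaging/upscaling filters are stand-ins, not the NNIC model used in the paper.

```python
import numpy as np

def rd_cost(distortion: float, bits: float, lam: float) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def choose_resizing_mode(block_2n: np.ndarray, codec, lam: float = 0.01):
    """codec(block_nxn) -> (reconstruction, bits) stands in for the NNIC encoder/decoder."""
    n = block_2n.shape[0] // 2

    # Mode 1 (no resizing): code the four NxN quadrants independently.
    bits1, rec1 = 0.0, np.empty_like(block_2n, dtype=float)
    for r in (0, n):
        for c in (0, n):
            rec, b = codec(block_2n[r:r + n, c:c + n])
            rec1[r:r + n, c:c + n] = rec
            bits1 += b
    cost1 = rd_cost(float(np.mean((block_2n - rec1) ** 2)), bits1, lam)

    # Mode 2 (resizing): downscale 2Nx2N -> NxN, code once, upscale the reconstruction.
    down = block_2n.reshape(n, 2, n, 2).mean(axis=(1, 3))
    rec, bits2 = codec(down)
    rec2 = np.kron(rec, np.ones((2, 2)))
    cost2 = rd_cost(float(np.mean((block_2n - rec2) ** 2)), bits2, lam)

    return (1, rec1) if cost1 <= cost2 else (2, rec2)
```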

Adaptive Mapping Information Management Scheme for High Performance Large Scale Flash Memory Storages (고성능 대용량 플래시 메모리 저장장치의 효과적인 매핑정보 캐싱을 위한 적응적 매핑정보 관리기법)

  • Lee, Yongju;Kim, Hyunwoo;Kim, Huijeong;Huh, Taeyeong;Jung, Sanghyuk;Song, Yong Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.3
    • /
    • pp.78-87
    • /
    • 2013
  • NAND flash memory has been widely used as a storage medium in mobile devices, PCs, and workstations due to its advantages over hard disk drives, such as low power consumption, high performance, and random accessibility. However, NAND flash does not support in-place updates, so the entire block must be erased before the corresponding page can be overwritten. To overcome this drawback, flash storage needs a software layer called the Flash Translation Layer (FTL). However, as high-performance, large-capacity NAND flash memory comes into wide use, the size of the mapping tables grows beyond the limited DRAM size. In this paper, we propose an adaptive mapping information caching algorithm based on page mapping to solve this DRAM space shortage problem. Our algorithm uses a mapping information caching scheme that minimizes the flash memory access frequency, based on the analysis of several workloads. The experimental results show that the proposed algorithm can improve performance by up to 70% compared with the previous mapping information caching algorithm.
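
The core idea above is to keep only a subset of the page-mapping table cached in DRAM and to fetch missing entries from flash as rarely as possible. The sketch below models that with a simple LRU mapping cache; the eviction policy and data structures are illustrative assumptions, not the paper's workload-adaptive scheme.

```python
from collections import OrderedDict

class MappingCache:
    """Minimal LRU cache for logical-to-physical page mappings. Misses are served
    by reading the on-flash mapping table, modeled here as a plain dict."""
    def __init__(self, capacity: int, on_flash_map: dict):
        self.capacity = capacity
        self.cache = OrderedDict()           # lpn -> ppn, most recently used last
        self.on_flash_map = on_flash_map
        self.flash_reads = 0                 # counts mapping-table reads from flash

    def lookup(self, lpn: int) -> int:
        if lpn in self.cache:
            self.cache.move_to_end(lpn)      # cache hit: no flash access needed
            return self.cache[lpn]
        self.flash_reads += 1                # cache miss: fetch the mapping entry from flash
        ppn = self.on_flash_map[lpn]
        self.cache[lpn] = ppn
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used entry
        return ppn
```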

An Energy-Delay Efficient System with Adaptive Victim Caches (선택적 희생 캐쉬를 이용한 저전력 고성능 시스템 설계 방안)

  • Kim Cheol Hong;Shim Sunghoon;Jhon Chu Shik;Jhang Seong Tae
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.11_12
    • /
    • pp.663-674
    • /
    • 2005
  • We propose a system aimed at achieving high energy-delay efficiency by using adaptive victim caches. In particular, we investigate methods to improve the hit rate in the first level of the memory hierarchy, which reduces the number of accesses to more power-consuming memory structures such as the L2 cache. A victim cache is a memory element for reducing conflict misses in a direct-mapped L1 cache. We present two techniques to fill the victim cache with the blocks that have a higher probability of being re-requested by the processor. The hit-based victim cache is filled with the blocks that were referenced frequently by the processor. The replacement-based victim cache is filled with the blocks that were evicted from the sets where block replacements happened frequently. According to our simulations, the replacement-based victim cache scheme outperforms the conventional victim cache scheme by about 2% on average and reduces the power consumption by up to 8%.
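
The replacement-based fill policy can be pictured as keeping per-set replacement counters and admitting victims only from conflict-prone sets. The sketch below is a hypothetical, threshold-based rendering of that idea, not the simulated design from the paper.

```python
from collections import deque, defaultdict

class ReplacementBasedVictimCache:
    """Keep only blocks evicted from L1 sets that replace frequently."""
    def __init__(self, entries: int = 8, hot_threshold: int = 4):
        self.victims = deque(maxlen=entries)       # small fully-associative FIFO of (set, tag)
        self.replacements = defaultdict(int)       # per-set replacement counters
        self.hot_threshold = hot_threshold

    def on_l1_eviction(self, set_index: int, tag: int) -> None:
        self.replacements[set_index] += 1
        if self.replacements[set_index] >= self.hot_threshold:
            self.victims.append((set_index, tag))  # conflict-prone set: keep its victim nearby

    def probe(self, set_index: int, tag: int) -> bool:
        """Check the victim cache on an L1 miss before going to the L2 cache."""
        return (set_index, tag) in self.victims
```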

A Blind Watermarking Algorithm using CABAC for H.264/AVC Main Profile (H.264/AVC Main Profile을 위한 CABAC-기반의 블라인드 워터마킹 알고리즘)

  • Seo, Young-Ho;Choi, Hyun-Jun;Lee, Chang-Yeul;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.181-188
    • /
    • 2007
  • This paper proposes a watermark embedding/extracting method using CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coder for the main profile of MPEG-4 Part 10 H.264/AVC. The algorithm selects the blocks, and the coefficients within a block, on the basis of the contexts extracted from their relationship to the adjacent blocks and coefficients. A watermark bit is embedded either without modifying the coefficient or by replacing the least significant bit (LSB) of the coefficient with the watermark bit, depending on both the absolute value of the selected coefficient and the watermark bit. This makes it hard for an attacker to find the watermarked locations. By selecting a few coefficients near the DC coefficient according to the contexts, the algorithm satisfies the robustness requirement. In experiments with attacks of various kinds and strengths, the maximum error ratio of the extracted watermark was 5.02%, which confirms that the proposed algorithm has a very high level of robustness. Because the watermark is embedded during the context modeling and binarization process of CABAC, the additional computation for locating and selecting the coefficients to embed the watermark is very small. Consequently, it is expected to be very useful in applications where the video must be compressed right after acquisition.
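
The LSB replacement step described above can be reduced to a small bit manipulation on a quantized coefficient. The sketch below shows that step in isolation; it leaves out the CABAC context modeling that decides which coefficients are eligible, and the rule based on the coefficient's absolute value is simplified away.

```python
def embed_watermark_bit(coeff: int, wm_bit: int) -> int:
    """Embed one watermark bit into a quantized transform coefficient.
    If the coefficient's LSB already equals the watermark bit, the coefficient
    is left unchanged; otherwise its LSB is replaced (the sign is preserved)."""
    sign = -1 if coeff < 0 else 1
    mag = abs(coeff)
    if mag & 1 == wm_bit:
        return coeff                       # no modification of the coefficient needed
    return sign * ((mag & ~1) | wm_bit)    # overwrite the LSB with the watermark bit

def extract_watermark_bit(coeff: int) -> int:
    """Blind extraction: the watermark bit is the LSB of the coefficient magnitude."""
    return abs(coeff) & 1
```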

Design and Implementation of CW Radar-based Human Activity Recognition System (CW 레이다 기반 사람 행동 인식 시스템 설계 및 구현)

  • Nam, Jeonghee;Kang, Chaeyoung;Kook, Jeongyeon;Jung, Yunho
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.5
    • /
    • pp.426-432
    • /
    • 2021
  • Continuous wave (CW) Doppler radar, unlike a camera, has the advantage of avoiding privacy problems, and it obtains signals in a non-contact manner. Therefore, this paper proposes a human activity recognition (HAR) system using CW Doppler radar and presents the hardware design and implementation results for its acceleration. CW Doppler radar measures signals from continuous human motion. In order to obtain a single-motion spectrogram from the continuous signals, an algorithm for counting the number of movements is proposed. In addition, to minimize the computational complexity and memory usage, a binarized neural network (BNN) was used to classify human motions, achieving an accuracy of 94%. To accelerate the complex operations of the BNN, an FPGA-based BNN accelerator was designed and implemented. The proposed HAR system was implemented using 7,673 logic elements, 12,105 registers, 10,211 combinational ALUTs, and 18.7 Kb of block memory. As a result of the performance evaluation, the operation speed was improved by 99.97% compared to the software implementation.
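
The reason a BNN maps well to an FPGA is that binarized layers replace multiply-accumulate with XNOR and popcount. The sketch below shows the arithmetic of one binarized fully-connected layer in plain NumPy; layer sizes and data are illustrative, and the hardware mapping itself is not shown.

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Sign binarization used in BNNs: every value becomes +1 or -1."""
    return np.where(x >= 0, 1, -1).astype(np.int32)

def binary_dense(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Fully-connected layer with binarized inputs and weights. With +/-1 values
    the dot product reduces to XNOR + popcount in hardware; here it is computed
    directly as an integer matrix product for clarity."""
    return binarize(x) @ binarize(w).T

# Tiny usage example with random data (shapes are illustrative only).
activations = binary_dense(np.random.randn(1, 64), np.random.randn(10, 64))
print(activations.shape)   # (1, 10) class scores before the final argmax
```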

New Fast Block-Matching Motion Estimation using Temporal and Spatial Correlation of Motion Vectors (움직임 벡터의 시공간 상관성을 이용한 새로운 고속 블럭 정합 움직임 추정 방식)

  • 남재열;서재수;곽진석;이명호;송근원
    • Journal of Broadcast Engineering
    • /
    • v.5 no.2
    • /
    • pp.247-259
    • /
    • 2000
  • This paper introduces a new technique that reduces the number of search points and improves the accuracy of motion estimation by using the high temporal and spatial correlation of motion vectors. Instead of using the fixed first search point of previously proposed search algorithms, the proposed method finds a more accurate first search point, adapting the search area by exploiting the high temporal and spatial correlation of motion vectors. The main idea of the proposed method is therefore to find a first search point that improves the performance of motion estimation and reduces the number of search points. The proposed method uses the direction of the co-located block of the previous frame, compared with the block of the current frame, to exploit temporal correlation, and the direction of the adjacent blocks of the current frame to exploit spatial correlation. Based on these directions, the first search point is computed, and the motion vector is searched around this first search point with two fixed search patterns. Using this idea, an efficient adaptive predicted direction search algorithm (APDSA) for block-matching motion estimation is proposed. Experimental results show that the PSNR is improved by up to 3.6 dB depending on the image sequence, and by about 1.7 dB on average. The comparisons show that the performance of the proposed APDSA algorithm is better than that of other fast search algorithms, whether the image sequence contains fast or slow motion, and is similar to that of the full search (FS) algorithm. Simulation results also show that the APDSA scheme gives better subjective picture quality than the other fast search algorithms and is closer to that of the FS algorithm.

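The key step described above is predicting the first search point from the co-located motion vector of the previous frame and the motion vectors of adjacent blocks. The sketch below shows one common way to combine such candidates (a component-wise median) together with the SAD matching cost; the combination rule is illustrative, not the exact APDSA definition.

```python
import numpy as np

def predicted_first_search_point(mv_temporal, mv_left, mv_top, mv_topright):
    """Predict the initial search point from the temporally co-located motion
    vector and spatially adjacent motion vectors. A component-wise median is a
    common, robust predictor; APDSA's own direction-based rule differs in detail."""
    candidates = np.array([mv_temporal, mv_left, mv_top, mv_topright])
    return tuple(int(v) for v in np.median(candidates, axis=0))

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of absolute differences, the usual block-matching cost."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

# Usage: start the pattern search at the predicted point instead of (0, 0).
start = predicted_first_search_point((2, 1), (3, 0), (2, 2), (1, 1))
print(start)   # (2, 1)
```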