• Title/Summary/Keyword: Hierarchical algorithm


Fast Motion Estimation for Variable Motion Block Size in H.264 Standard (H.264 표준의 가변 움직임 블록을 위한 고속 움직임 탐색 기법)

  • 최웅일;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.209-220
    • /
    • 2004
  • The main features of the H.264 standard, compared with conventional video standards, are its high coding efficiency and network friendliness. In spite of these outstanding features, it is not easy to implement an H.264 codec as a real-time system due to its high memory bandwidth requirement and intensive computation. Although variable block size motion compensation using multiple reference frames is one of the key coding tools behind its main performance gain, it demands substantial computational complexity because the SAD (Sum of Absolute Differences) must be calculated over all possible combinations of coding modes to find the best motion vector. To speed up the motion estimation process, this paper therefore proposes fast algorithms for both integer-pel and fractional-pel motion search. Since many conventional fast integer-pel motion estimation algorithms are not suitable for H.264 with its variable motion block sizes, we propose a motion-field-adaptive search using a hierarchical block structure based on the diamond search, applicable to variable motion block sizes. In addition, we propose a fast fractional-pel motion search using a small diamond search centered on the predicted motion vector, based on the statistical characteristics of motion vectors.
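The diamond search underlying both proposed stages can be sketched as follows. This is an illustrative greedy small-diamond search with a SAD cost; the block size, starting point, and update rule are assumptions for the sketch, not the paper's exact procedure:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    patch = ref[y:y+h, x:x+w]
    if patch.shape != block.shape:  # candidate falls outside the frame
        return np.inf
    return np.abs(block.astype(int) - patch.astype(int)).sum()

def small_diamond_search(block, ref, y0, x0, max_iter=32):
    """Greedy small-diamond integer-pel search from a predicted position."""
    y, x = y0, x0
    best = sad(block, ref, y, x)
    for _ in range(max_iter):
        moved = False
        # small diamond: the four 4-connected neighbours of the centre
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            cost = sad(block, ref, y + dy, x + dx)
            if cost < best:
                best, y, x, moved = cost, y + dy, x + dx, True
        if not moved:          # centre is a local minimum: stop
            break
    return (y, x), best
```

Starting the search at a predicted motion vector, as the abstract describes for the fractional-pel stage, keeps the number of SAD evaluations small when the motion field is smooth.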

Enhanced Multiresolution Motion Estimation Using Reduction of One-Pixel Shift (단화소 이동 감쇠를 이용한 향상된 다중해상도 움직임 예측 방법)

  • 이상민;이지범;고형화
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.9C
    • /
    • pp.868-875
    • /
    • 2003
  • In this paper, an enhanced multiresolution motion estimation (MRME) method using reduction of the one-pixel shift in the wavelet domain is proposed. Conventional multiresolution motion estimation, which uses the hierarchical relationship of wavelet coefficients, has difficulty achieving accurate motion estimation due to the shift-variant property caused by the decimation process of the wavelet transform. Therefore, to overcome the shift-variant property of wavelet coefficients, a two-level wavelet transform is performed. First, to reduce the one-pixel shift in the low-band signal, the S$_4$ band is interpolated by inserting average values. Secondly, a one-level wavelet transform is applied to the interpolated S$_4$ band. To estimate the initial motion vector, a block matching algorithm is applied to the low-band signal S$_8$. Multiresolution motion estimation is then performed on the remaining subbands at the lower levels. According to the experimental results, the proposed method showed a 1-2 dB improvement in PSNR at the same bit rate, as well as better subjective quality, compared with conventional MRME methods and full-search block matching in the wavelet domain.
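The average-insertion interpolation step can be sketched in one dimension as follows. The boundary handling (repeating the last sample) is an assumption of this sketch; the paper's 2-D treatment of the S$_4$ band is not specified in the abstract:

```python
def interpolate_band(band):
    """Double a 1-D band by inserting the average of each adjacent pair,
    reducing the one-pixel shift introduced by decimation."""
    out = []
    for i, v in enumerate(band):
        out.append(v)
        nxt = band[i + 1] if i + 1 < len(band) else v  # repeat at the edge
        out.append((v + nxt) / 2)
    return out
```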

Implementation of Secure System for Blockchain-based Smart Meter Aggregation (블록체인 기반 스마트 미터 집계 보안 시스템 구축)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2020
  • As an important basic building block of the smart grid environment, the smart meter provides real-time electricity consumption information to the utility. However, ensuring information security and privacy in the smart meter data aggregation process is a non-trivial task. Even though secure data aggregation for smart meters has received a lot of attention from both academic and industry researchers in recent years, most of these studies are not secure against internal attackers or cannot provide data integrity. Besides, their computation costs are not satisfactory because the bilinear pairing operation or the hash-to-point operation is performed at the smart meter. Recently, blockchains or distributed ledgers have emerged as a technology that has drawn considerable interest from energy supply firms, startups, technology developers, financial institutions, national governments and the academic community. In particular, blockchains are identified as having the potential to bring significant benefits and innovation to the electricity consumption network. This study suggests a distributed, privacy-preserving, and simple secure smart meter data aggregation system backed by blockchain technology. Smart meter data are aggregated and verified through a hierarchical Merkle tree, with consensus supported by the practical Byzantine fault tolerance (PBFT) algorithm.
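The Merkle-tree aggregation idea can be sketched as follows: meter readings are hashed into leaves, and any tampering with a single reading changes the root. The hash function and odd-level duplication rule are conventional choices for the sketch, not details taken from the paper:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(readings):
    """Aggregate raw smart-meter readings into a single Merkle root."""
    level = [h(r) for r in readings]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Verifiers only need the root plus a logarithmic-size path of sibling hashes to check that one reading is included, which is what makes the structure attractive for aggregation.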

Intensity Compensation for Efficient Stereo Image Compression (효율적인 스테레오 영상 압축을 위한 밝기차 보상)

  • Jeon Youngtak;Jeon Byeungwoo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.101-112
    • /
    • 2005
  • As we perceive the world as three-dimensional through our two eyes, we can extract 3-D information from stereo images obtained from two or more cameras. Since stereo images contain a large amount of data, efficient compression algorithms have been developed for them alongside recent advances in digital video coding technology. To compress stereo images and to obtain 3-D information such as depth, we find disparity vectors using a disparity estimation algorithm that generally relies on pixel differences between the stereo pair. However, it is not unusual for stereo images to have different intensity values for several reasons, such as incorrect control of each camera's iris, disagreement of the foci of the two cameras, orientation, position, and different characteristics of the CCD (charge-coupled device) cameras. The intensity differences of a stereo pair often cause undesirable problems such as incorrect disparity vectors and consequently low coding efficiency. By compensating the intensity differences between the left and right images, we can obtain higher coding efficiency and hopefully reduce the perceptual burden on the brain of combining differing information arriving from the two eyes. We propose several intensity compensation methods, namely local intensity compensation, global intensity compensation, and hierarchical intensity compensation, as very simple and efficient preprocessing tools. Experimental results show that the proposed algorithm provides significant improvement in coding efficiency.
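A global compensation step of the kind listed above can be sketched by matching the right image's mean and standard deviation to the left's. This moment-matching form is an assumption of the sketch; the abstract does not specify the paper's exact compensation model:

```python
import numpy as np

def global_intensity_compensation(left, right):
    """Linearly remap the right image so its mean and standard deviation
    match those of the left image."""
    l_mean, l_std = left.mean(), left.std()
    r_mean, r_std = right.mean(), right.std()
    gain = l_std / r_std if r_std > 0 else 1.0
    return (right - r_mean) * gain + l_mean
```

The local and hierarchical variants named in the abstract would apply the same idea per block, or per block at several resolutions, rather than over the whole frame.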

R-Tree Construction for The Content Based Publish/Subscribe Service in Peer-to-peer Networks (피어투피어 네트워크에서의 컨텐츠 기반 publish/subscribe 서비스를 위한 R-tree구성)

  • Kim, Yong-Hyuck;Kim, Young-Han;Kang, Nam-Hi
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.46 no.11
    • /
    • pp.1-11
    • /
    • 2009
  • A content-based pub/sub (publish/subscribe) service in a peer-to-peer network must distribute subscribers' content information and deliver events efficiently. To satisfy these requirements, DHT (Distributed Hash Table)-based pub/sub overlay networking and tree-topology network construction using filter techniques have been proposed. The DHT-based technique is suitable for topic-based pub/sub services, but not for content-based services with variable requirements. Likewise, filter-based tree topology networking is inefficient in environments where user requirements are widely distributed. In this paper, we propose an R-tree-based method for constructing the pub/sub overlay network. The proposed scheme provides cost-effective event delivery by mapping user requirements into a multi-dimensional space and grouping them hierarchically. It is verified by simulation under variable distributions of user requirements and events.
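The core R-tree idea, subscriptions as rectangles in a multi-dimensional attribute space and events as points, can be sketched with one level of grouping. The flat group list stands in for a real R-tree's nested nodes and is an assumption of this sketch:

```python
def mbr(rects):
    """Minimum bounding rectangle of a group of subscription ranges.
    Each rectangle is a tuple of (lo, hi) intervals, one per dimension."""
    return tuple((min(r[d][0] for r in rects), max(r[d][1] for r in rects))
                 for d in range(len(rects[0])))

def contains(rect, point):
    return all(lo <= p <= hi for (lo, hi), p in zip(rect, point))

def match(groups, event):
    """Deliver an event: whole groups whose MBR misses the event point
    are skipped, which is the source of the cost saving."""
    hits = []
    for group in groups:
        if contains(mbr(group), event):
            hits.extend(r for r in group if contains(r, event))
    return hits
```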

Frequently Occurred Information Extraction from a Collection of Labeled Trees (라벨 트리 데이터의 빈번하게 발생하는 정보 추출)

  • Paik, Ju-Ryon;Nam, Jung-Hyun;Ahn, Sung-Joon;Kim, Ung-Mo
    • Journal of Internet Computing and Services
    • /
    • v.10 no.5
    • /
    • pp.65-78
    • /
    • 2009
  • The most commonly adopted approach to finding valuable information in tree data is to extract frequently occurring subtree patterns. Because mining frequent tree patterns has a wide range of applications, such as XML mining, web usage mining, bioinformatics, and network multicast routing, many algorithms have recently been proposed to find such patterns. However, existing tree mining algorithms suffer from several serious pitfalls when finding frequent tree patterns in massive tree datasets. Some of the major problems are due to (1) modeling data as a hierarchical tree structure, (2) the computationally high cost of candidate maintenance, (3) repetitious scans of the input dataset, and (4) high memory dependency. These problems stem from the fact that most of these algorithms are based on the well-known apriori algorithm and use the anti-monotone property for candidate generation and frequency counting. To solve these problems, we base our method on a pattern-growth approach rather than the apriori approach, and choose to extract maximal frequent subtree patterns instead of all frequent subtree patterns. The proposed method not only removes the process of pruning infrequent subtrees, but also entirely eliminates the problem of generating candidate subtrees, significantly improving the whole mining process.


Accelerating GPU-based Volume Ray-casting Using Brick Vertex (브릭 정점을 이용한 GPU 기반 볼륨 광선투사법 가속화)

  • Chae, Su-Pyeong;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.3
    • /
    • pp.1-7
    • /
    • 2011
  • Recently, various methods have been proposed to accelerate GPU-based volume ray-casting. However, these methods can suffer from several problems, such as a data-transmission bottleneck between CPU and GPU, additional video memory required for hierarchical structures, and increased processing time whenever the opacity transfer function changes. In this paper, we propose an efficient GPU-based empty space skipping technique that solves these problems. We store the maximum density of each brick of the volume dataset in a vertex element. In the geometry shader, we then delete the vertices that the opacity transfer function maps to transparent. The remaining vertices are used to generate bounding boxes of the non-transparent regions, which help rays traverse efficiently. Although these vertices are independent of the viewing conditions, they must be regenerated when the opacity transfer function changes. Since the opaque vertices are generated within the GPU pipeline, our technique produces them fast enough for interactive processing. The rendering results of our algorithm are identical to those of general GPU ray-casting, but performance can be more than 10 times faster.
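The per-brick classification step can be sketched on the CPU side as follows: record each brick's maximum density, then keep only bricks the transfer function would render non-transparent. The brick size and scalar threshold (standing in for the opacity transfer function) are assumptions of this sketch, not values from the paper:

```python
import numpy as np

def opaque_bricks(volume, brick=8, threshold=0.1):
    """Return origins (z, y, x) of bricks whose maximum density exceeds
    the opacity threshold; transparent bricks are skipped by the rays."""
    boxes = []
    d, h, w = volume.shape
    for z in range(0, d, brick):
        for y in range(0, h, brick):
            for x in range(0, w, brick):
                if volume[z:z+brick, y:y+brick, x:x+brick].max() > threshold:
                    boxes.append((z, y, x))
    return boxes
```

In the paper this culling happens in the geometry shader, so changing the transfer function only re-runs this cheap stage on the GPU rather than rebuilding a CPU-side hierarchy.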

Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.297-309
    • /
    • 2007
  • The first Korean geostationary weather satellite, the Communications, Oceanography and Meteorology Satellite (COMS), will be launched in 2008. The COMS ground station needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes after image acquisition. To meet this requirement, we developed automated and fast geometric correction techniques. We generated control points automatically by matching images against coastline data and applying a robust estimation technique called RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. We detected clouds within the images and applied matching only to cloud-free sub-images. When matching visible channels, we selected sub-images located in daytime. We tested the algorithm with GOES-9 images. Control points were generated by matching channel 1 and channel 2 images of GOES against the 211 landmark chips. RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established using the automated control points was in the range of 1~2 pixels. Geometric correction was performed, and the results were visually inspected by projecting the coastline onto the geometrically corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
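The RANSAC outlier-rejection step can be sketched with the simplest possible motion model, a pure translation between matched landmark positions; the real sensor model is more complex, and the tolerance and iteration count here are illustrative assumptions:

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """RANSAC sketch: hypothesize a translation from one randomly chosen
    match, keep the hypothesis supported by the most matches.
    matches: list of ((x, y), (x2, y2)) control-point candidate pairs."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(matches)      # minimal sample: 1 pair
        dx, dy = u - x, v - y
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Mismatched landmark chips produce translations inconsistent with the majority, so they never gather support and are dropped from the control-point set.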

A study on the selection of the target scope for destruction of personal credit information of customers whose financial transaction effect has ended (금융거래 효과가 종료된 고객의 개인신용정보 파기 대상 범위 선정에 관한 연구)

  • Baek, Song-Yi;Lim, Young-Bin;Lee, Chang-Gil;Chun, Sam-Hyun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.163-169
    • /
    • 2022
  • According to the Credit Information Act, to protect customer information, personal credit information is destroyed and stored separately in two stages, according to the period elapsed after the financial transaction effect has ended. However, depending on the nature of the financial product and transaction, the personal credit information of customers whose financial transaction effect has expired cannot always be destroyed collectively when the transaction is terminated. To handle this, the IT person in charge develops a computerized program that follows a destruction target list and order, based on a prior investigation of the business relationships for each transaction type. In this process, if the parent-child relationships between tables are unclear, the outcome depends on the subjective judgment of the IT person in charge, creating compliance issues in which personal credit information is not destroyed, or information that should not be destroyed is destroyed. Therefore, in this paper, we propose and implement a model and algorithm that identify the referenced tables from the SQL executed by the computerized program, analyze the parent-child relationships between tables using their primary key information, and visualize the result so that the scope of destruction can be selected objectively.
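The two analysis steps can be sketched as follows: pull referenced table names out of executed SQL, then order destruction so that child tables are handled before their parents. The regex is a naive stand-in for a real SQL parser, and the table names are hypothetical examples:

```python
import re

def referenced_tables(sql):
    """Extract table names following FROM and JOIN (naive sketch;
    production code needs a proper SQL parser)."""
    return sorted(set(m.group(2).lower()
                      for m in re.finditer(r'\b(from|join)\s+([A-Za-z_][\w.]*)',
                                           sql, re.IGNORECASE)))

def destruction_order(edges, tables):
    """Order tables so every child is destroyed before its parent.
    edges: (parent, child) pairs derived from primary-key references."""
    remaining = {t: 0 for t in tables}   # children not yet processed
    parents = {t: [] for t in tables}
    for p, c in edges:
        remaining[p] += 1
        parents[c].append(p)
    order, ready = [], [t for t in tables if remaining[t] == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for p in parents[t]:             # parent becomes ready once all
            remaining[p] -= 1            # of its children are done
            if remaining[p] == 0:
                ready.append(p)
    return order
```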

A Study on the Classification Model of Overseas Infringing Websites based on Web Hierarchy Similarity Analysis using GNN (GNN을 이용한 웹사이트 Hierarchy 유사도 분석 기반 해외 침해 사이트 분류 모델 연구)

  • Ju-hyeon Seo;Sun-mo Yoo;Jong-hwa Park;Jin-joo Park;Tae-jin Lee
    • Convergence Security Journal
    • /
    • v.23 no.2
    • /
    • pp.47-54
    • /
    • 2023
  • The global popularity of K-content (the Korean Wave) has led to a continuous increase in copyright infringement of domestic works, not only within the country but also overseas. In response, technologies for detecting illegal distribution sites of domestic copyrighted materials are being actively researched, with recent studies exploiting the fact that domestic illegal distribution sites often carry a significant number of advertising banners. However, such detection techniques are of limited use against overseas illegal distribution sites, which may carry no advertising banners or significantly fewer ads than domestic sites. In this study, we propose a detection technique based on similarity comparison of link and text trees, leveraging the fact that such sites present illegal sharing posts and images of copyrighted materials in similar hierarchical structures. Additionally, to accurately compare the similarity of large-scale trees composed of a massive number of links, we utilize a Graph Neural Network (GNN). Our experiments demonstrated an accuracy of over 95% in classifying regular sites versus sites involved in the illegal distribution of copyrighted materials. Applying this algorithm to automate the detection of illegal distribution sites is expected to enable swift responses to copyright infringement.
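The GNN-based similarity idea can be sketched with the simplest form of message passing: each node in the link tree repeatedly averages its neighbours' feature vectors with its own, and a mean readout yields a fixed-size site embedding whose cosine similarity is compared across sites. This untrained mean-aggregation scheme is a toy stand-in; the paper's actual GNN architecture is not described in the abstract:

```python
import numpy as np

def site_embedding(adj, feats, rounds=2):
    """Mean-aggregation message passing over a site's link graph.
    adj: neighbour index lists per node; feats: (n, d) feature matrix."""
    h = np.asarray(feats, dtype=float)
    for _ in range(rounds):
        # synchronous update: every node averages itself with its neighbours
        h = np.stack([(h[i] + h[nbrs].sum(0)) / (1 + len(nbrs))
                      for i, nbrs in enumerate(adj)])
    return h.mean(0)                     # graph-level readout

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```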