• Title/Summary/Keyword: Quad-Tree Algorithm


Propagation Analysis Method in using 3D Ray Tracing Model in Wireless Cell Planning Software (무선망 설계툴에서 3 차원 광선 추적법을 이용한 전파해석 방법)

  • Shin, Young-Il;Jung, Hyun-Meen;Lee, Seong-Choon
    • 한국정보통신설비학회:학술대회논문집 / 2007.08a / pp.251-255 / 2007
  • In this paper, a propagation analysis method using a 3D ray tracing propagation model for wireless cell planning is proposed. The 3D ray tracing model predicts the distribution of propagation loss in the received signal. For accurate, low-complexity analysis, Quad Tree, Pre-Ordering, and Hash Function algorithms are incorporated into the 3D ray tracing algorithm. The model is embodied in CellTREK, a tool developed by KT and used for WiBro system planning; CellTREK performs the propagation analysis and presents the result in a 3D viewer. Numerical results against measurement data show that the proposed scheme outperforms the modified HATA model. (A minimal quad-tree sketch follows this entry.)

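The paper itself ships no code; the following is a minimal sketch of the role the abstract assigns to the Quad Tree step, namely narrowing the set of obstacles a ray tracer has to test. It indexes 2D building footprints (axis-aligned boxes) and returns the candidates overlapping a query box such as the bounding box of one ray segment; the pre-ordering and hash-function steps are omitted, and all class and member names are illustrative assumptions rather than CellTREK's actual interfaces.

```cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

struct Box {
    double x0, y0, x1, y1;                                  // min/max corners
    bool overlaps(const Box& b) const {
        return x0 <= b.x1 && b.x0 <= x1 && y0 <= b.y1 && b.y0 <= y1;
    }
};

// Quad-tree over building footprints; query() returns ids of footprints whose
// boxes overlap a query box, so only those need an exact ray/wall test.
class QuadTree {
public:
    explicit QuadTree(Box bounds, int depth = 0) : bounds_(bounds), depth_(depth) {}

    void insert(int id, const Box& box) {
        if (!bounds_.overlaps(box)) return;                 // not in this cell
        if (children_[0]) {
            for (auto& c : children_) c->insert(id, box);   // push into sub-cells
            return;
        }
        items_.push_back({id, box});
        if ((int)items_.size() > kCapacity && depth_ < kMaxDepth) split();
    }

    void query(const Box& q, std::vector<int>& out) const {
        if (!bounds_.overlaps(q)) return;
        for (const auto& it : items_)
            if (it.box.overlaps(q)) out.push_back(it.id);
        if (children_[0])
            for (const auto& c : children_) c->query(q, out);
    }

private:
    static constexpr int kCapacity = 4, kMaxDepth = 8;
    struct Item { int id; Box box; };

    void split() {
        double mx = (bounds_.x0 + bounds_.x1) / 2, my = (bounds_.y0 + bounds_.y1) / 2;
        children_[0] = std::make_unique<QuadTree>(Box{bounds_.x0, bounds_.y0, mx, my}, depth_ + 1);
        children_[1] = std::make_unique<QuadTree>(Box{mx, bounds_.y0, bounds_.x1, my}, depth_ + 1);
        children_[2] = std::make_unique<QuadTree>(Box{bounds_.x0, my, mx, bounds_.y1}, depth_ + 1);
        children_[3] = std::make_unique<QuadTree>(Box{mx, my, bounds_.x1, bounds_.y1}, depth_ + 1);
        for (const auto& it : items_)
            for (auto& c : children_) c->insert(it.id, it.box);
        items_.clear();
    }

    Box bounds_;
    int depth_;
    std::vector<Item> items_;
    std::unique_ptr<QuadTree> children_[4];
};

int main() {
    QuadTree tree(Box{0, 0, 1000, 1000});
    tree.insert(1, Box{100, 100, 150, 180});                // building footprints
    tree.insert(2, Box{400, 420, 470, 500});
    tree.insert(3, Box{800, 50, 870, 120});

    std::vector<int> candidates;
    tree.query(Box{90, 90, 200, 200}, candidates);          // bbox of one ray segment
    std::sort(candidates.begin(), candidates.end());        // a footprint spanning
    candidates.erase(std::unique(candidates.begin(),        // several cells may be
                                 candidates.end()),         // reported twice
                     candidates.end());
    for (int id : candidates)
        std::cout << "exact ray test against building " << id << '\n';
}
```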

The Study on a Semi-automated Mapping System (반자동 지도입력 시스템기술 개발 연구)

  • 윤재경;이기혁;우창헌;이경자;김수용
    • Spatial Information Research / v.3 no.1 / pp.19-27 / 1995
  • In this paper, a semi-automated mapping system that produces digital maps from information acquired in a pre-processing procedure is introduced. To obtain the binary edge image that is essential for the vectorization process, adaptive smoothing and a connection-preserving thresholding algorithm are applied. In the mapper program, binary images are converted to vectors, and an extended PR quad-tree is used as the in-core data structure. These procedures are distributed across personal computers and workstations, and through network resource sharing the whole process is unified and simplified. (A minimal PR quad-tree sketch follows this entry.)

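As a companion to the entry above, here is a minimal sketch of a plain point-region (PR) quad-tree, the base form of the "extended PR quad tree" the abstract mentions (the paper's extensions are not reproduced). Space is split into four equal quadrants until each leaf holds at most one point, and a region query collects the stored vertices inside a window; all names, and the assumption that inserted coordinates are distinct, are choices made for this sketch.

```cpp
#include <iostream>
#include <memory>
#include <optional>
#include <vector>

struct Point { double x, y; };

// Point-region quad-tree: each leaf holds at most one point; inserting a second
// point into an occupied leaf subdivides the cell until the points separate.
class PRQuadTree {
public:
    PRQuadTree(double x0, double y0, double x1, double y1)
        : x0_(x0), y0_(y0), x1_(x1), y1_(y1) {}

    void insert(const Point& p) {
        if (p.x < x0_ || p.x >= x1_ || p.y < y0_ || p.y >= y1_) return;  // outside
        if (children_[0]) { child_for(p)->insert(p); return; }          // internal
        if (!point_) { point_ = p; return; }                            // empty leaf
        subdivide();                                                    // collision
        Point old = *point_;
        point_.reset();
        child_for(old)->insert(old);
        child_for(p)->insert(p);
    }

    // Collect every stored point inside [qx0,qx1) x [qy0,qy1).
    void query(double qx0, double qy0, double qx1, double qy1,
               std::vector<Point>& out) const {
        if (qx1 <= x0_ || x1_ <= qx0 || qy1 <= y0_ || y1_ <= qy0) return;
        if (point_ && point_->x >= qx0 && point_->x < qx1 &&
            point_->y >= qy0 && point_->y < qy1)
            out.push_back(*point_);
        if (children_[0])
            for (const auto& c : children_) c->query(qx0, qy0, qx1, qy1, out);
    }

private:
    void subdivide() {
        double mx = (x0_ + x1_) / 2, my = (y0_ + y1_) / 2;
        children_[0] = std::make_unique<PRQuadTree>(x0_, y0_, mx, my);
        children_[1] = std::make_unique<PRQuadTree>(mx, y0_, x1_, my);
        children_[2] = std::make_unique<PRQuadTree>(x0_, my, mx, y1_);
        children_[3] = std::make_unique<PRQuadTree>(mx, my, x1_, y1_);
    }
    PRQuadTree* child_for(const Point& p) const {
        double mx = (x0_ + x1_) / 2, my = (y0_ + y1_) / 2;
        int i = (p.x >= mx ? 1 : 0) + (p.y >= my ? 2 : 0);
        return children_[i].get();
    }

    double x0_, y0_, x1_, y1_;
    std::optional<Point> point_;
    std::unique_ptr<PRQuadTree> children_[4];
};

int main() {
    PRQuadTree tree(0, 0, 1024, 1024);   // map sheet extent in pixel coordinates
    tree.insert({10.5, 20.0});
    tree.insert({600.0, 700.0});
    tree.insert({612.0, 705.0});

    std::vector<Point> hits;
    tree.query(500, 600, 700, 800, hits);   // vertices inside an editing window
    for (const auto& p : hits) std::cout << p.x << ", " << p.y << '\n';
}
```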

Maximum A Posteriori Estimation-based Adaptive Search Range Decision for Accelerating HEVC Motion Estimation on GPU

  • Oh, Seoung-Jun;Lee, Dongkyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4587-4605 / 2019
  • High Efficiency Video Coding (HEVC) suffers from high computational complexity due to the quad-tree structure used in motion estimation (ME). This paper proposes an adaptive search range decision algorithm for accelerating HEVC integer-pel ME on GPU that estimates the optimal search range (SR) with a maximum a posteriori (MAP) estimator. There are three main contributions. First, the motion feature is defined as the standard deviation of the motion vector difference values in a CTU. Second, a MAP estimator is proposed that estimates the motion feature of the current CTU from the motion feature and SR of a temporally adjacent CTU without any data dependency, so the SR of each CTU can be determined in parallel. Finally, the prior distribution and the likelihood for each discretized motion feature are computed in advance and stored in a look-up table to further reduce the computational cost. Experimental results on the common HEVC test sequences show that the proposed algorithm achieves large average time reductions without subjective quality loss and with little BD-bitrate increase. (A minimal MAP-decision sketch follows this entry.)
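
The following sketch illustrates the MAP decision described in the abstract under stated assumptions: the motion feature (standard deviation of MVD values) is discretized into a few bins, the prior and the likelihood are read from precomputed look-up tables, and the bin with the highest posterior selects the search range. The bin edges, table values, and SR mapping below are dummy placeholders, not the paper's trained statistics.

```cpp
#include <cstdio>

constexpr int kBins = 4;                        // discretized motion-feature levels
constexpr int kSR[kBins] = {8, 16, 32, 64};     // hypothetical search range per level

// Dummy prior P(f) over motion-feature bins (placeholder values).
constexpr double kPrior[kBins] = {0.40, 0.30, 0.20, 0.10};

// Dummy likelihood P(observed bin o | true bin f), indexed [f][o].
constexpr double kLikelihood[kBins][kBins] = {
    {0.70, 0.20, 0.08, 0.02},
    {0.20, 0.55, 0.20, 0.05},
    {0.05, 0.25, 0.50, 0.20},
    {0.02, 0.08, 0.30, 0.60},
};

// Discretize the motion feature (std. dev. of MVD values) of the co-located
// CTU in the previous frame; the bin edges are invented for the sketch.
int quantize_feature(double mvd_stddev) {
    if (mvd_stddev < 2.0)  return 0;
    if (mvd_stddev < 8.0)  return 1;
    if (mvd_stddev < 24.0) return 2;
    return 3;
}

// MAP decision: argmax over f of P(o | f) * P(f), then look up the search range.
int decide_search_range(double colocated_mvd_stddev) {
    int o = quantize_feature(colocated_mvd_stddev);
    int best = 0;
    double best_posterior = -1.0;
    for (int f = 0; f < kBins; ++f) {
        double posterior = kLikelihood[f][o] * kPrior[f];
        if (posterior > best_posterior) { best_posterior = posterior; best = f; }
    }
    return kSR[best];
}

int main() {
    // Each decision depends only on the previous frame, so every CTU of the
    // current frame can be decided in parallel (e.g. one GPU thread per CTU).
    for (double s : {1.0, 5.0, 12.0, 40.0})
        std::printf("co-located MVD stddev %5.1f -> search range %d\n",
                    s, decide_search_range(s));
}
```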

Development of an Automatic Generation Methodology for Digital Elevation Models using a Two-Dimensional Digital Map (수치지형도를 이용한 DEM 자동 생성 기법의 개발)

  • Park, Chan-Soo;Lee, Seong-Kyu;Suh, Yong-Cheol
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.3 / pp.113-122 / 2007
  • The rapid growth of aerial survey and remote sensing technology has enabled the acquisition of very large amounts of geographic data, which should be analyzed using real-time visualization technology. The level-of-detail (LOD) algorithm is one of the most important elements for realizing real-time visualization. We chose the triangulated irregular network (TIN) method to generate normalized digital elevation model (DEM) data. First, we generated TIN data from the contour lines of a two-dimensional (2D) digital map and created a 2D grid array fitting the size of the area; we then generated normalized DEM data by calculating the intersection points between the TIN data and the points of the 2D grid array, using constrained Delaunay triangulation (CDT) and ray-triangle intersection algorithms in the respective steps. In addition, we simulated a three-dimensional (3D) terrain model based on the normalized DEM data with real-time visualization, using a Microsoft Visual C++ 6.0 program with the DirectX API and a quad-tree LOD algorithm. (A ray-triangle sampling sketch follows this entry.)

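A minimal sketch of the grid-sampling step the abstract describes: for each 2D grid point, a vertical ray is intersected with the TIN triangles using the standard Moller-Trumbore ray-triangle test, and the hit height becomes the DEM value. The brute-force loop over all triangles and the tiny example TIN are simplifications; the CDT construction and the quad-tree LOD renderer are not reproduced.

```cpp
#include <cmath>
#include <cstdio>
#include <optional>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 a, b, c; };

// Moller-Trumbore ray/triangle test; returns the distance t along the ray, if any.
std::optional<double> intersect(Vec3 org, Vec3 dir, const Triangle& tri) {
    const double eps = 1e-9;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;          // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 t = sub(org, tri.a);
    double u = dot(t, p) * inv;
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double dist = dot(e2, q) * inv;
    return dist > eps ? std::optional<double>(dist) : std::nullopt;
}

int main() {
    // A tiny TIN: two triangles forming a tilted quad over [0,10] x [0,10].
    std::vector<Triangle> tin = {
        {{0, 0, 5}, {10, 0, 5}, {10, 10, 15}},
        {{0, 0, 5}, {10, 10, 15}, {0, 10, 15}},
    };

    // Sample a 3x3 DEM grid by shooting rays straight down from above the terrain.
    for (int j = 0; j <= 2; ++j) {
        for (int i = 0; i <= 2; ++i) {
            Vec3 org{i * 5.0, j * 5.0, 100.0}, dir{0.0, 0.0, -1.0};
            double height = 0.0;                             // fallback for misses
            for (const auto& tri : tin)
                if (auto t = intersect(org, dir, tri)) { height = org.z - *t; break; }
            std::printf("%6.2f ", height);
        }
        std::printf("\n");
    }
}
```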

Study on Fast HEVC Encoding with Hierarchical Motion Vector Clustering (움직임 벡터의 계층적 군집화를 통한 HEVC 고속 부호화 연구)

  • Lim, Jeongyun;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering / v.21 no.4 / pp.578-591 / 2016
  • In this paper, a fast encoding algorithm for the High Efficiency Video Coding (HEVC) encoder is studied. For coding efficiency, the HEVC reference software divides the input image into Coding Tree Units (CTUs), each of which is then recursively split into CUs up to the maximum depth in a quad-tree for rate-distortion optimization (RDO); this is one of the main reasons the encoding process is so complex. To reduce this complexity, the paper proposes determining the maximum CU depth in a pre-processing step using hierarchical clustering, whose result summarizes how the motion vectors (MVs) of neighboring blocks combine. Experimental results show that the proposed method achieves an average 16% time saving with minimal BD-rate loss at 1080p resolution, and an average 45.13% time saving with 1.84% BD-rate loss when combined with a previous fast algorithm. (A toy clustering sketch follows this entry.)
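
A toy illustration of the pre-processing idea, not the paper's exact method: the motion vectors gathered for a CTU are grouped by single-linkage agglomerative clustering with a distance threshold, and the number of clusters caps the maximum CU depth (uniform motion allows a shallow quad-tree, scattered motion the full depth). The threshold and the cluster-to-depth mapping are invented for the sketch.

```cpp
#include <cmath>
#include <cstdio>
#include <set>
#include <vector>

struct MV { double x, y; };

double dist(const MV& a, const MV& b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Single-linkage agglomerative clustering: keep merging two clusters whenever
// they contain a pair of MVs closer than `tau`; return the final cluster count.
int count_clusters(const std::vector<MV>& mvs, double tau) {
    std::vector<int> label(mvs.size());
    for (size_t i = 0; i < mvs.size(); ++i) label[i] = (int)i;
    bool merged = true;
    while (merged) {
        merged = false;
        for (size_t i = 0; i < mvs.size(); ++i)
            for (size_t j = i + 1; j < mvs.size(); ++j)
                if (label[i] != label[j] && dist(mvs[i], mvs[j]) < tau) {
                    int from = label[j], to = label[i];
                    for (int& l : label)
                        if (l == from) l = to;          // merge cluster j into i
                    merged = true;
                }
    }
    return (int)std::set<int>(label.begin(), label.end()).size();
}

// Map the motion homogeneity of a CTU to a maximum quad-tree depth.
int max_cu_depth(const std::vector<MV>& ctu_mvs) {
    int clusters = count_clusters(ctu_mvs, /*tau=*/4.0);
    if (clusters <= 1) return 1;    // near-uniform motion: stop splitting early
    if (clusters == 2) return 2;
    return 3;                       // heterogeneous motion: allow full depth
}

int main() {
    std::vector<MV> uniform   = {{2, 1}, {2, 2}, {3, 1}, {2, 1}};
    std::vector<MV> scattered = {{0, 0}, {14, -9}, {-12, 6}, {25, 20}};
    std::printf("uniform CTU   -> max CU depth %d\n", max_cu_depth(uniform));
    std::printf("scattered CTU -> max CU depth %d\n", max_cu_depth(scattered));
}
```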

VLSI Array Architecture for High Speed Fractal Image Compression (고속 프랙탈 영상압축을 위한 VLSI 어레이 구조)

  • 성길영;이수진;우종호
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.4B / pp.708-714 / 2000
  • In this paper, a one-dimensional VLSI array for high-speed fractal image compression based on the quad-tree partitioning method is proposed. First, a single-assignment-code algorithm is derived from Fisher's sequential algorithm and its data dependence graph (DG) is obtained. A two-dimensional array is designed by projecting this DG along the optimal direction, and the one-dimensional VLSI array is then derived by transforming that two-dimensional array. The number of input/output pins can be reduced and the processing elements (PEs) simplified by sharing the input pins of the range and domain blocks and the internal arithmetic units of the PEs; PE utilization is further increased by reusing PEs across block sizes. For fractal compression of a 512x512 gray-scale image, the proposed array is about 67 times faster than the sequential algorithm, and its operation is verified by computer simulation. (A quad-tree partitioning sketch follows this entry.)

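The array architecture itself does not lend itself to a short software example, but the quad-tree partitioning it accelerates does. The sketch below splits a range block into four quadrants whenever it is not represented well enough, down to a minimum block size; a simple variance threshold stands in for the real criterion in Fisher's scheme (the best domain-block matching error), which keeps the example short. Sizes and the threshold are illustrative.

```cpp
#include <cstdio>
#include <vector>

constexpr int W = 16, H = 16, MIN_BLOCK = 2;

// Pixel variance of the size x size block at (x, y) in a row-major image.
double variance(const std::vector<double>& img, int x, int y, int size) {
    double sum = 0, sq = 0;
    for (int j = y; j < y + size; ++j)
        for (int i = x; i < x + size; ++i) {
            double p = img[j * W + i];
            sum += p;
            sq += p * p;
        }
    double n = (double)size * size, mean = sum / n;
    return sq / n - mean * mean;
}

// Recursively emit the quad-tree leaves (the range blocks a coder would encode).
void partition(const std::vector<double>& img, int x, int y, int size,
               double tol, std::vector<int>* leaves) {
    if (size > MIN_BLOCK && variance(img, x, y, size) > tol) {
        int h = size / 2;
        partition(img, x,     y,     h, tol, leaves);
        partition(img, x + h, y,     h, tol, leaves);
        partition(img, x,     y + h, h, tol, leaves);
        partition(img, x + h, y + h, h, tol, leaves);
        return;
    }
    leaves->push_back(size);
    std::printf("range block at (%2d,%2d), size %d\n", x, y, size);
}

int main() {
    // Flat image with one bright 4x4 patch: only the quadrant containing the
    // patch keeps splitting, the rest stays as large range blocks.
    std::vector<double> img(W * H, 10.0);
    for (int j = 4; j < 8; ++j)
        for (int i = 4; i < 8; ++i) img[j * W + i] = 200.0;

    std::vector<int> leaves;
    partition(img, 0, 0, W, /*tol=*/25.0, &leaves);
    std::printf("%zu range blocks\n", leaves.size());
}
```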

The Reduction of Blocky Artifacts in Conditional Replenishment Algorithm for SC-MMH 3DTV Systems (융합형 3DTV를 위한 조건부대체 알고리즘에서의 블록화 현상 제거)

  • Kim, Ji Won;Kim, Sung-Hoon;Kim, Hui Yong;Kim, Ki-Doo;Jung, Kyeong-Hoon
    • Journal of Broadcast Engineering / v.22 no.1 / pp.15-27 / 2017
  • The conditional replenishment algorithm (CRA) was proposed to improve the visual quality of SC-MMH (Service Compatible 3DTV using Main and Mobile Hybrid delivery), a hybrid 3DTV system standardized by ATSC. In an SC-MMH system the reference view and the additional view may have different resolutions and/or encoding methods, so the additional view must be enlarged to the size of the reference view before the 3D view is reconstructed. Although the performance of CRA is quite satisfactory, blocky artifacts can appear in the enlarged view because CRA uses block-shaped processing units arranged in a quad-tree structure. This paper analyzes the main causes of the blocky artifacts in CRA and shows that they can be suppressed by applying a deblocking filter at the receiver side. (A simplified deblocking sketch follows this entry.)
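
A simplified receiver-side deblocking pass, illustrating the principle rather than the HEVC deblocking filter the paper actually applies: pixels on both sides of every vertical block edge are low-pass filtered only when the step across the edge is small, so genuine image edges are left untouched. The block size, threshold, and filter taps are assumptions made for the sketch.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

constexpr int W = 32, H = 8, BLOCK = 8, THRESH = 24;

// Smooth across every vertical block boundary of a row-major 8-bit-style image.
void deblock_vertical(std::vector<int>& img) {
    for (int x = BLOCK; x < W; x += BLOCK)              // every vertical block edge
        for (int y = 0; y < H; ++y) {
            int* row = &img[y * W];
            int p0 = row[x - 1], q0 = row[x];           // pixels left/right of the edge
            if (std::abs(p0 - q0) >= THRESH) continue;  // keep strong (real) edges
            int p1 = row[x - 2], q1 = row[x + 1];
            row[x - 1] = (p1 + 2 * p0 + q0 + 2) / 4;    // weak low-pass across the edge
            row[x]     = (p0 + 2 * q0 + q1 + 2) / 4;
        }
}

int main() {
    // Two flat blocks whose reconstruction levels differ slightly: a visible
    // 100 -> 110 step sits exactly on the block boundary at x = 8.
    std::vector<int> img(W * H, 100);
    for (int y = 0; y < H; ++y)
        for (int x = 8; x < 16; ++x) img[y * W + x] = 110;

    deblock_vertical(img);
    for (int x = 5; x < 12; ++x) std::printf("%d ", img[x]);   // first row, near the edge
    std::printf("\n");
}
```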

Complexity-based Sample Adaptive Offset Parallelism (복잡도 기반 적응적 샘플 오프셋 병렬화)

  • Ryu, Eun-Kyung;Jo, Hyun-Ho;Seo, Jung-Han;Sim, Dong-Gyu;Kim, Doo-Hyun;Song, Joon-Ho
    • Journal of Broadcast Engineering / v.17 no.3 / pp.503-518 / 2012
  • In this paper, we propose a complexity-based parallelization method for the sample adaptive offset (SAO) algorithm, one of the HEVC in-loop filters. SAO is a region-based process in which the regions are obtained and represented with a quad-tree scheme, and an offset that minimizes the reconstruction error is sent for each partitioned region. SAO can be parallelized at the data level, but because the sizes and complexities of the SAO regions are irregular, workload imbalance arises on multi-core platforms. We therefore propose an LCU-based SAO algorithm together with a per-LCU complexity prediction algorithm. With the proposed complexity-based LCU processing, the algorithm is 2.38 times faster than the sequential implementation and 21% faster than a regular parallel SAO implementation. (A minimal load-balancing sketch follows this entry.)
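
A minimal sketch of the load-balancing step under stated assumptions: each LCU receives a predicted SAO cost (the paper derives this from its complexity model; dummy numbers are used here), and LCUs are assigned greedily, largest first, to the currently least-loaded thread, so per-thread work is far more even than a plain row-by-row split.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Lcu { int index; double predicted_cost; };

// Longest-processing-time-first assignment; returns, for each worker thread,
// the list of LCU indices it should filter, and prints the resulting loads.
std::vector<std::vector<int>> balance(std::vector<Lcu> lcus, int workers) {
    std::sort(lcus.begin(), lcus.end(),
              [](const Lcu& a, const Lcu& b) { return a.predicted_cost > b.predicted_cost; });
    std::vector<std::vector<int>> plan(workers);
    std::vector<double> load(workers, 0.0);
    for (const auto& lcu : lcus) {
        int w = (int)(std::min_element(load.begin(), load.end()) - load.begin());
        plan[w].push_back(lcu.index);           // give the LCU to the lightest worker
        load[w] += lcu.predicted_cost;
    }
    for (int w = 0; w < workers; ++w)
        std::printf("worker %d: %zu LCUs, predicted load %.1f\n",
                    w, plan[w].size(), load[w]);
    return plan;
}

int main() {
    // Dummy predicted costs: a few LCUs (e.g. detailed regions) dominate.
    std::vector<Lcu> lcus;
    for (int i = 0; i < 16; ++i)
        lcus.push_back({i, (i % 5 == 0) ? 9.0 : 1.5});
    balance(lcus, 4);   // each worker would then filter its plan[w] LCUs concurrently
}
```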

Human Visual Perception-Based Quantization For Efficiency HEVC Encoder (HEVC 부호화기 고효율 압축을 위한 인지시각 특징기반 양자화 방법)

  • Kim, Young-Woong;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering / v.22 no.1 / pp.28-41 / 2017

Cloud P2P OLAP: Query Processing Method and Index structure for Peer-to-Peer OLAP on Cloud Computing (Cloud P2P OLAP: 클라우드 컴퓨팅 환경에서의 Peer-to-Peer OLAP 질의처리기법 및 인덱스 구조)

  • Joo, Kil-Hong;Kim, Hun-Dong;Lee, Won-Suk
    • Journal of Internet Computing and Services / v.12 no.4 / pp.157-172 / 2011
  • Recent studies on distributed OLAP have mainly focused on DHT P2P OLAP and Grid OLAP, but both approaches have weaknesses: P2P OLAP is limited for multidimensional range queries in a cloud computing environment because of the nature of structured P2P, while Grid OLAP concentrates on its own subset lookup algorithm and disregards adjacency and time series. To overcome these limits, this paper proposes an efficient centrally managed P2P approach for a cloud computing environment. Combining a multi-level hybrid P2P method with an index load distribution scheme improves multidimensional range query performance, and the scheme lets one user's OLAP query results be reused by other users' volatile cube searches. For this purpose, the paper examines the combination of an aggregation cube hierarchy tree, a quad-tree, and an interval tree as an efficient index structure, so that the proposed Cloud P2P OLAP scheme can handle the adjacency and time-series aspects of an OLAP query. The performance of the proposed scheme is analyzed in a series of experiments. (A minimal interval-tree sketch follows this entry.)
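
A minimal sketch of the time-series side of the combined index: an interval tree, here an unbalanced BST keyed on interval start with each node keeping the maximum end time of its subtree, returns the cached cube fragments whose validity interval overlaps a queried time range. How the paper stacks this with the quad-tree and the aggregation cube hierarchy tree is not reproduced, and all names are illustrative.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct Node {
    long lo, hi;                  // interval [lo, hi], e.g. timestamps of a cached cube
    int cube_id;
    long max_hi;                  // maximum `hi` in this subtree, used for pruning
    std::unique_ptr<Node> left, right;
};

std::unique_ptr<Node> insert(std::unique_ptr<Node> root, long lo, long hi, int id) {
    if (!root) return std::unique_ptr<Node>(new Node{lo, hi, id, hi});
    if (lo < root->lo) root->left  = insert(std::move(root->left),  lo, hi, id);
    else               root->right = insert(std::move(root->right), lo, hi, id);
    if (hi > root->max_hi) root->max_hi = hi;
    return root;
}

// Collect every stored interval overlapping [qlo, qhi].
void query(const Node* n, long qlo, long qhi, std::vector<int>& out) {
    if (!n || n->max_hi < qlo) return;            // nothing in this subtree can overlap
    query(n->left.get(), qlo, qhi, out);
    if (n->lo <= qhi && qlo <= n->hi) out.push_back(n->cube_id);
    if (n->lo <= qhi) query(n->right.get(), qlo, qhi, out);  // right subtree starts at or after n->lo
}

int main() {
    std::unique_ptr<Node> root;
    root = insert(std::move(root), 100, 200, 1);   // cached cube fragments and their
    root = insert(std::move(root),  50, 120, 2);   // time-validity intervals
    root = insert(std::move(root), 300, 400, 3);

    std::vector<int> hits;
    query(root.get(), 110, 150, hits);             // OLAP query over time range [110, 150]
    for (int id : hits) std::printf("reuse cached cube %d\n", id);
}
```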