• Title/Summary/Keyword: Set Partitioning Problem

An Efficient Clustering Algorithm for Massive GPS Trajectory Data (대용량 GPS 궤적 데이터를 위한 효율적인 클러스터링)

  • Kim, Taeyong; Park, Bokuk; Park, Jinkwan; Cho, Hwan-Gue
    • Journal of KIISE, v.43 no.1, pp.40-46, 2016
  • Digital road maps are generated primarily from satellite photography or on-site manual surveys, so creating and updating them requires considerable time and budget. Consequently, researchers have tried to build automated map-generation systems using GPS trajectory data collected by public vehicles. A fundamental problem in this procedure is extracting representative trajectories, such as main roads. Extraction requires a base set of piecewise line segments (GPS trajectories) with close starting and ending points, so geometrically similar trajectories must first be clustered before one representative trajectory is extracted from each cluster. This paper proposes a new divide-and-conquer approach that partitions the whole map region into regular grid sub-spaces and then finds similar trajectories by sweeping. The Fréchet distance is applied to measure the similarity between a pair of trajectories. Experiments on real GPS data comprising more than 500 vehicle trajectories from Gangnam-gu, Seoul show that the grid-partitioning approach is fast and stable and can be used in real applications for vehicle trajectory clustering.
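
The Fréchet measure named in the abstract is commonly evaluated in its discrete form. A minimal sketch of the discrete Fréchet distance (the Eiter-Mannila dynamic program), offered as an illustration rather than the paper's exact implementation:

```python
import math

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polylines p and q,
    each a list of (x, y) points."""
    n, m = len(p), len(q)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j],
                                   ca[i - 1][j - 1],
                                   ca[i][j - 1]), d)
    return ca[n - 1][m - 1]
```

In a grid-partitioned pipeline like the one described, this distance would only be computed for trajectory pairs that share or neighbor a grid cell, which is what keeps the divide-and-conquer approach fast.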

Partitioning likelihood method in the analysis of non-monotone missing data

  • Kim, Jae-Kwang
    • Proceedings of the Korean Statistical Society Conference, 2004.11a, pp.1-8, 2004
  • We address the problem of parameter estimation in multivariate distributions under ignorable non-monotone missing data. The factoring likelihood method for monotone missing data, introduced by Rubin (1974), is extended to the more general case of non-monotone missing data. The proposed method is algebraically equivalent to the Newton-Raphson method applied to the observed likelihood, but it avoids the burden of computing the first and second partial derivatives of the observed likelihood. Instead, the maximum likelihood estimates and their information matrices are computed separately for each partition of the data set and combined naturally using the generalized least squares method. A numerical example illustrates the method.
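
The combination step described, partition-wise MLEs merged by generalized least squares, reduces in the simplest case to a precision-weighted average. A minimal sketch under the simplifying assumption that every partition estimates the same full parameter vector (the paper's method also handles partitions that identify only a subset of the parameters):

```python
import numpy as np

def gls_combine(estimates, infos):
    """Combine per-partition MLEs theta_k with information matrices I_k
    via GLS: theta_hat = (sum_k I_k)^(-1) * sum_k (I_k @ theta_k).
    estimates: list of 1-D arrays; infos: list of matching square matrices."""
    total_info = sum(infos)                       # combined information matrix
    weighted = sum(I @ t for I, t in zip(infos, estimates))
    return np.linalg.solve(total_info, weighted), total_info
```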

An Attribute Replicating Vertical Partition Method by Genetic Algorithm in the Physical Design of Relational Database (관계형 데이터베이스의 물리적 설계에서 유전해법을 이용한 속성 중복 수직분할 방법)

  • 유종찬; 김재련
    • Journal of Korean Society of Industrial and Systems Engineering, v.21 no.46, pp.33-49, 1998
  • To improve the performance of a relational database, one has to reduce the number of disk accesses needed to transfer data from disk to main memory. This paper proposes reducing disk I/O by vertically partitioning a relation into fragments and replicating attributes across fragments where necessary. Solving the corresponding zero-one integer programming model with the branch-and-bound method requires excessive computing time on large problems, so heuristic solutions using a genetic algorithm (GA) are presented. The GA adopts several ideas that differ from traditional genetic algorithms, such as a rank-based sharing fitness function and elitism. To improve GA performance, a set of optimal parameter levels is determined experimentally and used. When relations are vertically partitioned with attribute replication and stored on disk, the GA-based method attains lower access cost than a non-replicating method and requires less computing time than branch-and-bound on large problems, while obtaining solutions close to the optimum on small problems.
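
To make the encoding concrete: a chromosome can be read as an attribute-by-fragment 0/1 matrix in which an attribute may appear in several fragments (replication). The sketch below shows rank-based selection and elitism in that setting; the cost function, operators, and parameter values are illustrative placeholders, not the paper's tuned design (in particular, the paper's rank-based sharing fitness is more elaborate than plain rank selection):

```python
import random

def ga_vertical_partition(n_attrs, n_frags, cost, pop_size=40, gens=200,
                          p_mut=0.02, n_elite=2):
    """Toy GA for attribute-replicating vertical partitioning.
    A chromosome is an n_attrs x n_frags 0/1 matrix: chrom[a][f] = 1 means
    attribute a is stored (possibly replicated) in fragment f.
    `cost` is a caller-supplied disk-access cost function (hypothetical)."""
    def random_chrom():
        c = [[random.randint(0, 1) for _ in range(n_frags)]
             for _ in range(n_attrs)]
        for row in c:                        # every attribute must live somewhere
            if not any(row):
                row[random.randrange(n_frags)] = 1
        return c

    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=cost)                # cheapest first
        weights = [len(ranked) - i for i in range(len(ranked))]  # rank weights
        nxt = [[row[:] for row in c] for c in ranked[:n_elite]]  # elitism
        while len(nxt) < pop_size:
            pa, pb = random.choices(ranked, weights=weights, k=2)
            cut = random.randrange(1, n_attrs)        # one-point crossover
            child = [row[:] for row in pa[:cut] + pb[cut:]]
            for row in child:                         # bit-flip mutation
                for f in range(n_frags):
                    if random.random() < p_mut:
                        row[f] ^= 1
                if not any(row):                      # repair empty rows
                    row[random.randrange(n_frags)] = 1
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```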

A Study on Error-Resilient, Scalable Video Codecs Based on the Set Partitioning in Hierarchical Trees (SPIHT) Algorithm (계층적 트리의 집합 분할 알고리즘(SPIHT)에 기반한 에러에 강하고 가변적인 웨이브렛 비디오 코덱에 관한 연구)

  • Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.1, pp.37-43, 2023
  • Compressed still-image and video bitstreams require protection from channel errors on wireless channels. Embedded zerotree wavelet coding (EZW) and SPIHT provide high compression performance at low complexity, but because these wavelet zerotree algorithms produce variable-length codewords, they are extremely sensitive to bit errors: a single error introduced during wireless transmission breaks the synchronization between encoder and decoder and causes serious performance degradation. The idea of this paper is to partition the lifting coefficients: dividing the lifting-transform coefficients into many partitions confines a channel error to the partition in which it occurs. This mitigates the synchronization problem that caused quality deterioration in still images and video streams.
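
The core error-resilience idea, confining a bit error to one of many independently decodable partitions, can be sketched as a simple interleaving of the lifting coefficients (an illustration of the principle, not the paper's codec):

```python
def partition_coefficients(coeffs, k):
    """Interleave a coefficient sequence into k partitions that are then
    entropy-coded independently; a channel error desynchronizes only the
    partition it lands in, while the other k-1 still decode."""
    return [coeffs[i::k] for i in range(k)]

def merge_partitions(parts):
    """Reassemble the original coefficient order from the partitions."""
    out = []
    for i in range(max(len(p) for p in parts)):
        for p in parts:
            if i < len(p):
                out.append(p[i])
    return out
```

For any list `c`, `merge_partitions(partition_coefficients(c, 4)) == c`, so the split costs nothing in a clean channel.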

SPIHT-based Subband Division Compression Method for High-resolution Image Compression (고해상도 영상 압축을 위한 SPIHT 기반의 부대역 분할 압축 방법)

  • Kim, Woosuk; Park, Byung-Seo; Oh, Kwan-Jung; Seo, Young-Ho
    • Journal of Broadcast Engineering, v.27 no.2, pp.198-206, 2022
  • This paper proposes a method to solve problems that can occur when SPIHT (set partitioning in hierarchical trees) is used in a dedicated codec for compressing ultra-high-resolution complex holograms. Codecs for complex holograms can be developed either by creating dedicated compression methods or by using anchor codecs such as HEVC and JPEG2000 with added post-processing techniques. Creating a dedicated compression method requires a separate transform tool to analyze the spatial characteristics of complex holograms. Subband-level zerotree algorithms such as EZW and SPIHT have the problem that, when coding high-resolution images, subband information is not transmitted intact during bitstream rate control. This paper proposes dividing the wavelet subbands to solve this problem: by compressing each divided subband separately, information is kept uniform across the subbands. The proposed method achieved better PSNR in the restored images than the existing method.
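
A sketch of the subband-division step: each wavelet subband is cut into a grid of blocks that are compressed independently, so truncating the bitstream of one block cannot starve the others. The tile counts and the NumPy-based interface are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def divide_subband(band, tiles_y, tiles_x):
    """Split one 2-D wavelet subband into tiles_y x tiles_x blocks, each of
    which would be handed to its own SPIHT coder with its own rate budget,
    keeping information uniform across the subband."""
    h, w = band.shape
    th, tw = h // tiles_y, w // tiles_x   # assumes divisible dimensions
    return [band[y*th:(y+1)*th, x*tw:(x+1)*tw].copy()
            for y in range(tiles_y) for x in range(tiles_x)]
```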

Classification of the Seoul Metropolitan Subway Stations using Graph Partitioning (그래프 분할을 이용한 서울 수도권 지하철역들의 분류)

  • Park, Jong-Soo; Lee, Keum-Sook
    • Journal of the Economic Geographical Society of Korea, v.15 no.3, pp.343-357, 2012
  • The Seoul metropolitan subway system can be represented by a graph consisting of nodes and edges. In this paper, we classify subway stations and study the trip behaviour of subway passengers by partitioning the graph of the subway system into roughly equal groups. The weight of each edge is set to the number of passengers who pass over it, extracted from the transportation-card transaction database. Since the graph partitioning problem is NP-complete, we propose a heuristic algorithm to partition the subway graph. The heuristic uses one of two alternative objective functions: one minimizes the total weight of edges connecting nodes in different groups; the other maximizes the ratio of passengers who board at one station and alight at another station in the same group to the total number of subway passengers. In the experimental results, we illustrate the subway stations and edges of each group by colour on a map and analyze passenger trip behaviour via the group origin-destination matrix.
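
The first objective function, minimizing the total weight of edges that cross group boundaries, is easy to state in code. A minimal sketch of that cut-weight objective plus one greedy refinement pass with a balance constraint (a generic local-move heuristic, not the paper's exact algorithm):

```python
def cut_weight(group, edges):
    """Total passenger weight on edges whose endpoints lie in different groups.
    group: dict node -> group id in range(k); edges: iterable of (u, v, w)."""
    return sum(w for u, v, w in edges if group[u] != group[v])

def refine(group, edges, k, balance=1.2):
    """One greedy pass: move a station to another group when that lowers the
    cut and keeps every group within `balance` times the average size."""
    nodes = list(group)
    target = len(nodes) / k
    sizes = [sum(1 for n in nodes if group[n] == g) for g in range(k)]
    for u in nodes:
        base = cut_weight(group, edges)
        best_g, best_gain = group[u], 0
        for g in range(k):
            if g == group[u] or sizes[g] + 1 > balance * target:
                continue
            old_g, group[u] = group[u], g     # try the move
            gain = base - cut_weight(group, edges)
            group[u] = old_g                  # undo it
            if gain > best_gain:
                best_g, best_gain = g, gain
        if best_g != group[u]:                # commit the best move found
            sizes[group[u]] -= 1
            sizes[best_g] += 1
            group[u] = best_g
    return group
```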

New Optimization Algorithm for Data Clustering (최적화에 기반 한 데이터 클러스터링 알고리즘)

  • Kim, Ju-Mi
    • Journal of Intelligence and Information Systems, v.13 no.3, pp.31-45, 2007
  • Handling large data sets is one of the critical issues facing the data mining community, particularly for computationally intensive tasks such as data clustering. Random sampling of instances is one means of handling large data, but a pervasive problem with this approach is dealing with the noise it introduces into the evaluation of the learning algorithm. This paper develops a new optimization-based clustering approach using an algorithm specifically designed for noisy performance evaluation. Numerical results show that this algorithm outperforms algorithms such as PAM and CLARA. Moreover, using partial data, it achieves substantial savings in computation time without sacrificing solution quality.
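
The sampling-plus-noise setting can be illustrated with a medoid search whose objective is estimated on random subsamples and averaged over a few replications. This is a simplified stand-in for the paper's noise-aware optimization algorithm, with all parameter values illustrative:

```python
import random

def sample_cost(medoids, data, sample_size, dist):
    """Noisy cost estimate: average distance from a random subsample of
    `data` (a list) to its nearest medoid."""
    sample = random.sample(data, min(sample_size, len(data)))
    return sum(min(dist(x, m) for m in medoids) for x in sample) / len(sample)

def noisy_search(data, k, dist, iters=200, sample_size=100, reps=5):
    """Random-restart medoid search that averages several noisy sample
    evaluations per candidate to tolerate the sampling noise."""
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = random.sample(data, k)
        c = sum(sample_cost(cand, data, sample_size, dist)
                for _ in range(reps)) / reps
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```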

Multiple Description Coding using Whitening Transform

  • Park, Kwang-Pyo; Lee, Keun-Young
    • Proceedings of the IEEK Conference, 2002.07b, pp.1003-1006, 2002
  • In communication systems with diversity, we commonly need a new source coding technique: error-resilient coding, that is, coding algorithms robust to the unreliability of the communication channel. In recent years, many error-resilient coding techniques have been proposed, such as data partitioning, resynchronization, error detection, concealment, reference picture selection, and multiple description coding (MDC). In particular, MDC using a correlating transform explicitly adds correlation between two descriptions so that one can be estimated from the other. However, the conventional correlating-transform method has a critical problem: the decoder must know the statistics of the original image. In this paper, we propose an enhanced method, MDC using a whitening transform, which needs no additional statistical information to decode the image, because the DCT coefficients of a whitened image have unit-variance statistics. Our experimental results show that the proposed method achieves a good trade-off between coding efficiency and reconstruction quality: at 1.0 bpp, the PSNR of images reconstructed from two descriptions is about 0.7 dB higher than the conventional method, and from only one description about 1.8 dB higher at the same rate.
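
The two-channel transform idea is standard enough to sketch: a pairwise correlating transform turns a coefficient pair (a, b) into descriptions (c, d), and whitening (unit-variance coefficients) is what lets the decoder estimate a missing description without side statistics. A minimal sketch under that unit-variance assumption:

```python
import numpy as np

def make_descriptions(a, b):
    """Pairwise correlating transform: coefficients (a, b) become the two
    descriptions (c, d), each sent over a separate channel."""
    return (a + b) / np.sqrt(2.0), (a - b) / np.sqrt(2.0)

def reconstruct(c=None, d=None):
    """Both descriptions invert exactly; one alone yields the conditional-mean
    estimate, which needs no image statistics precisely because the
    coefficients are assumed whitened to unit variance."""
    if c is not None and d is not None:
        return (c + d) / np.sqrt(2.0), (c - d) / np.sqrt(2.0)
    if c is not None:                 # lost d: E[a|c] = E[b|c] = c / sqrt(2)
        est = c / np.sqrt(2.0)
        return est, est
    est = d / np.sqrt(2.0)            # lost c: E[a|d] = d / sqrt(2) = -E[b|d]
    return est, -est
```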

Parallel 2D-DWT Hardware Architecture for Image Compression Using the Lifting Scheme (이미지 압축을 위한 Lifting Scheme을 이용한 병렬 2D-DWT 하드웨어 구조)

  • Kim, Jong-Woog; Chong, Jong-Wha
    • Journal of IKEEE, v.6 no.1 s.10, pp.80-86, 2002
  • This paper presents a fast hardware architecture for the 2-D DWT (discrete wavelet transform) computed in the lifting-scheme framework. Conventional 2-D DWT hardware architectures have problems with internal memory, hardware resources, and latency. The proposed architecture is based on a 4-way partitioned data set and is organized as a pipelined parallel architecture for the 4-way partitioning method. With this architecture, total latency is improved by 50%, and memory size is reduced by using the lifting scheme.
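
For reference, one level of the lifting scheme the hardware implements consists of a predict step on the odd samples followed by an update step on the even samples. A 1-D integer LeGall 5/3 sketch follows; the paper's 2-D transform applies such passes along rows and columns, and the 5/3 kernel is an assumption here, since the abstract does not name the filter:

```python
def lifting_53_forward(x):
    """One level of the LeGall 5/3 integer lifting DWT on an even-length
    1-D signal; returns (approximation s, detail d)."""
    even, odd = list(x[0::2]), list(x[1::2])
    n = len(odd)
    # Predict: each odd sample minus the mean of its even neighbours
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(n)]
    # Update: each even sample plus a correction from neighbouring details
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d
```

Splitting the input into four quadrants and running such pipelines in parallel is the intuition behind the 4-way partitioning and the reported latency gain.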

Goal-oriented multi-collision source algorithm for discrete ordinates transport calculation

  • Wang, Xinyu; Zhang, Bin; Chen, Yixue
    • Nuclear Engineering and Technology, v.54 no.7, pp.2625-2634, 2022
  • Discretization error is an extremely challenging problem in discrete ordinates calculations for radiation transport problems with void regions. In previous work, we presented a multi-collision source (MCS) method to overcome discretization errors, but its efficiency needed improvement. This paper proposes a goal-oriented algorithm for the MCS method that adaptively determines the partitioning of the geometry and dynamically changes the angular quadrature over the remaining iterations. An importance factor based on an adjoint transport calculation provides the response function used to obtain a problem-dependent, goal-oriented spatial decomposition. The difference between the scalar fluxes of a high-order quadrature set and a lower-order one provides the error estimate that drives the dynamic quadrature. The goal-oriented algorithm optimizes by using ray-tracing technology or high-order quadrature sets in the first few iterations and arranging the integration order of the remaining iterations from high to low. The algorithm has been implemented in the 3D transport code ARES and tested on the Kobayashi benchmarks. The numerical results show reduced computation time for the same desired level of accuracy compared with the standard ARES code, and clear advantages over the traditional MCS method on radiation transport problems with reflective boundary conditions.
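
The flux-difference test that drives the dynamic quadrature can be sketched as follows. Here `solve(order)` is a hypothetical transport driver returning a scalar-flux array; it stands in for a full discrete ordinates sweep, and the tolerance logic is an illustration of the idea, not the ARES implementation:

```python
import numpy as np

def choose_quadrature(solve, orders, tol):
    """Compare scalar fluxes from successively lower quadrature orders
    against the highest-order result and keep the lowest order whose
    relative flux difference stays under `tol`."""
    high = max(orders)
    ref = solve(high)
    for n in sorted(o for o in orders if o != high):
        phi = solve(n)
        err = np.max(np.abs(phi - ref) / np.maximum(np.abs(ref), 1e-30))
        if err < tol:
            return n      # lower-order set is accurate enough for the goal
    return high
```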