• Title/Summary/Keyword: Merge Algorithm

171 search results.

A study on Web-based Video Panoramic Virtual Reality for Hoseo Cyber Shell Museum (비디오 파노라마 가상현실을 기반으로 하는 호서 사이버 패류 박물관의 연구)

  • Hong, Sung-Soo;Khan, Irfan;Kim, Chang-ki
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.11a
    • /
    • pp.1468-1471
    • /
    • 2012
  • Recreating the experience of a particular place has always been a dream; panoramic virtual reality is a technology for creating virtual environments in which the viewer can maneuver the viewing angle and select the path of view in a dynamic scene. In this paper we examine an efficient algorithm for image registration and stitching of images captured from a video stream. Two approaches are studied. In the first, dynamic programming is used to spot ideal key points and match these points to merge adjacent images together; image blending is then used for smooth color transitions. In the second, FAST and SURF detection are used to find distinct features in the images, a nearest-neighbor algorithm is used to match corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically choosing (recognizing and comparing) the images to stitch.

Development of a Markov Chain Monte Carlo parameter estimation pipeline for compact binary coalescences with KAGRA GW detector (카그라 마코브 체인 몬테칼로 모수 추정 파이프라인 분석 개발과 밀집 쌍성의 물리량 측정)

  • Kim, Chunglee;Jeon, Chaeyeon;Lee, Hyung Won;Kim, Jeongcho;Tagoshi, Hideyuki
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.45 no.1
    • /
    • pp.51.3-52
    • /
    • 2020
  • We present the status of the development of a Markov Chain Monte Carlo (MCMC) parameter estimation (PE) pipeline for compact binary coalescences (CBCs) with the Japanese KAGRA gravitational-wave (GW) detector. The pipeline is included in the KAGRA Algorithm Library (KAGALI). Basic functionalities are benchmarked against the LIGO Algorithm Library (LALSuite), but the KAGRA MCMC PE pipeline will provide a simpler, memory-efficient way to estimate physical parameters from gravitational waves emitted by compact binaries consisting of black holes or neutron stars. Applying inspiral-merger-ringdown and inspiral waveforms, we performed simulations of various black hole binaries as a code sanity check and performance test. In this talk, we also present the situation of GW observation under the Covid-19 pandemic. In addition to preliminary PE results with the KAGALI MCMC PE pipeline, we discuss how we can optimize a CBC PE pipeline toward the next observation run.
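
The MCMC core of such a PE pipeline can be illustrated with a minimal random-walk Metropolis sampler; the 1-D toy posterior, step size, and seed below are arbitrary illustrative choices, not KAGALI code:

```python
import math
import random

def metropolis(log_post, theta0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler over one parameter theta."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

With a Gaussian likelihood around simulated data, the post-burn-in chain mean recovers the injected parameter value.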

Feature Weighting in Projected Clustering for High Dimensional Data (고차원 데이타에 대한 투영 클러스터링에서 특성 가중치 부여)

  • Park, Jong-Soo
    • Journal of KIISE:Databases
    • /
    • v.32 no.3
    • /
    • pp.228-242
    • /
    • 2005
  • Projected clustering seeks to find clusters in different subspaces within a high-dimensional dataset. We propose an algorithm to discover near-optimal projected clusters without user-specified parameters such as the number of output clusters and the average cardinality of the subspaces of projected clusters. The objective function of the algorithm computes the projected energy, the quality, and the number of outliers at each step of clustering. To minimize the projected energy and maximize the quality of the clustering, we find the best subspace of each cluster from the density of input points by comparing standard deviations against the full dimensionality. A weighting factor for each dimension of the subspace is used to get rid of probable errors in measuring projected distances. Our extensive experiments show that the algorithm discovers projected clusters accurately and scales to large data sets.
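
The subspace-selection idea — keep a dimension when the cluster's spread along it is clearly smaller than the full data's spread — might be sketched as follows; the 0.5 cutoff is an illustrative assumption, not the paper's objective function:

```python
import math

def select_subspace(points, all_points):
    """Pick subspace dimensions for a candidate projected cluster by
    comparing per-dimension standard deviations against the full data."""
    def std(vals):
        m = sum(vals) / len(vals)
        return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

    dims = len(points[0])
    chosen = []
    for d in range(dims):
        s_cluster = std([p[d] for p in points])
        s_full = std([p[d] for p in all_points])
        if s_cluster < 0.5 * s_full:   # assumed threshold for illustration
            chosen.append(d)
    return chosen
```

A per-dimension weight (e.g. inversely proportional to `s_cluster`) would then de-emphasize noisy dimensions when measuring projected distances.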

Efficient Color Image Segmentation using SOM and Grassfire Algorithm (SOM과 grassfire 기법을 이용한 효율적인 컬러 영상 분할)

  • Hwang, Young-Chul;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.08a
    • /
    • pp.142-145
    • /
    • 2008
  • This paper proposes a computationally efficient algorithm for color image segmentation using a self-organizing map (SOM) and the grassfire algorithm. We reduce computation time by decreasing the number of input neurons and the amount of input data used for learning in the SOM. The input image is first converted to the CIE $L^*u^*v^*$ color space, and the learning stage is run with three SOM input neurons and a 4x4 or 5x5 output neuron structure. After learning, the output value corresponding to each input pixel is computed, and adjacent pixels with the same output value are merged into segments using the grassfire algorithm. Experimental results with various images show that the proposed method leads to better segmentation results than other methods.
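
The grassfire merging of adjacent pixels with the same SOM output value can be sketched as a flood fill over a label grid:

```python
def grassfire_segments(labels):
    """Group 4-connected pixels sharing a label into segments.

    labels: 2-D list of per-pixel SOM output values.
    Returns (segment-id grid, number of segments).
    """
    h, w = len(labels), len(labels[0])
    seg = [[-1] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if seg[y][x] != -1:
                continue
            # burn outward from an unvisited seed pixel
            stack, value = [(y, x)], labels[y][x]
            seg[y][x] = next_id
            while stack:
                cy, cx = stack.pop()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and seg[ny][nx] == -1 and labels[ny][nx] == value:
                        seg[ny][nx] = next_id
                        stack.append((ny, nx))
            next_id += 1
    return seg, next_id
```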

Improved Minimum Spanning Tree based Image Segmentation with Guided Matting

  • Wang, Weixing;Tu, Angyan;Bergholm, Fredrik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.1
    • /
    • pp.211-230
    • /
    • 2022
  • In image segmentation, when objects (targets) and background are intertwined, their common boundaries are vague, their textures are similar, and the targets vary greatly, deep learning can be difficult to apply. Hence, a new method based on graph theory and guided feathering is proposed. First, a guided feathering algorithm roughly separates the objects from the background; the image is then split into two images, a foreground image and a background image; each is segmented accurately using the improved graph-based algorithm; and finally the two segmented images are merged together as the final segmentation result. The graph-based algorithm improves on MST-based segmentation in three main aspects: (1) the intra-regional and inter-regional difference functions; (2) the edge-weight function; and (3) a re-merge mechanism after segmentation in the graph mapping. Compared to traditional algorithms such as region merging, ordinary MST, and thresholding, the studied algorithm achieves better segmentation accuracy and effect.
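
The graph-based step belongs to the family of MST-style (Felzenszwalb-flavoured) segmentation; a minimal union-find sketch of that family follows — the merging predicate and constant `k` are the standard textbook form, not the paper's exact weight or re-merge functions:

```python
def mst_segment(n_nodes, edges, k=1.0):
    """Process edges by increasing weight; merge two components when the
    edge weight does not exceed either side's internal variation + k/size.

    edges: iterable of (weight, node_a, node_b).
    Returns a component label per node.
    """
    parent = list(range(n_nodes))
    size = [1] * n_nodes
    internal = [0.0] * n_nodes   # max internal edge weight per component

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for w, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        # merge only if the edge is no heavier than either side tolerates
        if w <= min(internal[ra] + k / size[ra], internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = max(internal[ra], internal[rb], w)
    return [find(i) for i in range(n_nodes)]
```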

Segmented Douglas-Peucker Algorithm Based on the Node Importance

  • Wang, Xiaofei;Yang, Wei;Liu, Yan;Sun, Rui;Hu, Jun;Yang, Longcheng;Hou, Boyang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.4
    • /
    • pp.1562-1578
    • /
    • 2020
  • A vector data compression algorithm can meet requirements at different levels and scales by reducing the data volume of vector graphics, thereby reducing transmission, processing time, and storage overhead. Given that a large threshold leads to comparatively large error in the Douglas-Peucker vector data compression algorithm, which struggles to maintain shape features under the uncertainty of threshold selection, a segmented Douglas-Peucker algorithm based on node importance is proposed. First, the algorithm uses the vertical chord ratio as the main feature to detect and extract the critical points that contribute most to the shape of the curve, so as to preserve its basic shape. Then, combined with a radial distance constraint, it selects the maximum point as a critical point and introduces a scale-related threshold to merge and adjust the critical points, realizing local feature extraction between two critical points to meet the accuracy requirements. Finally, the improved algorithm is analyzed and evaluated qualitatively and quantitatively on a large number of different vector data sets. Experimental results indicate that the improved algorithm is better than the Douglas-Peucker algorithm in shape retention, compression error, simplification of results, and time efficiency.
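
For reference, the baseline Douglas-Peucker simplification that the paper improves on is short enough to state in full:

```python
import math

def douglas_peucker(points, eps):
    """Classic Douglas-Peucker polyline simplification: keep the point
    farthest from the endpoint chord if its perpendicular distance
    exceeds eps, then recurse on both halves."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = douglas_peucker(points[:i + 1], eps)
        right = douglas_peucker(points[i:], eps)
        return left[:-1] + right           # drop the shared split point once
    return [points[0], points[-1]]
```

The segmented variant in the paper first fixes critical points (via the vertical chord ratio) and then applies this kind of simplification between consecutive critical points.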

Image Segmentation Using Morphological Operation and Region Merging (형태학적 연산과 영역 융합을 이용한 영상 분할)

  • 강의성;이태형;고성제
    • Journal of Broadcast Engineering
    • /
    • v.2 no.2
    • /
    • pp.156-169
    • /
    • 1997
  • This paper proposes an image segmentation technique using the watershed algorithm followed by a region merging method. A gradient image is obtained by applying a multiscale gradient algorithm to an image simplified by morphological filters. Since the watershed algorithm produces an oversegmented image, it is necessary to merge small segmented regions as well as regions having similar characteristics. For region merging, we use merging criteria based on both the mean pixel value of each region and the edge intensities between regions obtained by a contour-following process. Experimental results show that the proposed method produces meaningful image segmentation results.
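
The mean-based half of the merging criterion can be sketched as a greedy loop over adjacent region pairs; the edge-intensity term is omitted here, and `mean_tol` is an assumed tolerance, so this is only a rough sketch of the idea:

```python
def merge_regions(means, adjacency, mean_tol=10.0):
    """Repeatedly merge the adjacent region pair with the smallest
    mean-intensity difference while it stays under mean_tol.

    means: {region_id: mean intensity}
    adjacency: set of frozenset({id_a, id_b}) pairs.
    Returns the surviving {region_id: mean intensity} map.
    """
    means = dict(means)
    adjacency = set(adjacency)
    while True:
        candidates = [(abs(means[a] - means[b]), a, b)
                      for a, b in (tuple(p) for p in adjacency)]
        if not candidates:
            break
        diff, a, b = min(candidates)
        if diff > mean_tol:
            break
        # merge b into a: average the means, rewire adjacency to a
        means[a] = (means[a] + means[b]) / 2.0
        del means[b]
        adjacency = {frozenset(a if r == b else r for r in pair)
                     for pair in adjacency}
        adjacency = {p for p in adjacency if len(p) == 2}
    return means
```

A fuller version would weight the merged mean by region size and also require weak boundary edges between the two regions.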

An Efficient Processor Allocation Scheme for Hypercube (하이퍼큐브에서의 효과적인 프로세서할당 기법)

  • Son, Yoo-Ek;Nam, Jae-Yeal
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.4
    • /
    • pp.781-790
    • /
    • 1996
  • Processors must be allocated to incoming tasks in a way that maximizes processor utilization and minimizes system fragmentation. Thus, an efficient method of allocating processors in a hypercube is a key to system performance. To achieve this goal, it is necessary to detect the availability of a subcube of the required size and to merge released small cubes into larger ones. This paper presents the tree-exchange algorithm, which determines the levels and partners in the binary-tree representation of a hypercube, and an efficient allocation strategy using the algorithm. The search-time complexity of the algorithm is $O(\lceil n/2 \rceil \times 2^n)$, and it shows good performance in comparison with other strategies.
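
In the binary-tree view of a hypercube, a subcube's partner at a given level differs by one address bit, so merging released subcubes resembles a buddy allocator. The sketch below shows that merge step only, under that simplified model, not the paper's full tree-exchange search:

```python
def partner(address, level):
    """Buddy/partner of a subcube: flip the address bit for this level."""
    return address ^ (1 << level)

def try_merge(free, address, level):
    """Return a released subcube to the free set, repeatedly merging it
    with its partner while the partner at the same level is also free.

    free: set of (address, level) pairs; mutated and returned.
    """
    addr, lvl = address, level
    while (partner(addr, lvl), lvl) in free:
        free.remove((partner(addr, lvl), lvl))
        addr = min(addr, partner(addr, lvl))   # keep the lower address
        lvl += 1                               # merged cube is one level bigger
    free.add((addr, lvl))
    return free
```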

Small Target Detection with Clutter Rejection using Stochastic Hypothesis Testing

  • Kang, Suk-Jong;Kim, Do-Jong;Ko, Jung-Ho;Bae, Hyeon-Deok
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1559-1565
    • /
    • 2007
  • Many target-detection methods that use forward-looking infrared (FLIR) images can deal with large targets measuring $70{\times}40$ pixels by utilizing their shape features. However, detecting small targets is difficult because they are more obscure and there are many target-like objects. Therefore, few studies have examined how to detect small targets of fewer than $30{\times}10$ pixels. This paper presents a small target detection method using clutter rejection with stochastic hypothesis testing for FLIR imagery. The proposed algorithm consists of two stages: detection and clutter rejection. In the detection stage, the mean of the input FLIR image is first removed and the image is then segmented using Otsu's method. A closing operation is also applied during the detection stage in order to merge any single targets detected separately. Then, residual clutter is eliminated using statistical hypothesis testing based on the t-test. Several FLIR images are used to evaluate the performance of the proposed algorithm. The experimental results, assessed with receiver operating characteristic (ROC) curves, show that the proposed algorithm accurately detects small targets (less than $30{\times}10$ pixels) with a low false alarm rate compared to the center-surround difference method.
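
Otsu's method, used in the detection stage, picks the grey level that maximizes between-class variance; a histogram-based sketch:

```python
def otsu_threshold(hist):
    """Otsu's global threshold from a grey-level histogram.

    hist[i] is the pixel count at grey level i. Returns the level t
    that maximizes between-class variance (levels <= t = background).
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                      # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0              # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * h
        m0 = sum0 / w0               # background mean
        m1 = (sum_all - sum0) / w1   # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```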

A Heuristic Polynomial Time Algorithm for Crew Scheduling Problem

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.11
    • /
    • pp.69-75
    • /
    • 2015
  • This paper suggests a heuristic polynomial-time algorithm for the crew scheduling problem, a kind of optimization problem. This problem has been attacked by linear programming, set cover, set partition, column generation, etc., but these methods have not obtained the optimal solution. This paper sorts the transit costs $c_{ij}$ in ascending order, and the crew paths of tasks i and j are merged when the sum of operation times ${\Sigma}o$ is less than the day working time T. As a result, we obtain the minimum number of crews $_{min}K$ and the minimum transit cost $z=_{min}c_{ij}$. For the transit cost of a specific number of crews $K(K>_{min}K)$, we delete the maximum $c_{ij}$ values, as many as $K-_{min}K$, to partition a crew path. For the 5 benchmark data sets, this algorithm obtains lower transit costs than state-of-the-art algorithms while using the minimum number of crews.
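
The merge step described above — scan $c_{ij}$ in ascending order and join two crew paths while the combined operation time stays within T — can be sketched greedily; the union-find bookkeeping and data shapes are illustrative, and the K-split step is omitted:

```python
def merge_crews(tasks, transits, T):
    """Greedy path merging by ascending transit cost.

    tasks: {task: operation_time}; transits: {(i, j): transit cost c_ij};
    T: day working time limit. Returns (number of crews, total transit cost).
    """
    path_of = {t: t for t in tasks}   # union-find root per crew path
    time_of = dict(tasks)             # total operation time per path
    total_cost = 0

    def find(t):
        while path_of[t] != t:
            path_of[t] = path_of[path_of[t]]
            t = path_of[t]
        return t

    # scan transit costs in ascending order, merging while the limit holds
    for (i, j), c in sorted(transits.items(), key=lambda kv: kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj and time_of[ri] + time_of[rj] <= T:
            path_of[rj] = ri
            time_of[ri] += time_of[rj]
            total_cost += c

    crews = len({find(t) for t in tasks})
    return crews, total_cost
```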