• Title/Summary/Keyword: Parallel data processing

Fast Motion Artifact Correction Using l1-norm (l1-norm을 이용한 움직임 인공물의 고속 보정)

  • Zho, Sang-Young;Kim, Eung-Yeop;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging
    • /
    • v.13 no.1
    • /
    • pp.22-30
    • /
    • 2009
  • Purpose : Patient motion during magnetic resonance (MR) imaging is one of the major problems caused by its long scan time. Entropy-based post-processing motion correction techniques have been shown to correct motion artifacts effectively. One of the main limitations of these techniques, however, is their long processing time. In this study, we propose several methods to reduce this processing time effectively. Materials and Methods : To reduce the processing time, we used the separability property of the two-dimensional Fourier transform (2-D FT). Also, a computationally light metric (the sum of all image pixel intensities) was used instead of the entropy criterion. Finally, partial Fourier reconstruction, in particular the projection onto convex sets (POCS) method, was combined, thereby reducing the size of the data that must be processed and corrected. Results : Time savings for each proposed method are presented for brain images of different data sizes. In vivo data were processed using the proposed methods and showed similar image quality. The total processing time was reduced to 15% for two-dimensional images and 30% for three-dimensional images. Conclusion : The proposed methods can be useful in reducing image motion artifacts when only post-processing motion correction algorithms are available. The proposed methods can also be combined with parallel imaging techniques to further reduce processing times.

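The separability property the Materials and Methods section relies on can be shown in a few lines (a generic sketch, not the authors' code): a 2-D Fourier transform factors into 1-D transforms along each axis, so a correction metric can be re-evaluated using only 1-D transforms of the affected k-space lines.

```python
import numpy as np

# Separability: a 2-D FFT equals 1-D FFTs applied along each axis
# in turn, so per-line corrections need only 1-D transforms.
x = np.random.default_rng(0).standard_normal((8, 8))
row_then_col = np.fft.fft(np.fft.fft(x, axis=0), axis=1)
assert np.allclose(row_then_col, np.fft.fft2(x))
```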

Parallel Range Query processing on R-tree with Graphics Processing Units (GPU를 이용한 R-tree에서의 범위 질의의 병렬 처리)

  • Yu, Bo-Seon;Kim, Hyun-Duk;Choi, Won-Ik;Kwon, Dong-Seop
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.5
    • /
    • pp.669-680
    • /
    • 2011
  • R-trees are widely used in areas such as geographical information systems, CAD systems, and spatial databases to efficiently index multi-dimensional data. As the data sets used in these areas grow in size and complexity, however, range query operations on R-trees need to become even faster to meet area-specific constraints. To address this problem, there have been various research efforts to accelerate query processing on R-trees by using buffer mechanisms or by parallelizing query processing across multiple disks and processors. As part of these efforts, approaches that parallelize query processing on R-trees with Graphics Processing Units (GPUs) have been explored. The use of GPUs may yield improved performance through faster calculations and reduced disk accesses, but may also incur additional overhead from high memory-access latencies and the low data-exchange rate between the GPU and the CPU. In this paper, to address these overhead problems and exploit GPUs efficiently, we propose a novel approach that uses a GPU as a buffer to parallelize query processing on R-trees. The proposed buffer algorithm improves performance by reducing the number of disk accesses and maximizing coalesced memory access, thereby minimizing GPU memory-access latencies. Through extensive performance studies, we observed that the proposed approach achieved up to 5 times higher query performance than the original CPU-based R-trees.
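As a rough illustration of the node-level parallelism described above (hypothetical code, not the paper's implementation), the overlap test that a GPU buffer of R-tree nodes applies to many bounding rectangles at once can be written as a single vectorized pass:

```python
import numpy as np

# Hypothetical sketch: test many MBRs against a query box in one
# vectorized pass, mimicking the coalesced per-node checks a GPU
# buffer of R-tree nodes would perform.
def range_query(mbrs, q):
    # mbrs: (n, 4) array of [xmin, ymin, xmax, ymax]; q: query box
    lo_ok = (mbrs[:, 2] >= q[0]) & (mbrs[:, 3] >= q[1])
    hi_ok = (mbrs[:, 0] <= q[2]) & (mbrs[:, 1] <= q[3])
    return np.nonzero(lo_ok & hi_ok)[0]

mbrs = np.array([[0, 0, 1, 1], [2, 2, 3, 3], [0.5, 0.5, 2.5, 2.5]])
print(range_query(mbrs, (0, 0, 1, 1)))  # indices of overlapping MBRs
```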

Analysis tool for the diffusion model using GPU: SNUDM-G (GPU를 이용한 확산모형 분석 도구: SNUDM-G)

  • Lee, Dajung;Lee, Hyosun;Koh, Sungryong
    • Korean Journal of Cognitive Science
    • /
    • v.33 no.3
    • /
    • pp.155-168
    • /
    • 2022
  • In this paper, we introduce SNUDM-G, a diffusion model analysis tool with improved computational speed. Although the diffusion model has been applied to explain various cognitive tasks, its use has been limited by computational difficulties. In particular, SNUDM (Koh et al., 2020), one of the diffusion model analysis tools, is slow because it sequentially generates 20,000 data points when approximating the diffusion process. To overcome this limitation, we propose using graphics processing units (GPUs) in approximating the diffusion process with a random walk process. Since the 20,000 data points can be generated in parallel on a GPU, estimation is faster than generating the data sequentially. When the data from Experiment 1 of Ratcliff et al. (2004) were analyzed and the parameters recovered with the GPU-based SNUDM-G and the CPU-based SNUDM, SNUDM-G estimated slightly higher values for certain parameters than SNUDM, but estimated the parameters much faster. This result shows that a more efficient diffusion model analysis of various cognitive tasks is possible with this tool, and further suggests that the processing speed of various cognitive models can be improved by using graphics processing units in the future.
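The parallel random-walk idea can be sketched as follows. This is an illustrative NumPy vectorization of the same principle, not SNUDM-G itself, and the drift, boundary, and step counts are made-up values:

```python
import numpy as np

# Sketch (not the authors' code): approximate a drift-diffusion
# first-passage process with many random walks generated at once,
# the same kind of parallelism a GPU applies to the 20,000 walkers.
def simulate_walks(n_walks=20000, n_steps=1000, drift=0.05, seed=1):
    rng = np.random.default_rng(seed)
    steps = drift + rng.standard_normal((n_walks, n_steps))
    paths = np.cumsum(steps, axis=1)          # all walkers in parallel
    hit_upper = (paths >= 1.0).any(axis=1)    # reached upper boundary
    return hit_upper.mean()                   # P(upper response)

p = simulate_walks()
assert 0.0 <= p <= 1.0
```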

A Study on the Parallel Escape Maze through Cooperative Activities of Humanoid Robots (인간형 로봇들의 협력 작업을 통한 미로 동시 탈출에 관한 연구)

  • Jun, Bong-Gi
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1441-1446
    • /
    • 2014
  • In this paper, a cooperative method for escaping from a maze with a robot swarm is proposed. The robots can move freely, collecting essential data and making decisions using their sensors; however, a central control system is required to organize all the robots for the escape. The robots explore new mazes and send the collected information to the system, which analyzes it and maps the escape route. Three issues are considered for an effective escape by multiple robots: first, the maze is divided among the robots; second, dead ends are blocked; finally, after the first robots arrive at the destination, a shortcut is provided so the remaining robots can escape rapidly. The parallel-escape algorithms were applied to mazes of different sizes, showing that a robot swarm can escape them effectively.
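One piece of the central system's job, broadcasting a shortcut once the exit is known, can be sketched with a standard breadth-first search over the jointly explored map (the grid and coordinates here are hypothetical, not from the paper):

```python
from collections import deque

# Illustrative sketch: after the first robot reaches the exit, the
# central system can compute a shortest path (a "shortcut") over the
# explored map and broadcast it to the remaining robots.
def shortest_path(grid, start, goal):
    q, seen = deque([(start, [start])]), {start}
    while q:
        (r, c), path = q.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route through the explored cells

maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
print(shortest_path(maze, (0, 0), (2, 2)))
```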

Reinforcement Learning for Minimizing Tardiness and Set-Up Change in Parallel Machine Scheduling Problems for Profile Shops in Shipyard (조선소 병렬 기계 공정에서의 납기 지연 및 셋업 변경 최소화를 위한 강화학습 기반의 생산라인 투입순서 결정)

  • So-Hyun Nam;Young-In Cho;Jong Hun Woo
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.60 no.3
    • /
    • pp.202-211
    • /
    • 2023
  • The profile shops in shipyards produce the section steels required for block production of ships. Due to the limits of a shipyard's production capacity, a considerable amount of work is already outsourced. In addition, the need to improve the productivity of the profile shops is growing because production volume is expected to increase with the recent boom in the shipbuilding industry. In this study, scheduling optimization was conducted for a parallel welding line of the profile process, with the objective functions of minimizing tardiness and the number of set-up changes to achieve productivity improvements. In particular, this study applied a dynamic scheduling method that determines the job sequence while considering variability in processing time. A Markov decision process model was proposed for the job sequencing problem, considering the trade-off between the two objective functions. Deep reinforcement learning was used to learn the optimal scheduling policy. The developed algorithm was evaluated by comparing its performance with priority rules (the SSPT, ATCS, MDD, and COVERT rules) in test scenarios constructed from sampled data. As a result, the proposed scheduling algorithm outperformed the priority rules in terms of set-up ratio, tardiness, and makespan.
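For contrast with the learned policy, a greedy baseline that trades projected tardiness against set-up changes on parallel machines might look like the following. This is a hypothetical sketch: the job format, penalty weight, and earliest-due-date ordering are illustrative choices, not the paper's rules.

```python
# Hypothetical baseline sketch: dispatch jobs (earliest due date
# first) to whichever parallel machine minimizes projected tardiness
# plus a set-up penalty for switching product family.
def dispatch(jobs, machines, setup_penalty=5.0):
    # jobs: list of (job_id, family, proc_time, due_date)
    # machines: list of (free_at, last_family)
    schedule = []
    for job_id, family, proc, due in sorted(jobs, key=lambda j: j[3]):
        def cost(i):
            free_at, last = machines[i]
            setup = setup_penalty if last not in (None, family) else 0.0
            tardiness = max(0.0, free_at + setup + proc - due)
            return tardiness + setup
        i = min(range(len(machines)), key=cost)
        free_at, last = machines[i]
        setup = setup_penalty if last not in (None, family) else 0.0
        machines[i] = (free_at + setup + proc, family)
        schedule.append((job_id, i))
    return schedule

jobs = [(0, "A", 3, 4), (1, "B", 2, 5), (2, "A", 4, 12)]
print(dispatch(jobs, [(0.0, None), (0.0, None)]))
```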

An Efficient Clustering Method based on Multi Centroid Set using MapReduce (맵리듀스를 이용한 다중 중심점 집합 기반의 효율적인 클러스터링 방법)

  • Kang, Sungmin;Lee, Seokjoo;Min, Jun-ki
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.7
    • /
    • pp.494-499
    • /
    • 2015
  • As the size of data increases, it becomes important to identify its properties by analyzing big data. In this paper, we propose an efficient k-Means-based clustering technique, called MCSK-Means (Multi-Centroid-Set k-Means), using the distributed parallel processing framework MapReduce. A problem with the k-Means algorithm is that the accuracy of clustering depends on the randomly created initial centroids. To alleviate this problem, the MCSK-Means algorithm reduces the dependency on initial centroids by using sets each consisting of k centroids. In addition, we apply agglomerative hierarchical clustering to create the final k centroids from the centroids in the m centroid sets produced by the clustering phase. We implemented MCSK-Means on the MapReduce framework to process big data efficiently.
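The merging step, reducing m sets of k candidate centroids down to k final centroids by agglomerative clustering, can be sketched as follows. This is an illustrative single-machine version, not the MapReduce implementation:

```python
import numpy as np

# Sketch of the MCSK-Means merge idea: gather the m*k candidate
# centroids and agglomeratively merge the closest pair (weighted by
# how many candidates each merged point represents) until k remain,
# softening the dependence on any single random initialization.
def merge_centroid_sets(centroid_sets, k):
    pts = [np.asarray(c, float) for cs in centroid_sets for c in cs]
    weights = [1.0] * len(pts)
    while len(pts) > k:                      # agglomerative merging
        best = None
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = np.linalg.norm(pts[i] - pts[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        w = weights[i] + weights[j]
        pts[i] = (weights[i] * pts[i] + weights[j] * pts[j]) / w
        weights[i] = w
        del pts[j], weights[j]
    return np.stack(pts)

sets = [[(0, 0), (10, 10)], [(0.5, 0.2), (9.5, 10.3)]]
print(merge_centroid_sets(sets, 2))
```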

A study on performance improvement considering the balance between corpus in Neural Machine Translation (인공신경망 기계번역에서 말뭉치 간의 균형성을 고려한 성능 향상 연구)

  • Park, Chanjun;Park, Kinam;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.23-29
    • /
    • 2021
  • Recent deep learning-based natural language processing studies seek to improve performance by training on large amounts of data from various sources together. However, combining data from various sources into one may actually prevent performance improvement. In the case of machine translation, data deviation occurs due to differences in translation style (liberal vs. literal), register (colloquial, written, formal, etc.), domain, and so on, and combining these corpora into one for training can adversely affect performance. In this paper, we propose a new Corpus Weight Balance (CWB) method that considers the balance between parallel corpora in machine translation. In our experiments, the model trained with the balanced corpus outperformed the existing model. In addition, we propose a corpus construction process that can coexist with the human translation market and can build a high-quality parallel corpus even from a monolingual corpus.
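One simple way to realize corpus balancing, shown purely as an assumption-laden sketch rather than the paper's CWB method, is to sample each sub-corpus according to an assigned weight instead of its raw size:

```python
import random

# Hypothetical sketch: draw a training sample in which each parallel
# sub-corpus contributes by an assigned weight rather than by its
# raw size, so a huge written corpus cannot drown out a small
# colloquial one. The weights and corpora are toy values.
def balanced_sample(corpora, weights, n, seed=0):
    rng = random.Random(seed)
    total = sum(weights)
    sample = []
    for corpus, w in zip(corpora, weights):
        take = round(n * w / total)
        sample += rng.choices(corpus, k=take)  # sample with replacement
    rng.shuffle(sample)
    return sample

colloquial = [("안녕", "hi")] * 3            # tiny toy corpus
written = [("보고서", "report")] * 100       # much larger toy corpus
mixed = balanced_sample([colloquial, written], [1, 1], 10)
print(len(mixed))
```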

Kirchhoff prestack depth migration for gas hydrate seismic data set (가스 하이드레이트 자료에 대한 중합전 키르히호프 심도 구조보정)

  • Hien, Doan Huy;Jang, Seong-Hyung;Kim, Young-Wan;Suh, Sang-Yong
    • Proceedings of the Korean Society for New and Renewable Energy Conference
    • /
    • 2007.06a
    • /
    • pp.493-496
    • /
    • 2007
  • The Korea Institute of Geoscience and Mineral Resources (KIGAM) has studied gas hydrate in the Ulleung Basin, East Sea of Korea, since 1997. Above all, the evidence in seismic reflection data for the existence of gas hydrate, a possible new energy resource, is the bottom simulating reflection (BSR), which parallels the sea bottom. Here we conducted conventional data processing and Kirchhoff prestack depth migration on gas hydrate data. Kirchhoff migration, widely used for both pre- and post-stack migration, can help produce better images as well as extract geological information. The stacked image processed with GEOBIT showed geological structures such as faults and a shallow gas-seeping area indicated by a strong BSR. The BSR in the stacked image appeared at a TWT of 3.07 s between shot gathers No. 3940 and No. 4120. The estimated gas-seeping area occurred at shot points No. 4187 to No. 4203, and there appear to be minor faults at shot points No. 3735, 3791, 3947, and 4120. According to the depth migration result, the BSR appeared 2.3 km below the sea bottom.

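The diffraction summation at the heart of Kirchhoff migration can be sketched for a single image point. This is a constant-velocity toy illustration, not the GEOBIT processing used in the paper:

```python
import numpy as np

# Minimal Kirchhoff-style sketch (constant velocity assumed): an
# image point is formed by summing recorded amplitudes along the
# diffraction traveltime curve t = (|s-p| + |p-r|) / v.
def kirchhoff_point(traces, dt, xs, xr, px, pz, v):
    total = 0.0
    for trace, s, r in zip(traces, xs, xr):
        t = (np.hypot(px - s, pz) + np.hypot(px - r, pz)) / v
        i = int(round(t / dt))
        if i < len(trace):        # stack the amplitude on the curve
            total += trace[i]
    return total

# Synthetic impulses placed exactly on the diffraction curve of a
# scatterer at (0, 500) stack constructively at the true position:
dt, v = 0.004, 2000.0
xs = xr = [-100.0, 0.0, 100.0]
traces = []
for s, r in zip(xs, xr):
    t = (np.hypot(0 - s, 500) + np.hypot(0 - r, 500)) / v
    tr = np.zeros(400)
    tr[int(round(t / dt))] = 1.0
    traces.append(tr)
print(kirchhoff_point(traces, dt, xs, xr, 0.0, 500.0, v))
```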

A Virtual Microscope System for Educational Applications (교육 분야 응용을 위한 가상 현미경 시스템)

  • Cho, Seung-Ho;Beynon, Mike;Saltz, Joel
    • The KIPS Transactions:PartD
    • /
    • v.10D no.1
    • /
    • pp.117-124
    • /
    • 2003
  • The system implemented in this paper partitions specimen data captured by a light microscope and stores it on distributed or parallel systems. Users can observe the images on their computers as if using a physical microscope. Based on the client-server computing model, the system consists of a client, a coordinator, and a data manager; the three components communicate via messages. For retrieving images, we implemented the client program with the functions needed for educational applications, such as image marking and text annotation, and defined the communication protocol. We also performed an experiment introducing tape storage, which holds large volumes of data. The results showed performance improvements from the data partitioning and indexing techniques.
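The partitioning idea behind such a system can be sketched as follows (the tile size and function name are hypothetical, not from the paper): a large slide image is cut into fixed-size tiles, and a client viewport request is served by fetching only the tiles it overlaps.

```python
# Illustrative sketch of tile partitioning for a virtual microscope:
# map a viewport rectangle to the (row, col) keys of the stored
# tiles that intersect it, so only those tiles are retrieved.
TILE = 256  # hypothetical tile edge length in pixels

def tiles_for_viewport(x, y, w, h):
    cols = range(x // TILE, (x + w - 1) // TILE + 1)
    rows = range(y // TILE, (y + h - 1) // TILE + 1)
    return [(r, c) for r in rows for c in cols]

print(tiles_for_viewport(300, 100, 600, 300))
```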

A Hybrid Mechanism of Particle Swarm Optimization and Differential Evolution Algorithms based on Spark

  • Fan, Debin;Lee, Jaewan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.12
    • /
    • pp.5972-5989
    • /
    • 2019
  • With the onset of the big data age, data is growing exponentially, and the issue of how to optimize large-scale data processing is especially significant. Large-scale global optimization (LSGO) is a research topic of great interest in academia and industry. Spark is a popular cloud computing framework that can cluster large-scale data and effectively supports iterative computation through resilient distributed datasets (RDDs). In this paper, we propose a hybrid mechanism of particle swarm optimization (PSO) and differential evolution (DE) algorithms based on Spark (SparkPSODE). SparkPSODE is a parallel algorithm that employs the RDD and island models. The island model divides the global population into several subpopulations, which are mapped to RDD partitions to reduce computational time. To preserve population diversity and avoid premature convergence, the evolutionary strategy of DE is integrated into SparkPSODE. Finally, experiments on a set of LSGO benchmark problems show that, in comparison with several algorithms, the proposed SparkPSODE algorithm obtains better optimization performance.
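The island-model hybrid can be sketched without Spark as follows. This is a hedged single-machine illustration: each subpopulation stands in for one RDD partition, and all constants (inertia, mutation factor, island sizes) are illustrative, not the paper's settings.

```python
import random

# Sketch of the island model behind SparkPSODE: split the global
# population into islands; on each island, pull individuals toward
# the island best (PSO-style) and apply a DE-style mutation for
# diversity, keeping the best of the three candidates (elitist).
def sphere(x):                       # toy LSGO objective
    return sum(v * v for v in x)

def evolve_island(pop, rng, w=0.5, f=0.6):
    best = min(pop, key=sphere)
    nxt = []
    for x in pop:
        moved = [xi + w * rng.random() * (bi - xi)
                 for xi, bi in zip(x, best)]           # PSO-like pull
        a, b = rng.sample(pop, 2)
        mutant = [mi + f * (ai - bi)
                  for mi, ai, bi in zip(moved, a, b)]  # DE-style mutation
        nxt.append(min((x, moved, mutant), key=sphere))
    return nxt

rng = random.Random(42)
islands = [[[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
           for _ in range(4)]        # 4 islands of 10 individuals
start = min(sphere(min(p, key=sphere)) for p in islands)
for _ in range(30):                  # islands evolve independently
    islands = [evolve_island(p, rng) for p in islands]
end = min(sphere(min(p, key=sphere)) for p in islands)
print(start, end)
```

In the Spark version described by the abstract, the per-island loop would run inside a map over RDD partitions instead of a Python list comprehension.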