• Title/Summary/Keyword: two-dimensional multi-core system


Exploration of an Optimal Two-Dimensional Multi-Core System for Singular Value Decomposition (특이치 분해를 위한 최적의 2차원 멀티코어 시스템 탐색)

  • Park, Yong-Hun;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.19 no.9 / pp.21-31 / 2014
  • Singular value decomposition (SVD) has been widely used to identify unique features of a data set in various fields. However, the complex matrix computations of SVD require tremendous processing time. This paper improves the performance of the representative one-sided block Jacobi algorithm using a two-dimensional (2D) multi-core system. In addition, this paper explores an optimal multi-core configuration by varying the number of processing elements in the 2D multi-core system, with the same 400 MHz clock frequency and TSMC 28 nm technology, for each matrix size of the one-sided block Jacobi algorithm (128×128, 64×64, 32×32, 16×16). Moreover, this paper demonstrates the potential of the 2D multi-core system for the one-sided block Jacobi algorithm by comparing the performance of the multi-core system with that of a commercial high-performance graphics processing unit (GPU).
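The abstract describes the algorithm only at a high level. For orientation, below is a minimal serial NumPy sketch of the one-sided (Hestenes) Jacobi SVD that the blocked, 2D multi-core version parallelizes by distributing column pairs across processing elements; the function name, tolerance, and sweep limit are illustrative, not taken from the paper.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Serial one-sided (Hestenes) Jacobi SVD sketch: repeatedly rotate
    column pairs of A until all columns are mutually orthogonal."""
    U = np.array(A, dtype=float)           # working copy; columns converge to U*Sigma
    n = U.shape[1]
    V = np.eye(n)                          # accumulates the column rotations
    for _ in range(max_sweeps):
        off = 0.0                          # largest remaining column correlation
        for i in range(n - 1):
            for j in range(i + 1, n):      # a 2D multi-core version distributes
                alpha = U[:, i] @ U[:, i]  # these (i, j) pairs over the PEs
                beta = U[:, j] @ U[:, j]
                gamma = U[:, i] @ U[:, j]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if gamma == 0.0:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                t = (1.0 if zeta >= 0 else -1.0) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                ui, vi = U[:, i].copy(), V[:, i].copy()
                U[:, i] = c * ui - s * U[:, j]   # rotate the column pair
                U[:, j] = s * ui + c * U[:, j]
                V[:, i] = c * vi - s * V[:, j]
                V[:, j] = s * vi + c * V[:, j]
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)      # singular values (unsorted)
    return U / sigma, sigma, V             # assumes full column rank
```

Because the rotations satisfy A·V = U·Σ, `U * sigma @ V.T` reconstructs A, which makes a handy correctness check for any blocked or parallel variant.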

Verification of a two-step code system MCS/RAST-F to fast reactor core analysis

  • Tran, Tuan Quoc;Cherezov, Alexey;Du, Xianan;Lee, Deokjung
    • Nuclear Engineering and Technology / v.54 no.5 / pp.1789-1803 / 2022
  • RAST-F is a new full-core analysis code based on the two-step approach, coupling the multi-group cross-section generation Monte Carlo code MCS with a multi-group nodal diffusion solver. To demonstrate the feasibility of using MCS/RAST-F for fast reactor analysis, this paper presents the coupled nodal code verification results for the MET-1000 and CAR-3600 benchmark cores. Three different multi-group cross-section calculation schemes are employed to improve the agreement between the nodal and reference solutions. The reference solution is obtained by the MCS code using continuous-energy nuclear data. Additionally, the MCS/RAST-F nodal solution is verified against results based on cross-sections generated by the collision probability code TULIP. Good agreement between MCS/RAST-F and the reference solution is observed, with less than 120 pcm discrepancy in keff and less than 1.2% root-mean-square error in the power distribution. This study confirms the two-step MCS/RAST-F approach as a reliable tool for three-dimensional simulation of fast-spectrum reactor cores.
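As a quick reference for the two figures of merit quoted above, the sketch below shows one common way to compute them; the pcm convention (a reactivity difference) and the relative root-mean-square definition are assumptions, since the paper may define them slightly differently.

```python
import numpy as np

def keff_discrepancy_pcm(k_calc, k_ref):
    # Reactivity difference in pcm (1 pcm = 1e-5); a simpler
    # (k_calc - k_ref) * 1e5 convention is also common.
    return (1.0 / k_ref - 1.0 / k_calc) * 1.0e5

def power_rms_error_percent(p_calc, p_ref):
    # Root-mean-square of the relative node power error, in percent.
    rel = (np.asarray(p_calc) - np.asarray(p_ref)) / np.asarray(p_ref)
    return 100.0 * np.sqrt(np.mean(rel ** 2))
```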

ANALYSIS OF THE ISP-50 DIRECT VESSEL INJECTION SBLOCA IN THE ATLAS FACILITY WITH THE RELAP5/MOD3.3 CODE

  • Sharabi, Medhat;Freixa, Jordi
    • Nuclear Engineering and Technology / v.44 no.7 / pp.709-718 / 2012
  • The pressurized water reactor APR1400 adopts DVI (Direct Vessel Injection) of the emergency cooling water into the upper downcomer annulus. International Standard Problem number 50 (ISP-50) was launched with the aim of investigating the thermal-hydraulic phenomena during a 50% DVI line break scenario with best-estimate codes, making use of the experimental data available from the ATLAS facility located at KAERI. The present work describes the calculation results obtained for ISP-50 using the RELAP5/MOD3.3 system code. It aims at validating and assessing the code's ability to reproduce the observed phenomena and at investigating its limitations in predicting the complicated mixing between the subcooled emergency cooling water and the two-phase flow in the downcomer. The results show that the overall trends of the main test variables are well reproduced by the calculations. In particular, the pressure in the primary system shows excellent agreement with the experiment. The loop seal clearance phenomenon was observed in the calculation and was found to have an important influence on the transient progression. Moreover, the collapsed water levels in the core are accurately reproduced in the simulations. However, the drop in the downcomer level before activation of the DVI from the safety injection tanks was underestimated, owing to multi-dimensional phenomena in the downcomer that are not properly captured by one-dimensional simulations.

Calculation of Power Distributions on Uranium- and Plutonium-Loaded Cores Moderated by Light Water (우라늄 및 플루토늄 장전 노심에서의 출력 분포 계산)

  • Sang Keun Lee;Kap Suk Moon;Jong-Hwa Jang;Ji Bok Lee;Chang Kun Lee
    • Nuclear Engineering and Technology / v.15 no.4 / pp.267-279 / 1983
  • An analytical system has been established for scrutinizing both uranium- and plutonium-fueled lattices moderated by light water. This system consists of two primary codes. One is a unit-cell program called KICC, theoretically founded on the models of GAM and THERMOS and incorporating appropriate approximate treatments of various phenomena; the other is a multi-dimensional diffusion-depletion program entitled KIDD. The adequacy of this system is verified by performing extensive benchmark calculations on a variety of critical experiments. The average effective multiplication factor for the selected nineteen UO2 critical experiments with heterogeneous lattice structure is calculated to be 1.0006 with a standard deviation of 0.0039. Power distributions have also been calculated for some critical experiments fueled with both uranium and plutonium of varying concentrations. The maximum percentage difference between the measured and calculated power distributions is less than 5%. This result, together with the previously reported results, illustrates that the KICC/KIDD system is a very effective tool for the analysis of light water reactor cores.


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • With the rapidly increasing demand for text data analysis, research and investment in text mining are being actively pursued not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on applications in the second step. However, with the finding that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly to a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects into a space of a given dimension while maintaining their algebraic properties is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents, and as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document from all the words the document contains, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after keywords are extracted through various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text.
The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among subjects within each vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating this interference.
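A compact sketch of the five-step pipeline described above, using gensim's Word2Vec and scikit-learn's KMeans as stand-in components (the paper does not name its tooling); `n_subjects`, the embedding dimension, and the assumption that every document has at least one in-vocabulary keyword are all illustrative choices.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def multi_vector_embedding(docs, n_subjects=3, dim=100):
    """docs: list of (body_tokens, keywords) pairs -- step (1), parsing,
    is assumed already done. Returns one array of subject vectors per doc."""
    # (2) Word embedding: every token becomes an N-dimensional real vector.
    w2v = Word2Vec(sentences=[tokens for tokens, _ in docs],
                   vector_size=dim, min_count=1, seed=0)
    doc_vectors = []
    for _, keywords in docs:
        # (3) Keyword vector extraction: keep only the keyword vectors,
        # so miscellaneous body words cannot dilute the document vectors.
        kv = np.array([w2v.wv[k] for k in keywords if k in w2v.wv])
        # (4) Keyword clustering: each cluster approximates one subject.
        k = min(n_subjects, len(kv))          # assumes len(kv) >= 1
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(kv)
        # (5) Multiple-vector generation: one centroid per subject cluster.
        doc_vectors.append(np.stack([kv[labels == c].mean(axis=0)
                                     for c in range(k)]))
    return doc_vectors
```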

Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2016 (설비공학 분야의 최근 연구 동향 : 2016년 학회지 논문에 대한 종합적 고찰)

  • Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Kim, Dong-Seon;Park, Jun-Seok;Ihm, Pyeong Chan
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.29 no.6 / pp.327-340 / 2017
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2016. It is intended to capture the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. Conclusions are as follows. (1) Research on thermal and fluid engineering has been reviewed in the groups of flow, heat and mass transfer, reduction of pollutant exhaust gas, cooling and heating, renewable energy systems, and flow around buildings. CFD schemes were used increasingly across all research areas. (2) Research on heat transfer has been reviewed in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Research on heat transfer characteristics included the long-term performance variation of a plate-type enthalpy exchange element made of paper, design optimization of an extruded-type cooling structure for reducing the weight of LED street lights, and hot-plate welding of thermoplastic elastomer packing. In the area of pool boiling and condensing, the heat transfer characteristics of a finned-tube heat exchanger in a PCM (phase change material) thermal energy storage system, the influence of flow boiling heat transfer on fouling in nanofluids, and PCM under simultaneous charging and discharging conditions were studied. In the area of industrial heat exchangers, a one-dimensional flow network model, a porous-media model, and R245fa in a plate-shell heat exchanger were studied. (3) Various studies were published in the categories of refrigeration cycle, alternative refrigeration/energy systems, and system control. In the refrigeration cycle category, subjects include a mobile cold-storage heat exchanger, compressor reliability, an indirect refrigeration system with CO2 as the secondary fluid, a heat pump for fuel-cell vehicles, heat recovery from a hybrid drier, and heat exchangers with two-port and flat tubes. In the alternative refrigeration/energy system category, subjects include a membrane module for dehumidification refrigeration, desiccant-assisted low-temperature drying, a regenerative evaporative cooler, and ejector-assisted multi-stage evaporation. In the system control category, subjects include multi-refrigeration system control, emergency cooling of a data center, and variable-speed compressor control. (4) In the field of building mechanical systems, fifteen studies were reported on the effective design of mechanical systems and on maximizing the energy efficiency of buildings. The topics included energy performance, HVAC systems, ventilation, and renewable energies. The proposed designs and the performance tests using numerical methods and experiments provide useful information and key data that could help improve the energy efficiency of buildings. (5) The field of architectural environment focused mostly on the indoor environment and building energy. The main indoor-environment research concerned analyses of indoor thermal environments controlled by portable coolers, the effects of outdoor wind pressure on airflow in high-rise buildings, window air tightness in relation to filling-piece shapes, the stack effect in a core-type office building, and the development of a movable drawer-type light shelf with adjustable reflector depth. Studies on building energy addressed energy consumption analysis in office buildings, prediction of the exit air temperature of a horizontal geothermal heat exchanger, LS-SVM-based modeling of the hot water supply load for a district heating system, the energy-saving effect of an ERV system using a night purge control method, and the effect of strengthened insulation levels on building heating and cooling loads.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a resounding victory over Lee Sedol. Many people thought machines could never beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew attention as the core technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques had struggled. In contrast, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we examined whether deep learning techniques, so far studied mainly for recognizing high-dimensional data, can also be used for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, all widely used in deep learning, against MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to measure how well the models classify the class of interest. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but since business data fields are usually independent, the proximity of fields does not matter; we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make the decision from the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNN performed well not only in fields where its effectiveness is proven but also in binary classification problems to which it has rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification, as its training time is too long relative to the performance improvement. These results confirm that some deep learning algorithms can be applied to business binary classification problems.
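The authors' exact networks are not given in the abstract; below is a minimal tf.keras sketch of the CNN-plus-dropout configuration it describes (a Conv1D window spanning all fields, 0.5 dropout in the hidden layers, F1-based evaluation). Layer widths, the optimizer, and all variable names are assumptions for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

def build_cnn(n_fields):
    """One Conv1D window spanning every field at once, plus an extra
    hidden layer for the final decision, as the abstract describes."""
    return keras.Sequential([
        keras.Input(shape=(n_fields, 1)),                  # one "step" per field
        layers.Conv1D(32, kernel_size=n_fields, activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),                               # dropout rate from the study
        layers.Dense(32, activation="relu"),               # additional hidden layer
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),             # binary output
    ])

# Usage sketch on tabular data X (n_samples, n_fields), labels y in {0, 1}:
# model = build_cnn(X_train.shape[1])
# model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(X_train[..., None], y_train, epochs=10, batch_size=64)
# y_pred = (model.predict(X_test[..., None]) > 0.5).astype(int)
# print("F1:", f1_score(y_test, y_pred))                   # F1 rather than accuracy
```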