• Title/Summary/Keyword: in-memory computing

Search Results: 766

APPLICATION OF BACKWARD DIFFERENTIATION FORMULA TO SPATIAL REACTOR KINETICS CALCULATION WITH ADAPTIVE TIME STEP CONTROL

  • Shim, Cheon-Bo;Jung, Yeon-Sang;Yoon, Joo-Il;Joo, Han-Gyu
    • Nuclear Engineering and Technology
    • /
    • v.43 no.6
    • /
    • pp.531-546
    • /
    • 2011
  • The backward differentiation formula (BDF) method is applied to a three-dimensional reactor kinetics calculation for efficient yet accurate transient analysis with adaptive time step control. The coarse mesh finite difference (CMFD) formulation is used for an efficient implementation of the BDF method that does not require excessive memory to store old information from previous time steps. An iterative scheme to update the nodal coupling coefficients through higher order local nodal solutions is established in order to make it possible to store only the node average fluxes of the previous five time points. An adaptive time step control method is derived using two solutions of different order, the fifth- and fourth-order BDF solutions, which provide an estimate of the solution error at the current time point. The performance of the BDF- and CMFD-based spatial kinetics calculation and the adaptive time step control scheme is examined with the NEACRP control rod ejection and rod withdrawal benchmark problems. The accuracy is first assessed by comparing the BDF-based results with those of the Crank-Nicolson method with an exponential transform. The effectiveness of the adaptive time step control is then assessed in terms of the possible reduction in computing time while producing solutions that meet the desired fidelity.
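The two-order error estimate that drives the step control can be illustrated with a much simpler integrator pair. The sketch below uses a first-order/second-order pair (Euler and Heun) in place of the paper's fourth- and fifth-order BDF solutions; all names and the step-size heuristics are illustrative, not the paper's scheme:

```python
def step_pair(f, t, y, h):
    """One explicit Euler step (order 1) and one Heun step (order 2);
    their difference estimates the local error, just as the paper's
    4th/5th-order BDF pair does."""
    k1 = f(t, y)
    euler = y + h * k1                 # lower-order solution
    k2 = f(t + h, euler)
    heun = y + 0.5 * h * (k1 + k2)     # higher-order solution
    return euler, heun

def integrate_adaptive(f, t0, y0, t_end, h0, tol=1e-6):
    """Advance y' = f(t, y), accepting or rejecting each step based on
    the two-solution local error estimate."""
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)          # do not overshoot the end time
        lo, hi = step_pair(f, t, y, h)
        err = abs(hi - lo)             # local error estimate
        if err <= tol:                 # accept: keep the higher-order value
            t, y = t + h, hi
        # grow or shrink h toward the target error (safety factor 0.9)
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# decay problem dy/dt = -y; the exact solution at t = 1 is exp(-1)
y1 = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0, 0.1, tol=1e-8)
```

The rejected-step logic is the essential point: a step whose estimated error exceeds the tolerance is discarded and retried with a smaller h, which is what lets the solver take large steps through slow transients and small steps through a rod ejection.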

Non-Preemptive Fixed Priority Scheduling for Design of Real-Time Embedded Systems (실시간 내장형 시스템의 설계를 위한 비선점형 고정우선순위 스케줄링)

  • Park, Moon-Ju
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.2
    • /
    • pp.89-97
    • /
    • 2009
  • Embedded systems widely used in ubiquitous environments usually employ an event-driven programming model instead of a thread-based programming model in order to create a more robust system that uses less memory. However, as the software for embedded systems becomes more complex, it becomes hard to program the whole system as a single event handler under the event-driven model. This paper discusses the application of non-preemptive real-time scheduling theory to the design of embedded systems. To this end, we present an efficient schedulability test for a given non-preemptive task set using a sufficient condition. This paper also shows that the notion of sub-tasks in embedded systems can overcome the low processor utilization that is the main drawback of non-preemptive scheduling.
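A sufficient-condition test of the kind the paper builds on can be sketched along the lines of the classical start-time analysis for non-preemptive fixed-priority scheduling. This is a simplified, pessimistic variant for illustration, not the paper's exact method; deadlines are assumed equal to periods:

```python
import math

def schedulable(tasks):
    """Sufficient (pessimistic) schedulability test for non-preemptive
    fixed-priority scheduling. tasks = [(C, T)] sorted by priority,
    highest first; C = worst-case execution time, T = period = deadline."""
    for i, (C_i, T_i) in enumerate(tasks):
        # a task can be blocked by at most one already-started
        # lower-priority job (non-preemptive blocking)
        B_i = max((C for C, _ in tasks[i + 1:]), default=0)
        # fixed-point iteration on the latest start time of task i
        S = B_i
        while True:
            S_new = B_i + sum(
                (math.floor(S / T_j) + 1) * C_j
                for C_j, T_j in tasks[:i])          # higher-priority jobs
            if S_new == S:
                break
            S = S_new
            if S + C_i > T_i:       # diverging: deadline already missed
                return False
        if S + C_i > T_i:           # start time + own execution vs. deadline
            return False
    return True
```

For example, `schedulable([(1, 4), (1, 5), (2, 10)])` accepts the set, while `schedulable([(2, 4), (3, 6)])` rejects it because the highest-priority task can be blocked for 3 time units by the lower-priority job and then miss its deadline of 4.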

High fidelity transient solver in STREAM based on multigroup coarse-mesh finite difference method

  • Anisur Rahman;Hyun Chul Lee;Deokjung Lee
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3301-3312
    • /
    • 2023
  • This study incorporates a high-fidelity transient analysis solver based on multigroup CMFD into the MOC code STREAM. Transport modeling with heterogeneous reactor-core geometries is expensive in both memory and time, whereas multigroup CMFD reduces the computational cost. The reactor condition does not change at every time step, which is the key property exploited by CMFD: the CMFD correction factors are updated from the transport solution whenever the core condition changes, and the simulation continues to the end, with the transport solution adjusted once the CMFD solution is obtained. The flux-weighted method is used for rod decusping, updating the material of a cell containing a partially inserted control rod while maintaining the stability of the solution. A smaller time-step size is needed to obtain an accurate solution, which increases the computational cost; the adaptive step-size control algorithm is robust for controlling the time-step size. This algorithm is based on local errors and can accept or reject each step's solution. Several numerical problems are selected to analyze the performance and numerical accuracy of parallel computing, rod decusping, and adaptive time-step control. Lastly, a typical pressurized light water reactor is chosen to study a rod-ejection accident.
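The flux-weighted decusping step amounts to homogenizing the rodded and unrodded parts of a partially rodded cell with flux-weighted volume fractions. A minimal sketch, with illustrative names and a generic one-group cross section standing in for the code's actual data:

```python
def flux_weighted_xs(sigma_rodded, sigma_unrodded, frac_rodded,
                     phi_rodded, phi_unrodded):
    """Flux-weighted homogenization of a partially rodded cell: each
    region's cross section is weighted by (volume fraction x flux).
    frac_rodded is the inserted volume fraction of the control rod."""
    w_r = frac_rodded * phi_rodded            # rodded-region weight
    w_u = (1.0 - frac_rodded) * phi_unrodded  # unrodded-region weight
    return (sigma_rodded * w_r + sigma_unrodded * w_u) / (w_r + w_u)
```

By construction the result reduces to the rodded cross section at full insertion (`frac_rodded = 1`) and to the unrodded one at full withdrawal, so the homogenized value varies smoothly as the rod tip moves through the cell, which is what suppresses the cusping in the reactivity curve.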

Structural reliability analysis using temporal deep learning-based model and importance sampling

  • Nguyen, Truong-Thang;Dang, Viet-Hung
    • Structural Engineering and Mechanics
    • /
    • v.84 no.3
    • /
    • pp.323-335
    • /
    • 2022
  • The main idea of the framework is to seamlessly combine a reasonably accurate and fast surrogate model with an importance sampling strategy. Developing a surrogate model for predicting structures' dynamic responses is challenging because it involves high-dimensional inputs and outputs. For this purpose, a novel surrogate model based on cutting-edge deep learning architectures specialized for capturing temporal relationships within time-series data, namely long short-term memory (LSTM) layers and Transformer layers, is designed. After being properly trained, the surrogate model can be utilized in place of the finite element method to evaluate structures' responses without requiring any specialized software. On the other hand, importance sampling is adopted to reduce the number of calculations required when computing the failure probability by drawing more relevant samples near critical areas. Thanks to the portability of the trained surrogate model, one can integrate it with importance sampling in a straightforward fashion, forming an efficient framework called TTIS, which offers two advantages: fewer calculations are needed, and the computational time of each calculation is significantly reduced. The proposed approach's applicability and efficiency are demonstrated through three examples of increasing complexity, involving a 1D beam, a 2D frame, and a 3D building structure. The results show that, compared to conventional Monte Carlo simulation, the proposed method can provide highly similar reliability results with a reduction of up to four orders of magnitude in time complexity.
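The surrogate-plus-importance-sampling coupling can be sketched as follows. A toy linear limit state stands in for the trained LSTM/Transformer surrogate, and the mean-shifted Gaussian proposal and all names are illustrative, not the paper's setup:

```python
import numpy as np

def failure_prob_is(surrogate, mu_shift, n=20000, seed=0):
    """Importance-sampling estimate of P[g(X) < 0] for standard-normal
    inputs X, drawing from a proposal shifted toward the failure region
    by mu_shift. In the TTIS framework the limit-state function g would
    be the trained surrogate instead of an analytical expression."""
    rng = np.random.default_rng(seed)
    d = len(mu_shift)
    x = rng.standard_normal((n, d)) + mu_shift   # samples from proposal
    # likelihood ratio phi(x) / phi(x - mu) for a mean-shifted Gaussian
    w = np.exp(-x @ mu_shift + 0.5 * mu_shift @ mu_shift)
    fail = surrogate(x) < 0.0                    # failure indicator
    return float(np.mean(fail * w))

# toy limit state g(x) = 3 - x1; the exact P_f is Phi(-3) ~ 1.35e-3
p = failure_prob_is(lambda x: 3.0 - x[:, 0], np.array([3.0, 0.0]))
```

Shifting the proposal toward the critical region is what concentrates samples where failures occur; the likelihood-ratio weights keep the estimator unbiased, so far fewer surrogate evaluations are needed than with plain Monte Carlo at this failure probability.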

Bioinformatics services for analyzing massive genomic datasets

  • Ko, Gunhwan;Kim, Pan-Gyu;Cho, Youngbum;Jeong, Seongmun;Kim, Jae-Yoon;Kim, Kyoung Hyoun;Lee, Ho-Yeon;Han, Jiyeon;Yu, Namhee;Ham, Seokjin;Jang, Insoon;Kang, Byunghee;Shin, Sunguk;Kim, Lian;Lee, Seung-Won;Nam, Dougu;Kim, Jihyun F.;Kim, Namshin;Kim, Seon-Young;Lee, Sanghyuk;Roh, Tae-Young;Lee, Byungwook
    • Genomics & Informatics
    • /
    • v.18 no.1
    • /
    • pp.8.1-8.10
    • /
    • 2020
  • The explosive growth of next-generation sequencing data has resulted in ultra-large-scale datasets and ensuing computational problems. In Korea, the amount of genomic data has been increasing rapidly in recent years. Leveraging these big data requires researchers to use large-scale computational resources and analysis pipelines. A promising solution for addressing this computational challenge is cloud computing, where CPUs, memory, storage, and programs are accessible in the form of virtual machines. Here, we present a cloud computing-based system, Bio-Express, that provides user-friendly, cost-effective analysis of massive genomic datasets. Bio-Express is loaded with predefined multi-omics data analysis pipelines, which are divided into genome, transcriptome, epigenome, and metagenome pipelines. Users can employ predefined pipelines or create a new pipeline for analyzing their own omics data. We also developed several web-based services for facilitating downstream analysis of genome data. The Bio-Express web service is freely available at https://www.bioexpress.re.kr/.

A Study on the Efficiency of Join Operation On Stream Data Using Sliding Windows (스트림 데이터에서 슬라이딩 윈도우를 사용한 조인 연산의 효율에 관한 연구)

  • Yang, Young-Hyoo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.2
    • /
    • pp.149-157
    • /
    • 2012
  • In this thesis, the problem of computing approximate answers to continuous sliding-window joins over data streams is considered for the case when the available memory is insufficient to keep the entire join state. One approximation scenario is to provide a maximum subset of the result, with the objective of losing as few result tuples as possible. An alternative scenario is to provide a random sample of the join result, e.g., if the output of the join is being aggregated. It is shown formally that neither approximation can be addressed effectively for a sliding-window join of arbitrary input streams. Previous work has addressed only the maximum-subset problem and has implicitly used a frequency-based model of stream arrival, under which a sampling problem arises. More importantly, it is shown that there is a broad class of applications for which an age-based model of stream arrival is more appropriate, and both approximation scenarios are addressed under this new model. Finally, for the case of multiple joins executed under an overall memory constraint, an algorithm is provided for allocating memory across the joins that optimizes a combined measure of approximation over all scenarios considered.
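For reference, the exact (unlimited-memory) operation being approximated is a time-based sliding-window equi-join. The sketch below implements that baseline without the memory-limited eviction policies the thesis analyzes; names and the tuple layout are illustrative:

```python
from collections import deque

def window_join(stream_r, stream_s, window):
    """Time-based sliding-window equi-join of two streams of
    (timestamp, key) tuples, each assumed in timestamp order. Each
    arriving tuple joins against the other stream's tuples that are
    within `window` time units; older state is expired."""
    r_win, s_win = deque(), deque()
    out = []

    def expire(win, now):
        # drop tuples that have slid out of the window
        while win and win[0][0] < now - window:
            win.popleft()

    # merge the two streams into one timestamp-ordered event sequence
    events = sorted([(t, k, 'R') for t, k in stream_r] +
                    [(t, k, 'S') for t, k in stream_s])
    for t, k, side in events:
        expire(r_win, t)
        expire(s_win, t)
        own, other = (r_win, s_win) if side == 'R' else (s_win, r_win)
        out.extend((k, t, t2) for t2, k2 in other if k2 == k)
        own.append((t, k))
    return out
```

The memory pressure studied in the thesis arises exactly in `r_win` and `s_win`: when these buffers cannot hold the full window, some tuples must be dropped, and the choice of which to drop is what distinguishes the maximum-subset and random-sample approximations.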

The Study on Development of a Digital Internet Radio Receiver (디지털 인터넷 라디오 수신기 구현에 대한 연구)

  • Park, In-Gyu
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.2
    • /
    • pp.102-110
    • /
    • 2006
  • This paper explains the design and development of a stand-alone, high-sound-quality Internet radio system, aimed at a small embedded audio device rather than a general PC. The device is designed to work over an Internet connection. Such systems have not been standardized so far, and the related algorithms are not open to the public, so it was necessary to analyze the receiving algorithms of current radio receivers and to develop our own hardware in order to overcome these obstacles and achieve high-quality radio sound. The main electronic components of this Internet radio are a TCP/IP interface, an MP3 audio decoder, an I/O interface, and a flash memory card, with advanced audio multicasting for next-generation Internet radio. Basic structures and implementation issues of this next-generation, highly versatile digital music player and Internet radio receiver are discussed.

Profiler Design for Evaluating Performance of WebCL Applications (WebCL 기반 애플리케이션의 성능 평가를 위한 프로파일러 설계 및 구현)

  • Kim, Cheolwon;Cho, Hyeonjoong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.8
    • /
    • pp.239-244
    • /
    • 2015
  • WebCL was proposed for highly complex computing in JavaScript. Since WebCL-based applications are distributed to and executed on an unspecified number of general clients, it is important to profile their performance on different clients. Several profilers have been introduced to support various programming languages, but a WebCL profiler has not been developed yet. In this paper, we present a WebCL profiler to evaluate WebCL-based applications and monitor the status of the GPU on which they run. This profiler lets developers know the execution time of applications, the memory read/write time, and GPU status such as power consumption, temperature, and clock speed.
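The basic bookkeeping of such a profiler, counting invocations and accumulating execution time per instrumented function, can be sketched generically. This is in Python rather than JavaScript/WebCL, and it omits the kernel events and GPU counters the paper's tool reads; it only illustrates the instrument-and-aggregate pattern:

```python
import functools
import time

def profiled(fn):
    """Wrap a function so each call is timed; per-function totals are
    accumulated in a stats dict attached to the wrapper. A WebCL
    profiler would instead hook kernel-enqueue events and query the
    GPU driver for power, temperature, and clock readings."""
    stats = {'calls': 0, 'total': 0.0}

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            stats['calls'] += 1
            stats['total'] += time.perf_counter() - t0

    wrapper.stats = stats
    return wrapper

@profiled
def work(n):
    return sum(range(n))

work(1000)
work(1000)
```

After the two calls above, `work.stats['calls']` is 2 and `work.stats['total']` holds the accumulated wall-clock time, which is the kind of aggregate a profiler front end would display per function or per kernel.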

A Study on the Analysis of Bilinear Systems via Fast Walsh Transform (고속 월쉬 변환을 이용한 쌍일차계의 해석에 관한 연구)

  • 김태훈;심재선
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.16 no.1
    • /
    • pp.85-91
    • /
    • 2002
  • Generally, when orthogonal functions are used in system analysis, time-consuming high-order matrix inversions for finding Kronecker products are required and truncation errors occur. In this paper, a method for analyzing bilinear systems via the fast Walsh transform is devised. This method requires neither the inversion of large matrices nor cumbersome procedures for finding Kronecker products; thus, both the computing time and the required memory size can be significantly reduced.
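The fast Walsh transform at the heart of the method is the standard butterfly recursion, which replaces an O(n^2) matrix-vector product with O(n log n) additions and subtractions. A minimal sketch, assuming natural (Hadamard) ordering:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform in natural (Hadamard) ordering.
    The length of `a` must be a power of two. Each pass combines
    elements h apart with a sum/difference butterfly, so the whole
    transform costs O(n log n) instead of the O(n^2) of an explicit
    Walsh-matrix product."""
    a = list(a)                        # work on a copy
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly
        h *= 2
    return a
```

Since the Walsh matrix satisfies H^2 = nI, applying `fwht` twice returns n times the original vector, which gives a quick self-check of an implementation.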

An MDIT(Mobile Digital Investment Trust) Agent design and security enhancement using 3BC and F2mECC (3BC와 F2mECC를 이용한 MDIT(Mobile Digital Investment Trust) 에이전트 설계 및 보안 강화)

  • Jeong Eun-Hee;Lee Byung-Kwan
    • Journal of Internet Computing and Services
    • /
    • v.6 no.3
    • /
    • pp.1-16
    • /
    • 2005
  • This paper proposes an MDIT (Mobile Digital Investment Trust) agent design for trust investment in the mobile e-commerce environment, together with the symmetric-key algorithm 3BC (Bit, Byte and Block Cypher) and the public-key encryption algorithm F2mECC, to solve the problems of memory capacity, CPU processing time, and security that the mobile environment poses. In particular, the MDIT security agent is a banking security project that introduces the concept of investment trust into mobile e-commerce. The mobile security protocol creates a shared secret key using F2mECC, and its value is then used for 3BC, a block encryption technique. The security and the processing speed of the MDIT agent are enhanced using 3BC and F2mECC.
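The field arithmetic underlying F2mECC is multiplication in GF(2^m): a carry-less polynomial product over GF(2) followed by reduction modulo an irreducible polynomial. A minimal sketch with an illustrative GF(2^4) example (the field size and polynomial are not the parameters used in the paper):

```python
def gf2m_mul(a, b, m, poly):
    """Multiply field elements a and b in GF(2^m), with elements
    encoded as integer bitmasks of polynomial coefficients and `poly`
    the irreducible polynomial including its x^m term."""
    # carry-less (polynomial) multiplication over GF(2)
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # reduce modulo the irreducible polynomial, high bits first
    for bit in range(prod.bit_length() - 1, m - 1, -1):
        if prod >> bit & 1:
            prod ^= poly << (bit - m)
    return prod

# GF(2^4) with poly x^4 + x + 1 (0b10011): x * x^3 = x^4 = x + 1
r = gf2m_mul(0b0010, 0b1000, 4, 0b10011)
```

Elliptic-curve point addition and scalar multiplication over GF(2^m) are built from exactly this multiply (plus field inversion), which is why an efficient GF(2^m) multiplier dominates the processing-time trade-off the paper addresses.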
