• Title/Summary/Keyword: Fast Computation

Topology of High Speed System Emulator and Its Software (초고속 시스템 에뮬레이터의 구조와 이를 위한 소프트웨어)

  • Kim, Nam-Do;Yang, Se-Yang
    • The KIPS Transactions:PartA / v.8A no.4 / pp.479-488 / 2001
  • As the complexity of SoC designs constantly increases, simulation using their software models simply takes too much time. To solve this problem, FPGA-based logic emulators have been developed and are commonly used in industry. However, FPGA-based logic emulators suffer not only from very low FPGA resource utilization, due to the limited number of pins on an FPGA, but also from emulation speed that drops drastically as design complexity grows. In this paper, we propose a new emulation architecture and accompanying software that achieve high FPGA resource utilization and make emulation extremely fast. The proposed emulation system overcomes the FPGA pin limitation with a pipelined ring that transfers multiple logic signals through a single physical pin, and its intelligent ring topology also makes a high-speed system clock possible. In this topology, all signal transfer channels among FPGAs are completely separated from the user logic so that a high-speed system clock can be used, and the depth of combinational paths is kept as shallow as possible; both contribute to high-speed emulation. For pipelined signal transfer among FPGAs we adopt a few heuristic scheduling algorithms with low computational complexity. Experimental results with a 12-bit microcontroller show that high-speed emulation is possible even with these simple heuristic scheduling algorithms.
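
As a rough illustration of the pipelined-ring idea, the hypothetical sketch below (not the paper's actual scheduler) packs inter-FPGA signals into successive ring time slots so that several logic signals share one physical pin; in keeping with the paper's goal of shallow combinational paths, signals with the smallest combinational depth are sent first. The signal names and depth values are invented for the example.

```python
from math import ceil

def schedule_ring_slots(signals, pins_per_hop):
    """Greedy time-multiplexing sketch: assign each inter-FPGA signal to a
    ring time slot. `signals` is a list of (name, comb_depth) pairs; signals
    with shallower combinational depth go out in earlier slots so downstream
    logic is unblocked as early as possible (a low-complexity heuristic)."""
    order = sorted(signals, key=lambda s: s[1])          # shallow depth first
    n_slots = ceil(len(order) / pins_per_hop)
    slots = [[] for _ in range(n_slots)]
    for i, (name, _) in enumerate(order):
        slots[i // pins_per_hop].append(name)            # fill slot by slot
    return slots

# Six signals shared over two physical pins -> three ring cycles per transfer.
print(schedule_ring_slots(
    [("a", 3), ("b", 1), ("c", 2), ("d", 1), ("e", 4), ("f", 2)],
    pins_per_hop=2))
```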


Feature Selection to Predict Very Short-term Heavy Rainfall Based on Differential Evolution (미분진화 기반의 초단기 호우예측을 위한 특징 선택)

  • Seo, Jae-Hyun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.706-714 / 2012
  • The Korea Meteorological Administration provided four years of recent weather records for our very short-term heavy rainfall prediction. We divided the dataset into three parts: training, validation, and test sets. Through feature selection, we select only the important features among the 72 available ones, avoiding the explosion of the solution space, which grows exponentially with dimensionality. We used a differential evolution algorithm with two classifiers as fitness functions of the evolutionary computation to select a more accurate feature subset. One classifier is the Support Vector Machine (SVM), which shows high accuracy; the other is k-Nearest Neighbor (k-NN), which is generally fast. The test results of SVM were more prominent than those of k-NN in our experiments. We also processed the weather data using undersampling and normalization techniques. On the test set, our differential evolution algorithm performed about five times better than using all features and about 1.36 times better than a genetic algorithm, previously the best-known approach; the genetic algorithm also ran about twenty times longer than the differential evolution algorithm.
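
The feature-selection loop described above can be sketched as follows. This is a minimal, hypothetical reconstruction, not the authors' code: real-valued DE individuals are thresholded into binary feature masks, and cross-validated k-NN accuracy (the faster of the paper's two classifiers) serves as the fitness function; scikit-learn supplies the classifier and cross-validation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def de_feature_selection(X, y, pop_size=20, generations=50,
                         F=0.8, CR=0.9, seed=0):
    """Differential evolution for binary feature selection: individuals are
    real vectors in [0, 1]; genes above 0.5 select the corresponding feature."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat))

    def fitness(vec):
        mask = vec > 0.5
        if not mask.any():                      # empty subset is useless
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(knn, X[:, mask], y, cv=3).mean()

    scores = np.array([fitness(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)     # DE/rand/1 mutation
            cross = rng.random(n_feat) < CR
            cross[rng.integers(n_feat)] = True              # force one mutant gene
            trial = np.where(cross, mutant, pop[i])         # binomial crossover
            t_score = fitness(trial)
            if t_score >= scores[i]:                        # greedy selection
                pop[i], scores[i] = trial, t_score
    best = pop[scores.argmax()]
    return best > 0.5, scores.max()
```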

A Comparative Study on Dietary Life according to the Obesity Assessment Methods of Higher Grade Elementary School Students in Jeonju (전주지역 고학년 초등학생의 비만판정 방법에 따른 식생활 비교연구)

  • Yu, Ok-Kyeong;Cha, Youn-Soo
    • Korean Journal of Human Ecology / v.9 no.4 / pp.83-93 / 2006
  • This study investigated whether eating habits and eating behaviors differ between non-obese and obese elementary school students in the Jeonju area. A total of 2,568 students (1,364 male, 1,204 female) in the 4th, 5th, and 6th grades of 5 elementary schools were surveyed, and the results were analyzed with the SPSS program. The results are summarized as follows. 1. Obesity was defined as a Body Mass Index (BMI) above the 85th percentile or an Obesity Index (OI) above 110. Subjects were first divided into 4 groups (lean, normal, overweight, and obese) and then reclassified into non-obese (lean and normal) and obese (overweight and obese) groups. The average heights of male and female students were 142.5 cm and 143.1 cm, and their average weights were 36.4 kg and 37.9 kg, respectively. 2. By BMI, 19.6% of male students were obese (11.3% overweight, 8.3% obese); by OI, 25.0% were obese (12.5% overweight, 12.5% obese). In particular, the obesity rate of male students was significantly higher than that of female students under the OI method. 3. The difference between male and female students was statistically significant for OI but not for BMI; the differences among grade levels (4th, 5th, 6th) were statistically significant. 4. Some correlations between eating habits (eating behaviors) and obesity were statistically significant, for example, the correlation between eating quickly and obesity; the analysis of general environment against obesity was also statistically significant in some cases. These results suggest that the number of overweight students may increase due to the amount and kinds of food children eat, as well as general causes of overweight such as genetic, environmental, and psychological factors. Beyond methodically surveying children's eating habits and eating behaviors as this study did, working with parents is necessary, and comparing children's past and present eating habits, eating behaviors, and nutrition knowledge is also needed in future work.
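
For reference, a small sketch of the OI screening rule mentioned above. The OI formula (measured weight as a percentage of the standard weight for height) and the cutoffs below 110 are common conventions assumed here for illustration; only the >110 overweight/obese threshold and the BMI 85th-percentile rule are stated in the abstract.

```python
def obesity_index(weight_kg, standard_weight_kg):
    """Obesity Index: measured weight as a percentage of the standard
    weight for the child's height and sex."""
    return weight_kg / standard_weight_kg * 100.0

def classify_by_oi(oi):
    """Four-group classification; cutoffs below 110 are assumed conventions."""
    if oi < 90:
        return "lean"
    if oi <= 110:
        return "normal"
    if oi <= 120:
        return "overweight"
    return "obese"

# A 38 kg child whose standard weight is 32 kg has OI ~118.8 -> "overweight".
print(classify_by_oi(obesity_index(38.0, 32.0)))
```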


Evaluation of the DCT-PLS Method for Spatial Gap Filling of Gridded Data (격자자료 결측복원을 위한 DCT-PLS 기법의 활용성 평가)

  • Youn, Youjeong;Kim, Seoyeon;Jeong, Yemin;Cho, Subin;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1407-1419 / 2020
  • Long time-series gridded data are crucial for analyzing Earth environmental changes. Climate reanalyses and satellite images now serve as global-scale, periodic, quantitative information on the atmosphere and land surface. This paper examines the feasibility of DCT-PLS (penalized least squares regression based on the discrete cosine transform) for spatial gap filling of gridded data through experiments on multiple variables. Because gap-free data are required for an objective comparison between original and gap-filled data, we used LDAPS (Local Data Assimilation and Prediction System) daily data and MODIS (Moderate Resolution Imaging Spectroradiometer) monthly products. In experiments on relative humidity, wind speed, LST (land surface temperature), and NDVI (normalized difference vegetation index), randomly generated gaps were filled with values very close to the original data; the correlation coefficients were over 0.95 for all four variables. Because the DCT-PLS method requires no ancillary data and exploits both spatial and temporal information with fast computation, it can be applied to operational systems for satellite data processing.
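
A compact sketch of the gap-filling mechanism is shown below, in the spirit of DCT-based penalized least squares smoothers (Garcia-style): high-frequency DCT coefficients are damped by a penalty, and missing cells are iteratively replaced by the smoothed field while observed cells stay fixed. The penalty construction and iteration count here are simplifying assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_pls_fill(y, valid, s=1.0, n_iter=200):
    """Fill gaps in a 2-D grid `y` where `valid` is True at observed cells.

    Smoothing step: z_hat = IDCT(Gamma * DCT(z)), with Gamma damping high
    frequencies according to the Laplacian eigenvalues and penalty weight s.
    """
    ny, nx = y.shape
    lam = ((2 - 2 * np.cos(np.pi * np.arange(ny) / ny))[:, None]
           + (2 - 2 * np.cos(np.pi * np.arange(nx) / nx))[None, :])
    gamma = 1.0 / (1.0 + s * lam ** 2)             # penalized least squares filter

    z = np.where(valid, y, np.nanmean(y[valid]))   # crude initial guess at gaps
    for _ in range(n_iter):
        z_hat = idctn(gamma * dctn(z, norm='ortho'), norm='ortho')
        z = np.where(valid, y, z_hat)              # observed cells stay fixed
    return z
```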

Hierarchical Particle Swarm Optimization for Multi UAV Waypoints Planning Under Various Threats (다양한 위협 하에서 복수 무인기의 경로점 계획을 위한 계층적 입자 군집 최적화)

  • Chung, Wonmo;Kim, Myunggun;Lee, Sanha;Lee, Sang-Pill;Park, Chun-Shin;Son, Hungsun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.6 / pp.385-391 / 2022
  • This paper develops a path planning algorithm combining gradient descent-based path planning (GBPP) and particle swarm optimization (PSO) that accounts for prohibited flight areas, terrain information, and the characteristics of fixed-wing unmanned aerial vehicles (UAVs) in 3D space. GBPP can generate a path quickly, but it often produces an unsafe path by converging to a local minimum, depending on the initial path. Bio-inspired swarm intelligence algorithms, such as the genetic algorithm (GA) and PSO, can avoid the local-minimum problem by sampling several paths. However, as the number of optimization variables grows with the number of UAVs and waypoints, the number of particles must grow accordingly, requiring heavy computation time and effort. To overcome the disadvantages of both algorithms, a hierarchical path planning algorithm based on hierarchical particle swarm optimization (HPSO) is developed by defining the initial path, the input to GBPP, with just two particle variables. The feasibility of the proposed algorithm is verified by software-in-the-loop simulation (SILS) of a UAV flight control computer (FCC).
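
The swarm layer of such a scheme can be summarized with a plain PSO minimizer like the sketch below (illustrative only; in the paper's hierarchical setup the cost evaluation would additionally run GBPP from the decoded initial path, and only the initial path is encoded in the particle variables, which is what keeps the particle dimension small).

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0, seed=0):
    """Standard particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Toy cost: in the paper's setting this would score the GBPP-refined path
# against threats, terrain, and prohibited areas.
best, val = pso_minimize(lambda p: float(np.sum((p - 3.0) ** 2)), dim=4)
```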

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information / v.27 no.10 / pp.29-41 / 2022
  • Recently, with the development of deep learning, a variety of huge models with excellent performance have been devised by pre-training on massive amounts of text data. However, for such a model to be applied to real-life services, inference must be fast and computation low, so model compression techniques are attracting attention. Knowledge distillation, a representative model compression approach, transfers the knowledge already learned by a teacher model to a relatively small student model and can be used in a variety of ways. However, knowledge distillation has a limitation: because the teacher model learns only the knowledge necessary for solving the given problem, and distillation to the student model proceeds from that same point of view, it is difficult to solve problems with low similarity to the previously learned data. Therefore, we propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept rather than the knowledge required for the task the student model needs to solve, and then distills this knowledge to the student model. In addition, through classification experiments on about 18,000 documents, we confirmed that heterogeneous knowledge distillation outperformed traditional knowledge distillation in both learning efficiency and accuracy.
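
The distillation objective underlying both the traditional and the proposed heterogeneous setup can be written as the standard softened-softmax loss (Hinton-style). The PyTorch sketch below is illustrative: the paper's contribution lies in what the teacher is trained on, not in the loss form itself.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.5):
    """Weighted sum of (1) KL divergence between the student's and the
    teacher's temperature-softened distributions and (2) ordinary
    cross-entropy against the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # T^2 keeps the gradient scale comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```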

A Study about Learning Graph Representation on Farmhouse Apple Quality Images with Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae;Ju Hwan Lee;Gwang Hyun Yu;Gyeong Ju Kwon;Jin Young Kim
    • Smart Media Journal / v.12 no.1 / pp.9-16 / 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limits of human labor in farmhouse apple quality classification. However, since CNNs accept only images of a fixed size, preprocessing such as resampling may be required, and oversampling causes information loss in the original image, such as quality degradation and blurring. In this paper, to minimize these problems, we generate an image-patch-based graph from the original image and propose a random-walk-based positional encoding method for applying a graph transformer model. The method learns position embeddings for patches, which otherwise carry no positional information, based on the random walk algorithm, and finds an optimal graph structure by aggregating useful node information through the graph transformer's self-attention. It is therefore robust and performs well even on new graph structures with random node orderings and on arbitrary graph structures induced by the location of an object in an image. In experiments on 5 apple quality datasets, accuracy was higher than other GNN models by a minimum of 1.3% to a maximum of 4.7%, and the parameter count of 3.59M was about 15% of the ResNet18 model's 23.52M. The reduced computation accordingly yields fast inference, demonstrating the method's effectiveness.
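
A random-walk positional encoding of the kind referenced above is typically built from powers of the row-normalized adjacency matrix; the minimal sketch below is an assumption about the exact construction (which the abstract does not spell out) and encodes each patch node by its k-step return probabilities.

```python
import numpy as np

def random_walk_pe(adj, k=8):
    """Random-walk positional encoding: node i's feature at step t is the
    probability of a t-step walk returning to i, i.e. diag((D^-1 A)^t)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    P = adj / np.maximum(deg, 1)[:, None]   # transition matrix; guard isolated nodes
    pe = np.empty((n, k))
    Pt = np.eye(n)
    for t in range(k):
        Pt = Pt @ P                          # P^(t+1)
        pe[:, t] = np.diag(Pt)
    return pe

# 4-cycle graph: all nodes are symmetric, so every row of the encoding matches.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(random_walk_pe(A, k=4))
```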

Fast Full Search Block Matching Algorithm Using The Search Region Subsampling and The Difference of Adjacent Pixels (탐색 영역 부표본화 및 이웃 화소간의 차를 이용한 고속 전역 탐색 블록 정합 알고리듬)

  • Cheong, Won-Sik;Lee, Bub-Ki;Lee, Kyeong-Hwan;Choi, Jung-Hyun;Kim, Kyeong-Kyu;Kim, Duk-Gyoo;Lee, Kuhn-Il
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.102-111 / 1999
  • In this paper, we propose a fast full search block matching algorithm using search region subsampling and the differences of adjacent pixels in the current block. The proposed algorithm calculates a lower bound of the mean absolute difference (MAD) at each search point from the MAD of a neighboring search point and the adjacent-pixel differences of the current block, and then performs block matching only at search points that require it. Because computing the lower bound at a search point needs the MAD of a neighboring point, the search points are subsampled by a factor of 4 and the MADs at the subsampled points are computed by full block matching. The lower bounds at the remaining search points are then calculated from the MAD of the neighboring subsampled point and the adjacent-pixel differences of the current block. Finally, search points whose MAD lower bound exceeds the reference MAD, the minimum of the MADs at the subsampled points, are discarded, and block matching is performed only at the remaining points. This reduces the computational complexity drastically while keeping the motion-compensated error performance identical to that of the full search block matching algorithm (FSBMA). Experimental results confirm that the proposed method has much lower computational complexity than FSBMA with the same motion-compensated error performance.
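
The skip test can be illustrated as follows. This is a simplified sketch of the bound-and-skip idea rather than the paper's exact derivation: exact MADs are computed on a grid subsampled by a factor of 4 (every other point per axis), and at the remaining points a triangle-inequality lower bound, built from a neighboring exact MAD and the mean adjacent-pixel difference of the current block, decides whether full matching can be skipped. Indexing assumes the caller keeps the search window inside the reference frame.

```python
import numpy as np

def mad(cur, ref, bx, by, u, v, N):
    """Mean absolute difference between the current NxN block at (bx, by)
    and the reference block displaced by (u, v)."""
    c = cur[by:by + N, bx:bx + N].astype(int)
    r = ref[by + v:by + v + N, bx + u:bx + u + N].astype(int)
    return np.abs(c - r).mean()

def fast_full_search(cur, ref, bx, by, N=16, sr=8):
    # Mean horizontal adjacent-pixel difference of the current block.
    dh = np.abs(np.diff(cur[by:by + N, bx:bx + N].astype(int), axis=1)).mean()
    exact = {(u, v): mad(cur, ref, bx, by, u, v, N)
             for v in range(-sr, sr + 1, 2)        # subsampled grid:
             for u in range(-sr, sr + 1, 2)}       # 1/4 of the search points
    best_uv, best = min(exact.items(), key=lambda kv: kv[1])
    for v in range(-sr, sr + 1):
        for u in range(-sr, sr + 1):
            if (u, v) in exact:
                continue
            nbr = exact.get((u - 1, v), exact.get((u + 1, v)))
            if nbr is not None and nbr - dh > best:
                continue                           # lower bound too large: skip
            m = mad(cur, ref, bx, by, u, v, N)
            if m < best:
                best, best_uv = m, (u, v)
    return best_uv, best
```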


An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. Production seismic data processing requires a good velocity analysis tool as well as a high-performance computer; the tool must give fast and accurate velocity analysis. There are two approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point, generally consisting of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot; the technique is highly dependent on the interpreter's skill and requires substantial human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to noise, especially the coherent noise often found in the shallow region of marine seismic data; for accurate velocity analysis, this noise must be removed before the spectrum is computed. The analysis must also be carried out with carefully chosen analysis point locations and accurate computation of the spectrum, and the analyzed velocity function must be verified by mute and stack, with the sequence repeated in most cases. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. We developed such an interactive program, xva (X-Window based Velocity Analysis). It handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack; most parameter changes yield the final stack in a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of a Geobit seismic disk file was written; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct and refracted waves, with two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries, can be installed in a Geobit-preinstalled environment, and runs under X-Window/Motif with a menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
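
The core computation behind such a tool, the semblance velocity spectrum, can be sketched as below. This is a minimal NMO-plus-semblance illustration, not xva's implementation; nearest-sample NMO and the fixed window length are simplifying assumptions.

```python
import numpy as np

def semblance_spectrum(gather, offsets, dt, velocities, window=11):
    """Semblance S(t0, v) for a CMP gather of shape (n_samples, n_traces):
    traces are NMO-corrected for each trial velocity, and coherence is
    measured as (sum of amplitudes)^2 / (N * sum of squared amplitudes),
    smoothed over a short time window."""
    ns, ntr = gather.shape
    t0 = np.arange(ns) * dt
    spec = np.zeros((ns, len(velocities)))
    win = np.ones(window)
    for iv, v in enumerate(velocities):
        nmo = np.zeros_like(gather)
        for j, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2)       # hyperbolic moveout
            idx = np.round(t / dt).astype(int)
            ok = idx < ns
            nmo[ok, j] = gather[idx[ok], j]           # nearest-sample NMO
        num = nmo.sum(axis=1) ** 2
        den = ntr * (nmo ** 2).sum(axis=1)
        s_num = np.convolve(num, win, mode="same")    # window smoothing
        s_den = np.convolve(den, win, mode="same")
        spec[:, iv] = np.divide(s_num, s_den, out=np.zeros(ns),
                                where=s_den > 0)
    return spec
```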


Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services / v.19 no.1 / pp.77-86 / 2018
  • For the past few years, KISTI has operated an online simulation execution platform, called EDISON, allowing users to conduct simulations of various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue in conducting such simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanging) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance record associated with each conducted simulation. The collected provenance records are used not only to detect duplicate simulation requests but also to search existing simulation results via the open-source search engine Elasticsearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search over, and reuse of, the stored simulation results. We implemented a prototype of the proposed framework using this search engine in conjunction with the online simulation execution platform, and evaluated it on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into the desired simulation software and, if any exist, receive the existing results for the same input parameter values. We therefore expect the proposed framework to eliminate duplicate resource consumption and significantly reduce execution time for requests identical to previously executed simulations.
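
One way to realize the duplicate-request check described above is a canonical fingerprint of each simulation request. The sketch below is a hypothetical illustration: the function name, record fields, and keying scheme are assumptions, not IceSheet's actual schema. The key would be looked up in the provenance store, e.g. an Elasticsearch index, before dispatching the job.

```python
import hashlib
import json
from pathlib import Path

def provenance_key(solver, version, params, input_files):
    """Canonical fingerprint of a simulation request: identical solver,
    version, parameters, and input-file contents always hash to the same
    key, so a cached result can be served instead of re-running the job."""
    payload = {
        "solver": solver,
        "version": version,
        "params": dict(sorted(params.items())),          # canonical ordering
        "inputs": sorted(hashlib.sha256(Path(f).read_bytes()).hexdigest()
                         for f in input_files),          # content, not filename
    }
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()

# Same parameters and files -> same key -> duplicate detected before execution.
```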