• Title/Summary/Keyword: Direct Differentiation (직접 미분)


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are "automatic differentiation" and "utilization of GPUs". Popular deep learning frameworks include Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of a deep learning framework is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. The partial derivative along each edge of a computational graph can then be obtained, and with these partial derivatives software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, one can implement and test any new deep learning model or search method one can think of. Regarding execution speed, there was no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning models, the availability of examples and references matters as well.
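The chain-rule mechanics described in the abstract can be made concrete with a toy example. The following is a minimal reverse-mode automatic differentiation sketch (illustrative only, not the internals of Theano, Tensorflow, or CNTK): each node of a computational graph stores the local partial derivative along each incoming edge, and a backward pass accumulates gradients via the chain rule.

```python
import math

class Node:
    """A node in a computational graph: stores a value, its parents,
    and the local partial derivative along each incoming edge."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)   # list of (parent_node, local_derivative)
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def exp(a):
    v = math.exp(a.value)
    return Node(v, [(a, v)])

def backward(out):
    """Accumulate d(out)/d(node) for every node: build a reverse
    topological order, then apply the chain rule edge by edge."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p, _ in n.parents:
                topo(p)
            order.append(n)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

# f(x, y) = exp(x * y) + x   ->   df/dx = y*exp(x*y) + 1,  df/dy = x*exp(x*y)
x, y = Node(2.0), Node(0.5)
f = add(exp(mul(x, y)), x)
backward(f)
```

With x = 2 and y = 0.5, the graph evaluates f = e + 2 and the backward pass yields x.grad = 0.5e + 1 and y.grad = 2e, exactly what the chain rule gives by hand.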

Development of a back analysis program for reasonable derivation of tunnel design parameters (합리적인 터널설계정수 산정을 위한 역해석 프로그램 개발)

  • Kim, Young-Joon;Lee, Yong-Joo
    • Journal of Korean Tunnelling and Underground Space Association / v.15 no.3 / pp.357-373 / 2013
  • In this paper, a back analysis program for analyzing the behavior of the tunnel-ground system and evaluating material properties and tunnel design parameters was developed. The program implements back analysis of underground structures by combining FLAC with an optimization algorithm as a direct method. In particular, the Rosenbrock method, which performs direct search without computing derivatives, was adopted as the back analysis algorithm among the optimization methods. The program was applied in the field to evaluate design parameters, and back analysis was carried out using field measurement results from five sites. In the course of the back analysis, nonlinear regression was used to identify the optimum function for the measured ground displacement; exponential and fractional functions were used for the regression, and the total displacement calculated by the optimum function was used as the back analysis input. As a result, the displacement recalculated through back analysis from the measured displacement of the structure showed an error of 4.5% compared with the measured data. Hence, the program developed in this study proved to be effectively applicable to tunnel analysis.
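As a sketch of the kind of derivative-free direct search this abstract relies on, the code below implements a simplified coordinate (compass) search in the spirit of Rosenbrock's method: probe along each axis, keep improvements, shrink the step when no axis improves. The misfit function and "measured" values are hypothetical stand-ins for a FLAC forward run, which is not reproduced here.

```python
def direct_search(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=1000):
    """Derivative-free coordinate search: probe +/- step along each axis,
    accept any improvement, shrink the step when no axis improves."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# hypothetical misfit between measured and computed displacements
measured = [1.0, 2.0]
def misfit(p):
    computed = [p[0] + p[1], p[0] * 3.0]   # stand-in for a forward analysis
    return sum((m - c) ** 2 for m, c in zip(measured, computed))

params, err = direct_search(misfit, [0.0, 0.0])
```

On this toy misfit the search converges to p = (2/3, 1/3), the parameters that reproduce the "measured" displacements exactly; the full Rosenbrock method additionally rotates the search directions after each cycle.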

Direct Design Sensitivity Analysis of Frequency Response Function Using Krylov Subspace Based Model Order Reduction (Krylov 부공간 모델차수축소법을 이용한 주파수응답함수의 직접 설계민감도 해석)

  • Han, Jeong-Sam
    • Journal of the Computational Structural Engineering Institute of Korea / v.23 no.2 / pp.153-163 / 2010
  • In this paper, a frequency response analysis using Krylov subspace-based model order reduction and its design sensitivity analysis with respect to design variables are presented. Since the frequency response and its design sensitivity are necessary for gradient-based optimization, high computational cost and resource problems can occur when the frequency response of a large finite element model is involved in the optimization iterations. In the suggested method, model order reduction of the finite element model is used to calculate both the frequency response and the frequency response sensitivity, so the speed of numerical computation for both can be maximized. As numerical examples, a semi-monocoque shell and an array-type 4×4 MEMS resonator are adopted to show the accuracy and efficiency of the suggested approach in calculating the FRF and its design sensitivity. The frequency response sensitivity computed through model reduction shows a great reduction in computation time and good agreement with that from the initial full finite element model.
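The reduction idea can be sketched on a generic first-order system: build a Krylov basis with the Arnoldi process around an expansion point, project the system matrices onto it, and compare the full and reduced frequency responses. The random system below is a stand-in for the paper's finite element models, and the sensitivity part is omitted; only the moment-matching FRF approximation is illustrated.

```python
import numpy as np

def arnoldi(M, r0, m):
    """Arnoldi with modified Gram-Schmidt: orthonormal basis of the
    Krylov subspace K_m(M, r0)."""
    n = r0.size
    V = np.zeros((n, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, m):
        w = M @ V[:, j - 1]
        for i in range(j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
n, m = 200, 20
# stable random system  E x' = A x + b u,  y = c^T x
A = -np.diag(rng.uniform(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
E = np.eye(n)
b = rng.standard_normal(n)
c = rng.standard_normal(n)

s0 = 0.0                                   # expansion point (zero frequency)
M = np.linalg.solve(s0 * E - A, E)         # (s0 E - A)^{-1} E
r0 = np.linalg.solve(s0 * E - A, b)
V = arnoldi(M, r0, m)

# Galerkin projection onto the Krylov subspace
Er, Ar, br, cr = V.T @ E @ V, V.T @ A @ V, V.T @ b, V.T @ c

def frf(s, E_, A_, b_, c_):
    """Frequency response H(s) = c^T (s E - A)^{-1} b."""
    return c_ @ np.linalg.solve(s * E_ - A_, b_)

s = 0.3j                                   # evaluation frequency near s0
full = frf(s, E, A, b, c)
red = frf(s, Er, Ar, br, cr)
```

Near the expansion point the order-20 reduced model reproduces the order-200 FRF to many digits, which is what makes repeated FRF (and FRF-sensitivity) evaluations cheap inside an optimization loop.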

Estimation of the Central Aortic Pulse using Transfer Function and Improvement of an Augmentation Point Detection Algorithm (전달함수를 이용한 대동맥 맥파 추정 및 증강점 검출 알고리즘 개선에 관한 연구)

  • Im, Jae-Joong
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.3 / pp.68-79 / 2008
  • The aortic AIx (augmentation index) has been used to measure aortic stiffness quantitatively and even to evaluate ventricular load. However, calculating the aortic AIx directly requires inserting a catheter into the subject's artery, which hampers its clinical use. To overcome this limitation, the aortic AIx has been calculated indirectly by estimating the aortic pressure wave from the peripheral arterial pulse using transfer functions. In this study, central aortic pressure waves measured with a Millar catheter and radial artery pulse waves measured with a tonometry pressure sensor were used to establish transfer functions for estimating central aortic pressure waves from radial artery pulse waves. An algorithm that detects the augmentation point for the calculation of AIx was also developed. Rather than detecting a distinctive point that appears after a specific time, the developed algorithm gradually increases the order of differentiation to detect the inflection point. The transfer functions were established using a 10th-order ARX model and verified for stability through residual analysis. The augmentation point detection algorithm was evaluated by comparing the augmentation points it found with known augmentation points synthesized under various conditions. In addition, the developed AIx algorithm proved to provide more accurate results than those of previous studies. The significance of the study is twofold: first, the results could provide a basis for measuring aortic stiffness using easily measurable radial artery pulse waves, and second, an extension of the study may enable early diagnosis of various vascular diseases.
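The idea of raising the order of differentiation until an inflection becomes a detectable sign change can be sketched as follows. The synthetic pulse (a systolic wave plus a smaller, delayed reflected wave), the sampling rate, and the detection rule are illustrative assumptions only, not the paper's data or algorithm.

```python
import numpy as np

def augmentation_point(signal, fs, max_order=4):
    """Differentiate with increasing order until a zero-crossing (sign
    change) appears after the systolic peak; return its index and the
    derivative order that revealed it.  Illustrative only."""
    peak = int(np.argmax(signal))
    d = np.asarray(signal, dtype=float)
    for order in range(1, max_order + 1):
        d = np.gradient(d, 1.0 / fs)       # next-higher derivative
        seg = d[peak + 1:]                  # search only after the peak
        crossings = np.nonzero(np.diff(np.sign(seg)) != 0)[0]
        if crossings.size:
            return peak + 1 + int(crossings[0]), order
    return None, None

# synthetic pulse: systolic wave plus a smaller, delayed reflected wave
fs = 500.0
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = (np.exp(-((t - 0.20) / 0.05) ** 2)
         + 0.4 * np.exp(-((t - 0.35) / 0.07) ** 2))

idx, order = augmentation_point(pulse, fs)
```

For this clearly separated reflected wave the first derivative already exposes the inflection; subtler reflections, as in real radial pulses, are what motivate continuing to higher orders.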

Optimization Method for the Design of LCD Back-Light Unit (LCD Back-Light Unit 설계를 위한 최적화 기법)

  • Seo Heekyung;Ryu Yangseon;Choi Joonsoo;Hahn Kwang-Soo;Kim Seongcheol
    • Journal of KIISE:Computer Systems and Theory / v.32 no.3 / pp.133-147 / 2005
  • Various ray-tracing methods are used to predict the radiation illumination, its uniformity, and the radiation performance of an LCD BLU (Back-Light Unit). The uniformity of radiation illumination is one of the most important design factors of a BLU and is usually controlled by the diffusive-ink pattern printed on the bottom of the BLU's light-guide panel. It is therefore desirable to produce an improved (ideally, optimal) ink pattern to achieve the best uniformity of radiation illumination. In this paper, we applied the Nelder-Mead simplex search, one of several direct search methods, to compute the optimal ink pattern. Direct search methods are widely used to optimize functions that are often highly nonlinear, unpredictably discontinuous, and nondifferentiable; the ink pattern controlling the uniformity of radiation illumination gives rise to one such function. We found that simplex search methods are well suited to computing the optimal diffusive-ink pattern: in extensive numerical testing, the simplex search method was reasonably efficient and reliable at this task. The results also suggest that optimization can improve the functionality of the simulation tools used to design LCD BLUs.
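For reference, a compact, generic Nelder-Mead simplex search can be sketched in a few dozen lines. The nondifferentiable stand-in objective below replaces the BLU ray-tracing model, which is not reproduced here; the variant uses reflection, expansion, inside contraction, and shrink steps.

```python
def nelder_mead(f, x0, step=0.5, tol=1e-12, max_iter=2000):
    """Compact Nelder-Mead simplex search (reflection, expansion,
    contraction, shrink) for possibly nondifferentiable functions."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                      # initial simplex around x0
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):               # try expanding past the reflection
            expd = [c + 2.0 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):      # plain reflection accepted
            simplex[-1] = refl
        else:                               # contract toward the centroid
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                           # shrink toward the best vertex
                simplex = [best] + [
                    [b + 0.5 * (p - b) for b, p in zip(best, q)]
                    for q in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0], f(simplex[0])

# nondifferentiable stand-in for the illumination-uniformity objective
def objective(x):
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

x_best, f_best = nelder_mead(objective, [0.0, 0.0])
```

Despite the kinks at the minimum (1, -2), the simplex contracts onto it, which is exactly the robustness to nondifferentiability that motivates direct search here.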

The Effect of Chloride Additives and pH on Direct Aqueous Carbonation of Cement Paste (시멘트 풀의 직접수성탄산화에서 Chloride 첨가제와 pH의 영향)

  • Lee, Jinhyun;Hwang, Jinyeon;Lee, Hyomin;Son, Byeongseo;Oh, Jiho
    • Journal of the Mineralogical Society of Korea / v.28 no.1 / pp.39-49 / 2015
  • Recently, carbon capture and storage (CCS) techniques have been studied globally. This study was conducted to use waste cement powder as an efficient raw material for mineral carbonation for CO2 sequestration. Direct aqueous carbonation experiments were conducted by injecting pure CO2 gas (99.9%) into a reactor containing 200 mL of reacting solution and pulverized cement paste (W:C = 6:4) with particle size less than 0.15 mm. The effects of two additives (NaCl, MgCl2) on carbonation were analyzed, and the characteristics of the carbonate minerals and the carbonation process according to the type of additive and pH change were carefully evaluated. The pH of the reacting solution gradually decreased as CO2 gas was injected. The Ca2+ ion concentration in the MgCl2-containing solution decreased continuously, whereas in the solution without MgCl2 the Ca2+ ion concentration increased again as the pH decreased, probably due to dissolution of newly formed carbonate minerals in the low-pH solution. XRD analysis indicates that calcite is the dominant carbonate mineral in the solution without MgCl2, whereas aragonite is dominant in the MgCl2-containing solution. In the absence of MgCl2, unstable vaterite formed in the early stage of the experiment transformed into well-crystallized calcite as the pH decreased. In the presence of MgCl2, the aragonite content increased and the calcite content decreased with decreasing pH.

An Empirical Model for Forecasting Alternaria Leaf Spot in Apple (사과 점무늬낙엽병(斑點落葉病)예찰을 위한 한 경험적 모델)

  • Kim, Choong-Hoe;Cho, Won-Dae;Kim, Seung-Chul
    • Korean journal of applied entomology / v.25 no.4 s.69 / pp.221-228 / 1986
  • An empirical model to predict the initial occurrence and subsequent progress of Alternaria leaf spot was constructed based on modified degree-day temperature and frequency of rainfall in three years of field experiments. Climatic factors were analyzed on a 10-day basis from April 20 to the end of August and used as variables for model construction. The cumulative degree portion (CDP) above 10 °C in the daily average temperature was used as a parameter to determine the relationship between temperature and initial disease occurrence. Around 160 CDP was needed to initiate disease incidence; this value was considered the temperature threshold. After 160 CDP was reached, the time of initial occurrence was determined by the frequency of rainfall: at least four rainfalls had to accumulate after passing the temperature threshold for the disease to first occur. Disease progress after initial incidence generally followed the pattern of rainfall frequency accumulated in those periods. The apparent infection rate (r) in the general differential equation dx/dt = rx(1-x) for individual epidemics, where x is the disease proportion and t is time, was a linear function of the accumulation rate of rainfall frequency (Rc) and could be estimated directly from the equation r = 1.06Rc - 0.11 ($R^2=0.993$). Disease severity (x) after time t could then be predicted using the exponential equation $[x/(1-x)]=[x_0/(1-x_0)]e^{(b_0+b_1R_c)t}$ derived from the differential equation, where $x_0$ is the initial disease and $b_0$ and $b_1$ are constants. There was a significant linear relationship between disease progress and the cumulative number of air-borne conidia of Alternaria mali. However, when the cumulative number of air-borne conidia was used as an independent variable to predict disease severity, the accuracy of prediction was poor ($R^2=0.3328$).
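The closed-form logistic solution quoted in the abstract can be checked numerically against the differential equation it is derived from. The values of Rc, x0, and the integration horizon below are illustrative choices, not the paper's data; only the constants b0 = -0.11 and b1 = 1.06 come from the fitted equation r = 1.06Rc - 0.11.

```python
import math

b0, b1 = -0.11, 1.06     # fitted constants: r = b0 + b1 * Rc
Rc = 0.5                  # accumulation rate of rainfall frequency (assumed)
x0 = 0.01                 # initial disease proportion (assumed)

def severity(t):
    """Closed-form logistic solution:
    x/(1-x) = x0/(1-x0) * exp((b0 + b1*Rc) * t)."""
    odds = x0 / (1.0 - x0) * math.exp((b0 + b1 * Rc) * t)
    return odds / (1.0 + odds)

# cross-check: forward-Euler integration of dx/dt = r x (1 - x)
r = b0 + b1 * Rc
x, dt = x0, 0.001
for _ in range(int(10.0 / dt)):     # integrate to t = 10
    x += dt * r * x * (1.0 - x)
```

At t = 10 the closed form and the integrated trajectory agree to a few parts in a thousand, confirming that the exponential expression is the solution of dx/dt = rx(1-x) with r = b0 + b1*Rc.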


Pyrolysis and Combustion Characteristics of Pinus densiflora for the Protection of Forest Resources (산림자원 보호를 위한 적송의 열분해 및 연소 특성 연구)

  • Park, Jin-Mo;Kim, Seung-Soo
    • Applied Chemistry for Engineering / v.21 no.6 / pp.664-669 / 2010
  • The domestic forest area is 6,370,304 ha, covering about 70% of the whole country, and Gangwon-do in particular has a remarkably larger forest area than other provinces. The country's thick forests are a fundamental component of the natural environment and have invaluable worth to human beings, including scientific research and educational value. However, outbreaks of forest fires since the 1990s have caused loss of trees, destruction of the natural environment and ecology, and economic damage, and their scale has grown. The fires grow large because the forests consist mainly of needle-leaf trees, broad-leaf trees, fallen leaves, and herbaceous plants, which provide a direct cause for forest fire spread. Nevertheless, little research on the combustion and pyrolysis characteristics of these fuels has been done domestically or abroad. In this study, the combustion and pyrolysis of Pinus densiflora, a typical needle-leaf tree, were investigated using TGA. Pinus densiflora began to ignite at around 162 °C, and pyrolysis occurred at around 197 °C. A differential method was applied to calculate the activation energy and frequency factor as functions of conversion. The activation energy for pyrolysis increased from 79 kJ/mol to 487 kJ/mol with increasing conversion, with an average of 195 kJ/mol, while the activation energy for combustion decreased from 148 kJ/mol to 133 kJ/mol.
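A differential (Arrhenius-type) evaluation of activation energy can be illustrated with synthetic rate data: since ln k = ln A - Ea/(RT), a straight-line fit of ln k against 1/T recovers Ea from the slope. The rate constants below are synthetic, generated from an assumed Ea close to the paper's reported pyrolysis average; this is not the paper's TGA data.

```python
import math

R = 8.314                 # gas constant, J/(mol K)

# synthetic rate data obeying k = A * exp(-Ea / (R T))
Ea_true = 195e3           # J/mol, assumed (near the reported average)
A = 1e12                  # frequency factor, assumed
temps = [500.0 + 10.0 * i for i in range(8)]               # K
rates = [A * math.exp(-Ea_true / (R * T)) for T in temps]

# differential method: ln k = ln A - Ea/(R T), so a least-squares
# line of ln k against 1/T has slope -Ea/R
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in rates]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys))
         / sum((xi - xbar) ** 2 for xi in xs))
Ea_est = -slope * R       # recovered activation energy, J/mol
```

Because the synthetic data are exactly Arrhenius, the fit recovers the assumed 195 kJ/mol; with real TGA data the same fit is repeated at each conversion level to obtain Ea as a function of conversion.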

Screening for Cold-Active Protease-Producing Bacteria from the Culture Collection of Polar Microorganisms and Characterization of Proteolytic Activities (남북극 유래 저온성 박테리아 Culture Collection에서 저온활성 프로테아제 생산균주의 스크리닝과 효소 특성)

  • Kim, Doc-Kyu;Park, Ha-Ju;Lee, Yung-Mi;Hong, Soon-Gyu;Lee, Hong-Kum;Yim, Joung-Han
    • Korean Journal of Microbiology / v.46 no.1 / pp.73-79 / 2010
  • The Korea Polar Research Institute (KOPRI) has assembled a culture collection of cold-adapted bacterial strains from both the Arctic and Antarctic. To identify excellent protease-producers among the proteolytic bacterial collection (874 strains), 78 strains were selected in advance according to their relative activities and were subsequently re-examined for their extracellular protease activity on 0.1× ZoBell plates supplemented with 1% skim milk at various temperatures. This rapid and direct screening method permitted the selection of a small group of 15 cold-adapted bacterial strains, belonging to either the genus Pseudoalteromonas (13 strains) or Flavobacterium (2 strains), that showed proteolytic activities at temperatures ranging between 5-15 °C. The cold-active proteases from these strains were classified into four categories (serine protease, aspartic protease, cysteine protease, and metalloprotease) according to the extent of enzymatic inhibition by a class-specific protease inhibitor. Since highly active and/or cold-adapted proteases have the potential for industrial or commercial enzyme development, the protease-producing bacteria selected in this work will be studied as a valuable natural source of new proteases. Our results also highlight the relevance of the Antarctic for the isolation of protease-producing bacteria active at low temperatures.

Shape Design Optimization of Crack Propagation Problems Using Meshfree Methods (무요소법을 이용한 균열진전 문제의 형상 최적설계)

  • Kim, Jae-Hyun;Ha, Seung-Hyun;Cho, Seonho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.27 no.5
    • /
    • pp.337-343
    • /
    • 2014
  • This paper presents a continuum-based shape design sensitivity analysis (DSA) method for crack propagation problems using a reproducing kernel method (RKM), which alleviates the remeshing required in finite element analysis (FEA) and provides higher-order shape functions by increasing the continuity of the kernel functions. Linear elasticity is considered to obtain the stress field around the crack tip required for the evaluation of the J-integral. The sensitivities of the displacement field and the stress intensity factor (SIF) with respect to shape design variables are derived using a material derivative approach. For efficient computation of the design sensitivity, an adjoint variable method is employed rather than the direct differentiation method. In numerical examples, the meshfree and DSA results show excellent agreement with finite difference results. The DSA is further extended to shape optimization of crack propagation problems to control the propagation path.
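The adjoint-versus-direct choice mentioned above can be sketched on a generic parametrized linear system. The 2×2 "stiffness" matrix, load, and response functional below are hypothetical, not the meshfree crack model: both routes yield the same sensitivity of the response J = c^T u, which a finite-difference check confirms.

```python
import numpy as np

# hypothetical parametrized system  K(p) u = f,  response  J = c^T u
def K(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0,     2.0]])

dK_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])          # dK/dp for this parametrization
f = np.array([1.0, 1.0])
c = np.array([1.0, 2.0])
p = 0.3

u = np.linalg.solve(K(p), f)

# direct differentiation: solve K du/dp = -dK/dp u, then dJ/dp = c^T du/dp
du_dp = np.linalg.solve(K(p), -dK_dp @ u)
dJ_direct = c @ du_dp

# adjoint method: solve K^T lam = c once, then dJ/dp = -lam^T dK/dp u
lam = np.linalg.solve(K(p).T, c)
dJ_adjoint = -lam @ dK_dp @ u

# finite-difference check of both sensitivities
h = 1e-6
u_h = np.linalg.solve(K(p + h), f)
dJ_fd = (c @ u_h - c @ u) / h
```

The direct route needs one extra solve per design variable, while the adjoint route needs one extra solve per response functional; with many shape design variables and few functionals, the adjoint method is the cheaper choice, which is the trade-off the paper exploits.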