• Title/Summary/Keyword: Optimization of Process Parameters


A Study on Optimization of Process Parameters in Zone Melting Recrystallization Using Tungsten Halogen Lamp (텅스텐 할로겐 램프를 사용하는 ZMR공정의 매개변수 최적화에 관한 연구)

  • Choi, Jin-Ho;Song, Ho-Jun;Lee, Ho-Jun;Kim, Choong-Ki
    • Korean Journal of Materials Research
    • /
    • v.2 no.3
    • /
    • pp.180-190
    • /
    • 1992
  • Some solutions to several major problems in ZMR, such as agglomeration of polysilicon, slips, and local substrate melting, are described. Experiments are performed with varying polysilicon thickness and capping oxide thickness. The agglomeration can be eliminated when nitrogen is introduced at the capping-oxide-to-polysilicon and polysilicon-to-buried-oxide interfaces by annealing the SOI samples at $1100^{\circ}C$ in an $NH_3$ ambient for three hours. The slips and local substrate melting are removed when the back surface of the silicon substrate is sandblasted to a back-surface roughness of about $20{\mu}m$. The subboundary spacing increases with increasing polysilicon thickness, and the uniformity of the recrystallized SOI film thickness improves with increasing capping oxide thickness, improving the quality of the recrystallized SOI film. When the polysilicon thickness is about $1.0{\mu}m$ and the capping oxide thickness is $2.5{\mu}m$, the thickness variation of the recrystallized SOI film is about ${\pm}200{\AA}$ and the subboundary spacing is about $70-120{\mu}m$.


Monitoring of the Optimum Conditions for the Fermentation of Onion Wine (양파주의 최적 발효조건 모니터링)

  • Choi, In-Hag;Jo, Deokjo;Lee, Gee-Dong
    • Food Science and Preservation
    • /
    • v.20 no.2
    • /
    • pp.257-264
    • /
    • 2013
  • Central composite design along with response surface methodology (RSM) was applied to improve the fermentation process in onion (Allium cepa) wine production. The effects of different fermentation parameters (time, temperature, and initial sugar content) were found to be significant with respect to the physicochemical and sensory properties of wine. The maximum score for the alcoholic content was obtained at $29.27^{\circ}C$ fermentation temperature, 103.43 h fermentation time, and $27.52^{\circ}Brix$ initial sugar content. The maximum score for overall palatability was obtained at $39.27^{\circ}C$ fermentation temperature, 57.28 h fermentation time, and $22.14^{\circ}Brix$ initial sugar content. The coefficients of determination ($R^2$) were 0.9620 and 0.9060 for alcoholic content and overall palatability, respectively. The ranges of the optimum fermentation conditions ($28{\sim}32^{\circ}C$, 80~90 hr, and $20{\sim}25^{\circ}Brix$) were obtained by superimposing the response surfaces with regard to the alcoholic content and overall palatability of onion wine.
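The response-surface step described above can be sketched numerically. The data, factors, and surface below are hypothetical stand-ins (the paper's three fermentation factors and measurements are not reproduced); the sketch only shows how a second-order RSM model is fitted by least squares and its stationary point located:

```python
import numpy as np

# Hypothetical coded data; assumed surface y = 10 - (x1-0.5)^2 - 2*(x2+0.3)^2,
# whose optimum sits at (0.5, -0.3).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 10 - (x1 - 0.5) ** 2 - 2 * (x2 + 0.3) ** 2

# Design matrix for the full second-order model used in RSM.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve grad = 0 for the fitted quadratic surface.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
opt = np.linalg.solve(H, g)
print(opt)  # ≈ [0.5, -0.3]
```

Superimposing several such fitted surfaces (as the authors do for alcohol content and palatability) then amounts to intersecting the regions where each fitted response exceeds its target.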

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method to preprocess audio data, using the Niko's MIDI Pack sound-source files as a data set, and to generate music using a Bi-LSTM. Based on the generated root note, the hidden layers are composed of multiple layers to create a new note suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data input from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that applies note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The learning results generate sound that follows the development of a musical scale, distinct from noise, and we aim to contribute to generating harmonically stable music.
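The decoder-side attention mentioned above can be illustrated with a minimal dot-product attention sketch in NumPy. The dimensions and data are hypothetical, and the paper's full multi-channel Bi-LSTM model is not reproduced; the sketch only shows how encoder states are weighted into a context vector:

```python
import numpy as np

# Illustrative sketch (not the paper's code): dot-product attention over
# encoder states, as applied at the decoder of a sequence generator.
def attention(query, encoder_states):
    """query: (d,); encoder_states: (T, d). Returns (context, weights)."""
    scores = encoder_states @ query                  # alignment scores, shape (T,)
    scores -= scores.max()                           # stabilize the softmax
    weights = np.exp(scores) / np.exp(scores).sum()  # attention distribution
    context = weights @ encoder_states               # weighted sum of states
    return context, weights

rng = np.random.default_rng(1)
enc = rng.normal(size=(8, 16))  # 8 time steps of hypothetical Bi-LSTM output
q = rng.normal(size=16)         # decoder state at the current step
ctx, w = attention(q, enc)
print(w.sum())                  # weights form a probability distribution
```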

Mission Analysis Involving Hall Thruster for On-Orbit Servicing (궤도상 유지보수를 위한 홀추력기 임무해석)

  • Kwon, Kybeom
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.48 no.10
    • /
    • pp.791-799
    • /
    • 2020
  • Launched in October 2019, Northrop Grumman's MEV-1 was the world's first unmanned mission demonstrating the practical feasibility of on-orbit servicing. Although the concept of on-orbit servicing was proposed several decades ago, it has since developed into various mission concepts providing services such as orbit change, station keeping, propellant and equipment supply, upgrade, repair, on-orbit assembly and production, and space debris removal. The historic success of MEV-1 is expected to expand the market for on-orbit servicing among government agencies and commercial sectors worldwide. On-orbit servicing essentially requires a highly propellant-efficient electric propulsion system due to the nature of the mission. In this study, a space mission analysis is conducted for a simple on-orbit servicing mission involving a Hall thruster: a life-extension mission for geostationary satellites. To analyze the mission, a design space exploration over various combinations of Hall thruster design variables is performed. Values of the Hall thruster design variables and operational parameters suitable for the mission are proposed through design space analysis and optimization, and the mission performance is derived. In addition, directions for further improving the current on-orbit mission analysis process and space mission analyses involving Hall thrusters are reviewed.
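The propellant-efficiency argument can be made concrete with the rocket equation. The satellite mass, delta-v budget, and specific impulse values below are illustrative assumptions, not the paper's design-space results:

```python
import math

# Back-of-the-envelope sketch: propellant needed for a GEO life-extension
# mission via the Tsiolkovsky rocket equation, contrasting an electric
# (Hall) thruster with a chemical thruster. All numbers are assumed.
def propellant_mass(m0, delta_v, isp, g0=9.80665):
    """Propellant mass (kg) for initial mass m0 (kg), delta_v (m/s), Isp (s)."""
    return m0 * (1 - math.exp(-delta_v / (isp * g0)))

m0 = 2000.0      # kg, hypothetical GEO satellite
dv = 50.0 * 5    # ~50 m/s per year of north-south station keeping, 5 extra years
hall = propellant_mass(m0, dv, isp=1600)  # Isp typical of Hall thrusters
chem = propellant_mass(m0, dv, isp=300)   # Isp typical of chemical propulsion
print(round(hall, 1), round(chem, 1))     # Hall needs roughly 5x less propellant
```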

Design of Experiment and Analysis Method for the Integrated Logistics System Using Orthogonal Array (직교배열을 이용한 통합물류시스템의 실험 설계 및 분석방법)

  • Park, Youl-Kee;Um, In-Sup;Lee, Hong-Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.12
    • /
    • pp.5622-5632
    • /
    • 2011
  • This paper presents the simulation design and analysis of an Integrated Logistics System (ILS) operated using AGVs (Automated Guided Vehicles). To maximize the operational performance of an ILS with AGVs, many parameters should be considered, such as the number, velocity, and dispatching rule of the AGVs, part types, scheduling, and buffer sizes. We established the design of experiments using an orthogonal array in order to consider, among various critical factors, (1) maximizing the throughput; (2) maximizing the vehicle utilization; (3) minimizing the congestion; and (4) maximizing the Automated Storage and Retrieval System (AS/RS) utilization. Furthermore, we performed optimization using simulation-based analysis and Evolution Strategy (ES). As a result, the orthogonal array, which requires far fewer experiments than ES, saved significant time while producing the same outcome, as confirmed by a validation test comparing the results of the two methods. Therefore, this approach ensures confidence and provides a better process for quick analysis, yielding accurate experimental outcomes from a small number of experiments.
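An orthogonal array of the kind used above can be illustrated with the textbook L9(3^4) design; this is the standard array, not the authors' specific factor assignment:

```python
import itertools

# The standard L9(3^4) orthogonal array: 9 runs covering 4 three-level
# factors (e.g. AGV count, velocity, dispatching rule, buffer size)
# instead of the full factorial 3^4 = 81 runs.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Orthogonality check: every pair of columns contains each of the
# 3 x 3 = 9 level combinations exactly once.
for c1, c2 in itertools.combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert len(pairs) == 9
print("orthogonal:", True)
```

Because the array is balanced, main effects of each factor can be estimated from only 9 simulation runs, which is the source of the time savings reported above.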

TIR Holographic lithography using Surface Relief Hologram Mask (표면 부조 홀로그램 마스크를 이용한 내부전반사 홀로그래픽 노광기술)

  • Park, Woo-Jae;Lee, Joon-Sub;Song, Seok-Ho;Lee, Sung-Jin;Kim, Tae-Hyun
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.3
    • /
    • pp.175-181
    • /
    • 2009
  • Holographic lithography is one of the potential technologies for next-generation lithography, as it can print large areas (6") as well as very fine patterns ($0.35{\mu}m$). Usually, photolithography has been developed for two target purposes. One was LCD applications, which require large-area (over 6") and micro-pattern (over $1.5{\mu}m$) exposure. The other was semiconductor applications, which require small-area (1.5") and nano-pattern (under $0.2{\mu}m$) exposure. Holographic lithography, however, can print fine patterns from $0.35{\mu}m$ to $1.5{\mu}m$ while keeping the exposure area within 6". This is a great advantage for realizing high-speed fine-pattern photolithography, because holographic lithography uses holographic optics instead of projection optics. A hologram mask is the key component of holographic optics and performs the same function as projection optics. In this paper, surface-relief TIR hologram mask technology is introduced, which enables more robust hologram masks than the previously reported masks formed in photopolymer recording materials. We describe the important parameters in the fabrication process and their optimization, and we evaluate the patterns printed from the surface-relief TIR hologram masks.

Development of Water Quality Management System in Daecheong Reservoir Using Geographic Information System (GIS를 이용한 저수지의 수질관리시스템 구축)

  • 한건연;백창현
    • Spatial Information Research
    • /
    • v.12 no.1
    • /
    • pp.13-27
    • /
    • 2004
  • The current industrial development and population increase in the Daecheong Reservoir basin have produced a rapid increase in wastewater discharge. This has resulted in problems of water quality control and management. Although many efforts have been made, the reservoir's water quality has not significantly improved. In this sense, the development of a water quality management system is required to improve reservoir water quality. The goal of this study is to design a GIS-based water quality management system for scientific water quality control and management in the Daecheong Reservoir. For general water quality analysis, the WASP5 model was applied to the Daecheong Reservoir. A sensitivity analysis was made to determine significant parameters, and an optimization was made to estimate their optimal values. Calibration and verification were performed using observed water quality data for the Daecheong Reservoir. A water quality management system for the Daecheong Reservoir was built by connecting the WASP5 model to ArcView. It provides a Windows-based Graphical User Interface (GUI) to implement all operations related to water quality analysis. The proposed water quality management system is capable of on-line data processing, including water quality simulation, and has a post-processor for reasonable visualization of various outputs. The modeling system in this study will be an efficient NGIS (National Geographic Information System) tool for planning reservoir water quality management.


Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common to fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and will be represented by the memory rows. The length of a word of memory is defined by: Length = nfm (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of each fuzzy set on each memory row; the fuzzy-set word dimension is 8*5 bits, so the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
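The word-length and memory-size arithmetic in the abstract can be checked directly with its own figures (a 128-element universe, 8 fuzzy sets, 32 truth levels, and at most 3 non-null membership values per element):

```python
# Memory sizing per the abstract's scheme: Length = nfm * (dm(m) + dm(fm)).
universe = 128  # elements in the universe of discourse U
n_sets = 8      # fuzzy sets in the term set
nfm = 3         # max non-null membership values per element of U

dm_m = 5        # bits per membership value (2^5 = 32 truth levels)
dm_fm = 3       # bits to index a membership function (2^3 = 8 sets)

# Compressed scheme: store only the (at most nfm) non-null values per row.
length = nfm * (dm_m + dm_fm)    # word length per memory row
compressed = universe * length   # total bits

# Full vectorial scheme: store one value per fuzzy set on every row.
vectorial = universe * n_sets * dm_m

print(length, compressed, vectorial)  # 24, 3072 (=128*24), 5120 (=128*40)
```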


Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and show many noisy terms. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1487 sample observations. We used 1187 days to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the statistical metric MSE shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting. The polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH models in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models are needed in the search for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs.
The IVTS trading performance is not fully realistic, since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
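The GARCH(1,1) variance recursion that both estimation methods target can be sketched as follows; the parameter values and simulated returns are illustrative assumptions, not the paper's fitted estimates:

```python
import numpy as np

# Minimal GARCH(1,1) conditional-variance recursion (standard model form):
#   sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
def garch_variance(returns, omega, alpha, beta):
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(7)
r = rng.normal(0, 0.01, 500)  # hypothetical daily returns
s2 = garch_variance(r, omega=1e-6, alpha=0.05, beta=0.90)
print(s2.shape, (s2 > 0).all())
```

MLE fits (omega, alpha, beta) by maximizing the likelihood under this recursion; the SVR approach studied above instead learns the mapping to next-period variance from data.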

Process Optimization of Dextran Production by Leuconostoc sp. strain YSK Isolated from Fermented Kimchi (김치로부터 분리된 Leuconostoc sp. strain YSK 균주에 의한 덱스트란 생산 조건의 최적화)

  • Hwang, Seung-Kyun;Hong, Jun-Taek;Jung, Kyung-Hwan;Chang, Byung-Chul;Hwang, Kyung-Suk;Shin, Jung-Hee;Yim, Sung-Paal;Yoo, Sun-Kyun
    • Journal of Life Science
    • /
    • v.18 no.10
    • /
    • pp.1377-1383
    • /
    • 2008
  • A bacterium producing non- or partially digestible dextran was isolated from kimchi broth by an enrichment culture technique. The bacterium was tentatively identified as Leuconostoc sp. strain YSK. We established a response surface methodology (Box-Behnken design) to optimize the principal parameters, namely culture pH, temperature, and yeast extract concentration, for maximizing dextran production. The ranges of the parameters were determined based on prior screening work done in our laboratory and were accordingly chosen as 5.5, 6.5, and 7.5 for pH; 25, 30, and $35^{\circ}C$ for temperature; and 1, 5, and 9 g/l for yeast extract. The initial concentration of sucrose was 100 g/l. The mineral medium consisted of 3.0 g $KH_2PO_4$, 0.01 g $FeSO_4{\cdot}H_2O$, 0.01 g $MnSO_4{\cdot}4H_2O$, 0.2 g $MgSO_4{\cdot}7H_2O$, 0.01 g NaCl, and 0.05 g $CaCO_3$ per liter of deionized water. The optimum values were around pH 7.0, 27 to $28^{\circ}C$, and 6 to 7 g/l yeast extract. The best dextran yield was 60% (g dextran/g sucrose), and the best dextran productivity was 0.8 g/l-h.
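A three-factor Box-Behnken design of the kind used above can be sketched for the quoted factor levels; the run list below is the standard construction (12 edge midpoints plus a center run), while replicate counts and run order are the authors' choices and are not shown:

```python
import itertools

# Factor levels quoted in the abstract (low, center, high).
levels = {
    "pH": (5.5, 6.5, 7.5),
    "temp_C": (25, 30, 35),
    "yeast_g_per_l": (1, 5, 9),
}

def box_behnken(factors):
    """Pairs of factors at low/high with the rest at center, plus a center run."""
    names = list(factors)
    center = {f: factors[f][1] for f in names}
    runs = []
    for f1, f2 in itertools.combinations(names, 2):
        for a, b in itertools.product((0, 2), repeat=2):  # low/high indices
            run = dict(center)
            run[f1] = factors[f1][a]
            run[f2] = factors[f2][b]
            runs.append(run)
    runs.append(center)
    return runs

design = box_behnken(levels)
print(len(design))  # 3 factor pairs x 4 corners + 1 center = 13 runs
```

Fitting a second-order model to responses measured at these runs then yields the optimum region (around pH 7.0, 27-28 °C, 6-7 g/l yeast extract) reported above.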