• Title/Summary/Keyword: performance-based optimization

Optimization of Electro-Optical Properties of Acrylate-based Polymer-Dispersed Liquid Crystals for use in Transparent Conductive ZITO/Ag/ZITO Multilayer Films (투명 전도성 ZITO/Ag/ZITO 다층막 필름 적용을 위한 아크릴레이트 기반 고분자분산액정의 전기광학적 특성 최적화)

  • Cho, Jung-Dae;Kim, Yang-Bae;Heo, Gi-Seok;Kim, Eun-Mi;Hong, Jin-Who
    • Applied Chemistry for Engineering
    • /
    • v.31 no.3
    • /
    • pp.291-298
    • /
    • 2020
  • ZITO/Ag/ZITO multilayer transparent electrodes were prepared on glass substrates at room temperature using RF/DC magnetron sputtering. Transparent conductive films with a sheet resistance of 9.4 Ω/sq and a transmittance of 83.2% at 550 nm were obtained for the multilayer structure comprising ZITO/Ag/ZITO (100/8/42 nm). This combination of sheet resistance and transmittance makes the ZITO/Ag/ZITO multilayer films highly suitable for polymer-dispersed liquid crystal (PDLC)-based smart windows, since they effectively block infrared (heat) rays and can thereby act as energy-saving smart glass. The effects of PDLC layer thickness and ultraviolet (UV) light intensity on the electro-optical properties, photopolymerization kinetics, and morphologies of difunctional urethane acrylate-based PDLC systems were investigated using the new transparent conducting electrodes. A PDLC cell photo-cured at a UV intensity of 2.0 mW/cm² with a 15 ㎛-thick PDLC layer showed outstanding off-state opacity, good on-state transmittance, and a favorable driving voltage. The PDLC-based smart window optimized in this study also formed liquid crystal droplets with a favorable microstructure, with an average size of 2~5 ㎛ for scattering light efficiently, which could contribute to its superior final performance.

A Tree-Based Routing Algorithm Considering An Optimization for Efficient Link-Cost Estimation in Military WSN Environments (무선 센서 네트워크에서 링크 비용 최적화를 고려한 감시·정찰 환경의 트리 기반 라우팅 알고리즘에 대한 연구)

  • Kong, Joon-Ik;Lee, Jae-Ho;Kang, Ji-Heon;Eom, Doo-Seop
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8B
    • /
    • pp.637-646
    • /
    • 2012
  • Recently, wireless sensor networks (WSNs) have been used in many applications. When sensor nodes are deployed in areas that humans have difficulty entering, the nodes form the network topology themselves, and users can obtain environmental information through them. Because of limited battery capacity, the energy consumption of sensor nodes in WSNs must be managed efficiently. In specific applications such as intrusion detection, intruders tend to appear unexpectedly, so an energy-efficient algorithm is strongly required. In this paper, we propose a tree-based routing algorithm for such intrusion detection applications. To decrease traffic density, the proposed algorithm provides an enhanced method that considers link cost and load balance, establishing efficient links among the sensor nodes while (re-)defining parent and child nodes. Furthermore, efficient routing table management improves energy efficiency under the nodes' limited power source. To reflect a realistic military environment, we design three scenarios according to the intruder's moving direction: (1) the intruder passes along a path where sensor nodes have already been deployed; (2) the intruders cross the path; (3) the intruders move as in scenario (1) but deviate from the middle of the path. Through simulation, we obtain and analyze performance results in terms of latency and energy consumption, and we validate that the proposed algorithm adapts well to such application environments.
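
The link-cost idea in this abstract can be illustrated with a short sketch. This is not the paper's algorithm: the Node fields, the distance-squared energy cost, and the ALPHA/BETA weights are assumptions chosen only to show how link cost and load balance can be combined when (re-)defining parent nodes in a tree topology.

```python
import math

ALPHA, BETA = 0.7, 0.3  # assumed weights: link quality vs. load balance

class Node:
    def __init__(self, node_id, x, y, energy=1.0, depth=math.inf):
        self.id, self.x, self.y = node_id, x, y
        self.energy = energy      # residual battery, 0..1
        self.depth = depth        # hop count to the sink (sink: 0)
        self.children = []
        self.parent = None

def link_cost(child, cand):
    """Lower is better: distance-squared radio cost, inflated when the
    candidate's battery is low, plus a load-balancing penalty."""
    dist2 = (child.x - cand.x) ** 2 + (child.y - cand.y) ** 2
    return ALPHA * dist2 / max(cand.energy, 1e-6) + BETA * len(cand.children)

def attach(child, neighbors):
    """(Re-)define the parent as the lowest-cost neighbor nearer the sink."""
    upstream = [n for n in neighbors if n.depth < child.depth]
    if not upstream:
        return None
    best = min(upstream, key=lambda n: link_cost(child, n))
    if child.parent is not None:
        child.parent.children.remove(child)
    child.parent = best
    best.children.append(child)
    child.depth = best.depth + 1
    return best

# Tiny example: a sink and two nodes; b routes through a to balance load.
sink = Node("sink", 0, 0, depth=0)
a, b = Node("a", 1, 0), Node("b", 1, 1)
attach(a, [sink]); attach(b, [sink, a])
```

Re-running attach() as residual energy and child counts change would rebuild the tree along cheaper, less congested links, which is the intuition behind the load-balanced re-definition of parent and child nodes described above.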

Prototype based Classification by Generating Multidimensional Spheres per Class Area (클래스 영역의 다차원 구 생성에 의한 프로토타입 기반 분류)

  • Shim, Seyong;Hwang, Doosung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.2
    • /
    • pp.21-28
    • /
    • 2015
  • In this paper, we propose prototype-based classification learning using the nearest-neighbor rule. The nearest-neighbor rule is applied to segment the class regions of the training data into spheres, each containing data from a single class. Prototypes are the centers of the spheres, and each radius is computed as the midpoint between the distance to the farthest same-class point and the distance to the nearest other-class point. We then transform the prototype selection problem into a set covering problem in order to determine the smallest set of prototypes that covers all the training data. The proposed prototype selection method is a greedy algorithm applied to the training data class by class; its complexity is low, and it is well suited to parallel implementation. Prototype-based classification then keeps only the set of prototypes and predicts the class of test data by the nearest-neighbor rule. In experiments, the generalization performance of our prototype classifier is superior to that of the nearest-neighbor classifier, a Bayes classifier, and another prototype classifier.
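
The sphere construction and greedy set cover described above can be sketched briefly. This is one reading of the abstract, not the authors' code: the midpoint radius rule is applied to same-class points lying closer than the nearest other-class point, which is a plausible interpretation rather than the paper's exact definition.

```python
import numpy as np

def build_spheres(X, y, cls):
    """One candidate sphere per training point of class `cls`. Radius: midpoint
    of the farthest same-class distance (among points closer than the nearest
    other-class point) and that nearest other-class distance."""
    same, other = X[y == cls], X[y != cls]
    spheres = []
    for c in same:
        d_other = np.linalg.norm(other - c, axis=1).min()
        d_same = np.linalg.norm(same - c, axis=1)
        r = (d_same[d_same < d_other].max() + d_other) / 2.0
        covered = frozenset(np.flatnonzero(d_same <= r))
        spheres.append((c, r, covered))
    return spheres

def greedy_prototypes(spheres, n_class_points):
    """Greedy set cover: repeatedly take the sphere covering the most
    still-uncovered class points until every point is covered."""
    uncovered, chosen = set(range(n_class_points)), []
    while uncovered:
        center, r, cov = max(spheres, key=lambda s: len(s[2] & uncovered))
        chosen.append((center, r))
        uncovered -= cov
    return chosen
```

With X, y as NumPy arrays, build_spheres(X, y, cls) followed by greedy_prototypes(spheres, (y == cls).sum()) yields the prototype set for one class; classification then applies the nearest-neighbor rule over the prototypes of all classes.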

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk;Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.1-16
    • /
    • 2024
  • Over recent years, Six Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen from the difficulty of analyzing the large-scale data generated by smart factories and from the methodology's rigid, formalized application. To address these limitations, a big data-based Six Sigma approach has been developed that integrates the strengths of Six Sigma and big data analysis: statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of big data-based Six Sigma on manufacturing processes and management performance has not been adequately verified, leading to limited reliability and underutilization in practice. This study investigates the efficiency impact of DX SS, a big data-based Six Sigma methodology, on manufacturing processes and identifies key success policies for its effective introduction and implementation in enterprises. The study highlights the importance of involving all executives and employees and of researching key success policies, as demonstrated by cases where implementation failed due to incorrect policies. This research aims to help manufacturing companies achieve successful outcomes by actively adopting and utilizing the methodologies presented.

Recent Progress in Air Conditioning and Refrigeration Research - A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2002 and 2003 - (공기조화, 냉동 분야의 최근 연구 동향 -2002년 및 2003년 학회지 논문에 대한 종합적 고찰 -)

  • Chung Kwang-Seop;Kim Min Soo;Kim Yongchan;Park Kyoung Kuhn;Park Byung-Yoon;Cho Keumnam
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.16 no.12
    • /
    • pp.1234-1268
    • /
    • 2004
  • A review of the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2002 and 2003 has been carried out, focusing on the current status of research in heating, cooling, air conditioning, ventilation, sanitation, and building environment/design. The conclusions are as follows. (1) Most fundamental studies on fluid flow were related to heat transport in diverse facilities. Drop formation and rivulet flow on solid surfaces were interesting topics related to condensation augmentation. Research on the microenvironment, considering flow, heat transfer, and humidity, was also of interest for promoting a comfortable living environment and can be extended to biological aspects. The development of high-performance, low-noise fans and blowers remained a continuing research topic. Well-developed CFD technologies were widely applied to the analysis and design of various facilities and their systems. (2) Heat transfer characteristics of enhanced finned-tube heat exchangers and heat sinks were extensively investigated. Experimental studies on boiling heat transfer, vortex generators, fluidized-bed heat exchangers, and frosting and defrosting characteristics were also conducted. In addition, numerical simulations of various heat exchangers were performed and reported to show their heat transfer characteristics and performance. (3) A review of the recent studies shows that performance analyses of heat pumps have been made through various simulations and experiments. Progress has been made specifically on multi-type heat pump systems and other heat pump systems in which exhaust energy is utilized. The performance characteristics of heat pipes have been studied numerically and experimentally, validating the developed simulation programs, and the effects of various factors on heat pipe performance have been examined. Studies of ice storage systems have focused on the operational characteristics of the systems and on the basics of thermal storage materials. Research into phase change has been carried out steadily. Several papers deal with the cycle analysis of thermodynamic systems that are very useful in the field of air conditioning and refrigeration. (4) Recent studies on refrigeration and air-conditioning systems have focused on system performance and efficiency enhancement when new alternative refrigerants are applied. Heat transfer characteristics during evaporation and condensation are investigated for several tube shapes and new alternative refrigerants, including natural refrigerants. The efficiency of various compressors and the performance of new expansion devices are also dealt with for better design of refrigeration/air-conditioning systems. In addition to studies of the thermophysical properties of refrigerant mixtures, studies on new refrigerants are carried out, and research on two-phase flow continues steadily. (5) A review of recent studies on absorption refrigeration systems indicates that heat and mass transfer enhancement is the key factor in improving system performance. Various experiments have been carried out and diverse simulation models presented. Studies on small-scale absorption refrigeration systems are drawing new attention. Cooling towers were also investigated with respect to efficiency enhancement, and performance analysis and optimization were carried out. (6) A review of recent studies on the indoor thermal environment and building service systems shows that research has mainly focused on several innovative systems such as personal environmental modules, air-barrier-type perimeterless systems with UFAC, and radiant floor cooling systems. New approaches are highlighted for improving indoor environmental conditions and minimizing energy consumption, along with various building energy management activities and cost-benefit analyses for economic evaluation.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is so low that it can cause judgment errors above 30%, so an accurate steel plate fault diagnosis system has been continuously required in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there, the reason being that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain; if the derived overall SN ratio gain is negative, the variable should be removed, whereas a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate fault dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field owing to the variable optimization process. These results show that the proposed system not only performs well at steel plate fault diagnosis but can also reduce operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
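
The core S-MTS idea, one Mahalanobis space per class with distances compared simultaneously, can be sketched as follows. This is a minimal illustration, not the proposed system: the orthogonal-array/SN-ratio variable-optimization stages are omitted, and the class API is an assumption.

```python
import numpy as np

class SimpleSMTS:
    def fit(self, X, y):
        """Build one Mahalanobis space (mean, inverse covariance) per class."""
        self.spaces = {c: (X[y == c].mean(axis=0),
                           np.linalg.pinv(np.cov(X[y == c], rowvar=False)))
                       for c in np.unique(y)}
        return self

    def predict(self, X):
        """Assign each sample to the class with the smallest Mahalanobis
        distance, comparing all class spaces at the same time."""
        labels = list(self.spaces)
        preds = []
        for x in X:
            d = [float((x - mu) @ icov @ (x - mu))
                 for mu, icov in (self.spaces[c] for c in labels)]
            preds.append(labels[int(np.argmin(d))])
        return np.array(preds)
```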

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song Ki-Won;Lee Kyung-Whan
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.417-428
    • /
    • 2005
  • Today, IT organizations perform projects with a vision related to marketing and financial profit. Realizing this vision requires improving project-performing ability in terms of QCD. Organizations have made great efforts to achieve this through process improvement: large companies such as IBM, Ford, and GE have achieved over 80% of their success through business process re-engineering using information technology, rather than through the improvement effects of computerization alone. Collecting, analyzing, and managing data on completed projects is important for achieving the objective, but quantitative measurement is difficult because software is invisible and the effects and efficiency gains of process change cannot be directly observed, so it is not easy to extract an improvement strategy. This paper measures and analyzes project performance, focusing on organizations' external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an OT (Opportunity Tree) model was designed for optimizing project performance. The design process is as follows. First, metadata are derived from projects and analyzed through a quantitative GQM (Goal-Question-Metric) questionnaire. Then the project performance model is designed with the data obtained from the questionnaire, and the organization's performance score for each area is calculated. Each value is revised by integrating the measured area scores with vision weights from all stakeholders (CEO, middle managers, developers, investors, and customers). Through this, routes for improvement are presented and an optimized improvement method is suggested. Existing software process improvement methods have been highly effective at dividing processes but somewhat unsatisfactory in their structural ability to develop and systematically manage strategies when applying the processes to projects; the proposed OT model provides a solution to this problem. The OT model is useful for providing an optimal improvement method in line with the organization's goals and, applied with the proposed methods, can reduce the risks that may occur in the course of process improvement. In addition, satisfaction with the improvement strategy can be increased by obtaining vision-weight input from all stakeholders through the qualitative questionnaire and reflecting it in the calculation. The OT model is also useful for optimizing market expansion and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
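
The score-revision step, integrating measured area scores with stakeholder vision weights, might look like the following sketch. All names and numbers are hypothetical; the abstract does not give the paper's actual weighting formula, so this shows only the general shape of such a calculation.

```python
# Hypothetical per-area performance scores (0-100).
area_scores = {"Quality": 72.0, "Delivery": 65.0, "Cycle time": 58.0, "Waste": 80.0}

# Hypothetical vision weights per stakeholder; each row sums to 1.
vision_weights = {
    "CEO":       {"Quality": 0.40, "Delivery": 0.30, "Cycle time": 0.20, "Waste": 0.10},
    "manager":   {"Quality": 0.30, "Delivery": 0.30, "Cycle time": 0.25, "Waste": 0.15},
    "developer": {"Quality": 0.35, "Delivery": 0.20, "Cycle time": 0.30, "Waste": 0.15},
    "investor":  {"Quality": 0.25, "Delivery": 0.35, "Cycle time": 0.15, "Waste": 0.25},
    "customer":  {"Quality": 0.45, "Delivery": 0.35, "Cycle time": 0.10, "Waste": 0.10},
}

def revised_scores(scores, weights):
    """Revise each area score by the average stakeholder vision weight,
    rescaled so that uniform weights (1/#areas) leave scores unchanged."""
    revised = {}
    for area, score in scores.items():
        avg_w = sum(sw[area] for sw in weights.values()) / len(weights)
        revised[area] = score * avg_w * len(scores)
    return revised

print(revised_scores(area_scores, vision_weights))
```

Areas whose revised score falls furthest below their raw score are the ones stakeholders weight least, while low raw scores in heavily weighted areas would mark the improvement routes the OT model prioritizes.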

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in managing large funds, and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict asset risk and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; as a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and estimation error: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risk with a state-of-the-art algorithm. Various studies address parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new, machine learning-based method for doing so. The study is thus meaningful in proposing an advanced artificial intelligence asset allocation model for fast-developing financial markets.
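
The combination described, XGBoost volatility forecasts feeding a risk parity optimizer, can be sketched as below. The feature set, window lengths, and hyperparameters are assumptions, not the study's specification; the sketch only shows how predicted volatilities can replace historical ones in the covariance used for risk parity.

```python
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

def predicted_covariance(returns, horizon=20, win=60):
    """returns: (T, n_assets) daily returns. Combines historical correlations
    with XGBoost-predicted next-horizon volatilities."""
    n = returns.shape[1]
    corr = np.corrcoef(returns, rowvar=False)
    vols = np.empty(n)
    for i in range(n):
        r = returns[:, i]
        # Features: realized vols over three assumed lookback windows.
        feats = [[r[t-win:t].std(), r[t-20:t].std(), r[t-5:t].std()]
                 for t in range(win, len(r) - horizon)]
        target = [r[t:t+horizon].std() for t in range(win, len(r) - horizon)]
        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(np.array(feats), np.array(target))
        latest = np.array([[r[-win:].std(), r[-20:].std(), r[-5:].std()]])
        vols[i] = model.predict(latest)[0]
    return corr * np.outer(vols, vols)   # covariance from predicted vols

def risk_parity_weights(cov):
    """Long-only weights that equalize each asset's risk contribution."""
    n = cov.shape[0]
    def objective(w):
        rc = w * (cov @ w)               # per-asset risk contributions
        return ((rc[:, None] - rc[None, :]) ** 2).sum()
    res = minimize(objective, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    return res.x
```

Rolling these two steps forward window by window and rebalancing on each out-of-sample block would reproduce the general back-testing scheme described in the abstract.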

N- and P-doping of Transition Metal Dichalcogenide (TMD) using Artificially Designed DNA with Lanthanide and Metal Ions

  • Kang, Dong-Ho;Park, Jin-Hong
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2016.02a
    • /
    • pp.292-292
    • /
    • 2016
  • Transition metal dichalcogenides (TMDs) with a two-dimensional layered structure have been considered highly promising materials for next-generation flexible, wearable, stretchable, and transparent devices due to their unique physical, electrical, and optical properties. Recent studies on TMD devices have focused on developing a suitable doping technique, because precise control of the threshold voltage (V_TH) and of the number of tightly bound trions is required to achieve high-performance electronic and optoelectronic devices, respectively. In particular, it is critical to develop an ultra-low-level doping technique for the proper design and optimization of TMD-based devices, because high-level doping (about 10^12 cm^-2) causes TMD to act as a near-metallic layer. However, it is difficult to apply ion implantation to TMD materials due to the crystal damage that occurs during the implantation process. Although safe doping techniques have recently been developed, most previous TMD doping techniques produced very high doping levels of ~10^12 cm^-2. Recently, low-level n- and p-doping of TMD materials was achieved using cesium carbonate (Cs2CO3), octadecyltrichlorosilane (OTS), and M-DNA, but further studies are needed to reduce the doping level down to the intrinsic level. Here, we propose a novel DNA-based doping method for MoS2 and WSe2 films that enables ultra-low n- and p-doping control and allows proper adjustment of device performance. This is achieved by selecting and/or combining different types of divalent metal and trivalent lanthanide (Ln) ions on DNA nanostructures. The available n-doping range (Δn) on MoS2 by Ln-DNA (DNA functionalized by trivalent Ln ions) is between 6×10^9 cm^-2 and 2.6×10^10 cm^-2, which is even lower than that provided by pristine DNA (~6.4×10^10 cm^-2). The p-doping change (Δp) on WSe2 by Ln-DNA is adjusted between -1.0×10^10 cm^-2 and -2.4×10^10 cm^-2. In the case of Co-DNA (DNA functionalized by both divalent metal and trivalent Ln ions) doping where Eu3+ or Gd3+ ions were incorporated, a light p-doping phenomenon is observed on MoS2 and WSe2 (negative Δn below -9×10^9 cm^-2 and positive Δp above 1.4×10^10 cm^-2, respectively), probably because the added Cu2+ ions reduce the strength of the negative charges in Ln-DNA. However, a light n-doping phenomenon (positive Δn above 10^10 cm^-2 and negative Δp below -1.1×10^10 cm^-2) occurs in TMD devices doped by Co-DNA with Tb3+ or Er3+ ions. A significant (factor of ~5) increase in field-effect mobility is also observed in the MoS2 and WSe2 devices doped by Tb3+-based Co-DNA (n-doping) and Gd3+-based Co-DNA (p-doping), respectively, due to the reduction of the effective electron and hole barrier heights after doping. In terms of optoelectronic device performance (photoresponsivity and detectivity), Tb3+- or Er3+-Co-DNA (n-doping) and Eu3+- or Gd3+-Co-DNA (p-doping) improve the MoS2 and WSe2 photodetectors, respectively.

Evaluation of Soil Parameters Using Adaptive Management Technique (적응형 관리 기법을 이용한 지반 물성 값의 평가)

  • Koo, Bonwhee;Kim, Taesik
    • Journal of the Korean GEO-environmental Society
    • /
    • v.18 no.2
    • /
    • pp.47-51
    • /
    • 2017
  • In this study, the inverse-analysis optimization algorithm at the core of the adaptive management technique was adopted to update soil engineering properties based on the ground response during construction. The adaptive management technique is a framework wherein construction and design procedures are adjusted based on observations and measurements made as construction proceeds. To evaluate the performance of the technique, numerical simulations of triaxial tests and of a synthetic deep excavation were conducted with the Hardening Soil model. To conduct the analysis effectively, the most influential of the model parameters were selected based on composite scaled sensitivity analysis. Results from undrained triaxial tests performed on soft Chicago clays were used for the parameter calibration. The synthetic deep excavation simulation was conducted assuming that the soil engineering parameters obtained from the triaxial simulation represent the actual field condition; these values were used as the reference values. The observation for the synthetic deep excavation simulations was the horizontal displacement of the support wall, which has the highest composite scaled sensitivity among the possible observations. It was found that, starting from various initial soil properties, the horizontal displacement of the support wall converged to the reference displacement when the adaptive management technique was used.
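
The inverse-analysis update at the heart of the technique can be sketched with a toy model. The closed-form "wall displacement" function below is a hypothetical stand-in for the Hardening Soil finite-element model, and the parameter values are invented; the sketch only shows the optimization loop that pulls simulated displacements toward observed ones.

```python
import numpy as np
from scipy.optimize import least_squares

def wall_displacement(params, depths):
    """Toy stand-in for the Hardening Soil model: horizontal wall displacement
    (mm) at each depth for a (stiffness, strength) pair; illustration only."""
    stiffness, strength = params
    return (100.0 / stiffness) * np.exp(-strength * depths / depths.max())

depths = np.linspace(1.0, 20.0, 10)                        # inclinometer depths (m)
observed = wall_displacement((45.0, 0.8), depths)           # synthetic "field" data

def residuals(params):
    # Misfit between simulated and observed wall displacements.
    return wall_displacement(params, depths) - observed

start = np.array([30.0, 0.5])                               # initial property estimate
fit = least_squares(residuals, start, bounds=([1.0, 0.0], [200.0, 2.0]))
print("updated parameters:", fit.x)                         # converges to ~(45.0, 0.8)
```

In the adaptive management setting, this update would be repeated at each construction stage as new wall-displacement measurements arrive, so later excavation stages are designed with progressively better-calibrated soil parameters.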