Title/Summary/Keyword: Performance of Optimization

Photocatalytic Oxidation of Arsenite Using Goethite and UVC-Lamp (침철석과 UVC-Lamp를 이용한 아비산염의 광촉매 산화)

  • Jeon, Ji-Hun; Kim, Seong-Hee; Cho, Hyen-Goo; Kim, Soon-Oh
    • Economic and Environmental Geology, v.50 no.3, pp.215-224, 2017
  • Arsenic (As) is known to be one of the most toxic elements and is frequently detected in groundwater environments. Inorganic As exists as arsenite [As(III)] and arsenate [As(V)] in reduced and oxidized environments, respectively. The toxicity of arsenite has been reported to be much higher than that of arsenate, and arsenite also shows relatively higher mobility in aqueous environments. For this reason, there have been numerous studies on processes for oxidizing arsenite to arsenate in order to reduce the toxicity of arsenic. In particular, photooxidation has been considered a simple, economical, and efficient way to attain this goal. This study was conducted to evaluate the applicability of naturally occurring goethite as a photocatalyst to substitute for TiO2, which has so far been the photocatalyst most commonly used in photooxidation processes. In addition, the effects of several factors on the overall performance of the arsenite photocatalytic oxidation process were evaluated. The results show that the efficiency of the process was affected by the total concentration of dissolved cations rather than by the kind of cations, and relatively higher pH conditions appeared more favorable to the process. When arsenite and arsenate coexisted, their removal by adsorption onto goethite differed because of their different affinities for goethite, but no effect on the photocatalytic oxidation of arsenite was observed. Regarding the effect of humic acid, higher concentrations of humic acid appeared to reduce the overall performance of arsenite photocatalytic oxidation, as arsenite and humic acid compete for activated oxygen species such as hydroxyl and superoxide radicals. In addition, the injection of oxygen gas improved the process because oxygen contributes to arsenite oxidation as an electron acceptor. Consequently, based on the results of this study, the photocatalytic oxidation of aqueous arsenite using goethite appears highly feasible once the process is optimized.

Recent Progress in Air-Conditioning and Refrigeration Research: A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2008 (설비공학 분야의 최근 연구 동향: 2008년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik; Choi, Chang-Ho; Lee, Dae-Young; Kim, Seo-Young; Kwon, Yong-Il; Choi, Jong-Min
    • Korean Journal of Air-Conditioning and Refrigeration Engineering, v.21 no.12, pp.715-732, 2009
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2008. It is intended to convey the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. Conclusions are as follows. (1) Research trends in thermal and fluid engineering were surveyed in the categories of general fluid flow, fluid machinery and piping, new and renewable energy, and fire. Well-developed CFD technologies were widely applied in developing facilities and their systems. New research topics included fire, fuel cells, and solar energy. In the fields of fluid machinery and piping, research focused mainly on flow distribution and optimization. Topics related to the development of fans and compressors had been popular but were no longer investigated widely. Research papers on micro heat exchangers using nanofluids and on micro pumps were also not presented during this period. There were some studies on thermal reliability and performance in the field of new and renewable energy. Numerical simulations of smoke ventilation and fire spread were the main topics in the field of fire. (2) Research on heat transfer presented in 2008 was reviewed in the categories of heat transfer characteristics, industrial heat exchangers, and ground heat exchangers. Research on heat transfer characteristics included thermal transport in cryogenic vessels, dish solar collectors, radiative thermal reflectors, variable-conductance heat pipes, and flow condensation and evaporation of refrigerants. In the area of industrial heat exchangers, research examined micro-channel plate heat exchangers, liquid-cooled cold plates, fin-tube heat exchangers, and the frost behavior of heat exchanger fins. Measurements of ground thermal conductivity and of the thermal diffusion characteristics of ground heat exchangers were reported. (3) In the field of refrigeration, many studies were presented on simultaneous heating and cooling heat pump systems; switching between various operation modes and optimizing the refrigerant charge were considered in this research. Studies of heat pump systems using unutilized energy sources such as sewage water and river water were reported. Evaporative cooling was studied both theoretically and experimentally as a potential alternative to conventional methods. (4) Research papers on building facilities were reviewed and divided into studies on heat and cold sources, air conditioning and air cleaning, ventilation, automatic control of heat sources with piping systems, and sound reduction in hydraulic turbine dynamo rooms. In particular, efficient and effective uses of energy that reduce environmental pollution and operating costs were considered. (5) In the field of building environments, many studies focused on health and comfort. Ventilation system performance was considered important for improving indoor air conditions. Due to high oil prices, various tests were planned to examine building energy consumption and to cut life-cycle costs.

Performance assessment of an urban stormwater infiltration trench considering facility maintenance (침투도랑 유지관리를 통한 도시 강우유출수 처리 성능 평가)

  • Reyes, N.J.D.G.; Geronimo, F.K.F.; Choi, H.S.; Kim, L.H.
    • Journal of Wetlands Research, v.20 no.4, pp.424-431, 2018
  • Stormwater runoff containing considerable amounts of pollutants such as particulates, organics, nutrients, and heavy metals contaminates natural bodies of water. Best management practices (BMPs) intended to reduce the volume of stormwater runoff and treat its pollutants have been devised as cost-effective measures of stormwater management. However, improper design and lack of proper maintenance can degrade a facility, making it unable to perform its intended function. This study evaluated an infiltration trench (IT) that went through a series of maintenance operations. Forty-one monitored rainfall events from 2009 to 2016 were used to evaluate the pollutant removal capabilities of the IT. Assessment of the water quality and hydrological data revealed that inflow volume was the factor most strongly affecting the unit pollutant loads (UPL) entering the facility. Seasonal variations also affected the pollutant removal capabilities of the IT. During the summer season, increased rainfall depths and runoff volumes diminished the pollutant removal efficiency (RE) of the facility, since the larger volumes washed off larger pollutant loads and caused the IT to overflow. The system also exhibited reduced pollutant RE in the winter season due to frozen media layers and chemical mechanisms impaired by low winter temperatures. Maintenance operations likewise had considerable effects on the performance of the IT. During the first two years of operation, the IT exhibited a decrease in pollutant RE due to aging and lack of proper maintenance. However, some events also showed reduced pollutant RE following maintenance, as a result of disturbed sediments that were not removed from the geotextile. Ultimately, the observed effects of maintenance operations on the pollutant RE of the system may support the optimization of maintenance schedules and procedures for BMPs of the same structure.
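
As a concrete illustration, the event-based metrics named in this abstract, removal efficiency (RE) and unit pollutant load (UPL), can be computed as in the following sketch. The formulas follow common BMP-monitoring usage, and all numbers and variable names are hypothetical rather than data from the monitored events.

```python
# Sketch: event-based removal efficiency (RE) and unit pollutant load (UPL)
# for a stormwater BMP. All numbers below are illustrative only.

def removal_efficiency(inflow_load_kg: float, outflow_load_kg: float) -> float:
    """RE as the fraction of the inflow pollutant load retained by the facility."""
    return (inflow_load_kg - outflow_load_kg) / inflow_load_kg

def unit_pollutant_load(load_kg: float, catchment_area_ha: float) -> float:
    """UPL: pollutant load normalized by contributing catchment area (kg/ha)."""
    return load_kg / catchment_area_ha

# Hypothetical event: TSS load in = 12.5 kg, load out = 4.0 kg, 0.8 ha catchment
re = removal_efficiency(12.5, 4.0)
upl_in = unit_pollutant_load(12.5, 0.8)
print(f"RE = {re:.1%}, inflow UPL = {upl_in:.2f} kg/ha")
```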

Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min; Na, Hyoung-Jun; Lee, Mun-Kyu; Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.5, pp.3-16, 2008
  • A Wireless Sensor Network (WSN) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and on their security. For secure storage and secure transmission of sensed information, sensor nodes should be equipped with cryptographic algorithms. Moreover, these algorithms should be implemented efficiently, since sensor nodes are highly resource-constrained devices. Some existing algorithms are already applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, remain to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers from the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present the implementation results of hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31~406 Kbps; thus most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. In terms of encryption speed, these ciphers perform much better than the software-based AES, which shows a speed of 106 Kbps.
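
The throughput and footprint figures quoted above for the three eSTREAM survivors can be tabulated and compared against the software AES baseline in a few lines; the sketch below uses only the numbers stated in the abstract.

```python
# Reported MicaZ results from the abstract: throughput (Kbps), ROM (KB), RAM (bytes).
results = {
    "SOSEMANUK": {"kbps": 406, "rom_kb": 70, "ram_b": 2811},
    "Salsa20":   {"kbps": 176, "rom_kb": 14, "ram_b": 799},
    "Rabbit":    {"kbps": 121, "rom_kb": 22, "ram_b": 755},
}
AES_KBPS = 106  # software-based AES baseline from the same experiment

for name, r in results.items():
    print(f"{name:10s} {r['kbps']:4d} Kbps  ROM {r['rom_kb']:3d} KB  "
          f"RAM {r['ram_b']:5d} B  {r['kbps'] / AES_KBPS:.1f}x vs AES")
```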

Optimization of Characteristic Change due to Differences in the Electrode Mixing Method (전극 혼합 방식의 차이로 인한 특성 변화 최적화)

  • Jeong-Tae Kim; Carlos Tafara Mpupuni; Beom-Hui Lee; Sun-Yul Ryou
    • Journal of the Korean Electrochemical Society, v.26 no.1, pp.1-10, 2023
  • The cathode, one of the four major components of a lithium secondary battery, is the component chiefly responsible for the battery's energy density. The mixing of active material, conductive material, and polymer binder is essential in the commonly used wet manufacturing process for cathodes. However, since there is no systematic method for setting the mixing conditions, performance differences commonly arise between manufacturers. Therefore, to optimize the mixing method in the cathode slurry preparation step, LiMn2O4 (LMO) cathodes were prepared using a commonly used THINKY mixer and a homogenizer, and their characteristics were compared. Each mixing run was performed at 2,000 RPM for 7 min, and all other experimental conditions (mixing time, material input order, etc.) were kept constant so that only the difference in mixing method was evaluated. Of the two cathodes manufactured, the THINKY mixer LMO (TLMO) and the homogenizer LMO (HLMO), HLMO has more uniform particle dispersion than TLMO and thus shows higher adhesive strength. The electrochemical evaluation also reveals that the HLMO cathode shows improved performance, with a more stable cycle life than TLMO. The discharge capacity retention of HLMO at 69 cycles was 88% of the initial value, about 4.4 times higher than that of TLMO. As for rate capability, HLMO exhibited better capacity retention even at high C-rates of 10, 15, and 20 C, and its capacity recovery at 1 C was higher than that of TLMO. It is postulated that the homogenizer improves the characteristics of the slurry containing the active material, conductive material, and polymer binder: by uniformly dispersing the conductive material and suppressing its strong electrostatic tendency to aggregate, it creates an electrically conductive network. As a result, surface contact between the active material and the conductive material increases, electrons move more smoothly, changes in lattice volume during charging and discharging become more reversible, and contact resistance between the active material and the conductive material is suppressed.
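
The retention comparison is simple arithmetic over the cycling data; the sketch below shows the computation, with placeholder capacity values chosen only to reproduce the 88% versus roughly 20% (about 4.4x) relation reported above.

```python
# Sketch: discharge-capacity retention from cycling data. The capacity
# arrays are hypothetical placeholders, not measured values.

def retention(capacities_mAh_g):
    """Fraction of the initial discharge capacity remaining at the last cycle."""
    return capacities_mAh_g[-1] / capacities_mAh_g[0]

hlmo = [118.0, 114.5, 109.2, 103.8]  # hypothetical HLMO capacities, cycles 1..69
tlmo = [117.5, 96.0, 58.3, 23.5]     # hypothetical TLMO capacities, cycles 1..69

r_h, r_t = retention(hlmo), retention(tlmo)
print(f"HLMO: {r_h:.0%}  TLMO: {r_t:.0%}  ratio: {r_h / r_t:.1f}x")
```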

Comparison of Convolutional Neural Network (CNN) Models for Lettuce Leaf Width and Length Prediction (상추잎 너비와 길이 예측을 위한 합성곱 신경망 모델 비교)

  • Ji Su Song; Dong Suk Kim; Hyo Sung Kim; Eun Ji Jung; Hyun Jung Hwang; Jaesung Park
    • Journal of Bio-Environment Control, v.32 no.4, pp.434-441, 2023
  • Determining the size or area of a plant's leaves is an important factor in predicting plant growth and improving the productivity of indoor farms. In this study, we developed a convolutional neural network (CNN)-based model to accurately predict the length and width of lettuce leaves from photographs. A callback function was applied to overcome data limitations and overfitting problems, and K-fold cross-validation was used to improve the generalization ability of the model. In addition, the ImageDataGenerator function was used to increase the diversity of training data through data augmentation. To compare model performance, we evaluated pre-trained models such as VGG16, ResNet152, and NASNetMobile. NASNetMobile showed the highest performance, especially in width prediction, with an R² of 0.9436 and an RMSE of 0.5659; in length prediction, the R² was 0.9537 and the RMSE was 0.8713. The optimized model adopted the NASNetMobile architecture, the RMSprop optimizer, the MSE loss function, and the ELU activation function. Training took an average of 73 minutes per epoch, and the model took an average of 0.29 seconds to process a single lettuce leaf photo. The CNN-based model developed in this study for predicting leaf length and width in indoor farms is expected to enable rapid and accurate assessment of plant growth status simply from images. It is also expected to contribute to increasing the productivity and resource efficiency of farms through appropriate agricultural measures such as adjusting nutrient solution in real time.
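
A minimal Keras sketch of the kind of model this abstract describes, a NASNetMobile backbone with an ELU-activated regression head trained with RMSprop and MSE plus ImageDataGenerator augmentation, is given below. The input size, head width, learning rate, and augmentation ranges are assumptions, since the abstract does not specify them.

```python
# Sketch: NASNetMobile-based regressor for lettuce leaf width and length.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.NASNetMobile(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # assumption: frozen backbone for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="elu"),  # ELU activation, as in the study
    layers.Dense(2),                      # two outputs: leaf width and length
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(1e-4), loss="mse")

# Augmentation via ImageDataGenerator, as mentioned in the abstract;
# these transform ranges are illustrative assumptions.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1 / 255.0, rotation_range=20, zoom_range=0.1, horizontal_flip=True)
```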

Optimization of Analytical Methods for Ochratoxin A and Zearalenone by UHPLC in Rice Straw Silage and Winter Forage Crops (UHPLC를 이용한 볏짚 사일리지와 동계사료작물의 오크라톡신과 제랄레논 분석법 최적화)

  • Ham, Hyeonheui; Mun, Hye Yeon; Lee, Kyung Ah; Lee, Soohyung; Hong, Sung Kee; Lee, Theresa; Ryu, Jae-Gee
    • Journal of The Korean Society of Grassland and Forage Science, v.36 no.4, pp.333-339, 2016
  • The objective of this study was to optimize analytical methods for ochratoxin A (OTA) and zearalenone (ZEA) in rice straw silage and winter forage crops using ultra-high performance liquid chromatography (UHPLC). Samples free of mycotoxins were spiked with 50, 250, or 500 μg/kg of OTA and 300, 1500, or 3000 μg/kg of ZEA. OTA and ZEA were extracted with acetonitrile and cleaned up using an immunoaffinity column, then analyzed by UHPLC equipped with a fluorescence detector. The calibration curves showed high linearity (R² ≥ 0.9999 for OTA and R² ≥ 0.9995 for ZEA). The limits of detection and quantification were 0.1 μg/kg and 0.3 μg/kg, respectively, for OTA, and 5 μg/kg and 16.7 μg/kg, respectively, for ZEA. The recovery and relative standard deviation (RSD) of OTA were as follows: rice straw, 84.23~95.33% and 2.59~4.77%; Italian ryegrass, 79.02~95% and 0.86~5.83%; barley, 74.93~97% and 0.85~9.19%; rye, 77.99~96.67% and 0.33~6.26%. The recovery and RSD of ZEA were: rice straw, 109.6~114.22% and 0.67~7.15%; Italian ryegrass, 98.01~109.44% and 1.65~4.81%; barley, 98~113.53% and 0.25~5.85%; rye, 90.44~108.56% and 2.5~4.66%. Both satisfied the European Commission criteria for quantitative analysis (EC 401/2006). These results show that the optimized methods can be used for mycotoxin analysis of forages.
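
For reference, recovery and RSD in a spike-recovery experiment of this kind are computed as in the sketch below; the replicate measurements are hypothetical, and only the 50 μg/kg spiking level is taken from the study.

```python
# Sketch: recovery (%) and relative standard deviation (RSD, %) for
# spiked samples. Replicate values below are illustrative only.
import statistics

def recovery_and_rsd(measured_ug_kg, spiked_ug_kg):
    mean = statistics.mean(measured_ug_kg)
    rsd = statistics.stdev(measured_ug_kg) / mean * 100
    return mean / spiked_ug_kg * 100, rsd

# Hypothetical OTA replicates for rice straw spiked at 50 ug/kg
rec, rsd = recovery_and_rsd([44.9, 46.2, 45.1], spiked_ug_kg=50)
print(f"recovery = {rec:.1f}%, RSD = {rsd:.2f}%")
```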

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.121-139, 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction matters greatly for financial institutions, and many researchers have addressed the topic over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain predictions more accurate than those of individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining, yet few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using a genetic algorithm (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. It uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; solutions coded as strings are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is then used as input data for the bagging model. Each chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150; the crossover rate and mutation rate were set to 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function of the GA: an SVM model is trained on the training data set using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as input data for the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms, comprising bankrupt (916 cases) and non-bankrupt (916 cases) firms. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The whole data set was separated into three subsets: training, test, and validation. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance selection based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
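
A compact sketch of the two-phase procedure follows: GA-based instance selection, then a bagged SVM with majority voting. The GA settings (population 100, 150 generations, crossover 0.7, mutation 0.1, binary chromosomes, test-set accuracy as fitness) come from the abstract; the tournament selection and one-point crossover operators, and the synthetic demo data, are assumptions.

```python
# Sketch: GA instance selection (phase 1) + bagged SVM (phase 2).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_te, y_te):
    """Test-set accuracy of an SVM trained on the selected instances."""
    sel = mask.astype(bool)
    if sel.sum() < 2 or len(np.unique(y_tr[sel])) < 2:
        return 0.0
    return SVC().fit(X_tr[sel], y_tr[sel]).score(X_te, y_te)

def ga_select(X_tr, y_tr, X_te, y_te, pop=100, gens=150, pc=0.7, pm=0.1):
    n = len(X_tr)
    P = rng.integers(0, 2, size=(pop, n))          # binary chromosomes
    for _ in range(gens):
        f = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in P])
        pairs = rng.integers(0, pop, size=(pop, 2))  # tournament selection (assumed)
        winners = np.where(f[pairs[:, 0]] >= f[pairs[:, 1]], pairs[:, 0], pairs[:, 1])
        children = P[winners].copy()
        for i in range(0, pop - 1, 2):             # one-point crossover (assumed)
            if rng.random() < pc:
                cut = int(rng.integers(1, n))
                tmp = children[i, cut:].copy()
                children[i, cut:] = children[i + 1, cut:]
                children[i + 1, cut:] = tmp
        # bit-flip mutation with probability pm per gene
        children ^= (rng.random(children.shape) < pm).astype(children.dtype)
        P = children
    f = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in P])
    return P[f.argmax()].astype(bool)

# Synthetic stand-in for the firm data: 8 input ratios, as in the study.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mask = ga_select(X_tr, y_tr, X_te, y_te, pop=20, gens=10)  # reduced for demo speed
# Phase 2: bagging of SVMs (majority voting) over the selected instances.
bag = BaggingClassifier(SVC(), n_estimators=10).fit(X_tr[mask], y_tr[mask])
print("bagged SVM accuracy:", bag.score(X_te, y_te))
```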

An Optimization Study on a Low-temperature De-NOx Catalyst Coated on Metallic Monolith for Steel Plant Applications (제철소 적용을 위한 저온형 금속지지체 탈질 코팅촉매 최적화 연구)

  • Lee, Chul-Ho; Choi, Jae Hyung; Kim, Myeong Soo; Seo, Byeong Han; Kang, Cheul Hui; Lim, Dong-Ha
    • Clean Technology, v.27 no.4, pp.332-340, 2021
  • With the recent reinforcement of emission standards, efforts are needed to reduce NOx from air pollutant-emitting workplaces. The NOx reduction method mainly used in industrial facilities is selective catalytic reduction (SCR), and the most common commercial SCR catalyst is the ceramic honeycomb catalyst. This study was carried out to reduce the NOx emitted from steel plants by applying a De-NOx catalyst coated on a metallic monolith. The De-NOx catalyst was synthesized through an optimized coating technique, and the coated catalyst adhered uniformly and strongly to the surface of the metallic monolith, as confirmed by air-jet erosion and bending tests. Owing to the good thermal conductivity of the metallic monolith, the coated De-NOx catalyst showed good De-NOx efficiency at low temperatures (200~250 ℃). In addition, the optimal amount of catalyst coating on the metallic monolith surface was determined for the design of an economical catalyst. Based on these results, a commercial-grade-sized De-NOx catalyst was tested in a semi-pilot De-NOx performance facility under a simulated gas similar to the exhaust gas emitted from a steel plant. Even at a low temperature (200 ℃), it showed excellent performance, satisfying the emission standard (less than 60 ppm). The De-NOx catalyst coated on the metallic monolith thus has good physical and chemical properties and showed good De-NOx efficiency even with a minimum amount of catalyst. Additionally, applying a high-density cell made it possible to make the SCR reactor compact and downsized. We therefore suggest that the proposed metallic-monolith-coated De-NOx catalyst may be a good alternative De-NOx catalyst for industrial uses such as steel plants, thermal power plants, incineration plants, ships, and construction machinery.

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.; Poluzzi, R.; Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.1041-1043, 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is quite possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications there exist, for each element u of U, at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to give good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value m, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128×24 bits. Had we chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of every fuzzy set on each memory row; the word dimension would then be 8×5 bits, and the memory dimension would have been 128×40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
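
To make the word-length arithmetic concrete, the sketch below packs, for each element of a 128-point universe of discourse, up to nfm = 3 pairs of a 3-bit membership-function index and a 5-bit membership value into one 24-bit word, exactly the Length = 3 × (5 + 3) = 24 computed above; the sample membership data are hypothetical.

```python
# Sketch: one 24-bit antecedent-memory word per universe element, holding
# up to NFM (index, value) pairs: Length = NFM * (IDX_BITS + VAL_BITS) = 24.
NFM, IDX_BITS, VAL_BITS = 3, 3, 5  # 8 fuzzy sets, 32 truth levels

def pack_word(pairs):
    """Pack up to NFM (function_index, membership_value) pairs into an int."""
    assert len(pairs) <= NFM
    word = 0
    for idx, val in pairs:
        assert 0 <= idx < 2**IDX_BITS and 0 <= val < 2**VAL_BITS
        word = (word << (IDX_BITS + VAL_BITS)) | (idx << VAL_BITS) | val
    return word

# Hypothetical element with non-null membership on fuzzy sets 2, 3, and 4:
row = pack_word([(2, 19), (3, 31), (4, 7)])
print(f"{row:024b}")  # one memory row; 128 rows -> 128*24 bits in total
```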
