• Title/Summary/Keyword: performance-based optimization


CO2 Conversion by Controlling the Reduction Temperature of Cobalt Catalyst (코발트 촉매의 환원온도 조절을 통한 CO2 전환 공정)

  • Heuntae Jo;Jaehoon Kim
    • Clean Technology
    • /
    • v.30 no.3
    • /
    • pp.188-194
    • /
    • 2024
  • This study investigates the impact of reduction temperature on the structure and performance of cobalt-manganese (CM) based catalysts in the direct hydrogenation reaction of carbon dioxide (CO2). It was observed that at a reduction temperature of 350 ℃, these catalysts could successfully facilitate the conversion of CO2 into long-chain hydrocarbons. This efficiency is attributed to the optimal conditions provided by the core-shell structure of the catalysts, which effectively catalyzes both the reverse water-gas shift (RWGS) and Fischer-Tropsch (FT) reactions. However, as the reduction temperature increased to 600 ℃, the effectiveness of the reaction process was hindered, and there was a shift in selectivity towards methane. This shift is due to the excessive reduction of the catalyst's outer shell, which reduces the number of RWGS sites and subsequently suppresses the production of CO. These findings highlight the importance of carefully controlling the reduction temperature in the design and optimization of cobalt-based catalysts. Maintaining a balance between the RWGS and FT reactions is crucial. This emphasizes that the reduction temperature is a key factor in efficiently generating long-chain hydrocarbons from CO2.

Low Noise Vacuum Cleaner Design (저소음 청소기 개발)

  • Joo, Jae-Man;Lee, Jun-Hwa;Hong, Seun-Gee;Oh, Jang-Keun;Song, Hwa-Gyu
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.11a
    • /
    • pp.939-942
    • /
    • 2007
  • The vacuum cleaner is a familiar household product that removes dust from our surroundings. However well it cleans, many people avoid it because of its loud noise, which disrupts everyday life: phone calls, TV watching, conversation, and so on. To reduce these inconveniences, this paper studies noise reduction methods and the systematic design of a low-noise vacuum cleaner. First, a sound quality investigation was performed to determine the noise level and quality at which TV watching and phone calls remain comfortable. Based on European and domestic customer sound-quality surveys, guidelines for sound power, peak noise level, and the target sound spectrum were established. Second, target sound spectra were designed for each part based on the sound quality results: fan-motor, brush, main-body, and cyclone spectra were chosen to reach the final target sound according to their contribution levels. The fan-motor is the major noise source of a vacuum cleaner, and its tonal peaks (the RPM peak and the blade-passing-frequency peak) are especially irritating. To reduce these peaks, the interaction between the high-speed impeller and the diffuser was examined; through extensive experimental and numerical tests, operating points were investigated and the flow-path area between diffusers was optimized. As a bagless device, the cyclones are another major noise source, and previous research was adopted to reduce their noise. The brush is the most difficult part to quiet, since its noise arises entirely from aeroacoustic phenomena; numerical analysis aided understanding of the flow structure and pattern, and many experimental tests were performed. The gap between carpet and brush was optimized and the flow paths were redesigned to lower the noise, achieving a large reduction while preserving cleaning efficiency and handling power.
With all the above parts in place, the main-body design was studied. For a systematic design, a configuration design development technique was adapted from airplane design and evolved together with each component design. In the first configuration stage, the fan-motor installation position was investigated, and ten configuration ideas were developed and tested. In the second stage, reduced-size, compressed configuration candidates were tested and evaluated against several major factors together: noise, power, mass-production feasibility, size, and flow path. A noise-reduction configuration that degraded other performance was considered ineffective. In the third stage, cyclones were introduced and the size was reduced once more; the fourth through seventh configurations then evolved in size and design image along with noise and other performance indexes, finally yielding a configuration with a much lower overall noise level. All of these investigations were incorporated into the vacuum cleaner design, and final customer satisfaction tests were performed in Europe: first-grade sound quality and the lowest noise level among bagless vacuum cleaners were achieved.


Slot-Time Optimization Scheme for Underwater Acoustic Sensor Networks (수중음향 센서네트워크를 위한 슬롯시간 최적화 기법)

  • Lee, Dongwon;Kim, Sunmyeng;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.4
    • /
    • pp.351-361
    • /
    • 2014
  • Compared with terrestrial communication, the high bit error ratio (BER) and low channel bandwidth of the underwater channel are major causes of throughput degradation, so a MAC protocol for underwater acoustic sensor networks (UWASNs) must be designed with these characteristics in mind. MAC protocols for UWASNs can be classified into two major types by contention scheme: contention-free and contention-based. In large-scale sensor networks, contention-based schemes are commonly used because contention-free schemes suffer from time-synchronization problems. In a contention-based scheme, each node contends with its neighbors for channel access using a back-off algorithm, but the fixed slot time of the back-off algorithm introduces long delays that decrease network throughput. This paper proposes a new scheme that solves this problem by using a variable slot time instead of a fixed one: each node measures the propagation delay to its neighbors and uses it as its slot time, so slot times are optimized for the actual node deployment. Consequently, the time wasted in back-off is reduced and network throughput improves. The throughput and delay of the new MAC protocol were assessed through NS-3 and compared with an existing MAC protocol (MACA-U), and the results proved that the MAC protocol using the proposed scheme outperforms the existing one.
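The abstract's core idea, deriving each node's back-off slot time from its measured neighbor propagation delays rather than a network-wide fixed value, can be sketched as follows. This is a minimal illustration, not the paper's protocol; the function names and the max-delay policy are assumptions.

```python
import random

# Hypothetical sketch: each node sizes its back-off slot time from the
# measured propagation delays to its own neighbors, instead of using one
# fixed slot time dimensioned for the whole network.

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound underwater

def slot_time_from_neighbors(neighbor_distances_m):
    """Variable slot time: the longest one-way propagation delay to any
    neighbor (the choice of max is an illustrative policy)."""
    return max(d / SOUND_SPEED for d in neighbor_distances_m)

def backoff_delay(slot_time, cw):
    """Classic back-off: a random slot count in [0, cw) times the slot time."""
    return random.randrange(cw) * slot_time

# A node whose farthest neighbor is 300 m away needs only 0.2 s slots,
# while a fixed scheme sized for a 1500 m network would use 1.0 s slots,
# wasting most of each back-off interval.
near_slot = slot_time_from_neighbors([150.0, 300.0])  # 0.2 s
fixed_slot = 1500.0 / SOUND_SPEED                     # 1.0 s
print(near_slot, fixed_slot)
```

Because back-off delay scales linearly with slot time, densely deployed nodes recover most of the time a fixed worst-case slot would waste.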

Using a H/W ADL-based Compiler for Fixed-point Audio Codec Optimization thru Application Specific Instructions (응용프로그램에 특화된 명령어를 통한 고정 소수점 오디오 코덱 최적화를 위한 ADL 기반 컴파일러 사용)

  • Ahn Min-Wook;Paek Yun-Heung;Cho Jeong-Hun
    • The KIPS Transactions:PartA
    • /
    • v.13A no.4 s.101
    • /
    • pp.275-288
    • /
    • 2006
  • Rapid design space exploration is crucial to customizing an embedded system design to exploit the application's behavior. As time-to-market becomes a key design concern, the approach based on an application-specific instruction-set processor (ASIP) is being considered more seriously as an alternative design methodology. In this approach, the instruction set architecture (ISA) of the target processor is frequently modified to best fit the application with regard to code size and speed. The two goals of this paper are to introduce our new retargetable compiler and to show how it has been used in ASIP-based design space exploration for a popular digital signal processing (DSP) application. The newly developed retargetable compiler not only provides the functionality of previous retargetable compilers but also visualizes and profiles the application program, helping architecture designers and application programmers insert new application-specific instructions into the target architecture to increase performance. Starting from an initial RISC-style ISA for the target processor, we characterized the application code and incrementally updated the ISA with application-specific instructions to give the compiler a better chance to optimize the assembly code. Using six audio-codec-specific instructions identified with the retargetable compiler, we obtained a 32% performance increase and a 20% program size reduction. Our experimental results offer evidence that a highly retargetable compiler is essential for rapidly prototyping a new ASIP for a specific application.
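The mechanism behind application-specific instructions can be illustrated with a toy peephole pass: a frequent instruction pair in the generated code is folded into one custom instruction. This is not the paper's compiler; the mnemonics, the multiply-accumulate pattern, and the text-based representation are all illustrative assumptions.

```python
# Toy sketch of ISA specialization: a peephole pass recognizes a frequent
# multiply-then-accumulate pair in the assembly stream and replaces it with
# a single hypothetical "MAC" instruction, shrinking and speeding the code.

def fold_mac(instrs):
    """Replace ['MUL t, a, b', 'ADD acc, acc, t'] with 'MAC acc, a, b'."""
    out, i = [], 0
    while i < len(instrs):
        if (i + 1 < len(instrs)
                and instrs[i].startswith("MUL ")
                and instrs[i + 1].startswith("ADD ")):
            _, t, a, b = instrs[i].replace(",", "").split()
            _, acc, acc2, t2 = instrs[i + 1].replace(",", "").split()
            # Fold only when the ADD accumulates the MUL's result.
            if acc == acc2 and t == t2:
                out.append(f"MAC {acc}, {a}, {b}")
                i += 2
                continue
        out.append(instrs[i])
        i += 1
    return out

code = ["MUL r1, r2, r3", "ADD r0, r0, r1", "SUB r4, r4, r5"]
print(fold_mac(code))  # ['MAC r0, r2, r3', 'SUB r4, r4, r5']
```

A real retargetable compiler would do this over an IR with dataflow checks, but the payoff is the same: fewer instructions per hot loop iteration, which is where the reported code-size and speed gains come from.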

Analysis of Air Current Measurements at External Induction-Style Kitchen and Bathroom Vents (외기유인형 주방·욕실 배기구의 기류측정 분석)

  • Lee, Yong-Ho;Kim, Seong-Yong;Park, Jin-Chul;Hwang, Jung-Ha
    • Journal of the Korean Solar Energy Society
    • /
    • v.32 no.6
    • /
    • pp.76-84
    • /
    • 2012
  • This study measured air currents in an experimental building under varying external conditions, induction-duct types, and internal socket types, applying an external induction duct composed of inducing openings, induction lines, and induction units to the kitchen and bathroom vents on the rooftop of a super high-rise apartment building, in order to help improve venting performance. The study also proposed an optimized external induction-style kitchen and bathroom vent capable of wind power generation. (1) Regarding air current distribution by vent velocity, venting performance of the kitchen and bathroom improved by 1.0 m/s at vent velocities of 2.0 m/s or higher, allowing wind power generation. (2) Regarding air current distribution by external velocity, venting performance improved by 1.2 m/s at external velocities of 2.0 m/s or higher, allowing wind power generation. (3) Regarding air current distribution by wind direction ($0{\sim}180^{\circ}$), higher vent velocities were obtained when the angle between the external induction duct and the prevailing wind direction was within ${\pm}30^{\circ}$. (4) Regarding air current distribution by induction-duct type, the [M1] type, which combines the inducing openings and lines with the induction units, showed the greatest improvement in kitchen and bathroom venting performance, increasing vent velocity by 46%. (5) Regarding air current distribution by internal socket type, where the main kitchen and bathroom ducts connect to the external induction ducts, the venturi-tube type [Sv] increased vent velocity by 66% owing to the smoothest external inflow.

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.326-338
    • /
    • 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise and are limited in number, which creates a gap between the true probability distribution and the empirical one. This gap causes the learning parameters to over-fit the training data and deviate from the true distribution of the data, a phenomenon called overfitting. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data, and the phenomenon becomes more severe as the complexity of the network increases. In this paper, taking a statistical viewpoint, we propose an integrated process of neural network design and model selection to improve generalization performance. First, using natural gradient learning with adaptive regularization, we obtain optimal parameters that are not overfitted to the training data, with fast convergence. Next, applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion. Computer simulations on benchmark problems confirm the generalization and structure-optimization performance of the proposed integrated process of learning and model selection.
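The final selection step described above can be sketched with the standard Bayesian Information Criterion. This is a generic illustration of BIC-based model choice, not the authors' code; the candidate sizes and log-likelihoods are invented numbers.

```python
import math

# Illustrative sketch: after pruning yields candidate networks of different
# sizes, BIC picks the one that balances fit against complexity.

def bic(log_likelihood, n_params, n_samples):
    """BIC = k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_samples) - 2.0 * log_likelihood

# Hypothetical candidates: (remaining parameter count, log-likelihood).
# The big net fits best but pays a large complexity penalty; the tiny
# net is cheap but fits poorly.
candidates = [(120, -310.0), (60, -315.0), (25, -430.0)]
n_samples = 500

best = min(candidates, key=lambda c: bic(c[1], c[0], n_samples))
print(best)  # the mid-sized, 60-parameter model wins here
```

The `n_params * ln(n)` penalty is what lets a slightly worse-fitting but much smaller pruned network win, which is exactly the generalization trade-off the abstract targets.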

A optimization study on the preparation and coating conditions on honeycomb type of Pd/TiO2 catalysts to secure hydrogen utilization process safety (수소 활용공정 안전성 확보를 위한 Pd/TiO2 수소 상온산화 촉매의 제조 및 허니컴 구조의 코팅 조건 최적화 연구)

  • Jang, Young hee;Lee, Sang Moon;Kim, Sung Su
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.29 no.4
    • /
    • pp.47-54
    • /
    • 2021
  • In this study, the performance of a honeycomb-type hydrogen oxidation catalyst for removing leaked hydrogen was evaluated to help secure safety in a hydrogen economy. The Pd/TiO2 catalyst was prepared by a liquid-phase reduction method that avoids exposure to a heat source, and H2-chemisorption analysis showed that it consisted of very small active particles of 2~4 nm. In addition, metal dispersion decreased and active particle size increased as the reduction temperature increased; since active metal particle size and hydrogen oxidation performance were proportionally correlated, this was consistent with the observed decrease in hydrogen oxidation performance. The prepared catalyst was coated on a honeycomb-shaped support so that it could be applied to hydrogen industrial processes. When 20 wt% or more of the AS-40 binder was used in the coating, oxidation performance of 90% or more was observed under low-concentration hydrogen conditions. SEM analysis showed that long-term catalytic activity can be expected, because the binder enhances the adhesion strength of the catalyst and prevents catalyst detachment. This basic research can help secure safety in a hydrogen society, including gasification and organic-resource processes, and can be utilized in systems that respond to unexpected safety accidents in the future.

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • The collaborative filtering (CF) algorithm has been popularly used for implementing recommender systems, and there have been many prior studies on improving its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF with additional information. In this research, we propose a new hybrid recommender system that fuses CF with results from social network analysis of trust and distrust relationship networks among users to enhance prediction accuracy. The proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it amplifies the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust network, and attenuates it when the neighbor has higher in-degree centrality in the distrust network. The algorithm considers four types of user relationships in total: direct trust, indirect trust, direct distrust, and indirect distrust. It uses four adjusting coefficients, which set the level of amplification or attenuation for the in-degree centrality values derived from the direct and indirect trust and distrust networks. To determine the optimal adjusting coefficients, genetic algorithms (GA) were adopted; accordingly, we named the proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set, the 'Extended Epinions dataset' provided by 'trustlet.org'.
The data set contains user responses (rating scores and reviews) after purchasing specific items (e.g. cars, movies, music, books) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA); we also used UCINET 6 to calculate the in-degree centrality of the trust/distrust networks and Palisade Software's Evolver, a commercial genetic algorithm package. To examine the effectiveness of the proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and ignores trust/distrust relationships entirely. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. Performance was evaluated by average MAE (mean absolute error). The experiments found the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust to be 0, 1.4287, 1.5, and 0.4615, respectively, implying that distrust relationships are more important than trust ones in recommender systems. In recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust/distrust information, outperformed conventional CF (avg. MAE = 0.112638) and also beat SNACF (avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired-samples t-tests.
The t-tests showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust/distrust relationships can be important information for improving recommendation algorithms; distrust relationships in particular had the greater impact on CF performance. This implies that we should pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
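The amplify/attenuate step described above can be sketched as follows. The exact adjustment formula is not given in the abstract, so this multiplicative form, the function name, and the clamping are assumptions; only the two default coefficients (indirect trust 1.4287, direct distrust 1.5) come from the reported results.

```python
# Illustrative sketch (not the authors' code) of the SNACF-GA idea:
# a rating-correlation similarity is amplified by the neighbor's in-degree
# centrality in the trust network and attenuated by its centrality in the
# distrust network. The multiplicative form is an assumption.

def adjusted_similarity(base_sim, trust_c, distrust_c,
                        a_trust=1.4287, a_distrust=1.5):
    """base_sim: correlation of rating patterns in [-1, 1].
    trust_c / distrust_c: neighbor's normalized in-degree centrality.
    Defaults reuse the paper's optimal coefficients for indirect trust
    and direct distrust; the combination rule is hypothetical."""
    amplified = base_sim * (1.0 + a_trust * trust_c)
    # Clamp so heavy distrust can zero the similarity but not flip its sign.
    attenuated = amplified * (1.0 - min(1.0, a_distrust * distrust_c))
    return attenuated

# A widely trusted neighbor's similarity grows; a distrusted one's shrinks.
print(adjusted_similarity(0.5, trust_c=0.2, distrust_c=0.0))  # > 0.5
print(adjusted_similarity(0.5, trust_c=0.0, distrust_c=0.2))  # < 0.5
```

In the full system these adjusted similarities would feed the usual memory-based CF prediction (a centrality-weighted average of neighbors' ratings), with the four coefficients tuned by the GA.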

Design and Implementation of A Distributed Information Integration System based on Metadata Registry (메타데이터 레지스트리 기반의 분산 정보 통합 시스템 설계 및 구현)

  • Kim, Jong-Hwan;Park, Hea-Sook;Moon, Chang-Joo;Baik, Doo-Kwon
    • The KIPS Transactions:PartD
    • /
    • v.10D no.2
    • /
    • pp.233-246
    • /
    • 2003
  • A mediator-based system integrates heterogeneous information systems in a flexible manner, but it pays little attention to query optimization issues, especially query reuse, and it does not use standardized metadata for schema matching. To improve these two issues, we propose a mediator-based Distributed Information Integration System (DIIS) that uses query caching for performance and an ISO/IEC 11179 metadata registry for standardization. The DIIS is designed to provide decision-making support by logically integrating distributed heterogeneous business information systems in a Web environment. We designed the system as a three-layer architecture using the layered pattern, to improve reusability and facilitate maintenance. The functionality and flow of the core components of the three-layer architecture are expressed with process line diagrams and assembly line diagrams of the Eriksson-Penker Extension Model (EPEM), an extension of UML. The Supply Chain Management (SCM) domain was used for the implementation, with a Web-based user interface. The DIIS supports query caching and query reuse through a Query Function Manager (QFM) and a Query Function Repository (QFR), enhancing query processing speed and reusability by caching frequently used queries and optimizing query cost. The DIIS also resolves diverse heterogeneity problems by mapping a MetaData Registry (MDR) based on ISO/IEC 11179 to a Schema Repository (SCR).
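The query-caching idea carried by the QFM/QFR pair can be sketched minimally as follows. The class name echoes the abstract's QFR, but the interface, normalization rule, and behavior are illustrative assumptions, not the DIIS API.

```python
# Minimal sketch of query reuse: frequently issued queries are normalized,
# keyed, and served from a repository instead of being re-planned and
# re-executed against every distributed source system.

class QueryFunctionRepository:
    def __init__(self):
        self._cache = {}  # normalized query text -> cached result
        self.hits = 0

    @staticmethod
    def _normalize(query):
        # Trivial normalization so equivalent spellings share one entry.
        return " ".join(query.lower().split())

    def execute(self, query, run):
        """run: callable that actually executes the query at the sources."""
        key = self._normalize(query)
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        result = run(query)
        self._cache[key] = result
        return result

repo = QueryFunctionRepository()
run = lambda q: ["row1", "row2"]  # stand-in for distributed execution
repo.execute("SELECT * FROM orders", run)
repo.execute("select *   from ORDERS", run)  # served from the cache
print(repo.hits)  # 1
```

A production mediator would also need cache invalidation and cost-based decisions about what to cache, which is where the abstract's query-cost optimization comes in.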

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded recently, malicious attacks and hacking of networked systems occur frequently, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Such models perform well under normal situations but show poor performance when they meet new or unknown attack patterns. For this reason, several recent studies have adopted artificial intelligence techniques that can proactively respond to unknown threats. Artificial neural networks (ANNs) in particular have been popular in prior studies because of their superior prediction accuracy, but ANNs have intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and an opaque prediction process (the black-box problem). As a result, the most recent IDS studies have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model to improve the predictive ability of IDS. Our model is also designed to consider asymmetric error costs by optimizing the classification threshold. There are two common forms of error in intrusion detection.
The first is the false-positive error (FPE), in which normal activity is misjudged as an intrusion, resulting in unnecessary countermeasures. The second is the false-negative error (FNE), in which a malicious program is misjudged as normal. Compared with FPE, FNE is more fatal; thus, when considering the total misclassification cost in IDS, it is more reasonable to assign heavier weight to FNE than to FPE. We therefore designed our intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. Conventional SVM cannot be applied directly here, because it is designed to generate only a discrete output (a class label); to resolve this, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of our model, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using NeuroShell 4.0, and the SVM using LIBSVM v2.90, a freeware SVM trainer. Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions from the accuracy perspective, and that it reduced the total misclassification cost compared with the ANN-based intrusion detection model.
As a result, the intrusion detection model proposed in this paper is expected not only to enhance the performance of IDS but also to lead to better management of FNE.
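The threshold-optimization step described above can be sketched independently of the SVM itself: given probability estimates from a Platt-scaled classifier, sweep the threshold and keep the one minimizing total cost. The 5:1 cost ratio, grid search, and toy data below are assumptions for illustration, not the paper's settings.

```python
# Illustrative sketch: cost-sensitive threshold selection over probability
# estimates, with a false negative (missed intrusion) costing more than a
# false positive (false alarm).

def total_cost(threshold, probs, labels, c_fp=1.0, c_fn=5.0):
    """labels: 1 = intrusion, 0 = normal. c_fn > c_fp encodes that missing
    an attack is more costly (the 5:1 ratio is an assumed example)."""
    cost = 0.0
    for p, y in zip(probs, labels):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 0:
            cost += c_fp   # false alarm
        elif pred == 0 and y == 1:
            cost += c_fn   # missed intrusion
    return cost

def best_threshold(probs, labels, steps=99):
    """Grid search over (0, 1); a validation set would be used in practice."""
    grid = [(i + 1) / (steps + 1) for i in range(steps)]
    return min(grid, key=lambda t: total_cost(t, probs, labels))

probs  = [0.9, 0.8, 0.35, 0.3, 0.2, 0.1]  # hypothetical Platt outputs
labels = [1,   1,   1,    0,   0,   0  ]
t = best_threshold(probs, labels)
print(t)  # lands below 0.5: the heavy FN cost pushes the threshold down
```

With symmetric costs the usual 0.5 cutoff is optimal; raising `c_fn` drags the chosen threshold downward so borderline cases are flagged as intrusions, which is precisely the FNE-averse behavior the model is designed for.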