• Title/Summary/Keyword: performance-based optimization


Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, as this helps to minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which handles input/output of data in multidimensional array structures, and network speed is important because of the communication patterns through which large amounts of data move. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes in virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small ones. Our analysis results will serve as a reference for constructing cloud computing systems that minimize the time and cost of numerical ocean modeling.
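The STREAM benchmark mentioned in the abstract measures sustainable memory bandwidth with simple vector kernels. A minimal sketch of its triad kernel in Python with NumPy (the array size and GB/s accounting are illustrative; the real benchmark is a C/Fortran program with OpenMP):

```python
import time
import numpy as np

def stream_triad(n=10_000_000, scalar=2.0):
    """STREAM-style triad kernel a[i] = b[i] + scalar * c[i].
    Returns approximate memory bandwidth in GB/s, counting the three
    arrays touched per element (NumPy temporaries are ignored)."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty(n)
    t0 = time.perf_counter()
    a[:] = b + scalar * c
    elapsed = time.perf_counter() - t0
    bytes_moved = 3 * n * a.itemsize   # read b, read c, write a
    return bytes_moved / elapsed / 1e9

print(f"Triad bandwidth: {stream_triad():.1f} GB/s")
```

Comparing this figure across instance types gives a rough picture of the memory subsystem the abstract identifies as critical for ROMS performance.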

Status Diagnosis Algorithm for Optimizing Power Generation of PV Power Generation System due to PV Module and Inverter Failure, Leakage and Arc Occurrence (태양광 모듈, 인버터 고장, 누설 및 아크 발생에 따른 태양광발전시스템의 발전량 최적화를 위한 상태진단 알고리즘)

  • Yongho Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.4
    • /
    • pp.135-140
    • /
    • 2024
  • PV power generation systems are said to have a long lifespan compared with other renewable energy sources and to require little maintenance. In practice, however, the performance expected at initial design is often not achieved, owing to shading, temperature rise, mismatch, contamination or deterioration of PV modules, inverter failure, leakage current, and arc generation. To address these problems, the power generation amount and operating status are typically surveyed qualitatively, or performance is comparatively analyzed using the performance ratio (PR), the standard performance index of a PV power generation system. Because the PR aggregates large losses, however, it is difficult to determine accurately from this index alone whether the system suffers from performance degradation, failures, or defects. In this paper, we study a status diagnosis algorithm covering PV module shading, inverter failure, leakage, and arcing, in order to optimize the power generation of PV systems under changing environmental conditions. Using the proposed algorithm, we also present the results of an empirical test of condition diagnosis for each area and the resulting optimized operation of power generation.
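The performance ratio (PR) referenced above compares the actual yield with the yield an ideal, loss-free system would produce under the same irradiation. A minimal sketch (variable names and the sample numbers are illustrative):

```python
def performance_ratio(e_out_kwh, p_rated_kw, irradiation_kwh_m2, g_stc=1.0):
    """Performance Ratio (PR): actual yield over reference yield.
    e_out_kwh:          energy delivered over the period (kWh)
    p_rated_kw:         nameplate array capacity (kW)
    irradiation_kwh_m2: in-plane irradiation over the period (kWh/m^2)
    g_stc:              reference irradiance, 1 kW/m^2 at STC"""
    final_yield = e_out_kwh / p_rated_kw          # kWh per kW installed
    reference_yield = irradiation_kwh_m2 / g_stc  # equivalent full-sun hours
    return final_yield / reference_yield

# e.g. 420 kWh from a 3 kW array under 175 kWh/m^2 of irradiation
print(round(performance_ratio(420, 3.0, 175), 3))  # → 0.8
```

The abstract's point is that a PR value alone cannot say which fault (shading, inverter failure, leakage, arcing) caused the loss, which motivates the per-fault diagnosis algorithm.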

Development of a decision supporting system for forest management based on the Tabu Search heuristic algorithm (Tabu Search 휴리스틱 알고리즘을 이용한 산림경영 의사결정지원시스템 구현)

  • Park, Ji-Hoon;Won, Hyun-Kyu;Kim, Young-Hwan;Kim, Man-Pil
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.10
    • /
    • pp.229-237
    • /
    • 2010
  • Recently, forest management objectives have become more complex, and spatial constraints must be considered for ecological stability. Forest planning is now required to provide an optimized solution that satisfies a number of management objectives and constraints. In this study, we developed a decision support system based on the Tabu Search (TS) heuristic algorithm, a dynamic planning technique that can generate an optimized solution for given objectives and constraints. For this purpose, we analyzed the logical flow of the algorithm and designed the sequence of processes. To build a high-performance computing system, we examined a number of strategies to minimize execution time and workload in each process and to maximize the efficiency of system resource use. We compared two models, one based on the original TS algorithm and one on a revised version, and evaluated their optimization performance. The results showed that the developed system performs well in providing feasible solutions for several management objectives and constraints. Moreover, the revised TS algorithm appeared more stable, providing results with minimal variation. The developed system is expected to be used for developing forest management plans in Korea.
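The core of Tabu Search, as used by the system above, fits in a few lines: a short-term memory of recently visited solutions blocks immediate backtracking so the search can escape local optima. A toy illustration (the objective and neighborhood are stand-ins, not the forest-management model):

```python
def tabu_search(objective, neighbors, start, iters=200, tenure=10):
    """Minimal Tabu Search: a short-term memory (tabu list) of recently
    visited solutions forbids revisiting them, so the search can move
    through and past local optima."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=objective)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                           # expire the oldest entry
        if objective(current) > objective(best):
            best = current
    return best

# toy stand-in objective: maximize f(x) = -(x - 7)^2 over the integers
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(tabu_search(f, step, start=0))  # → 7
```

In the actual system, a solution would encode a harvest schedule and the objective would score the management goals subject to spatial constraints.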

An Iterative Data-Flow Optimal Scheduling Algorithm based on Genetic Algorithm for High-Performance Multiprocessor (고성능 멀티프로세서를 위한 유전 알고리즘 기반의 반복 데이터흐름 최적화 스케줄링 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.6
    • /
    • pp.115-121
    • /
    • 2015
  • In this paper, we propose an iterative data-flow optimal scheduling algorithm based on a genetic algorithm for high-performance multiprocessors. The basic hardware model can be extended to include detailed features of the multiprocessor architecture; this is illustrated by implementing a hardware model that requires routing data transfers over a communication network with limited capacity. The scheduling method consists of three layers. In the top layer, a genetic algorithm performs the optimization: it generates different permutations of operations, which are passed on to the middle layer. The global scheduler makes the main scheduling decisions based on a permutation of operations; details of the hardware model are not considered in this layer. That is done in the bottom layer by the black-box scheduler, which completes the scheduling of each operation and ensures that the detailed hardware model is obeyed. Both scheduling layers can insert cycles into the schedule to ensure that a valid schedule is always found quickly. Benchmark results on five filters show that the scheduling method is able to find good-quality schedules in reasonable time.
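The layered structure described above can be sketched as a genetic algorithm searching over operation permutations, with a simple greedy list scheduler standing in for the lower layers (processor count, operation durations, and GA parameters are all illustrative, not the paper's hardware model):

```python
import random

def list_schedule(order, durations, n_procs=2):
    """Lower layer (stand-in): greedy list scheduling. Each operation, in
    the given order, goes to the processor that frees up first.
    Returns the makespan."""
    free_at = [0.0] * n_procs
    for op in order:
        p = free_at.index(min(free_at))
        free_at[p] += durations[op]
    return max(free_at)

def ga_schedule(durations, pop=30, gens=60, seed=1):
    """Top layer: a genetic algorithm searches permutations of operations;
    fitness is the makespan produced by the list scheduler."""
    rng = random.Random(seed)
    ops = list(range(len(durations)))
    population = [rng.sample(ops, len(ops)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: list_schedule(o, durations))
        parents = population[: pop // 2]            # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a = rng.choice(parents)[:]
            i, j = sorted(rng.sample(range(len(a)), 2))
            a[i], a[j] = a[j], a[i]   # swap mutation keeps it a permutation
            children.append(a)
        population = parents + children
    best = min(population, key=lambda o: list_schedule(o, durations))
    return best, list_schedule(best, durations)

order, makespan = ga_schedule([3, 2, 2, 1, 4])
print(order, makespan)
```

Permutation encoding plus a deterministic decoder is a common GA pattern for scheduling, because every chromosome decodes to a valid schedule.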

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.536-543
    • /
    • 2023
  • Speaker diarization, which labels "who spoke when" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods for labeling speech overlap and for optimizing diarization models. Most deep neural network-based end-to-end diarization systems solve a multi-label classification problem, predicting the labels of all speakers active in each frame of speech. However, the performance of multi-label-based models varies greatly depending on the threshold setting. In this paper, we study a speaker diarization system using single-label classification so that diarization can be performed without thresholds. The proposed model estimates labels from the model output by converting the speaker labels into a single label. To account for speaker label permutations during training, the proposed model uses a combination of Permutation Invariant Training (PIT) loss and cross-entropy loss. In addition, we study how to add residual connections to the model for effective learning of diarization models with deep structures. The experiments used the LibriSpeech database to generate simulated noisy data for two speakers. Compared with the baseline model in terms of Diarization Error Rate (DER), the proposed method performs labeling without a threshold and improves performance by about 20.7 %.
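The PIT loss mentioned above scores a prediction against every permutation of the reference speaker labels and keeps the best match, so the model is not penalized for assigning speakers to its output channels in a different order. A sketch with binary cross-entropy (the array shapes and values are illustrative, not the paper's model):

```python
from itertools import permutations
import numpy as np

def pit_loss(pred, target):
    """Permutation Invariant Training loss: evaluate the frame-wise
    binary cross-entropy under every speaker permutation of the
    reference and keep the minimum.
    pred:   (frames, speakers) predicted activity probabilities
    target: (frames, speakers) 0/1 reference labels"""
    eps = 1e-8
    best = np.inf
    for perm in permutations(range(target.shape[1])):
        t = target[:, perm]
        bce = -(t * np.log(pred + eps) + (1 - t) * np.log(1 - pred + eps))
        best = min(best, bce.mean())
    return best

pred = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
ref  = np.array([[0, 1], [0, 1], [1, 0]])  # same speakers, swapped order
print(round(pit_loss(pred, ref), 3))
```

Because the swapped permutation matches the prediction, the loss stays small; without the minimum over permutations, the same prediction would be penalized heavily.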

TCP Performance Analysis of Packet Buffering in Mobile IP based Networks (모바일 IP 네트워크에서 패킷 버퍼링 방식의 TCP 성능 분석)

  • 허경;노재성;조성준;엄두섭;차균현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.5B
    • /
    • pp.475-488
    • /
    • 2003
  • To prevent performance degradation of TCP due to packet losses during the smooth handoff provided by the route optimization extension of the Mobile IP protocol, packets must be buffered at the base station. Buffering recovers the packets dropped during handoff by forwarding those buffered at the old base station to the mobile user. However, when the mobile user moves to a congested base station in a new foreign subnetwork, the buffered packets forwarded by the old base station are dropped, and the TCP transmission performance of a mobile user at the congested base station degrades because the forwarded packet burst increases congestion. In this paper, considering the general case in which a mobile user moves to a congested base station, we analyze the influence of packet buffering on TCP performance according to the handoff arrival distribution for the Drop-Tail and Random Early Detection (RED) buffer management schemes. Simulation results show that RED can reduce the congestion caused by the forwarded packet burst compared with Drop-Tail, but it cannot avoid the global synchronization induced by the burst forwarded from the old base station; a new buffer management scheme is therefore needed in Mobile IP based networks.
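RED, one of the two buffer management schemes compared above, drops arriving packets probabilistically as a smoothed queue length grows, which tends to desynchronize TCP flows instead of dropping a whole burst at once as Drop-Tail does. A sketch of its drop rule (the thresholds and weight are illustrative defaults):

```python
def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """RED (Random Early Detection) drop probability: zero below min_th,
    rising linearly to max_p at max_th, and 1.0 beyond max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def ewma(avg, sample, w=0.002):
    """RED tracks a smoothed (EWMA) queue length, not the instantaneous one,
    so short bursts are tolerated while sustained congestion is not."""
    return (1 - w) * avg + w * sample

print(red_drop(4), red_drop(10), red_drop(20))  # → 0.0 0.05 1.0
```

The abstract's finding is that even this early, randomized dropping cannot fully absorb the concentrated burst forwarded by the old base station after a handoff.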

Design and Performance Evaluation of Digital Twin Prototype Based on Biomass Plant (바이오매스 플랜트기반 디지털트윈 프로토타입 설계 및 성능 평가)

  • Chae-Young Lim;Chae-Eun Yeo;Seong-Yool Ahn;Myung-Ok Lee;Ho-Jin Sung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.935-940
    • /
    • 2023
  • Digital-twin technology is emerging as an innovative solution across industries, including manufacturing and production lines. In this paper, we optimize the energy used in a biomass plant based on unused resources, implement a digital-twin prototype for biomass plants, and evaluate its performance in order to improve the efficiency of plant operations. The proposed digital-twin prototype applies a standard communication platform between the framework and the gateway and is implemented to enable real-time collaboration; it defines the message sequence between the client server and the gateway, and an interface is implemented to enable communication with the host server. To verify the performance of the proposed prototype, we set up a virtual environment to collect data from the server and performed a data-collection evaluation. The results confirm that the proposed framework can contribute to energy optimization and improved operational efficiency when applied to biomass plants.

A Study On Memory Optimization for Applying Deep Learning to PC (딥러닝을 PC에 적용하기 위한 메모리 최적화에 관한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.2
    • /
    • pp.136-141
    • /
    • 2017
  • In this paper, we propose a memory optimization algorithm for applying deep learning on a PC. The proposed algorithm minimizes memory usage and computation time by reducing the amount of computation and data required by a conventional deep learning structure on a general PC. It consists of three steps: constructing a convolution layer using random filters with discriminative power, reducing the data using PCA, and building the CNN structure using an SVM. Because no learning is needed to construct the convolution layer from discriminative random filters, the overall training time is shortened. PCA reduces the memory footprint and computational throughput, and building the CNN structure with an SVM maximizes this reduction. To evaluate the performance of the proposed algorithm, we experimented with Yale University's Extended Yale B face database. The results show that the proposed algorithm achieves a recognition rate comparable to that of the existing CNN algorithm and confirm its effectiveness. Based on the proposed algorithm, we expect that deep learning algorithms with large data and computation loads can be implemented on a general PC.
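The PCA step described above projects high-dimensional features onto a few principal components, cutting both memory and downstream computation. A sketch (the array sizes are illustrative stand-ins for face images, not the paper's configuration):

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components to shrink the
    feature dimension (and with it memory and compute)."""
    Xc = X - X.mean(axis=0)
    # principal axes via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T       # shape (n_samples, k)

rng = np.random.default_rng(0)
faces = rng.standard_normal((100, 1024))   # stand-in for 32x32 face crops
reduced = pca_reduce(faces, 64)
print(faces.nbytes // reduced.nbytes)  # → 16
```

Here a 1024-dimensional feature vector shrinks to 64 dimensions, a 16x reduction in the data any later stage (such as the SVM) must hold and process.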

State-Aware Re-configuration Model for Multi-Radio Wireless Mesh Networks

  • Zakaria, Omar M.;Hashim, Aisha-Hassan Abdalla;Hassan, Wan Haslina;Khalifa, Othman Omran;Azram, Mohammad;Goudarzi, Shidrokh;Jivanadham, Lalitha Bhavani;Zareei, Mahdi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.1
    • /
    • pp.146-170
    • /
    • 2017
  • Joint channel assignment and routing is a well-known problem in multi-radio wireless mesh networks, for which optimal configurations are required to optimize overall throughput and fairness. However, other objectives must also be considered in order to provide high-quality service to network users when the network is deployed under highly dynamic traffic. In this paper, we propose a re-configuration optimization model that optimizes network throughput while reducing the disruption to mesh clients' traffic caused by the re-configuration process. In this multi-objective optimization model, four objective functions are minimized: maximum link-channel utilization, network average contention, channel re-assignment cost, and re-routing cost. The latter two objectives focus on reducing the re-configuration overhead, i.e., the amount of traffic disrupted by the channel switching and path re-routing that result from applying a new configuration. To adapt to traffic dynamics in the network, which may be caused by factors such as user mobility, we propose a centralized heuristic re-configuration algorithm, State-Aware Joint Routing and Channel Assignment (SA-JRCA), based on our re-configuration model. The proposed algorithm re-assigns channels to radios and re-configures flow routes with the aim of trading off maximum network throughput against minimum re-configuration overhead. The ns-2 simulator is used for evaluation, with metrics including channel-link utilization, channel re-assignment cost, re-routing cost, throughput, and delay. Simulation results show the good performance of SA-JRCA in terms of packet delivery ratio, aggregate throughput, and re-configuration overhead. It also shows higher stability under traffic variation than the compared algorithms, which suffer performance degradation under highly dynamic traffic.
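For illustration only, the four objectives listed above can be scalarized into a single re-configuration cost; the weights, data layout, and function name here are hypothetical (the paper treats this as a multi-objective model, not a fixed weighted sum):

```python
def reconfiguration_cost(new, old, utilization, contention,
                         weights=(0.4, 0.3, 0.15, 0.15)):
    """Hypothetical scalarization of the model's four objectives:
    max link-channel utilization, average contention, and the two
    re-configuration overheads (channels switched, routes changed).
    `new`/`old` map each radio to a channel and each flow to a route."""
    w1, w2, w3, w4 = weights
    reassign = sum(1 for r in new["channels"]
                   if new["channels"][r] != old["channels"].get(r))
    reroute = sum(1 for f in new["routes"]
                  if new["routes"][f] != old["routes"].get(f))
    return (w1 * max(utilization)
            + w2 * sum(contention) / len(contention)
            + w3 * reassign + w4 * reroute)

old = {"channels": {"r1": 1, "r2": 6},  "routes": {"f1": ("a", "b")}}
new = {"channels": {"r1": 1, "r2": 11}, "routes": {"f1": ("a", "c", "b")}}
print(reconfiguration_cost(new, old, utilization=[0.7, 0.5], contention=[2, 4]))
```

The last two terms capture the abstract's key idea: a new configuration that is only marginally better on throughput may not be worth the traffic disruption its channel switches and re-routes cause.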

Semiconductor wafer exhaust moisture displacement unit (반도체 웨이퍼 공정 배기가스 수분제어장치)

  • Chan, Danny;Kim, Jonghae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.8
    • /
    • pp.5541-5549
    • /
    • 2015
  • This paper introduces a safer and more power-efficient heater based on induction heating for the semiconductor wafer fabrication exhaust gas cleaning system. The exhaust gas cleaning system currently uses a filament heater that generates an endothermic reaction of N2 gas to remove moisture. Induction theory, through theoretical optimization and electronic implementation, is applied to design an induction heater specifically for the semiconductor wafer exhaust system. The new induction heating design resolves the energy inefficiency, unreliability, and safety issues of the current design. A robust, calibrated induction heater design is used to optimize energy consumption; the optimization is based on a calibrated zero-voltage-switching (ZVS) induction circuit tuned to the resonant frequency of the exhaust pipe. A fail-safe energy limiter embedded in the system uses a voltage regulator driven by MOSFET feedback control, which keeps system performance within the specification of the N2 heater unit. Finally, the specification and performance of the calibrated induction heater are compared with those of the conventional filament heater, providing numerical analysis and evidence of the improved design.
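The ZVS driver mentioned above must be tuned to the resonant frequency of the work-coil/capacitor tank, f0 = 1 / (2π√(LC)). A minimal sketch (the component values are hypothetical, not the paper's):

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Resonant frequency of an LC tank: f0 = 1 / (2*pi*sqrt(L*C)).
    A ZVS induction driver is operated near f0 so the MOSFETs switch
    at (close to) zero voltage, minimizing switching losses."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# hypothetical 2 uH work coil with a 680 nF tank capacitor
f0 = resonant_frequency(2e-6, 680e-9)
print(f"{f0 / 1e3:.1f} kHz")  # → 136.5 kHz
```

In practice the effective inductance shifts with the heated load (here, the exhaust pipe), which is why the paper calibrates the circuit to the pipe's resonant behavior rather than using fixed nominal values.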