• Title/Summary/Keyword: model reduction method


Research on CO2 Emission Characteristics of Arterial Roads in Incheon Metropolitan City (인천광역시 간선도로의 이산화탄소 배출 특성 연구)

  • Byoung-Jo Yoon;Seung-Jun Lee;Hyo-Sik Hwang
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.1
    • /
    • pp.184-194
    • /
    • 2023
  • Purpose: The purpose of this study is to identify the characteristics of CO2 emissions by road before establishing a policy to reduce greenhouse gas emissions. Method: Traffic volume and speed were estimated with a traffic assignment model for 27 arterial road axes in Incheon Metropolitan City; CO2 emissions were then estimated for each road axis, and the characteristics of each group were analyzed through cluster analysis. Result: Cluster analysis using total CO2 emissions, CO2 emissions from trucks, and the ratio of truck emissions to total CO2 emissions classified the road axes into four clusters. Examining the roads included in each group showed that group characteristics followed the level of total CO2 emissions and the degree of influence of truck traffic. Conclusion: Road-level CO2 management plans for greenhouse gas reduction should be established in consideration of these CO2 emission characteristics.
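A minimal sketch of the clustering step described in the abstract above, assuming k-means as the clustering algorithm (the abstract does not name one) and using invented placeholder values for the per-road-axis emission features:

```python
# Hypothetical illustration of the paper's clustering step: group 27 arterial
# road axes by total CO2 emissions, truck CO2 emissions, and the truck share
# of total emissions. All feature values below are invented placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
total_co2 = rng.uniform(1e3, 5e4, size=27)           # t CO2/yr per road axis (placeholder)
truck_co2 = total_co2 * rng.uniform(0.1, 0.6, 27)    # truck contribution (placeholder)
truck_share = truck_co2 / total_co2

X = np.column_stack([total_co2, truck_co2, truck_share])
X_std = StandardScaler().fit_transform(X)            # put features on a common scale

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)  # four clusters, as in the paper
labels = kmeans.fit_predict(X_std)
for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} road axes")
```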

Prediction of Deformation Behavior of a Shallow NATM Tunnel by Strain Softening Analysis (연화모델을 이용한 저토피 NATM 터널의 변형거동의 예측)

  • Lee, Jae-Ho;Akutagawa, Shinichi;Kim, Young-Su
    • Journal of the Korean Geotechnical Society
    • /
    • v.23 no.9
    • /
    • pp.17-28
    • /
    • 2007
  • For urban tunnels, prediction and control of surface settlement, gradient and ground displacement are usually critical. This paper studies the application of strain softening analysis to predict the deformation behavior of an urban NATM tunnel. The applied strain softening model considers the reduction of shear stiffness and strength parameters after yielding, reflecting the strain softening behavior of the given material. Measurements of surface subsidence and ground displacement were used to monitor the ground behavior resulting from tunneling and to modify the tunnel design. The numerical analysis produced a strain distribution, deformation mechanism and surface settlement profile in good agreement with the results of the case study. Strain softening modeling is therefore expected to be a good method for predicting the ground displacement associated with NATM tunneling at shallow depth in soft ground.
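A minimal sketch of a strain-softening material update of the kind the abstract describes, in which shear stiffness and strength degrade with accumulated plastic strain after yield; the linear decay law and all parameter values below are illustrative assumptions, not the paper's constitutive model:

```python
# Illustrative strain-softening law: after yielding, cohesion and shear
# stiffness decay linearly with accumulated plastic shear strain until
# residual values are reached. The decay law and all numbers are assumed.
def softened_parameters(plastic_strain,
                        c_peak=50.0, c_res=10.0,       # cohesion, kPa (assumed)
                        g_peak=40e3, g_res=15e3,       # shear modulus, kPa (assumed)
                        eps_res=0.05):                 # strain at full softening (assumed)
    """Return (cohesion, shear_modulus) for a given plastic shear strain."""
    w = min(plastic_strain / eps_res, 1.0)             # softening ratio in [0, 1]
    cohesion = c_peak + (c_res - c_peak) * w
    shear_modulus = g_peak + (g_res - g_peak) * w
    return cohesion, shear_modulus

for eps in (0.0, 0.01, 0.03, 0.08):
    c, g = softened_parameters(eps)
    print(f"plastic strain {eps:.2f}: c = {c:.1f} kPa, G = {g:.0f} kPa")
```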

Clustering Analysis of Science and Engineering College Students' understanding on Probability and Statistics (Robust PCA를 활용한 이공계 대학생의 확률 및 통계 개념 이해도 분석)

  • Yoo, Yongseok
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.3
    • /
    • pp.252-258
    • /
    • 2022
  • In this study, we propose a method for analyzing students' understanding of probability and statistics in small university lectures. A computer-based test on probability and statistics was administered to 95 science and engineering college students. After dividing the students' responses into 7 clusters using Robust PCA and a Gaussian mixture model, achievement on each topic was analyzed for each cluster. High-ranking clusters generally showed high achievement on most topics except statistical estimation, while low-achieving clusters showed strengths and weaknesses on different topics. Compared to the widely used pipeline of PCA-based dimension reduction followed by clustering, the proposed method revealed each group's characteristics more clearly. The characteristics of each cluster can be used to develop individualized learning strategies.
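A minimal sketch of the dimension-reduction-plus-clustering pipeline from the abstract above, using ordinary PCA as a stand-in for the paper's Robust PCA step and a 7-component Gaussian mixture as described; the 95-student response matrix is a random placeholder:

```python
# Sketch of the pipeline: reduce item-response data, then cluster with a
# 7-component Gaussian mixture. Plain PCA stands in for Robust PCA here;
# the 95 x 20 binary response matrix is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(95, 20)).astype(float)  # 95 students x 20 items

Z = PCA(n_components=3).fit_transform(responses)    # low-dimensional scores

gmm = GaussianMixture(n_components=7, covariance_type="full", random_state=1)
cluster = gmm.fit_predict(Z)

for c in range(7):
    members = responses[cluster == c]
    if len(members):
        print(f"cluster {c}: n={len(members)}, mean score={members.mean():.2f}")
```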

FGRS(Fish Growth Regression System), Which predicts the growth of fish (물고기의 성장도를 예측하는 FGRS(Fish Growth Regression System))

  • Sung-Kwon Won;Yong-Bo Sim;Su-Rak Son;Yi-Na Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.347-353
    • /
    • 2023
  • Measuring fish growth in fish farms still relies on laborious manual methods that require substantial labor and stress the fish, which in turn raises mortality. To solve this problem, we propose the Fish Growth Regression System (FGRS), a system that automates fish growth measurement. FGRS consists of two modules: the first detects fish using YOLO v8, and the second predicts fish growth from fish image data with a CNN-based neural network model. In simulation, the average prediction error was 134.2 days before training and fell to 39.8 days after training. The proposed system can be used to predict the growing date, and its growth predictions can contribute to automation in fish farms, leading to significant reductions in labor and cost.
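A minimal sketch of a two-stage pipeline of the kind the abstract describes: a YOLOv8 detector locates fish and a small CNN regresses growth in days. The detector call assumes the ultralytics package; the CNN architecture and all inputs are illustrative assumptions, not the paper's implementation:

```python
# Two-stage FGRS-style sketch: a YOLOv8 detector finds fish in an image,
# then a small CNN regresses a growth estimate (days) from each crop.
# The CNN layout, input size, and the synthetic image are all assumptions.
import numpy as np
import torch
import torch.nn as nn
from ultralytics import YOLO

class GrowthRegressor(nn.Module):
    """Minimal CNN mapping a 64x64 RGB fish crop to a growth estimate in days."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))

detector = YOLO("yolov8n.pt")                     # pretrained weights (placeholder choice)
regressor = GrowthRegressor().eval()

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a farm-tank photo
results = detector(frame)
for box in results[0].boxes.xyxy:                 # one bounding box per detected fish
    # In practice the detected region would be cropped, resized to 64x64,
    # and normalized; a random tensor stands in for that crop here.
    crop = torch.rand(1, 3, 64, 64)
    days = regressor(crop).item()
    print(f"estimated growth: {days:.1f} days")
```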

A Study on the Risk Assessment and Improvement Methods Based on Hydrogen Explosion Accidents of a Power Plant and Water Electrolysis System (발전소 및 수전해 시스템의 수소 폭발 사고 사례 기반 위험성 평가 및 개선 방안 연구)

  • Min Jae Jeon;Dae Jin Jang;Min Chul Lee
    • Journal of Hydrogen and New Energy
    • /
    • v.35 no.1
    • /
    • pp.66-74
    • /
    • 2024
  • This study addresses the escalating issue of hydrogen gas accidents worldwide, which have increased significantly in number. To comprehensively evaluate the risks associated with hydrogen, a two-pronged approach was employed. First, a qualitative risk assessment was conducted using the bow-tie method. Second, a quantitative consequence analysis was carried out using the areal locations of hazardous atmospheres (ALOHA) model. The study applied this approach to two incidents: the hydrogen explosion that occurred at the Muskingum River power plant in Ohio, USA, in 2007, and the hydrogen storage tank explosion that occurred at the K Technopark water electrolysis system in Korea in 2019. The risk assessments revealed critical issues such as deterioration of gas pipes, human errors in incident response, and the omission of an important gas-cleaning facility. By analyzing the causes of the accidents and assessing the risks quantitatively, effective accident response plans are proposed, and their effectiveness is evaluated by comparing the effective distances obtained from ALOHA simulations. Notably, the implementation of these measures reduced the estimated risk of potential explosions by 54.5% compared to the existing risk levels.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration comprising selection, crossover and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation step spends much time Fourier transforming the parameters encoded on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer random trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed, so that the initial population contains fewer random holograms and is supplemented with approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated with the ANN method [1] to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3 and a momentum of 0.5, the trained artificial neural network yields approximately desired holograms in fairly good agreement with theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the hybrid algorithm operates exactly like the GA except for the modified initial step; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size and the crossover block size, remain unchanged apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75 and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No. mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
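A minimal sketch of the hybrid idea described above, assuming a simple binary-phase encoding and an FFT-based fitness; `ann_seed()` is a hypothetical stand-in for the trained network's output, and the population size and crossover/mutation probabilities follow the figures quoted in the abstract (the iteration count is shortened for illustration):

```python
# Sketch of the hybrid GA: seed part of the initial population with
# ANN-generated holograms instead of purely random ones, then run a
# standard GA. ann_seed() is a hypothetical stand-in for the trained
# network; the fitness is a simple FFT-based match to a toy target.
import numpy as np

rng = np.random.default_rng(0)
N = 32                                    # hologram is an N x N binary phase mask
target = np.zeros((N, N)); target[12:20, 12:20] = 1.0   # toy reconstruction target

def fitness(holo):
    """Negative MSE between the hologram's far-field intensity and the target."""
    field = np.exp(1j * np.pi * holo)             # binary phase: 0 or pi
    intensity = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    intensity /= intensity.max()
    return -np.mean((intensity - target) ** 2)    # higher is better

def ann_seed():
    """Placeholder for an ANN-produced 'approximately desired' hologram."""
    return (rng.random((N, N)) < 0.5).astype(float)

pop_size, generations, p_cross, p_mut = 30, 200, 0.75, 0.001
# Hybrid initialization: half ANN seeds, half random holograms.
pop = [ann_seed() for _ in range(pop_size // 2)] + \
      [(rng.random((N, N)) < 0.5).astype(float) for _ in range(pop_size - pop_size // 2)]

for _ in range(generations):
    scores = [fitness(h) for h in pop]
    order = np.argsort(scores)[::-1]
    parents = [pop[i] for i in order[: pop_size // 2]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = parents[a].copy()
        if rng.random() < p_cross:                       # one-point row crossover
            cut = rng.integers(1, N)
            child[cut:] = parents[b][cut:]
        flips = rng.random((N, N)) < p_mut               # bit-flip mutation
        child[flips] = 1.0 - child[flips]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```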

Study on the Heat Transfer Phenomenon around Underground Concrete Digesters for Biogas Production Systems (생물개스 발생시스템을 위한 지하매설콘크리트 다이제스터의 열전달에 관한 연구)

  • 김윤기;고재균
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.1
    • /
    • pp.53-66
    • /
    • 1980
  • This research work presents analytical and experimental studies on the heat transfer phenomenon around underground concrete digesters used in biogas production systems. A mathematical and computational method was developed to estimate heat losses from an underground cylindrical concrete digester. To test its feasibility and to evaluate the thermal parameters of the materials involved, the method was applied to six physical model digesters. The cylindrical concrete digester was taken as the physical model to which the mathematical model of heat balance applies. The mathematical model was discretized by the finite element method and used to analyze temperature distributions under several boundary conditions and design parameters. The design parameters of the experimental digesters were: three sizes (40 cm by 80 cm, 80 cm by 160 cm and 100 cm by 200 cm in diameter and height); two levels of insulation material (plain concrete and vermiculite mixed into concrete); and two types of installation (fully underground and half-exposed). For the purposes of this study, the liquid within the digester was substituted by water, its temperature controlled at five levels (35°C, 30°C, 25°C, 20°C and 15°C), while the ambient air temperature and ground temperature were recorded under natural winter climate conditions. The following results were drawn from the study. 1. The analytical method, which produced estimated temperature distributions around a cylindrical digester, was judged generally acceptable from the comparison of the estimated values with the measured ones. However, the difference between estimated and measured temperatures tended to increase considerably when the ambient temperature was relatively low; this was mainly related to variations in the input parameters applied in the numerical analysis, including the thermal conductivity of soil. Consequently, improving these input data is expected to yield better-refined estimates from the simulated operation of the numerical analysis. 2. The difference between estimated and measured heat losses showed a trend similar to that of the temperature distribution discussed above. 3. A map of isothermal lines drawn from the estimated temperature distribution proved very useful for observing the direction and rate of heat transfer within the boundary. From this analysis, it was interpreted that most of the heat loss passes through the triangular section bounded within 45 degrees toward the wall at the bottom edge of the digester; therefore, any effective insulation should be concentrated within this region. 4. It was verified by experiment that heat loss per unit volume of liquid decreases as the size of the digester increases. For instance, at a liquid temperature of 35°C, the heat loss per unit volume from the 0.1 m$^3$ digester was 1,050 kcal/hr m$^3$, while that from the 1.57 m$^3$ digester was 150 kcal/hr m$^3$. 5. In terms of insulation, the vermiculite concrete was consistently superior to the plain concrete. At liquid temperatures ranging from 15°C to 35°C, the reduction in heat loss ranged from 5% to 25% for the half-exposed digester and from 10% to 28% for the fully underground digester.
6. Comparing heat losses between the half-exposed and underground digesters, the loss from the former was from 1.6 to 2.6 times that from the latter. This indicates that the underground digester has an advantage in heat conservation during winter.
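A back-of-envelope sketch of the kind of steady-state heat-loss estimate the paper's finite element model formalizes, treating the digester wall and surrounding soil as conduction resistances in series around a cylinder; all dimensions, conductivities and temperatures below are illustrative assumptions, not the paper's data:

```python
# Rough steady-state estimate of heat loss from a buried cylindrical
# digester: radial conduction through the concrete wall and an assumed
# annulus of soil, treated as thermal resistances in series. All numbers
# (sizes, conductivities, temperatures) are illustrative assumptions.
import math

T_liquid, T_ground = 35.0, 5.0      # degC (assumed winter condition)
height = 2.0                        # m (cf. the 100 cm x 200 cm digester)
r_inner, r_wall, r_soil = 0.50, 0.60, 1.50   # m: liquid, outer wall, far-soil radius
k_concrete, k_soil = 1.5, 1.2       # W/m.K (typical handbook values, assumed)

def radial_resistance(r1, r2, k, L):
    """Conduction resistance of a cylindrical shell, K/W."""
    return math.log(r2 / r1) / (2 * math.pi * k * L)

R_total = (radial_resistance(r_inner, r_wall, k_concrete, height)
           + radial_resistance(r_wall, r_soil, k_soil, height))
Q = (T_liquid - T_ground) / R_total           # W, sidewall loss only
volume = math.pi * r_inner**2 * height        # m^3 of liquid
print(f"sidewall heat loss ~ {Q:.0f} W  (~{Q * 0.8598:.0f} kcal/hr)")
print(f"per unit volume    ~ {Q * 0.8598 / volume:.0f} kcal/hr per m^3")
```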

Analysis of Greenhouse Gas Reduction according to Different Scenarios of Zero Food Waste Residential Buildings (음식물류폐기물 제로화 주거단지 구축 시나리오별 비용 및 환경효과 분석)

  • Oh, Jeong-Ik;Yoon, Eun-Joo;Park, Ire;Kim, Yeong-Min
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.38 no.7
    • /
    • pp.353-363
    • /
    • 2016
  • In this study, two scenarios were compared: a conventional treatment scenario in which collected and transported food waste is recycled at large treatment facilities, and a proposed onsite zero-discharge scenario in which food waste is treated within the housing complex. The scenarios were compared and analyzed in terms of capital expenditure, oil consumption, $CO_2$ emissions, operating expenditure and management expenses. The capital expenditure, oil consumption and $CO_2$ emissions of the small-scale dispersed treatment method were the lowest relative to the conventional treatment method. As a result, the suggested treatment method reduced operating expenditure by 91% and management expenses by 40%. Treatment methods with low capital expenditure tended to have low oil consumption and $CO_2$ emissions: the small-scale dispersed treatment method had the lowest capital expenditure, oil consumption and $CO_2$ emissions, while the method linked with sewage treatment had the highest expenditure and $CO_2$ emissions. Consequently, the optimal model for an onsite zero-discharge system in a housing complex is the small-scale dispersed treatment method.

A Study on Development of Management Targets and Evaluation of Target Achievement for Non-point Source Pollution Management in Saemangeum Watershed (새만금 비점오염원 관리지역에서의 목표설정 및 달성도 평가방법론 연구)

  • Kim, Eun-Jung;Park, Bae-Kyung;Kim, Yong-Seok;Rhew, Doug-Hee;Jung, Kwang-Wook
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.37 no.8
    • /
    • pp.480-491
    • /
    • 2015
  • In this study, methods using the Load Duration Curve (LDC) and a watershed model were suggested to develop management targets and evaluate target achievement for non-point source pollution management, considering watershed and runoff characteristics and the feasibility of achieving the targets. These methods were applied to the Saemangeum watershed, which was recently designated as a non-point source pollution management area. A flow duration interval of 5 to 40% was selected as the flow range for management in consideration of runoff characteristics, and total phosphorus (TP) was selected as the management indicator. Management targets were developed from scenarios for non-point source pollutant reduction in the priority management areas, using the LDC method and an HSPF model calibrated with 4 years of data (2009~2012). In the scenario combining LID, road sweeping, a 50% reduction in CSOs and untreated sewage at Jeonju A20, and a 30% reduction in fertilizer plus a 50% reduction in livestock non-point sources at Mankyung C03, Dongjin A14 and Kobu A14, the management targets for Mangyung bridge, Dongjin bridge, Jeonju stream and Gunpo bridge were developed as TP 0.38, 0.18, 0.64 and 0.16 mg/L, respectively. When TP loads at the target stations were assumed to have been reduced by a certain percentage (10%), the management targets for those stations became TP 0.35, 0.17, 0.60 and 0.15 mg/L, respectively. The results of this study are expected to serve as reference material for the management master plan, the implementation plan and the implementation assessment of the non-point source management area.
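A minimal sketch of building a Load Duration Curve and an allowable TP load over the managed flow interval, as described in the abstract above; the daily flow series is synthetic and the target concentration is taken from the Mangyung bridge figure quoted there:

```python
# Sketch of the LDC idea: rank daily flows into a flow-duration curve,
# multiply by a target TP concentration to get the allowable load at each
# exceedance percentile, and keep the managed 5-40% flow-duration interval.
# The flow series is synthetic; 0.38 mg/L is the Mangyung bridge target.
import numpy as np

rng = np.random.default_rng(2)
flows = rng.lognormal(mean=2.0, sigma=0.8, size=365)   # daily flow, m^3/s (synthetic)

target_tp = 0.38                                       # mg/L
sorted_flows = np.sort(flows)[::-1]                    # descending: high flow first
exceedance = np.arange(1, len(flows) + 1) / (len(flows) + 1) * 100  # % of days exceeded

# Allowable load (kg/day) = flow (m^3/s) * conc (mg/L = g/m^3) * 86400 s / 1000 g/kg
allowable_load = sorted_flows * target_tp * 86400 / 1000

managed = (exceedance >= 5) & (exceedance <= 40)       # managed flow-duration interval
print(f"managed interval: {managed.sum()} of 365 days")
print(f"allowable TP load in interval: "
      f"{allowable_load[managed].min():.1f}-{allowable_load[managed].max():.1f} kg/day")
```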

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and captures the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be derived appropriately. The approach can thus provide stable default risk assessment services to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although the prediction of corporate default risk using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while preserving the advantage of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs between the stacking ensemble model and each individual model were constructed.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two model forecasts constituting each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
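A minimal sketch of the stacking scheme described above: seven-fold out-of-fold forecasts from sub-models feed a meta-learner, and a Wilcoxon rank-sum test compares per-sample errors against a single-model baseline. The synthetic data, the choice of sub-models, and the Ridge meta-learner are stand-ins, not the paper's exact setup:

```python
# Sketch of 7-fold stacking for a continuous default-risk target: sub-models
# produce out-of-fold forecasts that train a meta-learner, then per-sample
# absolute errors of the stack and of a single model are compared with the
# Wilcoxon rank-sum test. Data, sub-models, and meta-learner are stand-ins.
import numpy as np
from scipy.stats import ranksums
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=2000, n_features=30, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

sub_models = [RandomForestRegressor(n_estimators=100, random_state=0),
              MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)]

# Out-of-fold forecasts: each sub-model predicts the fold it was not trained on.
oof = np.zeros((len(X_tr), len(sub_models)))
for i, model in enumerate(sub_models):
    for tr_idx, val_idx in KFold(n_splits=7, shuffle=True, random_state=0).split(X_tr):
        model.fit(X_tr[tr_idx], y_tr[tr_idx])
        oof[val_idx, i] = model.predict(X_tr[val_idx])

meta = Ridge().fit(oof, y_tr)                       # meta-learner on stacked forecasts

# Refit sub-models on all training data for test-time predictions.
test_stack = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in sub_models])
stack_err = np.abs(meta.predict(test_stack) - y_te)
rf_err = np.abs(sub_models[0].predict(X_te) - y_te)  # single-model baseline

stat, p = ranksums(stack_err, rf_err)               # Wilcoxon rank-sum on the errors
print(f"stack MAE={stack_err.mean():.2f}, RF MAE={rf_err.mean():.2f}, p={p:.3f}")
```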