• Title/Summary/Keyword: Machine Learning #2


Mapping Mammalian Species Richness Using a Machine Learning Algorithm (머신러닝 알고리즘을 이용한 포유류 종 풍부도 매핑 구축 연구)

  • Zhiying Jin;Dongkun Lee;Eunsub Kim;Jiyoung Choi;Yoonho Jeon
    • Journal of Environmental Impact Assessment / v.33 no.2 / pp.53-63 / 2024
  • Biodiversity holds significant importance within the framework of environmental impact assessment, being used in site selection for development, understanding the surrounding environment, and assessing the impact of disturbances on species. The field has seen substantial research into new technologies and models to evaluate and predict biodiversity more accurately. While current assessments rely on fieldwork and literature surveys to gauge species richness indices, their limited spatial and temporal coverage underscores the need for high-resolution biodiversity assessment through species richness mapping. In this study, leveraging data from the 4th National Ecosystem Survey and environmental variables, we developed a species distribution model using Random Forest. The model mapped the distributions of 24 mammalian species, and the species richness index was used to generate a 100-meter-resolution map of species richness. The species distribution model showed notably high predictive accuracy, with an average AUC of 0.82. In addition, comparison with National Ecosystem Survey data reveals that the species richness distribution in the high-resolution mapping results conforms to a normal distribution. Hence, it stands as highly reliable foundational data for environmental impact assessment. These results could serve as new reference material for future urban development projects, offering insights for biodiversity assessment and habitat preservation.
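The workflow the abstract describes (one per-species presence model, then summing predicted presences into a richness value per grid cell) can be sketched as follows. This is an illustrative assumption, not the study's code: the synthetic covariates, presence labels, and model settings here stand in for the National Ecosystem Survey data and the authors' trained Random Forests.

```python
# Hypothetical sketch: per-species Random Forest presence models stacked
# into a species-richness value per grid cell (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cells, n_env, n_species = 500, 6, 24   # grid cells, env. variables, mammal species

X = rng.normal(size=(n_cells, n_env))                 # environmental covariates per cell
Y = (rng.random((n_cells, n_species)) < 0.3).astype(int)  # synthetic presence/absence

models = []
for s in range(n_species):
    rf = RandomForestClassifier(n_estimators=50, random_state=s)
    rf.fit(X, Y[:, s])                                # one presence model per species
    models.append(rf)

# species richness = number of species predicted present in each cell
presence = np.column_stack([m.predict(X) for m in models])
richness = presence.sum(axis=1)
print(richness.shape)   # one richness value per grid cell
```

In the study's setting each row would correspond to a 100 m cell, and the resulting `richness` vector would be rasterized into the richness map.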

Study on the Seismic Random Noise Attenuation for the Seismic Attribute Analysis (탄성파 속성 분석을 위한 탄성파 자료 무작위 잡음 제거 연구)

  • Jongpil Won;Jungkyun Shin;Jiho Ha;Hyunggu Jun
    • Economic and Environmental Geology / v.57 no.1 / pp.51-71 / 2024
  • Seismic exploration is one of the most widely used geophysical exploration methods, with applications such as resource development, geotechnical investigation, and subsurface monitoring. It is essential for interpreting the geological characteristics of the subsurface because it provides accurate images of stratal structures. Typically, geological features are interpreted by visually analyzing seismic sections. Recently, however, quantitative analysis of seismic data has been extensively researched to accurately extract and interpret target geological features. Seismic attribute analysis can provide quantitative information for geological interpretation based on seismic data, and is therefore widely used in fields including the analysis of oil and gas reservoirs, the investigation of faults and fractures, and the assessment of shallow gas distributions. However, seismic attribute analysis is sensitive to noise within the seismic data, so additional noise attenuation is required to enhance its accuracy. In this study, four seismic noise attenuation methods are applied and compared to mitigate the random noise of poststack seismic data and enhance the attribute analysis results. FX deconvolution, DSMF, Noise2Noise, and DnCNN are applied to the Youngil Bay high-resolution seismic data to remove random noise, and energy, sweetness, and similarity attributes are calculated from the denoised data. The characteristics of each noise attenuation method, its denoising results, and the corresponding attribute analysis results are then analyzed qualitatively and quantitatively. Based on the advantages and disadvantages of each method and the characteristics of each attribute, we propose a suitable noise attenuation method for improving seismic attribute analysis.
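To see why attribute analysis is noise-sensitive, consider the energy attribute, commonly computed as the sum of squared amplitudes in a sliding window. The sketch below is illustrative only: a synthetic trace and a crude moving-average filter stand in for the paper's field data and its four attenuation methods.

```python
# Illustrative sketch (not the paper's code): random noise inflates the
# energy attribute, and even simple smoothing reduces that bias.
import numpy as np

def energy_attribute(trace, win=11):
    """Sum of squared amplitudes in a centered sliding window."""
    return np.convolve(trace ** 2, np.ones(win), mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # synthetic reflection signal
noisy = clean + 0.5 * rng.normal(size=t.size)          # add random noise

denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")  # crude denoising stand-in
e_noisy = energy_attribute(noisy)
e_denoised = energy_attribute(denoised)
print(e_noisy.mean() > e_denoised.mean())  # noise inflates the energy attribute
```

The paper's methods (FX deconvolution, DSMF, Noise2Noise, DnCNN) replace the moving average here, but the attribute computation that follows them has the same structure.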

Development and Assessment of LSTM Model for Correcting Underestimation of Water Temperature in Korean Marine Heatwave Prediction System (한반도 고수온 예측 시스템의 수온 과소모의 보정을 위한 LSTM 모델 구축 및 예측성 평가)

  • Na Kyoung Im;Hyunkeun Jin;Gyundo Pak;Young-Gyu Park;Kyeong Ok Kim;Yonghan Choi;Young Ho Kim
    • The Sea: Journal of the Korean Society of Oceanography / v.29 no.2 / pp.101-115 / 2024
  • Ocean heatwaves are emerging as a major issue under global warming, posing a direct threat to marine ecosystems and humanity through decreased food resources and the oceans' reduced carbon absorption capacity. Consequently, predicting ocean heatwaves around the Korean Peninsula is becoming increasingly important for marine environmental monitoring and management. In this study, an LSTM model was developed to correct the underestimated ocean heatwave predictions caused by the coarse vertical grid of the Korean Peninsula Ocean Prediction System. Based on the heatwave predictions for the Korean Peninsula conducted in 2023 and those generated by the LSTM model, prediction performance was evaluated for the East Sea, Yellow Sea, and South Sea. The LSTM model significantly improved sea surface temperature predictions during periods of temperature increase in all three regions, but its effectiveness during periods of temperature decrease, or before a temperature rise begins, was limited. This demonstrates the potential of the LSTM model to correct the underestimation caused by the coarse vertical grid during periods of enhanced stratification. The utility of data-driven artificial intelligence models is expected to expand in the future, improving the prediction performance of dynamical models or even replacing them.
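The correction idea (feed a sequence of model-predicted SSTs through an LSTM and read off a corrected value) can be sketched with a single LSTM cell in plain NumPy. Everything below is an assumption for illustration: the weights are random rather than trained, and the gate layout is the standard textbook one, not necessarily the authors' architecture.

```python
# Minimal NumPy forward pass of one LSTM cell over a sequence of
# model-predicted SSTs; in the paper's setting the weights would be
# trained so the output corrects the warm-season underestimation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
hidden, features = 8, 1
W = rng.normal(scale=0.1, size=(4 * hidden, features))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=hidden)

sst_sequence = np.array([18.2, 18.9, 19.7, 20.6, 21.4])  # predicted SSTs (degC)
h, c = np.zeros(hidden), np.zeros(hidden)
for x in sst_sequence:
    h, c = lstm_step(np.array([x]), h, c, W, U, b)
correction = float(W_out @ h)   # untrained here, so the value is arbitrary
print(correction)
```

A production version would use a trained framework implementation; the point of the sketch is only the sequence-to-correction data flow.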

Verification Test of High-Stability SMEs Using Technology Appraisal Items (기술력 평가항목을 이용한 고안정성 중소기업 판별력 검증)

  • Jun-won Lee
    • Information Systems Review / v.20 no.4 / pp.79-96 / 2018
  • This study was motivated by the idea of internalizing the technology appraisal model into the credit rating model, reflecting the appraisal items related to enterprises' financial stability, to increase the discriminative power of credit ratings not only for SMEs but for all companies. It therefore verifies whether the technology appraisal model can identify high-stability SMEs in advance. We classified companies by industry (manufacturing vs. non-manufacturing) and by company age (initial vs. non-initial), and defined a high-stability company as one whose average debt ratio over three years was less than half of its group's. The C5.0 algorithm was applied to verify the discriminant power of the model. The analysis shows that importance differs by industry and company age at the sub-item level, but at the mid-item level R&D capability was a key variable for discriminating high-stability SMEs. In the early stage of establishment, funding capacity (diversification of funding methods, and a capital structure and cost of capital that take profitability into account) is an important variable in financial stability. As company age increases, however, technology development infrastructure, which enables continuous performance, becomes the important variable affecting financial stability. The classification accuracy of the model across company ages and industries is 71~91%, confirming that high-stability SMEs can be identified using technology appraisal items.
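The classification step can be illustrated with a decision tree. C5.0 itself is a proprietary successor of C4.5 and is not available in scikit-learn, so the sketch below uses a CART tree with an entropy criterion as a stand-in; the three appraisal-item features and the stability labels are synthetic, not the study's data.

```python
# Hedged stand-in for the paper's C5.0 model: an entropy-criterion CART
# tree discriminating "high-stability" firms from synthetic appraisal items.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 400
# hypothetical appraisal items: R&D capability, funding capacity, infra score
X = rng.normal(size=(n, 3))
# synthetic rule: strong R&D and funding capacity -> high stability (label 1)
y = ((0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.2 * rng.normal(size=n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))   # classification accuracy on held-out firms
```

With real appraisal data, the tree's split order would surface which items (here, which feature indices) dominate the discrimination, analogous to the paper's finding that R&D capability is the key mid-level variable.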

A Development of Flood Mapping Accelerator Based on HEC-softwares (HEC 소프트웨어 기반 홍수범람지도 엑셀러레이터 개발)

  • Kim, JongChun;Hwang, Seokhwan;Jeong, Jongho
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.2 / pp.173-182 / 2024
  • Recently, there has been a trend toward flood prediction with data-driven models employing artificial intelligence technologies such as machine learning. Data-driven models offer the advantage of reusing pre-training results, significantly reducing the required simulation time. However, a considerable amount of flood data is still necessary for pre-training, while the observed data available for application are often insufficient. As an alternative, validated simulation results from physically based models are being employed as pre-training data alongside observed data. In this context, we developed a flood mapping accelerator to generate flood maps for pre-training. The accelerator automates the entire flood-mapping process: estimating flood discharge using HEC-1, calculating water surface levels using HEC-RAS, and simulating channel overflow and generating flood maps using RAS Mapper. With the accelerator, users can easily prepare a pre-training database for data-driven models from hundreds to tens of thousands of rainfall scenarios. It includes convenient menus within a graphical user interface (GUI), and its practical applicability has been validated across 26 test beds.
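The accelerator's batch structure (one HEC-1 → HEC-RAS → RAS Mapper chain per rainfall scenario) can be sketched as a loop. The three stage functions below are placeholders invented for illustration; the real accelerator drives the HEC programs through their project files and runners, and the linear formulas here are not hydrologically meaningful.

```python
# Conceptual sketch of the accelerator's batch loop; the stage functions
# are hypothetical stand-ins for the HEC-1 / HEC-RAS / RAS Mapper calls.
def estimate_flood_discharge(rainfall_mm):
    # placeholder for the HEC-1 rainfall-runoff simulation
    return 0.9 * rainfall_mm

def compute_water_surface(discharge):
    # placeholder for the HEC-RAS water-surface profile computation
    return 1.5 + 0.01 * discharge

def generate_flood_map(stage):
    # placeholder for RAS Mapper overflow/inundation mapping
    return {"max_stage_m": round(stage, 2)}

def run_accelerator(rainfall_scenarios):
    """Automate the three-stage chain once per rainfall scenario."""
    maps = []
    for rain in rainfall_scenarios:
        discharge = estimate_flood_discharge(rain)
        stage = compute_water_surface(discharge)
        maps.append(generate_flood_map(stage))
    return maps

maps = run_accelerator([50, 100, 200])   # rainfall scenarios in mm
print(len(maps))                         # one flood map per scenario
```

Scaling the scenario list to thousands of entries is what turns the chain into a pre-training database generator, which is the accelerator's stated purpose.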

Neural Network Analysis of Determinants Affecting Purchase Decisions in Fashion Eyewear (신경망분석기법을 이용한 패션 아이웨어 구매결정요소에 관한 연구)

  • Kim Ji Min
    • The Journal of the Convergence on Culture Technology / v.10 no.5 / pp.163-171 / 2024
  • This study applies neural network analysis to the factors influencing fashion eyewear purchase decisions among women in their 30s and 40s, comparing the findings with traditional parametric analysis. In fashion, machine learning techniques are used for personalized recommendation systems, but research on such applications in Korea remains insufficient. By reanalyzing a 2017 study that used traditional quantitative methods, this study aims to confirm the utility of neural network methods. Notably, the classification accuracy for preferred sunglasses design is highest, at 86.2%, when the network is trained with the L-BFGS-B optimizer and the hyperbolic tangent activation function. The most critical factors influencing purchase decisions were consumers' occupations and their pursuit of new styles; Korean sunglasses consumers are interpreted as preferring "safe changes." These findings hold for selecting both the frames and the lenses of sunglasses. Traditional quantitative analysis suggests that the preferred type of sunglasses varies with the group to which a consumer belongs, whereas neural network analysis predicts the preferred sunglasses for each individual, thereby facilitating the development of personalized sunglasses recommendation systems.
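The abstract's setup (an L-BFGS-trained network with a hyperbolic-tangent activation) maps directly onto scikit-learn's `MLPClassifier`. The sketch below uses synthetic survey-style features standing in for the study's purchase-decision variables, so the hidden-layer size and data are assumptions.

```python
# Hedged sketch of the abstract's configuration: MLP trained with L-BFGS
# and tanh activation, on synthetic stand-in survey features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n = 300
X = rng.normal(size=(n, 5))                      # e.g. occupation, style-seeking scores
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic preferred-design class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))           # classification accuracy
```

Because the fitted model predicts a class for each individual respondent, this is also the mechanism behind the abstract's point that neural networks enable per-person recommendation rather than group-level description.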

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data are multidimensional time series, which makes it difficult to account for the characteristics of multidimensional data and of time series data at the same time. With multidimensional data, correlations between variables must be considered, and existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. Time series data, in turn, are preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that further increase the data's dimensionality and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and there are now active studies applying machine learning and artificial neural networks. Statistical methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula under parametric assumptions and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, so they require training data free of such contamination. An autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy a probability distribution or linearity assumption.
In addition, it can be trained unsupervised, without labeled data. However, autoencoders still have limitations in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly due to the characteristics of time series. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal networks are commonly used to learn different types of inputs, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Restoration performance for 41 variables was confirmed in the proposed and comparison models. Restoration performance differs by variable; restoration works well for the memory, disk, and network modals in all three autoencoders, as their loss values are small. The process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators, performance ranked CMAE, MAE, then UAE.
In particular, recall was 0.9828 for CMAE, confirming that it detects almost all abnormalities. The model's accuracy also improved, to 87.12%, and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond performance: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference. The proposed model's inference speed and ease of model management make it easy to apply to practical tasks.
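The paper's key trick, using time as the condition instead of widening the window, can be sketched as a cyclical time encoding appended to each monitoring sample. The sin/cos encoding below is a common choice assumed for illustration; the paper does not specify its exact encoding.

```python
# Sketch of time-as-condition input: encode hour-of-day cyclically and
# concatenate it with the monitoring metrics, instead of enlarging the
# input window. Encoding details are our assumption, not the paper's.
import numpy as np

def time_condition(hour_of_day):
    """Cyclical encoding so hour 23 sits next to hour 0."""
    angle = 2 * np.pi * hour_of_day / 24.0
    return np.array([np.sin(angle), np.cos(angle)])

def conditioned_sample(metrics, hour_of_day):
    """Concatenate monitoring metrics with the time condition."""
    return np.concatenate([metrics, time_condition(hour_of_day)])

metrics = np.array([0.42, 0.91, 0.13])          # e.g. CPU, memory, disk load
x = conditioned_sample(metrics, hour_of_day=23)
print(x.shape)   # 3 metrics + 2-dimensional time condition
```

The conditioned vector `x` would be the autoencoder's input, letting the network learn daily periodicity while the input dimension grows by only two, rather than by a full window's worth of lags.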

The Dynamics of CO2 Budget in Gwangneung Deciduous Old-growth Forest: Lessons from the 15 years of Monitoring (광릉 낙엽활엽수 노령림의 CO2 수지 역학: 15년 관측으로부터의 교훈)

  • Yang, Hyunyoung;Kang, Minseok;Kim, Joon;Ryu, Daun;Kim, Su-Jin;Chun, Jung-Hwa;Lim, Jong-Hwan;Park, Chan Woo;Yun, Soon Jin
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.4 / pp.198-221 / 2021
  • After large-scale reforestation in the 1960s and 1970s, forests in Korea have gradually been aging. The net ecosystem CO2 exchange of old-growth forests is theoretically near zero; however, they can be a CO2 sink or source depending on disturbance or management. In this study, we report the CO2 budget dynamics of the Gwangneung deciduous old-growth forest (GDK) in Korea and examine two questions: (1) is the preserved GDK indeed CO2 neutral, as theory suggests? and (2) can the dynamics of its CO2 budget be explained by the common mechanisms reported in the literature? To answer these, we analyzed 15 years of CO2 flux data measured by the eddy covariance technique, along with other biometeorological data, at the KoFlux GDK site from 2006 to 2020. The results showed that (1) GDK switched back and forth between CO2 sink and source but was on average a weak CO2 source (turning into a moderate source over the most recent five years), and (2) the interannual variability of solar radiation, growing season length, and leaf area index correlated positively with that of gross primary production (GPP) (R2=0.32~0.45), whereas the interannual variability of air and surface temperature was not significantly correlated with that of ecosystem respiration (RE). Furthermore, a machine learning-based model trained on the first 10 years of monitoring failed to reproduce the observed interannual variations of GPP and RE for the recent five years. Biomass data analysis suggests that carbon emissions from coarse woody debris may have contributed in part to the conversion to a moderate CO2 source. To properly understand and interpret the long-term CO2 budget dynamics of GDK, a new framework of analysis and modeling based on complex systems science is needed. It is also important to maintain the flux monitoring and data quality, along with monitoring of coarse woody debris and disturbances.
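The reported R2 values for interannual variability are squared Pearson correlations between yearly driver values and yearly GPP. The sketch below shows the computation on 15 synthetic yearly values; the numbers (and the 0.9 slope, 130 MJ m-2 mean) are invented for illustration and do not come from the GDK record.

```python
# Illustration of the interannual-variability check: R^2 between a
# driver (e.g. solar radiation) and GPP across 15 synthetic years.
import numpy as np

rng = np.random.default_rng(3)
years = 15
radiation = rng.normal(loc=130, scale=8, size=years)           # synthetic yearly driver
gpp = 0.9 * (radiation - 130) + rng.normal(scale=6, size=years) + 1500

r = np.corrcoef(radiation, gpp)[0, 1]   # Pearson correlation across years
r2 = r ** 2
print(round(r2, 2))                     # cf. the paper's R2 = 0.32~0.45 range
```

The same computation, repeated for growing season length and leaf area index against GPP (and for temperature against RE), reproduces the structure of the correlation analysis the abstract describes.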

Subimage Detection of Window Image Using AdaBoost (AdaBoost를 이용한 윈도우 영상의 하위 영상 검출)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.5 / pp.578-589 / 2014
  • A window image is displayed on a monitor screen when application programs run on a computer; examples include webpages, video players, and many other applications. Compared with other applications, a webpage delivers a wide variety of information in diverse forms. Unlike a natural image captured with a camera, a window image such as a webpage contains diverse components, including text, logos, icons, and subimages, each delivering a different kind of information to users. Because text and images are presented in various forms, components with different characteristics need to be separated locally. In this paper, we divide window images into sub-blocks and classify each divided region as background, text, or subimage. The detected subimages can be applied to 2D-to-3D conversion, image retrieval, image browsing, and so forth. Among the many subimage classification methods, we utilize AdaBoost to verify that a machine learning-based algorithm can be efficient for subimage detection. In the experiment, the subimage detection ratio is 93.4% and the false alarm rate is 13%.
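The block-classification step can be sketched with scikit-learn's AdaBoost. The per-block features (e.g. edge density, color variance) and the three-class labels below are synthetic stand-ins; the paper's actual features are not specified here.

```python
# Hedged sketch of the sub-block classification with AdaBoost; features
# and labels are synthetic (0 = background, 1 = text, 2 = subimage).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 600
X = rng.random((n, 4))                         # per-block features of a window image
y = np.digitize(X[:, 0], bins=[0.4, 0.75])     # synthetic 3-class block labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))         # block classification accuracy
```

Blocks predicted as class 2 would then be merged into subimage regions for the downstream uses the abstract lists (2D-to-3D conversion, retrieval, browsing).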

A Case Study of "Engineering Design" Education with Emphasize on Hands-on Experience (기계공학과에서 제시하는 Hands-on Experience 중심의 "엔지니어링 디자인" 교과목의 강의사례)

  • Kim, Hong-Chan;Kim, Ji-Hoon;Kim, Kwan-Ju;Kim, Jung-Soo
    • Journal of Engineering Education Research / v.10 no.2 / pp.44-61 / 2007
  • The present investigation concerns new curriculum development in the Department of Mechanical System & Design Engineering at Hongik University, with the aim of enhancing the creativity, teamwork, and communication capabilities that modern engineering education emphasizes. 'Mechanical System & Design Engineering', equipped with a new curriculum emphasizing engineering design, is the new name of the mechanical engineering department at Hongik University. To meet a radically changing environment and the demands industries place on engineering education, the department has shifted its focus from an analog-based, machine-centered hard approach to a digital-based, human-centered soft approach. Three new courses, Introduction to Mechanical System & Design Engineering, Creative Engineering Design, and Product Design, emphasize hands-on experience through project-based teamwork. The sketch-model and prototype-making process is strongly emphasized, and cardboard, polystyrene foam, and foam-core board are provided as working materials instead of traditional hard engineering materials such as metals, because these three courses focus more on creative idea generation and dynamic communication among team members than on the end results. By complementing existing engineering classes' traditional focus on analysis, mathematics, and reasoning with generative, visual, and concrete experiences, hands-on experience can play a significant role in helping engineering students develop the creative thinking and engineering sense needed to face the ill-defined real-world design problems they will encounter upon graduation.