• Title/Summary/Keyword: gradient-based model


Hierarchical Finite-Element Modeling of SiCp/Al2124-T4 Composites with Dislocation Plasticity and Size-Dependent Failure (전위 소성과 크기 종속 파손을 고려한 SiCp/Al2124-T4 복합재의 계층적 유한요소 모델링)

  • Suh, Yeong-Sung;Kim, Yong-Bae
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.36 no.2
    • /
    • pp.187-194
    • /
    • 2012
  • The strength of particle-reinforced metal matrix composites is, in general, known to be increased by the geometrically necessary dislocations punched around a particle, which form during cooling after consolidation because of the coefficient of thermal expansion (CTE) mismatch between the particle and the matrix. An additional strength increase may also be observed, since another type of geometrically necessary dislocation can form during extensive deformation as a result of strain gradient plasticity due to the elastic-plastic mismatch between the particle and the matrix. In this paper, the magnitudes of these two types of dislocations are calculated based on dislocation plasticity. The dislocations are then converted to the respective strengths and allocated hierarchically to the matrix around the particle in an axisymmetric finite-element unit cell model. The proposed method is shown to be very effective by performing finite-element strength analysis of SiCp/Al2124-T4 composites that includes ductile failure in the matrix and particle-matrix decohesion. The predicted results for different particle sizes and volume fractions show that the length-scale effect of the particle size clearly affects the strength and failure behavior of particle-reinforced metal matrix composites.
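Dislocation densities of the kind described above are commonly converted to strength through a Taylor-type relation. A minimal sketch, assuming an Arsenault-Shi-type expression for the CTE-mismatch density and illustrative material constants for an Al matrix (not the paper's values):

```python
import math

def cte_gnd_density(d_alpha, d_temp, f, b, d):
    """Arsenault-Shi-type estimate of CTE-mismatch GND density (1/m^2).

    d_alpha: CTE mismatch (1/K), d_temp: cooling range (K),
    f: particle volume fraction, b: Burgers vector (m), d: particle size (m).
    """
    return 12.0 * d_alpha * d_temp * f / (b * d * (1.0 - f))

def taylor_strength_increment(shear_mod, b, rho, alpha=1.25, taylor_m=3.06):
    """Taylor relation: delta_sigma = M * alpha * G * b * sqrt(rho)."""
    return taylor_m * alpha * shear_mod * b * math.sqrt(rho)

# Illustrative numbers (assumed, not from the paper): Al matrix, 10-um SiC.
rho = cte_gnd_density(d_alpha=1.0e-5, d_temp=280.0, f=0.15, b=2.86e-10, d=10e-6)
dsig = taylor_strength_increment(shear_mod=26.0e9, b=2.86e-10, rho=rho)
# dsig comes out in the tens of MPa, a plausible matrix strengthening increment
```

Note that the particle size d appears in the denominator of the density, which is the length-scale effect the abstract refers to: smaller particles punch denser dislocation fields and strengthen the matrix more.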

Semantic Segmentation of the Submerged Marine Debris in Undersea Images Using HRNet Model (HRNet 기반 해양침적쓰레기 수중영상의 의미론적 분할)

  • Kim, Daesun;Kim, Jinsoo;Jang, Seonwoong;Bak, Suho;Gong, Shinwoo;Kwak, Jiwoo;Bae, Jaegu
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1329-1341
    • /
    • 2022
  • Marine debris is generated every year, destroying the marine environment and ecosystem and causing marine accidents; among it, submerged marine debris is difficult to identify and collect because it lies on the seabed. Therefore, deep-learning-based semantic segmentation of waste fish nets and waste ropes in underwater images was performed to support efficient identification of their collection and distribution. For segmentation, a high-resolution network (HRNet), a state-of-the-art deep learning technique, was used, and the performance of each optimizer was compared. For fish nets, the F1 scores were 86.46%, 86.20%, and 85.29% and the IoU values were 76.15%, 75.74%, and 74.36%; for ropes, the F1 scores were 80.49%, 80.48%, and 77.86% and the IoU values were 67.35%, 67.33%, and 63.75%, in the order of adaptive moment estimation (Adam), Momentum, and stochastic gradient descent (SGD). Adam's results were the highest for both fish nets and ropes. These results confirm the segmentation performance of each optimizer and the feasibility of segmenting marine debris with the latest deep learning techniques. Accordingly, applying such techniques to the identification of submerged marine debris in underwater images should help estimate its distribution through more accurate and efficient identification than visual inspection.
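The F1 and IoU pairs reported above are deterministically linked (IoU = F1 / (2 − F1)); a minimal sketch of both metrics from pixel-level confusion counts (illustrative counts, not the paper's data):

```python
def f1_and_iou(tp, fp, fn):
    """Pixel-level F1 score and intersection-over-union from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # true negatives do not enter either metric
    return f1, iou

f1, iou = f1_and_iou(tp=80, fp=10, fn=15)
# the identity iou == f1 / (2 - f1) holds for any counts
```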

3D Modeling from 2D Stereo Image using 2-Step Hybrid Method (2단계 하이브리드 방법을 이용한 2D 스테레오 영상의 3D 모델링)

  • No, Yun-Hyang;Go, Byeong-Cheol;Byeon, Hye-Ran;Yu, Ji-Sang
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.7
    • /
    • pp.501-510
    • /
    • 2001
  • Generally, it is essential to estimate exact disparity for 3D modeling from stereo images. Because existing methods calculate disparities over a whole image, they require too much computational time and suffer from mismatching. In this article, exploiting the fact that disparity vectors in stereo images are not distributed evenly over the whole image but concentrate around the background and objects, we first apply a wavelet transform to the stereo images and estimate coarse disparity fields from the reduced lowpass band using an area-based method. From these coarse disparity vectors, we generate a disparity histogram and use it to separate the object from the background area. Afterwards, we restore only the object area to the original resolution and estimate a dense and accurate disparity field with our two-step pixel-based method, which uses the second gradient rather than pixel brightness. We also extract feature points from the separated object area and estimate depth information by applying the disparity vectors and camera parameters. Finally, we generate a 3D model using both the feature points and their z coordinates. Our proposed method considerably reduces the computation time and estimates precise disparity through the additional pixel-based step using a LOG filter. Furthermore, our foreground/background separation solves the mismatching problem of conventional Delaunay triangulation and generates an accurate 3D model.
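Once a dense disparity field and the camera parameters are known, the depth step reduces to standard stereo triangulation. A minimal sketch, assuming rectified cameras and illustrative numbers (not values from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# 700-px focal length, 10-cm baseline, 35-px disparity -> 2 m depth
z = depth_from_disparity(focal_px=700.0, baseline_m=0.1, disparity_px=35.0)
```

Larger disparities mean closer points, which is why accurate disparity estimation on the object area drives the quality of the reconstructed z coordinates.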


A Study on the Prediction of Disc Cutter Wear Using TBM Data and Machine Learning Algorithm (TBM 데이터와 머신러닝 기법을 이용한 디스크 커터마모 예측에 관한 연구)

  • Tae-Ho, Kang;Soon-Wook, Choi;Chulho, Lee;Soo-Ho, Chang
    • Tunnel and Underground Space
    • /
    • v.32 no.6
    • /
    • pp.502-517
    • /
    • 2022
  • As the use of TBMs increases, research on analyzing TBM data with machine learning techniques to predict the replacement cycle of disc cutters and the advance rate of the TBM has recently grown. In this study, a regression prediction of disc cutter wear at a slurry shield TBM site was made with machine learning, combining the machine data and the geotechnical data obtained during excavation. The data were split 7:3 for training and testing, and the hyper-parameters were optimized by a cross-validated grid search over a parameter grid. As a result, gradient boosting, an ensemble model, showed good performance with a determination coefficient of 0.852 and a root mean square error of 3.111, with especially good fit times alongside its learning performance. Based on these results, a prediction model using both mechanical data and geotechnical information is judged to be highly suitable. Further research is needed to increase the diversity of ground conditions and the amount of disc cutter data.
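Gradient boosting with squared loss fits each new weak learner to the residuals of the current ensemble. A minimal pure-Python sketch with depth-1 stumps on a single feature, intended only to illustrate the principle behind the model used above (not the study's implementation):

```python
def fit_stump(x, r):
    """Best single-threshold split on a 1-D feature minimizing residual SSE."""
    best = None
    for t in sorted(set(x))[:-1]:  # exclude max so the right side is never empty
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - lm) ** 2 for ri in left) + sum((ri - rm) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Start from the mean, then repeatedly fit stumps to residuals (squared loss)."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, r)
        stumps.append((t, lm, rm))
        pred = [pi + lr * (lm if xi <= t else rm) for xi, pi in zip(x, pred)]
    return base, lr, stumps

def predict(model, xi):
    base, lr, stumps = model
    return base + sum(lr * (lm if xi <= t else rm) for t, lm, rm in stumps)
```

With a step-shaped target, the residuals shrink geometrically with the learning rate each round, which is the shrinkage behavior that libraries such as the one used in the study expose as a hyper-parameter.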

The spatial-effect profile of visual attention in perception and memory (지각과 단기 기억 수준에 발현되는 주의 효과의 공간적 연장 패턴 비교)

  • Hyun, Joo-Seok
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.3
    • /
    • pp.311-330
    • /
    • 2008
  • The effect of spatial attention gradually decreases as a function of the distance between the locus of attention and a target. Based on this hypothesis, we tested the spatial-effect profile of visual attention when it operates on perception versus memory. Experiment 1 measured the accuracy of discriminating the color of a simultaneously masked target after presenting a pre-cue either at the target location or away from it (perception-intensive task). Experiment 2 measured the accuracy of recognizing the color of several items at and around the pre-cued location (memory-intensive task). In the perception-intensive condition, accuracy gradually dropped as the distance between the cue and the target location increased. However, in the memory-intensive condition, subjects remembered only the item at the cued location, suggesting that spatial attention in a memory-intensive process operates on object-based representations. Experiment 2 further showed that this object-based effect can also be present in perception under a special circumstance. The results indicate that spatial attention can operate on object-based representations in a memory-intensive process, whereas it can flexibly operate on either location-based or object-based representations in a perception-intensive process.


Geochemistry of Precambrian Metamorphic Rocks from Yongin-Anseong Area, the Southernmost Part of Central Gyeonggi Massif (경기육괴 중부 남단(용인-안성지역)에 분포하는 선캠브리아기 변성암류의 지구화학적 특징)

  • Lee, Seung-Gu;Song, Yong-Sun;Masuda, Akimasa
    • The Journal of the Petrological Society of Korea
    • /
    • v.13 no.3
    • /
    • pp.142-151
    • /
    • 2004
  • The metamorphic rocks of the Yongin-Anseong area in the Gyeonggi massif consist of high-grade gneisses and schists considered to be Precambrian basement, and Jurassic granite that intruded the metamorphic rocks. In this paper, we discuss the geochemical characteristics of the metamorphic rocks and granites in this area based on REE and Nd isotope geochemistry, as well as the petrogenetic relationship between them. Most Nd model ages (T_DM^Nd) of the metamorphic rocks range from ca. 2.6 Ga to 2.9 Ga, corresponding to the main crustal formation stage of the Gyeonggi massif proposed by Lee et al. (2003). The Nd model ages also show that the source material of the quartzofeldspathic gneiss is slightly older than that of the biotite banded gneiss. In the chondrite-normalized rare earth element patterns, the (La/Yb)_N values of the biotite banded gneiss range from 37 to 136, a steep gradient suggesting that it originated from a strongly fractionated source material, whereas those of the amphibolite range from 4.65 to 6.64, a nearly flat pattern. In particular, the chondrite-normalized REE patterns of the high-grade metamorphic rocks preserve the REE geochemistry of the original source material before metamorphism. In addition, the (La/Yb)_N values and Nd model ages of the granite are 32~40 and 1.69 Ga~2.08 Ga, respectively, suggesting that its source material differs from that of the Precambrian basement such as the biotite banded gneiss and quartzofeldspathic gneiss in the area.
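The (La/Yb)_N values discussed above are chondrite-normalized ratios; a minimal sketch, assuming the McDonough & Sun (1995) chondrite abundances as the normalizing values (the paper may use a different reference set):

```python
# Assumed chondrite abundances in ppm (McDonough & Sun, 1995).
CHONDRITE_PPM = {"La": 0.237, "Yb": 0.161}

def la_yb_n(la_ppm, yb_ppm):
    """Chondrite-normalized La/Yb; values >> 1 indicate a steep, LREE-enriched
    pattern, values near 1 a nearly flat pattern (as for the amphibolite)."""
    return (la_ppm / CHONDRITE_PPM["La"]) / (yb_ppm / CHONDRITE_PPM["Yb"])
```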

Comparison of Machine Learning-Based Greenhouse VPD Prediction Models (머신러닝 기반의 온실 VPD 예측 모델 비교)

  • Jang Kyeong Min;Lee Myeong Bae;Lim Jong Hyun;Oh Han Byeol;Shin Chang Sun;Park Jang Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.3
    • /
    • pp.125-132
    • /
    • 2023
  • In this study, we compared the performance of machine learning models for predicting the vapor pressure deficit (VPD) in greenhouses, which affects stomatal function and photosynthesis as well as plant growth through nutrient absorption. For VPD prediction, the correlations between the environmental elements inside and outside the greenhouse and the temporal elements of the time-series data were examined, and the influence of the highly correlated elements on VPD was confirmed. Before analyzing the performance of the prediction models, the period (1, 3, and 7 days) and the interval (20 minutes and 1 hour) of the time-series data were varied to adjust the amount and granularity of the data. Finally, four machine learning prediction models (XGB Regressor, LGBM Regressor, Random Forest Regressor, etc.) were applied to compare prediction performance. When one day of data at 20-minute intervals was used, LGBM showed the highest prediction performance, with an MAE of 0.008 and an RMSE of 0.011. It was also confirmed that the factor that most influences the VPD prediction 20 minutes ahead is the VPD of the past 20 minutes (VPD_y__71) rather than environmental factors. Using these results, it is possible to increase crop productivity and prevent greenhouse condensation and disease occurrence through VPD prediction. In the future, the approach can be applied not only to predicting greenhouse environmental data but also to various fields such as yield prediction and smart farm control models.
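VPD itself is derived from temperature and relative humidity. A minimal sketch using the Tetens approximation for saturation vapor pressure (the formula choice is an assumption here, not necessarily what the authors used):

```python
import math

def vpd_kpa(temp_c, rh_pct):
    """Vapor pressure deficit in kPa.

    Saturation vapor pressure from the Tetens approximation; the deficit is
    the gap between saturation and actual vapor pressure at the given RH.
    """
    e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    return e_sat * (1.0 - rh_pct / 100.0)

# At saturation (RH = 100%) the deficit is zero; it grows as air dries or warms.
```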

Phase Segmentation of PVA Fiber-Reinforced Cementitious Composites Using U-net Deep Learning Approach (U-net 딥러닝 기법을 활용한 PVA 섬유 보강 시멘트 복합체의 섬유 분리)

  • Jeewoo Suh;Tong-Seok Han
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.5
    • /
    • pp.323-330
    • /
    • 2023
  • The development of an analysis model that reflects the microstructure characteristics of polyvinyl alcohol (PVA) fiber-reinforced cementitious composites, which have a highly complex microstructure, enables synergy between efficient material design and real experiments. PVA fiber orientations are an important factor that influences the mechanical behavior of PVA fiber-reinforced cementitious composites. Owing to the difficulty in distinguishing the gray level value obtained from micro-CT images of PVA fibers from adjacent phases, fiber segmentation is time-consuming work. In this study, a micro-CT test with a voxel size of 0.65 ㎛3 was performed to investigate the three-dimensional distribution of fibers. To segment the fibers and generate training data, histogram, morphology, and gradient-based phase-segmentation methods were used. A U-net model was proposed to segment fibers from micro-CT images of PVA fiber-reinforced cementitious composites. Data augmentation was applied to increase the accuracy of the training, using a total of 1024 images as training data. The performance of the model was evaluated using accuracy, precision, recall, and F1 score. The trained model achieved a high fiber segmentation performance and efficiency, and the approach can be applied to other specimens as well.
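Of the label-generation methods mentioned above, the gradient-based one can be sketched minimally as a central-difference gradient magnitude followed by a threshold (a toy illustration of the idea, not the authors' pipeline):

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D list; borders left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def segment(img, threshold):
    """Binary phase mask: 1 where the gradient magnitude exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row]
            for row in gradient_magnitude(img)]

# A vertical step between two gray levels: only the boundary pixels are flagged.
mask = segment([[0, 0, 10, 10]] * 4, 2.0)
```

Thresholding the gradient picks out phase boundaries even when the absolute gray levels of fiber and matrix overlap, which is exactly the difficulty the abstract describes for PVA fibers in micro-CT images.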

Estimation of High Resolution Sea Surface Salinity Using Multi Satellite Data and Machine Learning (다종 위성자료와 기계학습을 이용한 고해상도 표층 염분 추정)

  • Sung, Taejun;Sim, Seongmun;Jang, Eunna;Im, Jungho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.747-763
    • /
    • 2022
  • Ocean salinity affects ocean circulation on a global scale, and low-salinity water around coastal areas often has an impact on aquaculture and fisheries. Microwave satellite sensors (e.g., Soil Moisture Active Passive [SMAP]) have provided sea surface salinity (SSS) based on the dielectric characteristics of water associated with SSS and sea surface temperature (SST). In this study, a Light Gradient Boosting Machine (LGBM)-based model for generating high-resolution SSS from Geostationary Ocean Color Imager (GOCI) data was proposed, using the machine-learning-improved SMAP SSS of Jang et al. (2022) as reference data (SMAP SSS (Jang)). Three schemes with different input variables were tested, and scheme 3, with all variables including Multi-scale Ultra-high Resolution SST, yielded the best performance (coefficient of determination = 0.60, root mean square error = 0.91 psu). The proposed LGBM-based GOCI SSS had a spatiotemporal pattern similar to SMAP SSS (Jang), with much higher spatial resolution, even in coastal areas where SMAP SSS (Jang) was not available. In addition, when tested on the great flood that occurred in Southern China in August 2020, the GOCI SSS simulated the spatial and temporal change of the Changjiang Diluted Water well. This research demonstrated the potential of optical satellite data, combined with improved microwave-based SSS, to generate high-resolution SSS, especially in coastal areas.
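The two accuracy measures used to compare the schemes above, the coefficient of determination and RMSE, can be sketched directly (illustrative values, not the paper's data):

```python
import math

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

R^2 compares the model's squared error against always predicting the mean, while RMSE keeps the units of the target (psu here), which is why both are reported together.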

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited memory environments but also learns very fast compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
  For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets consist of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method of 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a recent algorithm. While there are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new, machine-learning-based way to reduce them in an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
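In its simplest form, ignoring the cross-asset covariance terms the full model uses, risk parity reduces to inverse-volatility weighting, with predicted rather than historical volatilities plugged in. A minimal sketch of that simplified allocation step:

```python
def inverse_vol_weights(vols):
    """Weights proportional to 1/volatility, so each asset contributes
    equal standalone risk (w_i * vol_i is the same for every asset)."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [i / total for i in inv]

# Lower-volatility assets receive larger weights.
w = inverse_vol_weights([0.10, 0.20, 0.40])
```

In the proposed model, the volatilities fed into this step would come from a machine-learning forecast for the next investment period instead of the trailing historical estimate, which is where the reduction of estimation errors enters.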