• Title/Summary/Keyword: 복잡 시스템 (complex systems)


Development of Correction Formulas for KMA AAOS Soil Moisture Observation Data (기상청 농업기상관측망 토양수분 관측자료 보정식 개발)

  • Choi, Sung-Won;Park, Juhan;Kang, Minseok;Kim, Jongho;Sohn, Seungwon;Cho, Sungsik;Chun, Hyenchung;Jung, Ki-Yuol
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.1 / pp.13-34 / 2022
  • Soil moisture data have been collected at 11 agrometeorological stations operated by the Korea Meteorological Administration (KMA). This study aimed to verify the accuracy of KMA soil moisture data and to develop correction formulas to improve their quality. Soil from each observation field was sampled to analyze the physical properties that affect soil water content. Soil texture was classified as sandy loam or loamy sand at most sites. The bulk density of the soil samples was about 1.5 g/cm3 on average. The silt and clay content was also closely related to bulk density and water holding capacity. The EnviroSCAN model, which was used as a reference sensor, was calibrated using a self-manufactured "reference soil moisture observation system". Comparisons between the calibrated reference sensor and the KMA field sensor were conducted at least three times at each of the 11 sites. Overall, the two sensors showed similar temporal fluctuations, but at some sites the field sensor reported lower soil moisture values than the reference. A linear correction formula was derived for each site and depth using the range and average of the observed data for the given period. This correction formula improved the agreement between sensor values at the Suwon site. In addition, a detailed approach was developed to estimate the correction value for periods in which a correction formula was not calculated. In summary, correcting the soil moisture data at a regular interval, e.g., twice a year, is recommended for all observation sites to improve the quality of the observation data.
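The abstract describes the correction as a per-site, per-depth linear formula built from the range and average of the comparison-period data, without giving its exact form. A minimal sketch of one plausible reading, with hypothetical readings for a single site and depth, might look like this:

```python
import numpy as np

def linear_correction(field, reference):
    """Derive a linear correction for one site/depth by matching the
    range and mean of the field sensor to the reference sensor.

    `field` and `reference` are 1-D arrays of soil moisture observed
    over the same comparison period. The exact formula used in the
    study is not given in the abstract; rescaling by range and mean is
    one plausible reading of the description.
    """
    slope = (reference.max() - reference.min()) / (field.max() - field.min())
    intercept = reference.mean() - slope * field.mean()
    return slope, intercept

# Hypothetical comparison-period data for a single site and depth (vol. %)
field = np.array([18.2, 19.0, 20.5, 22.1, 21.4, 19.8])
reference = np.array([21.0, 21.7, 23.4, 25.3, 24.5, 22.6])

slope, intercept = linear_correction(field, reference)
corrected = slope * field + intercept  # apply to subsequent field readings
print(f"corrected = {slope:.3f} * raw + {intercept:.3f}")
```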

A study on the manufacturing method and usefulness of Bolus-helmet used for malignant scalp tumor patients (악성두피종양환자에게 사용되는 보루스헬멧(Bolus-helmet)의 제작방법 및 유용성에 관한 연구)

  • Lee, Joung Jin;Moon, Jae Hee;Kim, Hee Sung;Kim, Koon Joo;Seo, Jung Min;Choi, Jae Hoon;Kim, Sung Gi;Jang, In Gi
    • The Journal of Korean Society for Radiation Therapy / v.33 / pp.15-24 / 2021
  • This study introduces a manufacturing method for the bolus-helmet and evaluates its usefulness. Helmet fabrication for the treatment of scalp tumor patients has been attempted in many creative ways and such attempts will continue. However, most previous approaches did not meaningfully reduce the burden the patient must bear: the time and economic cost of fabrication, the inconvenience and complexity of the process, and the accompanying psychological and physical discomfort. More recently, methods using advanced technologies and equipment such as 3D printing have been introduced to improve the treatment effect, yet the time, economic cost, and psychological and physical burden still fall solely on the patient, and there is concern that they may even be increasing. Therefore, by preserving the physical properties of the bolus and manufacturing the helmet without incurring additional costs, this study reduced the physical discomfort imposed on the patient and minimized the procedures and time required for helmet fabrication. In this way, the time, economic cost, and physical discomfort of helmet production were reduced, and the patient's psychological burden, though not directly measurable, could also be minimized. In evaluating the usefulness of the helmet, ways to reduce the air-gap were continuously sought, and this study introduces a manufacturing method that keeps the air-gap within 2.0 mm. Finally, it is hoped that anyone working in a Department of Radiation Oncology will be able to easily manufacture the helmet required for radiation therapy with a bolus by following the fabrication guideline provided here; questions arising during production may be directed to the researchers by e-mail or mobile phone at any time.

Transcriptomic Analysis of Triticum aestivum under Salt Stress Reveals Change of Gene Expression (RNA sequencing을 이용한 염 스트레스 처리 밀(Triticum aestivum)의 유전자 발현 차이 확인 및 후보 유전자 선발)

  • Jeon, Donghyun;Lim, Yoonho;Kang, Yuna;Park, Chulsoo;Lee, Donghoon;Park, Junchan;Choi, Uchan;Kim, Kyeonghoon;Kim, Changsoo
    • KOREAN JOURNAL OF CROP SCIENCE / v.67 no.1 / pp.41-52 / 2022
  • The Korean wheat cultivar 'Keumgang' has a short growth period and can be grown stably. Hexaploid wheat (Triticum aestivum) has moderately high salt tolerance compared to tetraploid wheat (Triticum turgidum L.). However, the molecular mechanisms underlying the salt tolerance of hexaploid wheat have not yet been elucidated. In this study, candidate genes related to salt tolerance were identified by investigating genes that are differentially expressed between the Keumgang variety and the salt-tolerant mutant '2020-s1340'. A total of 85,771,537 reads were obtained after quality filtering using NextSeq 500 Illumina sequencing technology. A total of 23,634,438 reads were aligned to the NCBI Campala Lr22a pseudomolecule v5 reference genome (Triticum aestivum). A total of 282 differentially expressed genes (DEGs) were identified between the two Triticum aestivum materials. These DEGs include genes with functions related to salt tolerance, such as 'wall-associated receptor kinase-like 8', 'cytochrome P450', and '6-phosphofructokinase 2'. In addition, the identified DEGs were classified into three categories (biological process, molecular function, and cellular component) using gene ontology analysis, and were significantly enriched for terms such as 'copper ion transport', 'oxidation-reduction process', and 'alternative oxidase activity'. These results, obtained using RNA-seq analysis, will improve our understanding of the salt tolerance of wheat. Moreover, this study will be a useful resource for breeding wheat varieties with improved salt tolerance using molecular breeding technology.
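The abstract outlines a standard RNA-seq workflow (quality filtering, alignment to a reference genome, DEG identification, GO classification) without naming the DEG-calling tool. A minimal, illustrative sketch of calling DEGs from a normalized count table is shown below; the gene names, replicate layout, fold-change cutoff, and simple t-test are assumptions for illustration, not the study's actual pipeline, which would typically rely on a negative-binomial method such as DESeq2 or edgeR.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical normalized expression table: rows are genes, columns are
# replicate samples for the control and the salt-treated material.
counts = pd.DataFrame(
    {
        "ctrl_1": [120, 15, 300, 45],
        "ctrl_2": [110, 18, 280, 50],
        "salt_1": [480, 14, 90, 210],
        "salt_2": [450, 16, 85, 190],
    },
    index=["WAK-like-8", "housekeeping", "PFK2", "CYP450"],
)

ctrl = counts[["ctrl_1", "ctrl_2"]]
salt = counts[["salt_1", "salt_2"]]

# log2 fold change with a pseudocount to avoid division by zero
log2fc = np.log2((salt.mean(axis=1) + 1) / (ctrl.mean(axis=1) + 1))

# simple per-gene t-test, used here only as a stand-in for a proper DEG caller
pvals = stats.ttest_ind(salt, ctrl, axis=1).pvalue

mask = (log2fc.abs() > 1).to_numpy() & (pvals < 0.05)
print("candidate DEGs:", list(counts.index[mask]))
```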

Development and Experimental Performance Evaluation of Steel Composite Girder by Turn Over Process (단면회전방법을 적용한 강합성 소수주거더 개발 및 실험적 성능 평가)

  • Kim, Sung Jae;Yi, Na Hyun;Kim, Sung Bae;Kim, Jang-Ho Jay
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.5A / pp.407-415 / 2010
  • In Korea, more than 90% of steel bridges built for 40~70 m spans are of the steel box-girder type. A steel box-girder bridge is suitable for long-span or curved bridges, with outstanding flexural and torsional rigidity as well as good constructability and safety. However, a steel box-girder bridge is uneconomical, requiring many secondary members, such as stiffeners and ribs welded to the flanges or webs, and the associated workmanship. Therefore, in the US and Japan, the plate girder bridge, which is relatively cheap and easy to construct, is generally used. One type of plate girder bridge is the two- or three-main-girder plate bridge, a composite plate girder bridge that minimizes the number of required main girders by increasing the distance between adjacent girders. Also, to simplify the girder section, web stiffeners are not required. The two-main-girder plate bridge is a representative type of plate girder bridge, suitable for bridges with a 10 m effective width, and was developed in France in the early 1960s. To ensure greater safety of two- or three-main-girder plate bridges, a larger steel section is used domestically than in Europe or Japan. In addition, far fewer two- or three-main-girder plate bridges than steel box-girder bridges have been constructed in Korea, because designers are less familiar with their more complex design detailing compared to that of a steel box-girder bridge. In this study, a new construction method, called the Turn Over method, is proposed to minimize the steel section size used in a two- or three-main-girder plate bridge by applying a prestressing force to the member using the weight of the confining concrete section, thereby reducing construction cost. A full-scale 20 m Turn Over girder specimen and a Turn Over girder bridge specimen were tested to evaluate the constructability and structural safety of members constructed using the Turn Over process.

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • Construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process leading to an EPC project award is limited, and reviewing all of the risk terms in the ITB document is extremely challenging due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but practical use was limited by data-related problems such as limited labeled data and class imbalance. Therefore, this study aims to develop an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Federation Internationale des Ingenieurs-Conseils contract terms) standard, rather than defining and classifying risk terms as in previous research. A multi-class text classification capability is necessary because the contract terms that need to be reviewed in detail may vary depending on the scale and type of the project. To enhance the performance of the multi-class text classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, the ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
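The abstract does not specify how the ITB-ELECTRA and Legal-BERT predictions are combined. A minimal soft-voting sketch using the Hugging Face transformers API is shown below; the checkpoints are public stand-ins (the fine-tuned ITB-ELECTRA weights are not available, so their classification heads here are randomly initialized), and averaging class probabilities is only one plausible ensembling choice.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public stand-ins for the paper's fine-tuned models; in practice each
# checkpoint would be fine-tuned on labeled ITB clauses first.
CHECKPOINTS = [
    "google/electra-base-discriminator",   # stand-in for ITB-ELECTRA
    "nlpaueb/legal-bert-base-uncased",     # Legal-BERT
]
NUM_LABELS = 57  # FIDIC Yellow 2017 contract-term categories

def ensemble_predict(clause: str) -> int:
    """Average the class probabilities of several PLMs (soft voting)
    and return the predicted contract-term label index."""
    probs = []
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(
            ckpt, num_labels=NUM_LABELS
        )
        inputs = tok(clause, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))

label = ensemble_predict("The Contractor shall indemnify the Employer against ...")
```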

Hepatoprotective Effects of the Extracts of Alnus japonica Leaf on Alcohol-Induced Liver Damage in HepG2/2E1 Cells (알코올로 유도된 간손상 모델 HepG2/2E1 세포에서 오리나무 잎 추출물의 간보호효과)

  • Bo-Ram Kim;Tae-Su Kim;Su Hui Seong;Seahee Han;Jin-Ho Kim;Chan Seo;Ha-Nul Lee;Sua Im;Jung Eun Kim;Ji Min Jung;Do-Yun Jeong;Kyung-Min Choi;Jin-Woo Jeong
    • Korean Journal of Plant Resources / v.37 no.2 / pp.120-129 / 2024
  • Alcoholic liver disease (ALD) is a significant contributor to the global disease burden. The stem bark of Alnus japonica, a Betulaceae plant indigenous to Korea, has been used as a popular folk medicine for hepatitis and cancer. However, the preventive effect of Alnus japonica leaf extracts on alcohol-related liver damage has not been investigated. The objective of this study was to investigate the hepatoprotective effects of extracts of Alnus japonica leaf (AJL) against ethanol-induced liver damage in HepG2/2E1 cells. Treatment with AJL significantly prevented ethanol-induced cytotoxicity in HepG2/2E1 cells by reducing the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST). This protective effect was likely associated with the antioxidant potential of AJL, as evidenced by the attenuation of reactive oxygen species (ROS) and malondialdehyde (MDA) production and the restoration of depleted glutathione (GSH) levels in ethanol-treated HepG2/2E1 cells. Our findings suggest that AJL might be considered a useful agent for preventing liver damage induced by oxidative stress by strengthening the antioxidant defense mechanism.

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential to utilizing this large amount of data, there are inherent limits to processing diverse and complex data in a generic way. Meanwhile, machine learning (ML) is being applied in various fields as computational performance improves and demand for autonomous systems increases. In particular, processing the data for model input and designing the model to solve the objective function are critical to achieving good model performance. Many studies have presented data processing methods for different data types and properties, and ML performance varies greatly depending on the method used. Nevertheless, it is difficult to decide which data processing method to use, since the types and characteristics of data have become more diverse. In particular, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input-variable filtering based on statistical analysis and normalization according to the data property. In addition, we analyze the data structure using visualization. Lastly, we design an ML model, train it with the proposed multi-variate data processing, and analyze the trained model's performance in predicting passenger survival. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML-based analysis.
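The abstract names the pipeline stages (variable filtering, normalization, visualization, model training) but not the specific features, filters, or model. A minimal end-to-end sketch using seaborn's bundled copy of the Titanic data is shown below; the chosen features, the logistic-regression model, and the pairplot are illustrative assumptions rather than the paper's actual design.

```python
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Bundled copy of the Kaggle Titanic data; the paper's exact feature set,
# filtering rules, and model are not given in the abstract.
df = sns.load_dataset("titanic")

# Simple input-variable filtering and cleaning (assumed, for illustration)
features = ["pclass", "sex", "age", "fare"]
df = df.dropna(subset=features + ["survived"])
X = pd.get_dummies(df[features], columns=["sex"], drop_first=True)
y = df["survived"]

# Normalize numeric variables so they share a comparable scale
X[["age", "fare"]] = StandardScaler().fit_transform(X[["age", "fare"]])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"survival prediction accuracy: {model.score(X_te, y_te):.3f}")

# A quick visual check of the processed data, as in the paper's workflow
sns.pairplot(pd.concat([X, y], axis=1), hue="survived")
```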

Sequence Stratigraphy of the Yeongweol Group (Cambrian-Ordovician), Taebaeksan Basin, Korea: Paleogeographic Implications (전기고생대 태백산분지 영월층군의 순차층서 연구를 통한 고지리적 추론)

  • Kwon, Y.K.
    • Economic and Environmental Geology / v.45 no.3 / pp.317-333 / 2012
  • The Yeongweol Group is a Lower Paleozoic mixed carbonate-siliciclastic sequence in the Taebaeksan Basin of Korea, and consists of five lithologic formations: Sambangsan, Machari, Wagok, Mungok, and Yeongheung, in ascending order. Sequence stratigraphic interpretation of the group indicates that initial flooding in the Yeongweol area of the Taebaeksan Basin produced the basal siliciclastic-dominated sequences of the Sambangsan Formation during the Middle Cambrian. Accelerated sea-level rise in the late Middle to early Late Cambrian generated a mixed carbonate-siliciclastic slope or deep ramp sequence of shale, grainstone, and breccia intercalations, representing the lower part of the Machari Formation. The continued rise of sea level in the Late Cambrian created substantial accommodation space and activated the subtidal carbonate factory, forming the carbonate-dominated subtidal platform sequence in the middle and upper parts of the Machari Formation. The overlying Wagok Formation may originally have been a ramp carbonate sequence of subtidal ribbon carbonates and marls with conglomerates, deposited during the normal rise of relative sea level in the late Late Cambrian. The formation was affected by unstable dolomitization shortly after deposition, during the relative sea-level fall in the latest Cambrian or earliest Ordovician, and was subsequently dolomitized extensively under deep burial diagenetic conditions. During the Early Ordovician (Tremadocian), the global (Sauk) transgression continued, and subtidal ramp deposition was sustained on the Yeongweol platform, forming the Mungok Formation. The formation is overlain by the peritidal carbonates of the Yeongheung Formation, which were stacked by cyclic sedimentation during the Early to Middle Ordovician (Arenigian to Caradocian). The lithologic change from subtidal ramp to peritidal facies is preserved at the uppermost part of the Mungok Formation. The transition between the Sauk and Tippecanoe sequences is recognized within the middle part of the Yeongheung Formation as a minimum accommodation zone. The global eustatic fall in the earliest Middle Ordovician and the ensuing rise of relative sea level during the Darriwilian to Caradocian produced broadly prograding peritidal carbonates of shallowing-upward cyclic successions within the Yeongheung Formation. The reconstructed relative sea-level curve of the Yeongweol platform is very similar to that of the Taebaek platform. This reveals that the Yeongweol platform experienced the same tectonic movements as the Taebaek platform, and consequently that both platform sequences may have been located, either together or separately, on the margin of the North China platform. However, the significant differences in lithologic and stratigraphic successions imply that the Yeongweol platform was located far from the Taebaek platform and was not associated with it as a single depositional system. The Yeongweol platform was probably located in relatively open shallow marine environments, whereas the Taebaek platform was part of a restricted embayment. During the late Paleozoic to early Mesozoic amalgamation of the Korean massifs, the Yeongweol platform was probably pushed against the Taebaek platform by complex movements, forming the fragmented platform sequences of the Taebaeksan Basin.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most previous studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solved the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of existing methodologies, and made it possible to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default-risk assessment for unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while preserving the advantage of machine learning-based default risk prediction models, namely their short computation time. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the stacking ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the forecasts of the stacking ensemble model and those of each individual model, pairs of the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts making up each pair differed in a statistically significant way. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble techniques proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will serve as a resource for increasing practical use by overcoming the limitations of existing machine learning-based models.
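The abstract states that default risk was derived from market capitalization and stock price volatility via the Merton model, without giving the calibration details. A minimal sketch of the standard two-equation Merton calibration is shown below; the debt level, horizon, risk-free rate, and the example firm are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_default_probability(E, sigma_E, D, r=0.02, T=1.0):
    """Estimate a firm's default probability from its market capitalization E,
    equity volatility sigma_E, and face value of debt D, by backing out the
    unobserved asset value and asset volatility from the Merton (1974) model.
    This is a textbook implementation, not the paper's exact calibration.
    """
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E  # equity pricing
        eq2 = norm.cdf(d1) * sigma_V * V - sigma_E * E                  # volatility link
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-d2)  # risk-neutral probability of default over horizon T

# Hypothetical firm: 50bn won market cap, 40% equity volatility, 70bn won debt
print(f"PD = {merton_default_probability(E=50, sigma_E=0.4, D=70):.2%}")
```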

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown superior abilities that are equal to or better than people in many fields including image and speech recognition. Particularly, many efforts have been actively given to identify the current technology trends and analyze development directions of it, because AI technologies can be utilized in a wide range of fields including medical, financial, manufacturing, service, and education fields. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been open to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly. It has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly in debt to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed by the online collaboration of many parties. This study searched and collected a list of major projects related to AI, which were generated from 2000 to July 2018 on Github. This study confirmed the development trends of major technologies in detail by applying text mining technique targeting topic information, which indicates the characteristics of the collected projects and technical fields. The results of the analysis showed that the number of software development projects by year was less than 100 projects per year until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. Particularly, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). It was confirmed that the number of projects initiated in 2017 was 14,213, which is almost four-folds of the number of total projects generated from 2009 to 2016 (3,555 projects). The number of projects initiated from Jan to Jul 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicate the technology trends of AI-related OSS projects. The results showed that the natural language processing technology has continued to be at the top in all years. It implied that OSS had been developed continuously. Until 2015, Python, C ++, and Java, programming languages, were listed as the top ten frequently appeared topics. However, after 2016, programming languages other than Python disappeared from the top ten topics. Instead of them, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, are showing high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequently appeared topics. The results of topic network analysis showed that the most important topics of degree centrality were similar to those of appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not in the top of the list from 2009 to 2012. The results indicated that OSS was developed in the medical field in order to utilize the AI technology. 
Moreover, although the computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, they were not in the top 10 of the degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list. It was found that the ranks of the composite neural network and reinforcement learning were changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning revealed the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, their ranks abruptly increased between 2013 and 2015. It was confirmed that in recent years both technologies had high appearance frequency and degree centrality. TensorFlow first appeared during the phase of 2013-2015, and the appearance frequency and degree centrality of it soared between 2016 and 2018 to be at the top of the lists after deep learning, python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis on future technology trends that can be converged.
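The abstract compares topics by appearance frequency and by degree centrality in a topic network, but does not show the construction itself. A minimal sketch of both measures over a tiny, hypothetical set of project topic lists (the study's actual GitHub corpus is not reproduced here) might look like this:

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical topic lists for a few GitHub projects
project_topics = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "convolutional-neural-network"],
    ["reinforcement-learning", "deep-learning", "tensorflow"],
]

# Appearance frequency of each topic across projects
frequency = Counter(t for topics in project_topics for t in topics)

# Co-occurrence network: topics are nodes, shared projects create edges
G = nx.Graph()
for topics in project_topics:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)

print("top topics by frequency:", frequency.most_common(3))
print("top topics by degree centrality:",
      sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
```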