Title/Summary/Keyword: Input framework


Development of a surrogate model based on temperature for estimation of evapotranspiration and its use for drought index applicability assessment (증발산 산정을 위한 온도기반의 대체모형 개발 및 가뭄지수 적용성 평가)

  • Kim, Ho-Jun;Kim, Kyoungwook;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.54 no.11 / pp.969-983 / 2021
  • Evapotranspiration, one of the hydrometeorological components, is an important variable for water resource planning and management and is primarily used as input data for hydrological models such as water balance models. The FAO56 PM method is recommended as the standard approach for estimating reference evapotranspiration with relatively high accuracy. However, it is often difficult to apply because it requires a large number of hydrometeorological variables. For this reason, the temperature-based Hargreaves equation has been widely adopted to estimate reference evapotranspiration. In this study, the parameters of the Hargreaves equation were calibrated against relatively long-term data within a Bayesian framework. Statistical indices (CC, RMSE, IoA) were used to validate the model; the RMSE of the monthly estimates was reduced to 7.94~24.91 mm/month for the validation period, confirming that the accuracy was significantly improved compared to the original Hargreaves equation. Further, an evaporative demand drought index (EDDI) based on the evaporative demand (E0) was proposed. To confirm its effectiveness, the estimated EDDI was evaluated for the recent drought events of 2014-2015 and 2018 alongside precipitation and SPI. In the 2018 evaluation of the Han River watershed, the weekly EDDI increased to more than 2, confirming that EDDI detects the onset of heatwave-driven drought more effectively. EDDI can therefore serve as a drought index, particularly for monitoring heatwave-driven flash droughts, and can be used alongside SPI.
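For orientation, the temperature-based estimate being calibrated is the Hargreaves equation. The sketch below shows a minimal form of it plus a grid-search stand-in for the paper's Bayesian parameter calibration; the function names, the coefficient grid, and the least-squares criterion are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hargreaves_et0(tmax, tmin, ra, a=0.0023, b=17.8, c=0.5):
    """Reference evapotranspiration (mm/day) via the Hargreaves equation.

    tmax, tmin : daily maximum/minimum temperature (deg C)
    ra         : extraterrestrial radiation in mm/day equivalent
    a, b, c    : the parameters a Bayesian calibration would adjust
                 (0.0023, 17.8, 0.5 are the conventional defaults)
    """
    tmean = (tmax + tmin) / 2.0
    return a * ra * (tmean + b) * np.maximum(tmax - tmin, 0.0) ** c

def calibrate_a(tmax, tmin, ra, et0_pm, grid=np.linspace(0.001, 0.004, 301)):
    """Grid-search stand-in for the paper's Bayesian calibration:
    pick the coefficient `a` minimizing RMSE against FAO56 PM values."""
    rmse = [np.sqrt(np.mean((hargreaves_et0(tmax, tmin, ra, a=a) - et0_pm) ** 2))
            for a in grid]
    return grid[int(np.argmin(rmse))]
```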

Prospect of future water resources in the basins of Chungju Dam and Soyang-gang Dam using a physics-based distributed hydrological model and a deep-learning-based LSTM model (물리기반 분포형 수문 모형과 딥러닝 기반 LSTM 모형을 활용한 충주댐 및 소양강댐 유역의 미래 수자원 전망)

  • Kim, Yongchan;Kim, Youngran;Hwang, Seonghwan;Kim, Dongkyun
    • Journal of Korea Water Resources Association / v.55 no.12 / pp.1115-1124 / 2022
  • The impact of climate change on water resources was evaluated for the Chungju Dam and Soyang-gang Dam basins by constructing an integrated modeling framework consisting of a dam inflow prediction model based on the Variable Infiltration Capacity (VIC) model, a physics-based distributed hydrologic model, and an LSTM-based dam outflow prediction model. To account for the uncertainty of future climate data, four CMIP6 GCMs were used as input to the VIC model for the future period (2021-2100). Under the future climate data, the average inflow increased as the future progressed, and the inflow in the far future (2070-2100) increased by up to 22% compared to that of the observation period (1986-2020). The minimum dam discharge sustained over durations of 4 to 50 days was significantly lower than the observed values, indicating that droughts may persist longer than those observed in the past and that residents of the Seoul metropolitan area may experience severe water shortages during future droughts. In addition, compared to the near and middle futures, water storage changed rapidly in the far future, suggesting that the difficulties of water resource management may increase.
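The duration-based low-flow statistic cited above (the minimum discharge sustained over 4 to 50 days) can be read as the minimum of an n-day moving average. A minimal sketch, with pandas and synthetic data standing in for the actual dam discharge record:

```python
import numpy as np
import pandas as pd

def min_n_day_discharge(q: pd.Series, n_days: int) -> float:
    """Minimum of the n-day moving-average discharge over the record."""
    return q.rolling(window=n_days).mean().min()

# Synthetic daily discharge record for illustration only.
dates = pd.date_range("2070-01-01", periods=3 * 365, freq="D")
q = pd.Series(np.random.gamma(shape=2.0, scale=50.0, size=len(dates)), index=dates)

# Scan the 4- to 50-day durations discussed in the abstract.
low_flows = {n: min_n_day_discharge(q, n) for n in range(4, 51)}
```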

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems / v.29 no.1 / pp.1-16 / 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework with a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks: grid-based uniform control nodes are first set on both input images and binary labels, random offsets are then applied to these control nodes, and bilinear interpolation fills in the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. 512 × 512 patches are generated from the original images as inputs by a sliding window with an overlap of 256. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images, and ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm. The improvements in average IoU from using the SASA and CRED modules individually add up to the final improvement of the full model, indicating that the two modules contribute to different stages of the model and data in the training process.
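The CRED augmentation described above (coarse control nodes, random offsets, bilinear interpolation) can be sketched as follows; this is a minimal single-channel rendition using SciPy, with the grid size and offset range as illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def random_elastic_deform(image, label, grid=8, max_offset=10.0, seed=None):
    """Grid-based random elastic deformation applied identically to a
    single-channel image and its binary label: random offsets at coarse
    control nodes, bilinearly interpolated to a dense displacement field.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random offsets at (grid+1) x (grid+1) control nodes, per axis.
    nodes = rng.uniform(-max_offset, max_offset, size=(2, grid + 1, grid + 1))
    # Bilinear (order=1) upsampling of the node offsets to full resolution.
    dy = zoom(nodes[0], (h / (grid + 1), w / (grid + 1)), order=1)
    dx = zoom(nodes[1], (h / (grid + 1), w / (grid + 1)), order=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([yy + dy, xx + dx])
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    # Nearest-neighbour sampling keeps the warped label strictly binary.
    warped_lbl = map_coordinates(label, coords, order=0, mode="reflect")
    return warped_img, warped_lbl
```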

The Prediction of Cryptocurrency Prices Using eXplainable Artificial Intelligence based on Deep Learning (설명 가능한 인공지능과 CNN을 활용한 암호화폐 가격 등락 예측모형)

  • Taeho Hong;Jonggwan Won;Eunmi Kim;Minsu Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.129-148 / 2023
  • Bitcoin is a blockchain technology-based digital currency that has been recognized as a representative cryptocurrency and a financial investment asset. Due to its highly volatile nature, Bitcoin has attracted great attention from investors and the public, and numerous studies have been conducted on price and trend prediction using machine learning and deep learning. This study employed LSTM (Long Short Term Memory) and CNN (Convolutional Neural Networks), which have shown strong predictive performance in the finance domain, to enhance the classification accuracy of Bitcoin price trend prediction. XAI (eXplainable Artificial Intelligence) techniques were applied to the predictive model to enhance its explainability and interpretability by providing a comprehensive explanation of the model. In the empirical experiment, a CNN was applied to technical indicators and Google Trends data to build a Bitcoin price trend prediction model, and the CNN model using both technical indicators and Google Trends data clearly outperformed the other models using neural networks, SVM, and LSTM. SHAP (Shapley Additive exPlanations) was then applied to the predictive model to obtain explanations of the output values: important prediction drivers among the input variables were extracted through global interpretation, and the model's decision process for each instance was explained through local interpretation. The results show that the proposed research framework delivers both improved classification accuracy and explainability by using CNN, Google Trends data, and SHAP.
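As a rough sketch of the global/local SHAP workflow the abstract describes, the snippet below attaches a gradient-based SHAP explainer to a toy Keras CNN; the input shapes, the tiny model, and the random data are illustrative assumptions, not the paper's architecture or features.

```python
import numpy as np
import shap
import tensorflow as tf

# Toy stand-in for the paper's CNN over technical-indicator and
# Google Trends features: 32-day windows x 14 features (assumed shapes).
x_train = np.random.rand(200, 32, 14).astype("float32")
y_train = np.random.randint(0, 2, 200)           # up / down labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 14)),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, verbose=0)

# Global interpretation: SHAP attributions over a sample of instances;
# averaging absolute values per feature ranks the prediction drivers.
explainer = shap.GradientExplainer(model, x_train[:50])
shap_values = explainer.shap_values(x_train[:10])  # per-class attributions
```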

Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan;Jie Lin;Haixia Li;Jun Xu;Tianjing Zhang;Hao Chen;Henry C. Woodruff;Guangyao Wu;Siqi Zhang;Yikai Xu;Philippe Lambin
    • Korean Journal of Radiology / v.22 no.6 / pp.983-993 / 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality with optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than hybrid iterative reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by the model-based and hybrid iterative reconstructions. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean reduction of 93.9% compared to standard-dose CT. Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
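What makes CycleGAN suitable here is that the two scans need not be pixel-aligned: a cycle-consistency penalty lets the translation train on unpaired images. A minimal sketch of that loss term, assuming two generator networks are supplied (the networks, argument names, and the weight `lam` are illustrative, not the study's code):

```python
import tensorflow as tf

def cycle_consistency_loss(g_ld2sd, g_sd2ld, uldct, sdct, lam=10.0):
    """Core CycleGAN training signal for dose-level translation.

    g_ld2sd : generator mapping ULDCT images toward standard-dose look;
    g_sd2ld : the reverse mapping. Each image is translated and then
    mapped back, and the round trip is penalized with an L1 norm.
    """
    cycled_ld = g_sd2ld(g_ld2sd(uldct))   # ULDCT -> SD -> ULDCT
    cycled_sd = g_ld2sd(g_sd2ld(sdct))    # SD -> ULDCT -> SD
    return lam * (tf.reduce_mean(tf.abs(cycled_ld - uldct))
                  + tf.reduce_mean(tf.abs(cycled_sd - sdct)))
```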

Performance of ChatGPT on the Korean National Examination for Dental Hygienists

  • Soo-Myoung Bae;Hye-Rim Jeon;Gyoung-Nam Kim;Seon-Hui Kwak;Hyo-Jin Lee
    • Journal of dental hygiene science / v.24 no.1 / pp.62-70 / 2024
  • Background: This study aimed to evaluate the accuracy of ChatGPT's responses to questions from the Korean national dental hygienist examination and, through an analysis of its incorrect responses, to identify the predominant types of errors. Methods: To evaluate ChatGPT-3.5's performance by question type, the researchers classified the 200 questions of the 49th National Dental Hygienist Examination into recall, interpretation, and solving types. The questions were strategically modified to counteract potential misunderstandings arising from implied meanings or Korean technical terminology. To assess ChatGPT-3.5's ability to apply previously acquired knowledge, each question was first converted to a subjective (open-ended) format; if ChatGPT-3.5 generated an incorrect response, the original multiple-choice format was then provided. The 200 questions were input into ChatGPT-3.5 and the generated responses were analyzed. The researchers evaluated the accuracy of each response by question type and categorized the incorrect responses (logical, information, and statistical errors). Finally, a response was counted as a hallucination when ChatGPT presented misleading information, answering something untrue as if it were true. Results: ChatGPT's responses to the national examination were 45.5% accurate. Accuracy by question type was 60.3% for recall and 13.0% for problem-solving questions. The accuracy rate for the subjective solving questions was 13.0%, while accuracy on the objective versions increased to 43.5%. The most common type of incorrect response was the logical error, accounting for 65.1% of all errors, and 100 of the 102 incorrectly answered questions involved hallucination. Conclusion: ChatGPT-3.5 has limited ability to provide evidence-based correct responses to the Korean national dental hygienist examination. Dental hygienists in education or clinical practice should therefore view artificial intelligence-generated materials critically.
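The scoring in this protocol reduces to per-type accuracy plus a breakdown of error categories. A minimal sketch of that tabulation, with an invented toy table in place of the study's hand-graded 200 responses:

```python
import pandas as pd

# Illustrative grading table; the real study graded 200 responses by hand.
df = pd.DataFrame({
    "question_type": ["recall", "interpretation", "solving", "recall"],
    "correct":       [True,     False,            False,     True],
    "error_type":    [None,     "logical",        "information", None],
})

accuracy_by_type = df.groupby("question_type")["correct"].mean()
error_breakdown = df.loc[~df["correct"], "error_type"].value_counts(normalize=True)
print(accuracy_by_type, error_breakdown, sep="\n")
```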

Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.149-162 / 2014
  • Course guidance is a mentoring process performed before students register for upcoming classes. It plays a very important role in checking students' degree audits and in advising on the classes to be taken in the coming semester, and it is intimately involved with graduation assessment and completion of ABEEK certification. Currently, course guidance is performed manually by advisers at most universities in Korea because no electronic systems for it exist. Lacking such systems, advisers must analyze each student's degree audit together with the curriculum information of their own departments, a process whose complexity often causes human error. An electronic system is thus essential to avoid human error in course guidance. A relational data model-based system could solve the problems of the manual approach, but such systems have limitations: curricula and certification systems change with new university policies or surrounding circumstances, and whenever they change, the schema of the existing system must be changed accordingly. Relational systems are also insufficient for semantic search because of the difficulty of extracting semantic relationships between subjects. In this paper, we model a course mentoring ontology based on an analysis of a computer science curriculum, the structure of the degree audit, and ABEEK certification. An ontology-based course guidance system is also proposed to overcome the limitations of existing methods and to make the course mentoring process effective for both advisers and students. In the proposed system, all data consist of ontology instances. To create them, an ontology population module was developed using the JENA framework, which is designed for building semantic web and linked data applications. In the population module, mapping rules are designed to connect parts of the degree audit to corresponding parts of the course mentoring ontology. All ontology instances are generated from the degree audits of students participating in the course mentoring test, and the generated instances are saved to JENA TDB as a triple repository after an inference step using the JENA inference engine. A user interface for course guidance was implemented using Java and the JENA framework. Once an adviser or a student enters student information, such as name and student number, into the request form of the user interface, the proposed system provides mentoring results based on the student's degree audit and rules that check the scores for each part of the curriculum, such as liberal arts subjects, major subjects, and MSC subjects covering mathematics and basic science. Recall and precision were used to evaluate the performance of the proposed system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department attended the verification of the results derived from the proposed system. Experimental results using real data from the participating students show that the proposed course guidance system based on the course mentoring ontology always provides correct course mentoring results. Advisers can also save the time needed to analyze each student's degree audit and to calculate the score for each part. As a result, the proposed ontology-based system resolves the difficulties of manual mentoring and derives mentoring results as correct as those produced by hand.
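The paper's stack is Java with the JENA framework and TDB. As a rough Python analogue only (not the authors' implementation), the sketch below shows the same populate-then-query pattern with rdflib; the namespace, class, and property names are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical terms standing in for the course mentoring ontology.
CM = Namespace("http://example.org/course-mentoring#")

g = Graph()
g.bind("cm", CM)

# Populate instances from a mocked degree-audit record.
student = CM["student_20140123"]
g.add((student, RDF.type, CM.Student))
g.add((student, CM.completed, CM["subject_DataStructures"]))
g.add((CM["subject_DataStructures"], RDF.type, CM.MajorSubject))
g.add((CM["subject_DataStructures"], CM.credits, Literal(3)))

# Query the triples for the mentoring report: completed major subjects.
q = """
SELECT ?subject WHERE {
  ?student cm:completed ?subject .
  ?subject a cm:MajorSubject .
}
"""
for row in g.query(q, initNs={"cm": CM}):
    print(row.subject)
```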

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.81-90 / 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and the setup margin were evaluated by Monte Carlo based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the VARIAN Eclipse planning system, including the planned dose distribution and tumor volume information, was used as input to the Monte Carlo simulation program. The simulation program was developed on Linux using open-source packages, GNU C++ and the ROOT data analysis framework. All misalignments of patient setup were assumed to follow the central limit theorem; thus, systematic and random errors were generated according to Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV volume was analyzed from the simulation results. In order to verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each input of systematic and random errors independently, with the standard deviation used to generate patient setup errors varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum CTV dose $D_{min}^{syst}$ decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV volume increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum CTV dose: the minimum dose $D_{min}^{rand}$ fell from 100.45% to 94.80% and the mean dose $\bar{D}_{rand}$ from 100.46% to 97.87%, while the standard deviation of the CTV dose ${\Delta}D_{rand}$ increased from 0.01% to 0.63%. After calculating the margin size for each of the systematic and random errors, the "population ratio" was introduced and applied to verify the margin recipe. The conventional margin formula was found to satisfy the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo based simulation program should be useful for studying patient setup error and CTV dose coverage under variations of margin size and setup error.
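The abstract's procedure (Gaussian setup errors at a given standard deviation, 2,000 trials, then CTV dose statistics) can be sketched as below; the integer-voxel shift via np.roll, the toy dose grid, and all array shapes are simplifying assumptions, not the paper's GNU C++/ROOT implementation.

```python
import numpy as np

def simulate_setup_errors(dose, ctv_mask, sigma_mm, voxel_mm=1.0,
                          n_trials=2000, seed=0):
    """Monte Carlo sketch of setup-error effects on CTV dose.

    dose     : 3-D planned dose grid (percent of prescription dose)
    ctv_mask : boolean mask of CTV voxels on the same grid
    Each trial draws one Gaussian shift (a systematic error); a random
    error would instead redraw the shift for every treatment fraction.
    """
    rng = np.random.default_rng(seed)
    d_min, d_mean = [], []
    for _ in range(n_trials):
        shift = np.rint(rng.normal(0.0, sigma_mm / voxel_mm, 3)).astype(int)
        ctv_dose = np.roll(dose, shift, axis=(0, 1, 2))[ctv_mask]
        d_min.append(ctv_dose.min())
        d_mean.append(ctv_dose.mean())
    return np.mean(d_min), np.mean(d_mean), np.std(d_mean)

# Toy example: uniform dose with a lower-dose border and a central CTV.
dose = np.pad(np.full((40, 40, 40), 100.45), 10, constant_values=50.0)
ctv = np.zeros_like(dose, dtype=bool)
ctv[20:40, 20:40, 20:40] = True
print(simulate_setup_errors(dose, ctv, sigma_mm=5.0))
```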

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far, and previous research on interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect sufficient data. Physical sensors such as the accelerometer, magnetic field sensor, and gyroscope, by contrast, are less vulnerable to privacy issues and can collect a large amount of data in a short time. This paper proposes a deep learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a redefinition of part of the user's interaction behavior: whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompaniment and conversation. First, a data preprocessing method is introduced consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors; normalization was performed for each x, y, z axis value of the sensor data; and sequence data were generated with the sliding window method. The sequence data then become the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of 3 convolutional layers without a pooling layer, to preserve the temporal information of the sequence data. The LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features; they consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is the cross-entropy function, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128; dropout is applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001, decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively; both the F1 score and accuracy of the model were higher than those of the majority vote classifier, support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, as well as transfer learning methods that adapt models trained on the training data to evaluation data that follows a different distribution, so as to obtain a model with robust recognition performance against data variations not seen at training time.
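The abstract gives enough architectural detail for a sketch: three convolutional layers with no pooling, two 128-cell LSTM layers, dropout on the LSTM inputs, a softmax head, ADAM at 0.001 with 0.99 exponential decay per epoch, and a mini-batch of 128. Below is a minimal Keras rendition under assumed window length, channel count, and filter sizes (the paper does not state these here):

```python
import tensorflow as tf

# Sketch of the described classifier: 3 conv layers (no pooling),
# dropout on the LSTM inputs, two 128-cell LSTM layers, softmax head.
WINDOW, CHANNELS, CLASSES = 128, 9, 2   # e.g. 3 sensors x 3 axes (assumed)

init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),
    tf.keras.layers.Dropout(0.5),            # dropout on the LSTM inputs
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])

# ADAM at the stated initial rate, decaying by 0.99 once per epoch;
# steps_per_epoch must match the real training-set size / batch size.
steps_per_epoch = 100
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    0.001, decay_steps=steps_per_epoch, decay_rate=0.99, staircase=True)
model.compile(optimizer=tf.keras.optimizers.Adam(lr),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# Train with model.fit(x, y, batch_size=128, ...) per the abstract.
```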

백악기 미국 걸프만 퇴적층의 지구조적, 퇴적학적, 석유지질학적 고찰 (A Review of the Tectonic, Sedimentologic Framework and Petroleum Geology of the Cretaceous U.S. Gulf Coast Sedimentary Sequence)

  • Cheong Dae-Kyo
    • The Korean Journal of Petroleum Geology / v.4 no.1_2 s.5 / pp.27-39 / 1996
  • In the Cretaceous, the Gulf Coast Basin evolved as a marginal sag basin. Thick clastic and carbonate sequences cover the disturbed and diapirically deformed salt layer. The salinities of the Cretaceous Gulf Coast Basin probably matched those of the Holocene Persian Gulf, as evidenced by the widespread development of supratidal anhydrite. The major Lower Cretaceous reservoir formations are the Cotton Valley, Hosston, and Travis Peak siliciclastics, and the Sligo, Trinity (Pine Island, Pearsall, Glen Rose), Edwards, and Georgetown/Buda carbonates. Source rocks are down-dip offshore marine shales and marls, and seals are up-dip shales, dense limestones, or evaporites. During this period, the entire Gulf Basin was a shallow sea which, until the end of the Cretaceous, was rimmed to the southwest by shallow marine carbonates while fine-grained terrigenous clastics were deposited on the northern and western margins of the basin. The main Upper Cretaceous reservoir groups of the Gulf Coast, deposited during a major sea level rise and the resulting deep-water conditions, are the Woodbine/Tuscaloosa sands, the Austin chalk and carbonates, and the Taylor and Navarro sandstones. Source rocks are down-dip offshore shales and seals are up-dip shales. Major trap types of the Lower and Upper Cretaceous include salt-related anticlines, from low-relief pillows to complex salt diapirs. Growth fault structures with rollover anticlines on downthrown fault blocks are significant Gulf Coast traps. Permeability barriers, up-dip pinch-out sand bodies, and unconformity truncations also play a key role in oil exploration of the Cretaceous Gulf Coast reservoirs. The sedimentary sequences of the major Cretaceous reservoir rocks match the regressive phases of the global sea level curve well, suggesting that the Cretaceous Gulf Coast stratigraphy reflects a response to eustatic sea level change throughout its history. Thus, of the three main factors controlling sedimentation in the Gulf Coast Basin (tectonic subsidence, sediment input, and eustatic sea level change), sea level ranks first during this period.
