• Title/Summary/Keyword: Learning Evaluation System


Evaluation and Predicting PM10 Concentration Using Multiple Linear Regression and Machine Learning (다중선형회귀와 기계학습 모델을 이용한 PM10 농도 예측 및 평가)

  • Son, Sanghun;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_3
    • /
    • pp.1711-1720
    • /
    • 2020
  • Particulate matter (PM) generated artificially during the recent period of rapid industrialization and urbanization moves and disperses according to weather conditions and adversely affects human skin and the respiratory system. The purpose of this study is to predict the PM10 concentration in Seoul using meteorological factors as the input dataset for multiple linear regression (MLR), support vector machine (SVM), and random forest (RF) models, and to compare and evaluate the performance of the models. First, the PM10 concentration data obtained at 39 air quality monitoring sites (AQMS) in Seoul were divided into training and validation datasets (8:2 ratio). Nine meteorological factors (mean, maximum, and minimum temperature; precipitation; average and maximum wind speed; wind direction; yellow dust; and relative humidity), obtained by the automatic weather system (AWS), composed the input dataset of the models. The coefficients of determination (R2) between the observed PM10 concentration and those predicted by the MLR, SVM, and RF models were 0.260, 0.772, and 0.793, respectively, and the RF model predicted the PM10 concentration best. Among the AQMS used for model validation, the Gwanak-gu and Gangnam-daero AQMS are relatively close to an AWS, and the SVM and RF models were highly accurate at these sites. The Jongno-gu AQMS is relatively far from the AWS, but since the PM10 concentrations of two adjacent AQMS were used for model training, both models also presented high accuracy there. By contrast, because the Yongsan-gu AQMS was relatively far from the other AQMS and the AWS, both models performed poorly there.
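The model comparison above rests on the coefficient of determination (R²) between observed and predicted PM10 concentrations. As a minimal sketch of that metric, assuming hypothetical observed and predicted arrays rather than the study's data:

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination between observed and predicted values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical PM10 values (ug/m3) on a held-out validation set
obs = np.array([30.0, 45.0, 60.0, 80.0, 55.0])
pred = np.array([32.0, 44.0, 58.0, 75.0, 57.0])
print(round(r_squared(obs, pred), 3))  # → 0.972
```

With the study's 8:2 split, the same computation would be applied to the 20% validation subset for each of the MLR, SVM, and RF models.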

The Realities and Problems of Master Teacher System in China (중국 특급교사제(特級敎師制) 운영실태 분석 및 시사점)

  • Kim, Ee-Gyeong;LI, Jia-Yi
    • Korean Journal of Comparative Education
    • /
    • v.24 no.6
    • /
    • pp.163-185
    • /
    • 2014
  • Along with concerns about the deteriorating social and economic status of teachers around the world, the Master Teacher System (MTS) has been considered one of the alternatives for transforming the teaching profession into a more attractive job. In this study, the conditions and problems associated with the MTS in China are analyzed to draw implications for South Korea, which recently legalized the MTS. A research framework comprising four research questions was developed based on the controversies surrounding the MTS in South Korea. The main findings show that the MTS in China was introduced to improve teachers' social and economic status along with the quality of prospective teachers. A very small number of master teachers are selected through rigorous standards, including a longer service period. They are given additional monetary and non-monetary compensation in return for their teaching-learning leadership and responsibilities. As highly respected educators, they enjoy lifelong benefits, although they are evaluated annually. It is evident that the MTS has contributed to improving the attractiveness of the teaching profession in China. Nevertheless, there are many problems associated with the selection standards and methods for master teachers, their roles, compensation, evaluation, and terms of service. Recent criticism arising from the changing circumstances surrounding education in China makes the MTS more questionable. Based on the findings, major implications for the future direction of the MTS in South Korea are drawn and suggested.

GPT-enabled SNS Sentence writing support system Based on Image Object and Meta Information (이미지 객체 및 메타정보 기반 GPT 활용 SNS 문장 작성 보조 시스템)

  • Dong-Hee Lee;Mikyeong Moon;Bong-Jun Choi
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.3
    • /
    • pp.160-165
    • /
    • 2023
  • In this study, we propose an SNS sentence-writing assistance system that utilizes YOLO and GPT to help users write image-accompanied posts, such as those on SNS. We use the YOLO model to extract objects from images inserted during writing, also extract meta-information such as GPS coordinates and creation time, and use them as prompt values for GPT. The YOLO model was trained on food image data, and its mAP score is about 0.25 on average. GPT was trained on 1,000 blog posts on the topic of 'restaurant reviews', and the model trained in this study was used to generate sentences from two types of keywords extracted from the images. A closed-ended survey was conducted to evaluate the practicality of the generated sentences and to allow clear analysis of the results. The questionnaire provided the inserted image and the keyword-based sentences and included three evaluation items. The results showed that the keywords extracted from the images produced meaningful sentences. Through this study, we found that the accuracy of image-based sentence generation depends on the relationship between the image keywords and the GPT training content.
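A sketch of the prompt-composition step described above: objects detected by YOLO plus photo meta-information (GPS and creation time) are folded into a single GPT prompt. The function name, prompt wording, and example values are illustrative assumptions, not the paper's implementation:

```python
from datetime import datetime

def build_prompt(objects, gps, created_at):
    """Compose a text-generation prompt from detected image objects
    and photo metadata (GPS coordinates and creation time)."""
    keywords = ", ".join(objects)
    return (
        "Write a short SNS post about a restaurant visit.\n"
        f"Detected objects: {keywords}\n"
        f"Location (GPS): {gps}\n"
        f"Taken at: {created_at:%Y-%m-%d %H:%M}"
    )

# Hypothetical detector output and EXIF metadata
prompt = build_prompt(["pasta", "wine glass"],
                      (37.5665, 126.9780),
                      datetime(2023, 7, 1, 19, 30))
print(prompt)
```

The resulting string would then be sent to the language model as the generation prompt.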

An Artificial Intelligence-Based Automated Echocardiographic Analysis: Enhancing Efficiency and Prognostic Evaluation in Patients With Revascularized STEMI

  • Yeonggul Jang;Hyejung Choi;Yeonyee E. Yoon;Jaeik Jeon;Hyejin Kim;Jiyeon Kim;Dawun Jeong;Seongmin Ha;Youngtaek Hong;Seung-Ah Lee;Jiesuck Park;Wonsuk Cho;Hong-Mi Choi;In-Chang Hwang;Goo-Yeong Cho;Hyuk-Jae Chang
    • Korean Circulation Journal
    • /
    • v.54 no.11
    • /
    • pp.743-756
    • /
    • 2024
  • Background and Objectives: Although various cardiac parameters on echocardiography have clinical importance, their measurement by conventional manual methods is time-consuming and subject to variability. We evaluated the feasibility, accuracy, and predictive value of an artificial intelligence (AI)-based automated system for echocardiographic analysis in patients with ST-segment elevation myocardial infarction (STEMI). Methods: The AI-based system was developed using a nationwide echocardiographic dataset from five tertiary hospitals; it automatically identified views, then segmented and tracked the left ventricle (LV) and left atrium (LA) to produce volume and strain values. Both conventional manual measurements and AI-based fully automated measurements of the LV ejection fraction and global longitudinal strain, and the LA volume index and reservoir strain, were performed in 632 patients with STEMI. Results: The AI-based system accurately identified the necessary views (overall accuracy, 98.5%) and successfully measured LV and LA volumes and strains in all cases in which conventional methods were applicable. Inter-method analysis showed strong correlations between measurement methods, with Pearson coefficients ranging from 0.81 to 0.92 and intraclass correlation coefficients ranging from 0.74 to 0.90. For the prediction of clinical outcomes (a composite of all-cause death, re-hospitalization due to heart failure, ventricular arrhythmia, and recurrent myocardial infarction), AI-derived measurements showed predictive value independent of clinical risk factors, comparable to that of conventional manual measurements. Conclusions: Our fully automated AI-based approach for LV and LA analysis on echocardiography is feasible and provides accurate measurements, comparable to conventional methods, in patients with STEMI, offering a promising solution for comprehensive echocardiographic analysis, reduced workloads, and improved patient care.
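Inter-method agreement above is summarized partly by Pearson coefficients between manual and AI-derived measurements. A minimal sketch of that computation, using hypothetical LV ejection fraction values (%) rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Hypothetical paired measurements: manual vs. AI-derived LVEF (%)
manual_ef = [55.0, 48.0, 60.0, 35.0, 52.0]
ai_ef     = [54.0, 50.0, 58.0, 37.0, 53.0]
print(round(pearson_r(manual_ef, ai_ef), 2))  # → 0.99
```

In the study itself this comparison was done per parameter (LVEF, GLS, LA volume index, LA reservoir strain) across 632 patients, alongside intraclass correlation coefficients.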

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity with respect to the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few cases of realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. Then, we conduct a performance comparison among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models to help predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is widely used in both research and practice to this day. The model utilizes five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of the previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans.
Since the 1980s, researchers in Korea have examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model. Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy of 71.1%, and the logit model the lowest of 69%. However, we confirm that these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% interval of the predicted probability of default, but a relatively low accuracy of 61.5% for the 90-100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN indicate more desirable results, since they show higher accuracy for both the 0-10% and 90-100% intervals of the predicted probability of default but lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, both the LightGBM and XGBoost models place a relatively large number of samples in the 0-10% and 90-100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models, since they classify a large number of cases into the two extreme intervals of the predicted probability of default, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage with respect to different evaluation standards. For instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that combine multiple machine learning classifiers and conduct majority voting to maximize overall performance.
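The interval-wise comparison described above can be sketched as follows. The probability scores, labels, and 0.5 decision threshold are illustrative assumptions, not the paper's data:

```python
def accuracy_by_probability_bin(probs, labels, n_bins=10):
    """Split predicted default probabilities into equal intervals and
    compute classification accuracy (0.5 threshold) within each interval.
    Returns None for intervals that contain no samples."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # p == 1.0 goes to the last bin
        bins[idx].append((p >= 0.5) == bool(y))
    return [sum(b) / len(b) if b else None for b in bins]

# Hypothetical predicted default probabilities and true default labels
probs  = [0.05, 0.08, 0.12, 0.55, 0.60, 0.93, 0.97]
labels = [0,    0,    1,    1,    0,    1,    1]
acc = accuracy_by_probability_bin(probs, labels)
print(acc[0], acc[-1])  # accuracy in the 0-10% and 90-100% intervals
```

Examining the extreme intervals this way is what allows the paper to distinguish models with similar overall accuracy but very different behavior on the most confident predictions.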

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as part of user interaction behavior: whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation. First, we introduce a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data were generated using the sliding-window method. The sequence data became the input for the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution.
We expect to obtain a model that exhibits robust recognition performance against changes in data not considered in the model learning stage.
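The preprocessing steps described above (per-axis normalization followed by sliding-window sequence generation) can be sketched as follows. The window length, step size, and random input are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def make_sequences(data, window, step):
    """Per-axis z-normalization followed by sliding-window segmentation.
    `data` has shape (time, channels), e.g. 9 channels for the x/y/z axes
    of accelerometer, magnetic field, and gyroscope sensors."""
    data = np.asarray(data, dtype=float)
    data = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-8)
    return np.stack([data[i:i + window]
                     for i in range(0, len(data) - window + 1, step)])

# Hypothetical synchronized sensor stream: 100 time steps x 9 channels
rng = np.random.default_rng(0)
raw = rng.standard_normal((100, 9))
seqs = make_sequences(raw, window=20, step=10)
print(seqs.shape)  # → (9, 20, 9): 9 windows of 20 steps x 9 channels
```

Each window would then be fed to the CNN, whose feature maps the LSTM layers consume in sequence.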

Analysis of Curriculum Development Processes and the Relationship between General Statements of the Curriculum and Science Curriculum (교육과정 개발 체제 및 총론과 과학과 교육과정의 연계성 분석)

  • Lee, Yang-Rak
    • Journal of The Korean Association For Science Education
    • /
    • v.24 no.3
    • /
    • pp.468-480
    • /
    • 2004
  • It has been criticized that there is a discrepancy between the 'general statements' of the curriculum and the subject-matter curricula. Possible reasons are as follows. The developers of the general statements were curriculum specialists who lacked the expertise to develop general statements and guidelines for subject-matter curricula that reflect the characteristics of science content, to examine the developed science curriculum, and to give feedback to the science curriculum developers. Under the present curriculum development system, in which a curriculum is developed in ten months or less by a research team commissioned unpredictably and on short notice, it is difficult to develop a valid and precise science curriculum reflecting the intent of the general statements and teachers' needs. The inadequacy of these development processes resulted in (1) inconsistent statements about the school year to which the differentiated curriculum is to be applied, (2) abstract and ambiguous statements about the characteristics, teaching-learning, and assessment guidelines of enrichment activities, and (3) failure to reduce science content to a reasonable level. Therefore, curriculum development centers should be designated in advance to conduct basic research on an ongoing basis and organized into a cooperative system. Two years or more of development time and wider participation of scientists are recommended to develop a more valid and precise science curriculum. In addition, commentaries on the science curriculum should be published before textbook writing begins.

An Electric Load Forecasting Scheme with High Time Resolution Based on Artificial Neural Network (인공 신경망 기반의 고시간 해상도를 갖는 전력수요 예측기법)

  • Park, Jinwoong;Moon, Jihoon;Hwang, Eenjun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.11
    • /
    • pp.527-536
    • /
    • 2017
  • With the recent development of the smart grid industry, the need for an efficient EMS (Energy Management System) has increased. In particular, to reduce electric load and energy cost, sophisticated electric load forecasting and an efficient smart grid operation strategy are required. In this paper, for more accurate electric load forecasting, we extend the data collected at demand time to a high time resolution and construct an artificial neural network-based forecasting model appropriate for high-time-resolution data. Furthermore, to improve forecasting accuracy, we transform sequential time series data into continuous data in a two-dimensional space, addressing the problem that machine learning methods cannot reflect the periodicity of time series data. In addition, to incorporate external factors such as temperature and humidity at the target time resolution, we estimate their values using linear interpolation. We then apply the PCA (Principal Component Analysis) algorithm to the feature vector composed of external factors to remove data that have little correlation with the power data. Finally, we evaluate our model through 5-fold cross-validation. The results show that forecasting at higher time resolutions improves accuracy, and the best error rate of 3.71% was achieved at the 3-minute resolution.
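The linear-interpolation step for external factors can be sketched as follows, assuming hypothetical hourly temperature readings resampled to the 3-minute resolution mentioned in the abstract:

```python
import numpy as np

# Hypothetical hourly temperature readings at minutes 0, 60, and 120
hourly_minutes = np.array([0.0, 60.0, 120.0])
hourly_temp = np.array([10.0, 12.0, 16.0])  # degrees Celsius

# Estimate the temperature every 3 minutes by linear interpolation
target_minutes = np.arange(0.0, 121.0, 3.0)
temp_3min = np.interp(target_minutes, hourly_minutes, hourly_temp)

print(temp_3min[0], temp_3min[10], temp_3min[-1])  # minutes 0, 30, and 120
```

The interpolated series then has one value per 3-minute forecasting step, so external factors can be aligned with the high-resolution load data before PCA is applied.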

Appropriation of Human Resources into Human Assets and Its Typology (인적자원의 인적자산화 과정과 자산유형)

  • Jeong, Kioh
    • Journal of Service Research and Studies
    • /
    • v.9 no.2
    • /
    • pp.77-88
    • /
    • 2019
  • Appropriation is the process of transforming resources into property. John Locke investigated the appropriation of natural resources into landed property, which grounded the jurisprudential basis of private land ownership. In the same way, human resources are transformed into human assets. This appropriation process for human property, very rarely studied so far, is the focus of this paper. The appropriation of intangible property is by far easier than the appropriation of tangible property. Learning is a process of embodiment, which naturally means a process of appropriation. For material resources, which exist outside the human body, appropriation necessarily needs special philosophical and institutional justification. In the appropriation process for intangibles, the investigator found, the appropriator and the learner can either be the same or be differentiated: in the former case substantial human assets are created, while in the latter relational human assets are built. After discussing the appropriation process, the investigator proceeds to the problem of visualizing the invisible, discussing evaluation and assessment issues from this perspective. The qualification system is particularly noted as a system for regulating substantial human assets, including their issuance and registration. The work done in this paper should contribute to understanding the law of education and the law of qualification.

Development of 'Healthy Couple Relationship' Curriculum in High School Based on Backward Design (백워드 디자인에 기반한 고등학교 '건강한 커플관계' 교육과정(안) 개발)

  • Yu, In-Young;Park, Mi-Jeong
    • Journal of Korean Home Economics Education Association
    • /
    • v.31 no.3
    • /
    • pp.1-21
    • /
    • 2019
  • The purpose of this study is to develop a 'Healthy Couple Relationship' curriculum (plan) for high school home economics based on backward design. In this study, the content elements of 'Healthy Couple Relationship' were extracted through literature analysis, and the need for these contents was surveyed among 197 home economics teachers and 154 high school students. Based on this, the 'Healthy Couple Relationship' curriculum (plan) for the high school home economics curriculum was developed using backward design and verified by an expert group. The results are as follows. First, measured on a 5-point Likert scale, the mean scores of the need for the content elements of 'Healthy Couple Relationship' were 4.39 for teachers and 4.02 for students; the content element 'understanding dating violence' scored 4.70 for teachers and 4.19 for students. Second, the developed 'Healthy Couple Relationship' curriculum consists of two templates, one for each unit, including curriculum goals, unit composition and unit goals, 8 learning subjects and content elements, and evaluation plans comprising 24 lesson plans. This study is meaningful in that it proposes a 'Healthy Couple Relationship' curriculum as an elective course in the high school home economics curriculum in preparation for the high school credit system, and lays the foundation for opening elective courses.