• Title/Summary/Keyword: artificial intelligence-based models


Prediction of Traffic Congestion in Seoul by Deep Neural Network (심층인공신경망(DNN)과 다각도 상황 정보 기반의 서울시 도로 링크별 교통 혼잡도 예측)

  • Kim, Dong Hyun;Hwang, Kee Yeon;Yoon, Young
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.4
    • /
    • pp.44-57
    • /
    • 2019
  • Various studies have been conducted to relieve traffic congestion in metropolitan cities through accurate traffic flow prediction. Most studies assume that past traffic patterns repeat in the future; models based on this assumption fall short when irregular traffic patterns occur abruptly. Instead, approaches that predict traffic patterns through big data analytics and artificial intelligence have emerged. In particular, deep learning algorithms such as RNNs have been widely used to predict temporal traffic flow as a time series, but they do not perform well for long-term prediction. In this paper, we take into account various external factors that may affect traffic flow and use deep neural networks to model the correlation between multi-dimensional context information and temporal traffic speed patterns. Our model, trained on traffic data from the TOPIS system operated by the city of Seoul, Korea, can predict traffic speed on a specific date with accuracy reaching nearly 90%. We expect that the accuracy can be improved further by taking additional factors, such as accidents and construction, into account.
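
The abstract above describes a feed-forward deep neural network that correlates multi-dimensional context information with temporal traffic speed. The sketch below is a minimal, hypothetical illustration of that kind of setup in Keras; the feature counts, layer sizes, and placeholder data are assumptions, not the authors' architecture.

```python
# Minimal sketch (assumed architecture): a DNN that maps context features
# (e.g., day of week, hour, weather, holiday flag) plus recent link speeds
# to a predicted speed for one road link.
import numpy as np
import tensorflow as tf

n_context = 8    # hypothetical number of context features
n_history = 12   # hypothetical number of past speed observations per link

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_context + n_history,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted speed (km/h) for the target link and time
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Placeholder data: rows of [context features | recent speeds] and the observed future speed.
X = np.random.rand(1000, n_context + n_history).astype("float32")
y = (np.random.rand(1000, 1) * 100.0).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```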

Stiffness Enhancement of Piecewise Integrated Composite Robot Arm using Machine Learning (머신 러닝을 이용한 PIC 로봇 암 강성 향상에 대한 연구)

  • Ji, Seungmin;Ham, Seokwoo;Cheon, Seong S.
    • Composites Research
    • /
    • v.35 no.5
    • /
    • pp.303-308
    • /
    • 2022
  • PIC (Piecewise Integrated Composite) is a new concept for designing composite structures in which various stacking sequences are assigned mosaically in order to improve the mechanical properties of laminated composites. Machine learning, a sub-category of artificial intelligence, refers to the process by which computers learn continuously from data, make predictions, and adjust without further programming. In the present study, a tapered box-beam PIC robot arm for carrying and transferring wide, thin LCD displays was designed using machine learning in order to increase structural stiffness. Essential training data were collected during a preliminary FE analysis from reference elements, i.e., intentionally designated elements among the finite element models. In addition, triaxiality values for each finite element were obtained to judge the dominant external loading type, such as tensile, compressive, or shear. The machine learning model was trained and evaluated using the training data, and the loading types of the elements were predicted once the required level of accuracy was achieved. Three types of stacking sequences, known to be robust against specific loading types, were mosaically assigned to the PIC robot arm. Subsequently, a bending-type FE analysis was carried out, and the results showed that the PIC robot arm had increased stiffness compared to a conventional composite robot arm with a single stacking sequence.
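
The element-wise loading-type classification described above can be illustrated with the following minimal sketch: features of the designated reference elements (here, stress triaxiality plus element coordinates, as an assumption) train a classifier that labels each element as tension-, compression-, or shear-dominated, and a stacking sequence is then assigned per label. The feature set, classifier, and stacking sequences are placeholders, not the authors' model.

```python
# Minimal sketch (assumed features, classifier, and stacking sequences): predict
# the dominant loading type of each finite element and assign a stacking
# sequence per predicted label.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder training data from "reference elements": [triaxiality, x, y, z]
X_ref = rng.normal(size=(500, 4))
y_ref = rng.integers(0, 3, size=500)   # 0 = tension, 1 = compression, 2 = shear

X_tr, X_te, y_tr, y_te = train_test_split(X_ref, y_ref, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# If the accuracy requirement is met, label every element of the full FE model
# and map each label to a stacking sequence (illustrative choices below).
stacking_by_label = {0: "[0/90]s", 1: "[45/-45]s", 2: "[0/45/-45/90]s"}
X_all = rng.normal(size=(10000, 4))
assigned = [stacking_by_label[int(p)] for p in clf.predict(X_all)]
```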

A Study on the Application of Virtual Space Design Using the Blended Education Method - A La Carte Model Based on the Creation of Infographic - (블렌디드 교육방식을 활용한 가상공간 디자인 적용에 관한 연구 -알 라 카르테 모델 (A La Carte) 인포그래픽 가상공간 제작을 중심으로-)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.279-284
    • /
    • 2022
  • As a study of blended learning in design education, this paper proposes that more advanced, learner-led customized design education is possible. The depth of understanding gained in face-to-face classes and the advantages of non-face-to-face classes can complement each other appropriately in remote instruction. Advanced artificial intelligence and big data technology can provide personalized, finely segmented learning materials and effective learning methods tailored to learners' levels and interests, based on quantified data from design classes. This paper proposes maximizing class efficiency by applying the A La Carte model, a method that overcomes the limitations of time and space. Remote classes can be taken anytime and anywhere, and they can also help bridge the gap in educational quality for students living in underprivileged areas. With the goal of fostering creative, convergence-oriented future talent, education is changing at the rapid pace of technological development, and learning methods must adapt accordingly. An analysis of the infographic virtual space design and construction process based on the proposed A La Carte model is presented. Rather than simply acquiring knowledge, learners are expected to be able to sort, distinguish, and internalize knowledge, remaking it as their own.

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.293-301
    • /
    • 2023
  • Unstandardized medical data collection and management are still conducted manually, and studies are being conducted to classify CT data using deep learning to solve this problem. However, most studies develop models based only on the axial plane, the basic CT slice. Because CT images, unlike general images, depict only human anatomy, simply reconstructing the CT scans can provide richer anatomical features. This study explores ways to achieve higher performance through various methods of converting CT scans to 2D images beyond the axial plane. Training used 1,042 CT scans from five body parts, and 179 internal test scans plus an external dataset of 448 scans were collected for model evaluation. To develop the deep learning model, we used InceptionResNetV2 pre-trained on ImageNet as a backbone and re-trained all layers of the model. In the experiments, the reconstruction-data model achieved 99.33% in body part classification, 1.12% higher than the axial model, and the axial model was higher only for brain and neck in contrast-agent classification. In conclusion, more accurate performance could be achieved when learning from data that reveals anatomical features better than from axial slices alone.
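
The transfer-learning setup described above (an ImageNet-pre-trained InceptionResNetV2 backbone with all layers re-trained and a five-class body-part head) can be sketched as follows; the input size, optimizer, and learning rate are assumptions, not values reported in the paper.

```python
# Minimal sketch (assumed input size and hyperparameters): ImageNet-pre-trained
# InceptionResNetV2 backbone, all layers trainable, with a five-class head.
import tensorflow as tf

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = True  # re-train the entire backbone, as described in the abstract

outputs = tf.keras.layers.Dense(5, activation="softmax")(backbone.output)  # five body parts
model = tf.keras.Model(backbone.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # CT slices rendered as 3-channel images
```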

Research on optimal safety ship-route based on artificial intelligence analysis using marine environment prediction (해양환경 예측정보를 활용한 인공지능 분석 기반의 최적 안전항로 연구)

  • Dae-yaoung Eeom;Bang-hee Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.05a
    • /
    • pp.100-103
    • /
    • 2023
  • Recently, with the development of maritime autonomous surface ships and eco-friendly ships, research on generating and evaluating optimal routes that considers various marine environments is needed, as demand for accurate and detailed real-time marine environment prediction information expands. An algorithm that can calculate the optimal route for smart ships while reducing marine environmental risk and uncertainty in energy consumption was developed in two stages. In the first stage, a profile was created by combining marine environmental information with ship location and status information from the Automatic Identification System (AIS). In the second stage, a model that defines a marine environment energy map was developed using the profile results; a regression model was generated by applying Random Forest, a machine learning technique, to about 600,000 data points. The Random Forest coefficient of determination (R2) was 0.89, showing very high reliability. The Dijkstra shortest-path algorithm was then applied to the marine environment predictions for June 1 to 3, 2021 to calculate the optimal safety route and display it on the map. The route calculated by the Random Forest regression model was streamlined and derived in consideration of the marine environment prediction information. The concept of route calculation based on real-time marine environment prediction information presented in this study is expected to yield realistic, safe routes that reflect the movement tendencies of ships, and to be extended to a range of economic, safety, and eco-friendliness evaluation models in the future.
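
The two-stage procedure described above can be illustrated with the following sketch: a Random Forest regression estimates an environment-dependent cost, and Dijkstra's algorithm finds the minimum-cost route over a grid whose edges carry the predicted cost. The features, grid construction, and synthetic data are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed features and grid): Random Forest cost model plus
# Dijkstra shortest path over a grid weighted by the predicted cost.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stage 1: regression from profiles of [wave height, wind speed, current, heading]
# (illustrative features) to an "energy/risk" cost, fitted on synthetic data.
X = rng.random((5000, 4))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 5000)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Stage 2: weight each edge of a sea-area grid with the cost predicted for the
# forecast conditions along that edge, then run Dijkstra.
G = nx.grid_2d_graph(20, 20)
for u, v in G.edges():
    env = rng.random((1, 4))                      # forecast features for this edge
    G[u][v]["weight"] = float(rf.predict(env)[0])

route = nx.dijkstra_path(G, source=(0, 0), target=(19, 19), weight="weight")
print(f"{len(route)} waypoints on the minimum-cost route")
```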


A Model for Constructing Learner Data in AI-based Mathematical Digital Textbooks for Individual Customized Learning (개별 맞춤형 학습을 위한 인공지능(AI) 기반 수학 디지털교과서의 학습자 데이터 구축 모델)

  • Lee, Hwayoung
    • Education of Primary School Mathematics
    • /
    • v.26 no.4
    • /
    • pp.333-348
    • /
    • 2023
  • Clear analysis and diagnosis of the various characteristics of individual students is the most important prerequisite for realizing individualized teaching and learning, which is considered the most essential function of AI-based mathematics digital textbooks. In this study, analysis factors and tools for diagnosing individualized learning, together with a model for the corresponding data collection and analysis, were derived for mathematics AI digital textbooks. To this end, in line with the Ministry of Education's recent plan to adopt AI digital textbooks, the demand for AI digital textbooks in mathematics, prior research on personalized learning and the data it requires, and learner-analysis factors in mathematics digital platforms were reviewed. As a result, the factors for learning analysis were summarized as learning readiness, process and performance, achievement, and weaknesses; the factors for propensity analysis as learning duration, problem-solving time, concentration, and math learning habits; and the factors for emotional analysis as confidence, interest, anxiety, learning motivation, and value perception, with attitude analysis included as an additional factor for learner analysis. In addition, correct/incorrect answer data for each problem, learning progress rates, screen recordings of student activities, event data, eye-tracking devices, and self-report questionnaires were proposed as data collection tools for these factors. Finally, a data collection model that organizes these factors as a time series before, during, and after learning was proposed.
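
Purely as an illustration of the proposed time-series data collection model, the following sketch organizes some of the factors listed above as time-stamped learner records collected before, during, and after learning; the field names and structure are assumptions, not the paper's schema.

```python
# Minimal, illustrative sketch (assumed schema): time-stamped learner records
# collected before, during, and after learning in an AI math digital textbook.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Literal, Optional

Phase = Literal["before", "during", "after"]

@dataclass
class LearnerRecord:
    student_id: str
    phase: Phase
    timestamp: datetime
    correct: Optional[bool] = None          # correct/incorrect answer data
    solve_seconds: Optional[float] = None   # problem-solving time
    progress_rate: Optional[float] = None   # learning progress rate (0-1)
    concentration: Optional[float] = None   # e.g., derived from eye tracking
    survey: Dict[str, float] = field(default_factory=dict)  # confidence, interest, anxiety, ...

# Records accumulate into a per-student time series that can be analyzed later.
log: List[LearnerRecord] = []
log.append(LearnerRecord("s001", "during", datetime.now(),
                         correct=True, solve_seconds=42.0, concentration=0.8))
log.append(LearnerRecord("s001", "after", datetime.now(),
                         survey={"confidence": 4.0, "interest": 3.5}))
```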

Exploring the Potential of AI Tools in University Writing Assessment: Comparing Evaluation Criteria between Humans and Generative AI (대학 글쓰기 평가에서 인공지능 도구의 활용 가능성 탐색: 인간과 생성형 AI 간 평가 기준 비교)

  • So-Young Park;ByungYoon Lee
    • Journal of Practical Engineering Education
    • /
    • v.16 no.5_spc
    • /
    • pp.663-676
    • /
    • 2024
  • This study, from the perspective of Learning with AI, aimed to explore the educational applicability of writing evaluation criteria generated by artificial intelligence. Specifically, it sought to systematically analyze the similarities and differences between AI-generated criteria and those developed by humans. The research questions were set as follows: 1) What characteristics do the writing evaluation criteria generated by AI tools have? 2) What similarities and differences exist between the writing evaluation criteria generated by humans and by AI tools? GPT and Claude were selected as representative AI tools and tasked with generating writing evaluation criteria for undergraduate students; these AI-generated criteria were then compared with human-created criteria. The results showed a commonality: both humans and AI tools placed the highest importance on categories related to content. However, while humans evaluated based on three main categories (content, organization, and language use), the AI tools included additional categories such as format and citations, original thinking, and overall impression. In general, humans tended to include more detailed items within each evaluation category, while AI tools presented more concise items. Notably, differences were observed in language-related aspects and scoring systems, influenced by the AI tools having been developed primarily on English. This study offers insights into the development of collaborative evaluation models between humans and AI and explores the potential role of AI as a complementary tool in educational assessment.

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market still has a short history, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPLs have become a major investment in recent years, as capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate market call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that can determine whether a benchmark yield will be achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017; the total number of property records was 2,291. As independent variables, only those related to the dependent variable were selected from 11 variables describing the characteristics of the real estate. To select the variables, one-to-one t-tests, stepwise logistic regression, and decision trees were used, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached, because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the effectiveness of the model. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models while adjusting the threshold value to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the predictive model built with the dependent variable defined by the 12% standard rate of return showed the best average hit ratio, at 64.60%. To propose an optimal prediction model based on the selected dependent variable and seven independent variables, we constructed prediction models using five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model) and compared them. To do this, 10 sets of training and testing data were extracted using the 10-fold validation method; after building the models on these data, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the prediction models built using discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model performed best. This study shows that it is effective to use the seven independent variables and an artificial neural network prediction model in the NPL market going forward. The proposed model predicts in advance whether the 12% return on new properties will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
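
The model-comparison procedure described above (a binary benchmark-return target, seven property-level features, and 10-fold cross-validated hit ratios) can be sketched as follows with scikit-learn; the data are placeholders and the genetic algorithm linear model is omitted, so this is an illustration of the protocol rather than a reproduction of the study.

```python
# Minimal sketch (placeholder data): 10-fold cross-validated hit ratios for the
# binary "benchmark 12% return reached" target with seven property-level features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2291, 7))             # stand-ins for the seven independent variables
y = (rng.random(2291) < 0.5).astype(int)   # 1 = benchmark return achieved (synthetic)

models = {
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Artificial neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean hit ratio = {scores.mean():.4f}")
```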

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.127-137
    • /
    • 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion during trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors began in Korea during the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as likely to reoffend is lower than the cost of misclassifying a person who will reoffend as unlikely to, since the former only increases monitoring costs while the latter incurs substantial social and economic costs. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, defined as the weighted average of FNE (False Negative Error) and FPE (False Positive Error). To verify its usefulness, the model was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other prediction models but also reduced the misclassification cost most effectively.
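
The two-step approach described above can be sketched as follows: an XGBoost classifier is trained, and the decision threshold is then swept to minimize a weighted sum of false negatives and false positives. The cost weights and synthetic data are assumptions, not the values used in the study.

```python
# Minimal sketch (assumed data and cost weights): train XGBoost, then choose the
# classification threshold that minimizes the weighted misclassification cost.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(0, 1, 2000) > 0).astype(int)   # 1 = recidivism (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss").fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Assumed asymmetric costs: a missed recidivist (false negative) costs more
# than unnecessary monitoring (false positive).
C_FN, C_FP = 5.0, 1.0
best_t, best_cost = 0.5, float("inf")
for t in np.linspace(0.05, 0.95, 91):
    tn, fp, fn, tp = confusion_matrix(y_te, (proba >= t).astype(int), labels=[0, 1]).ravel()
    cost = C_FN * fn + C_FP * fp
    if cost < best_cost:
        best_t, best_cost = t, cost
print(f"cost-minimizing threshold: {best_t:.2f} (total cost {best_cost:.0f})")
```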

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection is a form of document classification, so document classification techniques have been widely used in this field, whereas document summarization techniques have received little attention. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. The need to study the integration of document summarization technology in the domestic news data environment has therefore become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance, while for DT (Decision Tree) the full-text-based model performed somewhat better; for LR (Logistic Regression), the summary-based model performed best. Nonetheless, the results did not show a statistically significant difference between the summary-based and full-text-based models. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model suggests the possibility of performance improvement. This study is an experimental application of extractive summarization to fake news detection research using various machine learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison between different summarization technologies; an in-depth analysis applying various analytical techniques to a larger data volume would be helpful in the future.
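
As an illustration of the pipeline described above, the sketch below uses a simple TF-IDF-based extractive summarizer to keep the top-scoring sentences of each article and then trains a logistic regression detector on the summarized text; the summarizer, features, and toy data are assumptions, not the paper's Korean-language implementation.

```python
# Minimal sketch (assumed approach): TF-IDF extractive summarization followed by
# a logistic-regression fake news detector trained on the summaries.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def extractive_summary(article: str, k: int = 3) -> str:
    """Keep the k sentences with the largest total TF-IDF weight, in original order."""
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    if len(sentences) <= k:
        return article
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-k:])
    return ". ".join(sentences[i] for i in keep) + "."

# Toy corpus standing in for labeled real/fake news articles (1 = fake).
articles = [
    "The city announced a new transit plan. Critics say it lacks funding. Supporters cite a study. A vote is expected soon.",
    "A viral post claims a miracle cure was found. No published study supports it. Experts call it misleading. The post keeps spreading.",
] * 100
labels = [0, 1] * 100

summaries = [extractive_summary(a) for a in articles]
X = TfidfVectorizer(max_features=5000).fit_transform(summaries)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("summary-based accuracy:", model.score(X_te, y_te))
```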