• Title/Summary/Keyword: potential learning


Can ChatGPT Pass the National Korean Occupational Therapy Licensure Examination? (ChatGPT는 한국작업치료사면허시험에 합격할 수 있을까?)

  • Hong, Junhwa; Kim, Nayeon; Min, Hyemin; Yang, Hamin; Lee, Sihyun; Choi, Seojin; Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation, v.13 no.1, pp.65-74, 2024
  • Objective: This study assessed ChatGPT, an artificial intelligence system based on a large language model, for its ability to pass the National Korean Occupational Therapy Licensure Examination (NKOTLE). Methods: Using NKOTLE questions from 2018 to 2022 provided by the Korea Health and Medical Personnel Examination Institute, the study employed English prompts to determine the accuracy of ChatGPT in providing correct answers. Two researchers independently conducted the entire process, and their average accuracy was used to determine whether ChatGPT passed each of the five yearly examinations. The degree of agreement between the ChatGPT answers obtained by the two researchers was also assessed. Results: ChatGPT passed the 2020 examination but failed the examinations of the other four years. Its accuracy on questions related to medical regulations ranged from 25% to 57%, whereas its accuracy on other questions exceeded 60%. ChatGPT showed strong agreement between the researchers except on medical regulation questions, and this agreement was significantly correlated with accuracy. Conclusion: There are still limitations to applying ChatGPT to questions influenced by language or culture. Future studies should explore its potential as an educational tool for students majoring in occupational therapy through optimized prompts and continuous learning from the data.
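A rough illustration of the scoring step described above: the minimal Python sketch below (with a hypothetical data layout, not the authors' code) averages the two researchers' accuracies against an answer key and computes their simple agreement rate.

```python
# Minimal sketch with made-up data: score recorded ChatGPT answers against the
# official answer key and measure agreement between the two researchers' runs.

def accuracy(answers, key):
    """Fraction of items where the recorded ChatGPT answer matches the key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def percent_agreement(run_a, run_b):
    """Simple agreement rate between the two researchers' ChatGPT runs."""
    return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

# Hypothetical answers for one exam year (choices 1-5 per question).
key          = [1, 3, 2, 5, 4]
researcher_1 = [1, 3, 2, 4, 4]
researcher_2 = [1, 3, 2, 5, 2]

mean_acc = (accuracy(researcher_1, key) + accuracy(researcher_2, key)) / 2
print(f"mean accuracy {mean_acc:.0%}, agreement {percent_agreement(researcher_1, researcher_2):.0%}")
```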

Development and Assessment of LSTM Model for Correcting Underestimation of Water Temperature in Korean Marine Heatwave Prediction System (한반도 고수온 예측 시스템의 수온 과소모의 보정을 위한 LSTM 모델 구축 및 예측성 평가)

  • Na Kyoung Im; Hyunkeun Jin; Gyundo Pak; Young-Gyu Park; Kyeong Ok Kim; Yonghan Choi; Young Ho Kim
    • The Sea: Journal of the Korean Society of Oceanography, v.29 no.2, pp.101-115, 2024
  • Ocean heatwaves are emerging as a major issue due to global warming, posing a direct threat to marine ecosystems and to humanity through decreased food resources and reduced carbon absorption capacity of the oceans. Consequently, predicting ocean heatwaves in the vicinity of the Korean Peninsula is becoming increasingly important for marine environmental monitoring and management. In this study, an LSTM model was developed to correct the underestimation of ocean heatwaves caused by the coarse vertical grid system of the Korean Peninsula Ocean Prediction System. Based on the ocean heatwave predictions for the Korean Peninsula conducted in 2023, together with those generated by the LSTM model, heatwave prediction performance was evaluated for the East Sea, Yellow Sea, and South Sea areas surrounding the Korean Peninsula. The LSTM model developed in this study significantly improved the prediction of sea surface temperatures during periods of temperature increase in all three regions. However, its effectiveness during periods of temperature decrease, or before the onset of temperature rise, was limited. This demonstrates the potential of the LSTM model to address the underestimation of ocean heatwaves caused by the coarse vertical grid system during periods of enhanced stratification. It is anticipated that data-driven artificial intelligence models will be used more widely in the future to improve the prediction performance of dynamical models or even to replace them.
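As a sketch of how such a correction model might be set up (illustrative only; the window length, layer sizes, and synthetic data below are assumptions, not the authors' configuration), an LSTM can be trained to map a window of the dynamical model's predicted sea surface temperatures to an observation-corrected value:

```python
# Minimal PyTorch sketch: learn a correction from model-predicted SST windows to
# observed SST, addressing a systematic cold bias. All data here are synthetic.
import torch
import torch.nn as nn

class SSTCorrector(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # corrected SST for the last step

    def forward(self, x):                        # x: (batch, window, 1) predicted SST
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # (batch, 1)

model = SSTCorrector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

predicted = torch.randn(64, 7, 1) + 20.0         # 7-day windows of predicted SST (°C)
observed = predicted[:, -1] + 0.5                # pretend the model runs ~0.5 °C cold

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(predicted), observed)
    loss.backward()
    optimizer.step()
```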

Quality of Radiomics Research on Brain Metastasis: A Roadmap to Promote Clinical Translation

  • Chae Jung Park; Yae Won Park; Sung Soo Ahn; Dain Kim; Eui Hyun Kim; Seok-Gu Kang; Jong Hee Chang; Se Hoon Kim; Seung-Koo Lee
    • Korean Journal of Radiology, v.23 no.1, pp.77-88, 2022
  • Objective: Our study aimed to evaluate the quality of radiomics studies on brain metastases based on the radiomics quality score (RQS), the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and the Image Biomarker Standardization Initiative (IBSI) guidelines. Materials and Methods: PubMed MEDLINE and EMBASE were searched for articles on radiomics for evaluating brain metastases published until February 2021. Of the 572 articles, 29 relevant original research articles were included and evaluated according to the RQS, TRIPOD checklist, and IBSI guidelines. Results: External validation was performed in only three studies (10.3%). The median RQS was 3.0 (range, -6 to 12), with a low basic adherence rate of 50.0%. Adherence was low for comparison with a "gold standard" (10.3%), statement of potential clinical utility (10.3%), cut-off analysis (3.4%), reporting of calibration statistics (6.9%), and provision of open science and data (3.4%). None of the studies involved test-retest or phantom studies, prospective studies, or cost-effectiveness analyses. The overall rate of adherence to the TRIPOD checklist was 60.3%, and adherence was low for reporting of the title (3.4%), blind assessment of outcome (0%), description of the handling of missing data (0%), and presentation of the full prediction model (0%). The majority of studies lacked pre-processing steps, with bias-field correction, isovoxel resampling, skull stripping, and gray-level discretization performed in only six (20.7%), nine (31.0%), four (3.8%), and four (13.8%) studies, respectively. Conclusion: The overall scientific and reporting quality of radiomics studies on brain metastases published during the study period was insufficient. Radiomics studies should adhere to the RQS, TRIPOD, and IBSI guidelines to facilitate the translation of radiomics into the clinical field.

A Comparative Study on Reservoir Level Prediction Performance Using a Deep Neural Network with ASOS, AWS, and Thiessen Network Data

  • Hye-Seung Park; Hyun-Ho Yang; Ho-Jun Lee; Jongwook Yoon
    • Journal of the Korea Society of Computer and Information, v.29 no.3, pp.67-74, 2024
  • In this paper, we present a study aimed at analyzing how different rainfall measurement methods affect the performance of reservoir water level predictions. This work is particularly timely given the increasing emphasis on climate change and the sustainable management of water resources. To this end, we have employed rainfall data from ASOS, AWS, and Thiessen Network-based measures provided by the KMA Weather Data Service to train our neural network models for reservoir yield predictions. Our analysis, which encompasses 34 reservoirs in Jeollabuk-do Province, examines how each method contributes to enhancing prediction accuracy. The results reveal that models using rainfall data based on the Thiessen Network's area rainfall ratio yield the highest accuracy. This can be attributed to the method's accounting for precise distances between observation stations, offering a more accurate reflection of the actual rainfall across different regions. These findings underscore the importance of precise regional rainfall data in predicting reservoir yields. Additionally, the paper underscores the significance of meticulous rainfall measurement and data analysis, and discusses the prediction model's potential applications in agriculture, urban planning, and flood management.
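As an illustration of the Thiessen-based input described above (a sketch under assumed station weights and feature choices, not the authors' pipeline), areal rainfall can be computed as a weighted sum of station observations and fed to a small neural network:

```python
# Minimal sketch: Thiessen-weighted areal rainfall for one catchment, used as an
# input feature of a simple feed-forward model. Weights and values are illustrative.
import numpy as np
import torch
import torch.nn as nn

thiessen_weights = np.array([0.45, 0.35, 0.20])   # catchment-area share per station
station_rainfall = np.array([12.0, 8.5, 20.0])    # mm for one observation step
areal_rainfall = float(thiessen_weights @ station_rainfall)

# Hypothetical feature vector: areal rainfall plus current storage rate.
features = torch.tensor([[areal_rainfall, 0.62]], dtype=torch.float32)

model = nn.Sequential(                            # toy DNN for next-day storage rate
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
predicted_level = model(features)
```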

Assessment of Educational Needs in Uzbekistan: For the Capacity Building in Textiles and Fashion Higher Education (우즈베키스탄 섬유·패션 고등교육의 역량 강화를 위한 교육협력사업 수요조사)

  • Cho, Ahra; Lee, Hyojeong; Jin, Byoungho Ellie; Lee, Yoon-Jung
    • Journal of Korean Home Economics Education Association, v.35 no.3, pp.169-190, 2023
  • Uzbekistan, one of the top five cotton-producing countries in the world, primarily focuses its textile and fashion industry on raw cotton exports and the sewing industry. For Uzbekistan to achieve high added value, it is essential for the textile and fashion industry, which is currently at the CMT (cut, make, and trim) stage, to upgrade to OEM (original equipment manufacturing), ODM (original design manufacturing), and OBM (original brand manufacturing). South Korea recognizes Uzbekistan as a potential manufacturing base and trading partner and has invested Official Development Assistance (ODA) funds for the development of Uzbekistan's textiles and apparel sector. This study aims to evaluate Uzbekistan's fashion higher education in the context of global competitiveness and measure the need and prospects for education ODA from the Korean government in this field. Comprehensive investigations, including surveys of academics, industry experts, and government officials, in-depth interviews, and focus group interviews, were conducted to understand Uzbekistan's current fashion education environment. According to the research results, despite the textile and fashion sectors playing a pivotal role in the Uzbek economy, there is room for improvement in the curricula and teaching and learning methods of the fashion higher education programs. This study holds significance as foundational data for establishing education ODA strategies.

Development of Applied Music Education Program for Creative and Convergent Thinking-With a Focus on the Capstone design Class (창의·융합적 사고를 위한 실용음악 교육프로그램 개발-캡스톤디자인 수업을 중심으로)

  • Yun, Sung-Hyo; Han, Kyung-hoon
    • The Journal of the Convergence on Culture Technology, v.10 no.4, pp.285-294, 2024
  • This study aims to enhance learners' creative and integrative thinking through a practical music education program, facilitating high-quality artistic activities and the integration of various disciplines. To achieve this, a practical music education program incorporating the PDIE model was designed, and the content validity of the developed program was verified. Through this process, we researched and described methodologies for multidisciplinary research that can be applied in practical music education. This paper focuses on the fourth session of the study, which deals with the creative and integrative education of practical music and mathematics. The mathematical theory of interest in this research is the Fibonacci sequence, which underlies the golden ratio in art. The goal is to enable balanced and high-quality creative activities through learning and applying the Fibonacci sequence. Additionally, to verify the validity and effectiveness of the instructional plan, including the one used in the 15-week course, we detail the participants involved in the content validation, the procedures of the research, the research tools used, and the methods for collecting and analyzing various data. Through this, we confirmed the potential of creative and integrative education in higher practical music education and sought to develop educational methodologies for cultivating various creative talents in subsequent research.
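Since the fourth session builds on the Fibonacci sequence and its link to the golden ratio, a short illustrative Python snippet (not taken from the paper) shows the property the lesson draws on: the ratio of consecutive Fibonacci numbers converges to φ ≈ 1.618.

```python
# Illustrative only: consecutive Fibonacci ratios approach the golden ratio.
def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fib = fibonacci(12)
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.5f}")   # tends toward (1 + 5 ** 0.5) / 2 ≈ 1.61803
```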

Real-Time 3D Volume Deformation and Visualization by Integrating NeRF, PBD, and Parallel Resampling (NeRF, PBD 및 병렬 리샘플링을 결합한 실시간 3D 볼륨 변형체 시각화)

  • Sangmin Kwon; Sojin Jeon; Juni Park; Dasol Kim; Heewon Kye
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.189-198, 2024
  • Research combining deep-learning-based models and physical simulations is making important advances in the medical field. Such work extracts the necessary information from medical image data and enables fast and accurate prediction of the deformation of the skeleton and soft tissue based on physical laws. This study proposes a system that integrates Neural Radiance Fields (NeRF), Position-Based Dynamics (PBD), and Parallel Resampling to generate 3D volume data and to deform and visualize them in real time. NeRF uses 2D images and camera coordinates to produce high-resolution 3D volume data, while PBD enables real-time deformation and interaction through physics-based simulation. Parallel Resampling improves rendering efficiency by dividing the volume into tetrahedral meshes and utilizing GPU parallel processing. The system renders the deformed volume data using ray casting, leveraging GPU parallel processing for fast real-time visualization. Experimental results show that the system can generate and deform 3D data without expensive equipment, demonstrating potential applications in engineering, education, and medicine.
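For a flavor of the PBD stage (a sketch of a standard distance-constraint projection, not the paper's implementation; the mesh edge and values are made up), each simulation step moves pairs of mesh vertices so their separation relaxes back toward its rest length:

```python
# Minimal sketch of one Position-Based Dynamics distance-constraint projection.
import numpy as np

def project_distance(p1, p2, rest_length, w1=1.0, w2=1.0, stiffness=1.0):
    """Move both points so their separation approaches rest_length."""
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist < 1e-9:
        return p1, p2
    correction = (dist - rest_length) * delta / dist
    p1 = p1 + stiffness * (w1 / (w1 + w2)) * correction
    p2 = p2 - stiffness * (w2 / (w1 + w2)) * correction
    return p1, p2

a = np.array([0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.5])                     # stretched edge of a tetrahedron
a, b = project_distance(a, b, rest_length=1.0)
print(np.linalg.norm(b - a))                      # pulled back to 1.0
```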

Ethical and Legal Implications of AI-based Human Resources Management (인공지능(AI) 기반 인사관리의 윤리적·법적 영향)

  • Jungwoo Lee; Jungsoo Lee; Ji Hun Kwon; Minyi Cha; Kyu Tae Kim
    • Journal of the Institute of Convergence Signal Processing, v.25 no.2, pp.100-112, 2024
  • This study investigates the ethical and legal implications of utilizing artificial intelligence (AI) in human resource management, with a particular focus on AI interviews in the recruitment process. AI, defined as the capability of computer programs to perform tasks associated with human intelligence such as reasoning, learning, and adapting, is increasingly being integrated into HR practices. The deployment of AI in recruitment, specifically through AI-driven interviews, promises efficiency and objectivity but also raises significant ethical and legal concerns. These concerns include potential biases in AI algorithms, transparency in AI decision-making processes, data privacy issues, and compliance with existing labor laws and regulations. By analyzing case studies and reviewing relevant literature, this paper aims to provide a comprehensive understanding of these challenges and propose recommendations for ensuring ethical and legal compliance in AI-based HR practices. The findings suggest that while AI can enhance recruitment efficiency, it is imperative to establish robust ethical guidelines and legal frameworks to mitigate risks and ensure fair and transparent hiring practices.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim; Kim, Ji Hui; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product-group clustering step could also be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed here.
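A compact sketch of the pipeline described above, assuming gensim's Word2Vec implementation and a toy corpus in place of the Statistics Korea microdata (all names and figures below are hypothetical):

```python
# Minimal sketch: embed tokenized product names, gather names similar to a KSIC-style
# seed term by cosine similarity, and sum the matched sales as the group's market size.
from gensim.models import Word2Vec

product_names = [["stainless", "kitchen", "sink"],
                 ["kitchen", "sink", "faucet"],
                 ["office", "chair"],
                 ["mesh", "office", "chair"]]
sales = {"sink": 120, "faucet": 45, "chair": 300}      # illustrative sales figures

model = Word2Vec(product_names, vector_size=300, window=15, min_count=1, seed=1)

seed_term, threshold = "sink", 0.0                      # threshold is the tunable cut-off
group = {seed_term} | {w for w, sim in model.wv.most_similar(seed_term, topn=10)
                       if sim >= threshold}

market_size = sum(sales.get(w, 0) for w in group)
print(sorted(group), market_size)
```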

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of available content is becoming ever more important. Rather than treating an information request as a simple string, efforts are being made to better reflect the user's intention in search results, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of searches for stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their quality. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance its effectiveness. From these processes, the study offers three contributions: a practical and simple automatic knowledge extraction method that can be applied in practice; the possibility of performance evaluation through a simple problem definition; and increased expressiveness of the knowledge, obtained by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a domain-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; in particular, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with the related stocks.
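As a rough sketch of the scoring component (an NTN-style bilinear score function in PyTorch; the dimensions, the entity pairing, and the stock names are assumptions rather than the paper's specification), one score function is kept per stock and a new entity is assigned to the stock whose function scores it highest:

```python
# Minimal sketch of a Neural Tensor Network style score function, one per stock.
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, k)                          # standard linear term
        self.u = nn.Linear(k, 1, bias=False)                    # combines the k slices

    def forward(self, e1, e2):
        # e1, e2: (batch, dim); bilinear term e1^T W_k e2 for each slice k
        bilinear = torch.einsum("bi,kij,bj->bk", e1, self.W, e2)
        return self.u(torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=1))))

dim = 100                                              # top-100 one-hot entity vocabulary
score_fns = {name: NTNScore(dim) for name in ["StockA", "StockB"]}   # hypothetical stocks

entity = torch.zeros(1, dim); entity[0, 7] = 1.0       # one-hot vector of a new entity
context = torch.zeros(1, dim); context[0, 3] = 1.0     # illustrative paired entity
scores = {name: fn(entity, context).item() for name, fn in score_fns.items()}
predicted_stock = max(scores, key=scores.get)
```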