• Title/Summary/Keyword: 시간 학습 (time learning)

Search Results: 3,697

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.57-78 / 2023
  • COVID-19, which emerged in Wuhan, China in November 2019, spread beyond China in early 2020 and became a worldwide pandemic by March 2020. For a highly contagious virus like COVID-19, advance prevention and active treatment of confirmed cases are important, but because the virus spreads so quickly, it is even more important to identify confirmed cases rapidly and prevent further transmission. However, the PCR test used to check for infection is costly and time consuming, and although self-test kits are easy to access, purchasing a kit for every test is a burden. If COVID-19 positivity could be determined from the sound of a cough, anyone could easily check their status anytime, anywhere, with considerable economic advantages. In this study, we experimented with methods for identifying COVID-19 infection from cough sounds. Cough sound features were extracted through MFCC, Mel-spectrogram, and spectral contrast. To ensure sound quality, noisy data were removed using SNR, and only the cough segments were extracted from each voice file through chunking. Because the objective is binary classification of COVID-19 positive and negative cases, models were trained with XGBoost, LightGBM, and FCNN, algorithms commonly used for classification, and their results were compared. Additionally, we conducted a comparative experiment on model performance using multidimensional vectors obtained by converting cough sounds into both images and vectors. The experimental results showed that the LightGBM model using basic health-status information together with features obtained by converting cough sounds into multidimensional vectors through MFCC, Mel-spectrogram, spectral contrast, and spectrogram achieved the highest accuracy of 0.74.

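As a rough illustration of the pipeline described in the abstract above, the sketch below extracts MFCC, Mel-spectrogram, and spectral contrast features with librosa, summarizes them into fixed-length vectors, and trains a LightGBM classifier. The synthetic waveforms, feature dimensions, and model settings are placeholders, not the authors' actual configuration.

```python
# Sketch of the cough-sound pipeline: hand-crafted spectral features + LightGBM.
# Synthetic noise stands in for real cough recordings, which would be loaded with librosa.load().
import numpy as np
import librosa
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SR = 22050  # sampling rate assumed for all recordings

def extract_features(y: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarize one cough waveform as a fixed-length vector of MFCC,
    Mel-spectrogram, and spectral contrast statistics."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    # Mean and standard deviation over time for each feature matrix.
    return np.concatenate([np.r_[f.mean(axis=1), f.std(axis=1)] for f in (mfcc, mel_db, contrast)])

rng = np.random.default_rng(0)
signals = [rng.standard_normal(2 * SR) for _ in range(40)]   # 40 two-second placeholder "coughs"
labels = rng.integers(0, 2, size=40)                         # 1 = COVID-19 positive (hypothetical)

X = np.vstack([extract_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=42)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```
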
Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study focused primarily on the development of an Explainable Artificial Intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network in mathematics education. This academic network was built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within the mathematics education research network, and SHAP, an XAI technique, was used to analyze the reasons behind the AI's assessment of impactful papers. Key features identified through the XAI for determining impactful papers in the field of mathematics education included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'. It became evident that papers, authors, and journals all play significant roles when evaluating individual papers. When domestic and international mathematics education research were analyzed and compared, variations in these patterns were observed; notably, the significance of 'co-authorship network PageRank' was more pronounced in domestic mathematics education research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI and provides researchers with strategic direction when writing papers. For instance, expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing the impact of a paper. Based on these findings, researchers can gain a clear understanding of how their work is perceived and evaluated in academia and identify the key factors influencing these evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers using an explainable AI model, a task that has traditionally consumed significant time and resources. This approach not only presents a new paradigm that can be applied to evaluations in various academic fields beyond mathematics education but is also expected to substantially enhance the efficiency and effectiveness of research activities.

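The core workflow of the study above, scoring papers with a Random Forest and explaining the scores with SHAP, can be sketched as follows. The feature names mirror those reported in the abstract, but the data, target definition, and model settings are synthetic placeholders rather than the study's actual network-derived features.

```python
# Sketch of explaining a Random Forest impact model with SHAP.
# Synthetic data; feature names follow the abstract above.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = ["paper_network_pagerank", "citations_per_paper_change",
            "total_citations", "author_h_index_change", "journal_citations_per_paper"]
X = pd.DataFrame(rng.random((500, len(features))), columns=features)
# Hypothetical impact score to be predicted (in the study, derived from the citation network).
y = 3 * X["paper_network_pagerank"] + X["total_citations"] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer gives per-feature SHAP values; their mean |value| ranks feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=features).sort_values(ascending=False)
print(importance)
```
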
Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.131-154 / 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics in massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the field matured, studies began to examine how topics change over time, and interest in dynamic topic modeling, which handles changes in the keywords constituting each topic, is also increasing. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by utilizing the topic information of the previous period to derive topics in subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to show how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships between or among topics. The detailed method of visualizing topics by period is as follows. In the first step, dynamic topic modeling is applied to derive the top keywords of each period and their weights from the text data. In the second step, we obtain vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimensionality reduction on the extracted vectors. We then formulate a semantic vector for each topic by calculating the weighted sum of the reduced keyword vectors using the topic weight of each keyword. In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships between or among the topics based on the visualized result. The change of a topic can be interpreted as follows: from the dynamic topic modeling result, we identify the top 5 rising keywords and the top 5 descending keywords for each period to show how the topic changed. Many existing topic visualization studies visualize the keywords of each topic, but the approach proposed in this study differs from previous work in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and used a Word2vec word embedding model pre-trained on 'Wikipedia', an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and fall of a keyword's topic weight can be usefully used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimensionality reduction techniques to visualize topics by period. The results of this study are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results. In addition, an academic contribution can be acknowledged in that the study lays the foundation for follow-up research using various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.

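A minimal sketch of the semantic-vector step described above is given below: keyword vectors are looked up (random vectors stand in for the pretrained Wikipedia Word2Vec model), reduced to two dimensions, combined into a per-topic weighted sum, and plotted with matplotlib. The keyword lists, weights, and periods are illustrative, not the paper's actual topics.

```python
# Sketch of the semantic visualization: embed topic keywords, reduce to 2-D,
# form a weighted "semantic vector" per topic, and plot topics by period.
# Random vectors replace the pretrained Wikipedia Word2Vec model; topics are toy examples.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
vocab = {w: rng.standard_normal(100) for w in
         ["network", "image", "model", "learning", "language", "graph", "vision", "speech"]}
# Real run: vocab would come from a pretrained embedding, e.g. gensim KeyedVectors.

# topics[period] = [(top keyword, topic weight), ...] from dynamic topic modeling (placeholders).
topics = {
    "2016-2017": [("network", 0.30), ("image", 0.25), ("model", 0.20)],
    "2018-2019": [("learning", 0.35), ("language", 0.25), ("graph", 0.15)],
    "2020-2021": [("vision", 0.30), ("speech", 0.25), ("model", 0.20)],
}

# Reduce all keyword vectors to 2-D, then take the weight-normalized sum per topic.
words = sorted({w for kws in topics.values() for w, _ in kws})
coords = dict(zip(words, PCA(n_components=2).fit_transform(np.vstack([vocab[w] for w in words]))))

def semantic_vector(keyword_weights):
    total = sum(wt for _, wt in keyword_weights)
    return sum(coords[w] * wt for w, wt in keyword_weights) / total

for period, kws in topics.items():
    x, y = semantic_vector(kws)
    plt.scatter(x, y)
    plt.annotate(period, (x, y))
plt.title("Topic semantic vectors by period (toy data)")
plt.show()
```
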
Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.2 / pp.80-98 / 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data transformed according to several statistical distributions, and a dimensionality reduction algorithm selected features based on rankings of the correlation coefficients between the observed data and the input data. The training model was designed with a TimeDistributed wrapper applied to all convolutional layers of a U-Net: because every U-Net input must be at least three-dimensional, the TimeDistributed wrapper allows a U-Net convolutional layer to be applied directly to 5-dimensional time series data while preserving the time axis. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. The dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.

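Two elements highlighted in the abstract above, the Robust/Standard statistical transformations and the TimeDistributed wrapper around convolutional layers, can be sketched as follows. The array shapes, layer sizes, and variable counts are assumptions for illustration and do not reproduce the study's actual U-Net architecture.

```python
# Sketch of (1) Robust/Standard scaling of input fields and (2) TimeDistributed-wrapped
# convolutions applied along the time axis of 5-D (batch, time, lat, lon, variable) data.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import RobustScaler, StandardScaler

# --- 1. Statistical transformation of each predictor variable ---
rng = np.random.default_rng(3)
raw = rng.gamma(2.0, 2.0, size=(1000, 4))        # e.g. 4 predictor variables (illustrative)
robust = RobustScaler().fit_transform(raw)        # median/IQR based, resistant to outliers
standard = StandardScaler().fit_transform(raw)    # zero mean, unit variance

# --- 2. TimeDistributed convolutions over (time, lat, lon, channels) ---
T, H, W, C = 6, 32, 32, 4                         # lead weeks x grid x variables (illustrative)
inputs = tf.keras.Input(shape=(T, H, W, C))
x = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"))(inputs)
x = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"))(x)
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(1, 1, padding="same"))(x)
model = tf.keras.Model(inputs, outputs)           # the same 2-D convs are applied at every time step
model.summary()
```
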
Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.979-995 / 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds for such methods hinders their ability to detect low-intensity fires and to achieve generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural network (CNN), and U-Net have been applied to active fire detection. Therefore, this study proposed an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques with data from the Advanced Himawari Imager and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Applying weighted loss, equal sampling, and image augmentation to address data imbalance further improved performance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that the timely responses enabled by the SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.

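A simplified sketch of the modeling choices named above (an EfficientNet backbone, the Lion optimizer, and a class-weighted loss for the rare fire class) follows. Patch-level binary classification, the band count, and the class weights are assumptions made for brevity, not the study's exact setup.

```python
# Sketch: EfficientNet backbone + Lion optimizer + class-weighted loss for rare fire samples.
import tensorflow as tf

BANDS, SIZE = 10, 64                                   # Himawari-8 AHI bands / patch size (illustrative)
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(SIZE, SIZE, BANDS), pooling="avg")
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(1, activation="sigmoid"),    # fire / no-fire probability per patch
])

model.compile(
    optimizer=tf.keras.optimizers.Lion(learning_rate=1e-4),   # Lion ships with recent Keras/TF releases
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# Class weights push the loss toward the rare positive (fire) class; equal sampling and
# image augmentation would be applied in the input pipeline. Hypothetical call:
# model.fit(train_ds, validation_data=val_ds, class_weight={0: 1.0, 1: 20.0}, epochs=30)
```
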
Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services / v.25 no.1 / pp.99-107 / 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can cause serious problems or product malfunction, and even fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside the product, and many inspections have been performed using non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) that lowers foreign material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make it challenging to detect foreign substances. First, the global contrast of the X-ray image is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen high-frequency signals and local contrast. Third, unsharp masking is applied to enhance edge clearness, making objects more visible. Fourth, a Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Finally, the YOLOv5 algorithm is trained to detect foreign objects. Using the proposed method, experimental results show an improvement of more than 10% in performance metrics such as precision compared to detection on the unenhanced low-quality images.

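The first three enhancement steps described in the abstract above can be sketched with OpenCV as below; CLAHE is assumed here as the local contrast enhancement method, and the RDB super-resolution and YOLOv5 stages are separate models that are not reproduced in this sketch.

```python
# Sketch of steps 1-3: global histogram stretching, local contrast enhancement (CLAHE assumed),
# and unsharp masking. A synthetic grayscale image stands in for a real X-ray scan.
import cv2
import numpy as np

def histogram_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly rescale intensities so the 1st/99th percentiles map to 0/255."""
    lo, hi = np.percentile(img, (1, 99))
    stretched = np.clip((img.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return stretched.astype(np.uint8)

def local_contrast(img: np.ndarray) -> np.ndarray:
    """Boost local contrast with CLAHE (assumed stand-in for the paper's local method)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)

def unsharp_mask(img: np.ndarray, sigma: float = 1.5, amount: float = 1.0) -> np.ndarray:
    """Sharpen edges by adding back the difference from a Gaussian-blurred copy."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

rng = np.random.default_rng(4)
xray = rng.normal(120, 10, size=(256, 256)).clip(0, 255).astype(np.uint8)  # synthetic X-ray stand-in
enhanced = unsharp_mask(local_contrast(histogram_stretch(xray)))
cv2.imwrite("xray_enhanced.png", enhanced)
# The enhanced image would then pass through RDB super-resolution and YOLOv5 detection.
```
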
A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine / v.24 no.4 / pp.67-101 / 2023
  • The utilization of generative AI in the medical field is also being rapidly researched. Access to vast data sets reduces the time and energy spent selecting information. However, as the effort put into content creation decreases, associated problems become more likely. For example, because generative AI learns from data within a set period and generates outcomes, users must judge the accuracy of the results themselves. While the answers may appear plausible, their sources are often unclear, making it difficult to determine their veracity. Additionally, the possibility of results being presented from a biased or distorted perspective cannot currently be ruled out on ethical grounds. Despite these concerns, the field of generative AI continues to advance, and an increasing number of users are leveraging it in various sectors, including the biomedical and life sciences. This raises important legal questions about who bears responsibility, and to what extent, for damage caused by these high-performance AI algorithms. A general overview of the issues with generative AI includes those discussed above, but another set of issues arises from its fundamental nature as a large-scale language model ('LLM'). There is a civil law concern regarding "the memorization of training data within artificial neural networks and its subsequent reproduction". Medical data, by nature, often reflects the personal characteristics of patients, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond those of traditional AI raises legal challenges that cannot be ignored. Examining the technical characteristics of generative AI with a focus on legal issues, especially the protection of personal information, makes it evident that current personal information protection laws, particularly those governing the use of health and medical data, are inadequate. These laws provide processes for anonymizing and de-identifying specific personal information but fall short when generative AI is applied as software in medical devices. To address the functionalities of generative AI in clinical software, a reevaluation and adjustment of the existing laws for the protection of personal information are imperative.

Dental Assistant and Dental Hygienist-comparison with U.S. (치과 보조 인력과 치과위생사-미국의 제도 비교)

  • Youngyuhn Choi
    • Journal of Korean Dental Hygiene Science / v.6 no.2 / pp.65-77 / 2023
  • Background: The shortage of dental hygienists working as assistants is a great concern for dental clinics, while dental hygienists themselves increasingly pursue the role of oral hygiene control and preventive treatment, which is the main role of dental hygienists in the United States. The dental hygienist and dental assistant system in the United States can serve as a reference in these discussions. Methods: Educational requirements for licensure and the scope of work for dental hygienists and dental assistants were investigated using information provided by the American Dental Association (ADA), the American Dental Hygienists' Association, the National Board Dental Hygiene Examination (NBDHE), the American Dental Assistants Association (ADAA), and the Dental Assistants National Board (DANB). Results: In the United States, each state has a different system, but in general, dental hygienists obtain licenses after completing a 2~3-year associate degree program in dental hygiene on top of basic learning skills, and mainly perform tasks related to patient screening procedures, oral hygiene management, and preventive care. Dental assistants can take the licensing test after completing a 9~11-month training course to obtain dental assistant certification. Additional expanded functions typically require passing state qualification tests, completing a training program, obtaining a degree, or gaining clinical experience for a certain period of time, depending on the state. Conclusion: The scope of work of dental hygienists designated by the Medical Service Technologists, etc. Act and its Enforcement Decree in Korea includes the work of both dental hygienists and dental assistants in the United States. If a dental assistant system like that of the United States is introduced to address the current shortage of dental assistants, institutional measures such as adjusting the scope of work and expanding the role of dental hygienists in oral hygiene management and preventive care will be needed, and in-depth discussion is necessary.

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been ranked at the top, can clearly be called a world-leading country in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource for knowledge. As administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, not only has a profound influence on traditional power structures but also exerts a similar influence on the existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deepfakes and other fake images, automated profiling, and AI hallucinations that present fabrications as if they were real, to copyright infringement involving machine learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools. Digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them. In that respect, machines in today's technological society have gone beyond the level of simple assistance, and there are points at which it is difficult to say that the entry of machines into human society is a natural pattern of change driven by advanced technological development, because perspectives on machines will change over time. What matters are the social and cultural implications of changes in the way records are produced as a result of communication and action through machines. In the archive field as well, it is time to research what problems a data-based archive society will face amid technological change toward a hyper-intelligent, hyper-connected society, who will attest to the continued activity of records and data, and what the main drivers of media change will be. This study began with the need to recognize that archives are not only records that result from actions but also data that constitute strategic assets. On this basis, the author considered how the archive can expand its traditional boundaries and achieve reterritorialization in a data-driven society.

The Impact of Edu-Tech and Tangible and Intangible Services of Private Institutes on parents' Intention for Re-Enrollment: The Moderating Effect of Rapport-Building Behavior (학원의 에듀테크특성과 유·무형적서비스가 학부모의 재수강의도에 미치는 영향: 라포형성행동의 조절효과)

  • Jeon, Ji-Yeon;Ha, Tae-Kwan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.19 no.4 / pp.127-139 / 2024
  • This study investigates the impact of edutech characteristics and both tangible and intangible educational services on the intention to re-enroll, which is directly related to the management performance of private institutes, and aims to propose strategies to improve re-enrollment intentions and management performance based on the findings. Private education has grown continuously, complementing the limitations of public education and increasing parental dependence on it. This study tested the hypothesis that edutech characteristics, intangible services, and tangible services, which are increasingly utilized with the development of information and communication technology, influence re-enrollment intentions, and also examined whether rapport-building behavior with parents moderates this relationship. The hypothesis testing showed that content among the edutech characteristics, intangible services such as reliability and empathy, and tangible services such as tangibility and payment accessibility positively impacted re-enrollment intentions. The hypothesis that rapport-building behavior moderates the relationship between educational services and re-enrollment intentions was supported for empathy among the intangible services and tangibility among the tangible services. Based on these findings, the study proposed three strategies to improve the management performance of private institutes. First, in terms of improving and managing edutech characteristics, it suggested introducing and updating edutech content and ensuring operational stability. Second, for improving and managing intangible services, it recommended managing instructor recruitment and training to enhance quality and competence, maintaining professionalism through continuous education by credible institutions, and providing level-based education for students built on qualitatively improved educational programs. Third, to improve and manage tangible services, it suggested setting appropriate tuition fees, offering various payment methods (online, mobile, card, bank transfer) unrestricted by time and place, and providing interiors and facilities that enable focused learning. Additionally, considering the moderating effect of rapport-building behavior, it emphasized that while cost-intensive improvements and management are necessary, making parents perceive a high level of tangibility through rapport-building is also important. Furthermore, given the increasing importance of edutech based on information and communication technology, the study highlighted the need for various support measures, such as government technological support and venture certification support, for institutes with an entrepreneurial spirit that aim to introduce innovative technologies such as AI based on large language models and AR/VR-based metaverse environments. This study is expected to help improve the management performance of private institutes by specifically suggesting items and methods for improvement and management in the educational field.
