• Title/Summary/Keyword: AI learning data


A Study on the Classification of Variables Affecting Smartphone Addiction in Decision Tree Environment Using Python Program

  • Kim, Seung-Jae
    • International journal of advanced smart convergence
    • /
    • v.11 no.4
    • /
    • pp.68-80
    • /
    • 2022
  • Since the advent of AI, technology development to implement complete and sophisticated AI functions has continued. Machine learning and deep learning techniques are mainly used in efforts to achieve full automation. These techniques encompass supervised learning, unsupervised learning, and reinforcement learning as internal technical elements, and in turn use big data analysis to lay the cornerstone for decision-making. Established decisions are then improved through repeated renewal of the decision-making criteria. In other words, big data analysis, which enables data classification and recognition, is important enough to be called a key technical element of AI, so the analysis itself must be sophisticated. In this study, among the various tools that can analyze big data, we use a Python program to find out which variables can affect smartphone addiction in a decision tree environment. We check whether decision-tree classification in Python shows the same performance as other tools and whether it can lend reliability to decisions about the addictiveness of smartphone use. The results show that big data analysis can be performed without problems using any of various statistical tools, such as Python or R.
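
The abstract above centers on decision-tree classification with Python. As a point of reference only (not the paper's own code), the following minimal sketch shows how such an analysis is typically set up with scikit-learn; the CSV file, column names, and tree depth are hypothetical placeholders.

```python
# Hypothetical sketch of a decision-tree analysis of survey variables.
# File name, column names, and hyper-parameters are illustrative, not from the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("smartphone_survey.csv")            # hypothetical survey data
X = df.drop(columns=["addiction_level"])             # candidate explanatory variables
y = df["addiction_level"]                            # target: addiction classification

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

tree = DecisionTreeClassifier(max_depth=4, random_state=42)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# The printed rules reveal which variables the tree splits on, i.e. which matter most.
print(export_text(tree, feature_names=list(X.columns)))
```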

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning (연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현)

  • Youngjun Kim;Taewan Kim;Suhyun Kim;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.3
    • /
    • pp.151-158
    • /
    • 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system includes a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the comparison between continual learning algorithms, the replay-based algorithms outperformed the regularization-based algorithms, and the experience replay (ER) algorithm had the best diagnostic accuracy. We further tuned the number and length of data samples used for the ER algorithm's memory buffer to maximize its performance and confirmed that its performance improves when longer data samples are used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B device.
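
As background for the continual-learning component described above, here is a minimal sketch of experience replay (ER) with a lightweight 1D CNN. It illustrates the general technique only, not the authors' implementation; the architecture, buffer capacity, and batch handling are assumptions.

```python
# Illustrative ER (experience replay) continual-learning sketch for a small 1D CNN.
# Network sizes, buffer capacity, and replay batch size are assumptions, not the paper's values.
import random
import torch
import torch.nn as nn

class Small1DCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, signal_length)
        return self.classifier(self.features(x).squeeze(-1))

class ReplayBuffer:
    """Bounded memory of past (signal, label) pairs kept via reservoir sampling."""
    def __init__(self, capacity: int = 2000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:                                    # reservoir sampling keeps memory bounded
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def er_step(model, opt, loss_fn, x_new, y_new, buffer, replay_k=32):
    """One ER update: train on the incoming batch mixed with a batch replayed from memory."""
    for xi, yi in zip(x_new, y_new):             # remember new samples for later replay
        buffer.add(xi, yi)
    x, y = x_new, y_new
    if len(buffer.data) > replay_k:
        x_old, y_old = buffer.sample(replay_k)
        x, y = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```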

Preliminary Test of Google Vertex Artificial Intelligence in Root Dental X-ray Imaging Diagnosis (구글 버텍스 AI을 이용한 치과 X선 영상진단 유용성 평가)

  • Hyun-Ja Jeong
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.3
    • /
    • pp.267-273
    • /
    • 2024
  • Using the cloud-based Vertex AI platform, which allows an artificial intelligence learning model to be developed without coding, this study showed that non-experts can easily develop an AI learning model and examined its clinical applicability. X-ray images of nine dental diseases (2,999 root disease images) released on the Kaggle site were used as the learning data, and the training, validation, and test images were split randomly. Image classification and multi-label learning were performed with hyper-parameter tuning using a training pipeline in Vertex AI's basic learning model workflow. As a result of AutoML (Automated Machine Learning), the AUC (Area Under Curve) was 0.967, precision was 95.6%, and recall was 95.2%. It was confirmed that the trained artificial intelligence model was sufficient for clinical diagnosis.
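
For readers who prefer an SDK route over the no-code console workflow used in the study, the sketch below shows how a comparable multi-label AutoML image training job could be launched with the google-cloud-aiplatform Python client. The project ID, region, bucket paths, and node-hour budget are placeholders, and the exact SDK surface may differ by version, so treat this as an assumption-laden illustration rather than the study's procedure.

```python
# Hypothetical sketch of a Vertex AI AutoML multi-label image classification job.
# Project, region, GCS paths, and budget are placeholders; the study itself used
# the no-code Vertex AI console, not this SDK.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.ImageDataset.create(
    display_name="dental-xray",
    gcs_source="gs://my-bucket/dental_xray_import.csv",   # image URIs + labels
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.multi_label_classification,
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="dental-xray-automl",
    prediction_type="classification",
    multi_label=True,
)

model = job.run(
    dataset=dataset,
    model_display_name="dental-xray-model",
    budget_milli_node_hours=8000,        # 8 node hours (placeholder budget)
)
print(model.resource_name)
```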

Design and Utilization of Connected Data Architecture-based AI Service of Mass Distributed Abyss Storage (대용량 분산 Abyss 스토리지의 CDA (Connected Data Architecture) 기반 AI 서비스의 설계 및 활용)

  • Cha, ByungRae;Park, Sun;Seo, JaeHyun;Kim, JongWon;Shin, Byeong-Chun
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.99-107
    • /
    • 2021
  • In addition to the 4th Industrial Revolution and Industry 4.0, the recent megatrends in the ICT field are Big-data, IoT, Cloud Computing, and Artificial Intelligence. Rapid digital transformation driven by the convergence of various industrial areas and ICT fields is therefore an ongoing trend, owing to the development of AI services suited to the era of the 4th Industrial Revolution and of subdivided technologies such as BI (Business Intelligence), IA (Intelligent Analytics, BI + AI), AIoT (Artificial Intelligence of Things), AIOps (Artificial Intelligence for IT Operations), and RPA 2.0 (Robotic Process Automation + AI). In line with this technical situation, this study aims to integrate and advance various machine learning services based on infrastructure-side GPUs, the CDA (Connected Data Architecture) framework, and AI on top of mass distributed Abyss storage. We also aim to utilize the resulting AI business revenue model in various industries.

A Study on Tower Modeling for Artificial Intelligence Training in Artifact Restoration

  • Byong-Kwon Lee;Young-Chae Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.9
    • /
    • pp.27-34
    • /
    • 2023
  • This paper studied the 3D modeling process for the restoration of the 'Three-story Stone Pagoda of Bulguksa Temple in Gyeongju', a stone pagoda from the Unified Silla Period, using artificial intelligence (AI). Existing 3D modeling methods generate numerous vertices and faces, which makes AI training take a considerable amount of time. Accordingly, a method of performing more efficient 3D modeling by reducing the number of vertices and faces is required. To this end, this study deeply analyzed the structure of the stone pagoda and investigated a modeling method optimized for AI training. This work is also meaningful in that it proposes a new 3D modeling methodology for the restoration of stone pagodas in Korea and secures a dataset necessary for artificial intelligence training.
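
The abstract argues for keeping vertex and face counts low so that AI training stays tractable. The paper models the pagoda by hand with an optimized structure; purely as a point of comparison, the sketch below shows the automatic alternative, quadric mesh decimation with Open3D, where the file names and target triangle count are hypothetical.

```python
# Hypothetical sketch: reducing the face count of a high-poly pagoda mesh with
# quadric decimation in Open3D. The paper's approach is hand-optimized modeling;
# this automatic simplification is shown only for comparison.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("pagoda_scan.obj")      # hypothetical high-poly scan
print("before:", len(mesh.vertices), "vertices,", len(mesh.triangles), "faces")

simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
simplified.remove_degenerate_triangles()
simplified.remove_unreferenced_vertices()
print("after:", len(simplified.vertices), "vertices,", len(simplified.triangles), "faces")

o3d.io.write_triangle_mesh("pagoda_lowpoly.obj", simplified)
```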

Deep Learning-Based, Real-Time, False-Pick Filter for an Onsite Earthquake Early Warning (EEW) System (온사이트 지진조기경보를 위한 딥러닝 기반 실시간 오탐지 제거)

  • Seo, JeongBeom;Lee, JinKoo;Lee, Woodong;Lee, SeokTae;Lee, HoJun;Jeon, Inchan;Park, NamRyoul
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.25 no.2
    • /
    • pp.71-81
    • /
    • 2021
  • This paper presents a real-time, false-pick filter based on deep learning to reduce false alarms of an onsite Earthquake Early Warning (EEW) system. Most onsite EEW systems use the P-wave to predict the S-wave. Therefore, it is essential to properly distinguish P-waves from noise or other seismic phases to avoid false alarms. To reduce the false picks that cause false alarms, this study built the EEWNet Part 1 'False-Pick Filter' model based on a Convolutional Neural Network (CNN). Specifically, it modified Pick_FP (Lomax et al.) to generate input data such as the amplitude, velocity, and displacement of the three components from 2 seconds before to 2 seconds after the P-wave arrival, in one-second time steps. The model extracts log-mel power spectrum features from this input data and then classifies P-waves and other signals using these features. The dataset consisted of 3,189,583 samples: 81,394 samples from event data (727 events in the Korean Peninsula, 103 teleseismic events, and 1,734 events in Taiwan) and 3,108,189 samples from continuous data (recorded by seismic stations in South Korea for 27 months from 2018 to 2020). The model was trained with 1,826,357 samples after class balancing and then tested on continuous data from 2019, filtering more than 99% of the strong false picks that could trigger false alarms. It was developed as a module for USGS Earthworm and is written in C to operate with minimal computing resources.
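
The pipeline described above (three-component windows around the P-pick, log-mel power spectrum features, CNN classification) can be sketched as follows. This is an illustration only, not the authors' C module for Earthworm; the sampling rate, window length, mel parameters, and network size are assumptions.

```python
# Illustrative sketch of the false-pick filter idea: log-mel features from a
# 3-component window around the P-pick, classified by a small CNN.
# Sampling rate, window length, and architecture are assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn

FS = 100                 # Hz, assumed sampling rate
WINDOW = 4 * FS          # 2 s before + 2 s after the pick

def log_mel_features(window_3c: np.ndarray, n_mels: int = 32) -> np.ndarray:
    """window_3c: (3, WINDOW) Z/N/E samples -> (3, n_mels, frames) log-mel power."""
    feats = []
    for trace in window_3c:
        mel = librosa.feature.melspectrogram(
            y=trace.astype(np.float32), sr=FS, n_fft=128, hop_length=16, n_mels=n_mels
        )
        feats.append(librosa.power_to_db(mel))
    return np.stack(feats)

class PickFilterCNN(nn.Module):
    """Binary classifier: genuine P-wave pick vs. noise/other phase."""
    def __init__(self, n_mels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):                 # x: (batch, 3, n_mels, frames)
        return self.net(x)

# Usage with a dummy window:
window = np.random.randn(3, WINDOW)
feats = torch.tensor(log_mel_features(window)).unsqueeze(0)   # (1, 3, 32, frames)
print(PickFilterCNN()(feats).shape)                           # torch.Size([1, 2])
```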

Designing the Framework of Evaluation on Learner's Cognitive Skill for Artificial Intelligence Education through Computational Thinking (Computational Thinking 기반 인공지능교육을 통한 학습자의 인지적역량 평가 프레임워크 설계)

  • Shin, Seungki
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.1
    • /
    • pp.59-69
    • /
    • 2020
  • The purpose of this study is to design a framework for evaluating learners' cognitive skills in artificial intelligence (AI) education through computational thinking. To design the rubric and framework for evaluating changes in learners' intrinsic thinking, the evaluation process consisted of sequential stages: a) agency, which provides cognitive learning assistance for data collection; b) abstraction, which recognizes patterns in the data and performs categorization by decomposing the characteristics of the collected data; and c) modeling, which constructs algorithms based on the data refined through abstraction. The evaluation framework was designed to cover not only the cognitive domain of learners' perceptions, learning, behaviors, and outcomes, but also the areas of knowledge, competencies, and attitudes regarding the learners' problem-solving process and results, in order to evaluate changes in intrinsic cognitive learning in AI education. The results are meaningful in that the evaluation framework for AI education supports the development of individualized evaluation tools according to the teaching and learning context and could be used as a standard in various areas of AI education in the future.

A Study on the Development of Korean Curriculum for Multicultural Students Using AI Technology

  • GiNam, CHO;Yong, KIM
    • Fourth Industrial Review
    • /
    • v.3 no.1
    • /
    • pp.21-32
    • /
    • 2023
  • Purpose - This study focused on the development of a Korean language curriculum to solve the problem of Korean literacy among students from multicultural families. Research design, data, and methodology - A case study based on Sim (2018)'s learner-centered learning model was conducted to develop an educational plan incorporating AI technology, which will help students from multicultural families effectively improve their communication and learning skills by improving their Korean reading, writing, and speaking. Result - A total of six educational plans using AI technology (Microsoft PowerPoint's drawing function, AutoDraw, and Google's Four-cut cartoons) were developed. Conclusion - The curriculum using AI is expected to contribute greatly to restoring the language learning ability and academic confidence needed to improve learners' language education.

Research Trends of Multi-agent Collaboration Technology for Artificial Intelligence Bots (AI Bots를 위한 멀티에이전트 협업 기술 동향)

  • D., Kang;J.Y., Jung;C.H., Lee;M., Park;J.W., Lee;Y.J., Lee
    • Electronics and Telecommunications Trends
    • /
    • v.37 no.6
    • /
    • pp.32-42
    • /
    • 2022
  • Recently, decentralized approaches to artificial intelligence (AI) development, such as federated learning, have been drawing attention as the cost and time inefficiency of AI development increase due to explosive data growth and rapid environmental changes. Collaborative AI technology, which dynamically organizes collaborative groups of different agents to share data, knowledge, and experience, and uses distributed resources to derive enhanced knowledge and analysis models through collaborative learning to solve given problems, is an alternative to centralized AI. This article investigates and analyzes recent technologies and applications relevant to research on multi-agent collaboration of AI bots, which can provide collaborative AI functionality autonomously.
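
As a concrete reference for the federated learning approach the article names, here is a bare-bones sketch of federated averaging (FedAvg): each agent trains a copy of the shared model on its own data and only the weights are averaged, so raw data stays local. The functions, model assumptions (float parameters), and hyper-parameters are illustrative, not taken from the article.

```python
# Minimal FedAvg sketch: local training per agent, then element-wise weight averaging.
# Assumes a model with float parameters/buffers; all values are illustrative.
import copy
import torch
import torch.nn as nn

def local_update(global_model: nn.Module, data_loader, epochs: int = 1, lr: float = 0.01):
    """Train a private copy of the shared model on one agent's local data."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict()                  # only the weights leave the agent

def federated_average(global_model: nn.Module, agent_states: list) -> None:
    """Overwrite the global weights with the mean of the agents' weights."""
    avg = copy.deepcopy(agent_states[0])
    for key in avg:
        for state in agent_states[1:]:
            avg[key] = avg[key] + state[key]
        avg[key] = avg[key] / len(agent_states)
    global_model.load_state_dict(avg)
```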

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve users' privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe that this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform. It also presents the privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges of leveraging deep learning within edge computing.
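
One of the distributed inference patterns such a review covers is split computing: run the first layers of a model on the IoT/edge device and send only the intermediate features, not the raw data, to a nearby edge server for the remaining layers. The sketch below illustrates the idea with a toy PyTorch model; the network, split point, and tensor sizes are assumptions, and no actual network transport is shown.

```python
# Hedged sketch of split (device/server partitioned) inference with a toy model.
# The architecture, split point, and sizes are illustrative assumptions.
import torch
import torch.nn as nn

full_model = nn.Sequential(                     # stand-in for a trained deep model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 56 * 56, 10),
)

SPLIT = 6                                       # layers [0:6] on the device, the rest on the server
device_part, server_part = full_model[:SPLIT], full_model[SPLIT:]

def edge_forward(x: torch.Tensor) -> torch.Tensor:
    """Runs on the IoT end device; only this feature tensor would be transmitted."""
    with torch.no_grad():
        return device_part(x)

def server_forward(features: torch.Tensor) -> torch.Tensor:
    """Runs on a nearby edge server after receiving the intermediate features."""
    with torch.no_grad():
        return server_part(features)

frame = torch.randn(1, 3, 224, 224)               # dummy camera frame
print(server_forward(edge_forward(frame)).shape)  # torch.Size([1, 10])
```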