• Title/Summary/Keyword: Smart Learning Management System (스마트 학습관리 시스템)

Search results: 86

LSTM-based Anomaly Detection on Big Data for Smart Factory Monitoring (스마트 팩토리 모니터링을 위한 빅 데이터의 LSTM 기반 이상 탐지)

  • Nguyen, Van Quan; Van Ma, Linh; Kim, Jinsul
    • Journal of Digital Contents Society / v.19 no.4 / pp.789-799 / 2018
  • This article presents a machine learning based approach on Big Data for analyzing time-series data for anomaly detection in complex industrial systems. Long Short-Term Memory (LSTM) networks have been shown to be an improved version of RNNs and have become a useful aid for many tasks. The LSTM-based model learns higher-level temporal features as well as temporal patterns, and the resulting predictor is then used in the prediction stage to estimate future data. The prediction error is the difference between the output made by the predictor and the actual incoming values. An error-distribution estimation model is built using a Gaussian distribution to calculate the anomaly score of an observation. In this manner, we move from the concept of a single anomaly to the idea of a collective anomaly. This work can assist the monitoring and management of a Smart Factory in minimizing failures and improving manufacturing quality.
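
As a rough illustration of the error-distribution step described above, the sketch below fits a Gaussian to prediction errors and converts test-time errors into windowed anomaly scores. The synthetic errors, window length, and threshold are assumptions for illustration only; in the paper the errors would come from the trained LSTM predictor.

```python
import numpy as np

# Hypothetical prediction errors: difference between the (LSTM) predictor's
# output and the actual incoming values. Here they are faked with noise plus
# two injected spikes; in practice they would come from the trained model.
rng = np.random.default_rng(0)
errors_train = rng.normal(0.0, 0.1, size=1000)      # errors on normal data
errors_test = rng.normal(0.0, 0.1, size=200)
errors_test[[50, 120]] += 1.5                        # two anomalous points

# Fit a Gaussian error-distribution model on the training-phase errors.
mu, sigma = errors_train.mean(), errors_train.std(ddof=1)

def anomaly_score(e, mu, sigma):
    """Negative Gaussian log-likelihood of an error: higher = more anomalous."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + (e - mu) ** 2 / (2 * sigma**2)

scores = anomaly_score(errors_test, mu, sigma)

# Collective anomaly: flag a window whose average score exceeds a threshold,
# rather than judging each observation in isolation.
window = 10
threshold = np.quantile(anomaly_score(errors_train, mu, sigma), 0.99)
window_scores = np.convolve(scores, np.ones(window) / window, mode="valid")
print("anomalous windows start at:", np.where(window_scores > threshold)[0])
```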

User control based OTT content search algorithms (사용자 제어기반 OTT 콘텐츠 검색 알고리즘)

  • Kim, Ki-Young; Suh, Yu-Hwa; Park, Byung-Joon
    • Journal of the Korea Society of Computer and Information / v.20 no.5 / pp.99-106 / 2015
  • This research focuses on the development of the proprietary database embedded in the OTT device, which is used for searching and indexing video contents, and on the development of the search algorithm that forms the critical components of the interface application to the OTT's database for video query searching, such as a remote-control smartphone application. As the number of available channels has grown from dozens to hundreds, it has become increasingly difficult for viewers to find programs they want to watch. To address this issue, content providers now need methods to recommend programs catering to each viewer's preference. The present study aims to provide an algorithm that recommends OTT program content by analyzing personal watching patterns based on one's viewing history.
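
A minimal sketch of the two pieces the abstract mentions: an embedded contents database with a query-search interface and a history-based recommendation. The schema, the sqlite3 engine, and the genre-frequency ranking are illustrative assumptions, not the paper's actual database or algorithm.

```python
import sqlite3
from collections import Counter

# Stand-in for the proprietary database embedded in the OTT device (the paper
# does not name a specific engine; sqlite3 is used here only for illustration).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contents (id INTEGER PRIMARY KEY, title TEXT, genre TEXT)")
db.execute("CREATE TABLE history (content_id INTEGER, watched_at TEXT)")
db.executemany("INSERT INTO contents VALUES (?, ?, ?)", [
    (1, "Space Docs", "documentary"), (2, "Night Drama", "drama"),
    (3, "Deep Sea", "documentary"), (4, "Quiz Run", "variety"),
])
db.executemany("INSERT INTO history VALUES (?, ?)", [(1, "2015-01-02"), (3, "2015-01-05")])

def search(keyword):
    """Query search over indexed video contents, e.g. issued from a remote-control app."""
    return db.execute("SELECT id, title FROM contents WHERE title LIKE ?",
                      (f"%{keyword}%",)).fetchall()

def recommend(top_n=2):
    """Rank unseen contents by how often their genre appears in the viewing history."""
    watched_genres = [row[0] for row in db.execute(
        "SELECT c.genre FROM history h JOIN contents c ON c.id = h.content_id")]
    profile = Counter(watched_genres)
    candidates = db.execute(
        "SELECT id, title, genre FROM contents WHERE id NOT IN "
        "(SELECT content_id FROM history)").fetchall()
    return sorted(candidates, key=lambda c: profile.get(c[2], 0), reverse=True)[:top_n]

print(search("Sea"))      # keyword query search
print(recommend())        # preference-based recommendation
```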

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers seeking to manage it. It also requires professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and such research can be said to have reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets often contain noise, and this noisy data can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be exploited in the classification process. To build a classifier, a machine learning algorithm is applied under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary contained in the documents; if the viewpoints of the training data and the target data differ, the features may also appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms because they are not designed to recognize different types of data representation at the same time and to combine them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in this study. However, unlabeled data may degrade the performance of the document classifier. Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the accuracy of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
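
RSESLA itself is not reproduced here; the sketch below only illustrates the general idea of a multi-view, confidence-filtered semi-supervised loop: two feature views of the labeled documents, an ensemble over the views, and only high-confidence pseudo-labeled documents from the heterogeneous (unlabeled) source being added to the training set. The toy corpus, the choice of views (word vs. character n-grams), and the confidence cut-off are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Labeled documents from one domain (e.g. news) and unlabeled documents from
# heterogeneous sources (e.g. tweets, blogs); the tiny corpus is illustrative.
labeled_docs = ["stocks fall on weak quarterly earnings report",
                "home team wins the championship game tonight"]
labels = np.array([0, 1])                       # 0 = economy, 1 = sports
unlabeled_docs = ["earnings season looks rough for the market lol",
                  "what a game, that final score was unreal"]

# Two "views" of the same documents: word features and character n-gram features.
views = [TfidfVectorizer(analyzer="word"),
         TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))]
clfs = [LogisticRegression(max_iter=1000) for _ in views]

train_docs, train_labels = list(labeled_docs), labels.copy()
for _ in range(3):                              # a few pseudo-labeling rounds
    probs = []
    for vec, clf in zip(views, clfs):
        clf.fit(vec.fit_transform(train_docs), train_labels)
        probs.append(clf.predict_proba(vec.transform(unlabeled_docs)))
    avg = np.mean(probs, axis=0)                # ensemble over the two views
    confident = avg.max(axis=1) > 0.6           # assumed confidence cut-off
    if not confident.any():
        break                                   # nothing trustworthy to add
    train_docs += [d for d, c in zip(unlabeled_docs, confident) if c]
    train_labels = np.concatenate([train_labels, avg.argmax(axis=1)[confident]])
    unlabeled_docs = [d for d, c in zip(unlabeled_docs, confident) if not c]
    if not unlabeled_docs:
        break

# Final ensemble prediction on a new document.
test = ["league title decided in the last game"]
print(np.mean([c.predict_proba(v.transform(test)) for v, c in zip(views, clfs)], axis=0))
```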

A Study on the Research Trends in the Fourth Industrial Revolution in Korea Using Topic Modeling (토픽모델링을 활용한 4차 산업혁명 분야의 국내 연구 동향 분석)

  • Gi Young Kim; Dong-Jo Noh
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.34 no.4 / pp.207-234 / 2023
  • Since the advent of the Fourth Industrial Revolution, related studies have been conducted in various fields, including industry. In this study, to analyze domestic research trends on the Fourth Industrial Revolution, a keyword analysis and a topic modeling analysis based on the LDA algorithm were conducted on 2,115 papers indexed in the KCI from January 2016 to August 2023. As a result, first, the journals that published more than 30 academic papers related to the Fourth Industrial Revolution were Digital Convergence Research, Humanities Society 21, e-Business Research, and Learner-Centered Subject Education Research. Second, as a result of the topic modeling analysis, seven topics were selected: "human and artificial intelligence," "data and personal information management," "curriculum change and innovation," "corporate change and innovation," "education change and jobs," "culture and arts and content," and "information and corporate policies and responses." Third, common research topics related to the Fourth Industrial Revolution are "change in the curriculum," "human and artificial intelligence," and "culture, arts and content," and common keywords include "company," "information," "protection," "smart," and "system." Fourth, in the first half of the research period (2016-2019), topics in the field of education appeared at the top, while in the second half (2020-2023), topics related to corporate, smart, digital, and service innovation appeared at the top. Fifth, research topics tended to become more specific or subdivided in the second half of the period. This trend is interpreted as a result of the socioeconomic changes that occurred as core technologies of the Fourth Industrial Revolution were applied and utilized in various industrial fields after the COVID-19 pandemic. The results of this study are expected to provide useful information for identifying research trends in the field of the Fourth Industrial Revolution, establishing strategies, and conducting subsequent research.
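
For readers unfamiliar with the method, a minimal sketch of LDA topic modeling in the spirit of this study is shown below, using scikit-learn on a toy English corpus. The actual study works on 2,115 Korean KCI papers with its own preprocessing (morphological analysis, stopword handling) and keyword analysis, none of which is reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the paper abstracts; the real corpus is Korean and would
# need a morphological analyzer before vectorization.
docs = [
    "artificial intelligence curriculum education innovation",
    "personal information data protection management policy",
    "smart factory digital service company innovation",
    "culture arts content artificial intelligence service",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)            # document-term matrix

lda = LatentDirichletAllocation(n_components=7, random_state=0)  # 7 topics, as in the paper
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_terms)}")
```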

A Study on the Development Methodology of Intelligent Medical Devices Utilizing KANO-QFD Model (지능형 메디컬 기기 개발을 위한 KANO-QFD 모델 제안: AI 기반 탈모관리 기기 중심으로)

  • Kim, Yechan; Choi, Kwangeun; Chung, Doohee
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.217-242 / 2022
  • With the launch of Artificial Intelligence (AI)-based intelligent products on the market, innovative changes are taking place not only in business but also in consumers' daily lives. Intelligent products have the potential to realize technology differentiation and increase market competitiveness through advanced artificial intelligence functions. However, there is no new-product development methodology that sufficiently reflects the characteristics of artificial intelligence for developing intelligent products with high market acceptance. This study proposes a KANO-QFD integrated model as a methodology for intelligent product development. As a specific example of the empirical analysis, the types of consumer requirements for a hair loss prediction and treatment device were classified, and the relative importance and priority of engineering characteristics were derived to suggest the direction of intelligent medical product development. In a survey of 130 consumers, accurate prediction of future hair loss progress, visualization on a smartphone of future hair loss and the expected improvement after treatment, sophisticated design, and treatment using combined laser and LED light energy were identified as attractive quality factors among the KANO categories. As a result of the analysis based on the QFD House of Quality, learning data for hair loss diagnosis and prediction, micro camera resolution for scalp scans, a hair loss type classification model, customized personal account management, and a hair loss progress diagnosis model were derived as priority engineering characteristics. This study is significant in that it presents directions for the development of artificial intelligence-based intelligent medical products that had not previously been explored.
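
The House of Quality step referred to above boils down to weighting a requirements-to-characteristics relationship matrix by customer importance. The sketch below shows only that arithmetic, with hypothetical weights and a conventional 9-3-1 relationship scale rather than the paper's survey data.

```python
import numpy as np

# Customer requirements (rows) with illustrative importance weights, e.g. after
# KANO classification has identified them as attractive quality factors.
requirements = ["accurate hair-loss prediction", "smartphone visualization",
                "sophisticated design", "laser/LED treatment"]
importance = np.array([5, 4, 2, 4])

# Engineering characteristics (columns) and a 9-3-1 relationship matrix
# (9 = strong, 3 = moderate, 1 = weak, 0 = none); values are hypothetical.
characteristics = ["diagnosis training data", "micro camera resolution",
                   "hair-loss type classifier", "light-source module"]
relationship = np.array([
    [9, 3, 9, 0],
    [3, 9, 3, 0],
    [0, 3, 0, 1],
    [1, 0, 0, 9],
])

# Absolute and relative priority of each engineering characteristic.
absolute = importance @ relationship
relative = absolute / absolute.sum()
for name, a, r in sorted(zip(characteristics, absolute, relative),
                         key=lambda t: -t[1]):
    print(f"{name}: {a} ({r:.1%})")
```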

Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young; Lee, Jaewook; Bae, Junhee; Kim, Jung-Min
    • Economic and Environmental Geology / v.53 no.5 / pp.609-617 / 2020
  • After introducing the Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and the 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed the Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to that policy and conducted a market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents with Augmented Reality (AR) and Virtual Reality (VR) and a public geo-data collection and sharing system. It is necessary to expand and support smart mining and digital oil field research for the policy of converging fifth-generation mobile communication (5G) and artificial intelligence (AI) into all industries. The Korean government is proposing downtown 3D maps for the 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposes developing technologies for green industries, including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric and hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. Oil and gas are presented as representative application industries of the digital twin. Much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are closely related to data analysis, simulation, AI, and the IoT; therefore, KIGAM should collaborate with sensor and computing software and system companies.