• Title/Summary/Keyword: AI learning data


Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan;Li, Lai-Cun;Lu, Jing-Xuan;Xu, Meng;Jeong, Yang-Kwon
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.1 / pp.111-118 / 2022
  • In computer vision, two-dimensional human pose estimation is a broad research direction with particular importance for pose tracking and behavior recognition. Acquiring human pose targets, which amounts to accurately identifying human subjects in images, has been an active research topic in recent years, with applications both in artificial intelligence and in everyday life. Because the practical value of pose recognition is determined mainly by the success rate and accuracy of the recognition process, the recognition rate is the key measure of performance. In the recognition scheme studied here, the human body is annotated with 17 key points, and the key points are additionally segmented to ensure the accuracy of the labeling information. For the recognition design, the comprehensive MS COCO dataset is used to train a deep neural network model on a large number of samples, moving step by step from a simple setup toward efficient training so that a good accuracy rate can be obtained.
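
The 17 key points mentioned above follow the MS COCO keypoint convention. As an illustration only (the abstract does not specify the network architecture used), the hedged sketch below lists those keypoints and runs a pretrained torchvision keypoint detector, which predicts the same 17 COCO keypoints per detected person; the image path is a placeholder.

```python
# Hedged sketch: the 17-keypoint COCO convention with a pretrained detector.
# The model choice (Keypoint R-CNN) is an assumption for illustration,
# not the architecture used in the paper.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# The 17 COCO keypoint names, in their standard order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("person.jpg"), torch.float)  # placeholder path
with torch.no_grad():
    output = model([img])[0]

# output["keypoints"] has shape [num_people, 17, 3] -> (x, y, visibility)
for person_kpts, score in zip(output["keypoints"], output["scores"]):
    if score < 0.8:  # keep confident detections only
        continue
    for name, (x, y, v) in zip(COCO_KEYPOINTS, person_kpts.tolist()):
        print(f"{name}: ({x:.1f}, {y:.1f})")
```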

A semi-supervised interpretable machine learning framework for sensor fault detection

  • Martakis, Panagiotis;Movsessian, Artur;Reuland, Yves;Pai, Sai G.S.;Quqa, Said;Cava, David Garcia;Tcherniak, Dmitri;Chatzi, Eleni
    • Smart Structures and Systems / v.29 no.1 / pp.251-266 / 2022
  • Structural Health Monitoring (SHM) of critical infrastructure comprises a major pillar of maintenance management, safeguarding public safety and economic sustainability. Although SHM is usually associated with data-driven metrics and thresholds, expert judgement is essential, especially in cases where erroneous predictions can bear casualties or substantial economic loss. Considering that visual inspections are time consuming and potentially subjective, artificial-intelligence tools may be leveraged in order to minimize the inspection effort and provide objective outcomes. In this context, timely detection of sensor malfunctioning is crucial in preventing inaccurate assessment and false alarms. The present work introduces a sensor-fault detection and interpretation framework, based on the well-established support-vector machine scheme for anomaly detection, combined with a coalitional game-theory approach. The proposed framework is applied to two datasets provided along with the 1st International Project Competition for Structural Health Monitoring (IPC-SHM 2020), comprising acceleration and cable-load measurements from two real cable-stayed bridges. The results demonstrate good predictive performance and highlight the potential for seamless adaptation of the algorithm to intrinsically different data domains. For the first time, the term "decision trajectories", originating from the field of cognitive sciences, is introduced and applied in the context of SHM. This provides an intuitive and comprehensive illustration of the impact of individual features, along with an elaboration on feature dependencies that drive individual model predictions. Overall, the proposed framework provides an easy-to-train, application-agnostic and interpretable anomaly detector, which can be integrated into the preprocessing part of various SHM and condition-monitoring applications, offering a first screening of the sensor health prior to further analysis.
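
The abstract pairs a support-vector anomaly detector with a coalitional game-theory (Shapley-value) interpretation. A minimal sketch of that combination is given below, assuming a one-class SVM and the shap library's KernelExplainer; the window features and fault injection are hypothetical, and the paper's exact model and feature set may differ.

```python
# Hedged sketch: one-class SVM anomaly scoring on per-window sensor features,
# explained with Shapley values. Feature extraction is a placeholder.
import numpy as np
import shap
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical per-window features of a healthy sensor (e.g., RMS, kurtosis, drift).
X_train = rng.normal(size=(500, 4))
X_test = np.vstack([rng.normal(size=(20, 4)),
                    rng.normal(loc=3.0, size=(5, 4))])  # injected fault windows

scaler = StandardScaler().fit(X_train)
svm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(X_train))

scores = svm.decision_function(scaler.transform(X_test))  # < 0 -> anomalous
print("flagged windows:", np.where(scores < 0)[0])

# Shapley-value attribution of each feature's contribution to the decision score.
background = shap.sample(scaler.transform(X_train), 100)
explainer = shap.KernelExplainer(svm.decision_function, background)
shap_values = explainer.shap_values(scaler.transform(X_test[:5]))
print("per-feature contributions for first window:", shap_values[0])
```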

A Case Study of Educational Effectiveness by Software Subjects for Humanities College Students

  • Seo, Joo-Young;Shin, Seung-Hun
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.267-277 / 2022
  • The topics of software (SW) liberal-arts education in universities have recently diversified, from 'Computational Thinking (CT)' to 'Programming, Data Analysis, and Artificial Intelligence (AI)'. We expect this diversification to mean not only different learning content but also distinct educational goals and effects for each subject. In this paper, we conducted a case study analyzing the educational effects, relative to their stated goals, of two SW liberal-arts subjects for humanities college students: CT and Data Analysis Fundamentals (DA). We confirmed that 'CT efficacy' increased significantly in both subjects, in line with their common goal of improving CT-based SW convergence competency. We also analyzed the differences arising from their distinct goals: CT (basic SW education) and DA (major-friendly SW education). CT mainly improved students' ability to solve general everyday problems, while DA built confidence in solving major-related problems as well as general ones.

Predicting lane speeds from link speeds by using neural networks

  • Pyun, Dong hyun;Pyo, Changwoo
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.69-75 / 2022
  • This paper presents a method for predicting per-lane speeds from link speeds using an artificial neural network, in order to improve the accuracy of travel-time prediction for a driving route. The time needed to pass through a link differs depending on whether a vehicle goes straight, turns right, or turns left at the intersection at the end of the link, so speed must be predicted according to the traveling direction. Training and validation data were constructed by refining measurements from the Gongpyeong intersection of Gukchaebosang-ro in Daegu Metropolitan City and four adjacent intersections. Five neural network models were compared, and a prediction-error analysis was performed to experimentally select the model best suited to the research purpose. The results showed that the travel-time estimation error decreased by 17.4% for the straight lane, 4.4% for the right-turn lane, and 3.9% for the left-turn lane. Since this experiment covers only a single link, the benefit is expected to be greater when an entire route is evaluated.
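
As a rough illustration of regressing lane-level speeds from link-level inputs with a small neural network, the sketch below trains a multi-output MLP on synthetic data. The input features, the synthetic lane-speed relationship, and the network size are assumptions; the paper's actual five models and measured data are not reproduced here.

```python
# Hedged sketch: regressing per-lane (straight / right / left) speeds from
# link-level features with a small neural network. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 2000
link_speed = rng.uniform(10, 60, size=n)          # km/h
hour = rng.integers(0, 24, size=n)                # time of day
X = np.column_stack([link_speed,
                     np.sin(2 * np.pi * hour / 24),
                     np.cos(2 * np.pi * hour / 24)])
# Synthetic per-lane speeds: turning lanes assumed slower than the through lane.
y = np.column_stack([link_speed * 1.05,           # straight
                     link_speed * 0.85,           # right turn
                     link_speed * 0.75])          # left turn
y += rng.normal(scale=2.0, size=y.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
for i, lane in enumerate(["straight", "right-turn", "left-turn"]):
    print(lane, "MAE:", round(mean_absolute_error(y_te[:, i], pred[:, i]), 2))
```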

Reporting Quality of Research Studies on AI Applications in Medical Images According to the CLAIM Guidelines in a Radiology Journal With a Strong Prominence in Asia

  • Dong Yeong Kim;Hyun Woo Oh;Chong Hyun Suh
    • Korean Journal of Radiology / v.24 no.12 / pp.1179-1189 / 2023
  • Objective: We aimed to evaluate the reporting quality of research articles that applied deep learning to medical imaging. Using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines and a journal with prominence in Asia as a sample, we intended to provide insight into reporting quality in the Asian region and establish a journal-specific audit. Materials and Methods: A total of 38 articles published in the Korean Journal of Radiology between June 2018 and January 2023 were analyzed. The analysis included calculating the percentage of studies that adhered to each CLAIM item and identifying items that were met by ≤ 50% of the studies. The article review was initially conducted independently by two reviewers, and the consensus results were used for the final analysis. We also compared adherence rates to CLAIM before and after December 2020. Results: Of the 42 items in the CLAIM guidelines, 12 items (29%) were satisfied by ≤ 50% of the included articles. None of the studies reported handling missing data (item #13). Only one study each reported the use of de-identification methods (#12), the intended sample size (#19), robustness or sensitivity analysis (#30), and the full study protocol (#41). Of the studies, 35% reported the selection of data subsets (#10), 40% reported registration information (#40), and 50% measured inter- and intra-rater variability (#18). No significant changes were observed in the rates of adherence to these 12 items before and after December 2020. Conclusion: In our study sample, the reporting quality of artificial intelligence studies according to the CLAIM guidelines showed room for improvement. We recommend that authors and reviewers have a solid understanding of the relevant reporting guidelines and ensure that the essential elements are adequately reported when writing and reviewing manuscripts for publication.
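
The adherence analysis described above amounts to computing, per CLAIM item, the percentage of articles that report it, flagging items met by ≤ 50% of studies, and comparing periods before and after December 2020. The sketch below shows one way such an audit table could be processed; the column names and toy data are hypothetical, not the authors' audit sheet.

```python
# Hedged sketch: computing per-item CLAIM adherence rates from a review table.
# Column names and data are hypothetical placeholders.
import pandas as pd

# One row per article; item columns hold 1 (reported) / 0 (not reported).
df = pd.DataFrame({
    "article_id": [1, 2, 3, 4],
    "pub_date": pd.to_datetime(["2019-05-01", "2020-11-15",
                                "2021-03-10", "2022-08-20"]),
    "item_10": [1, 0, 0, 1],  # selection of data subsets
    "item_13": [0, 0, 0, 0],  # handling of missing data
    "item_18": [1, 1, 0, 0],  # inter-/intra-rater variability
})

item_cols = [c for c in df.columns if c.startswith("item_")]
adherence = df[item_cols].mean().mul(100).round(1)  # % of articles per item
print("adherence (%):\n", adherence)
print("items met by <= 50% of studies:", list(adherence[adherence <= 50].index))

# Compare adherence before vs. from December 2020.
period = df["pub_date"] >= "2020-12-01"
by_period = df.groupby(period)[item_cols].mean().mul(100).round(1)
by_period.index = ["before Dec 2020", "from Dec 2020"]
print(by_period)
```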

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • Predicting the success of customer campaigns has long been studied in academia, and prediction models based on a variety of techniques continue to appear. As campaign channels have expanded with the rapid growth of online business, companies now run far more campaigns than in the past. Customers, however, increasingly perceive them as spam because duplicate exposure causes campaign fatigue, and from the corporate standpoint campaign effectiveness is falling while campaign costs rise, which lowers the actual campaign success rate. Various studies are therefore under way to improve campaign effectiveness in practice. A campaign system ultimately aims to raise the success rate of campaigns by collecting and analyzing customer-related data and applying it to campaign targeting, and machine learning is increasingly used to predict campaign responses. Because campaign data contain many features, selecting appropriate features is critical. Using all input data makes training time grow as the number of classes expands, so a minimal input data set must be extracted from the whole. Moreover, a model trained on too many features can lose prediction accuracy through overfitting or correlations between features. To improve accuracy, a feature selection technique that removes near-noise features should therefore be applied; feature selection is an essential step when analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long training times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The goal is to improve the sequential SFFS method in the search for feature subsets that underlie machine learning model performance, using statistical characteristics of the data processed in the campaign system: features with strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, improving search efficiency and enabling more generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm, and campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm aids analysis and interpretation of the prediction results by providing the importance of the derived features.
Among the selected features were attributes such as age, customer rating, and sales, whose importance was already known statistically. Unexpectedly, features that campaign planners had rarely used when selecting targets, such as the combined product name, the average data consumption rate over three months, and wireless data usage in the last three months, were also selected as important for campaign response. This confirmed that base attributes can be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
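
The core idea described above, deriving influential features statistically before running the sequential floating search, can be sketched roughly as below. This is an illustrative approximation only, assuming a mutual-information pre-filter and the mlxtend implementation of SFFS; it is not the authors' exact algorithm.

```python
# Hedged sketch: statistical pre-filter + Sequential Floating Forward Selection (SFFS).
# The pre-filter criterion and thresholds are assumptions, not the paper's exact method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           random_state=0)

# Step 1: derive influential features statistically and drop near-noise ones.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.where(mi > np.median(mi))[0]  # keep the more informative half
X_reduced = X[:, keep]

# Step 2: run SFFS (forward search with floating backward steps) on the reduced set.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
sffs = SFS(clf, k_features=(3, 10), forward=True, floating=True,
           scoring="roc_auc", cv=3, n_jobs=-1)
sffs = sffs.fit(X_reduced, y)

selected = [keep[i] for i in sffs.k_feature_idx_]
print("selected original feature indices:", selected)
print("cross-validated AUC:", round(sffs.k_score_, 3))
```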

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Consistently rated among the top countries, Korea is clearly a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is the source of information and records and a resource for knowledge. As administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact it reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, profoundly influences not only traditional power structures but also the existing media through which information and knowledge are transmitted. New technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has augmented human capabilities and outsourced thinking, and it brings a variety of problems, from deep fakes and other fabricated images, automated profiling, and AI hallucinations that present falsehoods as if they were real, to copyright infringement involving machine-learning data. Radical connectivity also enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which rest on immaterial distribution and logical existence, is that access and contact are possible only through physical tools: digital information is a logical object, but digital resources cannot be read or used without some device to relay them. In that respect, machines in today's technological society have gone beyond simple assistance, and it is difficult to regard their entry into human society as merely a natural consequence of advanced technological development, because perspectives on machines change over time. What matters is the social and cultural implication of changes in the way records are produced as communication and action increasingly pass through machines. For the archive field, it is time to ask what problems a data-based archive society will face amid the technological shift toward a hyper-intelligent, hyper-connected society, who will attest to the continuing activity of records and data, and what will drive the change of media. This study begins from the need to recognize that archives are not only records resulting from actions but also data as strategic assets, and on that basis the author considers how the traditional boundaries of the archive can be expanded and reterritorialized in a data-driven society.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Technological development in artificial intelligence has accelerated with the Fourth Industrial Revolution, and AI research is being actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest and work on diverse algorithms the field has advanced further than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is increasingly used together with statistical artificial intelligence such as machine learning. More recently, knowledge bases express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in applications such as the question-answering systems behind smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. Much recent research on knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is generated through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework, so DBpedia can expect high accuracy because its knowledge is derived from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method for extracting knowledge from text documents according to the ontology schema using machine learning. To demonstrate its appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying a document into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the document's class, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. We also ran comparative experiments with CRF and Bi-LSTM-CRF models for the extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema, and the methodology can significantly reduce the effort experts need to construct instances that conform to the schema.
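
As a rough illustration of the BIO-tagged extraction step, a CRF sequence tagger of the kind compared in the abstract could be set up as follows. The token features, toy sentence, and tag names are placeholders, not the paper's feature set or ontology labels.

```python
# Hedged sketch: CRF sequence tagger over BIO-labeled tokens for attribute-value
# extraction. Features and the training sentence are toy placeholders.
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def token_features(sent, i):
    """Minimal per-token features: the word itself plus its neighbors."""
    word = sent[i]
    feats = {"word": word, "is_digit": word.isdigit(), "is_title": word.istitle()}
    feats["prev_word"] = sent[i - 1] if i > 0 else "<BOS>"
    feats["next_word"] = sent[i + 1] if i < len(sent) - 1 else "<EOS>"
    return feats

# Toy BIO-labeled sentence: B-/I- tags mark value spans for ontology properties.
train_sents = [
    (["Seoul", "is", "the", "capital", "of", "South", "Korea", "."],
     ["B-capitalOf", "O", "O", "O", "O", "B-country", "I-country", "O"]),
]
X_train = [[token_features(s, i) for i in range(len(s))] for s, _ in train_sents]
y_train = [tags for _, tags in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

pred = crf.predict(X_train)
labels = [l for l in crf.classes_ if l != "O"]
print(metrics.flat_f1_score(y_train, pred, average="weighted", labels=labels))
```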

A Study on the Build of Equipment Predictive Maintenance Solutions Based on On-device Edge Computer

  • Lee, Yong-Hwan;Suh, Jin-Hyung
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.165-172 / 2020
  • This paper proposes the use of on-device edge computing and big data analysis, distributed computing paradigms that place computation and storage where they are needed, to address problems such as the transmission delays that arise when data from today's typical smart factories is sent to a central data center for processing. Even when edge computing is applied in practice, however, the growing number of devices at the network edge still pushes large volumes of data toward the data center, driving the network bandwidth to its limits; despite improvements in network technology, this does not guarantee the transfer speeds and response times that many applications critically require. By combining integrated hardware that can accommodate these requirements with factory management and control technology, the proposed solution supports intelligent facility management that can raise productivity in facility maintenance and the smart factory industry, and it provides a basis for evolving into an AI-based predictive maintenance analysis tool that applies deep learning to big data in the future.
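
One way to ease the bandwidth problem raised above is to screen sensor data on the edge device and forward only suspicious windows to the central server. The sketch below is a minimal illustration of that pattern; the window size, thresholds, and upload stub are assumptions, not the paper's implementation.

```python
# Hedged sketch: on-device screening so that only abnormal vibration windows are
# forwarded to the data center. Thresholds and the upload stub are illustrative.
import numpy as np
from collections import deque

WINDOW = 256        # samples per analysis window
RMS_LIMIT = 1.5     # alert threshold relative to the running baseline

baseline = deque(maxlen=100)  # recent healthy RMS values

def upload_to_center(window, rms):
    """Placeholder for the real transfer to the central server."""
    print(f"forwarding window to data center, rms={rms:.3f}")

def process_window(samples):
    """Compute a cheap local statistic and forward the window only if abnormal."""
    rms = float(np.sqrt(np.mean(np.square(samples))))
    if baseline and rms > RMS_LIMIT * np.mean(baseline):
        upload_to_center(samples, rms)  # anomaly -> send full data upstream
    else:
        baseline.append(rms)            # healthy -> keep only the summary locally

# Simulated sensor stream: mostly healthy, with one faulty burst.
rng = np.random.default_rng(1)
for k in range(50):
    scale = 4.0 if k == 40 else 1.0
    process_window(rng.normal(scale=scale, size=WINDOW))
```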

Development of a Simulator for Optimizing Semiconductor Manufacturing Incorporating Internet of Things (사물인터넷을 접목한 반도체 소자 공정 최적화 시뮬레이터 개발)

  • Dang, Hyun Shik;Jo, Dong Hee;Kim, Jong Seo;Jung, Taeho
    • Journal of the Korea Society for Simulation / v.26 no.4 / pp.35-41 / 2017
  • With advances in the Internet of Things, demand for diverse electronic devices such as mobile phones and sensors has grown rapidly, boosting research on these products. Semiconductor materials, devices, and fabrication processes are becoming more diverse and complicated, which makes it harder to find the parameters for an optimal fabrication process. Such parameters can be sought with a process simulation before fabrication or a real-time process control system during fabrication, but these approaches neither incorporate feedback from post-fabrication data nor remain compatible with older equipment. In this research, we developed an artificial-intelligence-based simulator that finds the parameters for an optimal process and controls process equipment. To apply the control concept to all equipment in a fabrication sequence, we also built a prototype manipulator that can be installed over the existing buttons and knobs of the equipment and operates it while communicating with the AI over the Internet. The AI uses deep learning to find process parameters that will produce a device with target electrical characteristics. Because the proposed simulator can control existing equipment via the Internet to fabricate devices with the desired performance, it will help engineers develop new devices efficiently and effectively.
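
The deep-learning step described above amounts to an inverse-design loop: a model learned from past runs predicts the electrical characteristic from process parameters, and candidate parameters are then searched for a target value. The sketch below is only a schematic of that loop; the parameter names, ranges, and the threshold-voltage stand-in are hypothetical, and the paper's actual network and process data are not reproduced.

```python
# Hedged sketch: a surrogate model maps process parameters to an electrical
# characteristic, then a random search proposes parameters for a target value.
# Parameter names, ranges, and the target metric are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Hypothetical historical runs: (anneal temperature [C], dose [a.u.], time [s]).
params = rng.uniform([800, 1.0, 10], [1100, 5.0, 120], size=(400, 3))
# Synthetic stand-in for a measured characteristic, e.g., threshold voltage [V].
vth = (0.2 + 0.001 * (params[:, 0] - 950) - 0.05 * params[:, 1]
       + 0.002 * params[:, 2] + rng.normal(scale=0.01, size=400))

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
surrogate.fit(params, vth)

# Inverse step: random search for parameters whose predicted Vth is closest to target.
target_vth = 0.45
candidates = rng.uniform([800, 1.0, 10], [1100, 5.0, 120], size=(5000, 3))
pred = surrogate.predict(candidates)
best = candidates[np.argmin(np.abs(pred - target_vth))]
print("suggested (temperature, dose, time):", np.round(best, 2))
```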