• Title/Summary/Keyword: Generated AI

Search for the Education of High-Tech Emotional Textile and Fashion (하이테크 감성 섬유패션의 교육 방향에 대한 모색)

  • Youn Hee Kim;Chunjeong Kim;Youngjoo Na
    • Science of Emotion and Sensibility / v.26 no.3 / pp.69-82 / 2023
  • High-tech emotional textile and fashion, in which consumers' emotions are converged with various textile and fashion technologies, is an important industrial sector. Practitioners in emotional engineering must develop the ability to apply ideas in practice by understanding other fields and exchanging ideas through interdisciplinary collaboration. Through such research and collaboration, individuals must be nurtured who can lead the era of the 4th Industrial Revolution with both the ability to empathize with others and the creative, convergence-type intellectual ability required by a rapidly changing society. As basic research for determining content-creation methods, this study investigates the current status and educational processes of the emotional textile-fashion industry worldwide. To nurture talent in textile and fashion sensibility science, basic course content is created covering sensibility science and the ICT related to this field, along with intensive, PB-style conceptual design based on sensibility. The full process, from consumer emotion analysis to product development, can be experienced through smart-kit practice. Moreover, various methods are developed for securing intellectual property rights generated while developing ICT convergence products as start-ups, and the study also covers new knowledge rights for developing emotional textile fashion.

Suitability Evaluation Method for Both Control Data and Operator Regarding Remote Control of Maritime Autonomous Surface Ships (자율운항선박 원격제어 관련 제어 데이터와 운용자의 적합성 평가 방법)

  • Hwa-Sop Roh;Hong-Jin Kim;Jeong-Bin Yim
    • Journal of Navigation and Port Research / v.48 no.3 / pp.214-220 / 2024
  • Remote control is used for operating maritime autonomous surface ships. The operator controls the ship using control data generated by the remote control system. To ensure successful remote control, three principles must be followed: safety, reliability, and availability. To achieve this, the suitability of both the control data and the operators for remote control must be established. Currently, there are no international regulations in place for evaluating remote control suitability through experiments on actual ships, and conducting such experiments is dangerous, costly, and time-consuming. The goal of this study is to develop a suitability evaluation method using the output values of control devices used in actual ship operation. The proposed method evaluates the suitability of the data by analyzing the output values, and the suitability of the operators by examining their tracking of these output values. The experiment was conducted using a shore-based remote control system to operate the training ship 'Hannara' of Korea National Maritime and Ocean University. The experiment involved an iterative process of obtaining the operator's tracking value for the output value of the ship's control devices and transmitting and receiving the tracking data between the ship and the shore. The evaluation results showed that the transmission and reception performance of the control data was suitable for remote operation. However, the operator's tracking performance revealed a need for further education and training. Therefore, the proposed evaluation method can be applied to assess the suitability of both the control data and the operator, and to analyze their compliance with the three principles of remote control.
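
As a rough illustration of the tracking-based part of the evaluation, the sketch below compares a control device's output series with the operator's tracking series using a root-mean-square error and a pass/fail tolerance. The function names, the RMSE choice, and the threshold are assumptions of this sketch, not the paper's published metric.

```python
import numpy as np

def tracking_error(device_output, operator_tracking):
    """Root-mean-square error between the ship control device's
    output values and the operator's tracking of those values."""
    device_output = np.asarray(device_output, dtype=float)
    operator_tracking = np.asarray(operator_tracking, dtype=float)
    return np.sqrt(np.mean((device_output - operator_tracking) ** 2))

def is_suitable(device_output, operator_tracking, threshold):
    """Illustrative pass/fail check: the operator is judged suitable
    when the tracking error stays below a chosen tolerance (the
    threshold is hypothetical; the paper does not publish one)."""
    return tracking_error(device_output, operator_tracking) <= threshold

# Hypothetical rudder-angle commands (degrees) vs. operator tracking
rudder = [0.0, 5.0, 10.0, 10.0, 5.0]
tracked = [0.0, 4.2, 9.1, 10.4, 5.8]
print(tracking_error(rudder, tracked), is_suitable(rudder, tracked, 1.0))
```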

Neutralization of Acid Rock Drainage from the Dongrae Pyrophyllite Deposit: A Study on Behavior of Heavy Metals (동래 납석광산 산성 광석배수의 중화실험: 중금속의 거동 특성 규명)

  • 염승준;윤성택;김주환;박맹언
    • Journal of Soil and Groundwater Environment / v.7 no.4 / pp.68-76 / 2002
  • In this study, we investigated the geochemical behavior and fate of heavy metals in acid rock drainage (ARD) collected from the area of the former Dongrae pyrophyllite mine. The Dongrae Creek waters were strongly acidic (pH 2.3~4.2) and contained high concentrations of $SO_4$, Al, Fe, Mn, Pb, Cu, Zn, and Cd, due to the influence of ARD generated from weathering of pyrite-rich pyrophyllite ores. However, the water quality gradually improved as the water flowed downstream. In view of the changing mole fractions of dissolved Fe, Al, and Mn, the generated ARD was initially both Fe- and Al-rich but progressively evolved to more Al-rich toward the confluence with the uncontaminated Suyoung River. As the ARD (pH 2.3) mixed with the uncontaminated waters (pH 6.5), the pH increased up to 4.2, which caused precipitation of $SO_4$-rich Fe hydroxysulfate as a red-colored, massive ferricrete precipitate throughout the Dongrae Creek. Accompanying the precipitation of ferricrete, the Dongrae Creek water became progressively more Al-rich toward the downstream sites. At the mouth of the Dongrae Creek, the creek water (pH 3.4) mixed with the Suyoung River (pH 6.9); the pH increased to 5.7, causing precipitation of Al hydroxysulfate (white precipitates). Neutralizing the ARD-contaminated waters in the laboratory caused the successive formation of Fe precipitates at pH < 3.5 and Al precipitates at higher pH (4~6); manganese compounds precipitated at pH > 6. The removal of trace metals depended on the precipitation of these compounds, which acted as sorbents. The pH values for 50% sorption ($pH_{50}$) in Fe-rich and Al-rich waters were, respectively, 3.2 and 4.5 for Pb, 4.5 and 5.8 for Cu, 5.2 and 7.4 for Cd, and 5.8 and 7.0 for Zn. This indicates that the trace metals were sorbed preferentially with increasing pH in the general order Pb, Cu, Cd, Zn, and that sorption in Al-rich water occurred at higher pH than in Fe-rich water. These results demonstrate that the partitioning of trace metals in ARD is not only a function of pH but also depends on the chemical composition of the water.
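
The reported $pH_{50}$ values summarize where each metal's sorption edge sits. As a rough illustration only, the sketch below models each edge as a logistic curve in pH; the logistic form and its slope are assumptions of this sketch, not results from the study, while the $pH_{50}$ values are taken from the abstract above.

```python
import numpy as np

# pH at 50% sorption (pH50) reported in the study:
# (Fe-rich water, Al-rich water) for each trace metal
PH50 = {"Pb": (3.2, 4.5), "Cu": (4.5, 5.8), "Cd": (5.2, 7.4), "Zn": (5.8, 7.0)}

def sorbed_fraction(ph, ph50, slope=1.0):
    """Fraction of a trace metal sorbed at a given pH, modeled here as a
    logistic sorption edge centered on pH50. The slope is a free
    parameter of this sketch, not a value from the study."""
    return 1.0 / (1.0 + np.exp(-slope * (ph - ph50)))

# Reproduces the reported order Pb > Cu > Cd > Zn at pH 5 in Fe-rich water
for metal, (fe_rich, al_rich) in PH50.items():
    print(metal, round(sorbed_fraction(5.0, fe_rich), 2))
```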

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. Much recent research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the varied information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created via the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of knowledge accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying a document into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into RDF triple structure (a sketch of this pipeline follows the abstract). The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction (the BIO scheme is illustrated after the abstract). Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the expert effort required to construct instances according to the ontology schema.
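
As a rough illustration of the three-step pipeline described in the abstract (document classification, sentence classification, triple conversion), here is a minimal sketch in which the trained classifiers are replaced by placeholder rules; all names, cue words, and the example sentence are hypothetical, not the paper's code.

```python
# Minimal sketch of the three-step extraction pipeline described above.
# The classifiers are placeholders; the paper trains them on Wikipedia
# infobox data, which is not reproduced here.

def classify_document(text):
    """Step 1: assign the document to a DBpedia ontology class
    (hypothetical rule standing in for the trained classifier)."""
    return "dbo:Person" if "born" in text else "dbo:Thing"

def select_sentences(text, ontology_class):
    """Step 2: keep only sentences likely to carry attribute values
    for the chosen class (hypothetical class-to-cue mapping)."""
    cues = {"dbo:Person": "born"}
    cue = cues.get(ontology_class, "")
    return [s for s in text.split(". ") if cue and cue in s]

def to_triples(subject, sentences):
    """Step 3: extract values and emit RDF-style triples."""
    triples = []
    for s in sentences:
        value = s.rsplit(" ", 1)[-1].rstrip(".")
        triples.append((subject, "dbo:birthYear", value))
    return triples

doc = "Ada Lovelace was a mathematician. She was born in 1815."
cls = classify_document(doc)
print(to_triples("dbr:Ada_Lovelace", select_sentences(doc, cls)))
```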
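The BIO tagging used to build the training data can be illustrated as follows; the relation label, the example sentence, and the span-collection helper are illustrative assumptions, not the paper's implementation.

```python
# Illustrative BIO-tagged training example: each token is labeled
# B- (begin), I- (inside), or O (outside) for the target relation,
# here a hypothetical "birthYear" attribute.
sentence = ["She", "was", "born", "in", "1815", "."]
bio_tags = ["O",   "O",   "O",    "O",  "B-birthYear", "O"]

def extract_spans(tokens, tags):
    """Collect contiguous B-/I- spans into (relation, value) pairs,
    which step 3 of the pipeline then turns into triples."""
    spans, current, relation = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((relation, " ".join(current)))
            relation, current = tag[2:], [tok]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((relation, " ".join(current)))
            current, relation = [], None
    if current:
        spans.append((relation, " ".join(current)))
    return spans

print(extract_spans(sentence, bio_tags))  # [('birthYear', '1815')]
```

A sequence labeler such as the CRF or Bi-LSTM-CRF compared in the paper is trained to predict these tags for unseen sentences.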

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many academic studies have addressed predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out campaigns of a variety and volume that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of the campaigns themselves is also decreasing, with rising investment costs leading to low actual success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Because campaign data has many features, selecting appropriate ones is very important. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classification classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques (a sketch of the SFS baseline follows this abstract). However, when the feature set is large, these methods yield poor classification performance and take a long time to train. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underpin machine learning model performance, by using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and to enable generalized prediction. The proposed model was confirmed to show better search and prediction performance than the traditional greedy algorithm. Campaign-success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help analyze and interpret the prediction results by providing the importance of the derived features.
These include features already known statistically to be important, such as age, customer rating, and sales. Unexpectedly, features that campaign planners had rarely used to select targets, such as the combined product name, the average 3-month data consumption rate, and wireless data usage over the last 3 months, were also selected as important features for campaign response. It was confirmed that base attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
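
For reference, here is a minimal sketch of the SFS baseline named in the abstract, assuming a scikit-learn-style workflow; the estimator choice, cross-validation setup, and toy data are assumptions of this sketch, and it shows the traditional baseline, not the paper's improved algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_features):
    """Greedy SFS baseline: at each step, add the single feature that
    most improves cross-validated accuracy. This is the traditional
    method on which the study's improved SFFS variant builds."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, cols], y, cv=3).mean()
            scores.append((score, f))
        best_score, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy usage with random data (a stand-in for campaign-response features)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 2] + X[:, 5] > 0).astype(int)
print(sequential_forward_selection(X, y, n_features=2))
```

SFFS extends this loop with a floating backward step that drops a previously selected feature when doing so improves the score.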