• Title/Summary/Keyword: artificial intelligence-based model


A study on ecosystem model of the magazines for smart devices: Focusing on the case of magazine business in foreign countries (스마트 디바이스 잡지 생태계 모델 연구 - 외국 잡지의 비즈니스 사례를 중심으로)

  • Chang, Yong Ho;Kong, Byoung-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.5 / pp.2641-2654 / 2014
  • In the smart media environment, the magazine industry has been undergoing a transition to an ecosystem of value networks marked by high complexity and ambiguity. Using the case study method, this article examines digital convergence, the magazine ecosystem model, and the adaptation strategies of global magazine companies. The findings show that global magazines produce content through collaborative production systems spanning communities, expert communities, creative users, media content companies, and magazine platforms. The system shows different patterns and characteristics depending on whether the platform is magazine-driven, platform-driven, or user-driven. Collaboration systems were confirmed in various cases: Huffington Post and Zinio collaborating with media content companies, Amazon magazines and Bookish with magazine companies, Huffington Post and Wired with expert communities, and Flipboard with creative users and communities. Foreign magazine content diverges into paper, electronic, app, and web editions as publishers begin actively trading their content on magazine platforms. On the content use side, readers employ smart media technologies such as cloud computing, artificial intelligence, and module individualization effectively, sustaining a virtuous cycle in the relationships among communities, expert communities, and creative users.

NES Model Development: Expert System for Nitrogen Fertilizer Applications to Cornfields (NES 모델 개발 : 질소비료 적정 시용에 대한 전문가체계)

  • Kim, Won-Il;Jung, Goo-Bok;Fermanian, T.W.;Huck, M.G.;Park, Ro-Dong
    • Korean Journal of Soil Science and Fertilizer / v.34 no.1 / pp.55-63 / 2001
  • To optimize N fertilizer recommendations with respect to maximum crop yields, maximum profits, and minimum N losses to ground or runoff water, an advisory system, the Nitrogen Expert System (NES), was developed. The system estimates the optimal rate of N fertilizer application to cornfields in Illinois. NES was constructed using Smart Elements, a knowledge-based system shell that manages the expertise of human experts. NES was reinforced by adding the effects of a productivity index (PI), soil organic matter content (SOM), and pre-sidedress nitrate test (PSNT) results to the optimal N fertilizer recommendation. NES contains 49 rules, 1 class, 14 objects, and 2 properties. NES operated successfully, producing N recommendations from three soil inputs: PI, SOM, and PSNT. NES can reduce N loss to the environment, but adherence to its recommendations may also reduce farmers' income. Therefore, NES will be more effective if it also evaluates environmental damage, other economic agricultural management parameters, and other soil physico-chemical parameters.
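
As a rough sketch of the rule-based reasoning NES performs, the hypothetical rules below combine PI, SOM, and PSNT into an N-rate recommendation; the thresholds and base rate are invented for illustration and are not the paper's actual 49 rules.

```python
# Minimal sketch of a rule-based N-rate advisor in the spirit of NES.
# All thresholds and adjustment values are hypothetical; the actual
# 49 rules encoded in Smart Elements are not published in the abstract.

def recommend_n_rate(pi: float, som_pct: float, psnt_ppm: float) -> float:
    """Return an adjusted N fertilizer rate (kg N/ha) for a cornfield."""
    base_rate = 200.0                      # hypothetical agronomic base rate

    # Rule group 1: high pre-sidedress nitrate means residual soil N is ample.
    if psnt_ppm >= 25:
        return 0.0                         # no sidedress N needed
    elif psnt_ppm >= 15:
        base_rate *= 0.5                   # partial credit for residual N

    # Rule group 2: soil organic matter mineralizes N during the season.
    if som_pct > 4.0:
        base_rate -= 30.0
    elif som_pct > 2.0:
        base_rate -= 15.0

    # Rule group 3: low-productivity soils cannot use high N rates efficiently.
    if pi < 0.6:
        base_rate *= 0.8

    return max(base_rate, 0.0)

print(recommend_n_rate(pi=0.9, som_pct=3.1, psnt_ppm=18))  # 85.0
```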

Automatic Construction of Deep Learning Training Data for High-Definition Road Maps Using Mobile Mapping System (정밀도로지도 제작을 위한 모바일매핑시스템 기반 딥러닝 학습데이터의 자동 구축)

  • Choi, In Ha;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.133-139 / 2021
  • Currently, the process of constructing a high-definition road map involves a high proportion of manual labor, which makes construction time-consuming and costly. Research on automating high-definition road map production with artificial intelligence is active, but because the training data for map construction is also built manually, the training data itself needs to be constructed automatically. Therefore, in this study, point clouds acquired by a mobile mapping system were converted into images, and road marking areas were extracted through image reclassification and overlap analysis using thresholds. A methodology was then proposed to automatically construct deep learning training data for the high-definition road map by classifying the polygon types in the extracted regions. Training 2,764 lane data items constructed through the proposed methodology on a deep learning-based PointNet model yielded a training accuracy of 99.977%, and predicting lanes of three color types with the trained model yielded an accuracy of 99.566%. The proposed methodology can therefore efficiently produce training data for high-definition road maps, and the map production process for road markings can likely be automated as well.
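
As a rough illustration of the thresholding step in the proposed pipeline: painted road markings reflect more strongly than asphalt, so an intensity raster derived from the MMS point cloud can be binarized to isolate them. The threshold and toy raster below are illustrative assumptions.

```python
# Binarize a rasterized point-cloud intensity image into a marking mask.
import numpy as np

def extract_marking_mask(intensity_img: np.ndarray, thresh: float = 0.7) -> np.ndarray:
    """Flag pixels of a normalized [0, 1] intensity raster as road marking."""
    return (intensity_img >= thresh).astype(np.uint8)

# Toy raster: one bright 'lane line' column on a dark asphalt background.
img = np.full((8, 8), 0.2)
img[:, 3] = 0.9
mask = extract_marking_mask(img)
print(mask.sum())  # 8 pixels flagged as road marking
```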

A Study on Optimization of Perovskite Solar Cell Light Absorption Layer Thin Film Based on Machine Learning (머신러닝 기반 페로브스카이트 태양전지 광흡수층 박막 최적화를 위한 연구)

  • Ha, Jae-jun;Lee, Jun-hyuk;Oh, Ju-young;Lee, Dong-geun
    • The Journal of the Korea Contents Association / v.22 no.7 / pp.55-62 / 2022
  • The perovskite solar cell is an active research area in renewable energy fields such as solar, wind, hydroelectric, marine, bio-, and hydrogen energy, which aim to replace fossil fuels such as oil, coal, and natural gas as power demand grows with the spread of the Internet of Things and virtual environments in the 4th industrial revolution. The perovskite solar cell is a device based on an organic-inorganic hybrid material with a perovskite structure, and it can potentially replace existing silicon solar cells thanks to its high efficiency, low-cost solution processing, and low-temperature processes. To optimize the light absorption layer thin film predicted by the existing empirical method, its reliability must be verified through device characteristic evaluation. However, because evaluating the characteristics of light-absorbing layer thin-film devices is expensive, the number of tests is limited. To address this problem, machine learning and artificial intelligence models show great promise as auxiliary tools for optimizing the light absorption layer thin film. In this study, to estimate the optimal light absorption layer thin film for perovskite solar cells, regression models using the support vector machine's linear, RBF, polynomial, and sigmoid kernels were compared to verify the difference in accuracy for each kernel function.
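
The kernel comparison described above maps directly onto scikit-learn's SVR; the sketch below runs it on synthetic data standing in for the (unpublished) thin-film measurements.

```python
# Compare SVR kernels (linear, RBF, polynomial, sigmoid) by cross-validated R^2.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(100, 600, size=(80, 1))                # e.g. film thickness (nm), synthetic
y = np.sin(X[:, 0] / 100.0) + rng.normal(0, 0.1, 80)   # synthetic device response

for kernel in ["linear", "rbf", "poly", "sigmoid"]:
    model = SVR(kernel=kernel, C=1.0)
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{kernel:8s} mean R^2 = {score:+.3f}")
```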

Comparative Evaluation of Chest Image Pneumonia based on Learning Rate Application (학습률 적용에 따른 흉부영상 폐렴 유무 분류 비교평가)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Korean Society of Radiology / v.16 no.5 / pp.595-602 / 2022
  • This study aimed to suggest the most efficient learning rate for accurate and efficient automatic diagnosis of chest X-ray pneumonia images using deep learning. After setting the learning rate to 0.1, 0.01, 0.001, and 0.0001 in the Inception V3 deep learning model, deep learning modeling was performed three times for each setting. The average accuracy and loss function value of validation modeling and the metric of test modeling were set as performance evaluation indicators, and performance was compared using the average of the three modeling runs. In both the validation modeling evaluation and the test modeling metric, the model with a learning rate of 0.001 showed the highest accuracy and the best performance. For this reason, this paper recommends applying a learning rate of 0.001 when classifying the presence or absence of pneumonia in chest X-ray images with a deep learning model. Deep learning modeling with the learning rate presented in this paper is judged capable of playing an auxiliary role in classifying the presence or absence of pneumonia in chest X-ray images. If research on classifying pneumonia with deep learning continues, this study can serve as basic data and, furthermore, should help in selecting an efficient learning rate when classifying medical images using artificial intelligence.
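
A sketch of the learning-rate sweep described above, in Keras: the Inception V3 architecture and the 0.1/0.01/0.001/0.0001 grid come from the abstract, while the optimizer choice (Adam) and the commented-out data pipeline are assumptions.

```python
# Build one Inception V3 classifier per learning rate and train each three times.
import tensorflow as tf

def build_model(learning_rate: float) -> tf.keras.Model:
    model = tf.keras.applications.InceptionV3(
        weights=None, input_shape=(299, 299, 3), classes=2
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),  # optimizer assumed
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

for lr in [0.1, 0.01, 0.001, 0.0001]:      # the study found 0.001 performed best
    model = build_model(lr)
    # model.fit(train_ds, validation_data=val_ds, epochs=...)  # repeated 3x per rate
    print(f"built InceptionV3 with lr={lr}")
```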

Development of Graph based Deep Learning methods for Enhancing the Semantic Integrity of Spaces in BIM Models (BIM 모델 내 공간의 시멘틱 무결성 검증을 위한 그래프 기반 딥러닝 모델 구축에 관한 연구)

  • Lee, Wonbok;Kim, Sihyun;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management / v.23 no.3 / pp.45-55 / 2022
  • BIM models allow building spaces to be instantiated and recognized as unique objects, independently of model elements. These instantiated spaces provide the semantics required for building code checking, energy analysis, and evacuation route analysis. However, these spaces or rooms need to be designated manually, which in practice leads to errors and omissions. Thus, most BIM models today do not guarantee the semantic integrity of space designations, limiting their potential applicability. Recent studies have explored ways to automate space allocation in BIM models using artificial intelligence algorithms, but they are limited in scope and achieve relatively low classification accuracy. This study explored the use of Graph Convolutional Networks, an algorithm tailored specifically to graph data structures. The goal was to utilize not only geometry information but also the semantic relational data between spaces and elements in the BIM model. The results confirmed that accuracy improved by about 8% compared to algorithms that use only the geometric features of the individual spaces.
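
As a rough illustration of the graph convolution the study applies to space/element graphs, here is a single Kipf & Welling-style GCN layer in PyTorch; the toy adjacency matrix and feature sizes are invented for the example.

```python
# One GCN layer: symmetric normalization of A + I, then neighborhood aggregation.
import torch

class GCNLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))       # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))     # D^{-1/2}
        return torch.relu(self.linear(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))

# Toy graph: 4 nodes, e.g. two rooms and two bounding elements.
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
x = torch.randn(4, 8)                      # 8 geometric/semantic features per node
layer = GCNLayer(8, 16)
print(layer(x, adj).shape)                 # torch.Size([4, 16])
```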

Temperature Prediction and Control of Cement Preheater Using Alternative Fuels (대체연료를 사용하는 시멘트 예열실 온도 예측 제어)

  • Baasan-Ochir Baljinnyam;Yerim Lee;Boseon Yoo;Jaesik Choi
    • Resources Recycling / v.33 no.4 / pp.3-14 / 2024
  • The preheating and calcination processes in cement manufacturing, which are crucial for producing clinker, the cement intermediate product, require a substantial quantity of fossil fuel to generate high-temperature thermal energy. Owing to the ever-increasing severity of environmental pollution, however, considerable efforts are being made to reduce carbon emissions from fossil fuels in the cement industry, and several preliminary studies have focused on increasing the usage of alternative fuels such as refuse-derived fuel (RDF). Alternative fuels offer several advantages: reduced carbon emissions, mitigated generation of nitrogen oxides, and incineration in preheaters and kilns instead of landfilling. However, because the composition of alternative fuels varies widely, estimating their calorific value is challenging, which makes it difficult to keep the preheater stable and thereby limits their usage. Therefore, in this study, a model based on deep neural networks is developed to accurately predict the preheater temperature and to propose optimal fuel input quantities using explainable artificial intelligence. Applying the proposed model at actual preheating process sites resulted in a 5% reduction in fossil fuel usage, a 5%p increase in the alternative-fuel substitution rate, and a 35% reduction in preheater temperature fluctuations.
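
The abstract does not detail the network, but a generic feed-forward regressor of the kind it describes might look like the sketch below; the twelve process inputs, layer sizes, and synthetic data are assumptions for illustration.

```python
# Minimal feed-forward regressor for preheater temperature prediction.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(12, 64),   # 12 hypothetical process inputs (fuel feeds, fan speeds, ...)
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),    # predicted preheater temperature
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

x = torch.randn(256, 12)       # synthetic sensor snapshot batch
y = torch.randn(256, 1)        # synthetic temperature targets
for _ in range(5):             # abbreviated training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final MSE on synthetic data: {loss.item():.3f}")
```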

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have generated massive amounts of text data, produced and distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification, a challenging task in modern data analysis, assigns a text document to one or more predefined categories or classes. Various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the vocabulary in the corpus and the features created for classification, the performance of a text classification model can vary. Most prior attempts propose a new algorithm or modify an existing one, and this line of research has arguably reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance depends on the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers built from them. We consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers on the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and the target data differ, their features may also differ. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Data from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, as they are not designed to recognize different types of data representation at once and generalize over them together. Therefore, to utilize heterogeneous data when training the document classifier, we apply semi-supervised learning. However, unlabeled data can degrade the performance of the document classifier. We therefore propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
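
RSESLA itself is a multi-view ensemble, but its core selection idea, keeping only confidently pseudo-labeled documents for retraining, can be sketched with a generic self-training step; the corpus and the 0.75 confidence threshold below are illustrative.

```python
# Pseudo-label unlabeled documents and keep only the most confident ones.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

labeled = ["stocks rally on earnings", "team wins the championship"]
labels = np.array([0, 1])                     # 0 = finance, 1 = sports
unlabeled = ["stocks rally continues", "completely unrelated words"]

vec = CountVectorizer()
X_lab = vec.fit_transform(labeled)
X_unl = vec.transform(unlabeled)

clf = MultinomialNB().fit(X_lab, labels)
proba = clf.predict_proba(X_unl)
confident = proba.max(axis=1) >= 0.75         # keep only confident pseudo-labels
print("kept for retraining:", np.array(unlabeled)[confident])
```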

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.19-32 / 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded index options market in the world, with volume of more than 10 million contracts a day, the highest of all stock index option markets. Analyzing the VKOSPI is therefore important for understanding the volatility embedded in option prices and can offer trading ideas to futures and option dealers. Using the VKOSPI as the volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; a short straddle or strangle position is taken if it is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; it is subtracted if the forecasted direction is wrong. For the in-sample period, the power ARCH model fits best on a statistical metric, Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability for the direction of tomorrow's VKOSPI change. Overall, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models yield trading profits for the volatility trading system, and the exponential GARCH model shows the best in-sample performance, with an annual profit of 197.56%. The GARCH models also yield trading profits during the out-of-sample period, except for the exponential GARCH model; there, the power ARCH model shows the largest annual trading profit, at 38%. The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
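
A minimal sketch of fitting the GJR-GARCH(1,1) asymmetric specification and producing a one-step variance forecast with the third-party `arch` Python package, on synthetic returns standing in for the KOSPI 200 series.

```python
# Fit an asymmetric (GJR) GARCH model and forecast next-day variance.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * rng.standard_normal(1000) * 0.01   # synthetic daily returns (%)

# o=1 adds the leverage (asymmetry) term of Glosten, Jagannathan and Runkle.
model = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="normal")
res = model.fit(disp="off")
forecast = res.forecast(horizon=1)
print(res.params[["alpha[1]", "gamma[1]", "beta[1]"]])
print("next-day variance forecast:", float(forecast.variance.iloc[-1, 0]))
```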

An Ontology-based Generation of Operating Procedures for Boiler Shutdown: Knowledge Representation and Application to Operator Training (온톨로지 기반의 보일러 셧다운 절차 생성 : 지식표현 및 훈련시나리오 활용)

  • Park, Myeongnam;Kim, Tae-Ok;Lee, Bongwoo;Shin, Dongil
    • Journal of the Korean Institute of Gas / v.21 no.4 / pp.47-61 / 2017
  • The usefulness of an operator safety training model in large plants depends on the versatility and accuracy of its operating procedures, obtained by detailed analysis of the various types of risk associated with the operation, and on the systematic representation of knowledge. In this study, we apply an artificial intelligence planning method to the generation of operating procedures; classify them into general actions, operator actions, and technical terms; and define a knowledge representation ontology that supports the sharing and reuse of knowledge. To expand and extend the general operations, we apply a Hierarchical Task Network (HTN). Actual boiler plant case studies are classified according to operating conditions, states, and operating objectives between units, and general emergency shutdown procedures are generated to confirm the applicability of the proposed method. These results, based on systematic knowledge representation, can be readily applied to general plant operating procedures and operator safety training scenarios, and will be used for the automatic generation of safety training scenarios.
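
As a rough illustration of HTN decomposition for shutdown procedures, the sketch below recursively expands compound tasks into an ordered plan of primitive actions; the task names and methods are invented, not the paper's ontology.

```python
# Hierarchical Task Network (HTN) decomposition for a toy boiler shutdown.
METHODS = {
    "shutdown_boiler": ["stop_fuel_supply", "purge_furnace", "cool_down"],
    "stop_fuel_supply": ["close_fuel_valve", "confirm_flame_out"],
    "cool_down": ["reduce_drum_pressure", "open_vent_valve"],
}

def decompose(task: str, plan: list[str]) -> None:
    """Recursively expand compound tasks; primitives go straight to the plan."""
    if task in METHODS:
        for subtask in METHODS[task]:
            decompose(subtask, plan)
    else:
        plan.append(task)      # primitive operator action

plan: list[str] = []
decompose("shutdown_boiler", plan)
print(plan)
# ['close_fuel_valve', 'confirm_flame_out', 'purge_furnace',
#  'reduce_drum_pressure', 'open_vent_valve']
```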