• Title/Summary/Keyword: Information based Industry

Search Results: 5,132

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting (설명 가능한 정기예금 가입 여부 예측을 위한 앙상블 학습 기반 분류 모델들의 비교 분석)

  • Shin, Zian;Moon, Jihoon;Rho, Seungmin
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.97-117 / 2021
  • Predicting term deposit subscriptions is a representative financial marketing task for banks, which can build prediction models from various kinds of customer information. To improve classification accuracy for term deposit subscriptions, many studies have been conducted using machine learning techniques. However, even when these models achieve satisfactory performance, they are difficult to adopt in industry if their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. We first construct several classification models using decision-tree-based ensemble learning methods, which perform well on tabular data: random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide a rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, experiments were conducted with the bank marketing dataset provided by Kaggle; we applied SHAP to the GBM and LightGBM models, respectively, according to different dataset configurations, and then performed analysis and visualization for explainable term deposit subscription forecasting.
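The additive attribution idea behind SHAP can be illustrated with an exact Shapley-value computation on a toy two-feature "subscription score" model. This is a minimal sketch, not the paper's pipeline: the actual study applies the SHAP library to trained GBM/LightGBM models, and the model and feature names below are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, x):
    """Exact Shapley values for a model over a small feature set.

    predict  -- function taking a dict {feature: value}
    baseline -- background values used when a feature is 'absent'
    x        -- the instance's actual feature values
    """
    features = list(x)
    n = len(features)

    def value(subset):
        # Features in the coalition take the instance's values;
        # the rest stay at their baseline values.
        inp = {f: (x[f] if f in subset else baseline[f]) for f in features}
        return predict(inp)

    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# Toy linear "subscription score": age and balance both push the score up.
model = lambda d: 0.3 * d["age"] + 0.7 * d["balance"]
phi = shapley_values(model, baseline={"age": 0, "balance": 0},
                     x={"age": 1, "balance": 1})
# Additivity: contributions sum to prediction minus baseline prediction.
assert abs(sum(phi.values()) - model({"age": 1, "balance": 1})) < 1e-9
```

For a linear model the Shapley value of each feature reduces to its coefficient times the feature's deviation from baseline, which is why SHAP summaries of tree ensembles are read as per-feature contributions to a single prediction.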

A Study on an Automatic Classification Model for Facet-Based Multidimensional Analysis of Civil Complaints (패싯 기반 민원 다차원 분석을 위한 자동 분류 모델)

  • Na Rang Kim
    • Journal of Korea Society of Industrial Information Systems / v.29 no.1 / pp.135-144 / 2024
  • In this study, we propose an automatic classification model for quantitative multidimensional analysis based on facet theory to understand public opinions and demands on major issues through big data analysis. Civil complaints, as a form of public feedback, are generated by various individuals on multiple topics repeatedly and continuously in real-time, which can be challenging for officials to read and analyze efficiently. Specifically, our research introduces a new classification framework that utilizes facet theory and political analysis models to analyze the characteristics of citizen complaints and apply them to the policy-making process. Furthermore, to reduce administrative tasks related to complaint analysis and processing and to facilitate citizen policy participation, we employ deep learning to automatically extract and classify attributes based on the facet analysis framework. The results of this study are expected to provide important insights into understanding and analyzing the characteristics of big data related to citizen complaints, which can pave the way for future research in various fields beyond the public sector, such as education, industry, and healthcare, for quantifying unstructured data and utilizing multidimensional analysis. In practical terms, improving the processing system for large-scale electronic complaints and automation through deep learning can enhance the efficiency and responsiveness of complaint handling, and this approach can also be applied to text data processing in other fields.
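The facet-based attribute extraction the study automates with deep learning can be pictured with a toy keyword-rule tagger. This is only a sketch of the classification target, not the paper's method: the facet names, classes, and keywords below are illustrative assumptions, and the actual model is a trained deep learning classifier.

```python
# Toy facet tagger: assigns one class per facet to a complaint text by
# keyword matching. Facet names and keywords are illustrative only; the
# paper trains a deep learning classifier for this step.
FACETS = {
    "topic":    {"noise": ["noise", "loud"], "parking": ["parking", "car"]},
    "location": {"park": ["park"], "street": ["street", "road"]},
}

def classify(text):
    text = text.lower()
    result = {}
    for facet, classes in FACETS.items():
        for label, keywords in classes.items():
            if any(k in text for k in keywords):
                result[facet] = label
                break
    return result

tags = classify("Loud music in the park every night")
# -> {"topic": "noise", "location": "park"}
```

Each facet contributes one dimension of the multidimensional analysis, so a classified complaint becomes a tuple of facet labels that can be counted and cross-tabulated at scale.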

How to build an AI Safety Management Chatbot Service based on IoT Construction Health Monitoring (IoT 건축시공 건전성 모니터링 기반 AI 안전관리 챗봇서비스 구축방안)

  • Hwi Jin Kang;Sung Jo Choi;Sang Jun Han;Jae Hyun Kim;Seung Ho Lee
    • Journal of the Society of Disaster Information / v.20 no.1 / pp.106-116 / 2024
  • Purpose: This paper conducts IoT- and CCTV-based safety monitoring to analyze accidents and potential risks occurring at construction sites, to detect and analyze risks such as falls, collisions, and other abnormalities, and to establish a system for early warning using devices such as walkie-talkies and a chatbot service. Method: A safety management service model is presented through case studies of smart construction technology at construction sites and a review of the relevant literature. Result: According to 'Construction Accident Statistics,' in 2021 there were 26,888 casualties in the construction industry, accounting for 26.3% of all reported accidents. Fatalities in construction-related accidents amounted to 417 individuals, representing 50.5% of all industrial accident-related deaths. This study suggests implementing AI chatbot services for construction site safety management using IoT-based health monitoring technologies in smart construction practices. The system was demonstrated at construction sites with participating stakeholders, such as workers, by implementing an artificial intelligence chatbot system for selected high-risk areas within the workplace, such as scaffolding processes, openings, and access to hazardous machinery. Conclusion: The possibility of commercialization was confirmed, as the empirical trial of the artificial intelligence chatbot service at construction sites received more than 90 points in the satisfaction survey of participating workers.
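The early-warning step described above can be sketched as a threshold check over IoT readings that produces an alert message for the chatbot channel. The sensor names, threshold values, and message wording are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the early-warning step: check IoT readings against
# per-sensor limits and produce a chatbot alert message. Sensor names,
# thresholds, and wording are illustrative assumptions.
THRESHOLDS = {"scaffold_tilt_deg": 5.0, "opening_proximity_m": 1.0}

def check_reading(sensor, value):
    limit = THRESHOLDS[sensor]
    if sensor == "opening_proximity_m":
        breached = value < limit   # a worker too close to an opening
    else:
        breached = value > limit   # e.g. scaffolding tilted too far
    if breached:
        return f"[ALERT] {sensor}={value} breaches limit {limit}"
    return None

alerts = [m for m in (check_reading("scaffold_tilt_deg", 7.2),
                      check_reading("opening_proximity_m", 2.5)) if m]
# Only the scaffold tilt reading breaches its limit here.
```

In a deployed system the alert string would be pushed to the chatbot and walkie-talkie channels rather than collected in a list.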

Online news-based stock price forecasting considering homogeneity in the industrial sector (산업군 내 동질성을 고려한 온라인 뉴스 기반 주가예측)

  • Seong, Nohyoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.1-19 / 2018
  • Since stock movement forecasting is an important issue both academically and practically, studies related to stock price prediction have been actively conducted. Stock price forecasting research is classified into structured-data and unstructured-data approaches, and in detail into technical analysis, fundamental analysis, and media effect analysis. In the big data era, research on stock price prediction that incorporates big data is actively underway and mainly focuses on machine learning techniques. In particular, methods that incorporate media effects have recently attracted attention, among which studies that analyze online news and use it to forecast stock prices have become mainstream. Previous studies predicting stock prices through online news mostly perform sentiment analysis of the news, build a separate corpus for each company, and construct a dictionary that predicts stock prices by recording responses to past stock prices. These existing studies therefore examined the impact of online news on individual companies; for example, stock movements of Samsung Electronics were predicted using only online news about Samsung Electronics. More recently, methods that consider influences among highly related companies have also been studied; for example, stock movements of Samsung Electronics are predicted using news about both Samsung Electronics and a highly related company such as LG Electronics. These previous studies examine the effects of news from an industrial sector, assumed to be homogeneous, on an individual company. In them, homogeneous industries are classified according to the Global Industry Classification Standard (GICS); in other words, the analyses assumed that industries divided by GICS are homogeneous.
However, existing studies have the limitation that they neither account for influential, highly related companies nor reflect the heterogeneity that exists within the same GICS sectors. Our examination of various sectors shows that some industrial sectors are not homogeneous groups. To overcome this limitation, our study suggests a methodology that reflects the heterogeneous effects of the industrial sector on the stock price by applying k-means clustering. Multiple Kernel Learning is mainly used to integrate data with various characteristics: it has several kernels, each of which receives and predicts different data. To incorporate the effects of the target firm and its related firms simultaneously, we used Multiple Kernel Learning, with each kernel assigned to predict stock prices using financial-news variables of an industrial subgroup obtained by k-means cluster analysis around the target firm. To show that the suggested methodology is appropriate, experiments were conducted on three years of online news and stock prices. The results of this study are as follows. (1) We confirmed that information from the industrial sectors related to the target company contains meaningful information for predicting its stock movements, and that the machine learning algorithm has better predictive power when the news of related companies is considered together with the target company's news. (2) It is important to predict stock movements with a varying number of clusters according to the level of homogeneity in the industrial sector. In other words, when stock prices are homogeneous within an industrial sector, it is better to use the relational effect at the level of the whole industry group, or with a small number of clusters.
When stock prices are heterogeneous within an industry group, it is important to cluster the firms into subgroups. A contribution of this study is that we verified that firms classified under the Global Industry Classification Standard exhibit heterogeneity, and suggested that relevance should be defined through machine learning and statistical analysis rather than simply by the GICS classification. Another contribution is that we demonstrated the efficiency of a prediction model that reflects this heterogeneity.
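The subgrouping step, clustering firms within a sector before assigning each group to a kernel, can be sketched with a plain k-means implementation. The feature choice (mean return, volatility) and the firm vectors below are illustrative assumptions; the paper clusters on its own news and price variables.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means for small numeric feature vectors (tuples)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (squared distance).
            i = min(range(k), key=lambda j: sum((a - b) ** 2
                    for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(dim) / len(dim) for dim in zip(*cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Illustrative per-firm features, e.g. (mean daily return, volatility).
firms = [(0.01, 0.02), (0.012, 0.019), (-0.03, 0.08), (-0.028, 0.083)]
clusters = kmeans(firms, k=2)
# The two low-volatility firms and the two high-volatility firms
# end up in separate clusters.
```

In the paper's setting each resulting cluster would feed one kernel of the Multiple Kernel Learning model, so heterogeneous sectors contribute several kernels instead of one.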

A Study on Kiosk Satisfaction Level Improvement: Focusing on Kano, Timko, and PCSI Methodology (키오스크 소비자의 만족수준 연구: Kano, Timko, PCSI 방법론을 중심으로)

  • Choi, Jaehoon;Kim, Pansoo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.17 no.4 / pp.193-204 / 2022
  • This study measured customer satisfaction among kiosk users and analyzed how strongly each factor influences it and how it can be improved. With the development of technology and the improvement of the online environment, the probability that simple labor tasks will disappear within 10 years is estimated at close to 90%, and domestic research likewise predicts that 'simple labor jobs' will disappear due to advanced technology with a probability of about 36%. In particular, as demand for non-face-to-face services has increased with the global spread of COVID-19, the introduction of kiosks has accelerated; the global market grew to 83.5 billion won in 2021, with an average annual growth rate of 8.9%. However, because kiosks are unmanned, some consumers still have difficulty using them. Consumers unfamiliar with these technologies show a negative attitude toward service co-production, owing to rejection of non-face-to-face services and anxiety about service errors; this lack of understanding leads to role conflicts between sales clerks and consumers, and to inequality between generations in terms of service provision and familiarity with technology. In addition, since the kiosk is a representative technology-based self-service industry, if users feel uncomfortable or additional labor is required, the overall service value decreases and the growth of the kiosk industry itself can be suppressed, so addressing these issues is important. Therefore, interviews on the main pain points of actual use were conducted with real users, from which the following evaluation items were extracted: display color scheme, text size, device design, device size, internal UI (interface), amount of information, recognition sensors (barcode, NFC, etc.), display brightness, self-event, and reaction speed.
Afterwards, using a questionnaire, the Kano-model quality attributes of each evaluation item were classified, Timko's customer satisfaction coefficient, which expresses the result as a precise numerical value, was calculated, and a PCSI index analysis was additionally performed to determine improvement priorities by classifying the improvement impact of each kiosk evaluation item. As a result, the impact of improvement appears in the order of internal UI (interface), text size, recognition sensor (barcode, NFC, etc.), reaction speed, self-event, display brightness, amount of information, device size, device design, and display color scheme. Through this, we intend to contribute to a comprehensive comparison of kiosk-based research in each field and to set directions for improvement in the venture industry.
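Timko's coefficients can be computed directly from the Kano response counts for an item. The formulas below are the standard ones (Better = (A+O)/(A+O+M+I), Worse = -(O+M)/(A+O+M+I)); the response counts are illustrative, not the study's data.

```python
# Timko's customer satisfaction coefficients from Kano survey counts.
# A = attractive, O = one-dimensional, M = must-be, I = indifferent.
# The counts used below are illustrative, not the study's data.
def timko(A, O, M, I):
    total = A + O + M + I
    better = (A + O) / total    # satisfaction gain if the item is fulfilled
    worse = -(O + M) / total    # dissatisfaction if it is not fulfilled
    return better, worse

better, worse = timko(A=30, O=40, M=20, I=10)
# better = 0.7, worse = -0.6: improving this item raises satisfaction
# slightly more than neglecting it would lower it.
```

The PCSI index then combines these coefficients with the current satisfaction level of each item to rank improvement priorities, which is how the ordering reported in the abstract is obtained.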

Application of Lean Theory to BIM-Based Coordination - A Case Study on Process Re-Engineering of MEP Coordination - (린 기법의 BIM 기반 설계조율 프로세스 접목 - 설비전기 설계조율 프로세스 재설계 사례연구 -)

  • Jang, Se-Jun
    • Journal of the Korea Institute of Building Construction / v.18 no.1 / pp.67-79 / 2018
  • This paper presents a theoretical adaptation of the lean concept and its application to the building information modeling (BIM) process. Recently, much research has focused on applying the lean concept for more efficient use of BIM. Lean theory, with its basic functions and features, originates in the manufacturing industry, where processes can be improved through the lean re-engineering steps of value, value stream, flow, pull, and perfection. However, manufacturing processes and construction processes have different characteristics, so the five steps of traditional lean process re-engineering cannot be applied directly to the BIM-based engineering process. To solve this problem, we analyze the characteristics of the manufacturing process and of BIM-based engineering, propose a modified and expanded lean concept for process re-engineering, and apply the modified theory to the mechanical, electrical, and plumbing (MEP) coordination process. Through the proposed eight-step methodology, the 2D-based process was transformed into an integrated, BIM-based MEP coordination process. In addition, the results showed the potential for cost reduction and process improvement. The results of this study can serve as a foundation for the theoretical combination of lean with various parts of the construction engineering process.

How to automatically extract 2D deliverables from BIM?

  • Kim, Yije;Chin, Sangyoon
    • International conference on construction engineering and project management / 2022.06a / pp.1253-1253 / 2022
  • Although the construction industry is changing from a 2D-based to a 3D BIM-based management process, 2D drawings are still used as standards for permits and construction. For this reason, 2D deliverables extracted from 3D BIM are one of the essential achievements of BIM projects. However, due to technical and institutional problems that exist in practice, the process of extracting 2D deliverables from BIM requires additional work beyond generating 3D BIM models. In addition, the consistency of data between 3D BIM models and 2D deliverables is low, which is a major factor hindering work productivity in practice. To solve this problem, it is necessary to build BIM data that meets information requirements (IRs) for extracting 2D deliverables to minimize the amount of work of users and maximize the utilization of BIM data. However, despite this, the additional work that occurs in the BIM process for drawing creation is still a burden on BIM users. To solve this problem, the purpose of this study is to increase the productivity of the BIM process by automating the process of extracting 2D deliverables from BIM and securing data consistency between the BIM model and 2D deliverables. For this, an expert interview was conducted, and the requirements for automation of the process of extracting 2D deliverables from BIM were analyzed. Based on the requirements, the types of drawings and drawing expression elements that require automation of drawing generation in the design development stage were derived. Finally, the method for developing automation technology targeting elements that require automation was classified and analyzed, and the process for automatically extracting BIM-based 2D deliverables through templates and rule-based automation modules were derived. 
The automation module was developed as an add-on to Revit, a representative BIM authoring tool, together with 120 rule-based automation rulesets; combinations of these rulesets were used to automatically generate 2D deliverables from BIM. With this approach, about 80% of drawing expression elements could be created automatically, and the user's work process could be simplified compared to the existing workflow. Through the automation process proposed in this study, the productivity of extracting 2D deliverables from BIM is expected to increase, thereby increasing the practical value of BIM utilization.
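The ruleset idea, rules that match BIM elements and emit 2D drawing annotations, can be pictured with a toy rule engine. The element fields, rules, and annotation strings below are illustrative assumptions; the actual module is a Revit add-on whose 120 rulesets are not described in detail here.

```python
# Toy rule engine in the spirit of the paper's rulesets: each rule is a
# (match, emit) pair over BIM element records. Fields and rules are
# illustrative; the real module runs inside Revit.
RULES = [
    (lambda e: e["category"] == "Door",
     lambda e: f"door tag {e['mark']}"),
    (lambda e: e["category"] == "Wall" and e["fire_rating"],
     lambda e: f"fire-rating label {e['fire_rating']}"),
]

def annotate(elements):
    notes = []
    for e in elements:
        for match, emit in RULES:
            if match(e):
                notes.append(emit(e))
    return notes

notes = annotate([
    {"category": "Door", "mark": "D-01", "fire_rating": None},
    {"category": "Wall", "mark": "W-07", "fire_rating": "1h"},
])
# -> ["door tag D-01", "fire-rating label 1h"]
```

Combining small declarative rules like these is what lets templates cover most drawing expression elements without per-drawing manual work.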


Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, the anomaly detection field relied on determining whether an abnormality existed based on statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as the characteristics of data have grown complex in the era of big data, it has become more difficult to analyze and predict data generated throughout industry in the conventional way. Supervised learning algorithms based on SVMs and decision trees were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution is balanced, whereas most data generated in industry have imbalanced classes, so its predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection on sequence data with generative adversarial networks is under-researched compared to image data. Li et al. (2018) proposed a model that classifies anomalies in numerical sequence data using LSTM, a type of recurrent neural network, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016).
This suggests there is ample room for studies on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator's two stacked LSTMs have 32-dim and 64-dim hidden-unit layers, and the discriminator's LSTM has a 64-dim hidden-unit layer. Whereas existing work on anomaly detection for sequence data derives anomaly scores from entropy values of the probabilities of the actual data, in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it can learn the data distribution from real categorical sequence data, it is unaffected by a single normal data pattern, whereas the autoencoder is not. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder scored 40% and the generative adversarial network 51%. Experiments were also conducted to measure how much performance changes with differences in the latent-variable optimization structure; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
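The feature-matching score can be sketched as the distance between a sequence's discriminator features and the mean features of normal data. The stand-in feature extractor below replaces the trained LSTM discriminator, which is a deliberate simplification for illustration; only the scoring logic mirrors the technique.

```python
# Sketch of the feature-matching anomaly score: distance between a
# sequence's "discriminator" features and the mean features of normal
# sequences. The feature extractor is a stand-in for the trained LSTM
# discriminator (an assumption for illustration).
def features(seq):
    # Stand-in for discriminator hidden features: mean and max of the sequence.
    return (sum(seq) / len(seq), max(seq))

def anomaly_score(seq, normal_seqs):
    feats = [features(s) for s in normal_seqs]
    center = tuple(sum(dim) / len(dim) for dim in zip(*feats))
    return sum((a - b) ** 2 for a, b in zip(features(seq), center)) ** 0.5

normal = [[1, 2, 1, 2], [2, 1, 2, 1], [1, 1, 2, 2]]
low = anomaly_score([2, 2, 1, 1], normal)    # resembles normal behavior
high = anomaly_score([9, 9, 9, 9], normal)   # deviates strongly
assert high > low
```

In the paper the features come from the discriminator's hidden layer, so sequences the GAN has learned to model land near the normal feature center and genuine anomalies land far away.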

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia pacific journal of information systems / v.22 no.2 / pp.21-38 / 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognize the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages" -that is, software programs developed by independent software vendors for sale to organizations that use them-they are designed to meet the general needs of numerous organizations, rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself could be a risk factor as the features and functions of the ERP systems may not completely comply with a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective that was derived from organizational memory theory and cognitive dissonance theory, we define the nature of disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants in the successful implementation of ERP systems. Furthermore, we suggest that customization efforts as a coping strategy for mismatches can play a significant role in increasing the possibilities of success. In order to examine the contention we propose in this study, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. 
The results of this study show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems makes a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the more mismatch there is, the more difficult successful ERP implementation is, and thus requires more attention to be drawn to mismatch as a major failure source in ERP implementation. This study also found that as a coping strategy on mismatch, the effects of customization are significant. In other words, utilizing the appropriate customization method could lead to the implementation success of ERP systems. This is somewhat interesting because it runs counter to the argument of some literature and ERP vendors that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we don't attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches-Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support the existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have significant impact in the ERP context. These findings provide meaningful insights since they could serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in the implementation process. 
They show that ERP developers may not want to include organizational (or business processes) changes in the implementation process, suggesting that doing so could lead to failed implementation. And in fact, this suggestion eventually turned out to be true when we found that the application of process customization led to higher possibilities of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in the same line of thought as that of the vendors in the ERP industry. The vendors' recommendations are to apply as many best practices as possible, thereby resulting in the minimization of customization and utilization of bolt-on development methods. They particularly advise against changing the source code and rather recommend employing, when necessary, the method of programming additional software code using the computer language of the vendor. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting there is much difference between ERP developers and vendors in viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. 
Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.


Conflict of Interests and Analysts' Forecast (이해상충과 애널리스트 예측)

  • Park, Chang-Gyun;Youn, Taehoon
    • KDI Journal of Economic Policy / v.31 no.1 / pp.239-276 / 2009
  • The paper investigates the possible relationship between earnings predictions by security analysts and the special ownership ties that link the security companies those analysts belong to with the firms under analysis. Security analysts are best known for their role as information producers in stock markets, where imperfect information is prevalent and transaction costs are high. In such a market, changes in the fundamental value of a company are not spontaneously reflected in the stock price, and security analysts actively produce and distribute the relevant information crucial for the price mechanism to operate efficiently. Securing the fairness and accuracy of the information they provide is therefore very important for efficiency of resource allocation as well as for the protection of investors who are excluded from the special relationship. Evidence of systematic distortion of information through such special ties, if found, naturally calls for regulatory intervention. However, one cannot presuppose the existence of distorted information based merely on common ownership between the appraiser and the appraisee. Reputation is especially cherished by security firms and analysts as an indispensable intangible asset in the industry, and the incentive to maintain a good reputation by providing accurate earnings predictions may outweigh the incentive to offer favorable ratings or stock recommendations for firms affiliated through common ownership. This study shares the theme of the existing literature concerning the effect of conflicts of interest on the accuracy of analysts' predictions. It focuses, however, on the potential conflict-of-interest situation that may originate from the Korea-specific ownership structure of large conglomerates. Utilizing an extensive database of analysts' reports provided by WiseFn(R) in Korea, we perform an empirical analysis of the potential relationship between earnings prediction and common ownership.
We first analyzed the prediction bias index, which tells how optimistic or friendly the analyst's prediction is compared to the realized earnings. It is shown that there exists no statistically significant relationship between the prediction bias and common ownership. This is a rather surprising result, since the frequency of positive prediction bias is observed to be higher with such ownership ties. Next, we analyzed the prediction accuracy index, which shows how accurate the analyst's prediction is compared to the realized earnings regardless of its sign. It is likewise concluded that there is no significant association between the accuracy of earnings prediction and the special relationship. We interpret these results as implying that market discipline based on reputation effects is working in the Korean stock market, in the sense that security companies do not seem to be influenced by an incentive to offer distorted information on affiliated firms. While many existing studies confirm the relationship between the ability of the analyst and the accuracy of the analyst's prediction, these factors could not be controlled in the above analysis due to the lack of relevant data. As an indirect way to examine whether such a relationship might have distorted the result, we performed an additional, identical analysis based on a sub-sample consisting only of reports by the best analysts. The result confirms the earlier conclusion that the common ownership structure does not affect the accuracy and bias of earnings predictions by analysts.
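The two indices can be illustrated with their common definitions: a signed, scaled forecast error for bias and its absolute value for (in)accuracy. The abstract does not spell out the study's exact formulas, so these are standard stand-ins for illustration only.

```python
# Common definitions of the two indices (the abstract does not give the
# study's exact formulas, so these are standard stand-ins): bias is signed,
# accuracy error is unsigned, both scaled by realized earnings.
def prediction_bias(forecast, actual):
    return (forecast - actual) / abs(actual)

def prediction_accuracy(forecast, actual):
    return abs(forecast - actual) / abs(actual)

bias = prediction_bias(forecast=110.0, actual=100.0)       # +0.10: optimistic
acc = prediction_accuracy(forecast=110.0, actual=100.0)    # 0.10 scaled error
```

Separating the signed and unsigned measures is what lets the study test optimism (bias) and skill (accuracy) against common ownership independently.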
