• Title/Summary/Keyword: artificial intelligence-based model


Deep Learning-based Fracture Mode Determination in Composite Laminates (복합 적층판의 딥러닝 기반 파괴 모드 결정)

  • Muhammad Muzammil Azad;Atta Ur Rehman Shah;M.N. Prabhakar;Heung Soo Kim
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.4
    • /
    • pp.225-232
    • /
    • 2024
  • This study focuses on the determination of the fracture mode in composite laminates using deep learning. With the increase in the use of laminated composites in numerous engineering applications, ensuring their integrity and performance is of paramount importance. However, owing to the complex nature of these materials, the identification of fracture modes is often a tedious and time-consuming task that requires critical domain knowledge. Therefore, to alleviate these issues, this study aims to utilize modern artificial intelligence technology to automate the fractographic analysis of laminated composites. To accomplish this goal, scanning electron microscopy (SEM) images of fractured tensile test specimens are obtained from laminated composites to showcase various fracture modes. These SEM images are then categorized by fracture mode, including fiber breakage, fiber pull-out, mixed-mode fracture, matrix brittle fracture, and matrix ductile fracture. Next, the collective data for all classes are divided into train, test, and validation datasets. Two state-of-the-art, deep learning-based pre-trained models, namely DenseNet and GoogleNet, are trained to learn the discriminative features of each fracture mode. The DenseNet model shows training and testing accuracies of 94.01% and 75.49%, respectively, whereas those of the GoogleNet model are 84.55% and 54.48%, respectively. The trained deep learning models are then validated on unseen validation datasets. This validation demonstrates that the DenseNet model, owing to its deeper architecture, can extract high-quality features, resulting in 84.44% validation accuracy, 36.84% higher than that of the GoogleNet model. Hence, these results affirm that the DenseNet model is effective in performing fractographic analyses of laminated composites by predicting fracture modes with high precision.
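As a rough illustration of the transfer-learning setup this abstract describes, the sketch below fine-tunes a pre-trained DenseNet on a five-class image folder using PyTorch. The directory layout, input size, and hyperparameters are assumptions for illustration; the paper's exact preprocessing and training configuration are not given in the abstract.

```python
# Minimal transfer-learning sketch for fracture-mode classification (assumed
# setup; the paper's exact preprocessing and hyperparameters are not stated).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Five fracture modes named in the abstract.
NUM_CLASSES = 5  # fiber breakage, fiber pull-out, mixed-mode, matrix brittle, matrix ductile

transform = transforms.Compose([
    transforms.Resize((224, 224)),                # DenseNet's usual input size
    transforms.Grayscale(num_output_channels=3),  # SEM images are grayscale
    transforms.ToTensor(),
])

# Hypothetical directory layout: sem_images/train/<class_name>/*.png
train_set = datasets.ImageFolder("sem_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Pre-trained DenseNet with its classifier head replaced for five classes.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                           # illustrative epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```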

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of their distinctive capital structures and debt-to-equity ratios, construction companies' bankruptcies are more difficult to forecast than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial statements have been studied for many years in various ways, but these models target companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, requiring significant investments in long-term, large-scale projects, so payback periods are comparatively longer than in other industries. Given this unique capital structure, the criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies when company size is taken into account. We classified construction companies into three groups, large, medium, and small, based on each company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
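For reference, the original 1968 Altman Z-score combines five financial ratios: working capital, retained earnings, and EBIT each scaled by total assets, market value of equity scaled by total liabilities, and sales scaled by total assets, with scores below 1.81 read as dangerous, above 2.99 as safe, and the band between as the hard-to-forecast moderate zone. The sketch below pairs that rule with a scikit-learn AdaBoost classifier of the kind the paper evaluates; the feature matrix and labels are synthetic placeholders, not the paper's data.

```python
# Sketch contrasting the Altman Z-score rule with an AdaBoost classifier on
# financial-ratio features; the feature data here are illustrative only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original 1968 Altman Z-score for public manufacturing firms."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def altman_zone(z):
    if z < 1.81:
        return "dangerous"   # high bankruptcy risk within ~2 years
    if z < 2.99:
        return "moderate"    # grey zone: risk hard to forecast
    return "safe"

# Hypothetical financial-ratio matrix X (one row per construction firm)
# and bankruptcy labels y (1 = bankrupt, 0 = solvent).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

# AdaBoost with decision-stump base learners, as in the classical algorithm.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
print(altman_zone(altman_z(0.1, 0.2, 0.05, 1.5, 0.9)))
```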

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand results from an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we modeled the Process Handbook meta-model in OWL and exported the processes in the Handbook as instances of the meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we created 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devised diverse similarity algorithms based on the values of process attributes and the structures of business processes. We used simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, and utilized a tree edit distance measure because semantic processes appear to have a graph structure. We also designed similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We performed retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method incorporating the TF-IDF measure and Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculated the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show higher coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it therefore appears better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generated semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggested imprecise query algorithms that expand retrieval results from an exact matching engine such as SPARQL, and compared the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and, since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
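The sketch below illustrates, under assumptions, the kinds of similarity measures the abstract combines: TF-IDF cosine similarity and Levenshtein edit distance over process names and descriptions, and Jaccard and Dice overlap over subcomponent sets. The example texts and sets are invented, and the paper's Lev-TFIDF-JaccardAll weighting is not reproduced here.

```python
# Sketch of the kinds of similarity measures the paper combines; the exact
# weighting of the combined Lev-TFIDF-JaccardAll measure is not specified here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(s1: set, s2: set) -> float:
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 1.0

def dice(s1: set, s2: set) -> float:
    return 2 * len(s1 & s2) / (len(s1) + len(s2)) if (s1 or s2) else 1.0

# TF-IDF cosine similarity over process descriptions (illustrative texts).
docs = ["approve purchase order", "approve purchase request"]
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Structural overlap between subcomponent sets (e.g., part processes, goals).
print(jaccard({"approve", "order"}, {"approve", "request"}),
      dice({"approve", "order"}, {"approve", "request"}),
      levenshtein("order", "request"))
```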

New Insights on Mobile Location-based Services(LBS): Leading Factors to the Use of Services and Privacy Paradox (모바일 위치기반서비스(LBS) 관련한 새로운 견해: 서비스사용으로 이끄는 요인들과 사생활염려의 모순)

  • Cheon, Eunyoung;Park, Yong-Tae
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.33-56
    • /
    • 2017
  • As Internet usage becomes more common worldwide and smartphones become a necessity in daily life, technologies and applications related to the mobile Internet are developing rapidly. The Internet usage patterns of consumers around the world imply many potential new business opportunities for mobile Internet technologies and applications. A location-based service (LBS) is a service based on the location information of the mobile device. LBS has recently attracted much attention among mobile applications, and various LBSs are rapidly developing in numerous categories. However, even with the development of LBS-related technologies and services, there is still a lack of empirical research on the intention to use LBS. The applicability of previous research is limited because it focused on the effect of one particular factor and did not show a direct relationship with the intention to use LBS. Therefore, this study presents a research model of factors that affect the intention to use and the actual use of LBS, whose market is expected to grow rapidly, and tests it through a questionnaire survey of 330 users. The results of the data analysis showed that service customization, service quality, and personal innovativeness have a positive effect on the intention to use LBS, and that the intention to use LBS has a positive effect on actual use. These results imply that LBS providers can enhance users' intention to use LBS by offering customization through the provision of various LBSs based on users' needs, improving information service qualities such as accuracy, timeliness, sensitivity, and reliability, and encouraging personal innovativeness. However, privacy concerns in the context of LBS are not significantly affected by service customization or personal innovativeness, and privacy concerns do not significantly affect the intention to use LBS. The location information collected by LBS is less sensitive than the information used to perform financial transactions, which may explain these outcomes for privacy concerns. In addition, for LBS users the advantages of using the service outweigh the sensitivity of privacy protection more than they do for users of information systems that involve financial transactions, such as electronic commerce; LBS should therefore be treated differently from other information systems. This study makes a theoretical contribution in that it proposed factors affecting the intention to use LBS from a multi-faceted perspective, empirically validated the proposed research model, brought new insights on LBS, and broadened understanding of the intention to use and actual use of LBS. The empirical finding that customization affects users' intention to use LBS also suggests that providing customized LBS based on usage data analysis, utilizing technologies such as artificial intelligence, can enhance that intention. From a practical point of view, the results of this study are expected to help LBS providers develop a competitive strategy for responding to LBS users effectively and to help the LBS market grow. We expect that the use of LBSs will differ depending on factors such as the type of LBS, whether it is free of charge, privacy policies related to LBS, the reliability of the application and technology, and the frequency of use. Comparative studies involving these factors would therefore contribute to the development of LBS research. We hope this study can inspire many researchers and initiate further research in the LBS field.

A Study on the Connective Validity of Technology Maturity and Industry for Core Technologies based on 4th Industrial Revolution (4차 산업혁명 기반 핵심기술에 대한 기술성숙도와 산업과 연계 타당성 연구)

  • Cho, Han-Jin;Jeong, Kyuman
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.3
    • /
    • pp.49-57
    • /
    • 2019
  • The development of each core technology of the Fourth Industrial Revolution is linked to the development of the other core technologies, which will change the industrial structure in the future and create new smart business models. In this paper, we analyze the maturity levels of these core technologies. To do this, we use technology trend information to investigate and integrate the market, policy, and related aspects of each core technology of the Fourth Industrial Revolution into a comprehensive maturity level. Because technology maturity is scored by technology developers, the evaluation can be subjective and biased by individual tendencies. It is also a measure of the maturity of individual technologies, and thus is not suitable for evaluating a system from the perspective of overall integration. However, it makes it possible to evaluate maturity before integrating the core element technologies that constitute the whole system, to compare the expected effect and feasibility of the whole system, and thereby to play an important role in planning technology development.

A Guideline for Identifying Blockchain Applications in Organizations (기업에서 요구되는 블록체인 애플리케이션 탐색을 위한 가이드라인)

  • Namn, Su Hyeon
    • Management & Information Systems Review
    • /
    • v.38 no.1
    • /
    • pp.83-101
    • /
    • 2019
  • Blockchain is considered an innovative technology alongside Artificial Intelligence, Big Data, and the Internet of Things. However, since the inception of Bitcoin, the genesis of blockchain technology, the technology has not been widely utilized, let alone produced disruptive applications. Most blockchain research deals with cryptocurrency or gives general descriptions of the technology, such as trends, outlooks, and explanations of component technologies. There are, of course, no killer applications like Facebook or Google. Reflecting on the slow adoption by businesses, we wanted to know the current status of blockchain research in Korea. The main purpose of this paper is to help business practitioners identify blockchain applications that can enhance the competitiveness of their organizations. To do so, we first use the framework of Iansiti et al. (2017) and categorize the blockchain-related articles published in Korea according to it, providing a benchmark of other organizations' adoption of blockchain technology. Second, based on the value proposition of blockchain applications, we suggest evolutionary paths for adopting them. Third, from the demand-pull perspective of technology adoption for innovation, we propose areas where blockchain applications can be introduced. Fourth, we use the value chain model to find appropriate domains for blockchain applications in corporate value chains, and we adopt the five competitive forces model to find ways of lowering the power of those forces by incorporating blockchain technology.

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata
    • /
    • v.6 no.1
    • /
    • pp.63-81
    • /
    • 2021
  • The performance of natural language processing is rapidly improving due to the recent development and application of machine learning and deep learning technologies, and as a result, its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP (Natural Language Processing) is also increasing. However, due to the complexity and difficulty of the natural language preprocessing process and of machine learning and deep learning theory, there are still high barriers to the use of natural language processing. In this paper, for an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of major technologies centered on machine learning and deep learning, in order to provide a foundation for understanding and utilizing NLP more easily. We therefore trace how NLP has evolved within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language models, text classification, text generation, document summarization, question answering, and machine translation, are explained together with state-of-the-art deep learning models. In addition, the major deep learning models utilized in NLP are explained, and the datasets and evaluation measures used for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their fields will be able to understand the overall technical status and main technologies of NLP through this paper.
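As a minimal, hedged illustration of applying pre-trained deep learning models to two of the NLP areas the survey covers, the sketch below uses the Hugging Face Transformers pipeline API; the default models it downloads are assumptions, not models discussed in the paper.

```python
# Minimal sketch of applying pre-trained deep learning models to two NLP tasks
# the survey covers; the pipeline's default model choices are assumptions.
from transformers import pipeline

# Text classification (here sentiment analysis) with a pre-trained transformer.
classifier = pipeline("sentiment-analysis")
print(classifier("Deep learning has greatly improved NLP performance."))

# Abstractive document summarization with a pre-trained seq2seq model.
summarizer = pipeline("summarization")
print(summarizer("Natural language processing performance is rapidly improving "
                 "due to machine learning and deep learning, and the field of "
                 "application is expanding.", max_length=25, min_length=5))
```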

Investigating the Characteristics of Academia-Industrial Cooperation-based Patents for their Long-term Use (지속적 활용이 가능한 산학협력 특허 특성 분석)

  • Park, Sang-Young;Choi, Youngjae;Lee, Sungjoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.568-578
    • /
    • 2021
  • Patents that result from industry-university cooperation (IUC) are a source of innovation and play an important role in economic growth through technology transfer and commercialization. For this reason, there are many efforts to revitalize IUC; in general, however, company patents are valued as achievements that can be commercialized rather than as research achievements, so not all patents created as the outcome of IUC are actually used in business. Therefore, this research supports the design of measures by which IUC, once it has been successfully promoted and patents have been filed as a result, can ultimately be linked to the successful utilization of those patents. To this end, patents registered through industry-academia cooperation in the United States are first collected, and a predictive model is then designed in which unexpired and expired patents are predicted using machine learning techniques. From the identified patents, usable factors are derived in terms of marketability and technicality. This study is expected to help predict whether patents will remain in force or expire, and to contribute to setting goals for research results among corporate and university officials planning IUC.
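A minimal sketch of the expired/unexpired prediction step might look like the following, with a random forest standing in for the unspecified machine learning technique and a hypothetical feature set (claim counts, citations, family size, IPC classes) standing in for the paper's marketability and technicality factors.

```python
# Illustrative sketch of the expired/unexpired prediction step; the feature set
# and the synthetic data are hypothetical, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
# Hypothetical features per patent: [num_claims, forward_citations,
# backward_citations, family_size, num_ipc_classes]
X = rng.poisson(lam=[12, 5, 8, 3, 2], size=(500, 5)).astype(float)
y = rng.integers(0, 2, size=500)  # 1 = maintained (unexpired), 0 = expired

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))

# Feature importances hint at which marketability/technicality factors matter.
print(dict(zip(["claims", "fwd_cites", "bwd_cites", "family", "ipc"],
               model.feature_importances_.round(3))))
```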

Application of deep learning technique for battery lead tab welding error detection (배터리 리드탭 압흔 오류 검출의 딥러닝 기법 적용)

  • Kim, YunHo;Kim, ByeongMan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.2
    • /
    • pp.71-82
    • /
    • 2022
  • In order to replace the sampled tensile testing of products produced in the tab welding process, one of the automotive battery manufacturing processes, vision inspection systems are currently being developed and used. However, vision inspection suffers from inspection position errors, and improving it is costly. To solve these problems, deep learning technology has recently been applied. As one such case, this paper examines the usefulness of applying Faster R-CNN, a deep learning technique, to existing product inspection. Images acquired through the existing vision inspection machine are used as training data for the Faster R-CNN ResNet101 V1 1024x1024 model. The results of the conventional vision test and the Faster R-CNN test are compared and analyzed against test standards of 0% non-detection and 10% over-detection. The non-detection rate is 34.5% for the conventional vision test and 0% for the Faster R-CNN test; the over-detection rate is 100% for the conventional vision test and 6.9% for Faster R-CNN. These results confirm that deep learning technology is very useful for detecting welding errors in the lead tabs of automotive batteries.
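As a rough sketch of fine-tuning a Faster R-CNN detector for this kind of weld-indentation inspection, the code below uses torchvision's ResNet-50 FPN variant as a stand-in for the paper's TensorFlow Faster R-CNN ResNet101 V1 1024x1024 model; the single defect class and the flagging threshold are assumptions.

```python
# Sketch of fine-tuning a Faster R-CNN detector for weld-indentation defects.
# torchvision's ResNet-50 FPN variant stands in for the paper's model; the
# class list and score threshold are assumptions for illustration.
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                          FasterRCNN_ResNet50_FPN_Weights)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + weld-indentation defect (assumed labeling)

model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One illustrative training step on a dummy image with one ground-truth box.
images = [torch.rand(3, 1024, 1024)]
targets = [{"boxes": torch.tensor([[100., 120., 220., 260.]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)          # dict of RPN and ROI-head losses
sum(losses.values()).backward()

# At inspection time, score each detection and flag images above a threshold.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 1024, 1024)])[0]
flagged = (detections["scores"] > 0.5).any().item()
```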

Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas (재난지역에서의 신속한 건물 피해 정도 감지를 위한 딥러닝 모델의 정량 평가)

  • Ser, Junho;Yang, Byungyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.5
    • /
    • pp.381-391
    • /
    • 2022
  • This paper aims to identify which of several prevailing deep learning models, a type of AI (Artificial Intelligence), best supports the rapid detection of damaged buildings where disasters occur. The models selected are SSD-512, RetinaNet, and YOLOv3, which have been widely used for object detection in recent years. These models are based on one-stage detector networks suitable for rapid object detection; they are often used for general object detection because of their structural advantages and high speed, but not for damaged-building detection in disaster management. In this study, we first trained each of the algorithms on the xBD dataset, which provides post-disaster imagery with damage classification labels. Next, the three models were quantitatively evaluated with mAP (mean Average Precision) and FPS (Frames Per Second). The mAP of YOLOv3 was recorded at 34.39%, and its FPS reached 46. RetinaNet recorded an mAP of 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS is one-third that of YOLOv3. SSD-512 received significantly lower values than YOLOv3 on both quantitative indicators. In a disaster situation, a rapid and precise investigation of damaged buildings is essential for effective disaster response; accordingly, the results obtained through this study are expected to be effectively used for rapid response in disaster management.
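The sketch below shows, in simplified form, how the two quantitative indicators in this comparison can be computed: IoU-based greedy matching of detections to ground truth (the core of an mAP evaluation, reduced here to a single class and a single IoU threshold) and wall-clock FPS. The detector interface is hypothetical.

```python
# Simplified sketch of the two indicators used to compare the detectors:
# IoU-based matching for mAP-style evaluation, and wall-clock FPS.
import time

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

def match_detections(preds, gts, iou_thresh=0.5):
    """Greedy matching by score; returns true-positive and false-positive counts."""
    matched, tp = set(), 0
    for p in sorted(preds, key=lambda d: -d["score"]):
        best = max((g for g in range(len(gts)) if g not in matched),
                   key=lambda g: iou(p["box"], gts[g]), default=None)
        if best is not None and iou(p["box"], gts[best]) >= iou_thresh:
            matched.add(best)
            tp += 1
    return tp, len(preds) - tp

def fps(detector, images):
    """Frames per second of a callable detector over a list of images."""
    start = time.perf_counter()
    for img in images:
        detector(img)
    return len(images) / (time.perf_counter() - start)
```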