• Title/Summary/Keyword: Learning from Failure

Striving Towards a Holistic Innovation Policy in European Countries - But Linearity Still Prevails!

  • Edquist, Charles
    • STI Policy Review
    • /
    • v.5 no.2
    • /
    • pp.1-19
    • /
    • 2014
  • The concept of a holistic innovation policy is defined in this article, with discussions of what it is, why it is relevant, and how it can be implemented to enhance product innovation. It is shown that the innovation systems approach has diffused rapidly during recent decades and has completely replaced the linear view in the field of innovation research. The majority of European countries are striving to develop a more holistic innovation policy. However, it is concluded that the innovation policies of European countries are still predominantly linear, despite the fact that holistic policy seems to be the driving vision; innovation policy lags behind. Why innovation policy is still linear is also discussed in a preliminary way. Policymakers attending conferences on innovation are practically always in favor of holistic (systemic, broad-based, comprehensive, etc.) innovation policies, having abandoned the linear view by learning from innovation research. The division between "linear" and "holistic" therefore seems to be located within the community where innovation policies are designed and implemented, a community composed of policymakers (administrators/bureaucrats) and elected politicians. Perhaps the dividing line is between these two groups, in that politicians, who actually make the decisions, may still reflexively believe in the linear view. Nevertheless, there seems to be a failure in communication between researchers and politicians in the field of innovation, and there is therefore a strong need to involve innovation researchers in policy design and implementation to a much higher degree. Another way to increase the degree of holism could be to separate innovation policy from research policy, since their integration tends to cement the linear character of innovation policy. The empirical results are based on a questionnaire sent to twenty-three EU Member States, of which nineteen (83%) responded. 
Part of the work for this article was carried out for the European Research and Innovation Area Committee (ERAC) of the European Commission (DG RTD).

Novelty Detection on Web-server Log Dataset (웹서버 로그 데이터의 이상상태 탐지 기법)

  • Lee, Hwaseong;Kim, Ki Su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.10
    • /
    • pp.1311-1319
    • /
    • 2019
  • Currently, the web environment is commonly used for sharing information and conducting business, and it has become an attack surface for external hacking targeting personal information leakage or system failure. Conventional signature-based detection is used against cyber threats, but it has a limitation: it struggles to detect patterns that change, as in polymorphic attacks. In particular, injection attacks are known as among the most critical security risks arising from web vulnerabilities, and new variants are possible at any time. In this paper, we propose a novelty detection technique to detect abnormal states that deviate from the normal state in a web-server log dataset (WSLD). The proposed method is a machine learning-based technique that detects the small amount of anomalous data that tends to differ from the large amount of normal data, after replacing strings in the web-server log dataset with vectors using a machine learning-based embedding algorithm.
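The core idea of the abstract above (embed log lines as vectors, then flag lines that deviate from the mass of normal data) can be sketched minimally. The hashed-trigram vectorizer and distance-to-centroid threshold below are illustrative stand-ins for the paper's machine learning-based embedding and novelty detector, not its actual method:

```python
import math

DIM = 256  # fixed embedding dimension for the hashed trigram counts

def vectorize(line):
    """Hash character trigrams into a fixed-length count vector,
    then L2-normalize (a crude stand-in for a learned embedding)."""
    v = [0.0] * DIM
    for i in range(len(line) - 2):
        v[hash(line[i:i + 3]) % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit_centroid(normal_lines):
    """Learn the centroid of normal traffic and a 3-sigma distance threshold."""
    vecs = [vectorize(l) for l in normal_lines]
    centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
    dists = [distance(v, centroid) for v in vecs]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return centroid, mean + 3.0 * std

def is_novel(line, centroid, threshold):
    """Flag a log line whose embedding is far from the normal centroid."""
    return distance(vectorize(line), centroid) > threshold

normal = [f'GET /page{i % 5}.html HTTP/1.1 200' for i in range(100)]
centroid, thr = fit_centroid(normal)
attack = "GET /login.php?id=1' OR '1'='1 HTTP/1.1 200"
```

An injection-style request shares little trigram mass with normal requests, so its distance to the centroid exceeds the threshold learned from normal traffic alone.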

Trainees Can Safely Learn Video-Assisted Thoracic Surgery Lobectomy despite Limited Experience in Open Lobectomy

  • Yu, Woo Sik;Lee, Chang Young;Lee, Seokkee;Kim, Do Jung;Chung, Kyung Young
    • Journal of Chest Surgery
    • /
    • v.48 no.2
    • /
    • pp.105-111
    • /
    • 2015
  • Background: The aim of this study was to establish whether pulmonary lobectomy using video-assisted thoracic surgery (VATS) can be safely performed by trainees with limited experience with open lobectomy. Methods: Data were retrospectively collected from 251 patients who underwent VATS lobectomy at a single institution between October 2007 and April 2011. The surgical outcomes of the procedures that were performed by three trainee surgeons were compared to the outcomes of procedures performed by a surgeon who had performed more than 150 VATS lobectomies. The cumulative failure graph of each trainee was used for quality assessment and learning curve analysis. Results: The surgery time, estimated blood loss, final pathologic stage, thoracotomy conversion rate, chest tube duration, duration of hospital stay, complication rate, and mortality rate were comparable between the expert surgeon and each trainee. Cumulative failure graphs showed that the performance of each trainee was acceptable and that all trainees reached proficiency in performing VATS lobectomy after 40 cases. Conclusion: This study shows that trainees with limited experience with open lobectomy can safely learn to perform VATS lobectomy for the treatment of lung cancer under expert supervision without compromising outcomes.
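A cumulative failure graph of the kind used for the quality assessment above can be sketched as a running failure count compared against an acceptable-rate boundary; the boundary construction here is an assumption for illustration, not the study's exact method:

```python
def cumulative_failures(outcomes):
    """Running count of failures (1 = failure, 0 = success) after each
    consecutive case, i.e. the y-values of a cumulative failure graph."""
    total, curve = 0, []
    for failed in outcomes:
        total += failed
        curve.append(total)
    return curve

def within_boundary(curve, acceptable_rate):
    """True if the cumulative failure count never crosses the boundary
    line defined by an acceptable failure rate times the case number."""
    return all(f <= acceptable_rate * (i + 1) for i, f in enumerate(curve))

trainee = cumulative_failures([0, 1, 0, 0, 1, 0])  # outcomes per case, in order
```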

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only the stakeholders of bankrupt companies (managers, employees, creditors, and investors) but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even afterwards, the analysis of past corporate defaults remained focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables involved in corporate defaults vary over time: comparing Deakin's (1972) study with the analyses of Beaver (1967, 1968) and Altman (1968) shows that the major factors affecting corporate failure have changed. Grice's (2001) study likewise found, through the models of Zmijewski (1984) and Ohlson (1980), that the importance of predictive variables shifts over time. However, past studies use static models; most do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. 
In order to construct a bankruptcy model that is consistent over time, we first train the time series deep learning models on the data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithms is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained over these nine years are evaluated and compared on the test data (2009). In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multivariate discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of selected variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are then compared. Corporate data pose particular difficulties: nonlinear variables, multicollinearity among variables, and lack of data. 
The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, with its variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and offers stronger predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
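A deep learning time series model of the kind compared above can be illustrated with a minimal single-cell LSTM forward pass over a sequence of yearly financial-ratio vectors; the dimensions, weights, and random inputs below are purely illustrative, not the study's actual architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single LSTM cell plus a logistic output: reads a sequence of yearly
    financial-ratio vectors and emits a default probability."""
    def __init__(self, n_in, n_hid):
        # one stacked weight matrix for the input, forget, cell, output gates
        self.W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)
        self.w_out = rng.normal(0, 0.1, n_hid)
        self.n_hid = n_hid

    def forward(self, seq):
        h = np.zeros(self.n_hid)
        c = np.zeros(self.n_hid)
        for x in seq:                       # one step per fiscal year
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)      # cell state carries the history
            h = o * np.tanh(c)
        return sigmoid(self.w_out @ h)      # P(default) after the last year

model = TinyLSTM(n_in=5, n_hid=8)
seq = rng.normal(size=(7, 5))  # 7 years x 5 financial ratios (synthetic)
p = model.forward(seq)
```

The recurrence is what distinguishes this from the static models criticized in the abstract: the cell state lets earlier years' ratios influence the final default probability.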

Relationship between Conceptual Understanding and Mapping Errors Induced in Learning Chemistry Concept with Analogy (비유를 사용한 화학 개념 학습에서 유발되는 대응 오류와 개념 이해도의 관계)

  • No, Tae-Hui;Kim, Gyeong-Sun;Sin, Eun-Ju;Han, Jae-Yeong
    • Journal of the Korean Chemical Society
    • /
    • v.50 no.6
    • /
    • pp.486-493
    • /
    • 2006
  • This study investigated the relationship between conceptual understanding and the mapping errors induced in learning chemistry concepts with two analogies presented in current science textbooks. Each of two groups of 7th graders (N=260) in three middle schools studied with one of the analogies, and then a conception test and a mapping test were administered. Analyses of the results indicated that students' conceptual understanding has a significant relationship with their understanding of the mapping. The scores on the conception test and the mapping test for the unshared attributes were lower than those for the shared attributes. Five types of mapping errors were also identified: overmapping, mismapping, failure to map, rash mapping, and artificial mapping. Many representative misconceptions were found to be associated with these mapping errors. Educational implications are discussed.

Development of an Artificial Neural Network Expert System for Preliminary Design of Tunnel in Rock Masses (암반터널 예비설계를 위한 인공신경회로망 전문가 시스템의 개발)

  • 이철욱;문현구
    • Geotechnical Engineering
    • /
    • v.10 no.3
    • /
    • pp.79-96
    • /
    • 1994
  • A tunnel design expert system named NESTED is developed using artificial neural networks. The expert system includes three neural network computer models designed for the stability assessment of underground openings and for estimating the correlation between the RMR and Q rock mass classification systems. The system consists of these three models and computerized rock mass classification programs that can be run under the same user interface. As the network structure, a multi-layer neural network that adopts an error back-propagation learning algorithm is used. To set up its knowledge base from prior case histories, an engineering database is developed that can handle incomplete and erroneous information through the learning process. A series of experiments comparing the results of the neural network with actual field observations demonstrated the network's ability to identify possible failure modes and support timing. The neural network expert system thus compensates for incomplete geological data and provides suitable support recommendations for the preliminary design of tunnels in rock masses.
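The error back-propagation learning algorithm mentioned above can be sketched for a small multi-layer network; the toy inputs and target below are illustrative, not real RMR/Q case histories:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: map two rock-mass indices to a stability score in [0, 1]
# (the inputs and target function are invented for illustration).
X = rng.uniform(0, 1, (64, 2))
y = (0.6 * X[:, 0] + 0.4 * X[:, 1]).reshape(-1, 1)

# one hidden layer of 8 tanh units, one linear output
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: propagate the output error to each layer's weights
    d_out = 2 * (out - y) / len(X)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # tanh derivative
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The training loss falls steadily as the error signal is propagated backward and the weights are updated, which is the mechanism the expert system relies on to absorb incomplete or noisy case histories.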

Game Elements Balancing using Deep Learning in Artificial Neural Network (딥러닝이 적용된 게임 밸런스에 관한 연구 게임 기획 방법론의 관점으로)

  • Jeon, Joonhyun
    • Journal of the HCI Society of Korea
    • /
    • v.13 no.3
    • /
    • pp.65-73
    • /
    • 2018
  • Game balance settings are crucial to game design. Game balancing must take into account a large amount of numerical values, configuration data, and the relationships between elements. Once released and in service, a game, even a well-balanced one, often requires calibration according to players' preferences; to remain sustainable, its balance needs continual adjustment through small changes. From the producers' standpoint, game balance is a critical success factor in game production, so they often invest much time and capital in game design. However, if such a costly game cannot provide players with an appropriate level of difficulty, it is more likely to fail. Conversely, if the game can identify a player's propensity and perform self-balancing to provide an appropriate difficulty level, this will significantly reduce the likelihood of failure while increasing the lifecycle of the game. Accordingly, if a novel technology for game balancing is developed using artificial intelligence (AI) that offers personalized, intelligent, and customized service to individual players, it would bring significant changes to the game production system.

Automated detection of corrosion in used nuclear fuel dry storage canisters using residual neural networks

  • Papamarkou, Theodore;Guy, Hayley;Kroencke, Bryce;Miller, Jordan;Robinette, Preston;Schultz, Daniel;Hinkle, Jacob;Pullum, Laura;Schuman, Catherine;Renshaw, Jeremy;Chatzidakis, Stylianos
    • Nuclear Engineering and Technology
    • /
    • v.53 no.2
    • /
    • pp.657-665
    • /
    • 2021
  • Nondestructive evaluation methods play an important role in ensuring component integrity and safety in many industries, and operator fatigue can critically affect their reliability. This matters especially for inspecting high-value assets or assets with a high consequence of failure, such as aerospace and nuclear components. Recent advances in convolutional neural networks can support and automate these inspection efforts. This paper proposes using residual neural networks (ResNets) for real-time detection of corrosion, including iron oxide discoloration, pitting, and stress corrosion cracking, in dry storage stainless steel canisters housing used nuclear fuel. The proposed approach crops nuclear canister images into smaller tiles, trains a ResNet on these tiles, and classifies images as corroded or intact using the per-image count of tiles predicted as corroded by the ResNet. The results demonstrate that such a deep learning approach can locate corrosion via the smaller tiles while inferring with high accuracy whether an image comes from a corroded canister. The proposed approach therefore holds promise to automate and speed up nuclear fuel canister inspections, minimize inspection costs, and partially replace human-conducted onsite inspections, thus reducing radiation doses to personnel.
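The tile-and-count classification scheme described above can be sketched as follows; the brightness-threshold tile classifier is only a stand-in for the trained ResNet, which cannot be reproduced in a short snippet:

```python
import numpy as np

TILE = 8
TILE_THRESHOLD = 3   # image is "corroded" if at least this many tiles flag

def tiles(image, size=TILE):
    """Crop a 2-D grayscale image into non-overlapping size x size tiles."""
    h, w = image.shape
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield image[r:r + size, c:c + size]

def tile_is_corroded(tile):
    """Stand-in for the ResNet tile classifier: flags dark tiles.
    (The real model is a trained ResNet; this heuristic is illustrative.)"""
    return tile.mean() < 0.3

def classify_canister(image):
    """Per-image decision from the count of tiles predicted as corroded."""
    corroded_tiles = sum(tile_is_corroded(t) for t in tiles(image))
    return corroded_tiles >= TILE_THRESHOLD, corroded_tiles

clean = np.ones((32, 32))
corroded = np.ones((32, 32))
corroded[0:16, 0:16] = 0.0   # a dark patch covering four 8x8 tiles
```

Counting flagged tiles gives both the image-level verdict and the locus of corrosion, which is the dual benefit the abstract highlights.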

CNN-based Automatic Machine Fault Diagnosis Method Using Spectrogram Images (스펙트로그램 이미지를 이용한 CNN 기반 자동화 기계 고장 진단 기법)

  • Kang, Kyung-Won;Lee, Kyeong-Min
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.21 no.3
    • /
    • pp.121-126
    • /
    • 2020
  • Sound-based machine fault diagnosis is the automatic detection of abnormal sounds in the acoustic emission signals of machines. Conventional methods based on mathematical models have had difficulty diagnosing machine failure due to the complexity of industrial machinery and the existence of nonlinear factors such as noise. We therefore recast machine fault diagnosis as a deep learning-based image classification problem. In this paper, we propose a CNN-based automatic machine fault diagnosis method using spectrogram images. The proposed method uses the short-time Fourier transform (STFT) to effectively extract feature vectors from the frequencies generated by machine defects; the extracted features are converted into spectrogram images and classified by a CNN according to machine status. The results show that the proposed method can be used effectively not only to detect defects but also in various sound-based automatic diagnosis systems.
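The STFT-to-spectrogram step described above can be sketched with a plain NumPy short-time Fourier transform; the simulated machine sound and the window/hop parameters below are illustrative choices, not the paper's settings:

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Short-time Fourier transform magnitudes as a 2-D array
    (frames x frequency bins), suitable as an image input to a CNN."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # log scale compresses the dynamic range, as is usual for spectrogram images
    return np.log1p(np.array(frames))

# Simulated machine sound: a 50 Hz hum plus a 2 kHz "defect" tone
sr = 8000
t = np.arange(sr) / sr
sound = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
spec = spectrogram(sound)
```

Each row of `spec` is one time frame and each column one frequency bin, so a defect that excites a particular frequency appears as a bright vertical stripe the CNN can learn to recognize.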

Structural health monitoring data anomaly detection by transformer enhanced densely connected neural networks

  • Jun, Li;Wupeng, Chen;Gao, Fan
    • Smart Structures and Systems
    • /
    • v.30 no.6
    • /
    • pp.613-626
    • /
    • 2022
  • Guaranteeing the quality and integrity of structural health monitoring (SHM) data is very important for effective assessment of structural condition. However, sensory systems may malfunction due to sensor faults or harsh operational environments, resulting in multiple types of anomalies in the measured data. Efficiently and automatically identifying anomalies in the vast amounts of measured data is significant for assessing structural conditions and providing early warning of structural failure in SHM. A major challenge for current automated data anomaly detection methods is the imbalance of dataset categories. Considering the characteristics of actual anomalous data, this paper proposes a data anomaly detection method based on a data-level technique and deep learning for the SHM of civil engineering structures. The proposed method consists of a data balancing phase, which prepares a comprehensive training dataset using a data-level technique, and an anomaly detection phase based on a carefully designed network. A densely connected convolutional network (DenseNet) and a Transformer encoder are embedded in this network to facilitate the extraction of both detailed and global features of the response data and to establish the mapping from the highest-level abstract features to the data anomaly classes. Numerical studies on a steel frame model are conducted to evaluate the performance and noise immunity of the proposed network for data anomaly detection, and its applicability to data anomaly classification is validated with measured data from a practical supertall structure. The proposed method performs remarkably well, reaching 95.7% overall accuracy on practical engineering structural monitoring data, which demonstrates the effectiveness of the data balancing and the robust classification capability of the proposed network.
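The data balancing phase described above can be sketched as simple random oversampling of the minority anomaly classes; the paper's actual data-level technique may differ, so this is only an assumption-laden illustration of the idea:

```python
import random

random.seed(0)

def balance_by_oversampling(samples, labels):
    """Randomly duplicate minority-class samples until every anomaly
    class matches the size of the largest class."""
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for l, group in by_class.items():
        # pad each class up to the target size by sampling with replacement
        out_samples += group + random.choices(group, k=target - len(group))
        out_labels += [l] * target
    return out_samples, out_labels

data = [[i] for i in range(10)]                       # stand-in response segments
labels = ['normal'] * 8 + ['drift'] * 1 + ['spike'] * 1
X, y = balance_by_oversampling(data, labels)
```

After balancing, every anomaly class contributes equally to training, which is what lets the downstream classifier avoid collapsing onto the dominant "normal" class.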