• Title/Summary/Keyword: Machine tool industry

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning software as open source, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, while users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed from various angles, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive adoption strategies through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors concerning the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by running developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming the methodology, and confirming the tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (using the framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Likewise, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results. In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to searching for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as our optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e. the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training and the remaining 20 percent for validation, and to overcome the small sample size we applied five-fold cross-validation to the dataset. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, NeuroShell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competing models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
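The core of GAMSVM is a chromosome that jointly encodes a binary feature mask and the RBF kernel parameters, scored by cross-validated accuracy. Below is a minimal sketch of that idea, with scikit-learn's one-against-one SVC standing in for LIBSVM; the encoding, operator choices, and hyperparameters are illustrative assumptions, not the paper's exact setup (which used LIBSVM and Evolver 5.5).

```python
# Sketch: GA jointly searching a feature mask and RBF-SVM (C, gamma).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def fitness(chrom, X, y):
    """Five-fold CV accuracy of an OVO RBF-SVM on the selected features."""
    mask = chrom[:-2].astype(bool)
    if not mask.any():
        return 0.0
    C, gamma = 10 ** chrom[-2], 10 ** chrom[-1]  # genes hold log10(C), log10(gamma)
    clf = SVC(C=C, gamma=gamma, kernel="rbf", decision_function_shape="ovo")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def evolve(X, y, pop_size=20, generations=10, p_mut=0.1):
    n = X.shape[1]  # pop_size must be even for the pairwise crossover below
    # Chromosome: n binary feature genes + log10(C) in [-1, 3] + log10(gamma) in [-4, 1]
    pop = np.hstack([rng.integers(0, 2, (pop_size, n)).astype(float),
                     rng.uniform(-1, 3, (pop_size, 1)),
                     rng.uniform(-4, 1, (pop_size, 1))])
    for _ in range(generations):
        fit = np.array([fitness(c, X, y) for c in pop])
        # Tournament selection: the fitter of two random individuals survives
        idx = [max(rng.choice(pop_size, 2), key=lambda i: fit[i]) for _ in range(pop_size)]
        parents = pop[idx]
        # One-point crossover between consecutive pairs
        cut = int(rng.integers(1, n + 1))
        children = parents.copy()
        children[::2, cut:], children[1::2, cut:] = parents[1::2, cut:].copy(), parents[::2, cut:].copy()
        # Mutation: bit-flips on the mask, Gaussian jitter on the parameter genes
        flip = rng.random((pop_size, n)) < p_mut
        children[:, :n] = np.where(flip, 1 - children[:, :n], children[:, :n])
        children[:, -2:] += (rng.random((pop_size, 2)) < p_mut) * rng.normal(0, 0.2, (pop_size, 2))
        pop = children
    fit = np.array([fitness(c, X, y) for c in pop])
    return pop[fit.argmax()], float(fit.max())

# Demo on synthetic data standing in for the 14 financial ratios
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=200, n_features=14, n_informative=6,
                           n_classes=4, random_state=0)
best, acc = evolve(X, y)
print("selected features:", np.flatnonzero(best[:-2]), "CV accuracy:", round(acc, 3))
```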

Evaluation of Technology Activity, Innovation and Productivity using Korean Patent Information (한국특허정보를 통한 기술활동성, 혁신성 및 생산성 평가)

  • Yun, In-Sik;Kim, Seok-Jin;Jeong, Eui-Seob
    • Journal of Information Management / v.42 no.2 / pp.151-165 / 2011
  • Patent information, as an innovation index for the activity of industry, science, and technology, reflects the inventive output of a nation, region, technology, or company, and can be used as a tool for evaluating R&D output and technology diffusion. In this study, indices for analyzing the productivity, innovation, and activity of a technology are provided to evaluate technologies in the fields of pharmaceuticals, transport, biotechnology, textiles, construction, machine parts, information media, and electricity/telecommunications, which are becoming national core technologies. The analysis shows that technology activity in the construction, pharmaceutical, and biotechnology fields has a growing trend, reflecting interest in quality of life and life extension, whereas the textile and information media fields show the opposite. The innovation index in the construction, pharmaceutical, and biotechnology fields is above average, whereas the information media and electricity/telecommunications fields are below average. In the case of technology productivity, more than two patentees are involved in each patented technology on average. Technology productivity has decreased because of the increasing number of researchers participating in technology development, which is a recent trend in technology advancement.
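The abstract does not give the exact index formulas, so the sketch below only illustrates the general idea under assumed definitions: activity as year-over-year growth in filings, and productivity as patents per participating patentee, computed from hypothetical records.

```python
# Illustrative patent-based indicators under assumed definitions.
from collections import defaultdict

# Hypothetical records: (field, year, number_of_patentees)
patents = [
    ("construction", 2009, 2), ("construction", 2010, 3),
    ("biotechnology", 2009, 1), ("biotechnology", 2010, 4),
    ("textile", 2009, 2), ("textile", 2010, 2),
]

counts = defaultdict(int)   # filings per (field, year)
people = defaultdict(int)   # total patentees per field
for field, year, n_patentees in patents:
    counts[(field, year)] += 1
    people[field] += n_patentees

for field in sorted({f for f, _ in counts}):
    growth = counts[(field, 2010)] / max(counts[(field, 2009)], 1)  # activity proxy
    productivity = sum(c for (f, _), c in counts.items() if f == field) / people[field]
    print(f"{field:14s} activity={growth:.2f}  patents/patentee={productivity:.2f}")
```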

An Optimized V&V Methodology to Improve Quality for Safety-Critical Software of Nuclear Power Plant (원전 안전-필수 소프트웨어의 품질향상을 위한 최적화된 확인 및 검증 방안)

  • Koo, Seo-Ryong;Yoo, Yeong-Jae
    • Journal of the Korea Society for Simulation / v.24 no.4 / pp.1-9 / 2015
  • As software is used ever more widely in safety-critical nuclear fields, studies to improve the safety and quality of such software have been actively carried out for more than the past decade. In a nuclear power plant, the man-machine interface system (MMIS) performs the function of the human brain and nervous system and consists of fully digitalized equipment. Errors in the MMIS software may therefore cause abnormal operation of the nuclear power plant and can result in economic loss due to a consequential plant trip. Verification and validation (V&V) is a software engineering discipline that helps build quality into software, and the nuclear industry is required by laws and regulations to implement and adhere to thorough V&V activities along the software lifecycle. V&V is a collection of analysis and testing activities across the full lifecycle and complements the efforts of other quality engineering functions. This study proposes a methodology based on V&V activities and a related tool-chain to improve the quality of software in nuclear power plants. The optimized methodology consists of document evaluation, requirement traceability, source code review, and software testing. The proposed methodology has been applied to, and approved in, the real MMIS project for Shin-Hanul units 1 and 2.
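Of the four activities in the methodology, requirement traceability is the most mechanical, so a minimal sketch of it follows. The record names (SRS/SDD/TC) and data layout are hypothetical illustrations; the paper does not describe its tool interfaces.

```python
# Sketch: flag requirements lacking downstream design or test coverage.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    design_refs: list = field(default_factory=list)  # design elements covering it
    test_refs: list = field(default_factory=list)    # test cases verifying it

def trace_gaps(requirements):
    """Return IDs of requirements without both design and test coverage."""
    return [r.req_id for r in requirements if not (r.design_refs and r.test_refs)]

reqs = [
    Requirement("SRS-001", "Trip signal within 100 ms", ["SDD-12"], ["TC-101"]),
    Requirement("SRS-002", "Operator alarm on sensor fault", ["SDD-14"], []),
]
print(trace_gaps(reqs))  # ['SRS-002'] -> untested requirement flagged for V&V
```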

Comparison of Bentgrass Recovery Speed on Golf Green Followed by Methods of Ball Mark Repair Practise (골프장 그린의 볼마크 수리방법에 따른 벤트그래스의 회복속도 비교)

  • Park, Jong-Hwa;Lee, Jae-Phil;Kim, Doo-Hwan;Joo, Young-Kyoo
    • Asian Journal of Turfgrass Science / v.24 no.2 / pp.211-217 / 2010
  • This study was conducted to investigate a proper method of ball mark repair by comparing the recovery speed of creeping bentgrass on golf course greens treated with various ball mark repair methods. Nine general repair methods were tested and compared: a control (no repair, A type), the two common methods of the USGA (B type) and GCSAA (C type), three methods with a fork-shaped hand set as practiced at Korean golf courses (Ansung Benest, D; Sky72, E; Lakeside, F type), and three methods using a repair machine with 6, 8, or 14 teeth (G, H, I type, respectively). Three creeping bentgrass cultivars, 'Penncross', 'T-1', and 'CY-2', were tested in this field experiment, which was carried out from September to November 2009 at the nursery of the Seoul Lakeside Golf course. The average speed of turfgrass recovery after the various ball mark repair methods ranked in the order E, D, C, B, F, I, H, G, and A. The hand methods were more effective than the machine-based methods. The ball mark recovery speeds for 'Penncross' were in the order E, D, B, C, F, I, H, and A. For 'T-1' and 'CY-2', the orders were similar: D, E, B, F, C, H, I, A, G and D, E, C, F, B, H, G, I, A, respectively. The ball mark recovery speed among the creeping bentgrass cultivars was in the order 'CY-2', 'Penncross', and 'T-1'. The most suitable ball mark repair method used a hand-set tool, in particular the method of the Sky72 Golf course (E type): first, remove the damaged grass with the fork and tamp the area; then gather the surrounding grass into the center by pulling it with the fork; after that, harden and flatten the turf surface by pounding and rolling it with a round wooden stick; as the final step, water the repaired surface. This repair practice produced the most rapid and proper recovery on creeping bentgrass greens.

Correlation Analysis of Inspection Results and ATP Bioluminescence Assay for Verification of Hygiene Status at 5 Star Hotels in Korea (국내 주요 5성급 호텔의 위생실태 조사와 ATP 결과의 상관분석 평가 연구)

  • Kim, Bo-Ram;Lee, Jung-A;Ha, Sang-Do
    • Journal of Food Hygiene and Safety / v.36 no.1 / pp.42-50 / 2021
  • Along with the rapid growth of the food service industry, food safety requirements and hygiene are increasing in importance in restaurants and hotels. Accordingly, there is a need for quick and practical monitoring techniques to determine hygiene status in the field. In this study, we investigated five domestic 5-star hotels, specifically personal hygiene (workers' hands), cooking utensils (knife, cutting board, food storage container, slicing machine blade, ice-maker scoop), and other facilities (refrigerator handle, sink). In addition, we examined the hygiene management status of customer contact points (buffet tongs, etc.) to derive the correlation between inspection results and ATP values as a verification method. In our five-hotel survey, we found that cooking utensils and personal hygiene were relatively sanitary compared with the other inspection items (cookware 92.2%, personal hygiene 91.4%, facilities and equipment 76.19%, customer contact items 88.6%). According to our ATP-based method, kitchen utensils (51 ± 45 RLU/25㎠) were relatively clean compared with facilities and equipment (167 ± 123 RLU/25㎠). We also evaluated the usefulness of the ATP bioluminescence method for monitoring surface hygiene at hotel restaurants. Correlation analysis between the hygiene status inspections and the ATP assay showed strong negative correlations for most items (-0.64 to -0.89). The ATP values of each item after cleaning (92 ± 67 RLU/25㎠) were significantly lower than those in normal status (1,020 ± 1,254 RLU/25㎠), indicating the assay's suitability as a tool to verify the effectiveness of cleaning. Based on these results, ATP bioluminescence can be used as an effective tool for the numerical evaluation of invisible contaminants.
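For the correlation step, a standard Pearson analysis of checklist scores against ATP readings is enough to reproduce the kind of negative coefficients reported above. The sketch below uses made-up numbers for illustration; scipy's pearsonr is one common choice, not necessarily the authors' tool.

```python
# Sketch: correlate hygiene checklist scores with ATP readings per sampling point.
from scipy.stats import pearsonr

inspection_score = [95, 90, 85, 70, 60, 88]  # hygiene checklist score (%), made up
atp_rlu = [40, 55, 120, 310, 540, 75]        # ATP reading (RLU/25 cm^2), made up

r, p = pearsonr(inspection_score, atp_rlu)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # cleaner sites -> lower RLU, so r < 0
```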

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors; as a result, the method is known to perform excellently in capturing both community structure and structural equivalence. The embedding is built from fixed-length random walks starting from each designated node, so the resulting node vectors can easily be used as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applies it to international trade information for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification for Korea in the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This was superior to the binary classifier based on logistic regression set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model outperformed the link prediction model of previous studies, set as the benchmark in this study: the benchmark recorded a recall of only 0.75, while the proposed model achieved a recall of 0.79. The difference in performance between the benchmark and our model is due to the model training strategy. In this study, trades were grouped by trade value scale, and prediction models were trained differently for these groups. The specific methods were (1) randomly masking some trades and training the model without any condition on trade value, (2) randomly masking some of the trades with above-average trade value and training the model, and (3) randomly masking some of the trades in the top 25% of trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the above-average-value trades performed best and most stably. Additional investigation found that most of the potential export candidate countries for Korea derived through this model were appropriate. Taken together, this study demonstrates the practical utility of link prediction using Node2vec and LightGBM, and derives useful implications for weight update strategies that can yield better link prediction while training the model.
This study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, a setting in which it has rarely been used. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach has sufficient usefulness as a tool for policy decision-making.
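A compact sketch of the pipeline follows: embed the (partially masked) trade network with Node2vec, form edge features as the Hadamard product of endpoint embeddings, and train a LightGBM binary classifier to recover the masked links. The toy graph, masking ratio, and all hyperparameters are illustrative assumptions using the open-source node2vec and lightgbm packages, not the paper's actual data or settings.

```python
# Sketch: Node2vec embedding + LightGBM link prediction with masked edges.
import networkx as nx
import numpy as np
import lightgbm as lgb
from node2vec import Node2Vec
from sklearn.model_selection import train_test_split

# Toy random graph standing in for the country-level trade network
G = nx.gnm_random_graph(60, 240, seed=1)
rng = np.random.default_rng(1)

# Mask some edges: the removed links are the positives to be predicted
edges = list(G.edges())
masked = [edges[i] for i in rng.choice(len(edges), 40, replace=False)]
G.remove_edges_from(masked)

# Embed the remaining network with Node2vec (gensim Word2Vec under the hood)
n2v = Node2Vec(G, dimensions=32, walk_length=20, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)
emb = {n: model.wv[str(n)] for n in G.nodes()}

def edge_feat(u, v):
    return emb[u] * emb[v]  # Hadamard product, a common edge operator

# Negatives: node pairs that are neither current edges nor masked positives
masked_set = {frozenset(e) for e in masked}
non_edges = [e for e in nx.non_edges(G) if frozenset(e) not in masked_set]
neg = [non_edges[i] for i in rng.choice(len(non_edges), len(masked), replace=False)]

X = np.array([edge_feat(u, v) for u, v in masked + neg])
y = np.array([1] * len(masked) + [0] * len(neg))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```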

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference / 2000.06a / pp.16-23 / 2000
  • The increasing functional needs of top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, have set new targets and requirements for modern paper quality. Most of these paper grades today have relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e. small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter in papermaking due to its influence on practically all other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because relatively large local variations always exist in paper. However, these small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of paper does not give a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This makes it possible to measure, compare, and judge papers made of different raw materials or colors, and even to measure heavily calendered, coated, or printed papers. Scientific research in paper physics has shown that the orientation of the top-layer (paper surface) fibers of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, strongly resist liquid and gas penetration, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact with the measurement head a challenge of its own.
Paper surface coating creates, as expected, a layer with completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing us to investigate the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and in the low-permeability range to analyze even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even to detect defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments that provide the information needed by automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
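Since formation is defined above as the small-scale variation of basis weight, one common way to summarize it numerically is the coefficient of variation over a millimeter-resolution basis weight map. The sketch below is a toy illustration with a synthetic map; this particular index and the numbers are assumptions, not taken from the paper.

```python
# Toy illustration: formation summarized as the coefficient of variation (CV)
# of a millimeter-scale basis weight map; lower CV means more uniform paper.
import numpy as np

rng = np.random.default_rng(0)
# Simulated basis weight map (g/m^2), 1 mm resolution over a 100 x 100 mm sheet
basis_weight = 80 + rng.normal(0, 3, (100, 100))

cv_percent = 100 * basis_weight.std() / basis_weight.mean()
print(f"formation index (CV of basis weight): {cv_percent:.1f}%")
```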