• Title/Summary/Keyword: artificial intelligence techniques


Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커넥트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society / v.31 no.2 / pp.28-35 / 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, necessitating non-destructive diagnostic techniques for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that visualizes the magnitude and phase of complex impedance together. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. As corrosion progressed, the S-parameters and Smith charts showed distinct patterns corresponding to five levels of corrosion, confirming the effectiveness of the Smith chart as a corrosion assessment tool. Furthermore, by employing data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of copper interconnects from an input Smith chart. Among CNN and Transformer models specialized for image classification, the ConvNeXt model achieved the highest diagnostic performance, with an accuracy of 89.4%. Diagnosing corrosion with the Smith chart enables non-destructive evaluation using electrical signals; moreover, by integrating and visualizing signal magnitude and phase information, the approach is expected to support intuitive and noise-robust diagnosis.
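The Smith chart representation used above can be sketched in a few lines. A measured complex impedance is mapped to a single chart point via the reflection coefficient, which encodes magnitude and phase in one 2-D location; that is what makes the chart usable as an image input for a classifier. This is an illustrative sketch with hypothetical impedance values, not the paper's measurement pipeline.

```python
# Illustrative sketch (hypothetical values, not the paper's data):
# a complex impedance Z maps to a Smith chart point via the
# reflection coefficient  Gamma = (Z - Z0) / (Z + Z0).
import cmath

def smith_point(z: complex, z0: float = 50.0) -> complex:
    """Reflection coefficient of impedance z w.r.t. reference z0 (ohms)."""
    return (z - z0) / (z + z0)

# A matched 50-ohm line maps to the chart centre (Gamma = 0).
print(smith_point(50 + 0j))
# A shifted, lossier line (hypothetical corroded state):
gamma = smith_point(65 + 20j)
print(abs(gamma), cmath.phase(gamma))  # magnitude and phase together
```

Because the chart fuses magnitude and phase into one point, a trajectory of such points across frequency forms the pattern the classifier sees.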

Research on Computer-Based Convergence Performing Arts - The Impact of Digital Technology on the Performing Arts-

  • Jin-hee Gong
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.99-107 / 2024
  • This study analyzed how computer-based digital technology affects convergence performing arts, following the historical trends of the domestic performing arts. Based on this analysis, the study proposes appropriate ways to use technology in the performing arts and directions for the future development of convergence performing arts. The analysis identified three stages. In the first stage, video technology used on the performing arts stage evolved into holograms, media art, and 3D techniques. In the second stage, technology and art were fused using artificial intelligence and robots: artificial intelligence composed music, choreographed dance, and wrote play scripts, while robots performed and played alongside humans on stage. Third, virtual space was also used in performing arts, making it possible to stage productions in varied virtual settings rather than conventional performance halls and stage spaces. Performing arts using digital technology will thus become more diverse and professional, and boundary-crossing ideas once possible only in imagination will be realized. This study proposes a convergence that appropriately utilizes digital and computer technologies while preserving the domain of human creativity and the expressiveness and artistry humans bring. In preparation for these changes, future convergence performing artists should combine the artistry and technique of stage-technology experts who can use digital tools, professional actors who can express artistry alongside AI, and professionals who can create art by operating AI.

A Condition Rating Method of Bridges using an Artificial Neural Network Model (인공신경망모델을 이용한 교량의 상태평가)

  • Oh, Soon-Taek;Lee, Dong-Jun;Lee, Jae-Ho
    • Journal of the Korean Society for Railway / v.13 no.1 / pp.71-77 / 2010
  • The cost of bridge Maintenance, Repair & Rehabilitation (MR&R) in developed countries is increasing annually. Bridge Management Systems (BMS) based on intelligent technology have been developed to optimize Life Cycle Cost (LCC) and to reliably predict long-term bridge deterioration. However, historical condition data are very limited among all known bridge agencies, making it difficult to reliably predict future structural performance. To alleviate this problem, an Artificial Neural Network (ANN) based Backward Prediction Model (BPM) for generating missing historical condition ratings has been developed, and its reliability has been verified using existing condition ratings from the Maryland Department of Transportation, USA. The BPM establishes correlations between the known condition ratings and non-bridge factors such as climate and traffic volumes, which can then be used to obtain bridge condition ratings for the missing years. Since the non-bridge factors used in the BPM influence the variation of the condition ratings, well-selected non-bridge factors are critical for the BPM to function effectively, as reflected in the low discrepancy rates between BPM predictions and existing data (deck: 6.68%; superstructure: 6.61%; substructure: 7.52%). This research generates usable historical data using artificial intelligence techniques to reliably predict future bridge deterioration. The outcome (long-term bridge deterioration prediction) will help bridge authorities plan maintenance strategies that obtain the maximum benefit from limited funds.
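The backward-prediction idea can be sketched as follows: learn the relation between known condition ratings and a non-bridge factor, then use it to estimate a rating for a year with no inspection record. The paper's BPM uses an ANN; here an ordinary least-squares line stands in for it, and the traffic and rating numbers are invented for illustration.

```python
# Simplified stand-in for the BPM (assumed toy data): fit the relation
# between known ratings and one non-bridge factor, then backfill a
# missing year. The real model is an ANN over several such factors.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # slope, intercept

# Inspected years: (traffic volume in millions, condition rating 0-9)
traffic = [1.0, 2.0, 3.0, 5.0]
rating  = [8.8, 8.4, 8.0, 7.2]
a, b = fit_line(traffic, rating)

# Backfill the missing year whose traffic volume was 4.0 million.
print(a * 4.0 + b)  # estimated historical rating
```

With several well-chosen non-bridge factors this becomes a multivariate regression, which is exactly where the ANN earns its place over the straight line.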

Technology Analysis on Automatic Detection and Defense of SW Vulnerabilities (SW 보안 취약점 자동 탐색 및 대응 기술 분석)

  • Oh, Sang-Hwan;Kim, Tae-Eun;Kim, HwanKuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.11 / pp.94-103 / 2017
  • As automatic hacking tools and techniques have improved, the number of new vulnerabilities has increased. The CVEs registered from 2010 to 2015 numbered about 80,000, and even more vulnerabilities are expected to be reported. In most cases, patching a vulnerability depends on the developers' capability, and most patching is based on manual analysis, which takes nine months on average: finding the vulnerability, analyzing it at the source-code level, and writing new code for the patch. This long gap between first discovery and remediation is what makes zero-day vulnerabilities so critical. To address the problem, techniques for automatically detecting and analyzing software (SW) vulnerabilities have recently been proposed. The Cyber Grand Challenge (CGC), held in 2016, was the first competition to build automatic defensive systems capable of reasoning over flaws in binaries and formulating patches without direct analysis by experts. Darktrace and Cylance are similar projects for managing SW automatically with artificial intelligence and machine learning. Although many foreign commercial institutions and academic groups run projects for automatic binary analysis, the domestic level of technology lags far behind. This paper studies the development of automatic detection of SW vulnerabilities and defenses against them; we analyze and compare related works and tools and suggest optimal techniques for automatic analysis.

Implementing RPA for Digital to Intelligent(D2I) (디지털에서 인텔리전트(D2I)달성을 위한 RPA의 구현)

  • Dong-Jin Choi
    • Information Systems Review / v.21 no.4 / pp.143-156 / 2019
  • Types of innovation can be categorized into simplification, informatization, automation, and intelligence. Intelligence is the highest level of innovation, and RPA can be seen as one form of it. Robotic Process Automation (RPA), a software robot with artificial intelligence, is an example of intelligence suited to simple, repetitive, large-volume transaction-processing tasks. RPA, already in operation at many companies in Korea, was adopted naturally out of the need to let people concentrate on core tasks, at a time when organizations increasingly emphasize voluntary leadership, strong teamwork and execution, and a professional working culture. RPA is a technology that replaces human tasks with the goal of handling structured work quickly and efficiently. It is implemented through software robots that mimic humans using software such as ERP systems or productivity tools. RPA robots are software installed on a computer; they are called robots because of the way they operate. Unlike traditional software, which communicates with other IT systems through the back end, RPA is integrated with IT systems through the front end. In practice, this means that software robots use IT systems the same way humans do: they repeat the correct steps and respond to events on the computer screen, instead of communicating with the system's application programming interface (API). Designing software that mimics humans in order to communicate with other software may seem less intuitive, but the approach has many advantages. First, RPA can be integrated with virtually any software in use, regardless of its openness to third-party applications. Many enterprise IT systems are proprietary, have few common APIs, and are severely limited in their ability to communicate with other systems; RPA solves this problem. Second, RPA can be implemented in a very short time. Traditional software development methods, such as enterprise software integration, are relatively time consuming, but RPA can be deployed in a relatively short period of two to four weeks. Third, automated processes run by software robots can easily be modified by system users. Whereas traditional approaches require advanced coding to substantially change how a system works, RPA can be reconfigured by modifying relatively simple logical statements, screen captures, or graphical charts of the human-run process. This makes RPA very versatile and flexible. RPA is thus a good example of the move from digital to intelligent (D2I).

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.131-154 / 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics in massive text data. In its early stages, most topic modeling studies focused only on topic discovery; as the field matured, studies began to examine how topics change over time, and interest in dynamic topic modeling, which handles changes in the keywords constituting each topic, is growing accordingly. Dynamic topic modeling identifies major topics in the data of an initial period and manages the change and flow of topics by using topic information from the previous period to derive the topics of subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to show how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships between topics. The detailed procedure is as follows. In the first step, dynamic topic modeling is applied to the text data to derive the top keywords of each period and their weights. In the second step, vectors for the top keywords of each topic are obtained from a pre-trained word embedding model and reduced in dimension; a semantic vector for each topic is then formulated as the weighted sum of the keyword vectors, using each keyword's topic weight. In the third step, the semantic vector of each topic is visualized using matplotlib, and the relationships between topics are analyzed from the visualized result. The change of a topic is interpreted by identifying, for each period, the top 5 rising and top 5 descending keywords. Many existing topic visualization studies visualize the keywords of each topic; the approach proposed here differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and utilized a Word2vec embedding model pre-trained on Wikipedia, an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. These experiments confirmed that the rise and fall of a keyword's topic weight can be used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results. In addition, an academic contribution can be acknowledged in that the study lays the foundation for follow-up work that improves the proposed methodology with various word embeddings and dimensionality reduction techniques.
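The second step above, forming a topic's semantic vector as the weighted sum of its keywords' embedding vectors, can be sketched directly. The 2-D embeddings, keyword names, and weights below are invented toy values; real inputs would come from a dynamic topic model and a pre-trained Word2vec model.

```python
# Sketch of the semantic-vector step (toy numbers, hypothetical words):
# topic vector = sum over keywords of (topic weight * embedding vector).
def topic_vector(keyword_weights, embeddings):
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    for word, weight in keyword_weights.items():
        for i, v in enumerate(embeddings[word]):
            vec[i] += weight * v
    return vec

# Hypothetical 2-D embeddings (i.e. after dimension reduction).
emb = {"network": [0.9, 0.1], "learning": [0.8, 0.3], "image": [0.2, 0.9]}
topic = {"network": 0.5, "learning": 0.3, "image": 0.2}  # topic weights
print(topic_vector(topic, emb))  # one plottable point per topic
```

Plotting one such point per topic per period is what turns keyword-rank tables into a trajectory whose drift can be read as semantic change.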

Empirical Research on Search model of Web Service Repository (웹서비스 저장소의 검색기법에 관한 실증적 연구)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.173-193 / 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
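The clustering idea behind the discovery tool can be illustrated with a tiny self-organizing map, one simple kind of unsupervised artificial neural network. This is not the paper's system: the "service description" term-count vectors, vocabulary, and map size below are all hypothetical.

```python
# Illustrative sketch (hypothetical data, not the paper's tool): group
# toy service-description term vectors with a minimal self-organizing
# map, a simple unsupervised artificial neural network.
import random

def bmu(nodes, x):
    """Index of the best matching unit (closest node) for input x."""
    return min(range(len(nodes)),
               key=lambda i: sum((w - xi) ** 2 for w, xi in zip(nodes[i], x)))

def train_som(data, n_nodes=2, epochs=50, lr=0.5, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            b = bmu(nodes, x)
            nodes[b] = [w + lr * (xi - w) for w, xi in zip(nodes[b], x)]
        lr *= 0.95  # decay the learning rate each epoch
    return nodes

# Term counts over (weather, forecast, payment, invoice) for 4 services.
services = [[3, 2, 0, 0], [4, 1, 0, 0], [0, 0, 3, 2], [0, 0, 2, 4]]
nodes = train_som(services)
print([bmu(nodes, s) for s in services])  # weather vs. billing groups
```

Grouping similar WSDL term vectors this way is what lets a repository return a whole cluster of related services for one query.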

A Study on the Improvement of Injection Molding Process Using CAE and Decision-tree (CAE와 Decision-tree를 이용한 사출성형 공정개선에 관한 연구)

  • Hwang, Soonhwan;Han, Seong-Ryeol;Lee, Hoojin
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.580-586 / 2021
  • The CAE methodology is a numerical analysis technique based on simulation. Recently, methodologies that apply artificial intelligence techniques to such simulations have been studied. A previous study compared deformation results across injection molding process conditions using a machine learning technique; although the MLP showed excellent prediction performance, it lacks an explanation of its decision process and behaves like a black box. In this study, data were generated using Autodesk Moldflow 2018, an injection molding analysis software package. Several machine learning models were developed using RapidMiner version 9.5, a machine learning platform, and their root mean square errors (RMSE) were compared. The decision tree showed better prediction performance, in terms of RMSE, than the other machine learning techniques. The number of classification criteria can be increased through the maximal depth, which determines the size of the decision tree, but complexity increases with it. By selecting an intermediate value that satisfies the constraint based on the changed position, the simulation showed a 7.7% improvement over the previous simulation.
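The model comparison above rests on the root mean square error. A minimal sketch with invented deformation numbers shows the criterion; the actual values came from the Moldflow simulations and RapidMiner models.

```python
# Small sketch (hypothetical numbers): comparing models by root mean
# square error, the criterion used to select the decision tree.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

measured_deformation = [1.20, 1.35, 1.10, 1.50]  # e.g. mm, illustrative
mlp_pred  = [1.25, 1.30, 1.20, 1.40]
tree_pred = [1.21, 1.34, 1.12, 1.48]

print(rmse(measured_deformation, mlp_pred))
print(rmse(measured_deformation, tree_pred))  # lower RMSE wins
```

The same metric also guards the maximal-depth trade-off: a deeper tree can lower training RMSE while its complexity grows.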

Probability Map of Migratory Bird Habitat for Rational Management of Conservation Areas - Focusing on Busan Eco Delta City (EDC) - (보존지역의 합리적 관리를 위한 철새 서식 확률지도 구축 - 부산 Eco Delta City (EDC)를 중심으로 -)

  • Kim, Geun Han;Kong, Seok Jun;Kim, Hee Nyun;Koo, Kyung Ah
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.6 / pp.67-84 / 2023
  • In some areas of the Republic of Korea, the designation and management of conservation areas do not adequately reflect regional characteristics and often impose behavioral regulations without considering the local context; the Busan EDC area is one prominent example. As a result, conflicts may arise over the conservation and utilization of these areas, including large-scale civil complaints. For the efficient designation and management of protected areas, it is therefore necessary to consider various ecosystem factors, changes in land use, and regional characteristics. In this study, we focused on the Busan EDC area, applied machine learning techniques to analyze the habitat of regional species, and employed Explainable Artificial Intelligence (XAI) techniques to interpret the results. To analyze the regional characteristics of the waterfront area in the Busan EDC district and the habitat of migratory birds, we used bird observations, distinguished into presence and absence, as the dependent variable; the independent variables were constructed from land cover, elevation, slope, bridge, and river depth data. We utilized the XGBoost (eXtreme Gradient Boosting) model, known for its excellent performance in various fields, to predict the habitat probabilities of 11 bird species, and employed the SHapley Additive exPlanations (SHAP) technique, one of the representative XAI methodologies, to analyze the relative importance and impact of the variables used in the model. The analysis showed that in the EDC business district, based on the overlapping habitat probabilities of the analyzed bird species, the likelihood of bird habitat increases as one moves from the waterfront toward the river.
By synthesizing the major variables influencing the habitat of each species, key variables such as rivers, rice fields, fields, pastures, inland wetlands, tidal flats, orchards, cultivated lands, cliffs & rocks, elevation, lakes, and deciduous forests were identified as areas that can serve as habitats, shelters, resting places, and feeding grounds for birds. On the other hand, artificial structures such as bridges, railways, and other public facilities were found to have a negative impact on bird habitat. The development of a management plan for conservation areas based on the objective analysis presented in this study is expected to be extensively utilized in the future. It will provide diverse evidential materials for establishing effective conservation area management strategies.
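The SHAP logic used above can be shown on the one model class where Shapley values have a closed form: for a linear model, each feature's contribution is its weight times the feature's deviation from its mean, and the contributions plus the base value reconstruct the prediction exactly. The feature names and numbers below are invented; the study itself explains an XGBoost model, where SHAP values are computed algorithmically rather than by this formula.

```python
# SHAP idea on a linear model (illustrative; feature names and weights
# are hypothetical, not the study's fitted XGBoost model):
#   phi_i = w_i * (x_i - mean(x_i)),  and  base + sum(phi) == prediction.
weights = {"dist_to_river": -0.8, "elevation": -0.2, "wetland": 0.6}
means   = {"dist_to_river": 1.0, "elevation": 5.0, "wetland": 0.3}
x       = {"dist_to_river": 0.2, "elevation": 4.0, "wetland": 0.9}  # one site

base = sum(weights[f] * means[f] for f in weights)        # expected output
phi  = {f: weights[f] * (x[f] - means[f]) for f in weights}
pred = sum(weights[f] * x[f] for f in weights)

print(phi)                             # per-feature contributions
print(base + sum(phi.values()), pred)  # additivity: these two match
```

It is this additivity that lets the study rank rivers, wetlands, bridges, and the rest by their signed impact on each species' habitat probability.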

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata / v.4 no.2 / pp.1-12 / 2019
  • As Artificial Intelligence (AI) technology develops, it is being applied to fields such as image, voice, and text, where it has shown fine results in certain areas. Researchers have also tried to predict the stock market using artificial intelligence. Predicting the stock market is known to be a difficult problem, since the market is affected by factors such as the economy and politics. Within AI, there have been attempts to predict the ups and downs of stock prices by studying stock price patterns with various machine learning techniques. This study suggests a way of predicting stock price patterns based on the Convolutional Neural Network (CNN). A CNN classifies images by extracting features through convolutional layers; this study therefore classifies candlestick images generated from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made with the same day's stock price data. The second, referred to as Case 2, is to predict the next day's stock price patterns from images produced with daily stock price data. In Case 1, two data augmentation methods, random modification and Gaussian noise, are applied to generate more training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images and compares the accuracies across the Gaussian-noise images and the different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying. 
The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), or the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts. In a 60-minute candle chart, each candle carries 60 minutes of information: the open, high, low, and close prices. Case 2 has two labels, up and down, and its images were generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, this study suggests moving the candles in the images instead of using existing data augmentation techniques; how far the candles are moved is defined as the modified value. The average difference of closing prices between candles was 0.0029, so 0.003, 0.002, 0.001, and 0.00025 were used as modified values. The number of images was doubled after data augmentation. For Gaussian noise, the mean was 0 and the variance was 0.01. For both Case 1 and Case 2, the model is based on VGG-Net16, which has 16 layers. As a result, 10-minute -1 candle showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were used for the rest of the Case 1 experiments. The images with three candles removed were selected for data augmentation and the application of Gaussian noise: 10-minute -3 candle resulted in 79.72% accuracy, the images with a 0.00025 modified value and 100% of candles changed reached 79.92%, and applying Gaussian noise raised the accuracy to 80.98%. According to the outcomes of Case 2, 60-minute candle charts predicted the next day's patterns with 82.60% accuracy. In sum, this study is expected to contribute to further research on predicting stock price patterns using images, and it provides a possible method for augmenting stock data.
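The two augmentation steps above can be sketched on the underlying price series before the candles are rendered: shift every candle by the modified value, and separately perturb the prices with Gaussian noise of mean 0 and variance 0.01. The sample chart values are invented; the paper applies these operations to candle images built from real intraday data.

```python
# Sketch of the augmentation step (assumed details, toy prices): shift
# each (open, high, low, close) candle by the "modified value", and add
# Gaussian noise with mean 0 and variance 0.01 (sigma = 0.1).
import random

def shift_candles(candles, modified_value):
    """Move every price in every candle by a constant offset."""
    return [tuple(p + modified_value for p in c) for c in candles]

def gaussian_noise(candles, sigma=0.1, seed=0):
    rng = random.Random(seed)
    return [tuple(p + rng.gauss(0, sigma) for p in c) for c in candles]

chart = [(10.00, 10.05, 9.98, 10.02), (10.02, 10.08, 10.01, 10.06)]
augmented = chart + shift_candles(chart, 0.00025) + gaussian_noise(chart)
print(len(augmented))  # 6: original plus two augmented copies
```

Shifting by a value near the average inter-candle close difference (0.0029) keeps the augmented charts plausible, which is the rationale for choosing the modified values 0.003 down to 0.00025.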
