• Title/Summary/Keyword: Cost-Sensitive Learning


Research on damage detection and assessment of civil engineering structures based on DeepLabV3+ deep learning model

  • Chengyan Song
    • Structural Engineering and Mechanics
    • /
    • v.91 no.5
    • /
    • pp.443-457
    • /
    • 2024
  • At present, traditional concrete surface inspection based on manual visual inspection suffers from high cost and safety risks, while conventional computer-vision methods rely on hand-selected features that are sensitive to environmental changes and difficult to generalize. To solve these problems, this paper introduces deep-learning techniques from computer vision to achieve automatic feature extraction of structural damage, with excellent detection speed and strong generalization ability. The main contents of this study are as follows: (1) A method based on the DeepLabV3+ convolutional neural network is proposed for surface detection of post-earthquake structural damage, including concrete cracks, spalling, and exposed steel bars. Key semantic information is extracted by different backbone networks, and datasets containing various types of surface damage are used for training, testing, and evaluation. Intersection-over-union (IoU) scores of 54.4%, 44.2%, and 89.9% on the test set demonstrate the network's capability to accurately identify different types of structural surface damage in pixel-level segmentation, highlighting its effectiveness in varied testing scenarios. (2) A semantic segmentation model based on the DeepLabV3+ convolutional neural network is proposed for the detection and evaluation of post-earthquake structural components. Using a dataset that includes building structural components and their damage degrees for training, testing, and evaluation, semantic segmentation detection accuracies of 98.5% and 56.9% were recorded. To provide a comprehensive assessment that considers both false positives and false negatives, the Mean Intersection over Union (Mean IoU) was employed as the primary evaluation metric. This choice ensures that the network's performance in detecting and evaluating pixel-level damage in post-earthquake structural components is evaluated uniformly across all experiments.
By incorporating deep learning technology, this study not only offers an innovative solution for accurately identifying post-earthquake damage in civil engineering structures but also contributes significantly to empirical research in automated detection and evaluation within the field of structural health monitoring.
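The IoU and Mean IoU metrics this abstract reports can be reproduced in miniature. Below is a minimal sketch of the two metrics on a toy pixel-label array; the class labels (background, crack, spalling) are placeholders, not the study's data.

```python
# Per-class Intersection over Union (IoU) and Mean IoU, the metrics
# used for pixel-level damage segmentation. Illustrative sketch only.

def class_iou(pred, truth, cls):
    """IoU for one class over flat lists of pixel labels."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

def mean_iou(pred, truth, classes):
    """Unweighted average of per-class IoU scores."""
    return sum(class_iou(pred, truth, c) for c in classes) / len(classes)

# Toy 1-D "image": 0 = background, 1 = crack, 2 = spalling (hypothetical)
truth = [0, 0, 1, 1, 2, 2, 0, 1]
pred  = [0, 1, 1, 1, 2, 0, 0, 1]
print(round(class_iou(pred, truth, 1), 3))        # -> 0.75 (crack IoU)
print(round(mean_iou(pred, truth, [0, 1, 2]), 3)) # -> 0.583
```

Mean IoU penalizes both false positives (they grow the union) and false negatives (they shrink the intersection), which is why the study adopts it as the primary metric.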

Case Analysis of Applications of Seismic Data Denoising Methods using Deep-Learning Techniques (심층 학습 기법을 이용한 탄성파 자료 잡음 제거 적용사례 분석)

  • Jo, Jun Hyeon;Ha, Wansoo
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.2
    • /
    • pp.72-88
    • /
    • 2020
  • Recent rapid advances in computer hardware performance have led to relatively low computational costs, increasing the number of applications of machine-learning techniques to geophysical problems. In particular, deep-learning techniques are gaining in popularity as the number of cases successfully solving complex and nonlinear problems has gradually increased. In this paper, applications of seismic data denoising methods using deep-learning techniques are introduced and investigated. Depending on the type of attenuated noise, these studies are grouped into denoising applications for coherent noise, random noise, and the combination of the two, and the deep-learning techniques used to remove each kind of noise are then examined. Unlike conventional methods for attenuating seismic noise, deep neural networks, a typical deep-learning technique, learn the characteristics of the noise on their own and then automatically optimize their parameters. Such methods therefore generalize better than conventional methods and can reduce labor costs. Several studies have also demonstrated that deep-learning techniques perform well in terms of computational cost and denoising performance. Based on the results of the applications covered in this paper, the pros and cons of deep-learning techniques for removing seismic noise are analyzed and discussed.
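The contrast the survey draws, hand-tuned conventional filters versus networks that learn noise characteristics from data, can be made concrete with a trivial baseline. The sketch below (illustrative only, not from any of the surveyed studies) denoises a synthetic trace with a fixed moving-average filter whose window size must be chosen by hand; that window is exactly the kind of parameter a deep network would instead optimize from data.

```python
import math, random

def moving_average(trace, window=5):
    """Fixed-window smoothing: the hand-tuned 'conventional' denoiser."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(sum(trace[lo:hi]) / (hi - lo))
    return out

def rmse(a, b):
    """Root-mean-square error between two equal-length traces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(0)
clean = [math.sin(2 * math.pi * i / 40) for i in range(200)]   # synthetic signal
noisy = [c + random.gauss(0, 0.3) for c in clean]              # add random noise

# Smoothing attenuates the random noise far more than the slow signal:
print(rmse(moving_average(noisy), clean) < rmse(noisy, clean))  # True
```

A deep denoiser plays the same role but with thousands of learned parameters instead of one hand-picked window, which is the labor-cost advantage the survey highlights.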

A New Adaptive Kernel Estimation Method for Correntropy Equalizers (코렌트로피 이퀄라이져를 위한 새로운 커널 사이즈 적응 추정 방법)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.627-632
    • /
    • 2021
  • ITL (information-theoretic learning) has been applied successfully to adaptive signal processing and machine-learning applications, but deciding the kernel size, which has a great impact on system performance, remains difficult. The correntropy algorithm, one of the ITL methods, has superior properties of impulsive-noise robustness and channel-distortion compensation; on the other hand, it is also sensitive to kernel sizes, which can lead to system instability. In this paper, considering the sensitivity introduced by the kernel size cubed in the denominator of the cost-function slope, a new adaptive kernel estimation method using the rate of change in error power with respect to the kernel-size variation is proposed for the correntropy algorithm. The performance of the proposed kernel-adjusted correntropy algorithm was examined in a distortion-compensation experiment with impulsive noise and a multipath-distorted channel. The proposed method converges twice as fast as the conventional algorithm with a fixed kernel size, and it converged appropriately for initial kernel sizes ranging from 2.0 to 6.0; hence it has a wide acceptable margin of initial kernel sizes.
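The kernel-size-cubed gradient term the abstract refers to can be sketched with a plain maximum-correntropy (MCC) weight update for a linear filter under the Gaussian kernel. This is an illustrative fixed-kernel update, not the paper's adaptive kernel-estimation rule; the function names, step size, and training data are hypothetical.

```python
import math

def mcc_step(w, x, d, mu, sigma):
    """One maximum-correntropy (MCC) update for a linear filter.
    The e / sigma**3 factor is the 'kernel size cubed in the
    denominator' of the cost-function slope; large errors are
    exponentially de-weighted, giving impulsive-noise robustness."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # error
    g = math.exp(-e * e / (2 * sigma ** 2)) * e / sigma ** 3
    return [wi + mu * g * xi for wi, xi in zip(w, x)], e

# Identify a one-tap channel gain of 2.0 from binary training symbols.
w = [0.0]
for i in range(200):
    x = [1.0 if i % 2 else -1.0]
    w, e = mcc_step(w, x, d=2.0 * x[0], mu=0.5, sigma=1.0)
print(round(w[0], 2))  # -> 2.0
```

Because the gradient carries a 1/sigma**3 factor, the same step size `mu` produces very different effective learning rates as `sigma` changes, which is the instability the paper's adaptive kernel-size estimation is designed to tame.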

Machine learning techniques for reinforced concrete's tensile strength assessment under different wetting and drying cycles

  • Ibrahim Albaijan;Danial Fakhri;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Khaled Mohamed Elhadi;Shima Rashidi
    • Steel and Composite Structures
    • /
    • v.49 no.3
    • /
    • pp.337-348
    • /
    • 2023
  • Successive wetting and drying cycles of concrete due to weather changes can endanger the safety of engineering structures over time. Considering wetting and drying cycles in concrete tests can lead to a more accurate and reliable design of engineering structures. This study aims to provide a model that can be used to estimate the resistance properties of concrete under different wetting and drying cycles. Complex sample preparation methods, the necessity for highly accurate and sensitive instruments, early sample failure, and brittle samples all contribute to the difficulty of measuring the strength of concrete in the laboratory. To address these problems, this study investigated the potential of six machine-learning techniques, namely ANN, SVM, RF, KNN, XGBoost, and NB, to predict concrete's tensile strength, using 240 datasets obtained with the Brazilian test (80% for training, 20% for testing). The tests examined the effect of additives such as glass and polypropylene, as well as the effect of wetting and drying cycles, on the tensile strength of concrete. The statistical analysis revealed that the XGBoost model was the most robust, with R2 = 0.9155, mean absolute error (MAE) = 0.1080 MPa, and variance accounted for (VAF) = 91.54% in predicting the concrete tensile strength. This work's significance is that it allows civil engineers to accurately estimate the tensile strength of different types of concrete, eliminating the high time and cost required for laboratory tests.
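A minimal sketch of the three goodness-of-fit measures the study reports (MAE, R2, and VAF) follows; the sample strength values below are invented for illustration, not the paper's Brazilian-test data.

```python
# The three metrics used to compare the six models. Illustrative only.

def mae(y, yhat):
    """Mean absolute error, in the units of y (here MPa)."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def vaf(y, yhat):
    """Variance accounted for, in percent."""
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    resid = [a - b for a, b in zip(y, yhat)]
    return (1 - var(resid) / var(y)) * 100

y    = [2.1, 2.8, 3.4, 3.9, 4.6]   # measured tensile strength, MPa (made up)
yhat = [2.0, 2.9, 3.3, 4.1, 4.5]   # model predictions (made up)
print(round(mae(y, yhat), 3), round(r2(y, yhat), 3), round(vaf(y, yhat), 1))
# -> 0.12 0.979 97.9
```

Note that R2 and VAF agree here because the residuals happen to have zero mean; they diverge when a model is biased, which is one reason the study reports both.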

Super-resolution Algorithm Using Adaptive Unsharp Masking for Infra-red Images (적외선 영상을 위한 적응적 언샤프 마스킹을 이용한 초고해상도 알고리즘)

  • Kim, Yong-Jun;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.21 no.2
    • /
    • pp.180-191
    • /
    • 2016
  • When up-scaling algorithms designed for visible-light images are applied to infrared (IR) images, they rarely work well because IR images are usually blurred. To solve this problem, this paper proposes an up-scaling algorithm for IR images. We employ adaptive dynamic range coding (ADRC) as a simple classifier, based on the observation that IR images have weak details. Also, since the human visual system is more sensitive to edges, our algorithm focuses on edges, and we add pre-processing in the learning phase. As a result, we can improve the visibility of IR images without increasing computational cost. Compared with anchored neighborhood regression (A+), the proposed algorithm provides better results: in terms of just-noticeable blur, it scores 0.0201 higher than A+.
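The ADRC classifier the abstract mentions can be sketched in a few lines: each pixel in a block is coded 1 if it lies at or above the block mean and 0 otherwise, and the resulting bit pattern serves as a class index, typically used to select a per-class up-scaling filter. A minimal 1-bit version (illustrative, not the paper's implementation):

```python
# 1-bit ADRC (adaptive dynamic range coding) block classifier.
# A flat block of N pixels maps to one of 2**N classes.

def adrc_class(block):
    """Map a flat pixel block to its 1-bit ADRC class index."""
    mean = sum(block) / len(block)
    bits = [1 if p >= mean else 0 for p in block]
    idx = 0
    for b in bits:                 # pack bits into an integer index
        idx = (idx << 1) | b
    return idx

# A 2x2 block with a bright lower-right corner:
print(adrc_class([10, 10, 10, 200]))  # bit pattern 0001 -> class 1
print(adrc_class([200, 10, 10, 10]))  # bit pattern 1000 -> class 8
```

Because the code depends only on each pixel's position relative to the block mean, the classifier is invariant to brightness and contrast, which suits the weak-detail statistics of IR images.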

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those seeking better performance in predicting customers' response to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy incurs unnecessary cost. In addition, the big-data environment has accelerated the development of response models with data-mining techniques such as case-based reasoning (CBR), neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, and it remains attractive for business data mining even though it has not shown high performance compared to other machine-learning techniques. Many studies have therefore tried to improve CBR with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) used logit, neural networks, and CBR to predict which customers would purchase the items promoted by the marketing department, and optimized the number of neighbors k for k-nearest neighbor with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and support vector machines (SVM) predicted customers' response to marketing promotion better than each individual model. This paper presents an approach to predicting customers' response to marketing promotion with case-based reasoning. The proposed model was developed by applying different weights to each feature.
We fitted a logit model on a database containing promotion and purchasing data for bath soap, and then used its coefficients as the feature weights in CBR. We empirically compared the proposed weighted-CBR model with neural networks and a pure CBR model, and found that the weighted model performed better than pure CBR. Imbalanced data is a common problem when building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared to the other classes. A classification model such as a response model has trouble learning patterns from such data because it tends to ignore the minority class while classifying the majority class correctly. Sampling, categorized into under-sampling and over-sampling, is the most representative approach to resolving imbalanced data distributions. CBR, however, is not sensitive to the data distribution because, unlike machine-learning algorithms, it does not learn a model from the data. In this study, we investigated the robustness of the proposed model while changing the ratio of responding to non-responding customers, because in the real world the customers who respond to a promotion are always a small fraction of those who do not. We simulated the proposed model 100 times with different ratios of responding to non-responding customers to validate its robustness under imbalanced data distributions, and found that the proposed CBR-based model outperformed the compared models on the imbalanced data sets.
Our study is expected to improve the performance of response models for promotion programs built with CBR under the imbalanced data distributions of the real world.
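The paper's core idea, logit coefficients reused as feature weights inside the CBR similarity measure, can be sketched as a weighted k-nearest-neighbour vote. All data, weights, and helper names below are hypothetical, chosen only to illustrate the mechanism.

```python
# Weighted CBR as weighted k-NN: features with larger (logit-derived)
# weights dominate the distance, so they dominate case retrieval.

def weighted_distance(a, b, weights):
    """Euclidean distance with per-feature weights."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5

def predict_response(case, base, labels, weights, k=3):
    """Majority vote among the k nearest weighted neighbours."""
    ranked = sorted(range(len(base)),
                    key=lambda i: weighted_distance(case, base[i], weights))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)

# Feature weights, e.g. |logit coefficients| (made up):
weights = [2.0, 0.5]
# Case base: [feature1, feature2] with response labels 1/0 (made up):
base   = [[0.9, 0.1], [0.8, 0.9], [0.1, 0.2], [0.2, 0.8], [0.15, 0.5]]
labels = [1, 1, 0, 0, 0]
print(predict_response([0.85, 0.3], base, labels, weights))  # -> 1
```

Because retrieval needs no trained model, only stored cases, the class ratio in the base does not change the distance computation, which is the distribution-insensitivity the study exploits under imbalance.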

A Study on the UIC(University & Industry Collaboration) Model for Global New Business (글로벌 사업 진출을 위한 산학협력 협업촉진모델: 경남 G대학 GTEP 사업 실험사례연구)

  • Baek, Jong-ok;Park, Sang-hyeok;Seol, Byung-moon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.10 no.6
    • /
    • pp.69-80
    • /
    • 2015
  • An environment and system that promote collaboration are essential for competitiveness. What factors, then, encourage members of an organization to collaborate? Collaboration is defined as many people working together, sharing information and processes, and pursuing common goals in order to improve productivity. The factors that promote collaboration are a shared vision, organizational principles and rules that reflect that vision, online systems, and communication methods. First, the more members sympathize with a shared vision, the more actively and voluntarily they participate in the organization's activities. Second, when all members are aware of and accept the organization's rules and principles, good performance follows; sharing business activities also supports self-development and helps create a team environment and atmosphere conducive to regular collaboration. Third, systematic construction of an online collaboration system makes work efficient and fast. From the student team and company A, we found that cloud services and social media can deliver low-cost, high-efficiency services; as the latest information technologies are introduced, continuing education must be provided so that members can actively use the organization's systems. Fourth, actively communicating the company's activities to people both inside and outside the organization is very important for shaping the company's image and creating corporate performance, and efforts to communicate through social media, reflecting the latest trends, are needed. Developing a systematic collaboration-promotion model requires steps matched to each organizational role.
First, the chief executive officer must form a firm and clear vision, propagate it to the organization's members, and instill conviction and a sense of belonging. Second, middle managers must systematically propagate the CEO's vision and establish a system of organizational rules and principles. Third, general staff must internalize the company's vision and fulfill their roles inside and outside the company. The purpose of this study is to understand the factors that promote collaboration in well-functioning organizations, based on the golden-circle and strategic-alignment models, and to derive success factors through case analysis of student teams using smart-work tools that reflect the latest trends in information technology. This is expected to provide a foundation for future empirical studies.


A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.231-252
    • /
    • 2021
  • Artificial intelligence (AI) is a key technology that will change the future the most. It affects industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and infers/predicts through self-learning, and research and investment in automation that discovers and solves problems on its own continue. Automation with artificial intelligence brings benefits such as cost reduction, minimized human intervention, and compensation for differences in human capability, but it also has side effects, such as limits to AI autonomy and erroneous results due to algorithmic bias, and in the labor market it raises fears of job replacement. Prior studies on the utilization of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, so they avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, artificial intelligence has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted to human-AI collaboration rather than AI alone. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone; in medicine, pathologist-deep learning collaboration reduced the pathologists' cancer-diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to position AI as augmented intelligence. Human-AI collaboration is emphasized in decision-making because artificial intelligence is superior in information-based analysis while intuition remains a uniquely human capability, so their collaboration can produce optimal decisions.
In an environment where change accelerates and uncertainty increases, the need for artificial intelligence in decision-making will grow, and active discussion of approaches that use artificial intelligence for rational decision-making is expected. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between decision-makers' personality traits and the advisor type. Advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that advisor type does not directly affect the utilization of advice: decision-makers utilize advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of the advice higher regardless of their personality traits, and the advice was utilized more actively. When the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, which has drawn growing recent interest in artificial intelligence research. It expands the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and, as for practical significance, it suggests what companies should consider in order to enhance AI capability.
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information presented by AI and can add non-digital information to make decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies such as analytical skills and information-technology capabilities are important. In addition, greater performance is expected if employees' personality traits are considered.