• Title/Summary/Keyword: genetic Neural Network

Search results: 531

Building a mathematics model for lane-change technology of autonomous vehicles

  • Phuong, Pham Anh;Phap, Huynh Cong;Tho, Quach Hai
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.641-653
    • /
    • 2022
  • In autonomous vehicle motion planning, the factors that must be considered to ensure occupant comfort include the vehicle's safety features and the slipperiness and smoothness of the road. In this paper, we build a mathematical model that combines a genetic algorithm and a neural network to generate lane-change solutions for autonomous vehicles, focusing on human vehicle-control skills. Traditional motion planning methods typically rely on vehicle kinematic and dynamic constraints when generating lane-change trajectories, yet the resulting trajectories differ significantly from those generated by human drivers. Therefore, to extract the optimal characteristics of actual drivers' lane-change maneuvers, the proposed solution builds the training data set for the motion planning process from human lane-change operations that contain those optimal elements. Simulations performed in a MATLAB environment demonstrate that the proposed solution operates effectively, reproducing driver-like maneuvers, improving passenger comfort, and producing a smooth lane-change trajectory.
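
The abstract describes coupling a genetic algorithm with a neural network trained on human lane-change demonstrations but gives no implementation details. The following is only a minimal sketch of that general idea: a GA evolves the weights of a tiny network so that its lateral-offset profile tracks a synthetic "human" lane-change curve while penalizing jerk for comfort. The network size, fitness terms, and the demonstration data are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "human" lane-change trajectory: lateral offset over time
# (a smooth sigmoid-like profile standing in for recorded driver data).
t = np.linspace(0.0, 5.0, 50)
human_y = 3.5 / (1.0 + np.exp(-2.0 * (t - 2.5)))   # 3.5 m lane width

def mlp_forward(w, x):
    """Tiny 1-8-1 network mapping time to lateral offset; w is a flat weight vector."""
    W1 = w[:8].reshape(8, 1); b1 = w[8:16]
    W2 = w[16:24].reshape(1, 8); b2 = w[24]
    h = np.tanh(W1 @ x[None, :] + b1[:, None])
    return (W2 @ h + b2).ravel()

def fitness(w):
    """Negative tracking error w.r.t. the human demonstration plus a smoothness term."""
    y = mlp_forward(w, t)
    jerk = np.diff(y, 3)                      # crude smoothness/comfort proxy
    return -np.mean((y - human_y) ** 2) - 0.1 * np.mean(jerk ** 2)

# Plain generational GA over the 25 network weights.
pop = rng.normal(size=(60, 25))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]            # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, 24)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        child = child + rng.normal(scale=0.1, size=25) * (rng.random(25) < 0.2)  # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("final tracking RMSE [m]:", np.sqrt(np.mean((mlp_forward(best, t) - human_y) ** 2)))
```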

Application of an Optimized Support Vector Regression Algorithm in Short-Term Traffic Flow Prediction

  • Ruibo, Ai;Cheng, Li;Na, Li
    • Journal of Information Processing Systems
    • /
    • v.18 no.6
    • /
    • pp.719-728
    • /
    • 2022
  • The prediction of short-term traffic flow is the theoretical basis of intelligent transportation and a key technology in traffic flow guidance systems, and research on it has shown considerable social value. At present, the support vector regression (SVR) prediction model, which is well suited to small samples, has been applied in this domain. To address the difficulty of parameter selection and to improve prediction accuracy, the artificial bee colony (ABC) algorithm is adopted to optimize the SVR parameters, which is referred to in this paper as the ABC-SVR algorithm. Simulation experiments comparing ABC-SVR with the plain SVR algorithm verify the feasibility of the proposed approach. Further experiments comparing ABC-SVR with particle swarm optimization SVR (PSO-SVR) and genetic-algorithm-optimized SVR (GA-SVR) show a better optimization effect, which is confirmed by a statistical test. Finally, experiments comparing ABC-SVR with the wavelet neural network time series (WNN-TS) algorithm show that the proposed ABC-SVR algorithm improves prediction accuracy and yields satisfactory prediction results.
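
The abstract gives no algorithmic details, so the snippet below is only a rough sketch of the ABC-SVR idea: a simplified artificial bee colony searches over the (C, gamma, epsilon) parameters of scikit-learn's SVR on a synthetic traffic-flow series. The colony size, abandonment limit, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic short-term traffic-flow series (daily periodicity + noise), lagged features.
flow = 300 + 120 * np.sin(np.arange(400) * 2 * np.pi / 96) + rng.normal(0, 15, 400)
X = np.column_stack([flow[i:i - 4] for i in range(4)])   # 4 previous intervals
y = flow[4:]

bounds = np.array([[1.0, 1000.0],     # C
                   [1e-4, 1.0],       # gamma
                   [0.01, 10.0]])     # epsilon

def cost(p):
    """Cross-validated mean absolute error of an SVR with parameters p."""
    model = SVR(C=p[0], gamma=p[1], epsilon=p[2])
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_absolute_error").mean()

# Simplified artificial bee colony over the three SVR parameters.
n_sources, limit, iters = 10, 5, 30
sources = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_sources, 3))
costs = np.array([cost(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

for _ in range(iters):
    for i in range(n_sources):                      # employed + onlooker phases (merged here)
        k = rng.integers(n_sources)
        j = rng.integers(3)
        candidate = sources[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        candidate = np.clip(candidate, bounds[:, 0], bounds[:, 1])
        c = cost(candidate)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = candidate, c, 0
        else:
            trials[i] += 1
    for i in np.where(trials > limit)[0]:           # scout phase: abandon stale sources
        sources[i] = rng.uniform(bounds[:, 0], bounds[:, 1])
        costs[i], trials[i] = cost(sources[i]), 0

best = sources[np.argmin(costs)]
print("best (C, gamma, epsilon):", best, "CV MAE:", costs.min())
```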

THREE-STAGED RISK EVALUATION MODEL FOR BIDDING ON INTERNATIONAL CONSTRUCTION PROJECTS

  • Wooyong Jung;Seung Heon Han
    • International conference on construction engineering and project management
    • /
    • 2011.02a
    • /
    • pp.534-541
    • /
    • 2011
  • Risk evaluation approaches for bidding on international construction projects are typically partitioned into three stages: country selection, project classification, and bid-cost evaluation. However, previous studies are frequently criticized for several crucial limitations: 1) a dearth of studies on country selection risk tailored to the overseas construction market at the corporate level; 2) no consideration of the uncertainty of the input variables themselves; 3) few probabilistic approaches to estimating the range of cost variance; and 4) little inclusion of covariance impacts. This study therefore suggests a three-staged risk evaluation model to resolve these inherent problems. In the first stage, a country portfolio model that maximizes the expected construction market growth rate and profit rate while decreasing market uncertainty is formulated using multi-objective genetic analysis. Next, probabilistic approaches for screening out bad projects are suggested by applying various data mining methods such as discriminant logistic regression, neural networks, C5.0, and support vector machines. In the last stage, a cost overrun prediction model is simulated to determine a reasonable bid cost while considering non-parametric distributions, the effects of systematic risks, and the firm's specific capability accrued in the given country. Through these three consecutive models, this study verifies that international construction risk can be allocated, reduced, and projected to some degree, thereby contributing to sustaining stable profits and revenues from both short-term and long-term perspectives.
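
As a small, hedged illustration of the second stage only (screening bad projects with several classifiers), the sketch below compares a few scikit-learn models on synthetic project data; the features, labels, and model choices are assumptions and do not reproduce the paper's dataset or its C5.0 model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic project-screening data: a few risk indicators and a good/bad label.
n = 300
X = rng.normal(size=(n, 5))            # e.g. country risk, owner credit, contract terms...
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)            # 1 = "bad" project to be screened out

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                                  random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}

# Compare screening accuracy with cross-validation, as one might in stage two.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:>20s}: CV accuracy = {acc:.3f}")
```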


Sensor Fault Detection Scheme based on Deep Learning and Support Vector Machine (딥 러닝 및 서포트 벡터 머신기반 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.2
    • /
    • pp.185-195
    • /
    • 2018
  • As machines have become increasingly automated across industries in recent years, managing and maintaining the automated machinery is of paramount importance. When a fault occurs in a sensor attached to a machine, the machine may malfunction and, further, serious damage may be caused in the process line. To prevent such situations, sensor faults should be monitored, diagnosed, and classified properly. In this paper, we propose a sensor fault detection scheme based on SVM and CNN to detect and classify typical sensor faults such as erratic, drift, hard-over, spike, and stuck faults. Time-domain statistical features are used for training and testing in the proposed scheme, and a genetic algorithm is used to select the subset of optimal features. A multi-layer SVM is used to classify the multiple sensor faults, and an ensemble technique is used for the CNN. As a result, the SVM that uses the feature subset selected by the genetic algorithm provides better performance than the SVM that uses all the features, while the performance of the CNN is superior to that of the SVM.
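
The fault types and GA-based feature selection described above suggest the following minimal sketch: synthetic sensor windows with injected faults, a small assumed set of time-domain statistical features, and a binary-encoded GA that selects the feature subset for an SVM. This is not the authors' implementation; the signals, feature set, and GA settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def make_signal(fault):
    """Synthetic sensor window with one injected fault type (illustrative only)."""
    t = np.linspace(0, 1, 200)
    s = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.05, 200)
    if fault == "drift":
        s += 0.8 * t
    elif fault == "spike":
        s[rng.integers(200)] += 4.0
    elif fault == "stuck":
        s[100:] = s[100]
    return s

def time_features(s):
    """Time-domain statistical features used for learning (an assumed set)."""
    return np.array([s.mean(), s.std(), s.min(), s.max(), np.ptp(s),
                     ((s - s.mean()) ** 3).mean(), ((s - s.mean()) ** 4).mean(),
                     np.abs(np.diff(s)).mean()])

classes = ["normal", "drift", "spike", "stuck"]
X = np.array([time_features(make_signal(c)) for c in classes for _ in range(60)])
y = np.repeat(np.arange(len(classes)), 60)

def fitness(mask):
    """Cross-validated SVM accuracy using only the features selected by the mask."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

# Binary-encoded GA over the feature mask.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(25):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(X.shape[1]) < 0.1
        child[flip] = 1 - child[flip]                  # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", best, "GA-subset acc:", fitness(best),
      "all-features acc:", cross_val_score(SVC(), X, y, cv=3).mean())
```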

Numerical Study on the Development of the Seismic Response Prediction Method for the Low-rise Building Structures using the Limited Information (제한된 정보를 이용한 저층 건물 구조물의 지진 응답 예측 기법 개발을 위한 해석적 연구)

  • Choi, Se-Woon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.33 no.4
    • /
    • pp.271-277
    • /
    • 2020
  • Cases of monitoring structural responses with multiple sensors are increasing. However, owing to cost and management constraints, only a limited number of sensors can be installed in a structure, so few structural responses are collected, which hinders analysis of the structure's behavior. A technique is therefore needed to predict, to a reliable level, the responses at locations where no sensors are installed using the limited sensors available. In this study, a numerical investigation is conducted to predict the seismic response of low-rise buildings using limited information. It is assumed that the only available response information is the acceleration of the first and top floors. From these two signals, the first natural frequency of the structure can be obtained, and the first-floor acceleration is used as the ground motion input. A genetic-algorithm-based method is presented for predicting the mass and stiffness of the structure by minimizing the error in the top-floor acceleration history and the error in the first natural frequency of the target structure; constraints are not considered. To determine the range of the design variables, which defines the search space, a parameter prediction method based on artificial neural networks is proposed. A five-story structure is used as an example to verify the proposed method.
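
As a hedged sketch of the GA-based identification step, the code below fits the story masses and stiffnesses of a five-story shear-building model so that its first natural frequency matches a "measured" target; the top-floor acceleration-history term and the ANN that narrows the search ranges in the paper are omitted, and all masses, stiffnesses, and bounds are assumed values.

```python
import numpy as np

rng = np.random.default_rng(4)

def first_natural_frequency(masses, stiffs):
    """First natural frequency [Hz] of a shear-building model (lumped masses, story stiffnesses)."""
    n = len(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += stiffs[i]
        if i + 1 < n:
            K[i, i] += stiffs[i + 1]
            K[i, i + 1] = K[i + 1, i] = -stiffs[i + 1]
    M = np.diag(masses)
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(eigvals.real.min()) / (2 * np.pi)

# "Measured" target: the first natural frequency identified from the floor accelerations.
true_m = np.full(5, 2.0e5)            # kg per story (assumed)
true_k = np.full(5, 2.5e8)            # N/m per story (assumed)
f_target = first_natural_frequency(true_m, true_k)

def fitness(x):
    """Negative relative error between the model's and the target first natural frequency."""
    m, k = x[:5], x[5:]
    return -abs(first_natural_frequency(m, k) - f_target) / f_target

# Search ranges for mass and stiffness (in the paper an ANN narrows these; here they are assumed).
low = np.concatenate([np.full(5, 1.0e5), np.full(5, 1.0e8)])
high = np.concatenate([np.full(5, 4.0e5), np.full(5, 5.0e8)])

pop = rng.uniform(low, high, size=(40, 10))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]
    idx = rng.integers(10, size=(len(pop), 2))
    w = rng.random((len(pop), 1))
    pop = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]        # blend crossover
    pop += rng.normal(scale=0.02, size=pop.shape) * (high - low)       # Gaussian mutation
    pop = np.clip(pop, low, high)

best = pop[np.argmax([fitness(p) for p in pop])]
print("target f1 [Hz]:", round(f_target, 3),
      "identified f1 [Hz]:", round(first_natural_frequency(best[:5], best[5:]), 3))
```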

Computational intelligence models for predicting the frictional resistance of driven pile foundations in cold regions

  • Shiguan Chen;Huimei Zhang;Kseniya I. Zykova;Hamed Gholizadeh Touchaei;Chao Yuan;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete
    • /
    • v.32 no.2
    • /
    • pp.217-232
    • /
    • 2023
  • Numerous studies have examined the behavior of pile foundations in cold regions. This study first employs artificial neural networks (ANN) to predict pile bearing capacity, focusing on pile data recorded primarily in cold regions. Because the ANN technique has disadvantages such as difficulty in finding global minima and a slow convergence rate, the second phase of this study develops ANN-based predictive models improved with the Elephant Herding Optimizer (EHO), Dragonfly Algorithm (DA), Genetic Algorithm (GA), and Evolution Strategy (ES) for predicting pile bearing capacity. The network inputs included the pile geometrical features, pile area (m2), pile length (m), internal friction angle along the pile body and at the pile tip (Ø°), and effective vertical stress; the output of the MLP model was the ultimate bearing capacity. A sensitivity analysis was performed to determine the optimum parameters and select the best predictive model, and a trial-and-error technique was used to find the optimum network architecture and number of hidden nodes. According to the results, there is good consistency between the DA-MLP-predicted and the measured pile bearing capacities. Based on the coefficients of determination (R2) of 0.90364 and 0.8643 for the testing and training datasets, respectively, the DA-MLP model can be implemented effectively, with high reliability, efficiency, and practicability, to predict the bearing capacity of piles.
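
The metaheuristic optimizers named in the abstract (EHO, DA, GA, ES) are not available in common libraries, so the sketch below only illustrates the trial-and-error search over the MLP's hidden-layer size mentioned above, on a synthetic stand-in for the pile dataset; the capacity formula and value ranges are invented for illustration and are not design equations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)

# Synthetic stand-in for the pile dataset: area, length, friction angles, effective stress.
n = 250
area   = rng.uniform(0.05, 1.0, n)        # m^2
length = rng.uniform(5.0, 30.0, n)        # m
phi_b  = rng.uniform(20.0, 40.0, n)       # deg, along pile body
phi_t  = rng.uniform(25.0, 45.0, n)       # deg, at pile tip
sigma  = rng.uniform(50.0, 400.0, n)      # kPa, effective vertical stress
Q = (8 * area * sigma * np.tan(np.radians(phi_t))
     + 0.4 * length * np.sqrt(area) * sigma * np.tan(np.radians(phi_b))
     + rng.normal(0, 30, n))              # illustrative capacity, not a design formula
X = np.column_stack([area, length, phi_b, phi_t, sigma])

X_tr, X_te, y_tr, y_te = train_test_split(X, Q, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Trial-and-error over the hidden-layer size, as the abstract describes for the MLP.
best = None
for n_hidden in (4, 8, 16, 32):
    mlp = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000, random_state=0)
    mlp.fit(scaler.transform(X_tr), y_tr)
    r2 = r2_score(y_te, mlp.predict(scaler.transform(X_te)))
    print(f"hidden nodes = {n_hidden:2d}  test R^2 = {r2:.3f}")
    if best is None or r2 > best[1]:
        best = (n_hidden, r2)
print("selected architecture:", best)
```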

Deep Learning Algorithm and Prediction Model Associated with Data Transmission of User-Participating Wearable Devices (사용자 참여형 웨어러블 디바이스 데이터 전송 연계 및 딥러닝 대사증후군 예측 모델)

  • Lee, Hyunsik;Lee, Woongjae;Jeong, Taikyeong
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.6
    • /
    • pp.33-45
    • /
    • 2020
  • This paper examines how the latest cutting-edge technologies can predict individual diseases in real medical environments, at a time when various types of wearable devices are rapidly proliferating in the healthcare domain. By merging clinical data, genetic data, and lifelog data collected, processed, and transmitted through a user-participating wearable device, it presents the process of connecting a learning model and a feedback model in a deep neural network environment. For a real-world setting that has undergone medical-IT clinical trial procedures in such a high-tech medical field, the effect of a specific gene associated with metabolic syndrome on the disease is measured, and clinical information and lifelog data are merged to handle the heterogeneous data. That is, the objective suitability and certainty of a deep neural network over heterogeneous data is demonstrated, and through this the performance under noise in a real deep learning environment is evaluated. For the autoencoder, we show that the accuracy and the predicted value, sampled every 1,000 epochs, change linearly as the value of the variable increases.
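
As a rough sketch of the pipeline suggested by the abstract (merging heterogeneous clinical, genetic, and lifelog features, compressing them with an autoencoder, classifying metabolic syndrome, and checking robustness to noise), the code below uses Keras on entirely synthetic data; the feature definitions, network sizes, and noise levels are assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(6)

# Synthetic stand-ins for merged heterogeneous data: clinical, genetic, and lifelog features.
n = 1000
clinical = rng.normal(size=(n, 6))                          # e.g. blood pressure, glucose, lipids
genetic  = rng.integers(0, 3, size=(n, 4)).astype(float)    # SNP genotypes coded 0/1/2
lifelog  = rng.normal(size=(n, 8))                          # activity / sleep summaries
X = np.concatenate([clinical, genetic, lifelog], axis=1)
risk = 0.8 * clinical[:, 0] + 0.5 * genetic[:, 0] - 0.4 * lifelog[:, 0]
y = (risk + rng.normal(0, 0.5, n) > 0).astype(int)          # metabolic syndrome label

# Autoencoder that compresses the merged features, followed by a small classifier.
inp = tf.keras.Input(shape=(X.shape[1],))
code = tf.keras.layers.Dense(6, activation="relu")(inp)
recon = tf.keras.layers.Dense(X.shape[1])(code)
autoencoder = tf.keras.Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=30, batch_size=32, verbose=0)

encoder = tf.keras.Model(inp, code)
clf = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                           tf.keras.layers.Dense(1, activation="sigmoid")])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(encoder.predict(X, verbose=0), y, epochs=30, batch_size=32, verbose=0)

# Rough look at robustness to input noise, as the abstract's performance evaluation suggests.
for sigma in (0.0, 0.2, 0.5):
    Xn = X + rng.normal(0, sigma, X.shape)
    _, acc = clf.evaluate(encoder.predict(Xn, verbose=0), y, verbose=0)
    print(f"input noise sigma = {sigma}: accuracy = {acc:.3f}")
```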

Evaluation of a Thermal Conductivity Prediction Model for Compacted Clay Based on a Machine Learning Method (기계학습법을 통한 압축 벤토나이트의 열전도도 추정 모델 평가)

  • Yoon, Seok;Bang, Hyun-Tae;Kim, Geon-Young;Jeon, Haemin
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.41 no.2
    • /
    • pp.123-131
    • /
    • 2021
  • The buffer is a key component of the engineered barrier system that safeguards the disposal of high-level radioactive waste. Buffers are located between the disposal canisters and the host rock; they restrain the release of radionuclides and protect the canisters from inflowing groundwater. Since considerable heat is released from a disposal canister to the surrounding buffer, the thermal conductivity of the buffer is a very important parameter for overall disposal safety, and for this reason much research has been conducted on thermal conductivity prediction models that consider various factors. In this study, the thermal conductivity of a buffer is estimated using the following machine learning methods: linear regression, decision tree, support vector machine (SVM), ensemble methods, Gaussian process regression (GPR), neural network, deep belief network, and genetic programming. The results show that machine learning methods such as the ensemble, genetic programming, SVM with a cubic kernel, and GPR perform better than the regression model, with the XGBoost ensemble and Gaussian process regression models showing the best performance.
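
A minimal sketch of the model comparison described above, using scikit-learn on synthetic buffer data (dry density and water content versus thermal conductivity); GradientBoostingRegressor stands in for the XGBoost ensemble, and the data-generating trend is invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic buffer data: dry density [g/cm3], water content [%] vs. thermal conductivity [W/mK].
n = 200
rho_d = rng.uniform(1.4, 1.8, n)
w = rng.uniform(5.0, 25.0, n)
k = 0.3 + 0.55 * (rho_d - 1.4) + 0.02 * w + rng.normal(0, 0.02, n)   # illustrative trend only
X = np.column_stack([rho_d, w])

models = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(max_depth=4),
    "cubic-kernel SVM": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3)),
    "GPR": make_pipeline(StandardScaler(),
                         GaussianProcessRegressor(alpha=1e-3, normalize_y=True)),
    "gradient boosting": GradientBoostingRegressor(),   # stand-in for the XGBoost ensemble
}

for name, model in models.items():
    r2 = cross_val_score(model, X, k, cv=5, scoring="r2").mean()
    print(f"{name:>18s}: CV R^2 = {r2:.3f}")
```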

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset, we use a genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution: by applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results. In particular, the mutation operator prevents the GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM, which is why we adopt it as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world credit rating case in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea and contained 39 financial ratios of 1,295 companies in the manufacturing industry along with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled with four classes: 1 (A1), 2 (A2), 3 (A3), and 4 (B and C). Eighty percent of the data for each class was used for training and the remaining 20 percent for validation, and five-fold cross validation was applied to mitigate the small sample size. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were implemented using statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed GAMSVM model outperformed all the competing models while using fewer independent variables and achieving higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings, whereas the finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test; GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
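
The GAMSVM design (a chromosome that jointly encodes the feature-selection bits and the RBF kernel parameters, evaluated by a one-against-one multiclass SVM) can be sketched roughly as follows; this is not the LIBSVM/Evolver implementation described above, and the synthetic data, chromosome encoding, and GA settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(8)

# Synthetic stand-in for the credit data: 14 candidate ratios, 4 rating classes.
X, y = make_classification(n_samples=600, n_features=14, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
N_FEAT = X.shape[1]

def decode(chrom):
    """Chromosome = 14 feature-selection bits + log10(C) + log10(gamma)."""
    mask = chrom[:N_FEAT] > 0.5
    C = 10 ** chrom[N_FEAT]           # C in [10^-1, 10^3]
    gamma = 10 ** chrom[N_FEAT + 1]   # gamma in [10^-4, 10^1]
    return mask, C, gamma

def fitness(chrom):
    mask, C, gamma = decode(chrom)
    if mask.sum() == 0:
        return 0.0
    # sklearn's SVC handles multiclass problems with a one-against-one scheme, as in OAO MSVM.
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

low = np.concatenate([np.zeros(N_FEAT), [-1.0, -4.0]])
high = np.concatenate([np.ones(N_FEAT), [3.0, 1.0]])

pop = rng.uniform(low, high, size=(24, N_FEAT + 2))
for _ in range(15):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-8:]]
    idx = rng.integers(8, size=(len(pop), 2))
    alpha = rng.random((len(pop), 1))
    pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]   # blend crossover
    mut = rng.random(pop.shape) < 0.1
    pop[mut] += rng.normal(0, 0.2, mut.sum())                             # Gaussian mutation
    pop = np.clip(pop, low, high)

best = max(pop, key=fitness)
mask, C, gamma = decode(best)
print("selected features:", np.where(mask)[0],
      " C=%.3g gamma=%.3g  CV accuracy=%.3f" % (C, gamma, fitness(best)))
```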

Analysis of Automatic Meter Reading Systems (IBM, Oracle, and Itron) (국외 상수도 원격검침 시스템(IBM, Oracle, Itron) 분석)

  • Joo, Jin Chul;Kim, Juhwan;Lee, Doojin;Choi, Taeho;Kim, Jong Kyu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2017.05a
    • /
    • pp.264-264
    • /
    • 2017
  • In overseas automatic meter reading (AMR) systems for water supply, the data transmission method was determined by comprehensively considering city size, meter density, availability of electric power, and whether a communication network was already installed. Most smart water meter manufacturers have developed and sold reading terminals linked to a neighborhood area network to transmit the readings (data) supplied by the meter's encoder, using radio frequency (RF) communication with their own proprietary protocols. For the wide area network, the topology connecting the nodes (end meters and sensors) and their communication links can take forms such as star, mesh, bus, and tree, with the star and mesh configurations found to be the most widely used. System integration and operation companies such as IBM, Oracle, and Itron have developed integrated water management systems, such as water infrastructure management and integrated network solutions, and applied them in the field; through remote meter reading, they provide customers with current consumption, accumulated past consumption, leak detection services, and real-time billing via web portals and apps. In addition, some vendors provide water suppliers with decision support services for optimizing water supply: city water supply/consumption managers monitor residents' water use to identify average daily consumption and usage trends, detect leaks to support repairs, present water-use sustainability indices, and monitor residents' consumption data in real time. More recently, technology is being developed that uses artificial intelligence to profile household water-use curves by end use (laundry, toilet, shower, dishwashing, etc.), so that when only aggregate consumption is detected by the smart water meter, a re-profiling technique breaks it down by detailed end use, identifies points of overconsumption within the household, and encourages savings. For predicting future water demand, forecasting functions have also been built that analyze various time series data with linear models (autoregressive, autoregressive moving average, autoregressive integrated moving average, etc.) and nonlinear models (fuzzy logic, neural networks, genetic algorithms, etc.), which are compared with one another to provide the optimal water-demand forecasting tool.
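
As a hedged sketch of the forecasting comparison mentioned at the end of the abstract (linear time-series models versus nonlinear ones), the code below fits an ARIMA model and a small neural network to a synthetic daily water-use series and compares their forecast errors; the series, ARIMA order, and lag structure are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(9)

# Synthetic daily household water-use series [L/day] with weekly seasonality and a mild trend.
days = np.arange(730)
use = 250 + 40 * np.sin(2 * np.pi * days / 7) + 0.05 * days + rng.normal(0, 10, 730)
train, test = use[:700], use[700:]

# Linear model: ARIMA on the raw series.
arima = ARIMA(train, order=(7, 1, 1)).fit()
arima_pred = arima.forecast(steps=len(test))

# Nonlinear model: MLP on lagged values (a simple neural-network stand-in).
def lagged(series, lags=7):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

X_tr, y_tr = lagged(train)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

# Recursive multi-step forecast with the MLP.
window = list(train[-7:])
mlp_pred = []
for _ in range(len(test)):
    nxt = mlp.predict(np.array(window[-7:])[None, :])[0]
    mlp_pred.append(nxt)
    window.append(nxt)

print("ARIMA MAE:", mean_absolute_error(test, arima_pred))
print("MLP   MAE:", mean_absolute_error(test, mlp_pred))
```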
