• Title/Summary/Keyword: optimal network model

Development of a Freeway Travel Time Forecasting Model for Long Distance Section with Due Regard to Time-lag (시간처짐현상을 고려한 장거리구간 통행시간 예측 모형 개발)

  • 이의은;김정현
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.4
    • /
    • pp.51-61
    • /
    • 2002
  • In this dissertation, we demonstrate a travel time forecasting model for multi-section freeways that takes drivers' behavior into account. The forecasted travel times currently furnished, which are based on expected travel time data and prior experiments, cannot reflect the time-lag phenomenon, especially for long-distance trips, so drivers no longer trust the forecasts and the effectiveness of ATIS (Advanced Traveler Information System) is reduced. Therefore, to forecast travel time on multi-section freeways while reflecting the time-lag phenomenon and tollgate delay, we used traffic volume data and TCS data collected by the Korea Highway Corporation; data containing unusual (mixed) conditions were kept so that the model can be applied to a real system. The forecasting model is a feed-forward network with three input units and two output units, trained by back-propagation. The optimal alternative was chosen from twelve alternatives defined by the number of hidden-layer units and the number of training iterations, which affect learning speed and forecasting capability. To evaluate the forecasting capability of the developed ANN model, it was compared with the algorithm currently used as an information source for freeway travel time. In this comparison with the reference model, MSE, MARE, MAE, and a T-test were computed, and the artificial neural network model showed superior forecasting capability across the comparison indices. Moreover, the results were calculated in a way that reflects the particular structure of the data used in this experiment.
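
For illustration, a minimal sketch of a three-input, two-output feed-forward network trained by back-propagation is shown below; the hidden-layer size, learning rate, and the randomly generated stand-in data are assumptions, not the paper's configuration:

```python
# A minimal sketch of a 3-input / 2-output feed-forward network trained by
# back-propagation, using placeholder feature and target arrays.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: rows are observations of
# [traffic volume, TCS entry count, time of day] -> [section travel time 1, section travel time 2]
X = np.random.rand(500, 3)          # stand-in for real detector/TCS features
y = np.random.rand(500, 2)          # stand-in for measured section travel times

# One hidden layer; the unit count and iteration limit are the kind of
# hyperparameters the paper searches over across its twelve alternatives.
model = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",
                     learning_rate_init=0.01, max_iter=2000, random_state=0)
model.fit(X, y)

predicted_travel_times = model.predict(X[:5])
print(predicted_travel_times)
```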

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. To implement the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to construct the initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid component of the GA, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the reverse logistics network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market; that is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at the centers and secondary markets opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various performance measures. As the conventional competing approach, the GA approach of Yun (2013) is used; it does not include any local search technique such as the IHCM used in the proposed HGA approach. As performance measures, CPU time, optimal solution, and optimal setting are used. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performances of the HGA and GA approaches. The MIP models for the two types of RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are a total number of generations of 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches. Based on the performance comparisons, the network representations by opening/closing decisions, and the convergence processes on the two types of RLNCC, the experimental results show that the HGA obtains significantly better optimal solutions than the GA, though the GA is slightly faster in terms of CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both types of RLNCC, since the former combines the GA search with an additional local search process, while the latter relies on the GA search alone. In future work, much larger RLNCC instances will be tested to assess the robustness of our approach.
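
As a rough sketch of the hybrid search loop described above (the cost function, chromosome length, and parameter values here are placeholders, not the authors' implementation), a binary-coded GA whose best individual is refined each generation by iterative hill climbing could look like this:

```python
# Minimal sketch of a hybrid GA: a binary-coded GA whose best individual is
# refined each generation by a simple iterative hill-climbing (bit-flip) step.
import random

N_BITS, POP, GENERATIONS = 20, 20, 200
CROSSOVER_RATE, MUTATION_RATE = 0.5, 0.1

def total_cost(chrom):
    """Placeholder for the MIP objective (transportation + fixed + handling cost)."""
    return sum(b * w for b, w in zip(chrom, range(1, N_BITS + 1)))  # dummy cost

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(N_BITS), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(chrom):
    return [1 - g if random.random() < MUTATION_RATE else g for g in chrom]

def hill_climb(chrom):
    """Local search: keep flipping single bits while the cost improves."""
    improved = True
    while improved:
        improved = False
        for i in range(N_BITS):
            trial = chrom[:]
            trial[i] = 1 - trial[i]
            if total_cost(trial) < total_cost(chrom):
                chrom, improved = trial, True
    return chrom

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    offspring = []
    for a, b in zip(population[::2], population[1::2]):
        if random.random() < CROSSOVER_RATE:
            a, b = two_point_crossover(a, b)
        offspring += [mutate(a), mutate(b)]
    # elitist selection over the enlarged sampling space (parents + offspring)
    population = sorted(population + offspring, key=total_cost)[:POP]
    population[0] = hill_climb(population[0])   # hybrid step: refine the best

print("best cost:", total_cost(population[0]))
```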

Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Seungyong Kim;Incheol Hwang;Dongsik Kim;Jungjae Shin;Seunggap Yong
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.199-205
    • /
    • 2024
  • Purpose: This study aims to improve the recognition rate of Auto People Counting (APC) so that the number of remaining evacuees in disaster-vulnerable facilities such as nursing homes can be accurately identified and provided to firefighting and other response agencies in the event of a disaster. Methods: A baseline model was established using CNN (Convolutional Neural Network) models to improve the algorithm for recognizing images of incoming and outgoing individuals captured by cameras installed in actual disaster-vulnerable facilities operating APC systems. Various algorithms were analyzed and the top seven candidates were selected, and transfer learning models were then used to select the algorithm with the best performance. Results: The experiments confirmed the precision and recall of the Densenet201 and Resnet152v2 models, which exhibited the best performance in terms of time and accuracy. Both models achieved 100% accuracy for all labels, with the Densenet201 model showing superior performance overall. Conclusion: The optimal algorithm applicable to APC among various artificial intelligence algorithms was selected. Further research on algorithm analysis and training is required to accurately identify incoming and outgoing individuals in disaster-vulnerable facilities under various disaster situations and emergencies in the future.
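
A minimal transfer-learning sketch along the lines described, using DenseNet201 pretrained on ImageNet in TensorFlow/Keras; the dataset path, image size, class count, and training settings are assumptions, not the study's setup:

```python
# Minimal transfer-learning sketch with DenseNet201 (TensorFlow/Keras), assuming a
# folder of labelled entry/exit images; directory name and sizes are placeholders.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. "entering" / "leaving"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset path; image_dataset_from_directory infers labels from folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "apc_images/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```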

A Study for Co-channel Interference Cancelation Algorithm with Channel Estimation for WBAN System Application (WBAN 환경에서 채널 추정 기반의 공용 채널 간섭 제거 기술)

  • Choi, Won-Seok;Kim, Jeong-Gon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.6C
    • /
    • pp.476-482
    • /
    • 2012
  • In this paper, we analyze and compare several co-channel interference mitigation algorithms for WBAN applications in the 2.4 GHz ISM frequency band. ML (Maximum Likelihood), OC (Optimal Combining), and MMSE (Minimum Mean Square Error) detection are considered as candidate interference cancellation techniques, in view of the trade-off between performance and implementation complexity. Based on the channel model of the IEEE 802.15.6 standard, simulation results show that ML and OC attain lower BER than MMSE when perfect channel estimation is assumed. However, ML and OC additionally require estimation of both the desired user's and the other users' channels; hence, besides BER performance, the implementation complexity and the sensitivity to channel estimation error must be considered, since WBAN systems require simple, small-sized equipment. In addition, the gap in detection BER between ML, OC, and MMSE narrows considerably under imperfect channel estimation, i.e., when a real channel estimation process is adopted. Therefore, when applying these techniques to a WBAN system, the trade-off between BER performance and implementation complexity should be weighed carefully in deciding the best co-channel interference cancellation scheme.
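
To make the MMSE option concrete, here is a toy linear MMSE combining sketch for suppressing one co-channel interferer; the two-antenna model, channel realizations, and noise level are made-up assumptions, not the paper's IEEE 802.15.6 simulation:

```python
# Sketch of linear MMSE combining to suppress a co-channel interferer, assuming
# a toy 2-antenna receiver model with BPSK symbols.
import numpy as np

rng = np.random.default_rng(0)
n_rx, noise_var = 2, 0.1

h_d = rng.normal(size=(n_rx, 1)) + 1j * rng.normal(size=(n_rx, 1))   # desired channel
h_i = rng.normal(size=(n_rx, 1)) + 1j * rng.normal(size=(n_rx, 1))   # interferer channel

s_d = np.sign(rng.normal())        # desired BPSK symbol
s_i = np.sign(rng.normal())        # interfering BPSK symbol
noise = np.sqrt(noise_var / 2) * (rng.normal(size=(n_rx, 1)) + 1j * rng.normal(size=(n_rx, 1)))
y = h_d * s_d + h_i * s_i + noise  # received vector

# MMSE weights: w = (H H^H + sigma^2 I)^{-1} h_d, with H = [h_d h_i]
H = np.hstack([h_d, h_i])
R = H @ H.conj().T + noise_var * np.eye(n_rx)
w = np.linalg.solve(R, h_d)

s_hat = (w.conj().T @ y).item()
print("transmitted:", s_d, " detected:", np.sign(s_hat.real))
```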

Calculation of Stability Number of Tetrapods Using Weights and Biases of ANN Model (인공신경망 모델의 가중치와 편의를 이용한 테트라포드의 안정수 계산 방법)

  • Lee, Jae Sung;Suh, Kyung-Duck
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.28 no.5
    • /
    • pp.277-283
    • /
    • 2016
  • Tetrapods are among the most widely used concrete armor units for rubble mound breakwaters. Calculation of the stability number of Tetrapods is necessary to determine their optimal weight. Many empirical formulas have been developed to calculate the stability number of Tetrapods, from the Hudson formula in the 1950s to the recent one developed by Suh and Kang. They were developed by regression analysis, determining the coefficients of an assumed formula from experimental data. Recently, machine learning methods have been introduced as large amounts of experimental data have become available, e.g., artificial neural network (ANN) models for rock armor. However, these methods have seldom been used, probably because they did not significantly improve accuracy compared with the empirical formulas and/or because engineers are not familiar with them. In this study, we propose an explicit method to calculate the stability number of Tetrapods using the weights and biases of an ANN model. This method can be used by an engineer with basic knowledge of matrix operations, without requiring knowledge of ANNs, and it is more accurate than previous empirical formulas.
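
As an illustration of how such an explicit weight-and-bias calculation looks (a hypothetical sketch: the layer sizes, tanh activation, and random matrices stand in for the paper's published coefficients and normalization), one forward pass through a trained single-hidden-layer network reduces to a few matrix operations:

```python
# Sketch: computing a predicted stability number from the weights and biases of a
# trained one-hidden-layer ANN. All matrices and the input scaling are placeholders.
import numpy as np

# Hypothetical trained parameters (hidden layer of 4 units, 5 normalized inputs).
W1 = np.random.randn(4, 5)   # input-to-hidden weights
b1 = np.random.randn(4)      # hidden biases
W2 = np.random.randn(1, 4)   # hidden-to-output weights
b2 = np.random.randn(1)      # output bias

def stability_number(x):
    """x: normalized input features (e.g. surf similarity, damage level, etc.)."""
    hidden = np.tanh(W1 @ x + b1)        # hidden-layer activation
    return (W2 @ hidden + b2)[0]         # linear output = predicted stability number

x = np.random.rand(5)                    # placeholder normalized design conditions
print("predicted stability number:", stability_number(x))
```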

Application of Multi-Dimensional Precipitation Models to the Sampling Error Problem (관측오차문제에 대한 다차원 강우모형의 적용)

  • Yu, Cheol-Sang
    • Journal of Korea Water Resources Association
    • /
    • v.30 no.5
    • /
    • pp.441-447
    • /
    • 1997
  • Rainfall observation using a rain gage network or satellites involves a sampling error that depends on the observation method and plan. For example, sampling with rain gages is continuous in time but discontinuous in space, which is precisely the source of the sampling error; sampling with satellites is the reverse case, continuous in space but discontinuous in time. The sampling error may be quantified using the temporal-spatial characteristics of rainfall and the sampling design. One recent work on this problem is that of North and Nakamoto (1989), who derived a formula for estimating the sampling error from the temporal-spatial rainfall spectrum and the design scheme. The formula enables us to design an optimal rain gage network or a satellite operation plan, given the statistical characteristics of rainfall. In this paper the formula is reviewed and applied to sampling error problems using several multi-dimensional precipitation models. The results show a limitation of the formulation: it cannot distinguish between models whose parameters reproduce similar second-order statistics of rainfall. This limitation could be overcome by developing a way to consider higher-order statistics and, eventually, the probability density function (PDF) of rainfall.
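
Schematically (this is only the general shape of such a spectral sampling-error estimate, not the exact expression of North and Nakamoto), the mean square sampling error is obtained by integrating the space-time rainfall spectrum against a filter determined by the sampling design:

```latex
% Mean square sampling error as a design-filtered integral of the rainfall spectrum
% (schematic form; S is the space-time spectrum, H the sampling-design filter).
\overline{\varepsilon^{2}} \;\approx\;
\int\!\!\int \left| H(\mathbf{k},\omega) \right|^{2}\,
S(\mathbf{k},\omega)\, d\mathbf{k}\, d\omega
```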

Direction-Embedded Branch Prediction based on the Analysis of Neural Network (신경망의 분석을 통한 방향 정보를 내포하는 분기 예측 기법)

  • Kwak Jong Wook;Kim Ju-Hwan;Jhon Chu Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.1
    • /
    • pp.9-26
    • /
    • 2005
  • In the pursuit of ever higher levels of performance, recent computer systems have made use of deep pipelines, dynamic scheduling, and multi-issue superscalar processor technologies. In this situation, branch prediction schemes are an essential part of modern microarchitectures, because the penalty for a branch misprediction increases as pipelines deepen and the number of instructions issued per cycle grows. In this paper, we propose a novel branch prediction scheme, direction-gshare (d-gshare), to improve prediction accuracy. First, we model a neural network over the components that can affect branch prediction accuracy and analyze the variation of their weights based on the neural network information. Then we add the component with a high weight value to the original gshare scheme. We simulate our branch prediction scheme using SimpleScalar, a powerful event-driven simulator, and analyze the simulation results. Our results show that, compared with bimodal, two-level adaptive, and gshare predictors, the direction-gshare predictor (d-gshare.3) outperforms them without additional hardware cost by up to 4.1% and by 1.5% on average for the default amount of embedded direction information, and by up to 11.8% and by 3.7% on average for the optimal one.
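
For reference, here is a textbook sketch of the baseline gshare mechanism that d-gshare extends (not the authors' modified predictor); the table size is an arbitrary choice:

```python
# Minimal sketch of a baseline gshare branch predictor: global history XOR PC bits
# index a table of 2-bit saturating counters.
HIST_BITS = 12
TABLE_SIZE = 1 << HIST_BITS

class Gshare:
    def __init__(self):
        self.history = 0                          # global branch history register
        self.counters = [2] * TABLE_SIZE          # 2-bit counters, start weakly taken

    def _index(self, pc):
        return ((pc >> 2) ^ self.history) & (TABLE_SIZE - 1)

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2   # True = predict taken

    def update(self, pc, taken):
        idx = self._index(pc)
        if taken:
            self.counters[idx] = min(3, self.counters[idx] + 1)
        else:
            self.counters[idx] = max(0, self.counters[idx] - 1)
        self.history = ((self.history << 1) | int(taken)) & (TABLE_SIZE - 1)

bp = Gshare()
print(bp.predict(0x400123), end=" -> ")
bp.update(0x400123, taken=True)
print(bp.predict(0x400123))
```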

Deep Learning Based Group Synchronization for Networked Immersive Interactions (네트워크 환경에서의 몰입형 상호작용을 위한 딥러닝 기반 그룹 동기화 기법)

  • Lee, Joong-Jae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.10
    • /
    • pp.373-380
    • /
    • 2022
  • This paper presents a deep learning based group synchronization method that supports networked immersive interactions between remote users. The goal of group synchronization is to enable all participants to interact synchronously with one another, increasing user presence. Most previous methods focus on NTP-based clock synchronization to enhance time accuracy, and moving average filters are used to control media playout time on the synchronization server. For example, the exponentially weighted moving average (EWMA) can track and estimate an accurate playout time if the changes in the input data are not significant; however, it needs more time to stabilize after changes caused by codec and system loads or fluctuations in network status. To tackle this problem, this work proposes Deep Group Synchronization (DeepGroupSync), a deep learning based group synchronization method that models important features of the data. The model consists of two Gated Recurrent Unit (GRU) layers and one fully-connected layer, and predicts an optimal playout time from the sequence of past playout delays. Experiments are conducted with an existing method that uses the EWMA and the proposed method that uses DeepGroupSync. The results show that the proposed method is more robust against unpredictable or rapid changes in network conditions than the existing method.
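
A minimal sketch of the kind of model described, two GRU layers plus a fully-connected output predicting the next playout time from a window of recent playout delays; the layer widths, window length, and dummy data are assumptions, not the paper's configuration:

```python
# Sketch: two stacked GRU layers and one dense output predicting the next playout
# time from a sequence of recent playout delays (all sizes are placeholders).
import numpy as np
import tensorflow as tf

WINDOW = 16          # length of the playout-delay history fed to the model

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, return_sequences=True, input_shape=(WINDOW, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),        # predicted optimal playout time
])
model.compile(optimizer="adam", loss="mse")

# Dummy training data: sequences of past delays -> next observed playout time.
X = np.random.rand(256, WINDOW, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

print(model.predict(X[:1]))
```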

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, the management of chlorine concentration in water treatment processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach compared to complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models like Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model included the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from observable data at the water treatment plant, which are influential on the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the model based on Random Forest had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models. The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
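
A minimal sketch of the Random-Forest setup described above; the feature list follows the abstract, while the CSV file, column names, and hyperparameters are placeholder assumptions:

```python
# Sketch: Random Forest regression of downstream residual chlorine from upstream
# water-quality observations. File name and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("plant_water_quality.csv")   # hypothetical observation log
features = ["upstream_residual_chlorine", "turbidity", "pH", "water_temperature",
            "conductivity", "raw_water_inflow", "alkalinity", "NH3"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["downstream_residual_chlorine"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```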

A study on simulation modeling of the underground space environment-focused on storage space for radioactive wastes (지하공간 환경예측 시뮬레이션 개발 연구-핵 폐기물 저장공간 중심으로)

  • 이창우
    • Tunnel and Underground Space
    • /
    • v.9 no.4
    • /
    • pp.306-314
    • /
    • 1999
  • In underground spaces, including nuclear waste repositories, prediction of air quantity, temperature/humidity, and pollutant concentration is of utmost importance for space construction and management in the normal state, as well as for determining countermeasures in emergencies such as underground fires. This study aims at developing an underground space environment model capable of accounting for the effects of autocompression in the natural ventilation head calculation, finding the optimal location and size of fans and regulators, predicting temperature and humidity by calculating the convective heat transfer coefficient and the sensible and latent heat transfer rates, and estimating pollutant levels throughout the network. The temperature/humidity prediction model was applied to an underground military storage space, and the relative differences in dry- and wet-bulb temperatures were 1.5~2.9% and 0.6~6.1%, respectively. The convection-based pollutant transport model was applied to two different vehicle tunnels. Coefficients of turbulent diffusion due to atmospheric turbulence were found to be 9.78 and 17.35 m²/s, but measurements of smoke and CO concentrations in a tunnel with high traffic density and operating ventilation equipment showed relative differences of 5.88% and 6.62% compared with estimates from the convection-based model. These findings indicate that convection is the governing mechanism for pollutant diffusion in most tunnel-type spaces.
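
As a toy illustration of convection-dominated pollutant transport along a tunnel (a one-dimensional advection-diffusion sketch with made-up air velocity, diffusion coefficient, and boundary conditions; not the paper's ventilation-network model):

```python
# Toy 1-D advection-diffusion of a pollutant along a tunnel: upwind advection,
# central-difference diffusion. Velocity, diffusivity, and initial plume are made up.
import numpy as np

L, N = 500.0, 250            # tunnel length [m], grid cells
dx = L / N
u = 2.0                      # air velocity [m/s]
D = 15.0                     # turbulent diffusion coefficient [m^2/s]
dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # time step limited for stability

C = np.zeros(N)              # pollutant concentration along the tunnel
C[:5] = 1.0                  # initial plume near the entrance

for _ in range(2000):
    adv = -u * (C - np.roll(C, 1)) / dx                          # upwind advection
    diff = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2  # diffusion
    C = C + dt * (adv + diff)
    C[0] = 0.0                                                   # clean air at the inlet

print("peak concentration and its location [m]:", C.max().round(3), C.argmax() * dx)
```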
