• Title/Summary/Keyword: Solution algorithm

A joint modeling of longitudinal zero-inflated count data and time to event data (경시적 영과잉 가산자료와 생존자료의 결합모형)

  • Kim, Donguk;Chun, Jihun
    • The Korean Journal of Applied Statistics / v.29 no.7 / pp.1459-1473 / 2016
  • In longitudinal studies, longitudinal data and survival data are often collected simultaneously over the follow-up period. If the missingness in the longitudinal data is non-ignorable because it is correlated with the survival outcome, analyzing the longitudinal data alone, without accounting for the relation between the two data sources, yields biased estimates of the covariate effects. Joint models of longitudinal and survival data have been studied as a solution to this problem: by modeling the survival process responsible for the missingness, unbiased results can be obtained. In this paper, a joint model of longitudinal zero-inflated count data and survival data is studied, replacing the usual longitudinal component with zero-inflated counts. A hurdle model is used for the longitudinal zero-inflated count data and a proportional hazards model for the survival data, and the two sub-models are linked under the assumption that their random effects follow a multivariate normal distribution. Maximum likelihood estimates of the parameters are obtained with the EM algorithm, and standard errors are calculated using the profile likelihood method. In simulations, the joint model showed better performance than the separate model in terms of bias and coverage probability.
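For readers unfamiliar with the hurdle model used for the longitudinal sub-model, a minimal Python sketch of its log-likelihood is given below; it is illustrative only (not the authors' EM implementation), and the variable names and toy data are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def hurdle_loglik(y, pi, lam):
    """Log-likelihood of a Poisson hurdle model for zero-inflated counts.

    y   : observed counts
    pi  : P(Y > 0) per observation (e.g., a logistic linear predictor plus a random effect)
    lam : rate of the zero-truncated Poisson part (e.g., a log-linear predictor plus a random effect)
    """
    y, pi, lam = map(np.asarray, (y, pi, lam))
    zero = y == 0
    ll_zero = np.log1p(-pi[zero]).sum()                  # log P(Y = 0) = log(1 - pi)
    yp, lp, pp = y[~zero], lam[~zero], pi[~zero]
    ll_pos = (np.log(pp) + yp * np.log(lp) - lp          # log pi + log zero-truncated Poisson density
              - gammaln(yp + 1) - np.log1p(-np.exp(-lp))).sum()
    return ll_zero + ll_pos

# toy usage with made-up data and constant predictors
rng = np.random.default_rng(0)
y = rng.poisson(1.2, size=50) * rng.binomial(1, 0.7, size=50)
print(hurdle_loglik(y, pi=np.full(50, 0.7), lam=np.full(50, 1.2)))
```

In the joint model described in the abstract, such a term would be combined with the proportional hazards likelihood and integrated over the shared multivariate normal random effects within each EM iteration.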

Study on GNSS Constellation Combination to Improve the Current and Future Multi-GNSS Navigation Performance

  • Seok, Hyojeong;Yoon, Donghwan;Lim, Cheol Soon;Park, Byungwoon;Seo, Seung-Woo;Park, Jun-Pyo
    • Journal of Positioning, Navigation, and Timing / v.4 no.2 / pp.43-55 / 2015
  • In satellite navigation positioning, the shielding of satellite signals depends on the environment of the region in which the user is located, and the navigation performance is determined accordingly. The accuracy of the user position varies with the dilution of precision (DOP), an index of the geometric quality of the visible satellites, and if the minimum number of visible satellites is not secured, position determination is impossible. Currently, the GLObal NAvigation Satellite System (GLONASS) of Russia is used to supplement the navigation performance of the Global Positioning System (GPS) in regions where GPS alone cannot be used. In addition, the European Satellite Navigation System (Galileo) of the European Union, the BeiDou system of China, the Quasi-Zenith Satellite System (QZSS) of Japan, and the Indian Regional Navigation Satellite System (IRNSS) of India are all working toward full operational capability (FOC). The number of satellites available for navigation will therefore increase rapidly, particularly over the Asian region, and the improvement in navigation performance from integrated navigation is expected to be much larger there than in other regions. To secure a stable and prompt position solution, GPS-GLONASS integrated navigation is generally performed at present; however, as the available satellite navigation systems diversify, finding the minimum satellite constellation combination that yields the best navigation performance has become an issue. For this purpose, it is necessary to examine and predict the navigation performance obtainable by adding a third satellite navigation system to GPS-GLONASS. In this study, the integrated navigation performance of various satellite constellation combinations was analyzed as of 2014, and the performance in 2020 was predicted based on each country's FOC plan. For the prediction, the orbital elements and nominal almanac data of the satellite navigation systems observable from the Korean Peninsula were organized, a minimum elevation angle accounting for expected signal shielding was set, and the performance was predicted in terms of DOP using Matlab. For integrated navigation, a time offset determination algorithm must also be considered in order to estimate the clock offsets between navigation systems; two methods were analyzed, estimation based on the navigation message and estimation performed directly by the user at the receiver. The simulation results are expected to serve as an index for establishing the minimum satellite constellation that obtains the best navigation performance.
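To make the DOP index concrete, here is a minimal Python sketch (illustrative, not the authors' Matlab simulation; the satellite coordinates are made up) that forms the geometry matrix from the unit line-of-sight vectors of the visible satellites and evaluates GDOP and PDOP:

```python
import numpy as np

def gdop_pdop(sat_pos, user_pos):
    """GDOP and PDOP from satellite/user positions in a common Cartesian frame.

    For HDOP/VDOP the geometry matrix would first be rotated into a local
    east-north-up frame; with several constellations, one extra clock column
    per additional system is appended when inter-system time offsets are estimated.
    """
    los = sat_pos - user_pos
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([-unit, np.ones((len(unit), 1))])   # rows: [-ux, -uy, -uz, 1]
    Q = np.linalg.inv(G.T @ G)                        # cofactor matrix of the position/clock solution
    return np.sqrt(np.trace(Q)), np.sqrt(np.trace(Q[:3, :3]))

# made-up example: four satellites roughly at GNSS altitude, user at the origin
sats = np.array([[2.0e7, 0.0, 1.0e7], [-1.5e7, 1.5e7, 1.0e7],
                 [0.0, -2.0e7, 1.0e7], [1.0e7, 1.0e7, 1.8e7]])
print(gdop_pdop(sats, np.zeros(3)))
```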

Stress-Strain Responses of Concrete Confined by FRP Composites (FRP 합성재료에 의하여 구속된 콘크리트의 응력-변형률 응답 예측)

  • Cho, Soon-Ho
    • Journal of the Korea Concrete Institute / v.19 no.6 / pp.803-810 / 2007
  • An analytical method capable of predicting, in a rational manner, the various stress-strain responses of axially loaded concrete confined with FRP (fiber reinforced polymer) composites is presented. Its underlying idea is that the volumetric expansion due to progressive microcracking in mechanically loaded concrete is an important measure of the extent of damage in the material microstructure, and can be used to estimate the load-carrying capacity of the concrete by considering the corresponding accumulated damage. Accordingly, an elastic modulus expressed as a function of area strain and concrete porosity, an energy-balance equation that links the dilating concrete to the confining device interactively, the varying confining pressure, and an incremental calculation algorithm are included in the solution procedure. The proposed method evaluates lateral strains consecutively from the associated mechanical model and the energy-balance equation, rather than from an empirically derived equation for Poisson's ratio or dilation rate as in other analytical methods. Several existing analytical methods that can predict the overall response were also examined and discussed, particularly with respect to how they treat volumetric expansion. The responses predicted by the proposed model and by Samaan's bilinear equation model both correlated with observed results to a reasonable degree; however, the latter is judged incapable of predicting the lateral-strain response correctly because it incorporates only the initial Poisson's ratio and the final converged dilation rate. Further, because it rests on fundamental principles of mechanics, the proposed method appears to offer greater benefits in other applications.

Computationally Efficient Ion-Splitting Method for Monte Carlo Ion Implantation Simulation for the Analysis of ULSI CMOS Characteristics (ULSI급 CMOS 소자 특성 분석을 위한 몬테 카를로 이온 주입 공정 시뮬레이션시의 효율적인 가상 이온 발생법)

  • Son, Myeong-Sik;Lee, Jin-Gu
    • Journal of the Institute of Electronics Engineers of Korea SD / v.38 no.11 / pp.771-780 / 2001
  • Process and device simulation tools are indispensable for accurately analyzing the electrical characteristics of ULSI CMOS devices, as well as for developing and manufacturing them. A 3D Monte Carlo (MC) simulation is inefficient for large-area applications because of the lack of simulation particles. This paper reports a new, efficient simulation strategy for 3D MC ion implantation into large areas using the 3D MC code TRICSI (TRansport Ions into Crystal Silicon). The strategy is based on our proposed split-trajectory method and ion-splitting method (an ion-shadowing approach) for 3D large-area applications, which increase the number of simulation ions without sacrificing the simulation accuracy for defects and implanted ions. In addition, we have developed a cell-based 3D interpolation algorithm that feeds the 3D MC simulation results into a device simulator and keeps the solution of the continuum diffusion equations for diffusion and rapid thermal annealing (RTA) after ion implantation from diverging. The proposed simulation strategy is found to be computationally very efficient: the number of simulation ions increases by more than a factor of 10, while the simulation time less than doubles compared with the split-trajectory method alone.
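The general idea behind particle (ion) splitting as a variance reduction device can be sketched as follows; this is a generic Python illustration with made-up thresholds, not the specific ion-shadowing scheme implemented in TRICSI.

```python
from dataclasses import dataclass, replace

@dataclass
class Ion:
    depth: float     # current depth in the target
    energy: float    # remaining kinetic energy
    weight: float    # statistical weight carried into dose/damage tallies

def maybe_split(ion, split_depth=100.0, n_split=4):
    """Generic particle splitting: once an ion crosses a depth of interest, replace it
    by n_split copies carrying weight/n_split each, so the rarely reached deep region
    is sampled by more trajectories while the weighted tallies stay unbiased."""
    if ion.depth < split_depth:
        return [ion]
    return [replace(ion, weight=ion.weight / n_split) for _ in range(n_split)]

# toy usage with made-up numbers
ions = [Ion(depth=120.0, energy=30.0, weight=1.0), Ion(depth=40.0, energy=55.0, weight=1.0)]
ions = [child for ion in ions for child in maybe_split(ion)]
print(len(ions), sum(i.weight for i in ions))   # 5 trajectories, total weight still 2.0
```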

Development of a Model for Calculating Road Congestion Toll with Sensitivity Analysis (민감도 분석을 이용한 도로 혼잡통행료 산정 모형 개발)

  • Kim, Byung-Kwan;Lim, Yong-Taek;Lim, Kang-Won
    • Journal of Korean Society of Transportation / v.22 no.5 / pp.139-149 / 2004
  • As the expansion of road capacity has become impractical in many urban areas, congestion pricing has been widely considered in recent years as an effective means of reducing urban traffic congestion. The principal reason is that congestion pricing can move the user equilibrium (UE) flow pattern toward the system optimum (SO) pattern in a road network. In the network equilibrium context, link tolls set according to the marginal cost pricing principle can drive a UE flow pattern to the SO pattern, so pricing offers an efficient tool for moving the network toward system-optimal traffic conditions. This paper formulates a continuous network design problem (CNDP) under network equilibrium in order to find the optimal congestion toll that maximizes net economic benefit (NEB). The model is a bi-level program with a continuous variable (the congestion toll): the upper-level problem maximizes the NEB under elastic demand, while the lower level describes the route choice of road users. The bi-level CNDP is intrinsically nonlinear and non-convex, and hence difficult to solve, so we suggest a heuristic solution algorithm that uses derivative information of link flows with respect to the design parameter, i.e., the congestion toll. Two example networks are used to test the proposed model.
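The marginal cost pricing principle mentioned above can be illustrated with a standard BPR link cost function; the Python sketch below is a textbook relation, not the paper's bi-level CNDP or its sensitivity-based algorithm, and the link parameters are made up.

```python
def bpr_time(v, t0, c, alpha=0.15, beta=4.0):
    """BPR link travel time t(v) = t0 * (1 + alpha * (v / c) ** beta)."""
    return t0 * (1.0 + alpha * (v / c) ** beta)

def marginal_cost_toll(v, t0, c, alpha=0.15, beta=4.0):
    """First-best congestion toll v * dt/dv: the delay a marginal driver
    imposes on all other users of the link (in the same units as t0)."""
    dt_dv = t0 * alpha * beta * v ** (beta - 1.0) / c ** beta
    return v * dt_dv

# made-up link: free-flow time 10 min, capacity 1800 veh/h, flow 2000 veh/h
v, t0, c = 2000.0, 10.0, 1800.0
print(bpr_time(v, t0, c), marginal_cost_toll(v, t0, c))
```

Charging this toll on every link is what aligns the user-equilibrium flows with the system-optimal flows, which is the property the paper builds on when searching for the NEB-maximizing toll.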

Assessment of Rainfall-Sediment Yield-Runoff Prediction Uncertainty Using a Multi-objective Optimization Method (다중최적화기법을 이용한 강우-유사-유출 예측 불확실성 평가)

  • Lee, Gi-Ha;Yu, Wan-Sik;Jung, Kwan-Sue;Cho, Bok-Hwan
    • Journal of Korea Water Resources Association / v.43 no.12 / pp.1011-1027 / 2010
  • In hydrologic modeling, prediction uncertainty generally stems from various uncertainty sources associated with model structure, data, and parameters, etc. This study aims to assess the parameter uncertainty effect on hydrologic prediction results. For this objective, a distributed rainfall-sediment yield-runoff model, which consists of rainfall-runoff module for simulation of surface and subsurface flows and sediment yield module based on unit stream power theory, was applied to the mesoscale mountainous area (Cheoncheon catchment; 289.9 $km^2$). For parameter uncertainty evaluation, the model was calibrated by a multi-objective optimization algorithm (MOSCEM) with two different objective functions (RMSE and HMLE) and Pareto optimal solutions of each case were then estimated. In Case I, the rainfall-runoff module was calibrated to investigate the effect of parameter uncertainty on hydrograph reproduction whereas in Case II, sediment yield module was calibrated to show the propagation of parameter uncertainty into sedigraph estimation. Additionally, in Case III, all parameters of both modules were simultaneously calibrated in order to take account of prediction uncertainty in rainfall-sediment yield-runoff modeling. The results showed that hydrograph prediction uncertainty of Case I was observed over the low-flow periods while the sedigraph of high-flow periods was sensitive to uncertainty of the sediment yield module parameters in Case II. In Case III, prediction uncertainty ranges of both hydrograph and sedigraph were larger than the other cases. Furthermore, prediction uncertainty in terms of spatial distribution of erosion and deposition drastically varied with the applied model parameters for all cases.
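As a concrete illustration of the multi-objective calibration idea (not the MOSCEM algorithm itself), the Python sketch below defines an RMSE-type objective and a simple filter that keeps the Pareto-optimal (non-dominated) parameter sets among candidate calibrations; all numbers are made up.

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error, used here as one of the calibration objectives."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def pareto_front(points):
    """Indices of non-dominated points when every objective is minimized.

    A point is dominated if some other point is no worse in all objectives and
    strictly better in at least one; the remaining points form the Pareto front."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            front.append(i)
    return front

# made-up (flow RMSE, sediment RMSE) pairs for five candidate parameter sets
print(pareto_front([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [2.5, 2.5]]))  # -> [0, 1, 2]
```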

A Study on PIXE Spectrum Analysis for the Determination of Elemental Contents (원소별 함량결정을 위한 PIXE 스펙트럼 분석에 관한 연구)

  • Jong-Seok OH;;Hae-ILL Bak
    • Nuclear Engineering and Technology / v.22 no.2 / pp.101-107 / 1990
  • The PIXE (Proton Induced X-ray Emission) method is applied to the quantitative analysis of trace elements in tap water, red wine, urine, and old black powder samples. Sample irradiations are performed with a 1.202 MeV proton beam from the SNU 1.5-MV Tandem Van de Graaff accelerator, and the X-ray spectra are measured with a Si(Li) spectrometer. To increase the sensitivity of the analysis, the tap water is preconcentrated by an evaporation method. As an internal standard, Ni powder is mixed with the black powder sample and an yttrium solution is added to the other samples. The PIXE spectra are analyzed using the AXIL (Analytical X-ray Analysis by Iterative Least-squares) computer code, in which the least-squares routine is based on the Marquardt algorithm. Elements such as Mg, Al, Si, Ti, Fe, and Zn are detected at sub-ppm levels in the tap water sample. In the red wine sample, prepared without preconcentration, the element Ti is detected in an amount of about 3 ppm. In conclusion, the PIXE method proves appropriate for the analysis of liquid samples by relative measurement against an internal standard, and is expected to improve further with the use of evaluated X-ray production cross-sections and the development of sample preparation techniques.
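The iterative least-squares fitting step (the Marquardt algorithm used inside AXIL) can be illustrated with a much-simplified Python sketch that fits a single hypothetical Gaussian peak on a linear background to synthetic data; it is not the AXIL code, and the peak parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit   # defaults to the Levenberg-Marquardt algorithm

def peak(x, area, centre, sigma, bg0, bg1):
    """One Gaussian X-ray peak on a linear background (a stand-in for the
    multi-peak model that AXIL fits to a full PIXE spectrum)."""
    gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - centre) / sigma) ** 2)
    return gauss + bg0 + bg1 * x

# synthetic spectrum around a made-up 6.4 keV peak
x = np.linspace(5.5, 7.5, 200)
rng = np.random.default_rng(1)
counts = peak(x, 500.0, 6.4, 0.08, 20.0, -1.0) + rng.normal(0.0, 3.0, x.size)

popt, pcov = curve_fit(peak, x, counts, p0=[400.0, 6.3, 0.1, 15.0, 0.0])
print("fitted peak area:", popt[0], "+/-", np.sqrt(pcov[0, 0]))
```

In the relative-measurement scheme described above, fitted peak areas would then be ratioed against the internal-standard peak to obtain elemental contents.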

Development of Improved Clustering Harmony Search and its Application to Various Optimization Problems (개선 클러스터링 화음탐색법 개발 및 다양한 최적화문제에 적용)

  • Choi, Jiho;Jung, Donghwi;Kim, Joong Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.630-637 / 2018
  • Harmony search (HS) is a recently developed metaheuristic optimization algorithm. HS is inspired by the process of musical improvisation and repeatedly searches for the optimal solution using three operations: random selection, memory recall (harmony memory consideration), and pitch adjustment. HS has been applied by many researchers in various fields. The increasing complexity of real-world optimization problems has created enormous challenges for existing techniques, and improved optimization algorithms, including improved versions of HS, are required. We propose an improved clustering harmony search (ICHS) that uses a clustering technique to group the solutions in harmony memory by their objective function values. The proposed ICHS performs a modified harmony memory consideration in which decision variables of solutions in a high-ranked cluster have a higher probability of being selected than those in a low-ranked cluster. The ICHS is demonstrated on various optimization problems, including mathematical benchmark functions and water distribution system pipe design problems. The results show that the proposed ICHS outperforms other improved versions of HS.
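For context, a compact Python sketch of the three basic HS operations (random selection, harmony memory consideration, and pitch adjustment) is given below; it implements plain harmony search, not the proposed ICHS, whose clustering-based memory consideration is the paper's contribution.

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    """Basic harmony search minimizing obj over box bounds.

    hms  : harmony memory size
    hmcr : harmony memory considering rate
    par  : pitch adjusting rate
    bw   : pitch-adjustment bandwidth (fraction of each variable's range)
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                       # harmony memory consideration
                value = memory[rng.randrange(hms)][j]
                if rng.random() < par:                    # pitch adjustment
                    value += (rng.random() * 2 - 1) * bw * (hi - lo)
            else:                                         # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        score = obj(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:                         # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# usage on a simple benchmark (sphere function in three dimensions)
print(harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3))
```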

Development of Ideal Model Based Optimization Procedure with Heuristic Knowledge (정위적 방사선 수술에서의 이상표적모델과 경험적 지식을 활용한 수술계획 최적화 방법 개발)

  • 오승종;송주영;최경식;김문찬;이태규;서태석
    • Progress in Medical Physics / v.15 no.2 / pp.84-93 / 2004
  • Stereotactic radiosurgery (SRS) is a technique that delivers a high dose to a target lesion and a low dose to critical organs through only one or a few irradiations. Many mathematical optimization methods have been proposed for this purpose, but they have some limitations: long calculation times and difficulty in finding a unique solution because of the variety of tumor shapes. In this study, many clinical target shapes were examined to find typical patterns of tumor shape, from which simple ideal geometric shapes, such as spheres, cylinders, cones, or their combinations, were assumed to approximate real tumor shapes. Using arrangements of multiple isocenters, the optimum variables, such as isocenter positions and collimator sizes, were determined, and a database was formed from these results. The optimization procedure consists of the following steps: any tumor shape is first matched to an ideal model through a geometry comparison algorithm; the optimum variables for that ideal geometry are then taken from the predetermined database; and finally the optimum parameters are adjusted using the real tumor shape. Although applying the database to other patients did not give results superior to case-by-case optimization, it is acceptable as a plan starting point.

An Equality-Based Model for Real-Time Application of A Dynamic Traffic Assignment Model (동적통행배정모형의 실시간 적용을 위한 변동등식의 응용)

  • Shin, Seong-Il;Ran, Bin;Choi, Dae-Soon;Baik, Nam-Tcheol
    • Journal of Korean Society of Transportation / v.20 no.3 / pp.129-147 / 2002
  • This paper presents a variational equality formulation by providing a new dynamic route choice condition for a link-based dynamic traffic assignment model. The concepts of used paths, used links, and used departure times are employed to derive the new link-based dynamic route choice condition. The route choice condition is formulated as a time-dependent variational equality problem, and necessary and sufficient conditions are provided to prove the equivalence of the variational equality model. A solution algorithm is proposed based on a physical network approach and a diagonalization technique. A computational study on an asymmetric network shows that the ideal dynamic user-optimal route condition is satisfied as the length of each time interval is shortened. The I-394 corridor study shows that computational speed improves by more than 93% compared with the conventional variational inequality approach, and larger gains in computational performance can be expected as the network size grows. The paper concludes that, owing to its fast computational performance, variational equality could be a promising approach for the real-time application of a dynamic traffic assignment model.