• Title/Summary/Keyword: linear standard model

432 search results

PERFORMANCE OF THE AUTOREGRESSIVE METHOD IN LONG-TERM PREDICTION OF SUNSPOT NUMBER

  • Chae, Jongchul;Kim, Yeon Han
    • Journal of The Korean Astronomical Society / v.50 no.2 / pp.21-27 / 2017
  • The autoregressive method provides a univariate procedure to predict the future sunspot number (SSN) based on the past record. The strength of this method lies in the possibility that, from past data, it yields the future SSN as a function of time. On the other hand, its major limitation comes from the intrinsic complexity of solar magnetic activity, which may deviate from the linear stationary process assumption underlying the autoregressive model. By analyzing the residual errors produced by the method, we have reached the following conclusions: (1) the optimal duration of past data for the forecast is found to be 8.5 years; (2) the standard error increases with the prediction horizon, and the errors are mostly systematic ones resulting from the incompleteness of the autoregressive model; (3) the predicted value tends to be underestimated in the rising phase of activity and overestimated in the declining phase; (4) the model prediction of a new solar cycle is fairly good when the cycle is similar to the previous one, but poor when the new cycle differs greatly from the previous one; (5) a reasonably good prediction of a new cycle can be made using the AR model 1.5 years after the start of the cycle. In addition, we predict that the next cycle (Solar Cycle 25) will reach its peak in 2024 at an activity level similar to that of the current cycle.
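
The univariate AR procedure this abstract describes can be illustrated with a minimal, hypothetical sketch (not the authors' code): AR(p) coefficients are fitted by ordinary least squares on lagged values and then iterated forward for a multi-step forecast. The function names and the synthetic cycle-like series are assumptions for illustration only.

```python
import numpy as np

def fit_ar(series, order):
    """Fit AR(order) coefficients by ordinary least squares on lagged values."""
    n = len(series)
    # Row t holds [y_{t-order}, ..., y_{t-1}]; the target is y_t.
    X = np.column_stack([series[i:n - order + i] for i in range(order)])
    y = series[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

def forecast(series, coefs, steps):
    """Iterate the fitted AR model forward to produce a multi-step forecast."""
    hist = list(series)
    order = len(coefs)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coefs, hist[-order:]))
        hist.append(nxt)
        out.append(nxt)
    return out

# Hypothetical cycle-like series standing in for a smoothed SSN record.
t = np.arange(120)
cycle = 60.0 + 50.0 * np.sin(2 * np.pi * t / 11.0)
future = forecast(cycle, fit_ar(cycle, order=9), steps=12)
```

For a truly linear stationary process this recovers the generating recurrence exactly; the systematic errors the paper reports arise precisely where real solar activity departs from that assumption.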

Prediction of the Volume of Solid Radioactive Wastes to be Generated from Korean Next Generation Reactor

  • Cheong, Jae-Hak;Lee, Kun-Jai;Maeng, Sung-Jun;Song, Myung-Jae;Park, Kyu-Wan
    • Nuclear Engineering and Technology / v.29 no.3 / pp.218-228 / 1997
  • Correlations between the amount of DAW (Dry Active Waste) generated from present Korean PWRs and their operating parameters were analyzed. As a result of multi-variable linear regression, a model predicting the volume of DAW from the number of shutdowns ($f_{FS}$) and total personnel exposure ($P_{\varepsilon}$) was derived. Considering a one-standard-error bound, the model could successfully simulate about 85% of the real data. In order to predict the amount of DAW to be generated from a KNGR, another model was derived by taking into account the additional volume reduction by the supercompaction system. In addition, the volume of WAW (Wet Active Waste) to be generated from the KNGR (Korean Next Generation Reactor) was calculated by considering conceptual design data and the effect of replacing the radwaste evaporator with selective ion exchangers. Finally, the total volume of SRW (Solid Radioactive Waste) to be generated from the KNGR was predicted by inserting design goal values of $f_{FS}$ and $P_{\varepsilon}$ into the model. The result showed that the expected amount of SRW to be generated from the KNGR would be in the range of 33~44 $m^3\,y^{-1}$. This value would meet the operational target of the KNGR proposed by KEPCO, that is, 50 $m^3\,y^{-1}$.
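
A minimal sketch of the kind of two-variable regression the abstract describes, with invented numbers (the paper's operating data are not reproduced here): the DAW volume is regressed on the number of shutdowns $f_{FS}$ and total personnel exposure $P_{\varepsilon}$, and the fraction of observations within one standard error of the fit is checked.

```python
import numpy as np

# Hypothetical operating records: (number of shutdowns, total personnel
# exposure, annual DAW volume in m^3); values are illustrative only.
records = np.array([
    [3, 1.2, 210.0],
    [5, 1.8, 265.0],
    [2, 0.9, 180.0],
    [6, 2.1, 300.0],
    [4, 1.5, 240.0],
])
f_fs, p_e, volume = records[:, 0], records[:, 1], records[:, 2]

# Design matrix with an intercept column: V = b0 + b1*f_FS + b2*P_e
X = np.column_stack([np.ones_like(f_fs), f_fs, p_e])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)

residuals = volume - X @ beta
std_error = np.sqrt(residuals @ residuals / (len(volume) - X.shape[1]))

# Fraction of observations within one standard error of the fitted model,
# mirroring the "one standard error bound" check in the abstract.
within = np.mean(np.abs(residuals) <= std_error)
```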


The Choice of an Optimal Growth Function Considering Environmental Factors and Production Style (생산방식과 환경요인들을 고려한 최적성장함수의 선택에 관한 연구)

  • Choi, Jong Du
    • Environmental and Resource Economics Review / v.13 no.4 / pp.717-734 / 2004
  • This paper examined statistical goodness-of-fit tests for biological growth models in bioeconomic analysis. Growth functions have usually been estimated for fish species; however, few studies have estimated growth equations for bivalve species. Thus, this paper studied common functional forms for fitting growth equations for cham scallops, considering environmental factors and production styles. The following functional forms were considered: linear, log-reciprocal, double log, polynomial, and linear with interactions. The results of fitting these various functional forms to real data were compared and evaluated using standard statistical goodness-of-fit tests. The results indicate that the log-reciprocal function is statistically the best fit to the real data. Therefore, the log-reciprocal function was chosen as the best function describing cham scallop biological growth, and hence might be useful for economic evaluation (i.e., determining the optimal harvesting time).
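
The comparison of functional forms can be sketched as follows, with hypothetical shell-height data (not the paper's measurements): a linear form and a log-reciprocal form, ln(height) = a + b/age, are each fitted by least squares and compared by R² in the original units.

```python
import numpy as np

# Hypothetical scallop measurements (age in years, shell height in mm);
# values are illustrative only, showing a saturating growth pattern.
age = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
height = np.array([25.0, 48.0, 62.0, 71.0, 77.0, 81.0, 84.0, 86.0])

def r_squared(y, y_hat):
    """Coefficient of determination in the original (untransformed) units."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear form: height = a + b * age
b_lin, a_lin = np.polyfit(age, height, 1)
r2_lin = r_squared(height, a_lin + b_lin * age)

# Log-reciprocal form: ln(height) = a + b / age
b_lr, a_lr = np.polyfit(1.0 / age, np.log(height), 1)
r2_lr = r_squared(height, np.exp(a_lr + b_lr / age))
```

With saturating growth data of this shape, the log-reciprocal fit dominates the straight line, which is the kind of comparison the goodness-of-fit tests in the paper formalize.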


Case Study on Influential Factors of Nonlinear Response History Analysis - Focused on 1989 Loma Prieta Earthquake - (비선형 응답이력해석의 영향인자에 대한 사례연구 - 1989 Loma Prieta 지진 계측기록을 중심으로 -)

  • Liu, Qihang;Lee, Jin-Sun
    • Journal of the Korean Geotechnical Society / v.33 no.12 / pp.45-58 / 2017
  • As many seismic codes for various facilities are changed into performance-based design codes, demand for reliable nonlinear response-history analysis (RHA) arises. However, equivalent linear analysis has been used as the standard approach in the field of site response analysis since the 1970s, so the reliability of nonlinear RHA must be established before it can be adopted in place of equivalent linear analysis. In this paper, the reliability of nonlinear RHA is reviewed for a layered soil profile using the 1989 Loma Prieta earthquake records. For this purpose, an appropriate way of selecting nonlinear soil models and the effect of the base boundary condition in 3D analysis are evaluated. As a result, there are no significant differences between equivalent linear and nonlinear RHA. In the case of 3D analysis, an absorbing boundary condition should be applied at the base to prevent rocking motion of the whole model.

A study on detailing gusset plate and bracing members in concentrically braced frame structures

  • Hassan, M.S.;Salawdeh, S.;Hunt, A.;Broderick, B.M.;Goggins, J.
    • Advances in Computational Design / v.3 no.3 / pp.233-267 / 2018
  • Conventional seismic design of concentrically braced frame (CBF) structures suggests that the gusset plate connecting a steel brace to beams and/or columns should be designed as non-dissipative in earthquakes, while the steel brace members should be designed as dissipative elements. These design intentions lead to thicker and larger gusset plates on one hand, and a potentially under-rated contribution of gusset plates in design on the other. In contrast, research has shown that compact, thinner gusset plates designed in accordance with the elliptical clearance method, rather than the conventional standard linear clearance method, can enhance system ductility and energy dissipation capacity in concentrically braced steel frames. In order to assess the two design methods, six cyclic push-over tests on full-scale models of concentrically braced steel frame structures were conducted. Furthermore, a 3D finite element (FE) shell model, incorporating state-of-the-art tools and techniques in numerical simulation, was developed that successfully replicates the response of gusset plate and bracing members under fully reversed cyclic axial loading. Direct measurements from strain gauges applied to the physical models were used primarily to validate the FE models, while comparisons of hysteresis load-displacement loops from the physical and numerical models were used to highlight the overall performance of the FE models. The study shows that both design methods attain structural response as per the design intentions; however, the elliptical clearance method is superior to the standard linear method in that it improves the detailing of the gusset plates, enhances resisting capacity, and improves the deformability of a CBF structure. Considerations were proposed for improving guidelines for detailing gusset plates and bracing members in CBF structures.

A Study on the Modeling and Propagation to Evaluate Uncertainties in Measurement Results (측정결과의 불확도산정을 위한 모델링과 불확도 전파에 관한 연구)

  • 김종상;조남호
    • Journal of the Korea Society of Computer and Information / v.8 no.4 / pp.165-175 / 2003
  • The concept of measurement uncertainty has been recognized for many years since the "Guide to the Expression of Uncertainty in Measurement" (GUM) was published by ISO in 1993. This study first proposes a mathematical model for evaluating uncertainty that considers the dispersion of samples, because the mathematical model of a measurement is important for evaluating uncertainty and must contain every quantity which contributes significantly to the uncertainty of the measurement result. Secondly, the standard uncertainty of the result of a measurement, namely the combined standard uncertainty, is evaluated using the law of propagation of uncertainty, known as the GUM method. In the GUM method, a measurand is usually approximated by a linear function of its variables by transforming its input quantities, and the central limit theorem is applied to the input quantities. However, the mathematical model of a measurement is not always a linear function, and the distribution of an input or output quantity is not necessarily normal. In such cases the GUM method may not be suitable for evaluating a measurement uncertainty. Therefore, this study proposes a new method, and its algorithm, which uses Monte Carlo simulation to evaluate a measurement uncertainty for both linear and nonlinear functions.
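
The contrast between the GUM linear propagation and the Monte Carlo alternative can be sketched with a simple hypothetical measurement model Y = X1·X2 (the paper's own model is not reproduced here); the estimates and standard uncertainties below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: Y = X1 * X2, with independent normal input quantities
# characterized by estimates x_i and standard uncertainties u_i.
x1, u1 = 10.0, 0.2
x2, u2 = 5.0, 0.1

# GUM law of propagation: linearize Y around the estimates and combine the
# standard uncertainties via the sensitivity coefficients dY/dX1 = x2,
# dY/dX2 = x1.
u_gum = np.hypot(x2 * u1, x1 * u2)

# Monte Carlo alternative: sample the inputs, evaluate the model directly,
# and take the standard deviation of the simulated output distribution.
n = 200_000
samples = rng.normal(x1, u1, n) * rng.normal(x2, u2, n)
u_mc = samples.std(ddof=1)
```

For this mildly nonlinear model the two estimates nearly coincide; the Monte Carlo route becomes the safer choice exactly when the linearization or the normality of the output distribution breaks down, which is the case the paper targets.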


Design of Particle Swarm Optimization-based Polynomial Neural Networks (입자 군집 최적화 알고리즘 기반 다항식 신경회로망의 설계)

  • Park, Ho-Sung;Kim, Ki-Sang;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.398-406 / 2011
  • In this paper, we introduce a new architecture of PSO-based Polynomial Neural Networks (PNN) and discuss its comprehensive design methodology. The conventional PNN is based on an extended Group Method of Data Handling (GMDH) method and uses a fixed polynomial order (viz. linear, quadratic, or modified quadratic) as well as a fixed number of node inputs (selected in advance by the designer) at the polynomial neurons located in each layer, through a growth process of the network. Moreover, it does not guarantee that the conventional PNN generated through learning results in the optimal network architecture. The PSO-based PNN results in a structurally optimized network and comes with a higher level of flexibility than the one encountered in the conventional PNN. The PSO-based design procedure, applied at each layer of the PNN, leads to the selection of preferred PNs with specific local characteristics (such as the number of input variables, the input variables themselves, and the order of the polynomial) available within the PNN. In the sequel, two general optimization mechanisms of the PSO-based PNN are explored: structural optimization is realized via PSO, whereas for parametric optimization we proceed with standard least-squares-based learning. To evaluate the performance of the PSO-based PNN, the model is experimented with using gas furnace process data and pH neutralization process data. For the characteristic analysis of the given nonlinear data and the construction of an efficient model, the data are partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed PSO-based PNN is a model with higher accuracy and superior predictive capability compared with other intelligent models presented previously.
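
The parametric (least-squares) step for a single quadratic polynomial neuron can be sketched as below; the two-input quadratic basis and the synthetic target are assumptions for illustration, not the paper's data or full PSO procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-input data; the target is itself a quadratic polynomial, so
# least squares should recover the coefficients exactly (noise-free case).
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + 0.8 * x1 * x2 + 0.2 * x1**2

# Quadratic polynomial-neuron basis: [1, x1, x2, x1*x2, x1^2, x2^2].
Phi = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Parametric optimization: ordinary least squares on the basis expansion.
coefs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ coefs
mse = np.mean((y - y_hat) ** 2)
```

In the full method, PSO would additionally choose which inputs feed each neuron and which polynomial order to use; only the inner coefficient fit is shown here.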

Validation and selection of GCPs obtained from ERS SAR and the SRTM DEM: Application to SPOT DEM Construction

  • Jung, Hyung-Sup;Hong, Sang-Hoon;Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.24 no.5 / pp.483-496 / 2008
  • Qualified ground control points (GCPs) are required to construct a digital elevation model (DEM) from a pushbroom stereo pair. An inverse geolocation algorithm for extracting GCPs from ERS SAR data and the SRTM DEM was recently developed. However, not all GCPs established by this method are accurate enough for direct application to the geometric correction of pushbroom images such as SPOT, IRS, etc., and thus a method for selecting and removing inaccurate points from the sets of GCPs is needed. In this study, we propose a method for evaluating GCP accuracy and winnowing sets of GCPs through orientation modeling of pushbroom images, and validate the performance of this method using a SPOT stereo pair of Daejon City. It has been found that the statistical distribution of GCP positional errors is approximately Gaussian without bias, and that the residual errors estimated by orientation modeling have a linear relationship with the positional errors. Inaccurate GCPs have large positional errors and can be iteratively eliminated by thresholding the residual errors. Forty-one GCPs were initially extracted for the test, with mean positional errors of 25.6 m, 2.5 m and -6.1 m in the X-, Y- and Z-directions, respectively, and standard deviations of 62.4 m, 37.6 m and 15.0 m. Twenty-one GCPs were eliminated by the proposed method, reducing the standard deviations of the positional errors of the 20 final GCPs to 13.9 m, 8.5 m and 7.5 m in the X-, Y- and Z-directions, respectively. Orientation modeling of the SPOT stereo pair was performed using the 20 GCPs, and the model was checked against 15 map-based points. The root mean square errors (RMSEs) of the model were 10.4 m, 7.1 m and 12.1 m in the X-, Y- and Z-directions, respectively. A SPOT DEM with a 20 m ground resolution was successfully constructed using an automatic matching procedure.
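
The iterative elimination by residual thresholding can be sketched as follows. This is a simplified stand-in with invented error values, using a robust MAD-based scale estimate rather than the paper's orientation-model residuals; the `winnow` function and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented GCP positional errors (m): mostly Gaussian, plus a few gross
# outliers standing in for inaccurate inverse-geolocation matches.
errors = rng.normal(0.0, 10.0, 41)
errors[:5] += rng.choice([-1.0, 1.0], 5) * rng.uniform(100.0, 200.0, 5)

def winnow(residuals, k=3.0, max_iter=10):
    """Iteratively drop points whose residual exceeds k robust std devs."""
    keep = np.ones(len(residuals), dtype=bool)
    for _ in range(max_iter):
        r = residuals[keep]
        med = np.median(r)
        sigma = 1.4826 * np.median(np.abs(r - med))  # MAD-based scale
        bad = keep & (np.abs(residuals - med) > k * sigma)
        if not bad.any():
            break
        keep &= ~bad
    return keep

keep = winnow(errors)
```

Each pass re-estimates the scale from the surviving points and flags anything beyond the threshold, so the gross outliers are removed first and the retained set's standard deviation shrinks, mirroring the 41-to-20 winnowing reported above.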

Ground Base Laser Torque Applied on LEO Satellites of Various Geometries

  • Khalifa, N.S.
    • International Journal of Aeronautical and Space Sciences / v.13 no.4 / pp.484-490 / 2012
  • This paper investigates the feasibility of using a medium-power ground-based laser to produce a torque on LEO satellites of various shapes. The laser intensity delivered to a satellite is calculated using a simple model of laser propagation in which standard atmospheric conditions and a linear atmospheric interaction mechanism are assumed. The laser force is formulated in a geocentric equatorial system in which the Earth is an oblate spheroid. The torque is formulated for cylindrical satellites, spherical satellites, and satellites of complex shape. The torque algorithm is implemented for some sun-synchronous low Earth orbit cubesats. Based on the satellites' perigee heights, the results demonstrate that the laser torque acting on a cubesat has a maximum value on the order of $10^{-9}$, which is comparable with that of solar radiation, and a minimum value on the order of $10^{-10}$, which is comparable with that of the gravity gradient. Moreover, the results clarify the dependency of the laser torque on the orbital eccentricity: as the orbit becomes more circular, it experiences less torque. We can therefore conclude that the ground-based laser torque makes a significant contribution for low Earth orbit cubesats. It can be adjusted to obtain the required control torque, and it can be used as an active attitude control system for cubesats.

Novel Multi-user Conjunctive Keyword Search Against Keyword Guessing Attacks Under Simple Assumptions

  • Zhao, Zhiyuan;Wang, Jianhua
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3699-3719 / 2017
  • Conjunctive keyword search encryption is an important technique for protecting sensitive personal health records that are outsourced to cloud servers. It has been extensively employed in cloud storage, which is a convenient storage option that saves bandwidth and economizes computing resources. However, the process of searching outsourced data may facilitate the leakage of sensitive personal information, so an efficient data search approach with high security is critical, and the multi-user search function is essential for personal health records (PHRs). To solve these problems, this paper proposes a novel multi-user conjunctive keyword search scheme (mNCKS) that requires no secure channel and resists keyword guessing attacks on personal health records, referred to as a secure channel-free mNCKS (SCF-mNCKS). The security of this scheme is demonstrated under the Decisional Bilinear Diffie-Hellman (DBDH) and Decision Linear (D-Linear) assumptions in the standard model. Comparisons demonstrate the security advantages of the SCF-mNCKS scheme and show that it offers more functionality than other schemes of comparable efficiency.