Title/Summary/Keyword: weighting matrices

An assessment of the applicability of multigroup cross sections generated with Monte Carlo method for fast reactor analysis

  • Lin, Ching-Sheng; Yang, Won Sik
    • Nuclear Engineering and Technology, v.52 no.12, pp.2733-2742, 2020
  • This paper presents an assessment of the applicability of multigroup cross sections generated with Monte Carlo tools to fast reactor analysis based on deterministic transport calculations. Thirty-three-group cross section sets were generated for simple one-dimensional (1-D) and two-dimensional (2-D) sodium-cooled fast reactor problems using the SERPENT code and applied to deterministic steady-state and depletion calculations. Relative to the reference continuous-energy SERPENT results, with the transport-corrected P0 scattering cross section the k-eff value was overestimated by 506 and 588 pcm for the 1-D and 2-D problems, respectively, since anisotropic scattering is important in fast reactors. When the scattering order was increased to P5, the 1-D and 2-D errors grew to 577 and 643 pcm, respectively. A sensitivity and uncertainty analysis with the PERSENT code indicated that these large k-eff errors cannot be attributed to the statistical uncertainties of the cross sections; they are likely due to the approximate anisotropic scattering matrices determined by scalar flux weighting. As an alternative, anisotropic scattering cross sections were generated with the MC2-3 code and merged with the SERPENT cross sections. The mixed cross section set consistently reduced the errors in k-eff, assembly powers, and nuclide densities. For example, in the 2-D calculation with P3 scattering order, the k-eff error was reduced from 634 pcm to -223 pcm, the maximum assembly power error from 2.8% to 0.8%, and the RMS assembly power error from 1.4% to 0.4%. The maximum error in the nuclide densities at the end of a 12-month depletion, which occurred in 237Np, was reduced from 3.4% to 1.5%, and the errors of the other nuclides decreased consistently as well, for example from 1.1% to 0.1% for 235U, from 2.2% to 0.7% for 238Pu, and from 1.6% to 0.2% for 241Pu. These results indicate that the scalar-flux-weighted anisotropic scattering cross sections of SERPENT may not be adequate for fast reactors, where anisotropic scattering is important. (A short sketch of the scalar-flux weighting in question follows this entry.)
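The scalar-flux weighting flagged above can be illustrated with a minimal sketch. In a consistent group collapse, the l-th scattering moment would be weighted by the l-th angular flux moment; weighting every moment by the scalar flux (the l = 0 moment) is the approximation in question. Everything below is an illustrative assumption: the function name, array shapes, and toy data are not SERPENT or MC2-3 interfaces.

    import numpy as np

    def collapse_scattering_moment(sigma_l, weight_flux, coarse_of):
        """Collapse a fine-group P_l scattering matrix sigma_l[g, g'] to a
        coarse group structure using the given flux weights.

        Scalar-flux weighting passes the l = 0 flux moment here for every l;
        a consistent collapse would pass the l-th angular flux moment instead.
        """
        F = len(weight_flux)
        G = int(coarse_of.max()) + 1
        # Aggregation matrix: P[c, g] = 1 if fine group g maps to coarse group c.
        P = np.zeros((G, F))
        P[coarse_of, np.arange(F)] = 1.0
        numerator = P @ (weight_flux[:, None] * sigma_l) @ P.T
        denominator = P @ weight_flux
        return numerator / denominator[:, None]

    # Toy 4-group -> 2-group collapse of a P1 moment with scalar-flux weights.
    rng = np.random.default_rng(0)
    sigma_1 = rng.random((4, 4))                  # fine-group P1 moment (toy values)
    phi_0 = np.array([1.0, 0.8, 0.5, 0.2])        # scalar flux (l = 0 moment)
    coarse_of = np.array([0, 0, 1, 1])            # fine-to-coarse group map
    print(collapse_scattering_moment(sigma_1, phi_0, coarse_of))

Because phi_0 is used as the weight for every moment order l, the higher moments inherit the spectral shape of the scalar flux, which is the likely source of the k-eff errors the paper reports.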

Optimal supervised LSA method using selective feature dimension reduction

  • Kim, Jung-Ho; Kim, Myung-Kyu; Cha, Myung-Hoon; In, Joo-Ho; Chae, Soo-Hoan
    • Science of Emotion and Sensibility, v.13 no.1, pp.47-60, 2010
  • Most classification research has relied on learning-based models such as kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), or on statistics-based methods such as the Bayesian classifier and neural network algorithms (NNA). However, these approaches face space and time limitations when classifying the vast number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which poorly captures the real meaning of words. Korean web page classification faces an additional difficulty: Korean words are often polysemous, carrying multiple meanings. For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment of large data sets and polysemous words. LSA uses SVD (Singular Value Decomposition), which factors the original term-document matrix into three matrices and reduces their dimension. This produces a new low-dimensional semantic space for representing vectors, which makes classification efficient and exposes the latent meaning of words and documents (or web pages). Although LSA helps classification, it has a drawback: SVD selects the dimensions that best represent the vectors, not the dimensions that best discriminate between them, which is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions, ones that both represent and discriminate vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement by creating and selecting features, reducing stopwords, and statistically weighting specific feature values. (A sketch of the selective dimension reduction idea follows this entry.)
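The selection idea can be sketched briefly. Plain LSA keeps the top-k singular dimensions, which best represent the data; the supervised variant instead ranks all SVD dimensions by how well they separate the classes and keeps the k best. The scoring rule below (a Fisher-style between-class to within-class variance ratio) and all names are illustrative assumptions, since the abstract does not give the paper's exact selection statistic.

    import numpy as np

    def lsa_spaces(term_doc, labels, k):
        """Compare plain LSA (top-k singular dimensions) with a supervised
        selection that keeps the k most class-discriminative dimensions."""
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        docs = (s[:, None] * Vt).T            # document coordinates, one row per doc

        plain = docs[:, :k]                   # plain LSA: best *representation*

        # Score each dimension by a between/within-class variance ratio.
        scores = np.empty(docs.shape[1])
        for d in range(docs.shape[1]):
            col = docs[:, d]
            between = within = 0.0
            for c in np.unique(labels):
                grp = col[labels == c]
                between += grp.size * (grp.mean() - col.mean()) ** 2
                within += ((grp - grp.mean()) ** 2).sum()
            scores[d] = between / (within + 1e-12)
        keep = np.argsort(scores)[::-1][:k]   # supervised: best *discrimination*
        return plain, docs[:, keep]

    # Toy term-document matrix: 4 terms (rows) x 6 documents (columns), two classes.
    X = np.array([[2, 3, 1, 0, 0, 0],
                  [1, 2, 2, 0, 1, 0],
                  [0, 0, 0, 3, 2, 2],
                  [0, 1, 0, 2, 2, 3]], dtype=float)
    y = np.array([0, 0, 0, 1, 1, 1])
    plain, supervised = lsa_spaces(X, y, k=2)

The two spaces can then be fed to any vector-space classifier (e.g., kNN); the paper's claim is that the discriminatively selected space yields better and more stable low-dimensional performance.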
