• Title/Summary/Keyword: statistical limits


Initialization by using truncated distributions in artificial neural network (절단된 분포를 이용한 인공신경망에서의 초기값 설정방법)

  • Kim, MinJong;Cho, Sungchul;Jeong, Hyerin;Lee, YungSeop;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.5 / pp.693-702 / 2019
  • Deep learning has gained popularity for classification and prediction tasks, and neural network layers become deeper as more data become available. Saturation, the phenomenon in which the gradient of an activation function approaches zero, can occur when weight values are too large, and increasing attention has been paid to this issue because it limits the ability of the weights to learn. To resolve this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) claimed that efficient neural network training is possible when signals flow well between layers, and proposed an initialization method that sets the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on truncated normal and truncated Cauchy distributions. While adapting the initialization method of Glorot and Bengio (2010), we determine where to truncate each distribution: the input and output variances are made equal by setting them equal to the variance of the truncated distribution. Truncation manipulates the distribution so that the initial weight values are neither too large nor too close to zero. To compare the performance of the proposed method with existing methods, we conducted experiments on MNIST and CIFAR-10 data using a DNN and a CNN. The proposed method outperformed existing methods in terms of accuracy.
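As an illustration of the kind of initializer the abstract describes, here is a minimal pure-Python sketch (not the authors' code) that draws Glorot-scaled weights from a normal distribution truncated at ±2σ by rejection sampling. Note that truncation shrinks the realized variance below σ², which is exactly why the paper tunes the truncation point to recover the target variance; this sketch does not apply that correction.

```python
import random

def glorot_truncated_normal(fan_in, fan_out, cut=2.0, rng=None):
    """Sample one weight from a normal with Glorot/Xavier variance
    2/(fan_in + fan_out), truncated at +/- cut standard deviations
    via rejection sampling."""
    rng = rng or random.Random()
    sigma = (2.0 / (fan_in + fan_out)) ** 0.5   # Glorot (2010) scale
    while True:
        w = rng.gauss(0.0, sigma)
        if abs(w) <= cut * sigma:               # keep only the truncated core
            return w

def init_layer(fan_in, fan_out, seed=0):
    """Build a (fan_out x fan_in) weight matrix with truncated-normal entries."""
    rng = random.Random(seed)
    return [[glorot_truncated_normal(fan_in, fan_out, rng=rng)
             for _ in range(fan_in)] for _ in range(fan_out)]
```

By construction every initial weight lies within two standard deviations of zero, so no weight starts large enough to push a sigmoid or tanh unit into its saturated region.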

Identification of Uncertainty in Fitting Rating Curve with Bayesian Regression (베이지안 회귀분석을 이용한 수위-유량 관계곡선의 불확실성 분석)

  • Kim, Sang-Ug;Lee, Kil-Seong
    • Journal of Korea Water Resources Association / v.41 no.9 / pp.943-958 / 2008
  • This study employs Bayesian regression analysis to fit discharge rating curves. The parameter estimates from the Bayesian regression were compared with those of the ordinary least squares method based on the t-distribution. In these comparisons, the mean values from the t-distribution and the Bayesian regression are not significantly different; however, the interval between the upper and lower limits is remarkably reduced with the Bayesian regression. From the viewpoint of uncertainty analysis, the Bayesian regression is therefore more attractive than the conventional t-distribution approach, because the amount of data at a site of interest is typically insufficient to estimate the rating curve parameters. The merits and demerits of the two estimation methods are analyzed through statistical simulation that accounts for heteroscedasticity. The Bayesian regression is also validated using real stage-discharge data observed at 5 gauges in the Anyangcheon basin. Because the true parameters at the 5 gauges are unknown, the quantitative accuracy of the Bayesian regression cannot be assessed directly; however, the results suggest that the uncertainty in the rating curves at the 5 gauges is reduced by the Bayesian regression.
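For context, the conventional baseline the study compares against can be sketched in a few lines: fitting the standard power-law rating curve Q = a(H − h0)^b by ordinary least squares on log-transformed data. The function name and the fixed cease-to-flow stage h0 are illustrative assumptions, not the paper's Bayesian procedure.

```python
import math

def fit_rating_curve(stages, discharges, h0=0.0):
    """Fit Q = a * (H - h0)^b by ordinary least squares on the
    log-linear form: log Q = log a + b * log(H - h0).
    h0 (the cease-to-flow stage) is assumed known and fixed."""
    xs = [math.log(h - h0) for h in stages]
    ys = [math.log(q) for q in discharges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))      # OLS slope
    a = math.exp(my - b * mx)                   # back-transform intercept
    return a, b
```

The Bayesian treatment in the paper replaces these point estimates with posterior distributions, which is what narrows the uncertainty bounds when data are scarce.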

An Analysis of a Reverse Mortgage using a Multiple Life Model (연생모형을 이용한 역모기지의 분석)

  • Baek, HyeYoun;Lee, SeonJu;Lee, Hangsuck
    • The Korean Journal of Applied Statistics / v.26 no.3 / pp.531-547 / 2013
  • Multiple life models are useful for multiple life insurance and multiple life annuities in which the payment times of benefits are contingent on the future lifetimes of at least two people. A reverse mortgage is an annuity whose monthly payments terminate at the death of the last survivor; however, actuaries have used a female life table to calculate the monthly payments of a reverse mortgage. This approach may overestimate the monthly payments. This paper suggests a last-survivor life table rather than a female life table to avoid this overestimation. Next, the paper derives the distribution of the future lifetime of the last survivor and calculates the expected lifetimes of the male, the female, and the last survivor. Principal limits and monthly payments are calculated using the male, female, and last-survivor life tables, respectively. Some numerical examples are discussed.
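The last-survivor idea can be sketched in a few lines. Under an independence assumption, the probability that at least one member of a (male, female) pair is alive after t years is the complement of both having died, and the curtate expected lifetime is the sum of the yearly survival probabilities. This is a generic actuarial identity, not the paper's full model or its life tables.

```python
def last_survivor_survival(p_male, p_female):
    """t-year survival probability of the last survivor of an independent
    (male, female) pair: P(at least one alive) = 1 - P(both dead)."""
    return p_male + p_female - p_male * p_female

def curtate_expectation(survival_probs):
    """Curtate expected lifetime: sum of t-year survival probabilities
    for t = 1, 2, ... (pass the sequence of survival probabilities)."""
    return sum(survival_probs)
```

Because the last-survivor survival probability always dominates either individual probability, annuity payments priced on a female table alone terminate too early on average, which is the overestimation the abstract points out.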

Pre-Filtering based Post-Load Shedding Method for Improving Spatial Queries Accuracy in GeoSensor Environment (GeoSensor 환경에서 공간 질의 정확도 향상을 위한 선-필터링을 이용한 후-부하제한 기법)

  • Kim, Ho;Baek, Sung-Ha;Lee, Dong-Wook;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society / v.12 no.1 / pp.18-27 / 2010
  • In the u-GIS environment, GeoSensor applications require that dynamic data captured from various sensors be fused with static 2D or 3D feature information. GeoSensors, the core of this environment, are distributed sporadically over a wide area and continuously produce data of arbitrary volume. As a result, the restricted memory of a DSMS can be exceeded. Many studies have actively addressed this problem; the three typical methods are random load shedding, semantic load shedding, and sampling. Random load shedding chooses and deletes tuples at random. Semantic load shedding prioritizes tuples and deletes those with the lowest priority first. Sampling uses statistical operations to compute a sampling rate and sheds load accordingly. However, these traditional methods do not achieve high accuracy because they ignore spatial characteristics. In this paper, a pre-filtering based post-load shedding method is proposed to improve the accuracy of spatial queries and to restrict load shedding in a DSMS. The method first limits unnecessary load growth in the stream queue with pre-filtering, and then performs post-load shedding that considers the data and spatial status to guarantee accurate results. The proposed method effectively reduces the number of load shedding operations and improves the accuracy of spatial queries.
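For context, the semantic load-shedding baseline mentioned above can be sketched as follows. This is a hedged illustration of the generic technique (keep the highest-priority tuples when the stream buffer exceeds capacity), not the paper's pre-filtering method; the tuple shape is an assumption.

```python
import heapq

def semantic_load_shed(buffer, capacity):
    """Semantic load shedding: when the stream buffer exceeds `capacity`,
    drop the lowest-priority tuples first (contrast with random shedding,
    which drops arbitrary tuples).  `buffer` holds (priority, payload)
    pairs; higher priority values are kept."""
    if len(buffer) <= capacity:
        return list(buffer)                      # under capacity: shed nothing
    return heapq.nlargest(capacity, buffer, key=lambda t: t[0])
```

The paper's contribution is to shrink `buffer` before this step ever triggers (pre-filtering) and to make the shedding decision spatially aware (post-load shedding), so fewer spatially relevant tuples are lost.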

Exposure Assessment of Solvents and Toluene Diisocyanates among Polyurethane Waterproofing Workers in the Construction industry (건설현장 우레탄 방수작업자의 휘발성 유기화합물 및 톨루엔 디이소시아네이트 노출평가)

  • Park, Hyunhee;Hwang, Eunsong;Ro, Jiwon;Jang, Kwangmyung;Park, Seunghyun;Yoon, Chungsik
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.30 no.2 / pp.134-152 / 2020
  • Objectives: The objective of this study was to evaluate exposure to volatile organic compounds (VOCs) and toluene diisocyanates (TDIs) among polyurethane waterproofing workers in the construction industry. Methods: Task-based personal air sampling was carried out at seven construction sites using organic vapor monitors for VOCs (n=88) and glass fiber filters coated with 1-(2-pyridyl)piperazine (1-2PP) for TDIs (n=81). The concentrations of VOCs and TDIs are reported for four work types (paint mixing, primer roller painting, urethane resin spread painting, and painter assistant) at five worksites (rooftop, ground parking lot, piloti, bathroom, and swimming pool). The two TDI sampling methods (filter vs. impinger) were evaluated in parallel to compare the concentrations. Results: By work type, the geometric mean (GM) of the VOC Exposure Index (EI) was highest for primer roller painting (1.4), followed by urethane resin spread painting (0.85), paint mixing (0.53), and painter assistant (0.35). By worksite, the GM of the VOC EI was highest for the bathroom (1.4), followed by the swimming pool (0.85), piloti (0.89), ground parking lot (0.82), and rooftop (0.57). The GM concentrations of 2,4-TDI and 2,6-TDI were 0.052 ppb and 0.432 ppb, respectively. There was no statistical difference in TDI concentrations among worksites. The concentration of 2,6-TDI was ten times higher than that of 2,4-TDI, and the 2,6-TDI concentration measured by the impinger method was 5.7 times higher than that by the filter method. Conclusions: In this study, 38.6% of the VOC samples exceeded the occupational exposure limits and 19.8% of the 2,6-TDI samples exceeded 1 ppb among polyurethane waterproofing workers. The most important determinants increasing VOC and TDI concentrations were indoor environments and primer painting work.
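Two of the statistics used above are easy to sketch. The additive mixture Exposure Index is the sum of each measured concentration divided by its occupational exposure limit (EI > 1 means the combined solvent exposure exceeds the limit), and the geometric mean is the exponential of the mean log value. The numbers in the test are made up for illustration, not the study's measurements.

```python
import math

def exposure_index(concentrations, limits):
    """Additive mixture Exposure Index: EI = sum(C_i / OEL_i).
    An EI above 1 means the combined exposure exceeds the limit."""
    return sum(c / oel for c, oel in zip(concentrations, limits))

def geometric_mean(values):
    """Geometric mean: exp of the arithmetic mean of the logs.
    Appropriate for the right-skewed data typical of air sampling."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

The geometric mean is the standard summary for occupational hygiene data precisely because sample concentrations tend to be approximately log-normal.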

AN IN-VITRO WEAR STUDY OF INDIRECT COMPOSITE RESINS AGAINST HUMAN ENAMEL (법랑질에 의한 수종의 간접복합레진의 마모에 관한 연구)

  • Yi, Hyun-Jeong;Jeon, Young-Chan;Jeong, Chang-Mo;Jeong, Hee-Chan
    • The Journal of Korean Academy of Prosthodontics / v.45 no.5 / pp.611-620 / 2007
  • Statement of problem: Second-generation indirect composite resins have improved flexural strength, compressive strength, resistance to hydrolytic degradation, and wear resistance compared to first-generation indirect composite resins, but problems such as hydrolysis and low wear resistance remain. Some manufacturers claim that the wear resistance of their materials has been improved, but little independent research has been published on the wear properties of these materials, and the properties cited in advertising are largely derived from in-house or contracted testing. Purpose: This study evaluated the wear of indirect composite resins (SR Adoro, Sinfony, Tescera ATL) and a gold alloy against human enamel. Material and methods: Extracted human incisors and premolars were sectioned into 2×2×2 mm cubes, embedded in clear resin, and formed into conical antagonists to fit the jig of a pin-on-disk tribometer. A total of 20 antagonists were stored in distilled water. Five disk samples, 24 mm in diameter and 1.5 mm thick, were made for each of the three indirect composite resin groups and the gold alloy group, and polished with #2,000 SiC paper on an auto-polishing machine. The disk specimens were tested for wear against the enamel antagonists. Wear tests were conducted in distilled water using a pin-on-disk tribometer (sliding speed 200 rpm, contact load 24 N, sliding distance 160 m). Enamel wear was determined by weighing each antagonist before and after testing, and the mass loss was converted to volume using the average density. The wear tracks were analyzed by scanning electron microscopy and surface profilometry to elucidate the wear mechanisms. Statistical analysis of the enamel wear volume and of the wear track depth and width of the disk specimens was performed with one-way ANOVA, and the means were compared for significant differences with Scheffe's test. Results: 1. Enamel wear was greatest against the gold alloy, but there were no statistically significant differences among the groups (P>.05). 2. Among the indirect composite resin groups, the shallowest wear track was produced by Sinfony, followed by Tescera ATL and SR Adoro (P<.05). The gold alloy track was shallower than Sinfony's, but the difference between them was not statistically significant (P>.05). 3. The wear track of SR Adoro was wider than those of the other groups (P<.05), and there were no statistically significant differences among the remaining groups (P>.05). 4. SEM analysis revealed that Sinfony and the gold alloy showed the fewest wear scars after testing, Tescera ATL more, and SR Adoro the most. Conclusion: Within the limits of this study, Sinfony and the gold alloy showed the lowest wear rates and similar wear patterns.
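The mass-to-volume conversion described in the methods can be sketched in one line. The density value in the test is an illustrative assumption (human enamel is roughly 3 g·cm⁻³, i.e. about 0.003 g·mm⁻³), not a figure taken from the paper.

```python
def wear_volume_mm3(mass_before_g, mass_after_g, density_g_per_mm3):
    """Convert an antagonist's mass loss (grams) to wear volume (mm^3)
    by dividing by the material's average density."""
    return (mass_before_g - mass_after_g) / density_g_per_mm3
```

Reporting wear as volume rather than mass lets materials of different densities (composite resin vs. gold alloy vs. enamel) be compared on the same scale.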

An Empirical Analysis of Research Trends in Korean Security Science: Focusing on Papers Contributed to the Korean Security Journal, 1997-2007 (한국 경호경비학의 연구경향 분석: "한국경호경비학회지" 기고논문(1997-2007)을 중심으로)

  • Ahn, Hwang-Kwon;Kim, Sang-Jin
    • Korean Security Journal / no.15 / pp.199-219 / 2008
  • This study analyzed the contents of the 225 papers published in the Korean Security Journal over the decade from 1997 to 2007, classifying them qualitatively along three dimensions: first, characteristics of the researchers (sex, academic degree, regional distribution, position, and number of participants per paper); second, research trends by field of study (whether research funding was received, and changes in research subjects); and third, research trends by research method (method by year, method by subject, and statistical analysis by year). The analysis shows some shortcomings in Korean security research compared with other fields, but also encouraging trends, such as participation from other disciplines, evenly distributed regional participation, and attempts at a variety of analysis methods. On the other hand, imbalances in sex and position, an overemphasis on single-author research, weak research funding support, concentration on a few study fields, and research trends disconnected from industry are deepening. Methodologically, research remains limited to generic forms such as document studies and descriptive case studies, so the derivation of key results is not rigorous, and proposals tend toward duplicative, generalized prescriptions.


Optimal design of a nonparametric Shewhart-Lepage control chart (비모수적 Shewhart-Lepage 관리도의 최적 설계)

  • Lee, Sungmin;Lee, Jaeheon
    • Journal of the Korean Data and Information Science Society / v.28 no.2 / pp.339-348 / 2017
  • One of the major issues in statistical process control for variables data is monitoring both the mean and the standard deviation. The traditional approach is to use two separate control charts simultaneously. However, there has been work on developing a single chart with a single plotting statistic for joint monitoring, and such charts are claimed to be simpler and more appealing than the traditional approach from a practical point of view. When using control charts for variables data, estimating the in-control parameters and checking the normality assumption are very important steps. The nonparametric Shewhart-Lepage chart, proposed by Mukherjee and Chakraborti (2012), is an attractive option because it uses only a single control statistic and requires knowledge of neither the in-control parameters nor the underlying continuous distribution. In this paper, we introduce the Shewhart-Lepage chart and propose a design procedure to find the optimal diagnosis limits when the location and scale parameters change simultaneously. We also compare the efficiency of the proposed method with that of Mukherjee and Chakraborti (2012).
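The single plotting statistic behind the chart is the Lepage statistic: a squared standardized Wilcoxon rank-sum term (sensitive to location shifts) plus a squared standardized Ansari-Bradley term (sensitive to scale shifts). A self-contained sketch, using the textbook null means and variances (the Ansari-Bradley moments below assume no ties); this illustrates the statistic only, not the paper's chart design or its diagnosis limits.

```python
def midranks(values):
    """1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                    # average of ranks i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def lepage(x, y):
    """Lepage statistic: squared standardized Wilcoxon rank sum (location)
    plus squared standardized Ansari-Bradley statistic (scale)."""
    n1, n2 = len(x), len(y)
    N = n1 + n2
    r = midranks(list(x) + list(y))
    w = sum(r[:n1])                              # Wilcoxon rank sum of sample x
    ew, vw = n1 * (N + 1) / 2, n1 * n2 * (N + 1) / 12
    a = [min(ri, N + 1 - ri) for ri in r]        # Ansari-Bradley scores
    c = sum(a[:n1])
    if N % 2 == 0:                               # null moments, even N
        ec = n1 * (N + 2) / 4
        vc = n1 * n2 * (N + 2) * (N - 2) / (48 * (N - 1))
    else:                                        # null moments, odd N
        ec = n1 * (N + 1) ** 2 / (4 * N)
        vc = n1 * n2 * (N + 1) * (3 + N ** 2) / (48 * N ** 2)
    return (w - ew) ** 2 / vw + (c - ec) ** 2 / vc
```

Because both terms are squared standardized statistics, a large Lepage value signals that the location, the scale, or both have shifted, which is what makes a single plotting statistic sufficient for joint monitoring.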

A Comparative Analysis on The Efficiency of Various Clinical Methods for Diagnosis of Tuberculosis (결핵 진단을 위한 검사 방법간의 효율성에 관한 비교 분석)

  • 최석철;정천환;성희경;김태운;이원재
    • Biomedical Science Letters / v.5 no.2 / pp.191-200 / 1999
  • In recent years, the continuously increasing number of tuberculosis (TB) cases, driven by the emergence of multidrug-resistant strains and by AIDS, has become a significant global health problem. More rapid and reliable diagnosis of TB is therefore one of the most urgent needs in efforts to eradicate the disease. The present study was designed to compare and assess the diagnostic value and efficiency of the conventional methods (X-ray, AFB stain, and culture) and PCR for pulmonary TB in 171 cases. Chest X-ray findings and clinical features revealed that 39 (22.8%) of the 171 sputum specimens were pulmonary TB cases. Statistics were computed against the definitive diagnosis. The overall sensitivity, specificity, efficiency, false positive incidence, and false negative incidence were, respectively: 69.2%, 87.1%, 83.0%, 12.9%, and 30.8% for X-ray; 79.5%, 95.5%, 91.8%, 4.6%, and 20.5% for AFB stain; 56.4%, 99.2%, 89.5%, 0.8%, and 43.6% for culture; and 82.1%, 96.2%, 93.0%, 3.8%, and 17.9% for PCR. PCR had the highest sensitivity and efficiency as well as the lowest false negative incidence; culture had the highest specificity and the lowest false positive incidence. These results imply that the PCR assay is a fast, sensitive, and efficient method for diagnosing pulmonary TB. However, combined use of the conventional methods with thorough quality control may offer more opportunities for detecting Mycobacterium tuberculosis and diagnosing TB, although they have some limits.
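All of the measures reported above derive from a 2×2 table of test results against the definitive diagnosis. A minimal sketch (the counts in the test are made up for illustration, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic measures from a 2x2 confusion table:
    tp/fp/fn/tn = true positive, false positive, false negative,
    true negative counts against the definitive diagnosis."""
    sensitivity = tp / (tp + fn)                 # P(test+ | disease)
    specificity = tn / (tn + fp)                 # P(test- | no disease)
    efficiency = (tp + tn) / (tp + fp + fn + tn) # overall accuracy
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "efficiency": efficiency,
        "false_positive": 1 - specificity,       # false positive incidence
        "false_negative": 1 - sensitivity,       # false negative incidence
    }
```

Note that sensitivity and the false negative incidence always sum to 1, as do specificity and the false positive incidence, which matches the paired figures quoted in the abstract.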


The Transport Characteristics of 238U, 232Th, 226Ra, and 40K in the Production Cycle of Phosphate Rock

  • Jung, Yoonhee;Lim, Jong-Myoung;Ji, Young-Yong;Chung, Kun Ho;Kang, Mun Ja
    • Journal of Radiation Protection and Research / v.42 no.1 / pp.33-41 / 2017
  • Background: Phosphate rock and its by-products are widely used in various industries to produce phosphoric acid, gypsum, gypsum board, and fertilizer. Owing to the high level of natural radioactive nuclides in phosphate rock (e.g., 238U and 226Ra), the radiological safety of workers who handle it should be systematically managed. In this study, 238U, 232Th, 226Ra, and 40K levels were measured to analyze the transport characteristics of these radionuclides in the production cycle of phosphate rock. Materials and Methods: Energy-dispersive X-ray fluorescence and gamma spectrometry were used to determine the activity of 238U, 232Th, 226Ra, and 40K. To evaluate the extent of secular disequilibrium, the analytical results were compared using statistical methods. Finally, the distribution of radioactivity across the stages of the phosphate rock production cycle was evaluated. Results and Discussion: The 226Ra/238U activity ratio in phosphate rock was close to 1.0, while the ratios found in gypsum and fertilizer differed greatly from 1.0, reflecting disequilibrium after the chemical reaction process. The nuclide with the highest activity level in the production cycle was 40K, with median activities of 8.972 Bq·g⁻¹ and 1.496 Bq·g⁻¹, respectively. For the 238U series, the activity of 238U and 226Ra was greatest in phosphate rock, and the distribution of activity values clearly showed the transport characteristics of the radionuclides, both for the by-products of the decay sequences and for the final products. Conclusion: Although the activity of 40K in K-related fertilizer was relatively high, it made a relatively low contribution to the total radiological effect. However, the activity levels of 226Ra and 238U in phosphate rock were relatively high, near the upper end of the acceptable limits. Therefore, it is necessary to systematically manage the radiological safety of workers engaged in phosphate rock processing.
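The secular-equilibrium check described in the results can be sketched directly: a 226Ra/238U activity ratio near 1.0 indicates equilibrium, while large departures indicate chemical fractionation during processing. The tolerance below is an illustrative assumption, not a threshold from the paper.

```python
def activity_ratio(ra226_bq_per_g, u238_bq_per_g):
    """226Ra/238U activity ratio.  A value near 1.0 indicates secular
    equilibrium; large departures indicate chemical fractionation."""
    return ra226_bq_per_g / u238_bq_per_g

def in_equilibrium(ra226_bq_per_g, u238_bq_per_g, tol=0.1):
    """Flag secular equilibrium within an illustrative tolerance `tol`."""
    return abs(activity_ratio(ra226_bq_per_g, u238_bq_per_g) - 1.0) <= tol
```

Applied to the abstract's findings, the raw phosphate rock would pass this check (ratio near 1.0) while the gypsum and fertilizer streams would fail it, tracing where the chemical process separates radium from uranium.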