• Title/Summary/Keyword: Statistical Methodology

An optimal policy for an infinite dam with exponential inputs of water

  • Kim, Myung-Hwa;Baek, Jee-Seon;Choi, Seung-Kyoung;Lee, Eui-Yong
    • Journal of the Korean Data and Information Science Society, v.22 no.6, pp.1089-1096, 2011
  • We consider an infinite dam whose inputs form a compound Poisson process and adopt a $P^M_{\lambda}$-policy to control the level of water, where water is released at rate M whenever the level exceeds the threshold ${\lambda}$. We derive stationary properties of the water level when the amount of each input independently follows an exponential distribution. After assigning several management costs to the dam, we obtain the long-run average cost per unit time and show that there exist unique values of the release rate M and the threshold ${\lambda}$ that minimize it. Numerical results are illustrated using MATLAB.
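The $P^M_{\lambda}$-policy can be illustrated with a small discrete-time simulation. This is only a sketch, not the paper's model: all parameter values are hypothetical, and the assumption that release continues until the dam is empty is one common variant of the policy, not necessarily the one analyzed above.

```python
import numpy as np

def simulate_dam(M=2.0, lam=5.0, arrival_rate=1.0, mean_input=1.5,
                 steps=200_000, dt=0.05, seed=0):
    """Discrete-time sketch of a P^M_lambda-policy dam.

    Inputs arrive as a compound Poisson process with rate `arrival_rate`;
    each input amount is exponential with mean `mean_input`.  Release at
    rate M starts once the level exceeds the threshold `lam` and, in this
    sketch, continues until the dam is empty.
    """
    rng = np.random.default_rng(seed)
    level, releasing = 0.0, False
    levels = np.empty(steps)
    for t in range(steps):
        if rng.random() < arrival_rate * dt:      # Poisson arrival this slot
            level += rng.exponential(mean_input)  # exponential input amount
        if level > lam:
            releasing = True
        if releasing:
            level = max(level - M * dt, 0.0)
            if level == 0.0:
                releasing = False                 # switch the release off
        levels[t] = level
    return levels

levels = simulate_dam()
print("long-run mean level:", round(float(levels.mean()), 2))
```

A long-run average cost could then be attached by charging, say, a holding cost per unit level and a switching cost per release cycle, and scanning over (M, ${\lambda}$) for the minimizing pair.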

A Case Study on Risk Levels of Shoulder Postures Associated with Work-related Musculoskeletal Disorders at Automobile Manufacturing Industry

  • Park, Dong Hyun;Hur, Kuk Kang
    • Journal of the Korean Society of Safety, v.28 no.1, pp.95-101, 2013
  • This study developed a basis for a quantitative index of working postures associated with WMSDs (work-related musculoskeletal disorders) that overcomes practical restrictions encountered when applying typical checklists for WMSDs evaluation. The baseline data were obtained from an automobile manufacturing company (a total of 603 jobs were observed). Specifically, data on shoulder postures were analyzed to provide a method with better and more objective job relevance than typical methods such as OWAS, RULA, and REBA. The main statistical tools were clustering and logistic regression. The main results can be summarized as follows. 1) The relationships between working postures and WMSDs symptoms at the shoulder were statistically significant based on the logistic regression results. 2) Based on the clustering analysis, three statistically significant WMSDs risk levels at the shoulder were produced for both flexion and abduction: shoulder flexion: low risk (< 37.7°), medium risk (37.7°~70.0°), high risk (> 70.0°); shoulder abduction: low risk (< 26.5°), medium risk (26.5°~56.8°), high risk (> 56.8°). 3) The sensitivities of the risk levels for shoulder flexion and abduction were 64.0% and 20.6%, respectively, while the specificities were 99.1% and 99.3%, respectively. The results show that the shoulder posture data in this study provide a good basis for job evaluation of WMSDs at the shoulder. This evaluation methodology differs from the methods usually used in WMSDs studies in that it is based on direct job relevance from real working situations. Further evaluation of other body parts as well as the shoulder would lend additional stability and reliability to WMSDs evaluation.
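The cut-point idea in result 2) — clustering observed joint angles into low/medium/high groups and reading thresholds off the cluster boundaries — can be sketched on synthetic data. The angles below are simulated, not the study's 603-job data set, and a plain 1-D k-means stands in for the clustering actually used:

```python
import numpy as np

def kmeans_1d(x, k=3, iters=100, seed=0):
    """Plain 1-D k-means; returns sorted cluster centers and labels."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
        centers.sort()
    return centers, labels

rng = np.random.default_rng(1)
# synthetic shoulder-flexion angles from three latent exposure groups
angles = np.concatenate([rng.normal(20, 8, 300),
                         rng.normal(55, 8, 200),
                         rng.normal(90, 10, 100)])
centers, labels = kmeans_1d(angles)
# risk thresholds as midpoints between adjacent cluster centers
cuts = (centers[:-1] + centers[1:]) / 2
print("cluster centers:", centers.round(1), "cut points:", cuts.round(1))
```

Given symptom labels, the sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP) of the resulting high-risk flag would then be computed against reported WMSDs symptoms, as in result 3).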

Applying 6 sigma techniques in CMMI based software process improvement

  • Kim, Han-Saem;Han, Hyuk-Soo
    • The KIPS Transactions:PartD, v.13D no.3 s.106, pp.415-424, 2006
  • Increasing numbers of foreign and domestic organizations are using CMM/CMMI to establish their processes and keep improving them. The CMMI and IDEAL models of the SEI provide best practices for processes and guide the organizations using them according to process maturity levels. However, they do not specify the tools or methods for implementing the processes in an organization. Therefore, in this paper, we developed a method in which various tools and the statistical methodology of 6 sigma are applied to identify the process areas to be improved, to extract problems in those areas, and to prioritize them. We expect this paper to contribute to organizations searching for a practical way of implementing CMMI-based software process improvement and of identifying improvement items systematically. This method can also be used to understand the results of improvement activities quantitatively.

An Investigation of a Sensibility Evaluation Method Using Big Data in the Field of Design -Focusing on Hanbok Related Design Factors, Sensibility Responses, and Evaluation Terms-

  • An, Hyosun;Lee, Inseong
    • Journal of the Korean Society of Clothing and Textiles, v.40 no.6, pp.1034-1044, 2016
  • This study seeks a method to objectively evaluate sensibility based on Big Data in the field of design. To do so, it examined the public's sensibility responses to design factors through a network analysis of texts posted on social media. 'Hanbok', the formal clothing that represents Korea, was selected as the subject for the research methodology. We collected 47,677 keywords related to Hanbok from 12,000 posts on Naver blogs from January 1st to December 31st, 2015, and analyzed them using Social Matrix (a Big Data analysis software) rather than conventional survey methods. We derived 56 keywords related to design elements and sensibility responses of Hanbok. Centrality analysis and CONCOR analysis were conducted using Ucinet6. The visualization of the network text analysis allowed the main design factors of Hanbok to be categorized with evaluation terms expressing positive, negative, and neutral sensibility responses. We also derived fitting, rationality, trend, and uniqueness as the key evaluation factors for Hanbok. The evaluation terms, extracted with natural language processing of unstructured data, are valid as an evaluation scale and are expected to be suitable for a sensibility evaluation index that supplements the limits of previous surveys and statistical analysis methods. The network text analysis method used in this study provides new guidelines for using Big Data in sensibility evaluation methods in the field of design.
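The centrality step of such a network text analysis can be illustrated on a toy keyword network. The posts and keywords below are invented for illustration; the study itself used Social Matrix and Ucinet6 on real blog text:

```python
from collections import Counter
from itertools import combinations

# toy keyword lists, one per blog post (hypothetical data)
posts = [
    ["hanbok", "traditional", "pretty"],
    ["hanbok", "rental", "pretty"],
    ["hanbok", "modern", "fitting"],
    ["hanbok", "traditional", "rental"],
]

# keyword co-occurrence counts within each post
co = Counter()
for kws in posts:
    for a, b in combinations(sorted(set(kws)), 2):
        co[(a, b)] += 1

# weighted degree centrality: total co-occurrence weight per keyword
degree = Counter()
for (a, b), w in co.items():
    degree[a] += w
    degree[b] += w

for kw, d in degree.most_common(3):
    print(kw, d)
```

Keywords with high centrality anchor the network's main clusters; a CONCOR-style analysis would then group keywords with similar co-occurrence profiles into blocks.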

Statistical Optimization of Medium Composition for Bacterial Cellulose Production by Gluconacetobacter hansenii UAC09 Using Coffee Cherry Husk Extract - an Agro-Industry Waste

  • Rani, Mahadevaswamy Usha;Rastogi, Navin K.;Anu Appaiah, K.A.
    • Journal of Microbiology and Biotechnology, v.21 no.7, pp.739-745, 2011
  • During the production of grape wine, a thick leathery pellicle of bacterial cellulose (BC) formed at the air-liquid interface due to a bacterium, which was isolated and identified as Gluconacetobacter hansenii UAC09. Culture conditions for bacterial cellulose production by G. hansenii UAC09 were optimized by a central composite rotatable experimental design. To economize BC production, coffee cherry husk (CCH) extract and corn steep liquor (CSL) were used as less expensive sources of carbon and nitrogen, respectively; CCH and CSL are byproducts of the coffee processing and starch processing industries. The interactions between pH (4.5-8.5), CSL (2-10%), alcohol (0.5-2%), acetic acid (0.5-2%), and water-to-CCH ratio (1:1 to 1:5) were studied using response surface methodology. The optimum conditions for maximum BC production were pH 6.64, CSL 10%, alcohol 0.5%, acetic acid 1.13%, and a water-to-CCH ratio of 1:1. After 2 weeks of fermentation, 6.24 g/l of BC was produced, comparable to the predicted value of 6.09 g/l. This is the first report on optimization of the fermentation medium by RSM using CCH extract as the carbon source for BC production by G. hansenii UAC09.
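The response-surface step — fitting a second-order model and solving for its stationary point — can be sketched for a single factor. The pH-yield numbers below are synthetic, chosen only so the fitted optimum lands at a known value; the paper's actual design had five factors:

```python
import numpy as np

# synthetic design points: BC yield as a quadratic function of pH,
# peaking at pH 6.5 (hypothetical numbers, not the paper's data)
ph = np.array([4.5, 5.5, 6.5, 7.5, 8.5])
yield_gl = 6.0 - 0.5 * (ph - 6.5) ** 2

# least-squares fit of y = b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(ph), ph, ph ** 2])
b0, b1, b2 = np.linalg.lstsq(X, yield_gl, rcond=None)[0]

# stationary point of the fitted quadratic (a maximum when b2 < 0)
ph_opt = -b1 / (2 * b2)
print("fitted optimum pH:", round(float(ph_opt), 2))
```

With several factors the same least-squares fit includes cross-product terms, and the stationary point is found by solving the system of first-derivative equations rather than a single ratio.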

Production of Total Reducing Sugar and Levulinic Acid from Brown Macro-algae Sargassum fulvellum

  • Jeong, Gwi-Taek
    • Microbiology and Biotechnology Letters, v.42 no.2, pp.177-183, 2014
  • Recently, many biofuels and chemicals converted from renewable resources have been introduced into the chemical industries. Sargassum fulvellum is a brown macro-alga found on the seashores of Korea and Japan. In this work, the production of total reducing sugar and levulinic acid from S. fulvellum, using dilute-acid-catalyzed hydrothermal hydrolysis and statistical methodology, was investigated. As a result, 15.28 g/l of total reducing sugar was obtained under the optimized conditions of 160.1°C, 1.0% sulfuric acid, and 20.2 min. Furthermore, 2.65 g/l of levulinic acid was obtained at 189.5°C, 2.93% sulfuric acid, and 48.8 min.

Variational Mode Decomposition with Missing Data

  • Choi, Guebin;Oh, Hee-Seok;Lee, Youngjo;Kim, Donghoh;Yu, Kyungsang
    • The Korean Journal of Applied Statistics, v.28 no.2, pp.159-174, 2015
  • Dragomiretskiy and Zosso (2014) developed a new decomposition method, termed variational mode decomposition (VMD), which is efficient for tone detection and the separation of signals. However, VMD may be inefficient in the presence of missing data since it is based on a fast Fourier transform (FFT) algorithm. To overcome this problem, we propose a new approach based on a novel combination of VMD and the hierarchical (or h-) likelihood method. The h-likelihood provides an effective imputation methodology for missing data when VMD decomposes the signal into several meaningful modes. A simulation study and a real data analysis demonstrate that the proposed method produces substantially effective results.
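Why missing data troubles an FFT-based method, and how imputation restores it, can be seen in a minimal sketch. Simple linear interpolation stands in here for the h-likelihood imputation, and a single-tone synthetic signal stands in for a full VMD decomposition:

```python
import numpy as np

fs, n = 1000, 1000                      # 1 s of data at 1 kHz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5 * t)           # a single 5 Hz tone

# knock out 5% of the samples at random
rng = np.random.default_rng(0)
miss = rng.choice(n, size=n // 20, replace=False)
x_miss = x.copy()
x_miss[miss] = np.nan                   # FFT cannot be applied to NaNs

# impute by linear interpolation (stand-in for h-likelihood imputation)
ok = ~np.isnan(x_miss)
x_imp = np.interp(np.arange(n), np.arange(n)[ok], x_miss[ok])

# FFT-based tone detection on the imputed series
spec = np.abs(np.fft.rfft(x_imp))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[1:][np.argmax(spec[1:])]   # skip the DC bin
print("detected tone:", peak, "Hz")
```

Once the series is complete, VMD (or any FFT-based mode separation) can proceed; the paper's contribution is that the h-likelihood imputation and the decomposition inform each other rather than being run once in sequence.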

Prediction Interval Estimation in Transformed ARMA Models

  • Cho, Hye-Min;Oh, Sung-Un;Yeo, In-Kwon
    • The Korean Journal of Applied Statistics, v.20 no.3, pp.541-550, 2007
  • One of the main aspects of time series analysis is forecasting future values of a series based on values up to a given time. The prediction interval for future values is usually obtained under the normality assumption. When the assumption is seriously violated, a transformation of the data may permit the valid use of normal theory. We investigate the prediction problem for future values on the original scale when transformations are applied in ARMA models. In this paper, we introduce a methodology based on the Yeo-Johnson transformation to handle skewed data, whose modelling is relatively difficult in time series analysis. Simulation studies show that the coverage probabilities of the proposed intervals are closer to the nominal level than those of the usual intervals.
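The back-transformation relies on the Yeo-Johnson family being monotone, so interval endpoints on the transformed scale map directly back to the original scale. A minimal sketch follows (the sample values and λ are arbitrary, and a plain normal-theory interval stands in for the paper's full ARMA procedure):

```python
import numpy as np

def yeo_johnson(x, lam):
    """Yeo-Johnson transform, defined for all real x."""
    x = np.asarray(x, dtype=float)
    pos, neg = x >= 0, x < 0
    y = np.empty_like(x)
    if lam != 0:
        y[pos] = ((x[pos] + 1) ** lam - 1) / lam
    else:
        y[pos] = np.log1p(x[pos])
    if lam != 2:
        y[neg] = -(((-x[neg] + 1) ** (2 - lam) - 1) / (2 - lam))
    else:
        y[neg] = -np.log1p(-x[neg])
    return y

def yeo_johnson_inv(y, lam):
    """Inverse transform; monotone, so interval endpoints map through."""
    y = np.asarray(y, dtype=float)
    pos, neg = y >= 0, y < 0
    x = np.empty_like(y)
    if lam != 0:
        x[pos] = (lam * y[pos] + 1) ** (1 / lam) - 1
    else:
        x[pos] = np.expm1(y[pos])
    if lam != 2:
        x[neg] = 1 - (-(2 - lam) * y[neg] + 1) ** (1 / (2 - lam))
    else:
        x[neg] = -np.expm1(-y[neg])
    return x

# a normal-theory 95% interval on the transformed scale, mapped back
z = yeo_johnson([0.2, 1.4, 3.1, 0.9, 5.6, 2.2], lam=0.3)
lo, hi = z.mean() - 1.96 * z.std(ddof=1), z.mean() + 1.96 * z.std(ddof=1)
print(yeo_johnson_inv([lo, hi], lam=0.3))
```

In the ARMA setting, the forecast mean and variance on the transformed scale come from the fitted model rather than from a simple sample mean, but the endpoint mapping works the same way.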

Impact of Self-Citations on Impact Factor: A Study Across Disciplines, Countries and Continents

  • Pandita, Ramesh;Singh, Shivendra
    • Journal of Information Science Theory and Practice, v.3 no.2, pp.42-57, 2015
  • Purpose: The present study attempts to find out the impact of self-citations on the Impact Factor (IF) across disciplines. The study examines the number of research articles published across 27 major subject fields covered by SCImago, encompassing as many as 310 sub-disciplines. It evaluates aspects such as the percentage of self-citations across each discipline, the leading self-citing countries and continents, and the impact of self-citation on their IF. Scope: The study is global in nature, as it evaluates the trend of self-citation and its impact on the IF of all the major subject disciplines of the world, along with countries and continents. IF has been calculated for the year 2012 by analyzing the articles published during 2010 and 2011. Methodology/Approach: The study is empirical in nature; as such, statistical and mathematical tools and techniques have been employed to work out the distribution across disciplines. The evaluation was undertaken purely on secondary data retrieved from the SCImago Journal and Country Rank. Findings: Self-citations play a very significant part in inflating IF. All the subject fields under study are influenced by the practice of self-citation, ranging from 33.14% to 52.38%. Compared to the social sciences and humanities, subject fields falling under the purview of the pure and applied sciences have a higher number of self-citations, but a far smaller percentage. Upon excluding self-citations, a substantial change was observed in the IF of the subject fields under study, as 18 (66.66%) of the 27 subject fields shuffled in their rankings. Variation in rankings based on IF with and without self-citations was observed at the subject, country, and continental levels.
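The recalculation behind these findings is simple arithmetic: the two-year IF divides citations received in year Y to items published in years Y-1 and Y-2 by the number of those items, and excluding self-citations just shrinks the numerator. A sketch with made-up counts:

```python
def impact_factor(citations, items, self_citations=0):
    """Two-year impact factor; optionally exclude self-citations."""
    return (citations - self_citations) / items

# hypothetical field: 100 citable items published in 2010-2011,
# 200 citations received in 2012, of which 50 are self-citations
print(impact_factor(200, 100))       # with self-citations
print(impact_factor(200, 100, 50))   # without self-citations
```

Ranking the same fields by both figures and comparing the two orderings reproduces the kind of rank shuffle reported above.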

Quantification of future climate uncertainty over South Korea using weather generator and GCM

  • Tanveer, Muhammad Ejaz;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference, 2018.05a, pp.154-154, 2018
  • To interpret climate projections for the future as well as the present, recognizing the consequences of internal climate variability and quantifying its uncertainty play a vital role. The Korean Peninsula belongs to the Far East Asian monsoon region, and its rainfall characteristics are very complex in time and space. Its internal variability is expected to be large, but this variability has not been completely investigated to date, especially using models of high temporal resolution. Due to the coarse spatial and temporal resolutions of General Circulation Model (GCM) projections, several studies have adopted dynamic and statistical downscaling approaches to infer meteorological forcing from climate change projections at local spatial scales and fine temporal resolutions. In this study, a stochastic downscaling methodology was adopted to downscale daily GCM output to an hourly time scale using an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). After extracting factors of change from the GCM realizations, these were applied to the climatic statistics inferred from historical observations to re-evaluate the parameters of the weather generator. The re-parameterized generator yields hourly time series that can be considered representative of future climate conditions. Further, 30 ensemble members of hourly precipitation were generated for each selected station to quantify uncertainty. A spatial map of zones formed through the K-means clustering algorithm was generated to visualize which regions are more inconsistent with the climatological norm and in which regions the probability of extreme events is high. The results showed that stations located near the coastal regions are more uncertain than those in inland regions. Such information will ultimately be helpful for planning future adaptation and mitigation measures against extreme events.
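The "factor of change" step can be sketched as simple ratio scaling: a statistic inferred from observations (e.g., the mean hourly precipitation for a month) is multiplied by the ratio of the GCM's future to historical value of the same statistic, and the weather generator is re-parameterized to match the scaled target. All numbers below are invented for illustration:

```python
def apply_factor_of_change(obs_stat, gcm_hist_stat, gcm_future_stat):
    """Scale an observed climate statistic by the GCM-projected change."""
    return obs_stat * (gcm_future_stat / gcm_hist_stat)

# hypothetical July mean hourly precipitation (mm/h)
obs = 0.40          # from station observations
gcm_hist = 0.35     # GCM, historical run
gcm_future = 0.42   # GCM, future scenario

future_stat = apply_factor_of_change(obs, gcm_hist, gcm_future)
print(round(future_stat, 3))   # obs scaled by the factor 0.42/0.35 = 1.2
```

Repeating this for each statistic the generator is calibrated against, then simulating many realizations (30 ensemble members in the study), yields the spread from which the uncertainty maps are built.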
