• Title/Summary/Keyword: Re-Estimation

357 search results

Optimization of Gaussian Mixture in CDHMM Training for Improved Speech Recognition

  • Lee, Seo-Gu;Kim, Sung-Gil;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.7-21
    • /
    • 1999
• This paper proposes an improved training procedure for speech recognition based on the continuous-density Hidden Markov Model (CDHMM). Of the three parameter sets governing a CDHMM (the initial state distribution, the state transition probabilities, and the output probability density function (p.d.f.) of each state), we focus on the third and propose an efficient algorithm for determining the p.d.f. of each state. It is known that the resulting CDHMM converges to a local maximum of the parameter estimate under the iterative Expectation-Maximization procedure. Specifically, we propose two independent algorithms that can be embedded in the segmental K-means training procedure by replacing the relevant key steps: adaptation of the number of Gaussian mixture components, and initialization using previously estimated CDHMM parameters. The proposed adaptation algorithm searches for the optimal number of Gaussian mixture components so that the p.d.f. is consistently re-estimated, helping the model converge toward the global maximum. The optimized number of mixture components is determined by applying an appropriate threshold to the collective change of the weighted variances (a sketch of this step follows this entry). The initialization algorithm exploits the previously estimated CDHMM parameters as the basis for the current initial segmentation subroutine; it preserves the trend of the previous training history, whereas uniform segmentation discards it. The recognition performance of the proposed adaptation procedure, together with the suggested initialization, is verified to be consistently better than that of the existing training procedure with a fixed number of Gaussian mixture components.

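A minimal sketch of the mixture-count adaptation step described in the abstract above, assuming a scikit-learn GaussianMixture stands in for one state's output p.d.f.; the stopping statistic (mixture weight times covariance trace) and the threshold value are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def weighted_variance(gmm: GaussianMixture) -> float:
    """Sum over components of mixture weight times total variance."""
    return float(sum(w * np.trace(c) for w, c in zip(gmm.weights_, gmm.covariances_)))

def adapt_mixture_count(X, max_k=16, threshold=0.02, seed=0):
    """Grow the mixture until the weighted variances stop changing."""
    prev_wv, prev_gmm = None, None
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(X)
        wv = weighted_variance(gmm)
        if prev_wv is not None and abs(prev_wv - wv) / prev_wv < threshold:
            return k - 1, prev_gmm  # adding one more component no longer helps
        prev_wv, prev_gmm = wv, gmm
    return max_k, gmm

# Toy data for one HMM state: three well-separated clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(300, 2)) for m in (-3.0, 0.0, 3.0)])
k, _ = adapt_mixture_count(X)
print("selected number of mixture components:", k)
```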

The Combined Tx Diversity of STBC Tx Diversity and Balanced Tx Diversity

  • Chun, Kwang-Ho;Min, Seung-Hyun;Liu, Lijun;Lim, Myoung-Seob
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.2A
    • /
    • pp.183-190
    • /
    • 2008
• The balanced Tx diversity scheme, which transmits the same data on two antennas, can perform better or worse than STBC Tx diversity, depending on the phase difference between the two channels when the received signal is processed with MRRC. Therefore, using feedback information based on the phase estimate of each channel, the better of the two schemes, balanced Tx diversity or STBC Tx diversity, can be selected. However, when the phase condition changes during transmission under the selected Tx diversity scheme, the decoded bits can be erroneous because the previously estimated phase and the newly estimated phase differ; in that case, the receiver should request retransmission of the just-received signal from the transmitter. Computer simulation shows that the combined scheme of balanced Tx diversity and STBC Tx diversity performs better than STBC Tx diversity alone (a sketch of the selection criterion follows this entry).
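A minimal sketch of the phase-based selection criterion, assuming flat fading and ideal MRRC: balanced Tx diversity sends the same symbol on both antennas, so its effective channel gain is |h1 + h2|^2, while Alamouti-style STBC achieves |h1|^2 + |h2|^2 after combining; the feedback simply reports which is larger. The simulation setup and names are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_tx_diversity(h1: complex, h2: complex) -> str:
    """Pick the scheme with the larger post-combining channel gain.

    Balanced diversity wins when the two channel phases add
    constructively, i.e. when 2*Re(h1 * conj(h2)) > 0.
    """
    gain_balanced = abs(h1 + h2) ** 2          # effective channel (h1 + h2)
    gain_stbc = abs(h1) ** 2 + abs(h2) ** 2    # Alamouti combining gain
    return "balanced" if gain_balanced > gain_stbc else "stbc"

# Monte Carlo over i.i.d. Rayleigh channels: each scheme is selected
# roughly half the time, and the selected gain is never below STBC's.
h = (rng.standard_normal((10000, 2)) + 1j * rng.standard_normal((10000, 2))) / np.sqrt(2)
choices = [select_tx_diversity(a, b) for a, b in h]
print("balanced selected:", choices.count("balanced") / len(choices))
```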

Estimation of Semiconductor Market Using NLS Diffusion Model

  • Kim, Gene;Khoe, Kyung-Il
    • Journal of Digital Convergence
    • /
    • v.12 no.3
    • /
    • pp.141-147
    • /
    • 2014
• The diffusion model is a popular research topic in marketing and economics, particularly in the areas of model specification and market-size forecasting. In particular, the Bass model can explain Rogers' innovation diffusion and the product life cycle through a simple mathematical representation, and hence it has been widely used to explain the adoption of innovative new products and technologies. Nonetheless, there are only a couple of pioneering studies of the semiconductor market that use diffusion models. Consequently, we use a nonlinear least squares (NLS) diffusion model to estimate the market potential of the MOSFET, a major switching device for system power management, and present the process to industry stakeholders and policy makers to deliver managerial implications for pragmatic purposes (a sketch of the NLS fit follows this entry).
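For concreteness, fitting the Bass model by NLS can be sketched as below. The cumulative-adoption form N(t) = m(1 - e^(-(p+q)t)) / (1 + (q/p)e^(-(p+q)t)) is the standard Bass formula, but the shipment numbers here are purely illustrative, not the paper's MOSFET data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass model.

    m: market potential, p: coefficient of innovation,
    q: coefficient of imitation.
    """
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Hypothetical yearly cumulative shipments (illustrative numbers only).
t = np.arange(1, 11, dtype=float)
sales = np.array([12, 30, 61, 110, 170, 232, 285, 322, 344, 356], dtype=float)

# Nonlinear least squares; p0 is a rough initial guess for (m, p, q).
(m, p, q), _ = curve_fit(bass_cumulative, t, sales, p0=(400.0, 0.03, 0.4))
print(f"market potential m={m:.0f}, innovation p={p:.3f}, imitation q={q:.3f}")
```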

Estimation of Nurse Staffing Based on Nursing Workload with Reference to a Patient Classification System for an Intensive Care Unit

  • Park, Young Sun;Song, Rhayun
    • Journal of Korean Critical Care Nursing
    • /
    • v.10 no.1
    • /
    • pp.1-12
    • /
    • 2017
• Purpose: This study aimed to estimate the appropriate nurse staffing ratio in intensive care units (ICUs) by measuring nursing workload based on patient severity and needs, using the Korean Patient Classification System for critical care nurses. Methods: The data were collected from January 18 to February 29, 2016 using a standardized checklist, by observation or self-report. During the study period, 723 patients were categorized into Classes I to IV using the patient classification system. Total nursing workload per shift was calculated in hours based on the time-and-motion method, using tools for surveying nursing activities. The nursing activities were categorized as direct nursing care, indirect nursing care, and personal time. A total of 127 cases were included in measuring direct nursing time, and 18 nurses participated in measuring indirect and personal time. Data were analyzed using descriptive statistics. Results: Two patients were classified into Class I (11.1%), five into Class II (27.8%), nine into Class III (50%), and two into Class IV (11.1%). The amount of direct nursing care required for Class IV (513.7 min) was significantly greater than that required for Class I (135.4 min). Direct and indirect nursing care was provided more often during the day shift than during the evening or night shifts. These findings provide the rationale for determining the appropriate nursing staff ratio per shift based on the nursing workload of each shift (a sketch of the arithmetic follows this entry). Conclusions: An appropriate nurse staffing ratio should be ensured in ICUs so that the workload of nurses can be rearranged to help them provide essential direct care for patients.

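A minimal sketch of workload-based staffing arithmetic of the kind the abstract describes: per-class direct-care minutes times the census, plus indirect care, divided by productive nurse hours per shift. Only the Class I and Class IV minutes come from the abstract; every other number below is a hypothetical placeholder, not the paper's data.

```python
# Per-class direct-care minutes per patient per day. Class I and IV are
# from the abstract; Class II and III values are hypothetical.
direct_care_min = {"I": 135.4, "II": 250.0, "III": 380.0, "IV": 513.7}
patients = {"I": 2, "II": 5, "III": 9, "IV": 2}   # census by class (abstract)

indirect_share = 0.30      # assumed fraction of time spent on indirect care
productive_hours = 7.0     # assumed productive hours per nurse per 8-h shift
shifts_per_day = 3

total_direct_h = sum(direct_care_min[c] * patients[c] for c in patients) / 60.0
total_workload_h = total_direct_h / (1.0 - indirect_share)  # add indirect care
nurses_per_shift = total_workload_h / (productive_hours * shifts_per_day)

print(f"direct care: {total_direct_h:.1f} h/day, "
      f"total workload: {total_workload_h:.1f} h/day, "
      f"nurses per shift: {nurses_per_shift:.1f}")
```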

Measurement of Gamma-ray Yield from Thick Carbon Target Irradiated by 5 and 9 MeV Deuterons

  • Araki, Shouhei;Kondo, Kazuhiro;Kin, Tadahiro;Watanabe, Yukinobu;Shigyo, Nobuhiro;Sagara, Kenshi
    • Journal of Radiation Protection and Research
    • /
    • v.42 no.1
    • /
    • pp.16-20
    • /
    • 2017
• Background: The design of deuteron accelerator neutron source facilities requires reliable yield estimates of gamma-rays as well as neutrons from deuteron-induced reactions. We have so far systematically measured double-differential thick-target neutron yields (DDTTNYs) for carbon, aluminum, titanium, copper, niobium, and SUS304 targets; in those neutron data analyses, the simultaneously recorded gamma-ray events were treated as background. In the present work, we have re-analyzed the experimental data for a thick carbon target with particular attention to the gamma-ray events. Materials and Methods: Double-differential thick-target gamma-ray yields from carbon irradiated by 5 and 9 MeV deuterons were measured with an NE213 liquid organic scintillator at the Kyushu University Tandem Accelerator Laboratory. The gamma-ray energy spectra were obtained by an unfolding method using the FORIST code (a generic sketch of spectrum unfolding follows this entry), and the response functions of the NE213 detector were calculated with EGS5 as incorporated in the PHITS code. Results and Discussion: The measured gamma-ray spectra show pronounced peaks corresponding to gamma-ray transitions between discrete levels in the residual nuclei, and the measured angular distributions are almost isotropic for both incident energies. Conclusion: PHITS calculations using the INCL, GEM, and EBITEM models reproduce the spectral shapes and the angular distributions generally well, although they underestimate the absolute gamma-ray yields by about 20%.
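The unfolding step recovers the incident spectrum phi from a measured pulse-height distribution M given a detector response matrix R, with M = R phi. The sketch below is a generic regularized least-squares analogue of that idea with entirely synthetic numbers; it is not the FORIST algorithm or a PHITS calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_true, n_meas = 20, 40

# Hypothetical response matrix: each incident energy produces a smeared
# peak plus a downward tail (a toy Compton-like shape).
E = np.linspace(0, 1, n_meas)[:, None]
centers = np.linspace(0.1, 1, n_true)[None, :]
R = np.exp(-((E - centers) ** 2) / 0.002) + 0.2 * (E < centers)

phi_true = np.exp(-5 * np.linspace(0, 1, n_true))   # toy incident spectrum
M = R @ phi_true + rng.normal(0, 0.05, n_meas)      # noisy measured counts

# Tikhonov regularization keeps the unfolded solution stable.
lam = 0.1
phi_hat = np.linalg.solve(R.T @ R + lam * np.eye(n_true), R.T @ M)
print("relative error:", np.linalg.norm(phi_hat - phi_true) / np.linalg.norm(phi_true))
```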

Estimation of Properties of Flowable Fills Using Disposal Materials

  • Lee, Jong-Kyu;Lee, Bong-Jik;Shin, Bang-Woong
    • Journal of the Korean GEO-environmental Society
    • /
    • v.6 no.2
    • /
    • pp.31-38
    • /
    • 2005
• Flowable fills are self-leveling, liquid-like materials that self-compact to 95-100% of the maximum unit weight. Their benefits include reduced labor, accelerated construction, ready placement at inaccessible locations, and the ability to be manually re-excavated. Applications include utility trenches, building excavations, underground storage tanks, abandoned sewers and utility lines, and the filling of underground mine shafts. The objective of this study is to estimate the engineering properties of flowable fills made of soil mixed with recycled styrofoam and a stabilizer, for use in geotechnical fields. Uniaxial compression tests, flowability tests, and model tests were performed. Based on the test results, the following conclusions were drawn: fills made of soil mixed with recycled styrofoam and a stabilizer can be used as flowable fills, and the minimum stabilizer quantity for use as flowable fill ranges from 1.0 kN/m³ to 1.2 kN/m³.


Classification Prediction Error Estimation System of Microarray for a Comparison of Resampling Methods Based on Multi-Layer Perceptron

  • Park, Su-Young;Jeong, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.2
    • /
    • pp.534-539
    • /
    • 2010
• In genomic studies, thousands of features are collected on relatively few samples. One goal of these studies is to build classifiers to predict the outcome of future observations. Building a classifier involves three inherent steps: significant gene selection, model selection, and prediction assessment. In this paper, with a focus on prediction assessment, we normalize microarray data with quantile normalization, which adjusts the quantiles of all slides equally, and then design a system that compares several resampling methods for estimating the 'true' prediction error of a prediction model in the presence of feature selection (a sketch follows this entry). LOOCV generally performs very well, with small MSE and bias; the split-sample method and 2-fold CV perform very poorly at small sample sizes. For computationally burdensome analyses, 10-fold CV may be preferable to LOOCV.
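A minimal sketch of such a comparison, using scikit-learn in place of the authors' system: feature selection is nested inside the cross-validation pipeline so each resampling fold re-selects genes, which is what keeps the error estimate honest. The synthetic data and parameter choices are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, LeaveOneOut, KFold

# Synthetic stand-in for microarray data: many features, few samples.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

# Feature selection sits INSIDE the pipeline so it is re-fit within each
# training fold; selecting genes on the full data first would leak
# information and bias the error estimate downward.
model = make_pipeline(SelectKBest(f_classif, k=50),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                    random_state=0))

for name, cv in [("10-fold CV", KFold(10, shuffle=True, random_state=0)),
                 ("LOOCV", LeaveOneOut())]:
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: estimated error = {1 - scores.mean():.3f}")
```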

ASCII Data Hiding Method Based on Blind Video Watermarking Using Minimum Modification of Motion Vectors

  • Kang, Kyung-Won;Ryu, Tae-Kyung;Jeong, Tae-Il;Park, Tae-Hee;Kim, Jong-Nam;Moon, Kwang-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.1C
    • /
    • pp.78-85
    • /
    • 2007
• With the advancement of digital broadcasting and the popularity of the Internet, many studies have recently been conducted on digital watermarking for the copyright protection of digital data. This paper proposes a minimum-modification method for motion vectors that minimizes the degradation of video quality while hiding multilingual subtitles, original sound track (OST) information, character profiles, and so on, in addition to providing copyright protection. The proposed algorithm extracts a feature vector by comparing the motion-vector data with the watermark data, and minimizes the modification of motion vectors by deciding whether to invert the bits (a sketch of this decision follows this entry). Thus, the degradation of video quality is minimized compared with conventional algorithms. The algorithm can also check data integrity and retrieve the embedded hidden data simply and blindly, and the proposed scheme can be applied to the conventional MPEG-1 and MPEG-2 standards without any bit-rate increase in the compressed video domain. Experimental results show that the proposed scheme obtains better video quality than previous algorithms, by about 0.5~1.5 dB.
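A minimal sketch of the bit-inversion idea: watermark bits go into the LSBs of motion-vector components, and all bits are flipped (signaled by one flag bit) whenever flipping reduces the number of vectors that must change. Eligibility rules and flag signaling are simplified assumptions here, not the paper's exact scheme.

```python
import numpy as np

def embed(mv: np.ndarray, bits: np.ndarray) -> tuple[np.ndarray, int]:
    """Hide bits in motion-vector LSBs, inverting them if that helps."""
    lsb = mv & 1
    mismatches = np.count_nonzero(lsb != bits)
    invert = int(mismatches > len(bits) - mismatches)  # flip if it reduces edits
    target = bits ^ invert
    out = (mv & ~1) | target          # set LSBs to the (possibly inverted) bits
    return out, invert

def extract(mv: np.ndarray, invert: int) -> np.ndarray:
    """Blind retrieval: read LSBs and undo the recorded inversion flag."""
    return (mv & 1) ^ invert

rng = np.random.default_rng(2)
mv = rng.integers(-32, 32, size=64)   # toy motion-vector components
bits = rng.integers(0, 2, size=64)    # watermark payload
stego, flag = embed(mv, bits)
assert np.array_equal(extract(stego, flag), bits)
print("modified vectors:", np.count_nonzero(stego != mv), "of", mv.size)
```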

Re-estimation of Settling Velocity Profile Equations for Muddy Cohesive Sediments in West Coasts

  • Hwang K.-N.
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.5 no.1
    • /
    • pp.3-10
    • /
    • 2002
• Quantifying the settling velocities of fine cohesive sediments is essential in the study of ocean pollution as well as sedimentation. The settling properties of fine cohesive sediments are influenced largely by aggregation, which occurs as a consequence of interparticle collision and the cohesion of particles. Since the degree of cohesion depends on physico-chemical properties such as the grain-size distribution, the percentage of organic material, and the mineralogical composition, and since these properties vary regionally, the settling velocities of fine cohesive sediments at a specific site should be determined through field or laboratory experiments. Recently, the settling velocities of fine cohesive sediments on the Saemankeum coast and in the Kunsan Estuary have been measured through laboratory experiments. Using these data, the previously proposed, well-known settling velocity equations for fine cohesive sediments are examined, and a new equation is developed that better represents the measured data (a sketch of the general functional form follows this entry). The newly developed settling velocity equation is simpler in form, and its coefficients are easier to determine, than the previous well-known equations.

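For reference, a widely used concentration-dependent settling-velocity form for cohesive sediments is w_s = a C^n / (C^2 + b^2)^m in the flocculation/hindered-settling range, with a constant free-settling velocity below a threshold concentration. The sketch below evaluates that general form; the coefficients are illustrative assumptions, not this study's fitted values.

```python
import numpy as np

def settling_velocity(C, a=0.1, b=2.0, n=1.3, m=1.2, C_free=0.1, w_free=0.05):
    """w_s in mm/s as a function of suspension concentration C in g/L.

    Below C_free the flocs settle freely at a constant velocity; above
    it, w_s rises with concentration and then falls as hindered settling
    takes over. All coefficient values here are placeholders.
    """
    C = np.asarray(C, dtype=float)
    w = a * C**n / (C**2 + b**2) ** m
    return np.where(C < C_free, w_free, w)

for C in (0.05, 0.5, 2.0, 10.0, 50.0):
    print(f"C = {C:6.2f} g/L -> w_s = {float(settling_velocity(C)):.4f} mm/s")
```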

Estimation of Single Vegetation Volume Using 3D Point Cloud-based Alpha Shape and Voxel

  • Jang, Eun-kyung;Ahn, Myeonghui
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.4
    • /
    • pp.204-211
    • /
    • 2021
• In this study, information on vegetation was collected as a point cloud using a 3-D terrestrial LiDAR scanner, and the physical shape was analyzed by reconstructing the object from the refined data. Each filtering step applied to the raw data was optimized, and the reference volume was compared with the estimates obtained using the Alpha Shape and Voxel techniques. The analysis showed that the volume calculated with the Alpha Shape method was overestimated relative to the reference volume regardless of data filtering. The Voxel method was closest to the reference volume after the 8th filtering step, and as filtering proceeded further, the volume became underestimated. Therefore, when reconstructing an object from a point cloud, internal voids caused by the complex shape of the target object must be considered, and careful attention must be paid to the filtering process to obtain optimal data (a sketch of the voxel estimate follows this entry).
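A minimal sketch of the voxel-based volume estimate: snap points to a regular grid and count occupied cells (an alpha-shape estimate would instead wrap the cloud in a tight hull, e.g. with the alphashape or Open3D packages). The synthetic ball of points below stands in for a scanned plant; it is not the paper's data.

```python
import numpy as np

def voxel_volume(points: np.ndarray, voxel_size: float) -> float:
    """Volume estimate = (number of occupied voxels) * voxel_size**3."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)   # one entry per occupied cell
    return occupied.shape[0] * voxel_size**3

# Synthetic cloud: points uniformly filling a ball of radius 1 m,
# whose true volume is 4/3 * pi ~ 4.19 m^3.
rng = np.random.default_rng(3)
p = rng.uniform(-1, 1, size=(200000, 3))
p = p[np.linalg.norm(p, axis=1) <= 1.0]

print("true volume  :", round(4 / 3 * np.pi, 2), "m^3")
for vs in (0.2, 0.1, 0.05):
    # Coarse voxels over-count at the boundary; very fine voxels start to
    # miss sparsely sampled interior cells, mirroring the filtering
    # sensitivity discussed in the abstract.
    print(f"voxel {vs:>4} m :", round(voxel_volume(p, vs), 2), "m^3")
```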