• Title/Summary/Keyword: memory coefficient

A Real-Time Embedded Speech Recognition System

  • Nam, Sang-Yep;Lee, Chun-Woo;Lee, Sang-Won;Park, In-Jung
    • Proceedings of the IEEK Conference / 2002.07a / pp.690-693 / 2002
  • With the growth of the communications business, the embedded market is developing rapidly at home and abroad. Embedded systems are used in many products, such as wired and wireless communication equipment and information appliances, and much development work applies speech recognition to embedded platforms, for instance PDA, PCS, CDMA-2000, or IMT-2000 devices. This study implements a speech recognition engine and database with minimal memory for real-time embedded systems. The implementation proceeds as follows. First, the DC component is removed from the input speech, high frequencies are compensated by pre-emphasis with a coefficient of 0.97, and the signal is divided into 256-sample frames by an overlapped shift method. Linear predictive coefficients are computed from each frame with the Levinson-Durbin algorithm and converted into cepstral feature vectors. For HMM training, the Baum-Welch re-estimation algorithm was applied to each word, and recognition results were obtained by a likelihood comparison over the words. The speech data consist of 40 speech commands and 10 digits for embedded-system menu control, spoken by 15 male and 15 female speakers. Since ARM CPUs are widely adopted in embedded systems, the recognition engine was ported to an ARM core evaluation board; after several tests with five proposed recognition parameter sets, recognition tests were run with parameter sets 1 and 3, which showed good recognition rates on commands and digits. The engine achieved an overall recognition rate of 95%, with 96% for speech commands and 94% for digits.
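
As a sketch of the front end described above (not the authors' code), the following Python snippet applies pre-emphasis with coefficient 0.97, splits the signal into overlapping 256-sample frames, and derives linear predictive coefficients via Levinson-Durbin; the frame shift and LPC order are assumed values.

```python
import numpy as np

def preemphasis(x, alpha=0.97):
    """Boost high frequencies: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_signal(x, frame_len=256, shift=128):  # shift of 128 is an assumed value
    """Split the signal into overlapping frames of frame_len samples."""
    n_frames = 1 + (len(x) - frame_len) // shift
    return np.stack([x[i * shift : i * shift + frame_len] for i in range(n_frames)])

def levinson_durbin(r, order):
    """Solve the LPC normal equations from the autocorrelation sequence r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                   # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)             # prediction error shrinks each step
    return a, err

# Example use on one pre-emphasized frame (order 12 is an assumed value):
# r = np.correlate(frame, frame, "full")[len(frame) - 1 : len(frame) + 12]
# a, err = levinson_durbin(r, 12)
```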

MarSel : LD based tagSNP Selection System for Large-scale SNP Haplotype Dataset (MarSel : 대용량 SNP 일배체형 데이터에 대한 연관불균형기반의 tagSNP 선택 시스템)

  • Kim Sang-Jun;Yeo Sang-Soo;Kim Sung-Kwon
    • The KIPS Transactions: Part A / v.13A no.1 s.98 / pp.79-86 / 2006
  • Recently, the tagSNP selection problem has been studied as a way to reduce the cost of association studies between human diversity and SNPs. The general approach is to separate all SNPs into appropriate blocks and then choose tagSNPs within each block. MarSel, the system presented in this paper, incorporates the concept of linkage disequilibrium (LD) to overcome the lack of biological meaning in existing block-partitioning approaches: blocks are formed from contiguous regions in which recombination is rare, as measured by the LD coefficient |D'|, and the tagSNP selection step is then performed. MarSel guarantees a minimum tagSNP selection by using an entropy-based optimal selection algorithm when tagSNPs are chosen in each block, and its efficient memory management enables chromosome-level association studies on very large-scale datasets that existing systems cannot process.
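
For reference, Lewontin's |D'| between two biallelic SNPs can be computed from haplotype frequencies as below; this is a minimal illustration of the LD measure the paper relies on, not MarSel's own code.

```python
def d_prime(p_ab, p_a, p_b):
    """Lewontin's |D'| for two biallelic loci.

    p_ab : frequency of the haplotype carrying allele A at locus 1 and B at locus 2
    p_a, p_b : marginal frequencies of alleles A and B
    """
    d = p_ab - p_a * p_b                   # raw disequilibrium
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return abs(d) / d_max if d_max > 0 else 0.0

# Example: two loci in strong LD
print(d_prime(p_ab=0.45, p_a=0.5, p_b=0.5))  # -> 0.8
```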

Variable Radix-Two Multibit Coding and Its VLSI Implementation of DCT/IDCT (가변길이 다중비트 코딩을 이용한 DCT/IDCT의 설계)

  • 김대원;최준림
    • Journal of the Institute of Electronics Engineers of Korea SD / v.39 no.12 / pp.1062-1070 / 2002
  • In this paper, a variable radix-two multibit coding algorithm is presented and applied to the implementation of the discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT). Variable radix-two multibit coding means the 2^k signed-digit (SD) representation obtained by overlapped multibit scanning with a variable shift method. The SD representation in powers of two generates partial products that can be implemented easily with shifters and adders, which makes the algorithm particularly powerful for hardware implementation of DCT/IDCT with constant-coefficient matrix multiplication. This paper introduces the proposed algorithm, its proof, and the implementation of the DCT/IDCT. The implemented IDCT chip, with 8 processing elements (PEs) and one transpose memory, runs at a rate of 400 Mpixels/sec at a 54 MHz clock frequency for high-speed parallel signal processing, and was verified in HDTV and MPEG decoders.
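
To illustrate the shift-and-add idea behind signed-digit constant multiplication (a software sketch of the general technique, not the authors' VLSI design), the following converts a constant to canonical signed-digit form and multiplies using only shifts and adds:

```python
def to_csd(c, bits=12):
    """Canonical signed-digit encoding: digits in {-1, 0, +1}, no two adjacent nonzeros."""
    digits = []
    for _ in range(bits):
        if c == 0:
            break
        if c & 1:
            d = 2 - (c & 3)        # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append((len(digits) and digits[-1][0] or 0, d)) if False else None
            digits.append((0, d))  # placeholder, replaced below
            digits.pop()
        c >>= 1
    return digits

def to_csd(c, bits=12):
    """Canonical signed-digit encoding: digits in {-1, 0, +1}, no two adjacent nonzeros."""
    digits = []
    for i in range(bits):
        if c == 0:
            break
        if c & 1:
            d = 2 - (c & 3)        # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append((i, d))  # record (shift amount, sign)
            c -= d
        c >>= 1
    return digits

def csd_multiply(x, digits):
    """Multiply x by the encoded constant using only shifts and adds."""
    return sum(sign * (x << shift) for shift, sign in digits)

# Example: 57 = 64 - 8 + 1, so only three partial products are needed
digits = to_csd(57)
assert csd_multiply(10, digits) == 570
```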

Is the SIS 3.0 Valid for Use at a Rehabilitation Setting in Korea for Patients with Stroke?

  • Song, Jumin;Lee, Haejung
    • The Journal of Korean Physical Therapy / v.27 no.4 / pp.252-257 / 2015
  • Purpose: The purpose of this study was to assess the psychometric properties of the Korean version of the Stroke Impact Scale 3.0 (K-SIS 3.0) in patients with stroke. Methods: Patients at least 3 months post-stroke were invited to participate in the study at specialized rehabilitation centers in Busan. Information on patients was collected using the Mini-Mental State Examination (MMSE), Modified Barthel Index (MBI), Beck Depression Inventory (BDI), WHODAS 2.0 (12-item), and K-SIS. Floor and ceiling effects of each K-SIS domain were examined. The internal consistency of each domain of the K-SIS was calculated using Cronbach's α. Correlation between the K-SIS and each scale was assessed using Spearman's correlation coefficient. Results: Ninety subjects participated in the study. The K-SIS was found to have excellent internal consistency (Cronbach's α = 0.93). Consistency within each domain ranged from 0.86 to 0.94, except for emotion (α = 0.51). Significant correlations were observed between the MMSE and the domains of memory and thinking, and communication (r = 0.48 and 0.52, respectively). The BDI was negatively related to the domains of emotion, ADL, mobility, and participation (r = -0.43, -0.49, -0.52, and -0.33, respectively). Specific daily activity (MBI) and general functioning (WHODAS 2.0) were also closely related to the domains of ADL, mobility, and participation (ranging from r = -0.41 to r = -0.59). No ceiling or floor effects were observed. Conclusion: Excellent reliability and validity of the K-SIS were obtained in this study, suggesting that the K-SIS may be used to collect information on functioning in patients with stroke in the clinical context.
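
As an aside, the internal-consistency statistic reported above, Cronbach's α, is straightforward to compute from item scores; a minimal sketch with illustrative data (not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example with 4 respondents and 3 items
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))
```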

A Study for Complexity Improvement of Automatic Speaker Verification in PDA Environment (PDA 환경에서 자동화자 확인의 계산량 개선을 위한 연구)

  • Seo, Chang-Woo;Lim, Young-Hwan;Jeon, Sung-Chae;Jang, Nam-Young
    • Journal of the Institute of Convergence Signal Processing / v.10 no.3 / pp.170-175 / 2009
  • In this paper, we propose a real-time automatic speaker verification (ASV) system to protect personal information on personal digital assistant (PDA) devices. Recently, PDA capabilities have expanded and the devices have become popular, especially in mobile environments such as mobile commerce (M-commerce). However, there are still many obstacles to the practical application of ASV on PDA devices because of the high computational complexity it requires. To solve this problem, we relieve the computational burden by performing preprocessing, such as spectral subtraction and speech detection, during the speech utterance. By also applying hidden Markov model (HMM) optimal state alignment and the sequential probability ratio test (SPRT), we obtain much faster processing results. The whole implementation is simple and compact enough to fit the PDA device's limited memory and low CPU speed.
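
The SPRT mentioned above lets verification terminate early once enough evidence has accumulated; a hedged sketch of the decision rule follows (thresholds and frame scores are illustrative, not the paper's values):

```python
import math

def sprt_verify(frame_llrs, alpha=0.01, beta=0.01):
    """Accumulate per-frame log-likelihood ratios
    log p(x|claimed speaker) - log p(x|impostor)
    and stop as soon as either Wald threshold is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross it -> accept claimed speaker
    lower = math.log(beta / (1 - alpha))   # cross it -> reject as impostor
    score = 0.0
    for t, llr in enumerate(frame_llrs, start=1):
        score += llr
        if score >= upper:
            return "accept", t
        if score <= lower:
            return "reject", t
    return "undecided", len(frame_llrs)    # utterance ended with no early decision
```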

RNN-LSTM Based Soil Moisture Estimation Using Terra MODIS NDVI and LST (Terra MODIS NDVI 및 LST 자료와 RNN-LSTM을 활용한 토양수분 산정)

  • Jang, Wonjin;Lee, Yonggwan;Lee, Jiwan;Kim, Seongjoon
    • Journal of The Korean Society of Agricultural Engineers / v.61 no.6 / pp.123-132 / 2019
  • This study estimates spatial soil moisture using Terra MODIS (Moderate Resolution Imaging Spectroradiometer) satellite data and a machine learning technique. Using three years (2015-2017) of MODIS 16-day composite NDVI (Normalized Difference Vegetation Index) and daily Land Surface Temperature (LST) data, together with ground-measured precipitation and sunshine hours from the KMA (Korea Meteorological Administration), the model was tested against RDA (Rural Development Administration) 10 cm-30 cm average TDR (Time Domain Reflectometry) soil moisture measurements at 78 locations. For daily analysis, MODIS LST values missing because of clouds were interpolated by a conditional merging method using KMA surface temperature observations, and the 16-day NDVI was linearly interpolated to 1-day intervals. Applying the RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory) artificial neural network model, 70% of the total period was used for training and the remaining 30% for verification. The results showed a coefficient of determination (R²) of 0.78, a Root Mean Square Error (RMSE) of 2.76%, and a Nash-Sutcliffe Efficiency of 0.75. On average, soil moisture in clay was estimated better than in the other soil types of silt, loam, and sand, because clay intrinsically has a narrow range of soil moisture variation between field capacity and wilting point.
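
For clarity, the three goodness-of-fit measures reported (R², RMSE, and Nash-Sutcliffe Efficiency) can be computed as follows; a small sketch, independent of the study's data:

```python
import numpy as np

def fit_metrics(obs, sim):
    """Return (R², RMSE, NSE) for observed vs. simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    # Nash-Sutcliffe Efficiency: 1 minus residual variance over observed variance
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    # Coefficient of determination as the squared Pearson correlation
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return r2, rmse, nse
```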

A Study on the Demand Prediction Model for Repair Parts of Automotive After-sales Service Center Using LSTM Artificial Neural Network (LSTM 인공신경망을 이용한 자동차 A/S센터 수리 부품 수요 예측 모델 연구)

  • Jung, Dong Kun;Park, Young Sik
    • The Journal of Information Systems / v.31 no.3 / pp.197-220 / 2022
  • Purpose: The purpose of this study is to identify demand-pattern categories for repair parts in automotive after-sales service (A/S) and to propose a demand prediction model for auto repair parts using the Long Short-Term Memory (LSTM) artificial neural network (ANN). An optimal parts-inventory prediction model is implemented by feeding daily, weekly, and monthly parts demand data to the LSTM model for lumpy demand, which occurs irregularly in specific periods among the repair parts of automotive A/S service. Design/methodology/approach: This study classified repair parts into four demand-pattern categories from two years of demand time-series data, according to the average demand interval (ADI) and the squared coefficient of variation (CV²) of demand size. Of the 16,295 parts in the A/S service shop studied, 96.5% had a lumpy demand pattern in which large quantities occurred in specific periods. Demand for lumpy-pattern repair parts over the last three years was predicted by applying daily, weekly, and monthly time-series data to the LSTM. As model prediction performance indices, MAPE, RMSE, and RMSLE, which measure the error between predicted and actual values, were used. Findings: Daily time-series data were predicted best, with the lowest MAPE, RMSE, and RMSLE values, followed by weekly and monthly time-series data. This is due to the decrease in training data at weekly and monthly resolution. Even if the demand period is extended to obtain more training data, prediction performance remains low because of the discontinuation of vehicle models and the use of alternative parts, which leave some parts with no further demand. Therefore, sufficient training data is important, but the selection of the prediction demand period is also a critical factor.
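
A sketch of the ADI/CV² categorization step, using the commonly cited Syntetos-Boylan cutoffs (ADI = 1.32, CV² = 0.49); the paper may use different thresholds, so treat these as assumptions:

```python
import numpy as np

def classify_demand(demand, adi_cut=1.32, cv2_cut=0.49):
    """demand: array of per-period demand quantities (zeros allowed)."""
    demand = np.asarray(demand, float)
    nonzero = demand[demand > 0]
    adi = len(demand) / len(nonzero)                    # average demand interval
    cv2 = (nonzero.std(ddof=1) / nonzero.mean()) ** 2   # squared coeff. of variation
    if adi < adi_cut and cv2 < cv2_cut:
        return "smooth"
    if adi >= adi_cut and cv2 < cv2_cut:
        return "intermittent"
    if adi < adi_cut and cv2 >= cv2_cut:
        return "erratic"
    return "lumpy"   # infrequent and highly variable, as for 96.5% of parts here

print(classify_demand([0, 0, 12, 0, 0, 0, 40, 0, 3, 0]))  # -> lumpy
```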

Tunnel wall convergence prediction using optimized LSTM deep neural network

  • Arsalan, Mahmoodzadeh;Mohammadreza, Taghizadeh;Adil Hussein, Mohammed;Hawkar Hashim, Ibrahim;Hanan, Samadi;Mokhtar, Mohammadi;Shima, Rashidi
    • Geomechanics and Engineering / v.31 no.6 / pp.545-556 / 2022
  • Evaluation and optimization of tunnel wall convergence (TWC) plays a vital role in preventing potential problems during the tunnel construction and utilization stages. When convergence occurs at a high rate, it can lead to significant problems such as a reduced advance rate and reduced safety, which in turn increase operating costs. To design an effective solution, it is important to accurately predict the degree of TWC; this can reduce the level of concern and have a positive effect on the design. With the development of soft computing methods, the use of deep learning algorithms and neural networks in tunnel construction has expanded in recent years. The current study employs a long short-term memory (LSTM) deep neural network predictor model to predict TWC, based on 550 data points of observed parameters collected from different tunnelling projects. Of the data collected during the pre-construction and construction phases, 80% is randomly used to train the model and the rest is used to test it. Several loss functions, including root mean square error (RMSE) and the coefficient of determination (R²), were used to assess the performance and precision of the applied method. The results of the proposed model indicate acceptable and reliable accuracy; the predicted values are in good agreement with the observed data. The proposed model can be considered for use in similar ground and tunnelling conditions. This work has the potential to reduce tunnelling uncertainties significantly and to make deep learning a valuable tool for tunnel planning.
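
A minimal sketch of the kind of LSTM regression pipeline described; Keras is an assumption, the 550 samples and random 80/20 split follow the abstract, and the data shapes, layer sizes, and training settings are placeholders:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 550 samples, each a short sequence of observed parameters
rng = np.random.default_rng(0)
X = rng.normal(size=(550, 10, 6)).astype("float32")  # (samples, timesteps, features)
y = rng.normal(size=(550, 1)).astype("float32")      # tunnel wall convergence target

# Random 80/20 train/test split, as in the abstract
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = idx[:split], idx[split:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=X.shape[1:]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[train], y[train], epochs=50, batch_size=32, verbose=0)

# Evaluate with the loss measures named in the abstract
pred = model.predict(X[test], verbose=0)
rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
```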

Establishment of Analytical Method for Methylmercury in Fish by Using HPLC-ICP/MS (고성능액체크로마토그래피-유도결합플라즈마 질량분석기를 이용한 어류 중 메틸수은 분석법 확립)

  • Yoo, Kyung-Yoal;Bahn, Kyeong-Nyeo;Kim, Eun-Jung;Kim, Yang-Sun;Myung, Jyong-Eun;Yoon, Hae-Seong;Kim, Mee-Hye
    • Korean Journal of Environmental Agriculture / v.30 no.3 / pp.288-294 / 2011
  • BACKGROUND: Methylmercury is analyzed by HPLC-ICP/MS because of the simplicity of sample preparation and the low interference. However, most pre-treatment methods for methylmercury require further pH adjustment of the extracted solution and removal of organic matter before HPLC. The purpose of this study was to establish a rapid and accurate analytical method for the determination of methylmercury in fish using HPLC-ICP/MS. METHOD AND RESULTS: We conducted experiments on pre-treatment and instrument conditions and verified the analytical method. The pre-treatment condition was established as extraction with aqueous 1% L-cysteine·HCl, heated at 60°C in a microwave for 20 min. Methylmercury in 50 μL of filtered extract was separated on a C18 column with an aqueous 0.1% L-cysteine·HCl + 0.1% L-cysteine mobile phase at 25°C. The presence of cysteine in the mobile phase and sample solution was essential to eliminate adsorption, peak tailing, and memory effect problems. The correlation coefficient (r²) for linearity was 0.9998. The limits of detection and quantitation for this method were 0.15 and 0.45 μg/kg, respectively. CONCLUSION: In the analytical method verification, the accuracy and repeatability of the analytes were in good agreement with the certified reference material values for methylmercury at the 95% confidence level. The advantage of the established method is that the extracted solution can be injected directly into the HPLC column without additional processing, and the memory effect of mercury in the ICP-MS is eliminated.
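
For the linearity check, a calibration line and its correlation coefficient r² can be computed as below; the standard concentrations and responses are illustrative placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical calibration standards: concentration (ug/kg) vs. peak-area response
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([1020.0, 2050.0, 4080.0, 10150.0, 20400.0])

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
r2 = np.corrcoef(conc, area)[0, 1] ** 2        # correlation coefficient (r^2)
print(f"area = {slope:.1f} * conc + {intercept:.1f}, r^2 = {r2:.4f}")
```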

A study on the derivation and evaluation of flow duration curve (FDC) using deep learning with a long short-term memory (LSTM) networks and soil water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel;An, Sung-Wook;Choi, Jin-Young;Kim, Byung-Sik
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1107-1118 / 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the resulting casualties and physical damage. Preparing for and responding to these water disasters requires national-level planning for water resource management, and watershed-level management of water resources requires flow duration curves (FDC) derived from continuous data based on long-term observations. Traditionally, water resource studies have widely used physical rainfall-runoff models to generate duration curves, while a number of recent studies have explored data-based deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results, but they require a high level of understanding and can take longer to operate. Data-based deep learning techniques, on the other hand, offer the benefit of requiring less input data and shorter operation times; however, the relationship between input and output data is processed in a black box, making it impossible to consider hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, long-term data without gaps were calculated through parameter calibration of the Soil Water Assessment Tool (SWAT), a physical model whose applicability has been tested in Korea and other countries; these data were then used as training data for the Long Short-Term Memory (LSTM) data-based deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-18), the SWAT results exceeded the LSTM results in Nash-Sutcliffe Efficiency (NSE) and the determination coefficient by 0.04 and 0.03, respectively, indicating that the SWAT results are superior. In addition, the annual time-series data from the models were sorted in descending order, and the resulting flow duration curves were compared with the duration curve based on the observed flow: the NSE values for the SWAT and LSTM models were 0.95 and 0.91, respectively, and the determination coefficients were 0.96 and 0.92, respectively, indicating that both models perform well. Although the LSTM needs improved simulation accuracy in the low-flow sections, it appears widely applicable to calculating flow duration curves for large basins, where model development and operation take longer because of the vast data input, and for ungauged basins with insufficient input data.
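
Finally, the flow duration curve itself is obtained by ranking flows in descending order and assigning each rank an exceedance probability; a compact sketch (the Weibull plotting position used here is an assumption):

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance probability in %, flow) pairs for an FDC."""
    q = np.sort(np.asarray(flows, float))[::-1]   # rank flows in descending order
    rank = np.arange(1, len(q) + 1)
    prob = 100.0 * rank / (len(q) + 1)            # Weibull plotting position
    return prob, q

# Curves built from SWAT or LSTM output can then be compared against the
# observed-flow curve with NSE or the determination coefficient, as above.
prob, q = flow_duration_curve([12.1, 3.4, 150.0, 45.2, 7.8, 0.9])
```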