• Title/Summary/Keyword: Words Error

Search Results: 260

A Study on Non-uniformity Correction Method through Uniform Area Detection Using KOMPSAT-3 Side-Slider Image (사이드 슬리더 촬영 기반 KOMPSAT-3 위성 영상의 균일 영역 검출을 통한 비균일 보정 기법 연구)

  • Kim, Hyun-ho;Seo, Doochun;Jung, JaeHeon;Kim, Yongwoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1013-1027
    • /
    • 2021
  • Images taken with KOMPSAT-3 have additional NIR and PAN bands as well as the RGB regions of the visible band, compared with images taken by a standard camera. Furthermore, electrical and optical properties must be considered because a wide area of approximately 17 km or more across is photographed from an altitude of 685 km above the ground. In other words, the camera sensor of KOMPSAT-3 is distorted by the characteristics of each CCD pixel and each band, by sensitivity and time-dependent changes, and by the CCD geometry. To remove this distortion, correction of the sensors is essential. In this paper, we propose a method for detecting uniform regions in side-slider-based KOMPSAT-3 images using segment-based noise analysis. After detecting uniform areas with this algorithm, a correction table was created for each sensor to apply the non-uniformity correction algorithm, and satellite image correction was performed using the created correction table. As a result, the proposed method reduced distortions of the satellite image, such as vertical noise, compared with the conventional method. The relative radiometric accuracy indices, RA (based on mean square error) and RE (based on absolute error), showed a comparative advantage of 0.3 percent and 0.15 percent, respectively, over the conventional method.
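
As an illustration of the general workflow described above (segment-based uniform-area detection followed by a per-detector correction table), the sketch below is a minimal, hypothetical Python/NumPy version. The coefficient-of-variation threshold, the per-column gain model, and all function names are assumptions for illustration; they are not the authors' algorithm or parameters.

```python
import numpy as np

def detect_uniform_segments(img, seg_h=64, cv_thresh=0.02):
    """Split the image into horizontal segments and keep those whose
    coefficient of variation (std/mean) is below a threshold, i.e.
    segments that look radiometrically uniform."""
    h, w = img.shape
    uniform = []
    for top in range(0, h - seg_h + 1, seg_h):
        seg = img[top:top + seg_h, :].astype(np.float64)
        mean = seg.mean()
        if mean > 0 and seg.std() / mean < cv_thresh:
            uniform.append(seg)
    return uniform

def build_correction_table(uniform_segments):
    """Per-column (per-detector) relative gains: ratio of the overall mean
    level to each column's mean level over the uniform segments."""
    stacked = np.vstack(uniform_segments)
    col_mean = stacked.mean(axis=0)
    return stacked.mean() / col_mean

def apply_correction(img, gains):
    """Apply the per-column gain table to flatten column striping."""
    return img.astype(np.float64) * gains[np.newaxis, :]

# usage sketch with a simulated image standing in for a KOMPSAT-3 band
rng = np.random.default_rng(0)
gains_true = rng.normal(1.0, 0.01, 256)          # simulated detector non-uniformity
img = 1000.0 * gains_true + rng.normal(0.0, 2.0, (512, 256))
table = build_correction_table(detect_uniform_segments(img))
corrected = apply_correction(img, table)
print("column-mean std before/after:", img.mean(axis=0).std(), corrected.mean(axis=0).std())
```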

Depth Scaling Strategy Using a Flexible Damping Factor for Frequency-Domain Elastic Full Waveform Inversion

  • Oh, Ju-Won;Kim, Shin-Woong;Min, Dong-Joo;Moon, Seok-Joon;Hwang, Jong-Ha
    • Journal of the Korean earth science society
    • /
    • v.37 no.5
    • /
    • pp.277-285
    • /
    • 2016
  • We introduce a depth scaling strategy to improve the accuracy of frequency-domain elastic full waveform inversion (FWI) using the new pseudo-Hessian matrix for seismic data without low-frequency components. The depth scaling strategy is based on the fact that the damping factor in the Levenberg-Marquardt method controls the energy concentration in the gradient. In other words, a large damping factor makes the Levenberg-Marquardt method similar to the steepest-descent method, by which mainly shallow structures are recovered. With a small damping factor, the Levenberg-Marquardt method becomes similar to the Gauss-Newton method, by which we can resolve deep structures as well as shallow structures. In our depth scaling strategy, a large damping factor is used in the early stage and then decreases automatically, following the trend of the error, as the iterations proceed. With the depth scaling strategy, we can gradually move the parameter-searching region from shallow to deep parts. This flexible damping factor retards the model parameter update for shallow parts and mainly inverts deeper parts in the later stage of the inversion. By doing so, we can improve the deep parts of the inversion results. The depth scaling strategy is applied to synthetic data without low-frequency components for a modified version of the SEG/EAGE overthrust model. Numerical examples show that the flexible damping factor yields better results than a constant damping factor when reliable low-frequency components are missing.
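
To make the role of the damping factor concrete, here is a minimal sketch (not the authors' elastic FWI code) of a Levenberg-Marquardt update and of a misfit-driven schedule that lowers the damping as the error curve flattens. The 5% flattening criterion, the shrink factor, and the floor value are purely illustrative assumptions.

```python
import numpy as np

def lm_update(jacobian, residual, damping):
    """One Levenberg-Marquardt model update:
    dm = -(J^T J + damping * I)^{-1} J^T r.
    A large damping pushes the step toward (scaled) steepest descent,
    which mainly updates shallow, high-sensitivity parameters; a small
    damping pushes it toward a Gauss-Newton step."""
    jtj = jacobian.T @ jacobian
    rhs = jacobian.T @ residual
    return -np.linalg.solve(jtj + damping * np.eye(jtj.shape[0]), rhs)

def flexible_damping(damping, err_prev, err_curr, shrink=0.5, floor=1e-3):
    """Illustrative schedule: keep the damping while the misfit is still
    dropping quickly, shrink it once the misfit curve flattens, so the
    inversion gradually shifts emphasis from shallow to deep structure."""
    if err_prev is not None and (err_prev - err_curr) / err_prev < 0.05:
        damping = max(damping * shrink, floor)
    return damping
```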

The Correlation between Handwriting Skills and Praxis in the Low Grades Students at an Elementary School (초등학교 저학년 아동의 글씨쓰기와 실행능력과의 상관관계)

  • Yu, Seung-Bok;Kim, Jin-Ju;Kim, Kyeong-Mi
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.4 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • Objective : The purpose of this study was to examine the correlation between handwriting skills and praxis. Method : Participants were 50 normal children who were second-grade students at A Elementary School in Gimhae. They had no visual or auditory dysfunction and no disease or injury of the arms or hands, and they could follow the examiners' directions properly. They were administered the Postural Praxis and Praxis on Verbal Command subtests of the Sensory Integration and Praxis Tests (SIPT) (Ayres, 2000) and a handwriting skill test developed with reference to the foreign literature. Testing was conducted from October 19, 2004 to December 17, 2004. The data were analyzed with unpaired t-tests, ANOVA, and Pearson correlation coefficients. Results : 1. Total handwriting score and praxis differed significantly by gender (p<0.05). 2. Total handwriting score correlated with praxis (p<0.05), whereas handwriting speed did not. 3. Postural Praxis and Praxis on Verbal Command scores differed significantly between handwriting groups (p<0.05). 4. Among the criteria of the handwriting skill test, accuracy of letter form, consistency of letter size, spacing between letters and words, placing text on lines, presence of errors, and letters extending outside the regular square correlated with Postural Praxis (p<0.05), while accuracy of letter form, consistency of letter size, and placing text on lines correlated with Praxis on Verbal Command (p<0.05). Conclusions : The correlation between handwriting skills and praxis will help occupational therapists provide fundamental and varied treatment programs for children referred for poor handwriting, but further studies are needed to determine which components of handwriting skill are related to praxis.


A Study on the Removal of Unusual Feature Vectors in Speech Recognition (음성인식에서 특이 특징벡터의 제거에 대한 연구)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.4
    • /
    • pp.561-567
    • /
    • 2013
  • Some of the feature vectors used for speech recognition are rare and unusual. Such patterns lead to overfitting of the parameters of the speech recognition system and, as a result, cause structural risks in the system that hinder good recognition performance. In this paper, as a method of removing these unusual patterns, we exclude vectors whose norms are larger than a specified cutoff value and then train the speech recognition system. The objective of this study is to exclude as many unusual feature vectors as possible without significant degradation of the speech recognition error rate. For this purpose, we introduce a cutoff parameter and investigate its effect on speaker-independent speech recognition of isolated words using FVQ (Fuzzy Vector Quantization)/HMM (Hidden Markov Model). Experimental results showed that roughly 3%~6% of the feature vectors may be considered unusual and can therefore be excluded without deteriorating the speech recognition accuracy.
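
The norm-cutoff filtering described above can be expressed in a few lines of array code. The sketch below is a generic illustration, assuming MFCC-like vectors and a percentile-based cutoff chosen so that only a few percent of vectors are excluded; the actual cutoff parameter used in the paper is not reproduced here.

```python
import numpy as np

def remove_unusual_vectors(features, cutoff=None, percentile=97.0):
    """Drop feature vectors whose Euclidean norm exceeds a cutoff.  If no
    explicit cutoff is given, take it from a high percentile of the norm
    distribution so that roughly a few percent of vectors are removed."""
    norms = np.linalg.norm(features, axis=1)
    if cutoff is None:
        cutoff = np.percentile(norms, percentile)
    keep = norms <= cutoff
    return features[keep], keep

# usage sketch: 10,000 hypothetical 13-dimensional feature vectors
feats = np.random.randn(10000, 13)
kept, mask = remove_unusual_vectors(feats)
print(f"excluded {100.0 * (1.0 - mask.mean()):.1f}% of the vectors")
```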

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because there is no separation between the characters of texts written in Hindi as there is in English, Optical Character Recognition (OCR) systems developed for the Hindi language achieve very poor recognition rates. In this paper we propose an OCR for printed Hindi text in Devanagari script, using an Artificial Neural Network (ANN), which improves recognition efficiency. One of the major reasons for the poor recognition rate is error in character segmentation. The presence of touching characters in the scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps followed by a general OCR. The preprocessing tasks considered in this paper are conversion of grayscale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. In this work, three feature extraction techniques have been used to improve the rate of recognition: histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing. These feature extraction techniques are powerful enough to extract features of even distorted characters/symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, and a correct recognition rate of approximately 90% is achieved.
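
To make the listed feature types concrete, the snippet below computes projection histograms and per-column vertical zero-crossing counts from a binarized symbol image. It is a generic sketch of these classical features under simple assumptions (binary input, length normalization), not the authors' implementation.

```python
import numpy as np

def projection_and_zero_crossing_features(symbol):
    """symbol: 2-D binary array (1 = ink).  Returns the horizontal and
    vertical projection histograms plus the number of vertical
    zero crossings (ink/background transitions) in each column."""
    h, w = symbol.shape
    row_proj = symbol.sum(axis=1) / max(w, 1)    # projection onto rows
    col_proj = symbol.sum(axis=0) / max(h, 1)    # projection onto columns
    v_zero_crossings = np.abs(np.diff(symbol, axis=0)).sum(axis=0)
    return np.concatenate([row_proj, col_proj, v_zero_crossings])

# usage sketch on a toy 5x4 "symbol"
toy = np.array([[0, 1, 1, 0],
                [0, 1, 0, 0],
                [0, 1, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 1]])
print(projection_and_zero_crossing_features(toy))
```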

Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • 이도길;이상주;임희석;임해창
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.358-371
    • /
    • 2003
  • Automatic word spacing is the process of deciding correct boundaries between words in a sentence that contains spacing errors. It is very important for increasing readability and for communicating the accurate meaning of the text to the reader. Previous statistical approaches to automatic word spacing do not consider the previous spacing state and therefore cannot avoid estimating inaccurate probabilities. In this paper, we propose two statistical word spacing models that solve this problem. The proposed models are based on the observation that automatic word spacing can be regarded as a classification problem analogous to POS tagging. By generalizing hidden Markov models, the models can consider broader context and estimate more accurate probabilities. We evaluated the proposed models under a wide range of experimental conditions in order to compare them with the current state of the art, and we also provide a detailed error analysis of our models. The experimental results show that the proposed models achieve a syllable-unit accuracy of 98.33% and an Eojeol-unit precision of 93.06% under an evaluation method that considers compound nouns.
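
The following sketch illustrates the tagging view of word spacing: a Viterbi search over per-syllable space/no-space tags whose scores may depend on the previous tag, which is the kind of context the abstract says earlier statistical models ignore. The scoring interface is a placeholder; it is not the authors' generalized hidden Markov model.

```python
import numpy as np

def viterbi_spacing(syllables, score):
    """score(i, tag, prev_tag) -> log score of tag for syllable i
    (1 = insert a space after syllable i, 0 = no space), conditioned on
    the previous tag.  Returns the best tag sequence."""
    n = len(syllables)
    best = np.full((n, 2), -np.inf)
    back = np.zeros((n, 2), dtype=int)
    best[0, 0] = score(0, 0, None)
    best[0, 1] = score(0, 1, None)
    for i in range(1, n):
        for tag in (0, 1):
            cands = [best[i - 1, prev] + score(i, tag, prev) for prev in (0, 1)]
            back[i, tag] = int(np.argmax(cands))
            best[i, tag] = max(cands)
    tags = [int(np.argmax(best[-1]))]
    for i in range(n - 1, 0, -1):
        tags.append(int(back[i, tags[-1]]))
    return list(reversed(tags))
```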

Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • 권오욱
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.274-284
    • /
    • 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and lead to poor language models for Korean large-vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models in order to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain, with spaces and non-space characters generated on self-transitions and other transitions of the chain, respectively. The word segmentation of a sentence is then obtained by finding the maximum-likelihood path using syllable n-gram scores. In experiments, the algorithm achieved 91.58% word accuracy and 96.69% syllable accuracy for word segmentation of 254-sentence newspaper columns without any spaces. The algorithm improved word accuracy from 91.00% to 96.27% for word segmentation correction at line breaks and yielded a decomposition accuracy of 96.22% for compound-noun decomposition.
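
A minimal sketch of the segmentation idea, treating the space as an ordinary symbol emitted by the chain and scoring candidate spacings with syllable bigram log probabilities, is shown below. The beam search, the scoring callback, and the beam width are assumptions for illustration; the paper's trained n-gram models are not reproduced.

```python
def segment(sentence, logp_bigram, beam_width=16):
    """Insert spaces into an unspaced syllable string.  logp_bigram(a, b)
    returns the log probability of symbol b following symbol a, where the
    space character ' ' is treated as an ordinary symbol.  A simple beam
    search keeps the highest-scoring spacing hypotheses."""
    beams = [("", None, 0.0)]                # (output, previous symbol, log score)
    for syl in sentence:
        new_beams = []
        for out, prev, lp in beams:
            # hypothesis 1: no space before this syllable
            step = logp_bigram(prev, syl) if prev is not None else 0.0
            new_beams.append((out + syl, syl, lp + step))
            # hypothesis 2: a space before this syllable
            if prev is not None:
                step = logp_bigram(prev, " ") + logp_bigram(" ", syl)
                new_beams.append((out + " " + syl, syl, lp + step))
        beams = sorted(new_beams, key=lambda b: b[2], reverse=True)[:beam_width]
    return beams[0][0]
```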

Improvement of the Linear Predictive Coding with Windowed Autocorrelation (윈도우가 적용된 자기상관에 의한 선형예측부호의 개선)

  • Lee, Chang-Young;Lee, Chai-Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.2
    • /
    • pp.186-192
    • /
    • 2011
  • In this paper, we propose a new procedure for improving linear predictive coding. To reduce the error power incurred by the coding, we interchange the order of the two procedures of windowing the signal and linear prediction. This scheme corresponds to LPC extraction with a windowed autocorrelation. The proposed method requires more computation time because it requires a matrix inversion over more parameters, whereas the conventional technique can use the efficient Levinson-Durbin recursion with fewer parameters. Experimental tests over various speech phonemes showed, however, that our procedure yields about 5% less power distortion than the conventional technique. Consequently, the proposed method is preferable to the conventional technique as far as fidelity is concerned. In a separate speaker-dependent speech recognition test on 50 isolated words pronounced by 40 people, our approach also yielded better performance.
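
One plausible reading of "windowing after linear prediction" is a weighted-error (covariance-style) formulation whose normal-equation matrix is no longer Toeplitz, so a full solve is needed instead of the Levinson-Durbin recursion. The sketch below contrasts that with the conventional windowed-signal autocorrelation method; it is an interpretation for illustration, not the authors' code, and the Hamming window and prediction order are assumptions.

```python
import numpy as np

def lpc_conventional(x, order, window=np.hamming):
    """Conventional LPC: window the signal, form the (Toeplitz)
    autocorrelation normal equations, and solve for the coefficients."""
    xw = x * window(len(x))
    r = np.array([np.dot(xw[:len(xw) - k], xw[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def lpc_weighted_error(x, order, window=np.hamming):
    """Windowed-error LPC: weight the squared prediction error by the
    window, which yields non-Toeplitz normal equations requiring a
    general matrix solve."""
    n = len(x)
    w = window(n)
    phi = np.zeros((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            phi[i, j] = np.sum(w[order:] * x[order - i:n - i] * x[order - j:n - j])
    return np.linalg.solve(phi[1:, 1:], phi[1:, 0])

# usage sketch on a synthetic signal
x = np.cumsum(np.random.randn(400)) * 0.1
print(lpc_conventional(x, 8))
print(lpc_weighted_error(x, 8))
```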

Comparison of Male/Female Speech Features and Improvement of Recognition Performance by Gender-Specific Speech Recognition (남성과 여성의 음성 특징 비교 및 성별 음성인식에 의한 인식 성능의 향상)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.6
    • /
    • pp.568-574
    • /
    • 2010
  • In an effort to improve the speech recognition rate, we compared the performance of speaker-independent and gender-specific speech recognition. For this purpose, 20 male and 20 female speakers each pronounced 300 isolated Korean words, and the speech data were divided into four groups: female, male, and two mixed-gender groups. To examine the validity of gender-specific speech recognition, Fourier spectra and MFCC feature vectors averaged separately over male and female speakers were examined. The results showed a clear distinction between the two genders, which supports the motivation for gender-specific speech recognition. In speech recognition experiments, the error rate for the gender-specific case was less than 50% of that for the speaker-independent case. These results suggest that hierarchical recognition, in which gender classification is followed by gender-specific speech recognition, may yield better performance than the current method of speech recognition.
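
As a rough illustration of the kind of gender-wise feature comparison described (not the paper's experiment), the snippet below averages MFCC vectors over two groups of recordings so the per-coefficient differences can be inspected. The use of librosa and the directory layout are assumptions.

```python
import glob
import numpy as np
import librosa  # assumed available; any MFCC extractor would serve

def mean_mfcc(wav_paths, n_mfcc=13):
    """Average MFCC vector over all frames of all files in one group."""
    frames = []
    for path in wav_paths:
        y, sr = librosa.load(path, sr=None)
        frames.append(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T)
    return np.vstack(frames).mean(axis=0)

# hypothetical corpus layout with one folder per gender group
male_mean = mean_mfcc(glob.glob("corpus/male/*.wav"))
female_mean = mean_mfcc(glob.glob("corpus/female/*.wav"))
print("per-coefficient difference:", male_mean - female_mean)
```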

A Study on the Test Circuit Design and Development of Algorithm for Parallel RAM Testing (RAM의 병렬 테스팅을 위한 알고리듬개발 및 테스트회로 설계에 관한 연구)

  • 조현묵;백경갑;백인천;차균현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.666-676
    • /
    • 1992
  • In this paper, an algorithm and a testable circuit for finding all PSFs (Pattern Sensitive Faults) occurring in RAM are proposed. Conventional test circuits and algorithms took much time in testing because consecutive testing of RAM cells or the f-dimensional memory structure was not employed. In this paper, a methodology for parallel RAM testing is proposed by adding test circuitry to the conventional RAM circuit. The additional circuits are a parallel comparator, an error detector, a group selector circuit, and a modified decoder used for parallel testing. In addition, a constructive Eulerian-path method was used to obtain efficient test patterns. Consequently, if the algorithm proposed in this paper is used, the same number of operations as for 32sxwor4 lines is needed to test a b x w = n matrix RAM. Circuit simulation was performed, and a testable RAM of 10 bits x :If words was designed.
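
The abstract mentions constructing an Eulerian path to obtain an efficient test pattern. As a generic sketch of that classical construction (Hierholzer's algorithm), not the paper's specific test-pattern graph, the code below finds a path that traverses every edge of a connected undirected graph exactly once; how the graph is derived from memory-cell access patterns is not specified here.

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer's algorithm: return a vertex sequence that uses every
    edge of a connected undirected graph exactly once, starting at an
    odd-degree vertex if one exists (path) or anywhere (circuit)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next((v for v in adj if len(adj[v]) % 2 == 1), next(iter(adj)))
    stack, path = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            path.append(stack.pop())
    return path[::-1]

# toy example: every edge (state transition) is visited exactly once
print(eulerian_path([(0, 1), (1, 2), (2, 0), (0, 3)]))
```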
