• Title/Summary/Keyword: Words Error

Search results: 260

On the Measurement of the Depth and Distance from the Defocused Images using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬;김종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.6 / pp.886-898 / 1995
  • One of the ways to measure distance in computer vision is to use focus and defocus. There are two approaches. The first calculates the distance from the point at which an image is focused (MMDFP: the method measuring the distance to the focal plane). The second measures the distance from the difference between the camera parameters, in other words the apertures, of two images taken with different parameters (MMDCI: the method measuring the distance by comparing two images). The problem with existing MMDFP methods is deciding the threshold value for detecting the most sharply focused object in a defocused image; this can be solved by comparing only the error energy within a 3x3 window between the two images. In MMDCI, the difficulty is the influence of the deflection effect. Therefore, to minimize this influence, this paper utilizes two differently focused images instead of images with different apertures. First, the amount of defocusing between the two images is measured by introducing regularization, and then the distance from the camera to the objects is calculated by a new distance-measurement equation. Simulation results show that the distance can be measured from two differently defocused images, and that our approach is more robust than the different-aperture method for noisy images.
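
The 3x3 error-energy comparison mentioned above can be illustrated with a small sketch. This is a hypothetical simplification: the paper's actual pipeline also involves the regularization step and its distance equation, which are not reproduced here.

```python
def window_error_energy(img_a, img_b, ci, cj):
    """Sum of squared pixel differences between two images over a
    3x3 window centred at (ci, cj)."""
    energy = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            d = img_a[ci + di][cj + dj] - img_b[ci + di][cj + dj]
            energy += d * d
    return energy

# The window where the error energy between the two differently focused
# images is smallest marks the most consistently focused region.
```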


Updating Policy of Indoor Moving Object Databases for Location-Based Services: The Kalman Filter Method (위치기반서비스를 위한 옥내 이동객체 데이터베이스 갱신전략: 칼만 필터 방법)

  • Yim, Jae-Geol;Joo, Jae-Hun;Park, Chan-Sik;Gwon, Ki-Young;Kim, Min-Hye
    • The Journal of Information Systems / v.19 no.1 / pp.1-17 / 2010
  • This paper proposes an updating policy of indoor moving object databases (IMODB) for location-based services. Our method applies the Kalman filter to the recently collected position measurements to estimate the moving object's position and velocity at the moment of the most recent measurement, and extrapolates the current position from the estimated position and velocity. If the distance between the extrapolated current position and the measured current position is within a threshold, in other words if they are close, then we skip updating the IMODB. When the IMODB needs to know the moving object's position at a certain moment T, it applies the Kalman filter to the series of measurements received before T and extrapolates the position at T from the estimates obtained by the Kalman filter. To verify the efficiency of our updating method, we applied it to series of measured positions obtained with the fingerprinting indoor positioning method while actually walking through the test bed. In the analysis of the test results, we estimated the communication saving rate of our method and the error increment rate caused by the communication saving.
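
The update-skipping policy can be sketched as follows. For brevity this uses a one-dimensional fixed-gain (alpha-beta) filter as a stand-in for the paper's full Kalman filter; the gains and the threshold are illustrative assumptions, not values from the paper.

```python
def predict_and_update(x, v, z, dt=1.0, alpha=0.85, beta=0.005):
    """One cycle of a 1-D constant-velocity fixed-gain (alpha-beta)
    filter, a simplified stand-in for the paper's Kalman filter.
    Returns the corrected position/velocity and the raw prediction."""
    x_pred = x + v * dt            # extrapolate with current estimates
    r = z - x_pred                 # residual against the new measurement
    x_new = x_pred + alpha * r     # blend prediction and measurement
    v_new = v + (beta / dt) * r
    return x_new, v_new, x_pred

def needs_db_update(x_pred, z, threshold=2.0):
    """Report to the IMODB only when prediction and measurement disagree
    by more than the threshold; otherwise the update is skipped."""
    return abs(x_pred - z) > threshold
```

For a steadily moving object the prediction stays close to the measurements, so most reports are skipped; the communication saving rate is the fraction of skipped reports.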

A Study on the Detection of Arcing Faults in Transmission Lines and Development of Fault Distance Estimation Software using MATLAB (MATLAB을 이용한 송전선로의 아크사고 검출 및 고장거리 추정 소프트웨어 개발에 관한 연구)

  • Kim, Byeong-Cheon;Park, Nam-Ok;Kim, Dong-Su;Kim, Gil-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers A / v.51 no.4 / pp.163-168 / 2002
  • This paper presents a new, very efficient numerical algorithm for arcing-fault detection and fault distance estimation in transmission lines. It is based on the fundamental differential equations describing the transients on a transmission line before, during, and after the fault occurrence, and on the application of the least error squares technique for estimating the unknown model parameters. If the estimated arc voltage is near zero, the fault is without arc, in other words a permanent fault. If the estimated arc voltage has a high value, the fault is identified as an arcing, i.e., transient, fault. In the case of permanent faults, fault distance estimation is necessary. This paper uses a model of the arcing fault on a transmission line with a ZnO arrester and a resistance, implemented within EMTP. One purpose of this study is to build a framework for modeling the arcing-fault detection and fault distance estimation algorithm in Matlab. The algorithm has been given a graphical user interface (GUI).
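
The least-error-squares parameter estimation can be sketched for a two-parameter model. The toy fault model v = R·i + V_arc·sign(i) and the numbers below are illustrative assumptions, not the paper's transmission-line differential equations.

```python
def least_error_squares(rows, b):
    """Solve min ||A x - b|| for two unknowns via the normal equations.
    rows: list of (a0, a1) regressor pairs; b: list of observations."""
    s00 = s01 = s11 = t0 = t1 = 0.0
    for (a0, a1), y in zip(rows, b):
        s00 += a0 * a0; s01 += a0 * a1; s11 += a1 * a1
        t0 += a0 * y;   t1 += a1 * y
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det,   # first parameter
            (s00 * t1 - s01 * t0) / det)   # second parameter

def classify_fault(arc_voltage, tol=1.0):
    """Near-zero estimated arc voltage -> permanent fault; otherwise
    an arcing (transient) fault."""
    return "permanent" if abs(arc_voltage) < tol else "arcing"
```

Fitting synthetic measurements generated from v = 2·i + 50·sign(i) recovers R = 2 and V_arc = 50, which the classifier then labels as an arcing fault.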

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System

  • Zang, Xian;Chong, Kil-To
    • Proceedings of the IEEK Conference / 2009.05a / pp.37-39 / 2009
  • In speech recognition research, locating the beginning and end of an utterance against background noise is of great importance. Background noise present during recording introduces disturbance, while we want only the stationary parameters representing the corresponding speech section; in particular, a major source of error in automatic isolated-word recognition systems is inaccurate detection of the beginning and ending boundaries of test and reference templates, so a robust method is needed to remove the unnecessary regions of a speech signal. The conventional methods for speech endpoint detection are based on two simple time-domain measurements, short-time energy and short-time zero-crossing rate, which cannot guarantee precise results in low signal-to-noise-ratio environments. This paper proposes a novel approach that computes the Lyapunov exponent of the time-domain waveform. The proposed method needs no frequency-domain parameters for the endpoint detection process, e.g. Mel-scale features, which have been introduced in other papers. Compared with the conventional methods based on short-time energy and short-time zero-crossing rate, the approach based on time-domain Lyapunov exponents (LEs) has low complexity and is suitable for digital isolated word recognition systems.


An Experimental Study of Tire-Road Friction Coefficient by Transient Brake Time (실차 실험을 통한 제동순시간에 의한 타이어-노면마찰계수에 관한 연구)

  • Han, Chang-Pyoung;Park, Kyoung-Suk;Choi, Myung-Jin;Lee, Jong-Sang;Shin, Un-Gyu
    • Journal of the Korean Society for Precision Engineering / v.24 no.7 s.196 / pp.106-111 / 2007
  • In this paper, the transient brake time was studied on a van-type vehicle equipped with an accelerometer. Experiments were carried out on asphalt (new and polished) and unpaved roads (earth and gravel), under both wet and dry conditions. The transient brake time is not affected by the vehicle speed. It is about 0.41~0.43 s on the asphalt road surface, with an error range within 0.1~0.16 s. On asphalt, the transient brake time is unaffected by whether the surface is new or polished. Comparing dry and wet surfaces, the transient brake time is longer on a wet road than on a dry one; comparing unpaved and paved roads, it is shorter on the unpaved road. It is considered that the transient brake time depends on the road-surface friction coefficient; in other words, the transient brake time increases as the friction coefficient decreases.

A Comparative Study of Software Finite-Failure NHPP Models Considering Inverse Rayleigh and Rayleigh Distribution Properties (역-레일리와 레일리 분포 특성을 이용한 유한고장 NHPP모형에 근거한 소프트웨어 신뢰성장 모형에 관한 비교연구)

  • Shin, Hyun Cheul;Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.3 / pp.1-9 / 2014
  • The inverse Rayleigh and Rayleigh distribution models have been widely used in the reliability field. This paper applies finite-failure NHPP models as software reliability growth models. In other words, large changes occur in the course of modifying software, and the occurrence of defects is an almost inevitable reality. Finite-failure NHPP software reliability models in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes inverse Rayleigh and Rayleigh software reliability growth models and examines their efficiency for software reliability applications. Parameters were estimated using the maximum likelihood estimator with the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination (R²). To ensure the reliability of the data, the Laplace trend test was employed. In many respects, the Rayleigh distribution model proved more efficient than the inverse Rayleigh distribution model. From this work, software developers should choose a growth model using prior knowledge of the software to identify failure modes, which can be helpful.
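
The two mean value functions being compared can be sketched as follows. The finite-failure NHPP form m(t) = θ·F(t), with F the assumed fault-detection-time CDF, is standard; the parameter values below are illustrative assumptions.

```python
import math

def m_rayleigh(t, theta, sigma):
    """Mean value function of a finite-failure NHPP with a Rayleigh
    fault-detection time: m(t) = theta * (1 - exp(-t^2 / (2 sigma^2)))."""
    return theta * (1.0 - math.exp(-t * t / (2.0 * sigma * sigma)))

def m_inverse_rayleigh(t, theta, lam):
    """Counterpart using the inverse Rayleigh CDF exp(-lam / t^2)."""
    return theta * math.exp(-lam / (t * t))

def mse(times, cum_failures, m, *params):
    """Mean squared error between the fitted curve and observed
    cumulative failure counts, one of the paper's selection criteria."""
    return sum((m(t, *params) - y) ** 2
               for t, y in zip(times, cum_failures)) / len(times)
```

Given failure-count data, one would estimate θ and the scale parameter by maximum likelihood (e.g. solving the score equation by bisection, as in the paper) and keep the model with the lower MSE and higher R².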

The Impact of Foreign Exchange Rates on International Travel: The Case of South Korea

  • Lee, Jung-Wan
    • Journal of Distribution Science / v.10 no.9 / pp.5-11 / 2012
  • Purpose - The objective of the paper is to explain both the price sensitivity of international tourists to South Korea and the price sensitivity of Korean tourists to international travel. The study examines long-run equilibrium relationships and Granger causal relationships between foreign exchange rates and inbound and outbound tourism demand in South Korea. Research design/data/methodology - The study employs monthly time series data from January 1990 to September 2010. The paper examines the long-run equilibrium relationship using the Johansen cointegration test approach after unit root tests. Short-run Granger causality was tested using the vector error correction model with the Wald test. Results - Hypothesis 1, testing whether there is a long-run equilibrium relationship between exchange rates and inbound and outbound tourism demand, is supported. Hypothesis 2, testing whether exchange rates lead to a change in tourist arrivals and expenditure, is not supported. Hypothesis 3, testing whether exchange rates lead to a change in tourist departures and expenditure, is supported. Conclusions - The findings of this study show that the impact of tourism price competitiveness is changing quite significantly with regard to destination competitiveness. In other words, the elasticity of tourism demand with respect to tourism price has been moderated.


Error Correction in Korean Morpheme Recovery using Deep Learning (딥 러닝을 이용한 한국어 형태소의 원형 복원 오류 수정)

  • Hwang, Hyunsun;Lee, Changki
    • Journal of KIISE / v.42 no.11 / pp.1452-1458 / 2015
  • Korean morphological analysis is a difficult process. Because Korean is an agglutinative language, one of the most important steps in morphological analysis is morpheme recovery. Methods using heuristic rules and pre-analyzed partial words have been examined for this step, but their performance is limited because they do not use contextual information. In this study, we built a Korean morpheme recovery system using deep learning, which uses word embeddings to exploit contextual information. In recovering the morphemes '들/VV' and '듣/VV', the system showed 97.97% accuracy, outperforming an SVM (Support Vector Machine), which showed 96.22% accuracy.

A stemming algorithm for a Korean language free-text retrieval system (자연어검색시스템을 위한 스태밍알고리즘의 설계 및 구현)

  • 이효숙
    • Journal of the Korean Society for Information Management / v.14 no.2 / pp.213-234 / 1997
  • A stemming algorithm for a Korean language free-text retrieval system has been designed and implemented. The algorithm contains three major parts and operates iteratively: first, stop-words are removed using a stop-word list; second, a basic removal procedure proceeds with rule table 1, which contains suffixes, postpositional particles, and optional symbols specifying each stemming action; third, extended stemming and rewriting procedures continue with rule table 2, which is composed of suffixes and optionally combined symbols representing various actions depending on context-sensitive rules. A test was carried out to gauge how successful the algorithm was and to identify minor changes that would enhance it. As a result, 21.4% compression was achieved with an error rate of 15.9%.
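
The three-part iterative procedure could be sketched as below. The miniature stop-word list and rule tables are hypothetical placeholders; the paper's actual tables encode the full particle/suffix inventory together with context-sensitive action symbols.

```python
# Hypothetical miniature tables; the paper's tables are far larger.
STOP_WORDS = {"의", "것"}
RULE_TABLE_1 = ["에서", "은", "는", "이", "가", "을", "를"]  # particles/suffixes
RULE_TABLE_2 = [("들", ""), ("하기", "하")]                   # rewriting rules

def stem(token):
    """Stop-word removal, then iterative basic and extended stripping."""
    if token in STOP_WORDS:                      # part 1: stop-word list
        return ""
    changed = True
    while changed:                               # iterate until stable
        changed = False
        for suf in RULE_TABLE_1:                 # part 2: basic removal
            if len(token) > len(suf) and token.endswith(suf):
                token = token[:-len(suf)]
                changed = True
        for suf, repl in RULE_TABLE_2:           # part 3: rewriting
            if len(token) > len(suf) and token.endswith(suf):
                token = token[:-len(suf)] + repl
                changed = True
    return token
```

A rule-table stemmer like this inevitably overstems some words whose final syllable coincides with a particle, which is the kind of behaviour reflected in the reported 15.9% error rate.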


Experimental Analysis of Correct Answer Characteristics in Question Answering Systems (질의응답시스템에서 정답 특징에 관한 실험적 분석)

  • Han, Kyoung-Soo
    • Journal of Digital Contents Society / v.19 no.5 / pp.927-933 / 2018
  • One of the factors with the greatest influence on the error of a question answering system, which finds and provides answers to natural language questions, is the step of retrieving documents or passages that contain correct answers. To improve retrieval performance, it is necessary to understand the characteristics of documents and passages containing correct answers. This paper experimentally analyzes how many question words appear in correct-answer documents, how the locations of the question words are distributed, and how similar the topics of the question and the correct-answer document are, using a corpus composed of questions, documents with correct answers, and documents without correct answers. This study explains the causes of previously reported retrieval results for question answering systems and discusses the elements needed for an effective retrieval step.
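
The first two measurements described above (how many question words a document contains, and where in the document they occur) can be sketched as a simple corpus statistic. The whitespace tokenization and the toy example are assumptions for illustration.

```python
def question_term_stats(question_terms, doc_tokens):
    """Fraction of distinct question terms found in the document, and
    the relative positions (0 = start, 1 = end) of every occurrence."""
    terms = set(question_terms)
    hits = [i for i, tok in enumerate(doc_tokens) if tok in terms]
    coverage = len({doc_tokens[i] for i in hits}) / len(terms)
    last = len(doc_tokens) - 1
    positions = [i / last for i in hits] if last > 0 else []
    return coverage, positions
```

Aggregating these statistics separately over correct-answer and non-answer documents shows how strongly question-term coverage and position distribution correlate with a document actually containing the answer.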