• Title/Summary/Keyword: trend algorithm


A Study for Removing Road Shields from Mobile Mapping System of the Laser Data using RTF Filtering Techniques (RTF 필터링을 이용한 모바일매핑시스템 레이저 데이터의 도로 장애물 제거에 관한 연구)

  • Song, Hyun-Kun;Kang, Byoung-Ju;Lee, Sung-Hun;Choi, Yun-Soo
    • Journal of Korean Society for Geospatial Information Science / v.20 no.1 / pp.3-12 / 2012
  • It is a global trend to pay attention to generating precise 3D navigation maps, since eco-friendly vehicles have become a critical issue due to environmental protection and the depletion of fossil fuels. To date, the Mobile Mapping System (MMS) has been proposed as the most effective method of acquiring the data for generating such 3D navigation maps. In this study, the basic RTF filter algorithm was modified to fit MMS data; quantitative analysis showed that ground points were extracted with accuracies of 99.71% and 99.95%, so a high-precision 3D road DEM could be created. In addition, road obstructions such as vehicles on the road, roadside trees, and median strips were removed effectively, so the approach can be expected to be useful in practical applications and to improve operational efficiency.
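
The ground/non-ground separation the abstract describes can be illustrated with a minimal height-based point filter. This is a hypothetical simplification (grid cells, lowest-point reference, a fixed height tolerance), not the paper's actual RTF algorithm:

```python
import numpy as np

def simple_ground_filter(points, cell=1.0, height_tol=0.3):
    """Classify points as ground if they lie within height_tol of the
    lowest point in their grid cell -- a crude stand-in for RTF filtering.

    points: (N, 3) array of x, y, z coordinates.
    Returns a boolean mask, True for ground points."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for i, key in enumerate(map(tuple, xy)):
        cells.setdefault(key, []).append(i)
    ground = np.zeros(len(points), dtype=bool)
    for idx in cells.values():
        zmin = points[idx, 2].min()            # lowest point in this cell
        for i in idx:
            ground[i] = points[i, 2] - zmin <= height_tol
    return ground
```

Points well above the local minimum (vehicles, trees, median strips) are flagged as non-ground and can be removed before building a DEM.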

A Study on TCP-friendly Congestion Control Scheme using Hybrid Approach for Multimedia Streaming in the Internet (인터넷에서 멀티미디어 스트리밍을 위한 하이브리드형 TCP-friendly 혼잡제어기법에 관한 연구)

  • 조정현;나인호
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.837-840 / 2003
  • Recently, multimedia streaming traffic such as digital audio and video on the Internet has increased tremendously. Unlike TCP, the UDP protocol, which has been used to transmit streaming traffic through the Internet, does not apply any congestion control mechanism to regulate the data flow through the shared network. If this trend goes unchecked, such traffic will affect the performance of TCP, which is used to transport data traffic, and may lead to congestion collapse of the Internet. To avoid any adverse effect on current Internet functionality, a new protocol that modifies or adds functionality to the existing transport protocols for transmitting streaming traffic on the Internet needs to be studied. TCP-friendly congestion control mechanisms are classified into window-based and rate-based congestion control schemes. In this paper, we propose an algorithm for improving the transmission rate in a hybrid TCP-friendly congestion control scheme that combines window-based and rate-based congestion control for multimedia streaming on the Internet.
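
Rate-based TCP-friendly schemes of the kind this abstract classifies typically derive their sending rate from the simple TCP throughput equation; the sketch below shows that equation only (the paper's own hybrid scheme is not reproduced here):

```python
from math import sqrt

def tcp_friendly_rate(packet_size, rtt, loss_rate):
    """Simple TCP throughput equation (Mathis et al. form):
    the rate in bytes/s a conformant TCP flow would achieve given
    packet size (bytes), round-trip time (s), and loss event rate."""
    return packet_size / (rtt * sqrt(2.0 * loss_rate / 3.0))

# e.g. 1000-byte packets, 100 ms RTT, 1% loss -> roughly 122 kB/s
rate = tcp_friendly_rate(1000, 0.1, 0.01)
```

A rate-based sender recomputes this bound as it observes new RTT and loss measurements, keeping its streaming rate no higher than a competing TCP flow's.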


Experiment and Implementation of a Machine-Learning Based k-Value Prediction Scheme in a k-Anonymity Algorithm (k-익명화 알고리즘에서 기계학습 기반의 k값 예측 기법 실험 및 구현)

  • Muh, Kumbayoni Lalu;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.9 no.1 / pp.9-16 / 2020
  • The k-anonymity scheme has been widely used to protect private information when big data are distributed to a third party for research purposes. When the scheme is applied, determining an optimal k value is one of the difficult problems to be resolved, because many factors must be considered. Currently, the determination has been done almost manually by human experts using their intuition. This degrades the performance of the anonymization, and the task takes much time and cost. To overcome this problem, a simple idea based on machine learning has been proposed. This paper describes implementations and experiments that realize the proposed idea. In this work, a deep neural network (DNN) is implemented using TensorFlow libraries, and it is trained and tested on an input dataset. The experimental results show that the trend of training errors follows a typical DNN pattern, but the validation errors of our model follow a pattern different from the one seen in a typical training process. The advantage of the proposed approach is that it can reduce the time and cost for experts to determine the k value, because the determination can be done semi-automatically.
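
The k-value prediction idea can be sketched as a small regression network. The following is a toy NumPy stand-in (synthetic data, hypothetical input features such as record count and number of quasi-identifiers) rather than the paper's TensorFlow implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset features mapped to a suitable k value;
# synthetic data stands in for the paper's real training set.
X = rng.uniform(0, 1, size=(256, 2))
y = (3.0 * X[:, 0] + 1.5 * X[:, 1]).reshape(-1, 1)

# One hidden layer, ReLU, mean-squared error -- a toy stand-in
# for the paper's deeper TensorFlow DNN.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = float(((pred0 - y) ** 2).mean())

for _ in range(500):
    h, pred = forward(X)
    grad = 2.0 * (pred - y) / len(X)   # dL/dpred for the MSE loss
    gW2 = h.T @ grad; gb2 = grad.sum(0)
    gh = grad @ W2.T
    gh[h <= 0] = 0.0                   # ReLU gradient: zero where inactive
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred1 = forward(X)
loss1 = float(((pred1 - y) ** 2).mean())
```

Training drives the error down so that, given a new dataset's features, the network emits a k value without a human expert in the loop.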

A Study on Teaching Estimation for Error Correction in Calculation with a Computer or Calculator (Computer나 Calculator를 이용한 계산에서 오류 교정을 위한 어림셈 지도에 관한 연구)

  • Gang Si Jung
    • The Mathematical Education / v.29 no.1 / pp.21-34 / 1990
  • This is a study on the instruction of estimation for error correction in calculation with a computer or a calculator. The aim of this study is to survey a new aspect of calculation teaching and the teaching strategy of estimation, and finally to frame a new curriculum model of estimation instruction. This research required a year, and its outcomes can be listed as follows: 1. The social utility of estimation was made clear, and a new trend of calculation teaching related to estimation instruction was shown. 2. The definition of estimation was given, and actual examples of conducting an estimation among pupils in lower grades were given so that they could have abundant experience. 3. Ways of finding estimated values for fractions and decimal fractions were presented so that pupils would be able to conduct an estimation. 4. The following were given as basic strategies for estimation: 1) front-end strategy, 2) clustering strategy, 3) rounding strategy, 4) compatible numbers strategy, 5) special numbers strategy. 5. In the instruction of estimation, the meaning, method, and process of calculation and calculating algorithms were reviewed for the cultivation of children's creativity through promoting their basic skills, mathematical thinking, and problem-solving ability. 6. The following were also covered as estimation strategies for measurement: 1) cultivating the sense of quantity regarding the size of a unit, 2) estimating the total quantity by frequent repetition of a unit quantity, 3) estimating length and volume by weighing, 4) estimating an unknown quantity based on a quantity already known, 5) estimating area by means of equivalent-area transformation. 7. Ways of instructing mental computation were presented. 8. Reviews were made of the curricula and textbook contents concerning estimation instruction in both Korea and Japan, and a new curriculum model was devised with reference to estimation instruction data from the United States.
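
Two of the computational-estimation strategies the abstract lists, front-end and rounding, can be sketched directly (assuming positive integer addends):

```python
def front_end_estimate(numbers):
    """Front-end strategy: keep only each addend's leading
    (highest place-value) digit, then add."""
    def front(n):
        s = str(n)
        return int(s[0]) * 10 ** (len(s) - 1)
    return sum(front(n) for n in numbers)

def rounding_estimate(numbers, place=10):
    """Rounding strategy: round each addend to the nearest
    multiple of `place` before adding."""
    return sum(round(n / place) * place for n in numbers)

# 347 + 528 + 221 = 1096; both strategies give 1000 as a quick check
estimate = front_end_estimate([347, 528, 221])
```

A pupil comparing the machine's answer against such an estimate can catch gross keying errors, which is the error-correction role the study assigns to estimation.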


The Study for ENHPP Software Reliability Growth Model based on Burr Coverage Function (Burr 커버리지 함수에 기초한 ENHPP소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.12 no.4 / pp.33-42 / 2007
  • Accurate prediction of software release times and estimation of the reliability and availability of a software product require quantification of a critical element of the software testing process: test coverage. The resulting model is called the enhanced non-homogeneous Poisson process (ENHPP). In this paper, the exponential coverage and S-shaped models are reviewed, and the Kappa coverage model is proposed as an efficient application for software reliability. The maximum likelihood estimator and the bisection method were used as the algorithm to estimate the parameters, and model selection based on the SSE statistic and the Kolmogorov distance was employed for the sake of an efficient model. From the analysis of mission time using the NTDS data, the results of this comparative study show that the Burr coverage model performs better than the exponential coverage and S-shaped models. An analysis of the failure data comparing the Kappa coverage model with the existing models (using arithmetic and Laplace trend tests and bias tests) is presented.
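
The parameter-estimation step this abstract names, a maximum likelihood equation solved by the bisection method, can be sketched generically. The exponential score equation used below is an illustrative stand-in, not the ENHPP likelihood itself:

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of f on [lo, hi] by bisection.
    f(lo) and f(hi) must differ in sign."""
    flo = f(lo)
    assert flo * f(hi) <= 0, "root not bracketed"
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or hi - lo < tol:
            return mid
        if flo * fmid <= 0:      # sign change in the left half
            hi = mid
        else:                    # sign change in the right half
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Example: the score equation of an exponential failure-time model,
# f(b) = n/b - sum(t_i); its root is the MLE b = n / sum(t_i).
times = [1.0, 2.0, 3.0]
b_hat = bisect(lambda b: len(times) / b - sum(times), 1e-3, 10.0)
```

The same root-finding loop applies to any single-parameter likelihood equation once the parameter is bracketed.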


Inflow Forecasting Using Fuzzy-Grey Model (Fuzzy-Grey 모형을 이용한 유입량 예측)

  • Kim, Yong;Yi, Choong Sung;Kim, Hung Soo;Shim, Myung Pil
    • Proceedings of the Korea Water Resources Association Conference / 2004.05b / pp.759-764 / 2004
  • This study forecast the monthly inflow to the Seomjin River Dam using the Grey model proposed by Deng (1989) and presented the forecasting method. Compared with time-series and other models, the Grey model has the advantages of using relatively few data and consisting of simple formulas, but because of the small number of data it is prone to errors arising from the increasing or decreasing trend of the input data. To overcome this forecasting error, a Fuzzy-Grey model combining a fuzzy system with the Grey model was constructed, and a genetic algorithm (GA) was used to estimate the parameters required by the fuzzy system. The fuzzy system combined with the Grey model infers the forecasting error of the current input data from past data whose pattern is most similar to that of the current input. To infer the error, past monthly inflow records with patterns similar to the current input were searched using the grey relational grade, a norm was used to select the patterns with higher similarity, and the search space of the genetic algorithm was restricted. With the Fuzzy-Grey model constructed in this way, the monthly inflows of the Seomjin River Dam were forecast for 1992, 1988, and 2001, which were nationwide drought years. Errors of similar magnitude occurred, in the order of 1992, 2001, and 1988; analysis of the results shows that large errors occurred where there were abrupt changes in monthly inflow, but despite the large uncertainty of monthly inflow in drought years, the trend of monthly inflow was forecast relatively well. Because the Fuzzy-Grey model applied in this study forecasts with few data and uses an updating scheme in which forecast results are fed back as input, any forecast error that is not fully corrected propagates to the following results, so error correction proved to be quite important. To correct errors more effectively, increasing the number of fuzzy rules used in the fuzzy control and using a two-variable Fuzzy-Grey model linked with rainfall, which directly affects inflow, should make more accurate inflow forecasting possible.
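
The grey component of the model, Deng's GM(1,1), can be sketched as follows. This covers only the grey forecast; the paper's fuzzy error-correction system and genetic-algorithm tuning sit on top of it:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model (Deng): fit on the series x0 and forecast
    `steps` values ahead. A minimal sketch of the grey part only."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) values
    # Least-squares fit of the grey equation x0[k] = -a*z1[k] + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    def x1_hat(k):                          # fitted accumulated series
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    # De-accumulate to recover forecasts of the original series
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

With only four observations the model already tracks a smooth growth trend, which is why it suits the short hydrological records the study works with.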


Outlier prediction in sensor network data using periodic pattern (주기 패턴을 이용한 센서 네트워크 데이터의 이상치 예측)

  • Kim, Hyung-Il
    • Journal of Sensor Science and Technology / v.15 no.6 / pp.433-441 / 2006
  • Because of the low power and low data rate of a sensor network, outliers frequently occur in its time-series data. In this paper, we suggest a periodic pattern analysis that is applied to the time-series data of a sensor network, and we predict the outliers that exist in such data. A periodic pattern is the minimum period of time in which the trend of values in the data appears continuously and repeatedly. In this paper, quantization and smoothing are applied to the time-series data in order to analyze the periodic pattern, and the fluctuation between adjacent values in the smoothed data is measured so that the data can be reduced to a simple form. Then the periodic pattern is abstracted from this simplified data, and the time series is restructured according to the periods to produce periodic pattern data. In the experiment, machine learning is applied to the periodic pattern data to predict outliers, and the results are examined. A characteristic of the periodic pattern analysis in this paper is that the periods are analyzed not according to the size of the data values but according to their fluctuation; the analysis is therefore robust to outliers. It is also possible to express values of the time attribute as values within a time period by restructuring the time series into a periodic pattern. Thus, the time attribute can be used even in general machine learning algorithms that cannot learn time-series data directly.
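
The period-based outlier detection can be illustrated with a simple median-deviation rule per position within the period. This is a hypothetical simplification of the paper's quantization/smoothing and machine-learning pipeline:

```python
import numpy as np

def periodic_outliers(series, period, threshold=3.0):
    """Flag points that deviate by more than `threshold` from the
    median value of their position (phase) within the period."""
    x = np.asarray(series, dtype=float)
    phases = np.arange(len(x)) % period
    flags = np.zeros(len(x), dtype=bool)
    for p in range(period):
        vals = x[phases == p]
        med = np.median(vals)               # robust per-phase reference
        flags[phases == p] = np.abs(vals - med) > threshold
    return flags
```

Because each point is compared only against readings at the same phase of the cycle, a daily temperature swing is not mistaken for an anomaly, while a single corrupted reading stands out.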

The Study for NHPP Software Reliability Growth Model based on Exponentiated Exponential Distribution (지수화 지수 분포에 의존한 NHPP 소프트웨어 신뢰성장 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.11 no.5 s.43 / pp.9-18 / 2006
  • Finite-failure NHPP models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, the Goel-Okumoto and Yamada-Ohba-Osaki models are reviewed, and a reliability model based on the exponentiated exponential distribution (the two-parameter shape distribution illustrated by Gupta and Kundu (2001)) is proposed as an efficient substitute for the gamma and Weibull models. The maximum likelihood estimator and the bisection method were used as the algorithm to estimate the parameters, and model selection based on the SSE and AIC statistics and the Kolmogorov distance was employed for the sake of an efficient model. An analysis of failures using the NTDS data set was carried out to propose the shape parameter of the exponentiated exponential distribution. This analysis of failure data, comparing the exponentiated exponential distribution model with the existing models (using arithmetic and Laplace trend tests and bias tests), is presented.


The Study for NHPP Software Reliability Model based on Chi-Square Distribution (카이제곱 NHPP에 의한 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.45-53 / 2006
  • Finite-failure NHPP models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, the Goel-Okumoto and Yamada-Ohba-Osaki models are reviewed, and a $\chi^2$ reliability model, which can capture the increasing nature of the failure occurrence rate per fault, is proposed. The maximum likelihood estimator and the bisection method were used as the algorithm to estimate the parameters, and model selection based on the SSE and AIC statistics and the Kolmogorov distance was employed for the sake of an efficient model. An analysis of failures using a real data set, SYS2 (Allen P. Nikora and Michael R. Lyu), was carried out to propose the shape parameter of the $\chi^2$ distribution using the degrees of freedom. This analysis of failure data, comparing the $\chi^2$ model with the existing models using arithmetic and Laplace trend tests and the Kolmogorov test, is presented.


The Parallel Corpus Approach to Building the Syntactic Tree Transfer Set in the English-to-Vietnamese Machine Translation

  • Dien Dinh;Ngan Thuy;Quang Xuan;Nam Chi
    • Proceedings of the IEEK Conference / summer / pp.382-386 / 2004
  • Recently, with the machine learning trend, most machine translation systems all over the world use two syntactic tree sets of the two relevant languages to learn syntactic tree transfer rules. However, for the English-Vietnamese language pair, this approach has been impossible because, until now, there has been no Vietnamese syntactic tree set corresponding to the English one. Building a very large corresponding Vietnamese syntactic tree set (thousands of trees) requires much work and the investment of specialists in linguistics. To take advantage of our available English-Vietnamese Corpus (EVC), which was tagged with word alignments, we chose the SITG (Stochastic Inversion Transduction Grammar) model to construct English-Vietnamese syntactic tree sets automatically. This model is used to parse the two languages at the same time and then carry out the syntactic tree transfer. This English-Vietnamese bilingual syntactic tree set is the basic training data for transferring automatically from English syntactic trees to Vietnamese ones with machine learning models. We tested the syntactic analysis on over 10,000 of the 500,000 sentences of our English-Vietnamese bilingual corpus, and the first stage yielded an encouraging result (about 80% analyzed) [5]. We made use of the TBL (Transformation-Based Learning) algorithm to carry out automatic transformations from English syntactic trees to Vietnamese ones based on that parallel syntactic tree transfer set [6].
