• Title/Abstract/Keyword: Reduced data

6,434 results found (processing time: 0.04 s)

이동통신망에서 예측 위치 등록 정책을 통한 위치관리 비용 감소 효과 분석 (An Analysis of Location Management Cost by Predictive Location Update Policy in Mobile Cellular Networks)

  • 고한성;장인갑;홍정식;이창훈
    • Korean Operations Research and Management Science Society: Conference Proceedings
    • /
    • Proceedings of the 2007 KORMS Fall Conference and Annual Meeting
    • /
    • pp.388-394
    • /
    • 2007
  • In wireless networks, we propose a predictive location update scheme that considers the mobile user's (MU's) mobility patterns. An MU's mobility patterns can be discovered from its movement history data. The prediction accuracy and model complexity depend on how much of the history data is applied: the more data we use, the more accurate the prediction becomes, so the location management cost is reduced, but the complexity of the model increases. In this paper, we classify MUs' mobility patterns into four types. For each type, we find the respective optimal amount of history data to apply and the predictive location area by simulation. The optimal amounts are shown to differ across the four types. When more than three steps of history data are applied, the simulation time and data storage are shown to increase very steeply.

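The history-based prediction idea in the abstract above can be sketched as a simple order-k predictor that counts which cell follows each length-k context in the movement history. All class and variable names below are hypothetical; this illustrates the general idea, not the authors' model. Increasing k plays the role of applying more history data: accuracy can improve, but the context table (and hence storage) grows.

```python
from collections import Counter, defaultdict

class HistoryPredictor:
    """Order-k movement predictor: counts which cell follows each
    length-k context in the movement history (hypothetical sketch)."""
    def __init__(self, k):
        self.k = k
        self.table = defaultdict(Counter)

    def train(self, history):
        for i in range(len(history) - self.k):
            context = tuple(history[i:i + self.k])
            self.table[context][history[i + self.k]] += 1

    def predict(self, recent):
        context = tuple(recent[-self.k:])
        if context not in self.table:
            return None  # no history for this context
        return self.table[context].most_common(1)[0][0]

# A commuter loop A->B->C->A...: order-1 history suffices here.
p = HistoryPredictor(k=1)
p.train(list("ABCABCABCABC"))
print(p.predict(["C"]))   # -> 'A'
```

In this sketch, k corresponds to the "number of applications of history data"; the steep growth in storage that the abstract reports shows up here as growth in the number of distinct contexts.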

측정데이터의 효율적 감소를 위한 Delaunay 삼각형 분할의 적용 (Delaunay triangulation for efficient reduction of measured point data)

  • 허성민;김호찬;이석희
    • Korean Society for Precision Engineering: Conference Proceedings
    • /
    • Proceedings of the KSPE 2001 Spring Conference
    • /
    • pp.53-56
    • /
    • 2001
  • Reverse engineering has been widely used for reconstructing the shape of an object without CAD data; it includes steps such as scanning a clay or wood model and generating manufacturing data in an STL file. A new approach that removes point data using Delaunay triangulation is introduced to deal with the size problems of STL files and the resulting difficulties in operating the RP process. This approach can reduce the number of measured points from a laser scanner within a specified tolerance, and it thus avoids both the time spent handling point data during the modeling process and the time spent verifying and slicing the STL model during the RP process. The developed software enables the user to specify the criterion for selecting groups of triangles either by the angle between triangles or by the percentage of triangles to be reduced, and the resulting accurate RP models are helpful for an automated process.

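The angle criterion described above can be illustrated in a reduced 2-D setting: instead of comparing adjacent Delaunay triangles on a surface, the sketch below drops polyline points whose turning angle is below a tolerance. This is a hedged analogy to the paper's triangle-angle criterion, not its implementation; all names are hypothetical.

```python
import math

def decimate_by_angle(points, tol_deg):
    """Keep a point only if the direction change (turning angle) at it
    exceeds tol_deg; endpoints are always kept. 2-D analog of the
    angle-between-triangles criterion (sketch)."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(a2 - a1))
        turn = min(turn, 360 - turn)       # wrap to [0, 180]
        if turn > tol_deg:
            kept.append(cur)               # feature point: keep
    kept.append(points[-1])
    return kept

# Dense samples of an L-shaped profile: only the corner survives.
pts = [(x, 0.0) for x in range(5)] + [(4.0, y) for y in range(1, 5)]
reduced = decimate_by_angle(pts, tol_deg=5.0)
print(reduced)
```

Flat regions collapse onto their endpoints, which is exactly the data-reduction effect the abstract describes for near-coplanar triangles.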

Incremental Multi-classification by Least Squares Support Vector Machine

  • Oh, Kwang-Sik;Shim, Joo-Yong;Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 14, No. 4
    • /
    • pp.965-974
    • /
    • 2003
  • In this paper we propose an incremental classification method for multi-class data sets by LS-SVM. By encoding the output variable in the training data set appropriately, we obtain new, class-specific output vectors for the training data sets. Online LS-SVM is then applied to each newly encoded output vector. The proposed method reduces the computation cost and enables training to be performed incrementally. With an incremental formulation of the inverse matrix, the current information and the new input data are used to build the new inverse matrix needed to estimate the optimal bias and Lagrange multipliers, so the computational difficulty of large-scale matrix inversion can be avoided. The performance of the proposed method is shown via numerical studies and compared with an artificial neural network.

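The incremental inverse-matrix formulation mentioned above can be illustrated with the standard bordered-matrix (Schur complement) identity: when the training system grows by one row and column, the new inverse is assembled from the current one without re-inverting the whole matrix. The sketch below is a generic pure-Python illustration of that identity, not the paper's LS-SVM code.

```python
def bordered_inverse(Ainv, b, c, d):
    """Given Ainv = A^{-1} (n x n, list of lists), return the inverse of
    the bordered matrix [[A, b], [c^T, d]] via the Schur complement
    s = d - c^T A^{-1} b (the kind of update used in online training)."""
    n = len(Ainv)
    Ab = [sum(Ainv[i][j] * b[j] for j in range(n)) for i in range(n)]  # A^{-1} b
    cA = [sum(c[i] * Ainv[i][j] for i in range(n)) for j in range(n)]  # c^T A^{-1}
    s = d - sum(c[i] * Ab[i] for i in range(n))                        # Schur complement
    top = [[Ainv[i][j] + Ab[i] * cA[j] / s for j in range(n)] + [-Ab[i] / s]
           for i in range(n)]
    bottom = [-cA[j] / s for j in range(n)] + [1.0 / s]
    return top + [bottom]

# Grow a 2x2 system to 3x3 without re-inverting from scratch.
Ainv = [[1.0, 0.0], [0.0, 0.5]]          # inverse of A = [[1,0],[0,2]]
Minv = bordered_inverse(Ainv, b=[1.0, 0.0], c=[1.0, 0.0], d=2.0)
```

Each update costs O(n^2) instead of the O(n^3) of a full re-inversion, which is the saving the abstract points to.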

SAHN 모델의 부분적 패턴 추정 방법에 대한 연구 (A Study on Partial Pattern Estimation for Sequential Agglomerative Hierarchical Nested Model)

  • 장경원;안태천
    • Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • Proceedings of the KIEE 2005 Conference, Information and Control Section
    • /
    • pp.143-145
    • /
    • 2005
  • This paper presents an empirical study of a pattern estimation method that reveals underlying data patterns at a relatively reduced computational cost. The presented method performs crisp clustering on the given n data samples by means of the sequential agglomerative hierarchical nested (SAHN) model. Conventional SAHN-based clustering requires a large computation time in the initial step of the algorithm. To deal with this concern, we modify the overall process with a partial approach: the given data set is first divided into several subgroups by uniform sampling, and the SAHN-based method is then applied to each subgroup. This reduces the computation time of the original process while giving similar results. The proposed method is applied to several test data sets, and simulation results with a conceptual analysis are presented.

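The partial approach described above can be sketched as follows: the data are split into subgroups by uniform sampling, and an agglomerative step (here a very naive 1-D single-linkage merge) is run on each subgroup independently. This is an illustrative stand-in for the SAHN step, with hypothetical names, not the authors' implementation.

```python
def agglomerate(points, stop_dist):
    """Naive single-linkage agglomeration on 1-D points: repeatedly merge
    the two closest clusters until the closest pair is farther apart
    than stop_dist (illustrative stand-in for the SAHN step)."""
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > 1:
        # closest pair of adjacent clusters (data is sorted)
        gaps = [(clusters[i + 1][0] - clusters[i][-1], i)
                for i in range(len(clusters) - 1)]
        gap, i = min(gaps)
        if gap > stop_dist:
            break
        clusters[i] = clusters[i] + clusters.pop(i + 1)
    return clusters

def partial_sahn(data, n_groups, stop_dist):
    """Partial approach: uniform sampling splits the data into subgroups,
    each clustered independently (sketch)."""
    groups = [data[g::n_groups] for g in range(n_groups)]  # uniform split
    return [agglomerate(g, stop_dist) for g in groups]

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.0, 9.1]
result = partial_sahn(data, n_groups=2, stop_dist=1.0)
```

Because each subgroup holds only n/n_groups samples, the quadratic initial step of the full algorithm is replaced by several much smaller ones, which is the cost reduction the abstract claims.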

로버스트추정에 의한 지구물리자료의 역산 (Inversion of Geophysical Data with Robust Estimation)

  • 김희준
    • Economic and Environmental Geology
    • /
    • Vol. 28, No. 4
    • /
    • pp.433-438
    • /
    • 1995
  • The most popular minimization method is based on the least-squares criterion, which uses the $L_2$ norm to quantify the misfit between observed and synthetic data. The solution of the least-squares problem is the maximum likelihood point of a probability density containing data with Gaussian uncertainties. The distribution of errors in geophysical data is, however, seldom Gaussian. Using the $L_2$ norm, large and sparsely distributed errors adversely affect the solution, and the estimated model parameters may even be completely unphysical. On the other hand, least-absolute-deviation optimization, which is based on the $L_1$ norm, has much more robust statistical properties in the presence of noise. The solution of the $L_1$ problem is the maximum likelihood point of a probability density containing data with longer-tailed errors than the Gaussian distribution. Thus, the $L_1$ norm gives more reliable estimates when a small number of large errors contaminate the data. The effect of outliers is further reduced by the M-fitting method with the Cauchy error criterion, which can be performed by the iteratively reweighted least-squares method.

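The final step mentioned above, M-fitting with the Cauchy error criterion via iteratively reweighted least squares, can be sketched for a straight-line fit: residuals are recomputed on each pass and each datum is reweighted by the Cauchy weight w = 1/(1 + (r/c)^2), so a gross outlier is progressively ignored. This is a minimal sketch assuming a scalar model y = a + bx, not the paper's geophysical inversion code.

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x (normal equations)."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * sxx - sx * sx
    return (sxx * sy - sx * sxy) / det, (sw * sxy - sx * sy) / det

def cauchy_irls(x, y, c=1.0, iters=20):
    """Iteratively reweighted least squares with the Cauchy weight
    w = 1 / (1 + (r/c)^2), illustrating robust M-fitting."""
    w = [1.0] * len(x)                      # start from ordinary LS
    for _ in range(iters):
        a, b = wls_line(x, y, w)
        w = [1.0 / (1.0 + ((yi - a - b * xi) / c) ** 2)
             for xi, yi in zip(x, y)]
    return a, b

# y = 1 + 2x with one gross outlier at x = 5 (should be 11, not 40).
x = [0, 1, 2, 3, 4, 5]
y = [1, 3, 5, 7, 9, 40]
a, b = cauchy_irls(x, y)
```

A plain least-squares fit of these data is pulled far from the true line by the single outlier, while the reweighted fit recovers slope and intercept close to (2, 1), mirroring the abstract's point about long-tailed error distributions.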

Non-Linear Error Identifier Algorithm for Configuring Mobile Sensor Robot

  • Rajaram, P.;Prakasam, P.
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 10, No. 3
    • /
    • pp.1201-1211
    • /
    • 2015
  • A WSN acts as an effective tool for tracking large-scale environments. In such environments, the battery life of the sensor nodes is limited because of data collection, sensing, computation, and communication. To resolve this, a mobile robot is used to collect the data present in the partitioned sensor networks and pass it on to the sink. In the original data collection algorithm, the performance of the data collection operation is reduced because the mobile robot can be used only within a limited range. To enhance data collection in a changing environment, a Non-Linear Error Identifier (NLEI) algorithm has been developed and is presented in this paper to configure the robot by means of non-linear error models. An experimental evaluation has been conducted to estimate the performance of the proposed NLEI, and it is observed that the proposed algorithm increases the error correction rate by up to 42% and the efficiency by up to 60%.

러프집합과 계층적 구조를 이용한 규칙생성 (Rule Generation using Rough set and Hierarchical Structure)

  • 김주영;이철희
    • Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • Proceedings of the KIEE 2002 Joint Fall Conference, Information and Control Section
    • /
    • pp.521-524
    • /
    • 2002
  • This paper deals with rule generation from data for control systems and data mining using rough sets. If the cores and reducts are searched for without considering the frequency of data belonging to the same equivalence class, unnecessary attributes may not be discarded, and the resulting rules do not represent the characteristics of the data well. To improve this, we handle inconsistent data with a probability measure defined by support; as a result, the effect of uncertainty in knowledge reduction can be reduced to some extent. We also construct the rule base in a hierarchical structure by applying the core as the classification criterion at each level. If more than one core exists, the coverage degree is used to select an appropriate one among them to increase the classification rate. The proposed method gives a rule base that is more suitable in terms of compatibility and size. Simulations on a data mining example are performed to show the effectiveness of the proposed method.

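The core/reduct idea referenced above can be sketched on a toy decision table: an attribute is dispensable if removing it leaves every indiscernibility class with a single decision value. The paper's frequency/support weighting for inconsistent data is not implemented here; this is a minimal classical rough-set sketch with hypothetical names.

```python
from collections import defaultdict

def partitions(table, attrs):
    """Group row indices by their values on attrs (indiscernibility classes)."""
    groups = defaultdict(list)
    for i, row in enumerate(table):
        groups[tuple(row[a] for a in attrs)].append(i)
    return list(groups.values())

def consistent(table, attrs, decisions):
    """True if every indiscernibility class has a single decision value."""
    return all(len({decisions[i] for i in g}) == 1
               for g in partitions(table, attrs))

def dispensable(table, attrs, decisions):
    """Attributes whose removal keeps the table decision-consistent."""
    return [a for a in attrs
            if consistent(table, [b for b in attrs if b != a], decisions)]

# Toy decision table: attribute 1 never affects the decision.
rows      = [(0, 0), (0, 1), (1, 0), (1, 1)]
decisions = ['no', 'no', 'yes', 'yes']
print(dispensable(rows, [0, 1], decisions))   # -> [1]
```

Dropping every dispensable attribute yields a reduct, from which classification rules can then be read off class by class.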

특징형상정보와 작업설계정보를 이용한 NC코드의 자동 생성 (Automatic generation of NC-code using Feature data and Process Planning data)

  • 박재민;노형민
    • Korean Society for Precision Engineering: Conference Proceedings
    • /
    • Proceedings of the KSPE 2002 Fall Conference
    • /
    • pp.591-594
    • /
    • 2002
  • Generating NC code from a 3D part model requires much effort to make many decisions, including the machining area, tool change data, tool data, and cutting conditions, by either manual or computer-aided methods. This effort can be reduced by integrating automated process planning with NC-code generation. When NC code is generated with the help of a process planning system, much of the data produced by process planning can be used, which means NC code can be created for a full part. In this study, the integration of FAPPS (Feature-based Automatic Process Planning) with an NC-code generating module is described, and the additional data needed to adapt the NC code to a machine shop is discussed.


A Bayesian uncertainty analysis for nonignorable nonresponse in two-way contingency table

  • Woo, Namkyo;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 26, No. 6
    • /
    • pp.1547-1555
    • /
    • 2015
  • We study the problem of nonignorable nonresponse in a two-way contingency table in which one or two categories may be missing, and we describe a nonignorable nonresponse model for the analysis of such two-way categorical tables. One approach to analyzing these data is to construct several tables (one complete and the others incomplete); the incomplete tables contain nonidentifiable parameters. We describe a hierarchical Bayesian model for two-way categorical data and carry out a Bayesian uncertainty analysis by placing priors on the nonidentifiable parameters instead of performing a sensitivity analysis over them. To reduce the effects of the nonidentifiable parameters, we project them onto a lower-dimensional space and allow the reduced set of parameters to share a common distribution. We use the griddy Gibbs sampler to fit our models and compute the DIC and BPP for model diagnostics. We illustrate our method using NHANES III data to obtain finite population proportions.

스트림 데이터를 위한 데이터 구동형 질의처리 기법 (A Data-Driven Query Processing Method for Stream Data)

  • 민미경
    • Journal of Digital Contents Society
    • /
    • Vol. 8, No. 4
    • /
    • pp.541-546
    • /
    • 2007
  • For continuous queries over large volumes of stream data, the traditional demand-driven query processing approach is not suitable. This paper proposes a query processing method suited to stream data by adopting a data-driven approach, and describes the structure of the query plan and the query execution mechanism. The proposed method can process multiple queries and enables sharing among queries. In addition, execution time can be shortened because the results of partial queries are stored. XML data and XQuery queries are applied to this query processing model.

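The data-driven (push-based) execution described above can be sketched as operators that react to data arrival rather than to query demand: each arriving tuple is pushed through a filter whose output is shared by two continuous queries, and partial results are stored for reuse. A minimal sketch with hypothetical names, not the paper's XQuery engine.

```python
class Filter:
    """Push-based operator: tests each arriving tuple and pushes matches
    to every subscribed downstream operator (sketch of data-driven
    query processing)."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.subscribers = []
        self.partial = []                 # stored partial results

    def push(self, item):
        if self.predicate(item):
            self.partial.append(item)     # reusable partial result
            for sub in self.subscribers:
                sub.push(item)

class Sink:
    """Terminal operator collecting one continuous query's answers."""
    def __init__(self):
        self.results = []
    def push(self, item):
        self.results.append(item)

# Two continuous queries share one filter over the same stream.
hot = Filter(lambda r: r["temp"] > 30)
q1, q2 = Sink(), Sink()
hot.subscribers += [q1, q2]

for reading in [{"temp": 25}, {"temp": 31}, {"temp": 35}]:
    hot.push(reading)                     # data arrival drives execution
```

Because execution is triggered by `push` rather than by a per-query scan, the filter's work is done once per tuple and shared by both sinks, and `hot.partial` holds the stored partial result the abstract mentions.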