• Title/Summary/Keyword: few data

Search Results: 4,242

DETERMINATION OF GPS HEIGHT WITH INCORPORATION OF SURFACE METEOROLOGICAL MEASUREMENTS

  • Wang, Chuan-Sheng;Liou, Yuei-An;Yeh, Ta-Kang
    • Proceedings of the KSRS Conference / 2008.10a / pp.313-316 / 2008
  • Although the positioning accuracy of the Global Positioning System (GPS) has been studied extensively and the system is used widely, accuracy is still limited by error sources such as the ionospheric effect, orbital uncertainty, antenna phase center variation, signal multipath and tropospheric influence. This investigation addresses the tropospheric effect on GPS height determination. Data obtained from GPS receivers and co-located surface meteorological instruments in 2003 are adopted in this study. The Ministry of the Interior (MOI), Taiwan, established these GPS receivers as continuously operating reference stations. Two different approaches, parameter estimation and external correction, are utilized to correct the zenith tropospheric delay (ZTD) by applying the surface meteorological measurement (SMM) data. However, incorrect pressure measurements lead to very poor accuracy: the GPS height can be affected by a few meters, and the root-mean-square (rms) of the daily solution ranges from a few millimeters to centimeters, regardless of the approach adopted. The effect is least obvious when SMM data are used in the parameter estimation approach, but constant corrections of the GPS height occur more often at higher altitudes. As for the external correction approach, the Saastamoinen model with SMM data keeps the repeatability of the GPS height at a few centimeters, while the rms of the daily solution shows an improvement of about 2-3 mm.
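
The external-correction approach above rests on the Saastamoinen model, which maps surface pressure to zenith hydrostatic delay. A minimal sketch with the usual published constants (the function name and sample values are ours, not the paper's):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay in meters from surface pressure.

    ZHD = 0.0022768 * P / f(lat, h), where f corrects the mean
    gravity for station latitude and height.
    """
    f = (1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
             - 0.28e-6 * height_m)
    return 0.0022768 * pressure_hpa / f

# At sea-level pressure the zenith delay is about 2.3 m, and each
# 1 hPa of pressure error shifts it by roughly 2.3 mm -- which is
# why a bad barometer reading degrades the GPS height solution.
zhd = saastamoinen_zhd(1013.25, 25.0, 50.0)
```

The millimeter-per-hectopascal sensitivity illustrates how the pressure errors discussed in the abstract propagate into the height solution.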

  • PDF

Dynamic data-base Typhoon Track Prediction (DYTRAP) (동적 데이터베이스 기반 태풍 진로 예측)

  • Lee, Yunje;Kwon, H. Joe;Joo, Dong-Chan
    • Atmosphere / v.21 no.2 / pp.209-220 / 2011
  • A new consensus algorithm for the prediction of tropical cyclone tracks has been developed. A conventional consensus is a simple average of a few fixed models that have shown good track-prediction performance over the past few years. The consensus in this study, by contrast, is a weighted average of a few models that may change at every individual forecast time. The models are selected as follows. The first step is to find past tropical cyclone tracks analogous to the current track. The next step is to evaluate the model performances on those past tracks. Finally, we take the weighted average of the selected models, giving more weight to the higher-performing models. This new algorithm is named DYTRAP (DYnamic data-base Typhoon tRAck Prediction) in the sense that a database is used to find the analogous past tracks and the effective models for every individual track prediction case. DYTRAP has been applied to all 2009 tropical cyclone track predictions. The results outperform those of all individual models as well as all the official forecasts of the typhoon centers. To prove the real usefulness of DYTRAP, the system still needs to be applied to real-time prediction, because typhoon-center forecasts usually rely on 6-hour or 12-hour-old model guidance.
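
The three selection steps (find analogous past tracks, score models on them, weight-average) can be sketched as follows. All data structures, the top-3 analog cutoff, and the inverse-error weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dytrap_consensus(current_track, past_tracks, model_errors):
    """Per-model weights in the spirit of DYTRAP (illustrative).

    current_track : (n, 2) array of recent lat/lon positions
    past_tracks   : dict case_id -> (n, 2) historical track array
    model_errors  : dict case_id -> dict model -> past error (km)
    """
    # 1. Rank past cases by mean positional distance to the current track.
    ranked = sorted(
        past_tracks,
        key=lambda cid: np.mean(
            np.linalg.norm(past_tracks[cid] - current_track, axis=1)),
    )
    analogs = ranked[:3]  # keep the few most analogous cases

    # 2. Average each model's error over the analog cases.
    models = model_errors[analogs[0]].keys()
    mean_err = {m: np.mean([model_errors[c][m] for c in analogs])
                for m in models}

    # 3. Weight inversely to error so better models count more.
    inv = {m: 1.0 / e for m, e in mean_err.items()}
    total = sum(inv.values())
    return {m: w / total for m, w in inv.items()}
```

The consensus forecast itself would then be the weighted average of the selected models' track forecasts using these weights.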

Data-Mining Bootstrap Procedure with Potential Predictors in Forecasting Models: Evidence from Eight Countries in the Asia-Pacific Stock Markets

  • Lee, Hojin
    • East Asian Economic Review / v.23 no.4 / pp.333-351 / 2019
  • We use a data-mining bootstrap procedure to investigate return predictability in eight Asia-Pacific regional stock markets using in-sample and out-of-sample forecasting models. We address the data-mining bias issue with the bootstrap procedure proposed by Inoue and Kilian and applied to US stock market data by Rapach and Wohar. The empirical findings show that stock returns are predictable not only in-sample but also out-of-sample in Hong Kong, Malaysia, Singapore, and Korea, with a few exceptions at some forecasting horizons. However, we find significant disparity between in-sample and out-of-sample predictability in the Korean stock market. For Hong Kong, Malaysia, and Singapore, stock returns have predictable components both in-sample and out-of-sample. For the US, Australia, and Canada, we do not find any evidence of return predictability, in-sample or out-of-sample, with a few exceptions. For Japan, stock returns have a predictable component, with the price-earnings ratio as a forecasting variable, at some out-of-sample forecasting horizons.
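
The core of a data-mining-robust bootstrap test is to compare the best in-sample statistic against its null distribution when many predictors are searched over. A stripped-down sketch of that idea only (the actual Inoue-Kilian procedure also preserves predictor dynamics and handles multiple horizons; names here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_tstat(returns, predictors):
    """Largest absolute slope t-statistic across candidate predictors."""
    ts = []
    for x in predictors.T:
        x = x - x.mean()
        beta = (x @ returns) / (x @ x)
        resid = returns - returns.mean() - beta * x
        se = np.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
        ts.append(abs(beta / se))
    return max(ts)

def bootstrap_pvalue(returns, predictors, n_boot=500):
    """Data-mining-robust p-value: resample returns under the null of
    no predictability and track the max t-stat over ALL predictors."""
    observed = max_tstat(returns, predictors)
    null_stats = []
    for _ in range(n_boot):
        fake = rng.choice(returns, size=len(returns), replace=True)
        null_stats.append(max_tstat(fake, predictors))
    return np.mean([s >= observed for s in null_stats])
```

Because the maximum is taken over every candidate predictor in both the observed and bootstrapped statistics, the test penalizes having searched many predictors, which is exactly the data-mining bias being controlled.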

Analysis of Failure-Time Intervals by Process under a Preventive Maintenance System (예방정비 체제하에서의 공정별 고장시간 간격분석)

  • Kim, Chang-Hyun;Kim, Jong-Han
    • IE interfaces / v.4 no.2 / pp.35-42 / 1991
  • Many authors have derived the MTTF/MTBF of an operating system by analyzing its actual lifetime data. However, it is difficult to derive MTTF/MTBF when few breakdowns occur throughout a year. In this paper, we propose a new approach to this problem under a preventive maintenance policy, in which few breakdowns occur, and also present a case study using the results obtained.
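
When a year passes with zero or one breakdown, the naive point estimate MTBF = T / (number of failures) is undefined or wildly optimistic. A standard textbook fallback (not the paper's specific method) is the chi-square lower confidence bound for time-censored data:

```python
from scipy.stats import chi2

def mtbf_lower_bound(total_time_h, n_failures, conf=0.95):
    """One-sided lower confidence bound on MTBF with few failures.

    Standard chi-square bound for time-censored reliability data:
        MTBF >= 2T / chi2_{conf, 2r + 2}
    It remains usable even with zero observed failures, the very
    situation that arises under preventive maintenance.
    """
    return 2.0 * total_time_h / chi2.ppf(conf, 2 * n_failures + 2)

# One year of operation (8760 h) with no breakdowns still yields a
# defensible lower bound instead of an undefined T/0 point estimate.
bound = mtbf_lower_bound(8760.0, 0)
```

More observed failures tighten the bound downward, while zero failures still produce a finite, conservative estimate.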

  • PDF

Application of Variable Selection for Prediction of Target Concentration

  • 김선우;김연주;김종원;윤길원
    • Bulletin of the Korean Chemical Society / v.20 no.5 / pp.525-527 / 1999
  • Many types of chemical data are characterized by many measured variables on each of a few observations. In this situation, the target concentration can be predicted using multivariate statistical modeling. However, it is often necessary to use only a few variables, considering the size and cost of instrumentation, for example in the development of a portable biomedical instrument. Using a spectral data set of total hemoglobin in whole blood, this study shows that modeling with only a few variables can improve predictability compared to modeling with all variables. Predictability from the model using three wavelengths selected by the all-possible-regressions method was improved compared to the model using the whole spectra (whole spectra: SEP = 0.4 g/dL; 3 wavelengths: SEP = 0.3 g/dL). It appears that proper variable selection can be more effective than using the whole spectra for determining the hemoglobin concentration in whole blood.
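
All-possible-regressions over three-wavelength subsets is brute force but tractable for spectra of modest width. A sketch with SEP as the selection criterion (function and variable names are ours; the paper's exact preprocessing is not reproduced):

```python
import numpy as np
from itertools import combinations

def best_three_wavelengths(X_train, y_train, X_test, y_test):
    """Exhaustively fit every 3-wavelength linear model and return
    the subset with the lowest SEP on the prediction set."""
    n_wl = X_train.shape[1]
    best_idx, best_sep = None, np.inf
    for idx in combinations(range(n_wl), 3):
        # Ordinary least squares on the 3 chosen wavelengths + intercept.
        A = np.column_stack([X_train[:, idx], np.ones(len(X_train))])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        B = np.column_stack([X_test[:, idx], np.ones(len(X_test))])
        sep = np.sqrt(np.mean((B @ coef - y_test) ** 2))
        if sep < best_sep:
            best_idx, best_sep = idx, sep
    return best_idx, best_sep
```

For a spectrum with a few hundred channels this is on the order of millions of tiny regressions, which remains feasible; for larger subsets, stepwise or regularized selection would be needed instead.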

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems struggle to detect attacks that deviate from previously stored patterns. To address this, deep-learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based, depending on where they are installed. Unlike a network-based system, a host-based system has the disadvantage of having to observe the whole system inside and out, but it can detect intrusions that a network-based system cannot. This study therefore focuses on host-based intrusion detection. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS), published in 2018. In the performance evaluation, the 1D vector data are converted into 3D image data so that the similarity of samples can be compared and each sample identified as normal or abnormal. Deep learning models also have the drawback of requiring retraining whenever a new attack method appears, which is inefficient because learning from large amounts of data takes a long time. To solve this, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) using Few-Shot Learning, which performs well while learning from only a small amount of data. Siamese-CNN decides whether two attacks are of the same type from the similarity score of each pair of attack samples converted into images. Accuracy was calculated using the Few-Shot Learning technique, and a Vanilla Convolutional Neural Network (Vanilla-CNN) was compared against Siamese-CNN to confirm its performance. Measuring Accuracy, Precision, Recall, and F1-Score, we confirmed that the recall of the proposed Siamese-CNN model increased by about 6% over the Vanilla-CNN model.
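
The decision rule of a Siamese network is pairwise: both samples pass through the same embedding tower, and the distance between embeddings becomes a similarity score. A toy sketch of that structure only (a random projection plus ReLU stands in for the trained CNN tower; the threshold is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((64, 16)) / 8.0  # stand-in for trained weights

def embed(x):
    """Shared tower: both inputs pass through the SAME weights."""
    return np.maximum(W.T @ x, 0.0)  # one ReLU layer as a placeholder

def similarity(x1, x2):
    """Similarity score in (0, 1]; high means 'same attack type'."""
    d = np.abs(embed(x1) - embed(x2)).sum()  # L1 distance in embedding space
    return 1.0 / (1.0 + d)

def same_class(x1, x2, threshold=0.5):
    return similarity(x1, x2) >= threshold
```

Because classification reduces to comparing a new sample against a few labeled reference samples per class, a new attack type only requires a handful of reference examples rather than full retraining, which is the few-shot advantage the abstract describes.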

Analysis of Supervisory Report for Performance Measurement in the Private Building Construction Sites (민간 건축현장 성과측정을 위한 감리보고서 활용성 분석)

  • Sung, Yookyung;Hur, Youn Kyoung;Lee, Seung Woo;Yoo, Wi Sung
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.11a / pp.217-218 / 2022
  • Supervision work deals with important data necessary for performance management on building construction sites in accordance with the Building Act. This study therefore attempts to use the data from supervisory reports to measure the performance of private building projects. Performance measurement is important for systematic management, yet it is carried out in only a few cases because collecting the necessary data requires strenuous effort. We first derived 6 performance areas and 15 indicators through several rounds of expert group discussion, and then confirmed the performance indicators with high feasibility of data collection through a survey of supervision experts. It is expected that the data in supervisory reports can be used to measure performance systematically and assist in the speedy diagnosis of private building construction sites.

  • PDF

Patterns of Data Analysis?

  • Unwin, Antony
    • Journal of the Korean Statistical Society / v.30 no.2 / pp.219-230 / 2001
  • How do you carry out data analysis? There are few texts and little theory. One approach could be to use a pattern language, an idea which has been successful in fields as diverse as town planning and software engineering. Patterns for data analysis are defined and discussed, illustrated with examples.

  • PDF

Optimal maintenance scheduling of pumps in thermal power stations through reliability analysis based on few data

  • Nakamura, Masatoshi;Kumarawadu, Priyantha;Yoshida, Akinori;Hatazaki, Hironori
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10a / pp.271-274 / 1996
  • In this paper we carried out a reliability analysis of power-station pumps using the dimensional reduction method, which overcomes the problem of insufficient data in actual systems operating under many different environments. A reasonable method was thereby proposed to determine the optimum maintenance interval of a given pump in thermal power stations. The analysis was based on an actual data set covering over ten years of pump operation in thermal power stations belonging to Kyushu Electric Power Company, Japan.
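
Given a fitted lifetime distribution, the optimum preventive-maintenance interval is typically the age that minimizes the long-run cost rate. A sketch assuming a Weibull wear-out model and illustrative cost figures (the paper's dimensional-reduction fitting step is not reproduced here):

```python
import numpy as np

def optimal_interval(beta, eta, cost_prevent, cost_fail, t_max, n=2000):
    """Age-replacement interval minimizing the long-run cost rate.

    Weibull reliability R(t) = exp(-(t/eta)**beta); cost rate
    C(t) = (cp * R(t) + cf * (1 - R(t))) / E[uptime per cycle].
    A grid search suffices for a wear-out model (beta > 1).
    """
    t = np.linspace(t_max / n, t_max, n)
    R = np.exp(-((t / eta) ** beta))
    # Expected uptime per cycle = integral_0^t R(u) du, accumulated
    # with the trapezoid rule (first segment starts at R(0) = 1).
    seg = np.empty(n)
    seg[0] = (1.0 + R[0]) / 2.0 * t[0]
    seg[1:] = (R[:-1] + R[1:]) / 2.0 * (t[1] - t[0])
    uptime = np.cumsum(seg)
    rate = (cost_prevent * R + cost_fail * (1.0 - R)) / uptime
    return t[np.argmin(rate)]
```

As expected, raising the failure cost relative to the preventive-replacement cost pulls the optimal interval earlier.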

  • PDF