• Title/Summary/Keyword: Parameter Space

Performance Assessment of GBAS Ephemeris Monitor for Wide Faults (Wide Fault에 대한 GBAS 궤도 오차 모니터 성능 분석)

  • Junesol Song;Carl Milner
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.13 no.2
    • /
    • pp.189-197
    • /
    • 2024
  • Galileo is a European Global Navigation Satellite System (GNSS) that has offered the Galileo Open Service since 2016. Consequently, the standardization of GNSS augmentation systems, such as the Satellite Based Augmentation System (SBAS), Ground Based Augmentation System (GBAS), and Aircraft Based Augmentation System (ABAS), for Galileo signals is ongoing. In 2023, the European Union Agency for the Space Programme (EUSPA) released prior probabilities of a satellite fault and a constellation fault for Galileo, which are $3\times10^{-5}$ and $2\times10^{-4}$ per hour, respectively. In particular, the prior probability of a Galileo constellation fault is significantly higher than that of a GPS constellation fault, which is defined as $1\times10^{-8}$ per hour. This has raised concerns about its potential impact on GBAS integrity monitoring. According to the Global Positioning System (GPS) Standard Positioning Service Performance Standard (SPS PS), a constellation fault is classified as a wide fault. A wide fault refers to a fault that affects more than two satellites due to a common cause. Such a fault can be caused by a failure in the Earth Orientation Parameters (EOP). The EOP are used when transforming from the inertial frame, in which orbit determination is performed, to the Earth Centered Earth Fixed (ECEF) frame, accounting for irregularities in the rotation of the Earth. Therefore, a faulty EOP can introduce errors when computing a satellite position with respect to the ECEF frame. In GNSS, the ephemeris parameters are estimated from the satellite positions and uploaded to the navigation satellites. These ephemeris parameters are then broadcast to users via the navigation message. A faulty EOP therefore results in erroneous broadcast ephemeris data. In this paper, we assess the conventional ephemeris fault detection monitor currently employed in GBAS against wide faults, as current GBAS considers only single-failure cases. In addition to the existing requirements on the Probability of Missed Detection (PMD) defined in the standards, we derive a new PMD requirement tailored to a wide fault. The compliance of the current ephemeris monitor with the derived requirement is evaluated through a simulation. Our findings confirm that the conventional monitor meets the requirement even for wide fault scenarios.
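
As a rough illustration of the mechanism described above, the following minimal Python sketch (not the paper's simulation) shows how an error in the Earth-rotation part of the inertial-to-ECEF transformation propagates into a satellite position error. The 100 ms timing offset and the GPS-like orbit radius are assumed values, and precession, nutation, and polar motion are omitted for brevity.

```python
# Minimal sketch: how an Earth Orientation Parameter (EOP) error maps into
# an ECEF satellite-position error. Only the Earth-rotation-angle part of
# the inertial-to-ECEF transform is modelled.
import numpy as np

OMEGA_EARTH = 7.2921151467e-5  # Earth rotation rate [rad/s]

def rot_z(theta):
    """Rotation about the z-axis by angle theta [rad]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [0.0, 0.0,  1.0]])

def inertial_to_ecef(r_eci, t, dut1=0.0):
    """Rotate an inertial position into ECEF using the Earth rotation angle
    at time t [s]; dut1 [s] plays the role of a (possibly faulty) UT1
    correction taken from the EOP."""
    theta = OMEGA_EARTH * (t + dut1)
    return rot_z(theta) @ r_eci

r_eci = np.array([26_560e3, 0.0, 0.0])  # GPS-like orbit radius [m]
t = 3600.0                              # arbitrary epoch [s]

r_good = inertial_to_ecef(r_eci, t, dut1=0.0)
r_faulty = inertial_to_ecef(r_eci, t, dut1=0.1)  # hypothetical 100 ms EOP fault

# The same faulty EOP rotates every satellite, which is what makes this a
# common-cause "wide fault" rather than a single-satellite fault.
print(f"ECEF position error: {np.linalg.norm(r_faulty - r_good):.1f} m")
```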

Prediction of Disk Cutter Wear Considering Ground Conditions and TBM Operation Parameters (지반 조건과 TBM 운영 파라미터를 고려한 디스크 커터 마모 예측)

  • Yunseong Kang;Tae Young Ko
    • Tunnel and Underground Space
    • /
    • v.34 no.2
    • /
    • pp.143-153
    • /
    • 2024
  • The Tunnel Boring Machine (TBM) method is a tunnel excavation method that produces lower levels of noise and vibration during excavation than drilling and blasting and offers higher stability. It is increasingly being applied to tunnel projects worldwide. The disc cutter is an excavation tool mounted on the cutterhead of a TBM; it constantly interacts with the ground at the tunnel face, inevitably leading to wear. In this study, disc cutter wear was quantitatively predicted using geological conditions, TBM operational parameters, and machine learning algorithms. Among the input variables for predicting disc cutter wear, Uniaxial Compressive Strength (UCS) data are considerably more limited than the machine and wear data, so UCS was first estimated over the entire section using TBM machine data, and the Coefficient of Wearing rate (CW) was then predicted with the completed dataset. Comparing the performance of the CW prediction models, the XGBoost model showed the highest performance, and SHapley Additive exPlanations (SHAP) analysis was conducted to interpret the complex prediction model.
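
A minimal sketch of this kind of pipeline is given below, assuming a tabular per-ring dataset with hypothetical column names (the file name, feature set, and hyperparameters are placeholders, not the authors' setup): an XGBoost regressor predicts the wear coefficient, and SHAP attributes the predictions to the input features.

```python
# Sketch: XGBoost regression of disc cutter wear + SHAP interpretation.
# Column names and file name are hypothetical placeholders.
import pandas as pd
import shap
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("tbm_rings.csv")  # placeholder file name
# Hypothetical features: estimated UCS plus TBM operational parameters.
X = df[["ucs_est", "thrust", "torque", "rpm", "penetration_rate"]]
y = df["wear_coefficient"]  # Coefficient of Wearing rate (CW)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on held-out rings:", model.score(X_te, y_te))

# SHAP values attribute each prediction to the input features, making the
# boosted-tree model interpretable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```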

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace of the original feature set, and the predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposes a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
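
The sketch below illustrates the core construction under simplified assumptions: a KNN random subspace ensemble with majority voting, where the joint search over k values and feature subsets is stood in for by a plain random search rather than the paper's genetic algorithm, and the data are synthetic rather than the Korean firm dataset.

```python
# Sketch: KNN random subspace ensemble with jointly chosen (k, subset)
# pairs. A random search stands in for the paper's genetic algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1800, n_features=24, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def build_ensemble(ks, subsets):
    """Fit one KNN base classifier per (k, feature subset) pair."""
    members = []
    for k, cols in zip(ks, subsets):
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, cols], y_tr)
        members.append((clf, cols))
    return members

def vote(members, X):
    """Majority vote over binary {0,1} predictions (ties go to class 0)."""
    preds = np.array([clf.predict(X[:, cols]) for clf, cols in members])
    return (preds.mean(axis=0) > 0.5).astype(int)

best_acc, best = 0.0, None
for _ in range(30):  # stand-in for the GA's search over candidate solutions
    ks = rng.integers(1, 16, size=10)
    subsets = [rng.choice(24, size=12, replace=False) for _ in range(10)]
    members = build_ensemble(ks, subsets)
    acc = (vote(members, X_val) == y_val).mean()  # fitness value
    if acc > best_acc:
        best_acc, best = acc, members

print(f"best validation accuracy: {best_acc:.3f}")
```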

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery of Non-Accessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • Bang, Ki-In;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2001.03a
    • /
    • pp.140-148
    • /
    • 2001
  • The satellite sensor model is typically established using ground control points acquired by ground survey or from existing topographic maps. In cases where the target area cannot be accessed and topographic maps are not available, it is difficult to obtain ground control points, so geospatial information cannot be derived from the satellite image. This paper presents several satellite sensor models and satellite image decomposition methods for non-accessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, which were extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations was represented by the six exterior orientation parameters as $1^{st}$, $2^{nd}$ and $3^{rd}$ order functions of the satellite image row. Among them, the rotational angle parameters $\omega$ (omega) and $\phi$ (phi), which correlate highly with the positional parameters, could be assigned constant values. For non-accessible areas, satellite images were decomposed, which means that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without ground control points. In addition, a pseudo image, which is an imaginary image, was prepared from one satellite image with ground control points and another without ground control points. In other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area on different passes were used. In conclusion, it was found that the 10 different satellite sensor models and 5 different decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with $1^{st}$ order functions of image row for the positional orientation parameters and the rotational angle kappa, and constant values for the rotational angles omega and phi, provided the best accuracy of 60 m maximum error at the check points with the pseudo image arrangement.

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery for Inaccessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • Bang, Ki-In;Cho, Woo-Sug
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.1
    • /
    • pp.33-44
    • /
    • 2001
  • This paper presents several satellite sensor models and satellite image decomposition methods for inaccessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, which were extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations was represented by the six exterior orientation parameters as $1^{st}$, $2^{nd}$ and $3^{rd}$ order functions of the satellite image row. Among them, the rotational angle parameters $\omega$ (omega) and $\phi$ (phi), which correlate highly with the positional parameters, could be assigned constant values. For inaccessible areas, satellite images were decomposed, which means that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without ground control points. In addition, a pseudo image, which is an imaginary image, was prepared from one satellite image with ground control points and another without ground control points. In other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area on different passes were used. In conclusion, it was found that the 10 different satellite sensor models and 5 different decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with $1^{st}$ order functions of image row for the positional orientation parameters and the rotational angle kappa, and constant values for the rotational angles omega and phi, provided the best accuracy of 60 m maximum error at the check points with the pseudo image arrangement.
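
To make the sensor-model idea concrete, the following minimal numpy sketch (synthetic values, not the paper's data or adjustment code) fits one positional exterior orientation parameter as a $1^{st}$ order function of image row by least squares; in the paper's best-performing model, kappa and the three positions are modelled this way while omega and phi are held constant.

```python
# Sketch: exterior orientation (EO) parameters of a pushbroom image written
# as low-order polynomials of image row. Synthetic values for illustration.
import numpy as np

rows = np.linspace(0, 5999, 200)   # image row coordinate
Xs_true = 8.2e5 + 6.5 * rows       # synthetic "true" X position, linear in row

# Design matrix for a 1st-order model: [1, row]
A = np.column_stack([np.ones_like(rows), rows])

# Least-squares estimate of the coefficients from noisy observations, as one
# would do per EO parameter inside a bundle adjustment.
obs = Xs_true + np.random.default_rng(1).normal(0.0, 5.0, rows.size)
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("estimated X(row) = %.1f + %.3f * row" % (coef[0], coef[1]))

# omega and phi would be held constant (0th-order) to break their high
# correlation with the positional parameters, per the abstract above.
```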

A Study on the Generalization of Multiple Linear Regression Model for Monthly-runoff Estimation (선형회귀모형(線型回歸模型)에 의한 하천(河川) 월(月) 유출량(流出量) 추정(推定)의 일반화(一般化)에 관한 연구(硏究))

  • Kim, Tai Cheol
    • Korean Journal of Agricultural Science
    • /
    • v.7 no.2
    • /
    • pp.131-144
    • /
    • 1980
  • A linear regression model for extending monthly runoff data for rivers with short records was proposed by the author in 1979. In this study, a generalization procedure is developed so that the model can be applied to any given river basin and any given station. The longer monthly runoff series generated by this generalized model would be useful for water resources assessment and waterworks planning. The results are as follows. 1. This linear regression model, which is a transformed water-balance equation, represents the physical properties of the parameters and the time- and space-variant catchment response in a lumped, qualitative and deductive way through the regression coefficients, treated as a component grey box, whereas a deterministic model treats them in a distributed, quantitative and inductive way through all the integrated processes of the catchment response. This linear regression model may therefore be termed a "statistically deterministic model". 2. Linear regression equations are obtained at four hydro-stations in the Geum River basin. Significance tests of the equations, carried out according to statistical criteria, show them to be highly significant. It is recognized that the regression coefficients of each parameter vary regularly as the catchment area increases, namely: the larger the catchment area, the larger the precipitation loss due to interception and detention storage; the larger the catchment area, the larger the baseflow release, owing to decreased catchment slope and increased storage capacity; and the larger the catchment area, the larger the evapotranspiration loss, owing to more bare land cover and soil properties. These facts coincide well with hydrological common sense. 3. A generalized diagram of the regression coefficients is constructed to follow this reasoning. With this diagram, the linear regression model can be set up for a given river basin and a given station (Fig. 10).
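
The sketch below is a minimal illustration of such a regression on synthetic monthly series; the predictor set (current and antecedent precipitation plus the previous month's runoff) is an assumed water-balance-like form for illustration, not the paper's exact equation.

```python
# Sketch: multiple linear regression for monthly runoff on synthetic data.
# Regression form (assumed): Q_t = b0 + b1*P_t + b2*P_{t-1} + b3*Q_{t-1}
import numpy as np

rng = np.random.default_rng(0)
P = rng.gamma(2.0, 60.0, size=120)  # monthly precipitation [mm], 10 years
Q = np.empty_like(P)                # monthly runoff [mm]
Q[0] = 30.0
for t in range(1, P.size):          # synthetic catchment response
    Q[t] = 0.4 * P[t] + 0.2 * P[t - 1] + 0.3 * Q[t - 1] + rng.normal(0, 5)

A = np.column_stack([np.ones(P.size - 1), P[1:], P[:-1], Q[:-1]])
b, *_ = np.linalg.lstsq(A, Q[1:], rcond=None)
print("coefficients (intercept, P_t, P_t-1, Q_t-1):", np.round(b, 3))
# The fitted coefficients play the role of the lumped "grey box" terms:
# interception/detention loss, baseflow release and evapotranspiration.
```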

A Study of Tasseled Cap Transformation Coefficient for the Geostationary Ocean Color Imager (GOCI) (정지궤도 천리안위성 해양관측센서 GOCI의 Tasseled Cap 변환계수 산출연구)

  • Shin, Ji-Sun;Park, Wook;Won, Joong-Sun
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.2
    • /
    • pp.275-292
    • /
    • 2014
  • The objective of this study is to determine Tasseled Cap Transformation (TCT) coefficients for the Geostationary Ocean Color Imager (GOCI). TCT is a traditional method for analyzing the characteristics of land areas from multispectral sensor data. TCT coefficients must be estimated individually for each new sensor because sensor characteristics differ. Although the primary objective of the GOCI is ocean color study, one half of the scene covers land with typical land-observing channels in the Visible-Near InfraRed (VNIR). The GOCI has a unique capability to acquire eight scenes per day. This advantage of high temporal resolution can be utilized for detecting daily variation of the land surface. The GOCI TCT thus offers great potential for near-real-time analysis and interpretation of land cover characteristics. TCT generally represents information as "Brightness", "Greenness" and "Wetness". However, the GOCI cannot provide "Wetness" directly due to the lack of a ShortWave InfraRed (SWIR) band. To maximize the utilization of the high temporal resolution, "Wetness" should be provided. In order to obtain "Wetness", a linear regression method was used to align the GOCI Principal Component Analysis (PCA) space with the MODIS TCT space. The GOCI TCT coefficients obtained by this method take different values according to observation time due to the characteristics of geostationary Earth orbit. To examine these differences, the correlations between the GOCI TCT and the MODIS TCT were compared. As a result, the GOCI TCT coefficients of "Brightness" and "Greenness" were selected at 4h, while the GOCI TCT coefficient of "Wetness" was selected at 2h. To assess the adequacy of the resulting GOCI TCT coefficients, the GOCI TCT data were compared to the MODIS TCT image and several land parameters. The land cover classification in the GOCI TCT image was expressed more precisely than in the MODIS TCT image. The distribution of land cover classes in the GOCI TCT space showed meaningful results. Also, "Brightness", "Greenness", and "Wetness" of the GOCI TCT data showed relatively high correlations with Albedo ($R^2$ = 0.75), the Normalized Difference Vegetation Index (NDVI) ($R^2$ = 0.97), and the Normalized Difference Moisture Index (NDMI) ($R^2$ = 0.77), respectively. These results indicate the suitability of the GOCI TCT coefficients.
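
The alignment step described above can be sketched as follows (a minimal example; the file names, array shapes, and the use of scikit-learn are assumptions, not the authors' implementation): PCA rotates the GOCI VNIR bands, and a linear regression maps the PCA scores onto coincident MODIS TCT components so that a "Wetness"-like axis can be read off for GOCI.

```python
# Sketch: align a GOCI PCA space with the MODIS TCT space by linear
# regression. File names and shapes are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

goci = np.load("goci_bands.npy")      # (n_pixels, 8) VNIR reflectances (placeholder)
modis_tct = np.load("modis_tct.npy")  # (n_pixels, 3) brightness/greenness/wetness (placeholder)

# Rotate GOCI bands into their leading principal components.
scores = PCA(n_components=3).fit_transform(goci)

# One linear map per TCT axis; composing it with the PCA loadings would give
# per-band GOCI TCT coefficients.
reg = LinearRegression().fit(scores, modis_tct)
goci_tct = reg.predict(scores)

wetness = goci_tct[:, 2]  # the axis GOCI cannot observe directly (no SWIR)
print("wetness range:", wetness.min(), wetness.max())
```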

A New Method For Measuring Acupoint Pigmentation After Cupping Using Cross Polarization (교차편광 촬영술(Cross Polarization Photographic Technique)를 이용한 부항요법의 배수혈 피부 색소 침착 변화 측정 평가)

  • Kim, Soo-Byeong;Jung, Byungjo;Shin, Tae-Min;Lee, Yong-Heum
    • Korean Journal of Acupuncture
    • /
    • v.30 no.4
    • /
    • pp.252-263
    • /
    • 2013
  • Objectives : Skin color deformation caused by cupping has been widely used as a diagnostic parameter in Traditional Korean Medicine (TKM). Skin color deformation such as ecchymoses and purpura is induced by the local vacuum in a suction cup. Since existing studies have relied on visual diagnostic methods, there is a need for a quantitative measurement method. Methods : We conducted an analysis of cross-polarization photographic images to assess the changes in skin color deformation. The skin color variation was analyzed using the $L^*a^*b^*$ space and the skin erythema index (E.I.). The meridian theory in TKM indicates that the condition of the primary internal organs is closely related to skin color deformation at specific acupoints. Before conducting such studies, it is necessary to evaluate whether or not skin color deformation is influenced by muscle condition. Hence, we applied cupping at BL13, BL15, BL18, BL20 and BL23 on the Bladder Meridian (BL) and measured blood lactate at every acupoint. Results : We confirmed the high measurement accuracy of the system and observed diverse skin color deformations. Moreover, we confirmed that $L^*$, $a^*$ and E.I. had not changed after 40 minutes (p>0.05). The distribution of blood lactate levels differed between sites, and the blood lactate level and skin color deformation at each site were independent of each other. Conclusions : The negative pressure produced by the suction cup induces a reduction in the volumetric fraction of melanosomes and a subsequent reduction in epidermal thickness. The relationship between variations of tissue and skin properties and the degree of skin color deformation must be investigated before considering the relationship between internal organ dysfunction and skin color deformation.
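
As a minimal sketch of the quantification step (hypothetical file name and region of interest; camera calibration and the erythema index computation are omitted), a cross-polarized RGB image can be converted to CIE $L^*a^*b^*$ and averaged over the cupping site:

```python
# Sketch: mean L*, a*, b* over a region of interest in a cross-polarized
# image. File name and ROI coordinates are hypothetical placeholders.
import numpy as np
from skimage import io, color

img = io.imread("cupping_site.png")[:, :, :3] / 255.0  # RGB in [0, 1]
lab = color.rgb2lab(img)                               # CIE L*a*b*

roi = lab[100:200, 150:250]  # hypothetical cupping-site region
L_mean, a_mean, b_mean = roi.reshape(-1, 3).mean(axis=0)
print(f"L*={L_mean:.1f}, a*={a_mean:.1f}, b*={b_mean:.1f}")

# a* (the red-green axis) rises with ecchymosis/purpura, so tracking its
# decay over time quantifies the pigmentation change after cupping.
```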

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns for investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's and Fitch is a crucial source regarding such pivotal concerns as company stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ of the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, which gives the maximum separation between classes; the support vectors are the training points closest to the maximum margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating from DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, when it is difficult to determine the exact class in the actual market, it is still very useful for investors to know the class within a one-class error. We therefore also present accuracy results within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization ability, and the sample size for multi-class classification.
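
The classification stage can be sketched as below with synthetic data (the DEA step that produces the efficiency labels is not reproduced, and scikit-learn's built-in one-vs-one RBF SVM stands in for the three multi-class formulations compared in the paper); the "within one class" score mirrors the paper's 1-class-error accuracy for ordinal ratings.

```python
# Sketch: RBF-kernel multi-class SVM predicting ordinal efficiency ratings,
# scored both exactly and within a one-class error. Data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=154, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVC trains one-vs-one binary classifiers internally; gamma="scale" uses
# the Gaussian radial basis function kernel.
clf = SVC(kernel="rbf", decision_function_shape="ovo", C=10.0, gamma="scale")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

exact = (pred == y_te).mean()
within_one = (np.abs(pred - y_te) <= 1).mean()  # ordinal ratings: 1-off counts
print(f"hit ratio: {exact:.3f}, within 1 class: {within_one:.3f}")
```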

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • Moon, Ga-Hee;Ahn, Byung-Ho
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation between the dispersionless particle injection rate of the proton flux observed from geosynchronous satellites, which is known to be a typical indicator of substorm expansion activity, and the Dst index during magnetic storms. We utilize geomagnetic storms that occurred during the period 1996~2000 and categorize them into three classes in terms of the minimum value of the Dst index ($Dst_{min}$): intense ($-200nT{\leq}Dst_{min}{\leq}-100nT$), moderate ($-100nT{\leq}Dst_{min}{\leq}-50nT$), and small ($-50nT{\leq}Dst_{min}{\leq}-30nT$) storms. We use the proton flux in the energy range from 50 keV to 670 keV, the major constituent of the ring current particles, observed from the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio ($f_{max}/f_{ave}$) to estimate the particle energy injection rate into the inner magnetosphere, with $f_{ave}$ and $f_{max}$ being the flux levels at quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites. However, the total energy injection rate should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter (TEIP)", defined by the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate the phase dependence of the substorm contribution to the development of a magnetic storm, we examine the correlations separately for the main and recovery phases of the storm. Several interesting tendencies are noted, particularly during the main phase of the storm. First, the average particle injection frequency tends to increase with the storm size, with the correlation coefficient being 0.83. Second, the flux ratio ($f_{max}/f_{ave}$) tends to be higher during large storms; the correlation coefficient between $Dst_{min}$ and the flux ratio is generally high, for example, 0.74 for the 75~113 keV energy channel. Third, it is also worth mentioning that there is a high correlation between the TEIP and $Dst_{min}$, with the highest coefficient (0.80) recorded for the energy channel of 75~113 keV, the typical particle energies of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer; this is particularly the case for intense storms. These characteristics observed during the main phase of the magnetic storm indicate that substorm expansion activity is closely associated with the development of magnetic storms.
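
A minimal sketch of the proposed quantity and its use follows (synthetic per-storm statistics standing in for the LANL measurements): TEIP is the flux ratio times the injection frequency, and its Pearson correlation with $Dst_{min}$ is computed.

```python
# Sketch: TEIP = (flux ratio) x (injection frequency), correlated against
# each storm's minimum Dst. Per-storm arrays are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_storms = 40
flux_ratio = rng.uniform(1.5, 20.0, n_storms)   # f_max / f_ave per storm
injection_freq = rng.integers(1, 15, n_storms)  # injections per storm
teip = flux_ratio * injection_freq              # total energy injection parameter

# Larger TEIP should accompany a deeper (more negative) Dst minimum, so the
# synthetic Dst_min is built with that tendency plus noise.
dst_min = -5.0 * teip + rng.normal(0.0, 60.0, n_storms)

r, p = pearsonr(teip, dst_min)
print(f"correlation between TEIP and Dst_min: r={r:.2f} (p={p:.1e})")
```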