• Title/Summary/Keyword: parameter errors


Bootstrap Evaluation of Stem Density and Biomass Expansion Factors in Pinus rigida Stands in Korea (부트스트랩 시뮬레이션을 이용한 리기다소나무림의 줄기밀도와 바이오매스 확장계수 평가)

  • Seo, Yeon Ok;Lee, Young Jin;Pyo, Jung Kee;Kim, Rae Hyun;Son, Yeong Son;Lee, Kyeong Hak
    • Journal of Korean Society of Forest Science / v.100 no.4 / pp.535-539 / 2011
  • This study was conducted to evaluate, via bootstrap simulation, the stem density and biomass expansion factor of Pinus rigida plantations in Korea. The stem density ($g/cm^3$) was 0.460 for stands up to 20 years old and 0.456 for stands 21 years and older; the corresponding biomass expansion factors were 2.013 and 1.171, respectively. Over 100 and 500 bootstrap iterations, the stem density estimates ranged over 0.456~0.462 for the younger stands and 0.457~0.456 for the older stands, while the biomass expansion factor estimates ranged over 1.990~2.039 and 1.173~1.170, respectively. The mean differences between the observed biomass factors and the average parameter estimates were within 5 percent. The split datasets of younger and older stands were compared with the bootstrap simulation results: the mean differences in stem density were 0.441~1.049% for stands up to 20 years and 0.123~0.206% for stands 21 years and older, and those for the biomass expansion factor were -1.102~1.340% and -0.024~0.215%, respectively. The younger stands showed relatively higher errors than the older stands. Overall, the errors of the stem density and biomass expansion factor estimated by the bootstrap simulation method were approximately 1.1% and 1.4%, respectively.
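The bootstrap procedure the abstract describes can be sketched in a few lines: resample the observations with replacement, recompute the parameter each time, and compare the average estimate with the observed value. The stem-density figures below are hypothetical stand-ins, not the study's data.

```python
import random
import statistics

def bootstrap_mean(values, n_iter=500, seed=42):
    """Resample with replacement n_iter times; return the bootstrap estimates of the mean."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_iter):
        resample = [rng.choice(values) for _ in values]
        estimates.append(statistics.mean(resample))
    return estimates

# hypothetical stem-density observations (g/cm^3) for stands up to 20 years old
densities = [0.44, 0.46, 0.47, 0.45, 0.48, 0.46, 0.44, 0.47]
est = bootstrap_mean(densities, n_iter=500)
boot_mean = statistics.mean(est)
# relative difference (%) between the bootstrap average and the observed mean
rel_diff = abs(boot_mean - statistics.mean(densities)) / statistics.mean(densities) * 100
```

Comparing `rel_diff` against a tolerance (the paper uses 5%) is how the stability of the bootstrap estimates is judged.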

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young;Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.41-49 / 2009
  • In this paper, a multi-camera calibration algorithm for an optical motion capture system is proposed. The algorithm performs a first camera calibration using the DLT (direct linear transformation) method and a 3-axis calibration frame with 7 optical markers. A second calibration is then performed by waving a wand of known length (the so-called wand dance) throughout the desired calibration volume. The first calibration yields not only the camera parameters but also the radial lens distortion parameters; these are used as the initial solution for the optimization in the second calibration. In the second calibration, an optimization is performed whose objective is to minimize the difference in distance between the real markers and the reconstructed markers. To verify the proposed algorithm, the re-projection errors were calculated and the distances among markers on the 3-axis frame and on the wand were computed, and the results were compared with those of a commercial motion capture system. In the 3D reconstruction of the 3-axis frame, the average error was 1.7042 mm for the commercial system and 0.8765 mm for the proposed algorithm, i.e., the error was reduced to 51.4 percent of the commercial system's. For the distance between markers on the wand, the average error was 1.8897 mm for the commercial system and 2.0183 mm for the proposed algorithm.
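The wand-dance refinement rests on a simple constraint: the reconstructed distance between the two wand markers should always equal the known wand length. A minimal sketch of that idea, reduced to a single global scale factor with a closed-form least-squares solution (the paper's full optimization adjusts camera and distortion parameters, not just scale; the marker coordinates below are hypothetical):

```python
import math

def wand_scale(pairs, known_length):
    """Least-squares global scale s minimizing sum((s*d_i - L)^2) over the
    reconstructed wand-marker distances d_i; closed form: s = L*sum(d_i)/sum(d_i^2)."""
    dists = [math.dist(a, b) for a, b in pairs]
    return known_length * sum(dists) / sum(d * d for d in dists)

# hypothetical reconstructions of a 500 mm wand, each about 2% too long
pairs = [((0.0, 0.0, 0.0), (510.0, 0.0, 0.0)),
         ((10.0, 5.0, 0.0), (10.0, 515.0, 0.0)),
         ((0.0, 0.0, 20.0), (0.0, 0.0, 528.0))]
s = wand_scale(pairs, 500.0)   # scale factor bringing reconstructions to true length
```

Applying `s` to the reconstructed coordinates shrinks the wand distances toward the known 500 mm.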

A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System (가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.28 no.2 / pp.155-163 / 2009
  • In text-to-speech systems, the conversion of text into prosodic parameters comprises three steps: the placement of prosodic boundaries, the determination of segmental durations, and the specification of fundamental frequency contours. Prosodic boundaries, as the most important and basic parameters, affect the estimation of durations and fundamental frequency. Break prediction is thus an important step, since break indices (BIs) strongly influence how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult, because BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, predicting the accentual phrase boundary (APB) and the major phrase boundary (MPB) is particularly difficult. This paper therefore presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is difficult to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). The VB is chosen using a classification and regression tree, and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. In the MOS test, the original speech scored 4.99, while the proposed method scored 4.25 and the conventional method 4.01. The experimental results show that the proposed method improves the naturalness of synthesized speech.
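The unit-selection step against multiple prosodic targets can be sketched as follows: each candidate unit is scored against every target generated for a variable break (one APB-like, one MPB-like), and the unit whose best match is cheapest wins. The cost function, weights, and pitch/duration values are illustrative assumptions, not the paper's actual features.

```python
def unit_cost(unit, target, w_pitch=1.0, w_dur=1.0):
    """Weighted prosodic distance between one candidate unit and one target."""
    return (w_pitch * abs(unit["pitch"] - target["pitch"])
            + w_dur * abs(unit["dur"] - target["dur"]))

def select_unit(candidates, targets):
    """Score each unit against *all* prosodic targets and keep its best match;
    return the candidate whose best-matching target cost is lowest."""
    return min(candidates, key=lambda u: min(unit_cost(u, t) for t in targets))

# hypothetical targets for a variable break: APB-like (short pause, higher pitch)
# versus MPB-like (longer pause, lower pitch); units are database candidates
targets = [{"pitch": 180.0, "dur": 40.0}, {"pitch": 150.0, "dur": 120.0}]
candidates = [{"id": "u1", "pitch": 178.0, "dur": 45.0},
              {"id": "u2", "pitch": 140.0, "dur": 200.0}]
best = select_unit(candidates, targets)
```

Because a VB keeps both targets alive, a unit only needs to match one of them well, which is the point of deferring the APB/MPB decision to selection time.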

Comparison of Lambertian Model on Multi-Channel Algorithm for Estimating Land Surface Temperature Based on Remote Sensing Imagery

  • A Sediyo Adi Nugraha;Muhammad Kamal;Sigit Heru Murti;Wirastuti Widyatmanti
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.397-418 / 2024
  • Land Surface Temperature (LST) is a crucial parameter for identifying drought, and it is essential to determine how its retrieval accuracy can be improved, particularly in mountainous and hilly areas. Accuracy can be increased by applying topographic correction with a Lambertian model during early data processing. Empirical evidence has shown that this stage effectively enhances object identification, especially in areas lacking direct illumination. This research therefore examines the application of the Lambertian model to LST estimation using the Multi-Channel Method (MCM) across various physiographic regions. The Lambertian model assumes Lambertian reflectance and specifically addresses the radiance values obtained from Sun-Canopy-Sensor (SCS) and Cosine Correction measurements. Applying topographic correction to the LST output notably widens the dispersion of LST values. The physiography of the area is also significant: plains terrain tends to show extreme LST values of ≥ 350 K, while in mountainous and hilly terrain the LST typically falls within 310-325 K. Without topographic correction, LST values differ by 22 K in the plains, 12-21 K in hilly and mountainous terrain, and 7-9 K in mixed plains and mountainous terrain. Validation indicates that the Lambertian model with the SCS and Cosine Correction methods yields better results than processing without the Lambertian model, particularly in hilly and mountainous terrain; in plains, its application proves suboptimal. The relationship between physiography and LST derived using the Lambertian model shows a high average R² of 0.99. The lowest error and root mean square error, approximately ±2 K and 0.54 respectively, were achieved using the Lambertian model with the SCS method. Based on these findings, the research concludes that the Lambertian model can increase LST values; the corrected values are often higher than those obtained without the Lambertian model.
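The two topographic corrections named above have widely used standard forms, sketched here under the assumption that the paper applies them conventionally; the angles and radiance value are hypothetical.

```python
import math

def cosine_correction(radiance, solar_zenith_deg, incidence_deg):
    """Cosine correction: L_h = L_t * cos(theta_z) / cos(i), where i is the
    local solar incidence angle on the slope."""
    return (radiance * math.cos(math.radians(solar_zenith_deg))
            / math.cos(math.radians(incidence_deg)))

def scs_correction(radiance, slope_deg, solar_zenith_deg, incidence_deg):
    """Sun-Canopy-Sensor correction: L_h = L_t * cos(alpha) * cos(theta_z) / cos(i),
    with alpha the terrain slope; designed for vegetated (canopy) terrain."""
    return (radiance * math.cos(math.radians(slope_deg))
            * math.cos(math.radians(solar_zenith_deg))
            / math.cos(math.radians(incidence_deg)))

# hypothetical poorly illuminated slope: incidence angle larger than solar zenith,
# so both corrections brighten the observed radiance
l_cos = cosine_correction(100.0, solar_zenith_deg=30.0, incidence_deg=60.0)
l_scs = scs_correction(100.0, slope_deg=20.0, solar_zenith_deg=30.0, incidence_deg=60.0)
```

The SCS factor is always smaller than the plain cosine factor by cos(alpha), which tempers the well-known overcorrection of the cosine method on steep slopes.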

An Electrical Conductivity Reconstruction for Evaluating Bone Mineral Density : Simulation (골 밀도 평가를 위한 뼈의 전기 전도도 재구성: 시뮬레이션)

  • 최민주;김민찬;강관석;최흥호
    • Journal of Biomedical Engineering Research / v.25 no.4 / pp.261-268 / 2004
  • Osteoporosis is a clinical condition in which the amount of bone tissue is reduced and the likelihood of fracture is increased. It is known that the electrical properties of bone are related to its density; in particular, the electrical resistance of bone decreases as bone loss increases. This implies that the electrical properties of bone may be a useful parameter for diagnosing osteoporosis, provided they can be readily measured. This study attempted to evaluate the electrical conductivity of bone using electrical impedance tomography (EIT). Obtaining an EIT image of bone is generally difficult because of the large difference (about two orders of magnitude) in electrical properties between bone and the surrounding soft tissue. In the present study, we took an adaptive mesh regeneration technique originally developed for the detection of two-phase boundaries and modified it to reconstruct the electrical conductivity inside a boundary whose geometry is given. A numerical simulation was carried out for a tibia phantom: a circular cylindrical phantom (radius 40 mm) containing an ellipsoidal homogeneous tibia bone (semi-axes of 17 mm and 15 mm) surrounded by soft tissue. The bone was located 15 mm above the center of the circular cross section of the phantom. The electrical conductivity of the soft tissue was set to 4 mS/cm, and that of the bone was varied from 0.01 to 1 mS/cm. The simulation included measurement errors in order to examine their effects. The simulated results showed that, if the measurement error was kept below 5%, the reconstructed electrical conductivity of the bone was within 10% error. As expected, the accuracy increased with the electrical conductivity of the bone, indicating that the present technique provides more accurate information for osteoporotic bones. It should be noted that the simulation is based on a simple two-phase image of the bone and the surrounding soft tissue, with the anatomical information provided. Nevertheless, the study indicates that the EIT technique may be used as a new means of detecting the bone loss that leads to osteoporotic fractures.

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies, generally making the best use of information technology, have tended to give high returns to investors, and for this reason many are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source for such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore constructed on the basis of two ideas for classifying which companies are more efficient: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs; recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we used DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems; here we employed it to classify the efficiency ratings of IT venture companies according to the DEA results. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical theory and has thus far shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, the hyperplane with the maximum separation between classes; the support vectors are the points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are mapped from the original input space into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel of the SVM. For the multi-class SVM, we adopted the one-against-one binary-classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when the accurate class is difficult to determine in the actual market; we therefore also report accuracy within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the selection of kernel function parameters, the generalization, and the sample size for multi-class problems.
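The one-against-one decision rule mentioned above can be sketched independently of SVM training itself: one binary classifier per class pair, then a majority vote over their outputs. The threshold "classifiers" below are hypothetical stand-ins for trained binary SVMs over three efficiency ratings.

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(x, classes, pairwise):
    """One-against-one multi-class decision: each pairwise classifier votes
    for one of its two classes; the class with the most votes wins."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise[(a, b)](x)] += 1
    return votes.most_common(1)[0][0]

# hypothetical stand-ins for trained binary SVMs over efficiency ratings A/B/C:
# each "classifier" simply thresholds a single efficiency score
pairwise = {
    ("A", "B"): lambda x: "A" if x > 0.8 else "B",
    ("A", "C"): lambda x: "A" if x > 0.6 else "C",
    ("B", "C"): lambda x: "B" if x > 0.4 else "C",
}
label = one_vs_one_predict(0.7, ["A", "B", "C"], pairwise)
```

With k classes this scheme trains k(k-1)/2 binary classifiers, which is the trade-off against the all-together formulations of Weston-Watkins and Crammer-Singer.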

Impact of Lambertian Cloud Top Pressure Error on Ozone Profile Retrieval Using OMI (램버시안 구름 모델의 운정기압 오차가 OMI 오존 프로파일 산출에 미치는 영향)

  • Nam, Hyeonshik;Kim, Jae Hawn;Shin, Daegeun;Baek, Kanghyun
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.347-358 / 2019
  • The Lambertian cloud model is a simplified cloud model used to effectively retrieve the vertical ozone distribution of an atmosphere in which clouds exist. With the Lambertian cloud model, the optical characteristics of clouds required for radiative transfer simulation are parametrized by the Optical Centroid Cloud Pressure (OCCP) and the Effective Cloud Fraction (ECF), and the accuracy of each parameter greatly affects the accuracy of the radiance simulation. However, it is very difficult to generalize the vertical ozone error due to OCCP error, because it varies with the radiation environment and algorithm settings, and its effect is mixed with other errors arising in the ozone retrieval process. This study analyzed the ozone retrieval error due to OCCP error using two methods. First, we simulated the impact of OCCP error on ozone retrieval based on optimal estimation. Using the LIDORT radiative transfer model, the radiance error due to the OCCP error was calculated, and this radiance error was substituted into the conversion equation of the optimal estimation method to convert it into an ozone retrieval error. The results show that an OCCP error of 100 hPa leads to an overestimation of total ozone by 2.7%. Second, a case analysis was carried out to find the ozone retrieval error due to OCCP error. For the case analysis, the ozone retrieval error was simulated assuming an OCCP error and compared with the ozone error in cases from PROFOZ 2005-2006, an OMI ozone profile product. To define the ozone error in each case, an idealized assumption was made; considering albedo and the horizontal variation of ozone, 49 cases satisfying the assumption were selected. As a result, 27 of the 49 cases (about 55%) showed a correlation of 0.5 or more. These results show that the OCCP error has a significant influence on the accuracy of ozone profile retrieval.
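The conversion of a radiance error into a retrieval error via optimal estimation can be illustrated in the scalar case, where the gain matrix reduces to a single expression. This is a textbook-style sketch of the formalism, not the paper's LIDORT-based setup, and all numbers are hypothetical.

```python
def oe_gain(k, s_a, s_e):
    """Scalar optimal-estimation gain g = s_a*k / (k*k*s_a + s_e), mapping a
    radiance perturbation dy into a state (ozone) perturbation dx = g*dy;
    k is the Jacobian, s_a the a-priori variance, s_e the noise variance."""
    return s_a * k / (k * k * s_a + s_e)

# hypothetical scalar setting: sensitive measurement (k=2), moderate noise
g = oe_gain(k=2.0, s_a=1.0, s_e=0.5)
dx = g * 0.9   # ozone error induced by a radiance error of 0.9 (e.g. from OCCP error)
```

The same structure holds in the vector case, where the gain becomes (KᵀSₑ⁻¹K + Sₐ⁻¹)⁻¹KᵀSₑ⁻¹ and the OCCP-induced radiance error vector is propagated through it.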

Respiratory air flow transducer calibration technique for forced vital capacity test (노력성 폐활량검사시 호흡기류센서의 보정기법)

  • Cha, Eun-Jong;Lee, In-Kwang;Jang, Jong-Chan;Kim, Seong-Sik;Lee, Su-Ok;Jung, Jae-Kwan;Park, Kyung-Soon;Kim, Kyung-Ah
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.5 / pp.1082-1090 / 2009
  • Peak expiratory flow rate (PEF) is a very important diagnostic parameter obtained from the forced vital capacity (FVC) test. The expiratory flow rate increases rapidly during the short initial period, and the non-ideal dynamic characteristics of the transducer may therefore cause measurement error, particularly in PEF. The present study evaluated the initial rise slope ($S_r$) of the flow rate signal to compensate the transducer output data. The 26 standard signals recommended by the American Thoracic Society (ATS) were generated and passed through a velocity-type respiratory air flow transducer while the transducer output signal was simultaneously acquired. Most PEF values and the corresponding outputs ($N_{PEF}$) were well fitted by a quadratic equation, with a high correlation coefficient of 0.9997; only two signals (ATS #2 and #26) resulted in significant deviations of $N_{PEF}$, with relative errors > 10%. The relationship between the relative error in $N_{PEF}$ and $S_r$ was found to be linear, and the $N_{PEF}$ data were compensated on this basis. As a result, the 99% confidence interval of the PEF error turned out to be approximately 2.5%, less than a quarter of the 10% upper limit recommended by the ATS. The present compensation technique thus proved to be very accurate, complying with the international ATS standards, and would be useful for calibrating respiratory air flow transducers.
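The compensation described above, a linear model of the relative error in $N_{PEF}$ as a function of $S_r$ that is inverted to correct the output, can be sketched as follows; the coefficients are hypothetical, not the study's fitted values.

```python
def compensate_npef(n_pef, s_r, a, b):
    """Remove the S_r-dependent relative error, modeled as e(S_r) = a*S_r + b
    (a fraction), so the compensated output is N_PEF / (1 + e(S_r))."""
    return n_pef / (1.0 + a * s_r + b)

# hypothetical coefficients of the fitted linear error model
a, b = 0.002, -0.01

# a steep initial rise (large S_r) inflates the raw reading; compensation removes it
corrected = compensate_npef(520.0, s_r=30.0, a=a, b=b)
```

After compensation, the residual PEF error is what the study's 99% confidence interval (about 2.5%) characterizes.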

Development of a Dose Calibration Program for Various Dosimetry Protocols in High Energy Photon Beams (고 에너지 광자선의 표준측정법에 대한 선량 교정 프로그램 개발)

  • Shin Dong Oh;Park Sung Yong;Ji Young Hoon;Lee Chang Geon;Suh Tae Suk;Kwon Soo IL;Ahn Hee Kyung;Kang Jin Oh;Hong Seong Eon
    • Radiation Oncology Journal / v.20 no.4 / pp.381-390 / 2002
  • Purpose: To develop dose calibration programs for IAEA TRS-277 and AAPM TG-21, based on the air kerma calibration factor (or the cavity-gas calibration factor), as well as for IAEA TRS-398 and AAPM TG-51, based on the absorbed dose to water calibration factor, so as to avoid the errors associated with these calculation procedures. Materials and Methods: Currently, the most widely used dosimetry protocols for high energy photon beams are based on the air kerma calibration factor, following IAEA TRS-277 and AAPM TG-21. However, these have a somewhat complex formalism and limited accuracy due to uncertainties in the physical quantities. Recently, the IAEA and the AAPM published protocols based on the absorbed dose to water calibration factor, IAEA TRS-398 and AAPM TG-51. The formalisms and physical parameters were strictly applied in these four dose calibration programs, and the tables and graphs of physical data and the information on ion chambers were digitized for incorporation into a database. The programs were developed to be user friendly, in the Visual $C^{++}$ language, for ease of use in a Windows environment according to the recommendations of each protocol. Results: The dose calibration programs developed for the four protocols allow the input of information about the dosimetry system, the characteristics of the beam quality, the measurement conditions, and the dosimetry results, minimizing inter-user variations and errors during the calculation procedure. It was also possible to compare the absorbed dose to water obtained with the four different protocols at a single reference point. Conclusion: Since the programs express the physical parameter tables, graphs, and ion chamber information in numerical, database form, errors associated with the procedures and with different users can be avoided. It was possible to analyze and compare the major differences between the dosimetry protocols, since the programs were designed to be user friendly and to accurately calculate the correction factors and absorbed dose. It is expected that users can make accurate dose calculations in high energy photon beams by selecting and performing the appropriate dosimetry protocol.
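The TG-51 branch of such a program ultimately evaluates the protocol's dose equation, $D_w^Q = M\,k_Q\,N_{D,w}^{^{60}Co}$, where the raw reading is first corrected for temperature-pressure, ion recombination, polarity, and electrometer factors. A minimal sketch with hypothetical values (not the output of the described program):

```python
def corrected_reading(m_raw, p_tp, p_ion, p_pol, p_elec):
    """Fully corrected electrometer reading (AAPM TG-51 formalism):
    M = M_raw * P_TP * P_ion * P_pol * P_elec."""
    return m_raw * p_tp * p_ion * p_pol * p_elec

def dose_to_water(m, k_q, n_dw):
    """TG-51 absorbed dose to water: D_w = M * k_Q * N_{D,w}(60Co)."""
    return m * k_q * n_dw

# hypothetical chamber reading (nC) and correction factors
m = corrected_reading(20.00, p_tp=1.010, p_ion=1.003, p_pol=1.001, p_elec=1.000)
# hypothetical beam-quality factor k_Q and calibration factor N_Dw (Gy/nC)
dose = dose_to_water(m, k_q=0.992, n_dw=0.054)   # absorbed dose in Gy
```

Databasing the $k_Q$ tables per chamber model is exactly the kind of lookup the abstract says was digitized to remove transcription errors.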

The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The facet size, or grid cell resolution, is determined by the density of rain gauge stations, and a $5{\times}5km$ grid cell is considered the lowest limit under the conditions in Korea. The PRISM algorithms were implemented for a 270 m DEM of South Korea in a script language environment (Python), and the relevant weights for each 270 m grid cell were derived from the monthly data of 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed against elevation, and the selected linear regression equations with the 270 m DEM were used to generate a digital precipitation map of South Korea at 270 m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors evaluated. An average 10% reduction in root mean square error (RMSE) was found for months with more than 100 mm of monthly precipitation, compared with the RMSE of the original 5 km PRISM estimates. This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at a much higher spatial resolution than the original PRISM, without losing accuracy.
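The per-cell regression step can be sketched as a weighted least-squares fit of station precipitation against station elevation, evaluated at the grid cell's DEM elevation; the stations, weights, and values below are hypothetical.

```python
def weighted_linreg(elev, precip, weights):
    """Weighted least-squares line precip = a*elev + b, as in PRISM's
    per-cell elevation regression over nearby gauge stations."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, elev)) / sw
    my = sum(w * y for w, y in zip(weights, precip)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(weights, elev))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, elev, precip))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical 5 nearby stations: elevation (m), monthly precipitation (mm),
# and PRISM-style weights (distance, facet, exposure, etc. combined)
elev = [120.0, 260.0, 340.0, 480.0, 610.0]
precip = [160.0, 185.0, 205.0, 230.0, 255.0]
w = [1.0, 0.8, 0.9, 0.6, 0.5]
a, b = weighted_linreg(elev, precip, w)
estimate = a * 300.0 + b   # precipitation estimate for a grid cell at 300 m elevation
```

Repeating this fit for every 270 m cell, with weights recomputed from that cell's topographic facet, is what produces the gridded precipitation surface.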