• Title/Summary/Keyword: error distribution


Atmospheric correction by Spectral Shape Matching Method (SSMM): Accounting for horizontal inhomogeneity of the atmosphere

  • Shanmugam Palanisamy;Ahn Yu-Hwan
    • Proceedings of the KSRS Conference
    • /
    • 2006.03a
    • /
    • pp.341-343
    • /
    • 2006
  • The current spectral shape matching method (SSMM), developed by Ahn and Shanmugam (2004), relies on the assumption that the path radiance resulting from photons scattered by air molecules and aerosols, and possibly light directly reflected from the air-sea interface, is spatially homogeneous over the sub-scene of interest, enabling the retrieval of water-leaving radiances ($L_w$) from satellite ocean color image data. This assumption remains valid under clear atmospheric conditions, but when the distribution of aerosol loadings varies dramatically, the postulate of spatial homogeneity is violated. In this study, we present the second version of SSMM, which takes into account horizontal variations of aerosol loading when correcting atmospheric effects in SeaWiFS ocean color image data. The new version includes models for the correction of aerosol and Rayleigh scattering effects and a method for computing diffuse transmittance ($t_{os}$) similar to that of SeaWiFS. We tested this method over different optical environments and compared its effectiveness with the results of the standard atmospheric correction (SAC) algorithm (Gordon and Wang, 1994) and with in-situ observations. Findings revealed that the SAC algorithm distorted the spectral shape of water-leaving radiance spectra in areas dominated by suspended sediments (SS) and algal blooms, and frequently yielded underestimated or even negative values in the lower green and blue parts of the electromagnetic spectrum. Retrieval of water-leaving radiances in coastal waters with very high sediment loads, for instance >8 g $m^{-3}$, was not possible with the SAC algorithm.
As the current SAC algorithm does not include models for Asian aerosols, water-leaving radiances over aerosol-dominated areas could not be retrieved from the image, and large errors often resulted from inappropriate extrapolation of the estimated aerosol radiance from the two IR bands to the visible spectrum. In contrast, the new SSMM enabled accurate retrieval of water-leaving radiances over a wide range of turbid waters with SS concentrations from 1 to 100 g $m^{-3}$, closely matching the in-situ observations. Regardless of the spectral band, the RMS error ranged from a minimum of 0.003 to a maximum of 0.46, compared with 0.26 and 0.81, respectively, for the SAC algorithm. The new SSMM also removed all aerosol effects, except in areas where the signal-to-noise ratio is much lower than the water signal.
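The RMS deviation used above to compare retrievals against in-situ observations is a standard metric; a minimal NumPy sketch, with made-up per-band radiance values rather than the paper's data:

```python
import numpy as np

def rms_error(retrieved, in_situ):
    """Root-mean-square deviation between retrieved and in-situ radiances."""
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return float(np.sqrt(np.mean((retrieved - in_situ) ** 2)))

# Hypothetical per-band water-leaving radiances (illustrative units)
ssmm = [1.02, 0.98, 0.75, 0.40]
situ = [1.00, 1.00, 0.77, 0.41]
print(rms_error(ssmm, situ))
```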


New Worstcase Optimization Method and Process-Variation-Aware Interconnect Worstcase Design Environment (새로운 Worstcase 최적화 방법 및 공정 편차를 고려한 배선의 Worstcase 설계 환경)

  • Jung, Won-Young;Kim, Hyun-Gon;Wee, Jae-Kyung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.10 s.352
    • /
    • pp.80-89
    • /
    • 2006
  • The rapid development of process technology and the introduction of new materials not only make process control difficult but also increase process variations. These process variations are barriers to the successful implementation of circuit designs because of the disparities between data on the layout and data on the wafer. This paper proposes a new design environment that determines the interconnect worstcase accurately and quickly, so that interconnect effects due to process-induced variations can be applied to designs at $0.13{\mu}m$ and below. Common Geometry and Maximum Probability methods have been developed and integrated into the new worstcase optimization algorithm. The delay time of a 31-stage ring oscillator, manufactured in UMC $0.13{\mu}m$ Logic, was measured, and the results confirmed the accuracy of the algorithm. When the algorithm was used to optimize worstcase determination, the relative error was less than 1.00%, two times more accurate than conventional methods. Furthermore, the new worstcase design environment improved optimization speed by 32.01% compared to conventional worstcase optimizers. Moreover, it accurately predicted the worstcase of non-normal distributions, which conventional methods cannot do well.
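The abstract does not spell out the Common Geometry and Maximum Probability methods, so the sketch below is only a hedged illustration of the general idea of worstcase determination under a non-normal parameter distribution: sample a hypothetical (skewed, lognormal) delay factor and take a 3-sigma-equivalent quantile instead of assuming a Gaussian corner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized interconnect delay factor under process variation,
# drawn from a skewed (non-normal) lognormal distribution.
samples = rng.lognormal(mean=0.0, sigma=0.1, size=100_000)

# 3-sigma-equivalent worstcase quantile (99.87th percentile) taken directly
# from the empirical distribution, so skewness is handled correctly.
worst_case = np.percentile(samples, 99.87)
print(worst_case)
```

A Gaussian 3-sigma corner would understate this value for a right-skewed distribution, which is the kind of error percentile-based methods avoid.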

Assessment of Productive Areas for Quercus acutissima by Ecoprovince in Korea Using Environmental Factors (환경요인을 이용한 생태권역별 상수리나무의 적지판정)

  • Kim, Tae U;Sung, Joo Han;Kwon, Tae-Sung;Chun, Jung Hwa;Shin, Man Yong
    • Journal of Korean Society of Forest Science
    • /
    • v.102 no.3
    • /
    • pp.437-445
    • /
    • 2013
  • This study was conducted to develop site index equations and to estimate productive areas for Quercus acutissima by ecoprovince in Korea using environmental factors. Using a large data set from both a digital forest site map and a climatic map, a total of 48 environmental factors, including 19 climatic variables, were regressed on site index to develop the site index equations. Four to six environmental factors per ecoprovince were selected as independent variables in the final site index equations. The coefficients of determination for the site index equations ranged from 0.30 to 0.41, which seems relatively low but is good enough for estimating forest stand productivity. The site index equations were also verified with three evaluation statistics: model bias, model precision, and mean square error of measurement. According to these statistics, the site index equations fitted the test data sets well, with relatively low bias and variation. It was therefore concluded that the site index equations are well capable of estimating site quality. Based on the site index equations for Quercus acutissima by ecoprovince, productive areas were estimated by applying GIS techniques to the digital forest site map and climate map, and the distribution of productive areas by ecoprovince was mapped.
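The regression step described above (environmental factors regressed on site index, with R² around 0.30-0.41) can be sketched with ordinary least squares; the predictors, coefficients, and noise level below are invented for illustration, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical environmental predictors (e.g., temperature, precipitation,
# slope, elevation), standardized; 200 hypothetical stands.
n = 200
X = rng.normal(size=(n, 4))
site_index = 14 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, site_index, rcond=None)

# Coefficient of determination (R^2)
pred = A @ coef
ss_res = np.sum((site_index - pred) ** 2)
ss_tot = np.sum((site_index - site_index.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))
```

A modest R² like this can still rank sites usefully, which is the sense in which the paper calls 0.30-0.41 "good enough" for productivity estimation.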

Impact of GPS-RO Data Assimilation in 3DVAR System on the Typhoon Event (태풍 수치모의에서 GPS-RO 인공위성을 사용한 관측 자료동화 효과)

  • Park, Soon-Young;Yoo, Jung-Woo;Kang, Nam-Young;Lee, Soon-Hwan
    • Journal of Environmental Science International
    • /
    • v.26 no.5
    • /
    • pp.573-584
    • /
    • 2017
  • In order to simulate a typhoon precisely, satellite observations were assimilated using the WRF (Weather Research and Forecasting) model three-dimensional variational (3DVAR) data assimilation system. The observations used in 3DVAR were GPS Radio Occultation (GPS-RO) data carried on a Low-Earth Orbit (LEO) satellite. The refractivity of the Earth's atmosphere is deduced from temperature, pressure, and water vapor; GPS-RO data are obtained from this refractivity when the satellite passes the limb position with respect to its original orbit. In this paper, two typhoon cases were simulated to examine the characteristics of data assimilation. One occurred in the Western Pacific from 16 to 25 October 2015, and the other affected the Korean Peninsula from 22 to 29 August 2012. In the simulation results, the typhoon tracks of the background (BGR) and assimilation (3DV) runs differed significantly where the track changed rapidly. The surface wind speed showed large differences at long forecast times because the GPS-RO data contain much information at upper levels, and it takes time for that information to affect the surface wind. Along with the modified typhoon track, the differences in the horizontal distribution of accumulated rain rate were remarkable, ranging from -600 to 500 mm. Over 7 days, we compared daily assimilation (3DV) with assimilation at the initial time only (3DV_7). Because 3DV_7 reproduced the typhoon track and its meteorological variables accurately, the differences between the two experiments were found to be insignificant. Using observed rain rate data at 79 surface observatories, a statistical analysis was carried out to evaluate the quantitative improvement. Although all experiments underestimated the rain amount because of the low model resolution (27 km), the reduced Mean Bias and Root-Mean-Square Error were 2.92 mm and 4.53 mm, respectively.
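The evaluation statistics at the end, Mean Bias and RMSE against station observations, are standard definitions; a minimal sketch with hypothetical station rain amounts (not the 79-station data):

```python
import numpy as np

def mean_bias(sim, obs):
    """Mean of (simulated - observed); negative means underestimation."""
    return float(np.mean(np.asarray(sim) - np.asarray(obs)))

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed values."""
    d = np.asarray(sim) - np.asarray(obs)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical accumulated rain (mm) at a few stations
sim = [10.0, 22.0, 5.0, 31.0]
obs = [12.0, 25.0, 4.0, 36.0]
print(mean_bias(sim, obs), rmse(sim, obs))
```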

A Study on Reliability and Training of Face-Bow Transfer Procedure (안궁의 신뢰성과 학습효과에 관한 연구)

  • So, Woong-Seup;Choi, Dae-Kyun;Kwon, Kung-Rock;Lee, Seok-Hyung
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.19 no.4
    • /
    • pp.297-308
    • /
    • 2003
  • A face-bow is used to transfer models to the articulator when diagnosing a patient or treating problems associated with occlusion. However, there have been few reports on the reliability of the face-bow procedure or on the relationship between operator experience and that reliability. The purposes of this study were to examine the reliability of the face-bow procedure and to evaluate whether face-bow transfer has a training effect. Nine dentists working at M hospital each performed a face-bow transfer on one patient with normal dentition and interdental relationship. The procedure was done twice a week for four weeks. The maxillary model was mounted on the articulator each time, and the landmarks on the maxillary right first molar, maxillary left central incisor, and maxillary left first molar were measured with a special three-dimensional instrument. These data were entered into a computer and evaluated statistically. The results were as follows: 1. The ANOVA test gave p=0.2040 for the maxillary right first molar, p=0.0578 for the maxillary left incisor, and p=0.1433 for the maxillary left first molar; no difference was significant at the $p{\leq}0.05$ level. 2. Training: 1) The correlation coefficient between trial number and rejection was -0.578 when analyzed with the t-distribution; the more trials performed, the fewer errors found. 2) When the S.D. of the first three trials was compared to the S.D. of the last three trials, the former was larger in thirty-nine cases and the latter was larger in fifteen cases; again, errors decreased with repeated trials. 3. When the S.D. of the x, y, and z coordinates were examined, the x coordinate had the largest S.D. five times, the y coordinate four times, and the z coordinate nine times. Errors were therefore most likely to occur in the z coordinate.
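The one-way ANOVA in result 1 can be computed directly from between- and within-group sums of squares; the landmark coordinates below are illustrative values, not the study's measurements:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for one-way ANOVA from group sums of squares."""
    all_data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_data.mean()
    k = len(groups)          # number of groups
    n = len(all_data)        # total observations
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical repeated landmark coordinates (mm) from three sessions
g1 = [0.12, 0.15, 0.11, 0.14]
g2 = [0.13, 0.16, 0.12, 0.15]
g3 = [0.11, 0.14, 0.13, 0.12]
print(one_way_anova_F([g1, g2, g3]))
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-values reported in the study.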

A High Speed Block Turbo Code Decoding Algorithm and Hardware Architecture Design (고속 블록 터보 코드 복호 알고리즘 및 하드웨어 구조 설계)

  • 유경철;신형식;정윤호;김근회;김재석
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.7
    • /
    • pp.97-103
    • /
    • 2004
  • In this paper, we propose a high-speed block turbo code decoding algorithm and an efficient hardware architecture. Multimedia wireless data communication systems need channel codes with strong error-correcting capabilities. Block turbo codes support variable code rates and packet sizes, and show high performance due to the iterative soft-decision decoding of turbo codes. However, block turbo codes have a long decoding time because of the iterative decoding and the complicated extrinsic-information computation. The proposed algorithm reduces this decoding time by using a threshold that represents the channel information. After the threshold is determined from simulation results, the algorithm eliminates the calculation for bits with good channel information and assigns them a high reliability value. The threshold is set from the absolute mean and the standard deviation of the LLR (Log-Likelihood Ratio), under the assumption that the LLR distribution is Gaussian. The algorithm assigns '1', the highest reliability value, to those bits. The hardware design using Verilog HDL reduces decoding time by about 30% compared with the conventional algorithm, and requires about 20K logic gates and 32 Kbit of memory.
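The threshold rule described above can be sketched as follows; `alpha` is a hypothetical tuning factor standing in for the simulation-derived threshold choice in the paper:

```python
import numpy as np

def reliable_mask(llr, alpha=1.0):
    """Mark bits whose |LLR| exceeds a channel-derived threshold.

    Assuming the LLRs are roughly Gaussian, the threshold is formed from
    the absolute mean and standard deviation of the LLRs; alpha is an
    illustrative tuning factor, not the paper's simulated value.
    """
    llr = np.asarray(llr, dtype=float)
    threshold = np.mean(np.abs(llr)) + alpha * np.std(llr)
    return np.abs(llr) > threshold

llr = np.array([5.2, -0.3, 4.8, 0.1, -6.0, 2.5])
mask = reliable_mask(llr, alpha=0.5)

# Bits flagged reliable skip the extrinsic-information update and receive
# the top reliability value; the rest are decoded iteratively as usual.
reliability = np.where(mask, 1.0, 0.0)
print(mask)
```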

Development of Tree Stem Weight Equations for Larix kaempferi in Central Region of South Korea (중부지역 일본잎갈나무의 수간중량 추정식 개발)

  • Ko, Chi-Ung;Son, Yeong-Mo;Kang, Jin-Taek;Kim, Dong-Geun
    • Journal of Korean Society of Forest Science
    • /
    • v.107 no.2
    • /
    • pp.184-192
    • /
    • 2018
  • This study was conducted to develop stem weight prediction equations for Larix kaempferi in the central region of Korea, with standard sites selected taking into account the diameter and position of the local trees. Fifty-five sample trees were selected in total. Using measured data from the sample trees, 11 models were compared and analyzed in order to estimate four different kinds of weight: fresh weight, oven-dry weight outside bark, oven-dry weight inside bark, and merchantable weight. The models were classified by three predictor sets: DBH; DBH and height; and volume. The optimal model was chosen by comparing model performance using the fit index, the standard error of estimate (SEE), and the residual distribution. As a result, the formula using DBH alone (one variable) is $W=a+bD+cD^2$ (3), with a fit index of 90~92%, and the formula using DBH and height (two variables) is $W=aD^bH^c$ (8), with a fit index of 97~98%; the two-variable model thus showed higher fitness than the one-variable model. Moreover, the formula for total volume and merchantable volume (W=aV) showed a high fit index of 98~99%, with SEE of 7.7-17.5 and CV of 8.0-10.0%, giving predominantly high fitness overall. This study is expected to provide information on the weights of single trees and to serve as a basis for stand-level weight and biomass estimation equations.
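The two fitted forms, $W=a+bD+cD^2$ and $W=aD^bH^c$, can both be estimated with least squares (the power model via log-linearization); the sample-tree values below are invented for illustration:

```python
import numpy as np

# Hypothetical sample-tree data: DBH (cm), height (m), stem weight (kg)
D = np.array([12.0, 16.0, 20.0, 24.0, 28.0, 32.0])
H = np.array([10.0, 13.0, 15.0, 18.0, 20.0, 22.0])
W = np.array([55.0, 110.0, 190.0, 300.0, 440.0, 610.0])

# Model (3): W = a + b*D + c*D^2, fitted with ordinary least squares
c, b, a = np.polyfit(D, W, 2)

# Model (8): W = a * D^b * H^c, linearized as ln W = ln a + b ln D + c ln H
A = np.column_stack([np.ones(len(D)), np.log(D), np.log(H)])
coef, *_ = np.linalg.lstsq(A, np.log(W), rcond=None)
a8, b8, c8 = np.exp(coef[0]), coef[1], coef[2]

# Fit index (analogous to R^2) for the quadratic model
pred = a + b * D + c * D ** 2
fit_index = 1 - np.sum((W - pred) ** 2) / np.sum((W - W.mean()) ** 2)
print(round(float(fit_index), 3))
```

The log-linearized fit minimizes error in ln W rather than W, a common simplification; a nonlinear fit of the power model directly would match the paper's approach more closely.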

Application of Hydroacoustic System and Kompsat-2 Image to Estimate Distribution of Seagrass Beds (수중음향과 Kompsat-2 위성영상을 이용한 해초지 분포 추정)

  • Kim, Keunyong;Eom, Jinah;Choi, Jong-Kuk;Ryu, Joo-Hyung;Kim, Kwang Yong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.17 no.3
    • /
    • pp.181-188
    • /
    • 2012
  • Despite the ecological importance of seagrass beds, information on their distribution in Korean coastal waters is insufficient. Therefore, we used a hydroacoustic system to collect accurate bathymetry and seagrass classifications, and a Kompsat-2 image (4 m spatial resolution) to detect seagrass beds at Deukryang Bay, Korea. The accuracy of the Kompsat-2 image classification was evaluated against the hydroacoustic survey results using an error matrix and Kappa value. The total seagrass area from the satellite image classification was underestimated compared to the hydroacoustic survey: 3.9 and $4.5 km^2$ from the satellite image and hydroacoustic data, respectively. Nonetheless, the Kompsat-2 classification achieved 90% accuracy (Kappa = 0.85) for the three-class map (seagrass, unvegetated seawater, and aquaculture), and the agreement between the satellite image classification and the hydroacoustic result was 77.1% for the seagrass presence/absence map. Based on these results, Kompsat-2 imagery is suitable for mapping seagrass beds accurately and non-destructively. For more accurate information, further studies with a variety of high-resolution satellite images are needed.
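The Kappa value used to assess the classification is computed from the error (confusion) matrix; the matrix below is hypothetical, chosen only so the numbers land near the reported accuracy:

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a square error (confusion) matrix."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                                  # observed agreement
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 3-class error matrix: seagrass, unvegetated seawater, aquaculture
cm = [[45, 3, 2],
      [4, 40, 1],
      [1, 2, 42]]
print(round(kappa(cm), 2))
```

Kappa discounts agreement expected by chance, which is why it is reported alongside raw accuracy.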

Effect of Inlet Shape on Thermal Flow Characteristics for Waste Gas in a Thermal Decomposition Reactor of Scrubber System (반도체 폐가스 처리용 열분해반응기의 입구형상이 열유동 특성에 미치는 영향에 관한 수치해석 연구)

  • Yoon, Jonghyuk;Kim, Youngbae;Song, Hyungwoon
    • Applied Chemistry for Engineering
    • /
    • v.29 no.5
    • /
    • pp.510-518
    • /
    • 2018
  • Recently, much interest has focused on scrubber systems that abate the waste gases produced by semiconductor manufacturing processes. Effective design of the thermal decomposition reactor inside a scrubber system is significantly important, since it is directly related to pollutant removal performance and overall stability. In the present study, a computational fluid dynamics (CFD) analysis was conducted to investigate the thermal and flow characteristics inside the reactor of a wet scrubber. To verify the numerical method, the temperature at several monitoring points was compared with experimental results; average errors of 1.27~2.27% between the two were achieved, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also examined. The results showed that the flow and temperature uniformity were significantly improved. Overall, this study provides useful information for identifying the fluid behavior and thermal performance of various scrubber systems.
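The validation error rates quoted above are simple relative errors averaged over monitoring points; a minimal sketch with hypothetical temperatures, not the study's measurements:

```python
def percent_error(sim, exp):
    """Relative error (%) of a simulated value against the experiment."""
    return abs(sim - exp) / abs(exp) * 100.0

# Hypothetical monitoring-point temperatures (K): CFD vs measured
pairs = [(780.0, 770.0), (652.0, 660.0), (905.0, 899.0)]
errors = [percent_error(s, e) for s, e in pairs]
avg = sum(errors) / len(errors)
print(round(avg, 2))
```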

A Study on the Compression and Major Pattern Extraction Method of Origin-Destination Data with Principal Component Analysis (주성분분석을 이용한 기종점 데이터의 압축 및 주요 패턴 도출에 관한 연구)

  • Kim, Jeongyun;Tak, Sehyun;Yoon, Jinwon;Yeo, Hwasoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.4
    • /
    • pp.81-99
    • /
    • 2020
  • Origin-destination data have been collected and utilized for demand analysis and service design in various fields such as public transportation and traffic operation. As the utilization of big data becomes important, there is an increasing need to store raw origin-destination data for big data analysis. However, it is not practical to store and analyze the raw data for long periods, since the data size grows as a power of the number of collection points. To overcome this storage limitation and enable long-period pattern analysis, this study proposes a methodology for compressing origin-destination data and analyzing the compressed data. The proposed methodology is applied to public transit data of Sejong and Seoul. We first measure the reconstruction error and the data size for each truncated matrix. Then, to determine a range of principal components that excludes random variation, we measure the level of regularity based on covariance coefficients of the demand data reconstructed from each range of principal components. Based on the distribution of the covariance coefficients, we found the range of principal components that covers the regular demand: 1~60 for Sejong and 1~80 for Seoul, respectively.
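The truncated-matrix reconstruction described above is standard PCA via SVD; the sketch below uses synthetic demand counts (Poisson draws, not the Sejong/Seoul data) to show how the reconstruction error is measured for a chosen number of principal components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily OD matrices flattened into rows of a data matrix
# (rows: 90 days, columns: 400 origin-destination pairs)
X = rng.poisson(lam=20.0, size=(90, 400)).astype(float)

# PCA via SVD of the mean-centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10  # number of principal components kept
X_hat = X.mean(axis=0) + (U[:, :k] * s[:k]) @ Vt[:k]

# Relative reconstruction error of the truncated representation
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(round(float(err), 4))
```

Storing only the top-k components (scores plus loadings) replaces the full day-by-OD matrix, which is the compression the study exploits; sweeping k trades error against storage.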