• Title/Summary/Keyword: High accurate scheme


Update of Digital Map Using Terrestrial LiDAR Data and Modified RANSAC (수정된 RANSAC 알고리즘과 지상라이다 데이터를 이용한 수치지도 건물레이어 갱신)

  • Kim, Sang Min;Jung, Jae Hoon;Lee, Jae Bin;Heo, Joon;Hong, Sung Chul;Cho, Hyoung Sig
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.3-11
    • /
    • 2014
  • Recently, rapid urbanization has necessitated continuous updates of digital maps to provide users with the latest and most accurate information. However, conventional aerial photogrammetry has restrictions on periodic updates of small areas due to its high cost, and as-built drawings also bring problems with maintaining quality. Alternatively, this paper proposes a scheme for the efficient and accurate update of digital maps using point cloud data acquired by a Terrestrial Laser Scanner (TLS). Initially, the building sides are extracted from the whole point cloud and projected onto a 2D image to trace out the 2D building footprints. To register the extracted footprints on the digital map, a 2D affine model is used. For affine parameter estimation, the centroids of the footprint groups are randomly chosen and matched by means of a modified RANSAC algorithm. The experimental results showed that it is possible to renew a digital map using building footprints extracted from TLS data.
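The centroid-matching step above follows the usual RANSAC pattern: repeatedly fit a 2D affine model (6 parameters) to a minimal sample of centroid pairs and keep the model with the most inliers. The sketch below is an illustrative reconstruction of that pattern, not the paper's specific modification; function names, the inlier tolerance, and the iteration count are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)              # [x0, y0, x1, y1, ...]
    A[0::2, 0:2] = src               # x' = a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src               # y' = c*x + d*y + ty
    A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)           # [[a, b, tx], [c, d, ty]]

def ransac_affine(src, dst, n_iter=200, tol=0.5, seed=0):
    """RANSAC-style affine estimation from matched centroid pairs."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = int(np.sum(np.linalg.norm(pred - dst, axis=1) < tol))
        if inliers > best_inliers:
            best_model, best_inliers = M, inliers
    return best_model, best_inliers
```

A consensus-based fit like this tolerates wrong centroid pairings, which is why it suits the random matching of footprint groups described in the abstract.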

Field Map Estimation for Effective Fat Quantification at High Field MRI (고자장 자기공명영상에서 효율적인 지방 정량화를 위한 필드 맵 측정 기술)

  • Eun, Sung-Jong;Whangbo, Taeg-Keun
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.558-574
    • /
    • 2014
  • The number of fatty liver patients is growing sharply due to the rapid increase in the incidence of metabolic syndrome, which can lead to diseases such as abdominal obesity, hypertension, diabetes, and hyperlipidemia. Early diagnosis requires examination by magnetic resonance imaging (MRI), where quantitative analysis is often implemented through a professional water-fat separation method because the intensity values of the areas of interest and non-interest are considerably similar or identical. However, such separation methods generate inaccurate results in high magnetic fields, where field inhomogeneity increases. To overcome the limits of conventional fat quantification methods, this paper proposes a field map estimation method that is effective in high magnetic fields. The method generates field maps from echo images obtained with the existing IDEAL sequence, taking the degree of phase wrapping of the field maps into account. Clustering is then performed to separate calibration areas, least-squares fits are applied to the separated calibration areas following a region-growing scheme, and the histograms are adjusted to separate water from fat. In the experiments, the proposed method achieved a superior average fat detection rate of 86.4%, compared to 61.5% for the IDEAL method and 62.6% for Yu's method. It also achieved a more accurate average water detection rate of 98.4%, versus 88.6% for the fat saturation method.
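At its core, a field map assigns each pixel an off-resonance frequency estimated from how the phase evolves across the echoes. A minimal toy version of that per-pixel least-squares fit is shown below; it assumes already-unwrapped phases and omits the paper's clustering, region-growing, and histogram steps entirely, so it is a sketch of the underlying idea rather than the proposed method.

```python
import numpy as np

def estimate_field_map(phases, echo_times):
    """Per-pixel least-squares fit of phase vs. echo time.

    phases: array of shape (n_echoes, H, W), unwrapped phase in radians.
    echo_times: echo times in seconds.
    Returns the off-resonance frequency map in Hz.
    """
    te = np.asarray(echo_times, dtype=float)
    A = np.stack([te, np.ones_like(te)], axis=1)   # slope + intercept
    flat = phases.reshape(len(te), -1)
    coef, *_ = np.linalg.lstsq(A, flat, rcond=None)
    slope = coef[0].reshape(phases.shape[1:])      # rad/s per pixel
    return slope / (2 * np.pi)                     # convert to Hz
```

With wrapped phases the slope fit breaks down, which is exactly why the paper devotes its effort to handling the wrapping degree before fitting.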

Downscaling of AMSR2 Sea Ice Concentration Using a Weighting Scheme Derived from MODIS Sea Ice Cover Product (MODIS 해빙피복 기반의 가중치체계를 이용한 AMSR2 해빙면적비의 다운스케일링)

  • Ahn, Jihye;Hong, Sungwook;Cho, Jaeil;Lee, Yang-Won
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.5
    • /
    • pp.687-701
    • /
    • 2014
  • Sea ice is generally accepted as an important factor for understanding earth climate change and is a basis of the earth system models used for analysis and prediction of climate change. To continuously monitor sea ice changes at the kilometer scale, it is necessary to create more accurate grid data from the current, limited sea ice data. In this paper we describe a downscaling method for Advanced Microwave Scanning Radiometer 2 (AMSR2) Sea Ice Concentration (SIC) from 10 km to 1 km resolution, using a weighting scheme based on the sea-ice-days ratio derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sea ice cover product, which is highly correlated with the SIC. In a case study of the Sea of Okhotsk, the sea ice areas of the data before and after downscaling were identical, and the monthly means and standard deviations of SIC exhibited almost the same values. Also, Empirical Orthogonal Function (EOF) analyses showed that three kinds of SIC data (ERA-Interim, original AMSR2, and downscaled AMSR2) had very similar principal components of spatial and temporal variation. Our method can be applied to the downscaling of other continuous ratio-type variables such as percentages, and can contribute to monitoring small-scale changes of sea ice by providing finer SIC data.
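One way to realize such a weighting scheme is to redistribute each coarse cell's concentration over its fine sub-cells in proportion to fine-resolution weights, so that the coarse-cell mean (and hence total sea ice area) is preserved, consistent with the case-study result above. The sketch below assumes generic nonnegative weights; the paper's actual weight definition from MODIS sea ice days is not reproduced here.

```python
import numpy as np

def downscale_sic(coarse_sic, weights, factor=10):
    """Downscale a coarse SIC grid using fine-resolution weights.

    Each coarse cell's value is spread over its factor x factor fine
    sub-cells in proportion to the weights; the mean over each block
    equals the original coarse value. (A real implementation would
    also cap values at 100% concentration.)
    """
    H, W = coarse_sic.shape
    fine = np.zeros((H * factor, W * factor))
    for i in range(H):
        for j in range(W):
            sl = np.s_[i * factor:(i + 1) * factor,
                       j * factor:(j + 1) * factor]
            block = weights[sl]
            s = block.sum()
            if s > 0:
                fine[sl] = coarse_sic[i, j] * block * block.size / s
            else:
                fine[sl] = coarse_sic[i, j]  # no weight info: copy value
    return fine
```

Because the redistribution is mean-preserving by construction, the monthly statistics of the downscaled field match the coarse field, mirroring the comparison reported in the abstract.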

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. For production seismic data processing, a good velocity analysis tool is required as well as a high-performance computer; the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis, batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires much human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. The velocity analysis must also be carried out by carefully choosing the location of the analysis point and by accurate computation of the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must be repeated many times. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave, but it has two improvements: no interpolation error and very fast computation. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. It runs in the X-Window/Motif environment, with a menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
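The velocity spectrum at the heart of this workflow is conventionally a semblance measure: for each trial (t0, v) pair, trace amplitudes are picked along the NMO hyperbola t(x) = sqrt(t0^2 + x^2/v^2) and their coherence is scored. The following is a minimal, illustrative computation of that spectrum, not the xva/Geobit implementation; sampling details and names are assumptions.

```python
import numpy as np

def nmo_time(t0, offset, vel):
    """Two-way traveltime along the NMO hyperbola."""
    return np.sqrt(t0 ** 2 + (offset / vel) ** 2)

def semblance(gather, offsets, dt, t0s, vels):
    """Semblance velocity spectrum for a CDP gather.

    gather: (n_samples, n_traces) amplitudes; offsets in meters;
    dt: sample interval in seconds. Returns (len(t0s), len(vels)) scores.
    """
    n_samp, n_tr = gather.shape
    spec = np.zeros((len(t0s), len(vels)))
    for it, t0 in enumerate(t0s):
        for iv, v in enumerate(vels):
            samples = []
            for j in range(n_tr):
                k = int(round(nmo_time(t0, offsets[j], v) / dt))
                if k < n_samp:
                    samples.append(gather[k, j])
            if len(samples) > 1:
                a = np.array(samples)
                denom = len(a) * np.sum(a ** 2)
                if denom > 0:
                    # coherence: (sum a)^2 / (N * sum a^2), in [0, 1]
                    spec[it, iv] = np.sum(a) ** 2 / denom
    return spec
```

A spectrum like this peaks at the stacking velocity of each reflection; the coherent shallow noise discussed above distorts exactly this measure, which motivates muting before the spectrum is computed.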


An Electric Load Forecasting Scheme with High Time Resolution Based on Artificial Neural Network (인공 신경망 기반의 고시간 해상도를 갖는 전력수요 예측기법)

  • Park, Jinwoong;Moon, Jihoon;Hwang, Eenjun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.11
    • /
    • pp.527-536
    • /
    • 2017
  • With the recent development of the smart grid industry, the necessity for an efficient EMS (Energy Management System) has increased. In particular, in order to reduce electric load and energy cost, sophisticated electric load forecasting and an efficient smart grid operation strategy are required. In this paper, for more accurate electric load forecasting, we extend the data collected at demand time to a high time resolution and construct an artificial neural network-based forecasting model appropriate for the high-resolution data. Furthermore, to improve the forecasting accuracy, time series data in sequence form are transformed into continuous data in a two-dimensional space, to solve the problem that machine learning methods cannot reflect the periodicity of time series data. In addition, to consider external factors such as temperature and humidity at the chosen time resolution, we estimate their values at that resolution using linear interpolation. We also apply the PCA (Principal Component Analysis) algorithm to the feature vector composed of external factors to remove data that have little correlation with the power data. Finally, we evaluate our model through 5-fold cross-validation. The results show that forecasting based on a higher time resolution improves the accuracy, and the best error rate of 3.71% was achieved at the 3-min resolution.
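Two of the preprocessing steps above lend themselves to short sketches: mapping a periodic time index onto a two-dimensional space so that the end of a cycle sits next to its beginning, and linearly interpolating hourly external factors to the finer resolution. The common sin/cos encoding shown here is one standard way to realize the 2D transform; whether the paper uses exactly this encoding is an assumption.

```python
import numpy as np

def cyclic_encode(t, period):
    """Map a periodic time index onto the unit circle.

    After encoding, t = 0 and t = period coincide, so a model sees
    hour 23 as adjacent to hour 0 rather than maximally distant.
    """
    angle = 2 * np.pi * (np.asarray(t) % period) / period
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

def upsample_external(values, factor):
    """Linearly interpolate a coarse series (e.g. hourly temperature)
    onto a grid `factor` times finer."""
    n = len(values)
    x = np.arange(n)
    xf = np.arange((n - 1) * factor + 1) / factor
    return np.interp(xf, x, values)
```

For example, with a 24-hour period the encoded points for hours 23 and 0 are close together in the 2D space, which is the periodicity property the raw sequence index lacks.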

A Novel Spectral Analysis of Ultrashort Laser Pulses Using Class-2 PRS Model (Class-2 PRS 모델을 이용한 극초단레이져펄스의 스펙트럼 분석)

  • 전진성;조형래;오용선
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.177-183
    • /
    • 1999
  • In this paper, we analyze the transmission characteristics of ultrashort laser pulses using the properties of raised-cosine pulses that are systematically obtained following the Class-2 PRS model. The high-order pulses are easily derived from a modified PRS system model, as in the Class-1 PRS model. This is based on the fact that the spectra and bandwidths of the high-order pulses are systematically related to their orders, and we show that they are very useful for covering a wider range and describing the transmission characteristics of ultrashort pulses more accurately than the Gaussian or sech pulse approximations used conventionally. First, modifying the generalized PRS system model, we propose a new model for deriving any type of high-order pulse, and we offer a novel analysis method for ultrashort pulse transmission with any shape and FWHM using the proposed model. In addition, by fixing the pulse range $\tau$ = 1 (ps) and varying the order of the pulse from n = 1 to n = 100, we obtain spectra of ultrashort pulses with FWHMs from 1 (ps) down to 100 (fs); the FWHM in the Class-2 PRS model is 50~100 (fs) smaller than in the Class-1 PRS model. Going one step further, we derive the PSDs of the corresponding pulse trains when they are applied to a unipolar signaling scheme. These PSDs are derived over the range of possible pulse intervals. All of these results not only coincide with some conventional experimental works but can also be applied to future work on ultrashort pulses.
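The key quantitative relation above is that the FWHM of a raised-cosine-type pulse shrinks as its order grows. As a purely illustrative stand-in (this cos^n family is an assumption, not the paper's Class-2 PRS derivation), one can verify that behavior numerically:

```python
import numpy as np

def rc_pulse(t, tau=1.0, n=1):
    """Illustrative raised-cosine pulse of order n, supported on |t| <= tau/2."""
    base = 0.5 * (1 + np.cos(2 * np.pi * t / tau))
    return np.where(np.abs(t) <= tau / 2, base ** n, 0.0)

def fwhm(t, p):
    """Numerical full width at half maximum of a sampled pulse."""
    half = p.max() / 2
    above = t[p >= half]
    return above.max() - above.min()
```

For the order-1 pulse with tau = 1 the half-maximum points fall at t = ±1/4, giving FWHM = 0.5, and higher orders are strictly narrower, consistent with the trend the abstract reports for increasing n.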


Error Resilient Video Coding Techniques Using Multiple Description Scheme (다중 표현을 이용한 에러에 강인한 동영상 부호화 방법)

  • 김일구;조남익
    • Journal of Broadcast Engineering
    • /
    • v.9 no.1
    • /
    • pp.17-31
    • /
    • 2004
  • This paper proposes an algorithm for the robust transmission of video in error-prone environments using multiple description coding (MDC), based on an optimal split of DCT coefficients and a rate-distortion (RD) optimization framework. In MDC, a source signal is split into several coded streams, called descriptions, and each description is transmitted to the decoder through a different channel. Structured correlations between the descriptions are introduced at the encoder, and the decoder exploits this correlation to reconstruct the original signal even if some descriptions are missing. It has been shown that MDC is more resilient than single description coding (SDC) under severe packet loss rate (PLR) conditions. But the excessive redundancy in MDC, i.e., the correlation between the descriptions, degrades the RD performance under low PLR conditions. To overcome this problem, we propose a hybrid MDC method that controls SDC/MDC switching according to the channel condition: for example, SDC is used for coding efficiency under low PLR conditions, and MDC for error resilience under high PLR conditions. To control the SDC/MDC switching optimally, the RD optimization framework is used. The Lagrange optimization technique minimizes the RD-based cost function J = D + λR, where R is the actually coded bit rate and D is the estimated distortion. A recursive optimal per-pixel estimate technique is adopted to accurately estimate the decoder distortion. Experimental results show that the proposed optimal split of DCT coefficients and SD/MD switching algorithm is more effective than conventional MD algorithms under low PLR conditions as well as under high PLR conditions.
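The switching rule above reduces to comparing the Lagrangian cost J = E[D] + λR of the two modes, where the expected distortion depends on the loss probability of each channel. The sketch below shows that comparison under the simplest assumptions (two independent channels with equal loss probability); the distortion and rate inputs, and all names, are illustrative, and the paper's recursive per-pixel distortion estimator is not reproduced.

```python
def expected_distortion(d_both, d_one, d_none, p_loss, mode):
    """Expected decoder distortion under independent packet losses."""
    if mode == "SDC":
        # single stream: either received intact or entirely lost
        return (1 - p_loss) * d_both + p_loss * d_none
    # MDC: two descriptions sent over independent channels
    p0 = (1 - p_loss) ** 2           # both descriptions arrive
    p1 = 2 * p_loss * (1 - p_loss)   # exactly one arrives
    p2 = p_loss ** 2                 # both are lost
    return p0 * d_both + p1 * d_one + p2 * d_none

def choose_mode(rates, dists, p_loss, lam):
    """Pick SDC or MDC by minimizing J = E[D] + lambda * R."""
    def cost(m):
        d = dists[m]
        return expected_distortion(d["both"], d["one"], d["none"],
                                   p_loss, m) + lam * rates[m]
    return min(("SDC", "MDC"), key=cost)
```

With typical inputs (MDC carries extra rate as redundancy but degrades gracefully when one description is lost), the rule selects SDC at low loss rates and MDC at high ones, which is exactly the hybrid behavior the abstract describes.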

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the areas of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. The accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate, and the complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate ATP quotation quite difficult. Various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, so models developed using integer time lags only approximate real systems, and the differences arising from this approximation frequently result in significant accuracy degradation. To introduce an ATP model with non-integer time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research directly related to the topic of this paper. They propose a modeling framework for systems with non-integer time lags and show how to apply it to a variety of systems, including continuous time series, manufacturing resource planning, and the critical path method; their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, a system was usually modeled either by rounding lags to the nearest integers or by subdividing the time grid to make the lags integer multiples of the grid. But each approach has a critical weakness: the first under- or overestimates lead times, potentially leading to infeasibilities or to excessive work-in-process, while the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management, focusing on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including a definition of the required data flow; an illustrative ATP module shows that the proposed system has a large effect in SCM. The system we are concerned with is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers, for which we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed within a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm: focusing on a capacitated production planning and capacity allocation problem, we develop a mixed integer programming model and propose an efficient heuristic procedure using an evolutionary system to solve it efficiently. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
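The evolutionary heuristic above relies on a real-valued representation, which a real-coded genetic algorithm handles directly. The following is a generic minimal sketch of such a GA (tournament selection, blend crossover, Gaussian mutation, elitism); the operators, parameters, and the objective are assumptions, and the paper's regeneration procedure for infeasible chromosomes is not reproduced.

```python
import random

def genetic_search(fitness, n_genes, pop_size=30, n_gen=60,
                   mut_rate=0.2, seed=1):
    """Minimal real-coded GA minimizing `fitness`; genes kept in [0, 1]."""
    rng = random.Random(seed)
    clamp = lambda x: min(1.0, max(0.0, x))
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(n_gen):
        next_pop = sorted(pop, key=fitness)[:2]     # elitism: keep best two
        while len(next_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            a = rng.random()                            # blend crossover
            child = [clamp(a * x + (1 - a) * y) for x, y in zip(p1, p2)]
            child = [clamp(g + rng.gauss(0, 0.1))       # Gaussian mutation
                     if rng.random() < mut_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)
```

Normalized genes in [0, 1] would then be decoded into problem quantities such as capacity allocation fractions, which is the role the representation scheme plays in the proposed system.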

Present Status of Soilborne Disease Incidence and Scheme for Its Integrated Management in Korea (국내 토양병해 발생현황과 종합 관리방안)

  • Kim, Choong-Hoe;Kim, Yong-Ki
    • Research in Plant Disease
    • /
    • v.8 no.3
    • /
    • pp.146-161
    • /
    • 2002
  • The incidence of soilborne diseases, a major cause of failure in continuous monocropping, has become severe in recent years. For example, recent epidemics of club root of Chinese cabbage, white rot of garlic, bacterial wilt of potato, pepper Phytophthora blight, tomato Fusarium wilt, and CGMMV of watermelon are diseases that require urgent control measures. Reasons for the severe incidence of soilborne diseases include the simplified cropping system, or continuous monocropping, associated with the concentration of major production areas of certain crops, and the year-round cultivation system that results in rapid degradation of the soil environment. The neglect of breeding for disease resistance relative to the emphasis placed on high yield and good quality, and cultural methods that put the use of chemical fertilizers first, are also thought to be reasons. Counter-measures against soilborne disease epidemics will be most effective when the remedies are suited to the individual causes. As long-term strategies, the development of a rational cropping system that fits local cropping and economic conditions, the development and supply of cultivars resistant to multiple diseases, and the improvement of the soil environment by soil conditioning are suggested. As short-term strategies, simple and economical soil-disinfestation technology and quick and accurate forecasting methods for soilborne diseases are urgent matters for development. For these, extensive support is required at the governmental level for training soilborne disease specialists and for activating collaborative research to solve the problems posed by soilborne diseases.

Application of Effective Regularization to Gradient-based Seismic Full Waveform Inversion using Selective Smoothing Coefficients (선택적 평활화 계수를 이용한 그래디언트기반 탄성파 완전파형역산의 효과적인 정규화 기법 적용)

  • Park, Yunhui;Pyun, Sukjoon
    • Geophysics and Geophysical Exploration
    • /
    • v.16 no.4
    • /
    • pp.211-216
    • /
    • 2013
  • In general, smoothing filters regularize functions by reducing differences between adjacent values. Smoothing filters can therefore regularize inverse solutions and produce more accurate subsurface structure when applied to full waveform inversion. However, if we apply a smoothing filter with a constant coefficient to a subsurface image or velocity model, it will blur layer interfaces and fault structures, because it does not consider any information about geologic structures or velocity variations. In this study, we develop a selective smoothing regularization technique that adapts the smoothing coefficients according to the inversion iteration, to remedy the weakness of smoothing regularization with a constant coefficient. First, we determine appropriate frequencies and analyze the corresponding wavenumber coverage. Then, we define the effective maximum wavenumber as the 99th percentile of the wavenumber spectrum in order to choose smoothing coefficients that effectively limit the wavenumber coverage. By adapting the chosen smoothing coefficients according to the iteration, we can implement multi-scale full waveform inversion while inverting multi-frequency components simultaneously. A successful inversion example on a salt model with high-contrast velocity structures shows that our method effectively regularizes the inverse solution. We also verify that our scheme is applicable to field data through a numerical example on synthetic data containing random noise.
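The two ingredients above, a 99th-percentile effective maximum wavenumber and a smoothing operator whose strength is tied to it, can be sketched in one dimension as follows. This is an illustrative reconstruction under simplifying assumptions (1-D model, Gaussian taper in the wavenumber domain), not the paper's exact coefficient selection rule.

```python
import numpy as np

def effective_max_wavenumber(model, dx):
    """99th-percentile wavenumber of a 1-D model's amplitude spectrum."""
    spec = np.abs(np.fft.rfft(model))
    k = np.fft.rfftfreq(len(model), d=dx)
    cdf = np.cumsum(spec) / spec.sum()
    return k[np.searchsorted(cdf, 0.99)]

def smooth_gradient(grad, k_max, dx):
    """Damp gradient wavenumbers above k_max with a Gaussian taper,
    acting as an iteration-dependent smoothing regularizer."""
    K = np.fft.rfftfreq(len(grad), d=dx)
    G = np.fft.rfft(grad)
    taper = np.exp(-0.5 * (K / max(k_max, 1e-12)) ** 2)
    return np.fft.irfft(G * taper, n=len(grad))
```

Loosening the cutoff (raising k_max) as the iterations proceed admits progressively finer structure into the update, which is how a single run can mimic multi-scale inversion while all frequency components are inverted simultaneously.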