• Title/Summary/Keyword: Gaussian process (가우스 과정)

Search results: 61 (processing time: 0.026 seconds)

Result Analysis on Making Activities 1 to 100 with digits 1, 9, 9, 6 (숫자 1, 9, 9, 6을 이용하여 1에서 100까지 만들기 과제 적용 결과 분석)

  • Kim, Sang-Lyong
    • Education of Primary School Mathematics / v.13 no.2 / pp.55-66 / 2010
  • The basic direction of mathematical education for the 21st century is focused on helping students understand mathematics and on developing their problem-solving abilities, mathematical disposition, and mathematical thinking. Elementary mathematics teachers should help students make sense of mathematics, feel confident in their abilities, and find the learning environment comfortable enough to participate in. Through the activity of making the numbers 1 to 100 with the digits 1, 9, 9, 6, students' interest in and preference for mathematics improved. This game is useful for fostering students' mathematical thinking (the concepts of exponential expression, roots (√), and the Gauss function, i.e., the floor function [ ]) and mathematical disposition. If students become interested in mathematics through mathematical games, they come to regard mathematics as an interesting and challenging subject that encourages them to think in many different ways.
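As an illustrative aside (not from the paper), the search space of this digit game can be enumerated by brute force. The hypothetical sketch below combines the digits 1, 9, 9, 6 using only the four basic operations, so targets that need exponents, roots, or the floor function are not counted:

```python
from fractions import Fraction
from itertools import combinations

def reachable_values(nums):
    """All exact values obtainable by combining every number once
    with +, -, *, / (exponents, roots, and floor are omitted here)."""
    def combine(vals):
        if len(vals) == 1:
            yield vals[0]
            return
        # Pick any two values, combine them every way, and recurse.
        for i, j in combinations(range(len(vals)), 2):
            a, b = vals[i], vals[j]
            rest = [v for k, v in enumerate(vals) if k not in (i, j)]
            outcomes = [a + b, a - b, b - a, a * b]
            if b != 0:
                outcomes.append(a / b)
            if a != 0:
                outcomes.append(b / a)
            for r in outcomes:
                yield from combine(rest + [r])
    return set(combine([Fraction(n) for n in nums]))

# Integer targets in 1..100 reachable from the digits 1, 9, 9, 6.
targets = {int(v) for v in reachable_values([1, 9, 9, 6])
           if v.denominator == 1 and 1 <= v <= 100}
```

Exact `Fraction` arithmetic avoids float round-off; for example, 25 is reachable as 1+9+9+6 and 1 as (9-9)*6+1.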

A Real-time Particle Filtering Framework for Robust Camera Tracking in An AR Environment (증강현실 환경에서의 강건한 카메라 추적을 위한 실시간 입자 필터링 기법)

  • Lee, Seok-Han
    • Journal of Digital Contents Society / v.11 no.4 / pp.597-606 / 2010
  • This paper describes a real-time camera tracking framework specifically designed to track a monocular camera in an AR workspace. The Kalman filter is typically employed for camera tracking. In general, however, the tracking performance of conventional methods is seriously affected by unpredictable situations such as ambiguity in feature detection, occlusion of features, and rapid camera shake. In this paper, a recursive Bayesian sampling framework, also known as the particle filter, is adopted for camera pose estimation. In our system, the camera state is estimated on the basis of the Gaussian distribution, without employing an additional uncertainty model or sample weight computation. In addition, the camera state is computed directly from new sample particles distributed according to the true posterior of the system state. To verify the proposed system, we conduct several experiments covering unstable situations in desktop AR environments.
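For readers unfamiliar with recursive Bayesian sampling, a minimal bootstrap particle filter for a 1-D state is sketched below. This is the generic predict-weight-resample cycle, not the paper's variant (which avoids explicit weight computation); all names and parameters are illustrative:

```python
import math
import random

def particle_filter_step(particles, observation, motion_std=0.1, obs_std=0.2):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle with Gaussian process noise.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: likelihood of the observation under a Gaussian sensor model.
    weights = [math.exp(-0.5 * ((observation - p) / obs_std) ** 2)
               for p in predicted]
    total = sum(weights)
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(500)]
for obs in (0.50, 0.52, 0.55):
    particles = particle_filter_step(particles, obs)
estimate = sum(particles) / len(particles)  # posterior mean of the toy state
```

After a few observations the particle cloud concentrates near the observed value, and the posterior mean tracks it.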

Practical Guide to X-ray Spectroscopic Data Analysis (X선 기반 분광광도계를 통해 얻은 데이터 분석의 기초)

  • Cho, Jae-Hyeon;Jo, Wook
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.35 no.3 / pp.223-231 / 2022
  • Spectroscopies are the most widely used tools for understanding the crystallographic, chemical, and physical aspects of materials; accordingly, numerous commercial and non-commercial software packages have been introduced to help researchers better handle their spectroscopic data. However, although the essence of such data analysis is peak fitting, not many researchers, especially early-stage ones, have proper background knowledge of how to choose fitting functions or of the techniques for actual fitting. In this regard, we present a practical guide to peak fitting for data analysis. We start with a basic theoretical background on why and how a certain peak-fitting protocol works, followed by a step-by-step visual demonstration of how an actual fit is performed. We expect this contribution to help many active researchers in materials science better handle their spectroscopic data.
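As a minimal illustration of the kind of peak fitting the guide discusses (an assumption on my part, not the paper's own protocol), Caruana's method fits a single Gaussian peak by linear least squares on the log of the data:

```python
import math

def fit_gaussian_peak(xs, ys):
    """Caruana's method: a Gaussian peak y = A*exp(-(x-mu)^2/(2*sigma^2))
    becomes the parabola log(y) = a + b*x + c*x^2, fit by least squares."""
    pts = [(x, math.log(y)) for x, y in zip(xs, ys) if y > 0]
    # Normal equations of the quadratic fit, as an augmented 3x4 matrix.
    S = [sum(x ** k for x, _ in pts) for k in range(5)]
    T = [sum(l * x ** k for x, l in pts) for k in range(3)]
    M = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (M[i][3] - sum(M[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / M[i][i]
    a, b, c = coef
    # Convert the parabola coefficients back to peak parameters.
    mu = -b / (2.0 * c)
    sigma = math.sqrt(-1.0 / (2.0 * c))
    amp = math.exp(a - b * b / (4.0 * c))
    return amp, mu, sigma

# Recover a known peak (A=2, mu=1, sigma=0.5) from noiseless samples.
xs = [-3.0 + 0.1 * i for i in range(61)]
ys = [2.0 * math.exp(-((x - 1.0) ** 2) / (2.0 * 0.25)) for x in xs]
amp, mu, sigma = fit_gaussian_peak(xs, ys)
```

On noiseless data the log of the samples is exactly quadratic, so the peak parameters are recovered to machine precision; on real spectra the log transform amplifies noise in the tails, which is why iterative fitting is usually preferred.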

History of the Error and the Normal Distribution in the Mid Nineteenth Century (19세기 중반 오차와 정규분포의 역사)

  • Jo, Jae-Keun
    • Communications for Statistical Applications and Methods / v.15 no.5 / pp.737-752 / 2008
  • Around 1800, mathematicians combined the analysis of errors with probability theory into error theory. After being developed by Gauss and Laplace, error theory was widely used in branches of natural science. Motivated by its successful applications in the natural sciences, scientists such as Adolphe Quetelet tried to bring social statistics under error theory. But there were quite a few differences between social science and natural science. In this paper we discuss the issues raised at that time: the interpretation of the individual in society; the arguments against statistical methods; and the history of measures of diversity. From the successes and failures of the 19th-century social statisticians, we can see how statistics became a science essential to both the natural and social sciences, and we can see that the problems which were hard for 19th-century social statisticians to solve still matter today.

Feature based Pre-processing Method to compensate color mismatching for Multi-view Video (다시점 비디오의 색상 성분 보정을 위한 특징점 기반의 전처리 방법)

  • Park, Sung-Hee;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2527-2533 / 2011
  • In this paper we propose a new pre-processing algorithm for multi-view video coding that performs color compensation based on image features. Multi-view images exhibit color differences between neighboring frames owing to illumination and differing camera characteristics. To compensate for these differences, we first model each camera's characteristics using features extracted from its frames, and then correct the color difference. Corresponding features are extracted from each frame with the Harris corner detector, and the model's characteristic coefficients are estimated with the Gauss-Newton algorithm. The RGB components of the target images are compensated separately with respect to the reference image. Experimental results on many test images show that the proposed algorithm performs better than a histogram-based algorithm, achieving up to a 14% bit reduction and a 0.5 dB to 0.8 dB PSNR improvement.
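The Gauss-Newton iteration used for coefficient estimation can be sketched on a toy problem. The gamma-style response model below is a stand-in, not the paper's camera model; names and parameters are hypothetical:

```python
import math

def gauss_newton_gamma(xs, ys, a, g, iters=30):
    """Fit y = a * x**g (a toy gamma-style response) by Gauss-Newton."""
    for _ in range(iters):
        r = [y - a * x ** g for x, y in zip(xs, ys)]   # residuals
        Ja = [x ** g for x in xs]                      # df/da
        Jg = [a * x ** g * math.log(x) for x in xs]    # df/dg
        # 2x2 normal equations (J^T J) d = J^T r, solved in closed form.
        aa = sum(j * j for j in Ja)
        ag = sum(p * q for p, q in zip(Ja, Jg))
        gg = sum(j * j for j in Jg)
        ra = sum(j * e for j, e in zip(Ja, r))
        rg = sum(j * e for j, e in zip(Jg, r))
        det = aa * gg - ag * ag
        a += (gg * ra - ag * rg) / det
        g += (aa * rg - ag * ra) / det
    return a, g

# Noiseless data from a=1.5, g=2.2; start from a rough initial guess.
xs = [0.1 * i for i in range(1, 11)]
ys = [1.5 * x ** 2.2 for x in xs]
a, g = gauss_newton_gamma(xs, ys, a=1.3, g=2.0)
```

Because the data are noiseless, the residuals vanish at the solution and Gauss-Newton converges rapidly from a nearby starting point.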

A Study on Logconductivity-Head Cross Covariance in Two-Dimensional Nonstationary Porous Formations (비정체형 2차원 다공성 매질의 대수투수계수-수두 교차공분산에 관한 연구)

  • 성관제
    • Water for future / v.29 no.5 / pp.215-222 / 1996
  • An expression for the cross covariance of the logconductivity and the head in nonstationary porous formations is obtained. This cross covariance plays a key role in the inverse problem, i.e., in inferring the statistical characteristics of the conductivity field from head data. The nonstationary logconductivity is modeled as the superposition of a definite linear trend and a stationary fluctuation, and the hydraulic head in saturated aquifers is found through stochastic analysis of a steady, two-dimensional flow. The cross covariance with a Gaussian correlation function is investigated for two particular cases in which the trend is either parallel or normal to the head gradient. The results show that the cross covariances are stationary, except along separation distances parallel to the mean flow direction when the trend is parallel to the head gradient. Also, unlike in the stationary model, the cross covariance along distances normal to the flow direction is non-zero. From these observations we conclude that when a trend in the conductivity field is suspected, this information must be incorporated in the analysis of groundwater flow and solute transport.
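For reference, the Gaussian (squared-exponential) correlation function mentioned above typically has the form rho(r) = exp(-(r/l)^2), up to a convention-dependent constant in the exponent; a minimal sketch:

```python
import math

def gaussian_correlation(r, length_scale):
    """Isotropic Gaussian correlation rho(r) = exp(-(r / l)**2).
    Some texts use exp(-r**2 / (2 * l**2)) instead; only the
    length-scale convention differs."""
    return math.exp(-((r / length_scale) ** 2))

# Correlation decays smoothly with separation distance.
rho = [gaussian_correlation(0.5 * k, 1.0) for k in range(4)]
```

Its smoothness at r = 0 is what makes fields with this correlation structure infinitely differentiable in the mean-square sense.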


Improved Estimation for Expected Sliding Distance of Caisson Breakwaters by Employment of a Doubly-Truncated Normal Distribution (이중절단정규분포의 적용을 통한 케이슨 방파제 기대활동량 평가의 향상)

  • Kim Tae-Min;Hwang Kyu-Nam;Takayama Tomotsuka
    • Journal of Korean Society of Coastal and Ocean Engineers / v.17 no.4 / pp.221-231 / 2005
  • The present study concerns the reliability design method (Level III) for caisson breakwaters based on expected sliding distance; its objectives are to propose the use of a doubly-truncated normal distribution and to demonstrate its validity. We therefore explain how the effects of uncertain factors are taken into account, and present a clear basis, together with the method of application, for employing the doubly-truncated normal distribution in the Monte Carlo computation of expected sliding distance. Although only caisson breakwaters are treated in this paper, the doubly-truncated normal distribution can be applied to various coastal structures as well as to other engineering fields, so we expect the present study to be extended in various directions.
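A doubly-truncated normal can be sampled for Monte Carlo simulation by simple rejection, as sketched below; this is a generic illustration, not the paper's implementation, and the bounds are arbitrary:

```python
import random

def truncated_normal(mu, sigma, lo, hi):
    """Rejection sampling from a normal restricted to [lo, hi].
    Efficient as long as [lo, hi] keeps a reasonable share of the mass."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

random.seed(1)
samples = [truncated_normal(0.0, 1.0, -1.5, 2.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

Truncation shifts the mean toward the side with more retained mass: for a standard normal cut to [-1.5, 2.0], the theoretical mean is about 0.083 rather than 0.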

Evaluation of Edge Detector's Smoothness using Fuzzy Ambiguity (퍼지 애매성을 이용한 에지검출기의 평활화 정도평가)

  • Kim, Tae-Yong;Han, Joon-Hee
    • Journal of KIISE: Software and Applications / v.28 no.9 / pp.649-661 / 2001
  • While conventional edge detection can be viewed as the problem of determining whether edges exist at certain locations, fuzzy edge modeling can be viewed as the problem of determining the membership values of edges. Thus, if the location of an edge is unclear, or if the intensity function differs from the ideal edge model, the degree of edgeness at that location is represented as a fuzzy membership value. Using this concept of fuzzy edgeness, we propose an automatic method for evaluating and selecting the smoothing parameter of a conventional edge detector. The evaluation method builds on the fuzzy edge model and analyzes the effect of the smoothing parameter to determine an optimal value for a given image. Using the selected parameter, a detection method can find the least ambiguous edges in an image. The effectiveness of the parameter evaluation method is analyzed and demonstrated on a set of synthetic and real images.
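One common way to quantify the ambiguity of a set of fuzzy membership values is the linear index of fuzziness; the abstract does not say whether the paper uses exactly this measure, so treat the following as a generic sketch:

```python
def fuzziness(memberships):
    """Linear index of fuzziness: 0 for crisp values (all 0 or 1),
    maximal (1) when every membership sits at 0.5."""
    n = len(memberships)
    return (2.0 / n) * sum(min(m, 1.0 - m) for m in memberships)
```

In the spirit of the paper, a smoothing parameter could then be chosen to minimize such an index over the detector's edge-membership map, i.e., to make the detected edges as unambiguous as possible.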


Noise Modeling for CR Images of High-strength Materials (고강도매질 CR 영상의 잡음 모델링)

  • Hwang, Jung-Won;Hwang, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.5 / pp.95-102 / 2008
  • This paper presents an approach for modeling noise in Computed Radiography (CR) images of high-strength materials, specifically designed for types of noise with statistical and nonlinear properties. CR images are degraded even before they are encoded by computer processing. Various types of noise contaminate the radiographic image, even though they are detected during digitization. Quantum noise, which is Poisson distributed, is a shot noise, but the photon distribution on the Image Plate (IP) of a CR system is not always a Poisson process; the statistical properties are relative and case-dependent owing to material characteristics. The usual assumptions of Poisson, binomial, and Gaussian statistics are considered. Nonlinear effects are also represented in the statistical noise model, which allows the noise variance to be estimated in regions from high to low intensity with an analytical model. The approach is tested on a database of steel-tube step-wedge CR images. The results are available for comparative parameter studies that measure noise coherence, distribution, signal-to-noise ratio (SNR), and nonlinear interpolation.
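The defining property of the quantum (shot) noise discussed above is that its variance equals its mean. A quick check using Knuth's Poisson sampler (a generic illustration, not the paper's model):

```python
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler: count uniform draws until their
    running product falls below exp(-lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(2)
samples = [poisson(10.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sample mean and sample variance both come out near 10, the chosen rate; deviation of measured variance from the mean is one signature of the non-Poisson photon statistics the paper investigates.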

Application of the Preconditioned Conjugate Gradient Method to the Generalized Finite Element Method with Global-Local Enrichment Functions (전처리된 켤레구배법의 전체-국부 확장함수를 지닌 일반유한요소해석에의 응용)

  • Choi, Won-Jeong;Kim, Min-Sook;Kim, Dae-Jin;Lee, Young-Hak;Kim, Hee-Cheul
    • Journal of the Computational Structural Engineering Institute of Korea / v.24 no.4 / pp.405-412 / 2011
  • This paper introduces the generalized finite element method with global-local enrichment functions, using the preconditioned conjugate gradient method. The proposed methodology is able to generate enrichment functions for problems where only limited a priori knowledge of the solution is available, and to obtain a preconditioner and an initial guess of good quality at only a small additional computational cost. It is thus very effective for analyzing problems that locally exhibit complex behavior. Several numerical experiments confirm its effectiveness and show that it is computationally more efficient than analyses using direct solvers such as Gaussian elimination.
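A minimal Jacobi-preconditioned conjugate gradient solver illustrates the iterative alternative to direct solvers mentioned above; this generic sketch is not the paper's implementation:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for SPD A with a Jacobi (diagonal) preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x, with x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]   # inverse of diag(A)
    z = [Minv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small symmetric positive-definite test system.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

For an n-by-n SPD system, CG converges in at most n iterations in exact arithmetic; preconditioning and a good initial guess, as the paper exploits, shrink the effective iteration count further.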