• Title/Summary/Keyword: weighted average method

Study on Weight Summation Storage Algorithm of Facial Recognition Landmark (가중치 합산 기반 안면인식 특징점 저장 알고리즘 연구)

  • Jo, Seonguk; You, Youngkyon; Kwak, Kwangjin; Park, Jeong-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.163-170 / 2022
  • This paper introduces a method of extracting facial features from the unrefined inputs encountered in real life, and addresses, through a weight-summation storage algorithm, the problem that object recognition models cannot guarantee their ideal performance and speed on such inputs. Many facial recognition processes ensure accuracy in ideal situations, but their inability to cope with the numerous biases that can occur in real life is drawing attention, and this can lead to serious problems in face recognition processes closely tied to security. This paper presents a method of recognizing faces quickly and accurately in real time by comparing the feature points extracted from the input against a small set of stored feature points that are not overfitted to particular biases, exploiting the fact that variables such as picture composition eventually take an average form.
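The storage algorithm itself is not spelled out in the abstract; as a minimal sketch of the idea it describes (keeping one small stored template per face as a running, count-weighted average of incoming feature vectors, then matching queries against that averaged template), something like the following could work. The function names, the cosine-similarity matcher, and the threshold are illustrative assumptions, not the authors' code:

```python
import numpy as np

def update_template(template, count, new_feat):
    """Fold a new feature vector into the stored template as a running
    (count-weighted) average, so the template converges toward the mean
    of all feature vectors observed so far."""
    updated = (template * count + new_feat) / (count + 1)
    return updated, count + 1

def match(template, feat, threshold=0.8):
    """Cosine similarity between the stored template and a query vector;
    returns the similarity and whether it clears the (assumed) threshold."""
    sim = float(np.dot(template, feat) /
                (np.linalg.norm(template) * np.linalg.norm(feat)))
    return sim, sim >= threshold
```

The storage cost stays at one vector per identity no matter how many inputs have been summed in, which is the speed/size advantage the abstract alludes to.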

Proposal of Analysis Method for Biota Survey Data Using Co-occurrence Frequency

  • Yong-Ki Kim; Jeong-Boon Lee; Sung Je Lee; Jong-Hyun Kang
    • Proceedings of the National Institute of Ecology of the Republic of Korea / v.5 no.3 / pp.76-85 / 2024
  • The purpose of this study is to propose a new method of analysis focusing on the interconnections between species, rather than traditional biodiversity analysis, which represents ecosystems in terms of species and individual counts such as species diversity and species richness. This new approach aims to enhance our understanding of ecosystem networks. Utilizing data from the 4th National Natural Environment Survey (2014-2018), eight taxonomic groups were targeted: herbaceous plants, woody plants, butterflies, Passeriformes birds, mammals, reptiles & amphibians, freshwater fishes, and benthic macroinvertebrates. A co-occurrence frequency analysis was conducted using nationwide data collected over five years. As a result, in all eight taxonomic groups, the degree value represented by a linear regression trend line showed a slope of 0.8, and the weighted degree value showed an exponential nonlinear trend line with a coefficient of determination (R²) exceeding 0.95. The average clustering coefficient was also around 0.8, reminiscent of well-known social phenomena. Creating a combination set from the species list, grouped by temporal information such as survey date and spatial information such as coordinates or grids, is an easy way to discern species distributed regionally and locally. In particular, grouping by species or taxonomic group to produce data such as the co-occurrence frequency between survey points could allow us to discover spatial similarities based on the species present. This analysis could overcome limitations of species data: since it imposes no restrictions on time or space, data collected over a short period in a small area and long-term national-scale data can both be analyzed through appropriate grouping. The co-occurrence frequency analysis measures how many species are associated with a single species and how frequently each pair of species is associated, which will greatly help us understand ecosystems that seem too complex to comprehend. The connectivity data and graphs generated by the analysis are expected to provide a wealth of information and insights not only to researchers, but also to those who observe, manage, and live within ecosystems.
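The pairwise counting the abstract describes can be sketched directly: group records by a temporal/spatial key, then count how often each species pair appears in the same group. The record layout and function name below are assumptions for illustration:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """records: iterable of (group_key, species) pairs, where group_key
    encodes the grouping (e.g. a (survey_date, grid_id) tuple).
    Returns a Counter mapping unordered species pairs to the number of
    groups in which both species were recorded together."""
    by_group = {}
    for key, sp in records:
        by_group.setdefault(key, set()).add(sp)
    pairs = Counter()
    for species in by_group.values():
        for a, b in combinations(sorted(species), 2):
            pairs[(a, b)] += 1
    return pairs
```

A species' degree is then its number of distinct partners in the result, and its weighted degree is the sum of its pair counts, which is what the trend lines in the abstract are fitted to.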

Simulation comparison of standardization methods for interview scores (면접점수 표준화 방법 모의실험 비교)

  • Park, Cheol-Yong
    • Journal of the Korean Data and Information Science Society / v.22 no.2 / pp.189-196 / 2011
  • In this study, we perform a simulation study to compare frequently used standardization methods for interview scores based on the trimmed mean, rank mean, and z-score mean. In the simulation we assume that an interviewer's score is a weighted average of the interviewee's true score and independent noise, with the weight determined by the professionalism of the interviewer. In other words, as the interviewer's professionalism increases, the observed score becomes closer to the true score, and as it decreases, the observed score becomes closer to the noise. The final observed score is obtained by adding the interviewer's tendency bias to this weighted average. For each method, the interviewees' scores are computed, and the method whose rank correlation between its scores and the true scores is highest is considered best. Simulation results show that when the true scores follow normal distributions, the z-score mean is generally best. When the true scores follow Laplace distributions, the z-score mean is better than the rank mean in the full interview system, where all interviewers meet all interviewees, while the rank mean is better than the z-score mean in the half-split interview system, where each interviewer meets only half of the interviewees. The trimmed mean is worst in general.
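The three standardization methods being compared are standard constructions; a minimal sketch (assuming rows are interviewers and columns are interviewees, population standard deviation for the z-score, and symmetric trimming) might look like:

```python
import numpy as np

def trimmed_mean(scores, trim=1):
    """Drop the `trim` lowest and highest scores, average the rest."""
    s = np.sort(scores)
    return s[trim:len(s) - trim].mean()

def rank_mean(score_matrix):
    """Convert each interviewer's row of scores to ranks (best score gets
    rank 1), then average the ranks per interviewee column."""
    ranks = np.argsort(np.argsort(-score_matrix, axis=1), axis=1) + 1
    return ranks.mean(axis=0)

def zscore_mean(score_matrix):
    """Standardize each interviewer's row to mean 0, sd 1, then average
    across interviewers per interviewee column."""
    z = (score_matrix - score_matrix.mean(axis=1, keepdims=True)) / \
        score_matrix.std(axis=1, keepdims=True)
    return z.mean(axis=0)
```

Each method removes a different aspect of interviewer idiosyncrasy: trimming discards outlying judges per interviewee, ranking discards scale entirely, and z-scoring removes each judge's location and spread.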

Development of a Numerical Model of Shallow-Water Flow using Cut-cell System (분할격자체계를 이용한 천수흐름 수치모형의 개발)

  • Kim, Hyung-Jun; Lee, Seung-Oh; Cho, Yong-Sik
    • Journal of the Korean Society of Hazard Mitigation / v.8 no.4 / pp.91-100 / 2008
  • Numerical implementation with a Cartesian cut-cell method is conducted in this study. The Cartesian cut-cell method is an easy and efficient mesh-generation methodology for complex geometries: a background Cartesian grid is employed over most of the computational domain, and cut cells are applied only where the flow characteristics change, such as along solid boundaries, to enhance accuracy, applicability, and efficiency. Accurate representation of complex geometries can thus be obtained. Because the cut cells are irregular meshes of various shapes and sizes, the finite volume method is applied for numerical discretization on the irregular domain. The HLLC approximate Riemann solver, a Godunov-type finite volume method, is employed to discretize the advection terms in the governing equations, and the weighted average flux method is applied on the Cartesian cut-cell grid to stabilize the numerical results. To validate the numerical model, it is applied to a rectangular tank problem for which exact solutions exist. Comparison of the numerical results with the analytical solutions shows that the scheme represents flow characteristics such as free-surface elevation and x- and y-direction velocities in the rectangular tank well on the Cartesian cut-cell grid.
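The paper's mesh generator is not reproduced here, but the core cut-cell operation it relies on (clipping a Cartesian cell against a solid boundary to obtain the irregular fluid polygon and its area fraction) can be illustrated with Sutherland-Hodgman polygon clipping against a linear boundary. This is a toy sketch under that assumption, not the authors' code:

```python
def clip_halfplane(poly, a, b, c):
    """Sutherland-Hodgman clip of a polygon (list of (x, y) vertices,
    counter-clockwise) against the half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        pin = a * p[0] + b * p[1] <= c
        qin = a * q[0] + b * q[1] <= c
        if pin:
            out.append(p)
        if pin != qin:
            # intersection of edge pq with the boundary line a*x + b*y = c
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    """Polygon area by the shoelace formula."""
    return 0.5 * abs(sum(p[0] * q[1] - q[0] * p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

# fluid area fraction of a unit cell cut by the boundary x + y <= 1.5
cell = [(0, 0), (1, 0), (1, 1), (0, 1)]
cut = clip_halfplane(cell, 1, 1, 1.5)
frac = area(cut) / area(cell)  # 0.875: the corner triangle is cut away
```

The resulting irregular polygons are exactly the "various shape and size" meshes on which the finite volume fluxes are then evaluated.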

Evaluation of Parameter Estimation Method for Design Rainfall Estimation (설계강우량 산정을 위한 매개변수 추정방법 평가)

  • Kim, Kwihoon; Jun, Sang-Min; Jang, Jeongyeol; Song, Inhong; Kang, Moon-Seong; Choi, Jin-Yong
    • Journal of The Korean Society of Agricultural Engineers / v.63 no.4 / pp.87-96 / 2021
  • Determining design rainfall is the first step in planning an agricultural drainage facility. The objective of this study is to evaluate whether the current parameter estimation method is reasonable for computing design rainfall. The current Gumbel-Kendall (G-K) method was compared with two other methods, the Gumbel-Chow (G-C) method and the probability weighted moment (PWM) method. Hourly rainfall data were acquired from 60 ASOS (Automated Synoptic Observing System) stations across the nation. For the goodness-of-fit test, this study used the chi-squared (χ²) and Kolmogorov-Smirnov (K-S) tests. With the G-K method, the χ² statistics of 18 stations exceeded the critical value (χ² at α=0.05, df=4: 9.4877), versus 10 and 3 stations for the G-C and PWM methods, respectively. In the K-S test no station exceeded the critical value (D at α=0.05: 0.19838), but the G-K method still performed worst of the three in both tests. Subsequently, this study computed the 48-hour-duration design rainfall at the 60 ASOS stations. The G-K method showed 5.6 and 6.4% higher average design rainfall, and 15.2 and 24.6% higher variance, than the G-C and PWM methods, respectively. In short, G-K showed the worst goodness-of-fit performance and produced higher design rainfall with the least robustness. Considering the basic assumptions of design rainfall estimation, G-K is therefore not an appropriate method for practical use. This study can be referenced when revising the agricultural drainage standards.
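As a hedged illustration of the PWM approach the study favors, here is a sketch of probability-weighted-moment estimation of the Gumbel parameters and the resulting design-rainfall quantile. These are standard textbook formulas for the Gumbel distribution; the plotting positions in the usage note and the variable names are my own choices:

```python
import math

EULER_GAMMA = 0.5772156649

def gumbel_pwm(sample):
    """Probability-weighted-moment estimates of the Gumbel scale (alpha)
    and location (u) parameters, using the unbiased b1 estimator on the
    ascending order statistics."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n  # i is 0-based
    alpha = (2 * b1 - b0) / math.log(2)
    u = b0 - EULER_GAMMA * alpha
    return alpha, u

def design_value(alpha, u, T):
    """Gumbel quantile (e.g. design rainfall) for return period T years."""
    return u - alpha * math.log(-math.log(1 - 1 / T))
```

Fed with an annual-maximum series, `design_value(alpha, u, 50)` would give the 50-year design rainfall under the fitted Gumbel distribution.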

Comparison of Daily Rainfall Interpolation Techniques and Development of Two Step Technique for Rainfall-Runoff Modeling (강우-유출 모형 적용을 위한 강우 내삽법 비교 및 2단계 일강우 내삽법의 개발)

  • Hwang, Yeon-Sang; Jung, Young-Hun; Lim, Kwang-Suop; Heo, Jun-Haeng
    • Journal of Korea Water Resources Association / v.43 no.12 / pp.1083-1091 / 2010
  • Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. However, widely used estimation schemes fail to describe the realistic variability of the daily precipitation field. We compare and contrast the performance of statistical methods for the spatial estimation of precipitation in two hydrologically different basins, and propose a two-step process for effective daily precipitation estimation. The methods assessed are: (1) Inverse Distance Weighted Average (IDW); (2) Multiple Linear Regression (MLR); (3) Climatological MLR; and (4) Locally Weighted Polynomial Regression (LWP). In the suggested two-step process, precipitation occurrence is first generated via a logistic regression model, and the IDW scheme (a local scheme) is then applied to estimate the amount of precipitation on wet days only. The results show that the suggested method interpolates daily rainfall, with its spatial differences, better than the conventional methods, and the technique can be used effectively for streamflow forecasting and for downscaling atmospheric circulation models.
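The two-step process can be sketched as follows. The logistic-regression occurrence model is represented here only by a precomputed probability, since the abstract does not give its covariates; everything else is plain IDW:

```python
import math

def idw(points, values, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from gauged points."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v  # query point coincides with a gauge
        w = d2 ** (-power / 2)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

def two_step_estimate(points, amounts, occ_prob, x, y, threshold=0.5):
    """Two-step daily estimate: first decide wet/dry from a modelled
    occurrence probability (stand-in for the paper's logistic regression),
    then interpolate the amount with IDW only if the day is wet."""
    if occ_prob < threshold:
        return 0.0
    return idw(points, amounts, x, y)
```

Separating occurrence from amount is what prevents the interpolated field from smearing light drizzle across dry areas, which is the main failure mode of one-step interpolation of daily rainfall.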

Multi Layered Planting Models of Zelkova serrata Community according to Warmth Index (온량지수에 따른 느티나무군락의 다층구조 식재모델)

  • Kong, Seok Jun; Shin, Jin Ho; Yang, Keum Chul
    • Journal of the Korean Society of Environmental Restoration Technology / v.15 no.2 / pp.77-84 / 2012
  • This study suggested a planting model for Zelkova serrata communities in areas with warmth indices of 80~100 and 100~120 °C·month. The warmth index was calculated from 449 weather points using the inverse distance weighted interpolation method. The planting species were selected by correlation analysis between Z. serrata and each species with a frequency of four or more among the 36 relevés surveyed for this study. The results are summarized as follows. The warmth index of Z. serrata communities ranged from 74 to 118 °C·month. The correlation analysis showed that Z. serrata belongs to the tree layer at warmth indices of both 80~100 and 100~120 °C·month, whereas Carpinus laxiflora, Quercus serrata, Prunus sargentii and Platycarya strobilacea appeared in the tree layer only at 80~100 °C·month. Z. serrata and Styrax japonica appeared in the subtree layer at both 80~100 and 100~120 °C·month, while Acer pseudosieboldianum, Lindera erythrocarpa, Acer mono, Quercus serrata, etc. appeared in the subtree layer at 80~100 °C·month. Z. serrata, Ligustrum obtusifolium, Lindera obtusiloba, Callicarpa japonica and Zanthoxylum schinifolium all appeared in the shrub layer at both 80~100 and 100~120 °C·month. Lindera erythrocarpa, Orixa japonica, Staphylea bumalda, Akebia quinata and Sorbus alnifolia appeared in the shrub layer at 80~100 °C·month, and Styrax japonica and Stephanandra incisa appeared in the shrub layer at 100~120 °C·month. The suggested numbers of each species planted in a 100 m² area of the Z. serrata community are five in the tree layer, five in the subtree layer and nine in the shrub layer, with average canopy areas of about 86 m² for the tree layer, 34 m² for the subtree layer and 34 m² for the shrub layer.
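The warmth index used in studies of this kind is Kira's index, which is consistent with the °C·month units in the abstract: the sum of (T − 5) over the months whose mean temperature exceeds 5 °C. A minimal sketch, assuming that definition:

```python
def warmth_index(monthly_temps):
    """Kira's warmth index (degrees C * month): sum of (T - 5) over the
    months whose mean temperature exceeds 5 degrees C."""
    return sum(t - 5.0 for t in monthly_temps if t > 5.0)
```

Computed at each of the 449 weather points, this value is what the study then spreads over the landscape with IDW interpolation to map the 80~100 and 100~120 °C·month zones.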

Gaussian Noise Reduction Algorithm using Self-similarity (자기 유사성을 이용한 가우시안 노이즈 제거 알고리즘)

  • Jeon, Young-Eun; Eom, Min-Young; Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.1-10 / 2007
  • Most natural images have a special property, so-called self-similarity, which is the basis of fractal image coding. Even though an image has local stationarity in several homogeneous regions, it is generally a non-stationary signal, especially in edge regions; this is the main reason linear techniques give poor results. To overcome this difficulty, we propose a non-linear technique that uses the self-similarity in the image. In our work, an image is classified into stationary and non-stationary regions with respect to the sample variance. In a stationary region, de-noising is performed by simply averaging the neighborhood. If the region is non-stationary, it is first stationarized: a set of center pixels is built by similarity matching with respect to the bMSE (block Mean Square Error), and de-noising is then performed by Gaussian-weighted averaging of the center pixels of the similar blocks, because this set of center pixels can be regarded as nearly stationary. The true image value is estimated by a weighted average of the elements of the set. The experimental results show that, as an estimator, our method has better performance and smaller variance than other methods.
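The scheme described for non-stationary regions is close in spirit to non-local means; a simplified per-pixel sketch follows. The block size, search window, weighting constant, and bMSE threshold are illustrative assumptions, and the caller must keep the search window inside the image bounds:

```python
import numpy as np

def denoise_pixel(img, y, x, block=5, search=10, sigma=10.0, max_bmse=100.0):
    """Estimate pixel (y, x) as a Gaussian-weighted average of the centre
    pixels of nearby blocks whose block-MSE against the reference block is
    below max_bmse -- a simplified reading of the paper's scheme."""
    r = block // 2
    ref = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    num = den = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            bmse = np.mean((ref - cand) ** 2)  # block Mean Square Error
            if bmse <= max_bmse:
                # more similar block => larger Gaussian weight
                w = np.exp(-bmse / (2 * sigma ** 2))
                num += w * float(img[cy, cx])
                den += w
    return num / den
```

The bMSE threshold is what "stationarizes" the data: only center pixels of blocks resembling the reference block enter the weighted average, so averaging across an edge is avoided.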

The Development of a Simple SHGC Calculation Method for an Exterior Venetian Blind Using Simulation (시뮬레이션을 이용한 외부 베네시안 블라인드의 약식 SHGC 계산법 개발)

  • Eom, Jae-Yong; Lee, Chung-Kook; Jang, Weol-Sang; Choi, Won-Ki
    • Journal of the Korean Solar Energy Society / v.35 no.2 / pp.73-83 / 2015
  • For buildings for business use, the cooling load during summertime is of great importance, which has markedly increased interest in the Solar Heat Gain Coefficient (SHGC). SHGC can be lowered through the color and functions of the glass itself, internal shading devices, insulation films and other means, but external shading devices, which block solar heat before it enters the building, are the most effective. Of the many external shading devices, this study analyzed the exterior Venetian blind. For vertical shading devices, previous research has calculated SHGC conveniently using the concept of the sky-opening ratio; for the Venetian blind, however, that correlation cannot be applied. In order to extract a valid correlation, this study therefore introduced a concept called the shape factor, based on the breadth and spacing of the slats, before carrying out the analysis; the concept yielded a closely matching correlation. The results of the analysis are summarized as follows. (1) For the SHGC depending on the surface reflectance of a slat, an average error of 2% is observed, which can be ignored in a simple calculation. (2) For the SHGC of each orientation, deviations of 4% or less were observed, so a single correlation formula suffices. (3) When only the shape factor is used in the correlation formula, a deviation of approximately 5% or less is to be expected. (4) Since slight differences between orientations were observed depending on the range of the shape factor, a weighting value was extracted for each orientation; the smaller the shape factor, the wider the range of the weighting value. The study suggests a follow-up to extract a simple calculation formula covering various slat inclination angles, the solar radiation conditions of each region (such as the ratio of diffuse to direct radiation), and seasonal features.

Performance Analysis of Automatic Target Recognition Using Simulated SAR Image (표적 SAR 시뮬레이션 영상을 이용한 식별 성능 분석)

  • Lee, Sumi; Lee, Yun-Kyung; Kim, Sang-Wan
    • Korean Journal of Remote Sensing / v.38 no.3 / pp.283-298 / 2022
  • As Synthetic Aperture Radar (SAR) images can be acquired regardless of weather and time of day, they are well suited for Automatic Target Recognition (ATR) in surveillance, reconnaissance, and national security. However, there are limitations of cost and operation in building the large and varied sets of target images a SAR-ATR system needs, so interest in ATR systems based on simulated SAR images of target models is increasing. Attributed Scattering Center (ASC) matching and template matching, the methods mainly used in SAR-ATR, are applied here to target classification. The ASC-matching method was developed with World View Vector (WVV) feature reconstruction and Weighted Bipartite Graph Matching (WBGM). Template matching was carried out by calculating the correlation coefficient between two simulated images reconstructed at points adjacent to each other. For the performance analysis of the two methods, the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, recently published by the U.S. Defense Advanced Research Projects Agency (DARPA), was used. Experiments were conducted under standard operating conditions, partial target occlusion, and random occlusion. The performance of ASC matching is generally superior to that of template matching: under the standard operating condition, the average recognition rate of ASC matching is 85.1% versus 74.4% for template matching, and ASC matching shows less performance variation across the 10 targets. ASC matching also performed about 10% better than template matching across the range of partial target occlusion, and even with 60% random occlusion its recognition rate was 73.4%.
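The template-matching baseline rests on the correlation coefficient between image chips; a minimal sketch of correlation-based classification follows. The function names and the dictionary-of-templates interface are my assumptions, not the paper's implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation (Pearson correlation coefficient)
    between two equal-sized image chips; 1.0 is a perfect linear match."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, templates):
    """Assign the query chip to the template class with the highest
    correlation coefficient. templates: dict of class label -> chip."""
    return max(templates, key=lambda k: ncc(query, templates[k]))
```

Because the correlation coefficient is computed over the whole chip, occluding part of the target degrades the score globally, which is consistent with template matching falling behind the feature-based ASC matching under occlusion in the reported results.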