• Title/Summary/Keyword: computer based estimation

Search results: 1,366

Development of a Low-cost Fruit Classification System Based on Digital Images (디지털영상 기반 저비용 선과시스템 개발)

  • Koo, Min-Jeong;Hwang, Dong-Kuk;Lee, Woo-Ram;Kim, Jae-Hong;Seo, Jeong-Man
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.155-162 / 2008
  • Fruit quality is measured by many parameters. Graders that sort fruit by size typically use the rotating-drum method, which damages the fruit during classification, and optical graders that estimate sugar content incur high cost. The proposed system uses three cameras to capture the characteristics of the fruit. Because classification relies on information such as volume and degree of maturity, sugar content itself cannot be estimated, but attributes such as color and surface damage can be. Since high-resolution digital images are therefore unnecessary, the fruit grading system can be built at low cost. To evaluate the performance of the proposed system, we compared it with visual inspection when classifying sample fruit; the results show an accuracy of 96.7%.

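The grading decision described above (classify by volume, maturity, and surface damage rather than sugar content) can be sketched as a simple rule. The thresholds and grade names below are illustrative assumptions, not values from the paper:

```python
def grade_fruit(volume_cm3, ripeness, damage_ratio):
    """Toy grading rule over the three cues the paper's cameras extract:
    volume, color/maturity (0..1), and surface damage ratio (0..1).
    All thresholds here are illustrative assumptions, not the paper's."""
    if damage_ratio > 0.10:        # visibly damaged fruit is rejected
        return "reject"
    if volume_cm3 >= 300 and ripeness >= 0.8:
        return "premium"
    if volume_cm3 >= 150:
        return "standard"
    return "small"
```

A real system would derive `volume_cm3` and the two ratios from the three camera views before applying such a rule.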

A Study on the Scale Effort Estimation Model based on Industry Characteristics (산업별 특성에 따른 소요공수 규모 산정 모델 연구)

  • Kwoak, Song-Hae;Park, Koo-Rack;Kim, Dong-Hyun
    • Journal of Digital Convergence / v.14 no.11 / pp.233-240 / 2016
  • Information system development projects incur many costs arising from a variety of risk factors, and in general the probability that such a software project is completed successfully by its delivery date is very low. Formal cost prediction is therefore the most important factor, since it can prevent project failure. However, most project sizing lacks objective calculation criteria, and in practice the baseline is not properly managed during the project. This paper proposes a model that calculates the development effort on the basis of a methodology, in an attempt to overcome the impracticality of estimation in the early stages of an information system development project. The proposed convergence model is expected to serve as a tool for estimating the effort and cost required by information system development projects.

Hierarchical Fast Mode Decision Algorithm for Intra Prediction in HEVC (HEVC 화면 내 예측을 위한 계층적 고속 모드 결정 알고리즘)

  • Kim, Tae Sun;Sunwoo, Myung Hoon
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.57-61 / 2015
  • This paper proposes a fast intra-prediction algorithm for High Efficiency Video Coding (HEVC). HEVC has 35 intra-prediction modes: DC mode, Planar mode, and 33 angular modes. To reduce complexity and speed up the intra-prediction decision, this paper proposes a hierarchical mode decision (HMD) method that focuses on reducing the number of candidate prediction modes. Experimental results show that the proposed HMD reduces encoding time by about 39.17% with little BDBR loss. On average, it achieves encoding time savings of about 14.13~19.37% compared with existing algorithms, with only a slight BDBR increase of 0.01~0.42%.
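A hierarchical (coarse-to-fine) mode decision over HEVC's 35 intra modes can be sketched as follows. This is only an illustration of the general idea of pruning the candidate set, not the paper's HMD algorithm; `cost` stands for any rate-distortion cost function supplied by the encoder:

```python
def hierarchical_mode_decision(cost, step=4):
    """Coarse-to-fine search over HEVC's 35 intra modes
    (0 = Planar, 1 = DC, 2..34 = angular).
    `cost(mode)` is an arbitrary rate-distortion cost function."""
    # Stage 1: always test Planar and DC, plus every `step`-th angular mode.
    candidates = [0, 1] + list(range(2, 35, step))
    best = min(candidates, key=cost)
    # Stage 2: repeatedly halve the step and refine around the best angular mode.
    while step > 1 and best >= 2:
        step //= 2
        around = [m for m in (best - step, best + step) if 2 <= m <= 34]
        best = min([best] + around, key=cost)
    return best
```

With a toy cost such as `lambda m: abs(m - 27)`, the search converges to mode 27 while evaluating far fewer than all 35 modes, which is the source of the encoding-time savings the paper reports.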

Neural Relighting using Specular Highlight Map (반사 하이라이트 맵을 이용한 뉴럴 재조명)

  • Lee, Yeonkyeong;Go, Hyunsung;Lee, Jinwoo;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.87-97 / 2020
  • In this paper, we propose a novel neural relighting method that infers a relit rendering image from a user-guided specular highlight map. The proposed network uses as its backbone a pre-trained neural renderer learned from rendered images of a 3D scene under various lighting conditions. We jointly optimize a 3D light position and its associated relit image by back-propagation, so that the difference between the base image and the relit image matches the user-guided specular highlight map. The proposed method has the advantage of explicitly inferring the 3D light position while providing the 2D screen-space interface that artists prefer. The performance of the proposed network was measured under conditions for which ground truths can be established; the average error of the light position estimates is 0.11 with respect to the normalized 3D scene size.

Optimization of Gaussian Mixture in CDHMM Training for Improved Speech Recognition

  • Lee, Seo-Gu;Kim, Sung-Gil;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences / v.5 no.1 / pp.7-21 / 1999
  • This paper proposes an improved training procedure for speech recognition based on the continuous-density Hidden Markov Model (CDHMM). Of the three parameters governing the CDHMM (initial state distribution probability, state transition probability, and the output probability density function (p.d.f.) of each state), we focus on the third and propose an efficient algorithm that determines the p.d.f. of each state. The resulting CDHMM is known to converge to a local maximum of the parameter estimate via the iterative Expectation-Maximization procedure. Specifically, we propose two independent algorithms that can be embedded in the segmental K-means training procedure by replacing its relevant key steps: adaptation of the number of Gaussian mixture p.d.f.s, and initialization using previously estimated CDHMM parameters. The proposed adaptation algorithm searches for the optimal number of Gaussian mixture components so that the p.d.f. is consistently re-estimated, enabling the model to converge toward the global maximum. The optimized number of Gaussian mixture branches is determined by applying an appropriate threshold to the amount of collective change in the weighted variances. The initialization algorithm exploits the previously estimated CDHMM parameters as the basis for the current initial segmentation subroutine; it captures the trend of the previous training history, which uniform segmentation discards. The recognition performance of the proposed adaptation procedure, together with the suggested initialization, is verified to be consistently better than that of the existing training procedure using a fixed number of Gaussian mixture p.d.f.s.


The Comparative Study of Software Optimal Release Time Based on Log-Logistic Distribution (Log-Logistic 분포 모형에 근거한 소프트웨어 최적방출시기에 관한 비교연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.1-9 / 2008
  • This paper studies the decision problem of optimal release policies: when to stop testing a software system in the development phase and transfer it to the user. Because correcting or modifying the software may introduce new faults, infinite-failure non-homogeneous Poisson process models are presented, and optimal release policies are proposed for a life distribution based on the log-logistic distribution, which can capture the increasing/decreasing nature of the failure occurrence rate per fault. We discuss optimal software release policies that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, after applying a trend test and estimating the parameters from inter-failure time data by maximum likelihood estimation, the optimal software release time is estimated.

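The cost-minimizing release-time idea can be sketched numerically. The sketch below uses a log-logistic mean value function m(t) = a·(λt)^κ / (1 + (λt)^κ) and a common textbook cost structure (cheaper fault fixes during test than after release, plus a per-unit testing-time cost); all parameter values are illustrative assumptions, not estimates from the paper's failure data:

```python
def mean_failures(t, a=100.0, lam=0.05, kappa=2.0):
    """Log-logistic mean value function m(t) = a*(lam*t)**kappa / (1 + (lam*t)**kappa).
    The parameters a, lam, kappa are illustrative assumptions."""
    x = (lam * t) ** kappa
    return a * x / (1.0 + x)

def total_cost(T, c_test=1.0, c_op=5.0, c_time=0.5, horizon=1000.0):
    """Assumed cost structure: fixing a fault during test costs c_test,
    fixing one found in operation (up to `horizon`) costs c_op,
    and each unit of testing time costs c_time."""
    m = mean_failures
    return c_test * m(T) + c_op * (m(horizon) - m(T)) + c_time * T

def optimal_release_time(grid_max=200.0, steps=2000):
    """Grid search for the release time T minimizing the total cost."""
    ts = [grid_max * i / steps for i in range(1, steps + 1)]
    return min(ts, key=total_cost)
```

With these toy parameters the minimum falls at an interior release time (around T ≈ 83): releasing too early leaves many expensive operational faults, while testing too long accumulates testing-time cost.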

Estimation of Camera Motion Parameter using Invariant Feature Models (불변 특징모델을 이용한 카메라 동작인수 측정)

  • Cha, Jeong-Hee;Lee, Keun-Soo
    • Journal of the Korea Society of Computer and Information / v.10 no.4 s.36 / pp.191-201 / 2005
  • In this paper, we propose a method for calculating camera motion parameters based on efficient invariant features that are independent of the camera viewpoint. Because the features used in previous research vary with the camera viewpoint, their information content increases and accurate feature extraction is difficult. The LM (Levenberg-Marquardt) method converges exactly to the goal value for the camera extrinsic parameters, but it has the drawback of being slow, since its minimization proceeds in small steps. We therefore propose a method for extracting features invariant to the camera viewpoint, and a two-stage method for calculating the camera motion parameters that improves accuracy and convergence by using the motion parameters obtained from a 2D homography as the initial value for the LM method. The proposed method consists of a feature extraction stage, a matching stage, and a motion parameter calculation stage. In the experiments, we compare and analyze the proposed method against existing methods on various indoor images to demonstrate its superiority.


A Quality Management Effort Estimation Model Based on a Defect Filtering Concept (결점 필터링 개념 기반 품질관리 노력 추정 모델)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information / v.17 no.6 / pp.101-109 / 2012
  • To develop high-quality software, a quality control plan is required for correcting the faults latent within the software, and this in turn requires a proper description of the fault correction profile. The tank-and-pipe model performs complex computations to calculate the faults that are removed and those that escape: one must know in which phase each fault was inserted, removed, or escaped, and the fault detection rate at each phase. To simplify this complex process, this paper presents a model based on a fault filtering concept. The presented model has the advantage of describing faults more concisely, because it need not track the phase in which an escaped fault was originally inserted. We also present an effort estimation model, expressed as a function of fault removal quality and productivity measures, for the effort required for fault detection.
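The filtering concept described above can be sketched as a pipeline in which each phase's filter removes a fraction of all faults currently present, regardless of where they were inserted. The injection counts and detection rates below are illustrative assumptions, not the paper's data:

```python
def filter_defects(injected, detection_rates):
    """Defect-filtering sketch: phase i injects `injected[i]` faults, and its
    filter removes the fraction `detection_rates[i]` of all faults present
    (escapes carried over from earlier phases plus the newly injected ones).
    Returns the per-phase removed counts and the faults escaping the last phase."""
    escaped, removed = 0.0, []
    for inj, rate in zip(injected, detection_rates):
        present = escaped + inj          # no need to track insertion phase
        caught = present * rate
        removed.append(caught)
        escaped = present - caught
    return removed, escaped
```

For example, `filter_defects([100, 50, 20], [0.6, 0.7, 0.8])` removes 60, 63, and 37.6 faults in the three phases and lets 9.4 escape to the field. The simplification relative to the tank-and-pipe model is visible in the code: only the running `escaped` total is carried between phases.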

The Comparative Study of Software Optimal Release Time Based on Gamma Exponential and Non-exponential Family Distribution Model (지수 및 비지수족 분포 모형에 근거한 소프트웨어 최적방출시기에 관한 비교 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Journal of the Korea Society of Computer and Information / v.15 no.5 / pp.125-132 / 2010
  • The decision problem of optimal release policies, i.e., when to stop testing a software system in the development phase and transfer it to the user, is studied. The release time model exploits an infinite-failure non-homogeneous Poisson process, which reflects the possibility of introducing new faults when correcting or modifying the software. The failure life-cycle distribution uses exponential and non-exponential families with various intensities. Software release policies that minimize the total average software cost of development and maintenance, under the constraint of satisfying a software reliability requirement, thus become the optimal release policies. In a numerical example, after applying a trend test and estimating the parameters from inter-failure time data by maximum likelihood estimation, the optimal software release time is estimated.

A Study on the Error Estimate for Wegmann's Method Applying a Low-Frequency Pass Filter (저주파필터를 적용한 Wegmann방법의 오차평가에 관한 연구)

  • Song Eun-Jee
    • The KIPS Transactions:PartA / v.12A no.2 s.92 / pp.103-108 / 2005
  • The purpose of numerical analysis is to design effective algorithms that realize mathematical models on a computer. In general, the approximate value obtained from computer operations differs from the real value given by mathematical theory. Therefore the error estimate, which measures how close the approximate value is to the real value, is the most significant task in evaluating the efficiency of an algorithm. An error bound is used for error estimation in most cases, but exact error evaluation cannot be expected, since there is no way to know the real value of the given problem. Wegmann's method, one of the solutions for deriving numerical conformal mappings, has been studied, and we previously proposed an improved method that accelerates convergence by applying a low-frequency pass filter to Wegmann's method. In this paper, we investigate the error analysis on a mathematical basis and propose an effective method that enables error estimation even when the real value cannot be acquired. The proposed method is also validated by numerical experiment.