• Title/Summary/Keyword: Finding error


Practical Understanding of Gross Examination Techniques (육안검사기술의 실무적 이해)

  • Woo-Hyun JI
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.56 no.1
    • /
    • pp.89-98
    • /
    • 2024
  • Gross examination techniques (GETs) of specimens collected from cancer surgery or endoscopy comprise recording visual information about the cancer for accurate histopathological diagnosis and collecting sections of the lesion to create microscopic specimens. GETs must include concise and accurate expressions, appropriate structuring, sufficient resections, error-free standardization of important information, and photo-diagramming of complex specimens. To improve the quality of pathological interpretation, gross examination must be performed accurately and carefully, with confidence built on a theoretical and practical basis and a sufficient understanding of the procedure. Based on the field experience of clinical pathologists with GETs, additional specimen types should be identified as viable candidates, and practitioners' needs and concerns regarding treatment should be carefully considered. In addition, departments at each institution should review the national focus on clinical partnerships, continuous professional training, diagnostic errors, and value-based healthcare provision.

The Measurement and Comparison of the Relative Efficiency for Currency Futures Markets : Advanced Currency versus Emerging Currency (통화선물시장의 상대적 효율성 측정과 비교 : 선진통화 대 신흥통화)

  • Kim, Tae-Hyuk;Eom, Cheol-Jun;Kang, Seok-Kyu
    • The Korean Journal of Financial Management
    • /
    • v.25 no.1
    • /
    • pp.1-22
    • /
    • 2008
  • This study evaluates the extent to which advanced-currency and emerging-currency futures markets can accurately predict the future spot rate. To this end, Johansen's maximum-likelihood cointegration method (1988, 1991) is adopted to test the unbiasedness and efficiency hypothesis. The study also estimates and compares a quantitative measure of relative efficiency, defined as the ratio of the forecast error variance from the best-fitting quasi-error-correction model to the forecast error variance of the futures price as a predictor of the spot price, between advanced-currency and emerging-currency futures markets. The advanced currencies are the British pound and the Japanese yen; the emerging currencies are the Korean won, the Mexican peso, and the Brazilian real. The empirical results are summarized as follows. First, the unbiasedness hypothesis is not rejected for the Korean won and Japanese yen futures exchange rates. This indicates that the emerging-currency won and the advanced-currency yen futures rates are likely to predict accurately the realized spot exchange rate at maturity without the trader having to pay a risk premium for the privilege of trading the contract. Second, within the emerging-currency markets, the unbiasedness hypothesis is not rejected for the Korean won futures market, unlike the Mexican peso and Brazilian real futures markets. This indicates that among emerging-currency futures markets, the won market is more efficient than the peso and real markets and is likely to predict accurately the realized spot rate at maturity without a risk premium. Third, these findings show that unbiasedness tests can yield conflicting results depending on the currency futures class and the forecast horizon. Fourth, from the best-fitting quasi-error-correction model with a forecast horizon of 14 days, the Japanese yen futures market is 27.06% efficient, the British pound futures market 26.87%, the Korean won futures market 20.77%, the Mexican peso futures market 11.55%, and the Brazilian real futures market 4.45%, in that order. This indicates that the won-dollar futures market is more efficient than the peso and real futures markets. It can therefore be concluded that the won-dollar currency futures market has relatively high efficiency compared with the Mexican peso and Brazilian real markets among emerging-currency futures markets.
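
The relative-efficiency measure described above, a ratio of forecast error variances, can be illustrated with a minimal sketch. The synthetic data and function name below are my own for illustration and are not the paper's actual estimation (which uses a quasi-error-correction model fitted to real exchange-rate series):

```python
import numpy as np

def relative_efficiency(spot, futures, model_forecast):
    """Relative efficiency as a forecast-error-variance ratio.

    spot:           realized spot prices at maturity
    futures:        futures prices used as naive predictors of the spot
    model_forecast: forecasts from the best-fitting benchmark model
    """
    var_model = np.var(spot - model_forecast)  # benchmark model's error variance
    var_futures = np.var(spot - futures)       # futures-as-predictor error variance
    return var_model / var_futures             # closer to 1.0 = more efficient market

# Synthetic example: a random-walk spot series with two predictors of it.
rng = np.random.default_rng(0)
spot = np.cumsum(rng.normal(0, 1, 500)) + 100
futures = spot + rng.normal(0, 2, 500)   # noisier predictor (less efficient market)
model = spot + rng.normal(0, 1, 500)     # tighter benchmark forecast

eff = relative_efficiency(spot, futures, model)
print(f"relative efficiency: {eff:.2f}")
```

A market whose futures prices forecast the spot nearly as well as the benchmark model would score close to 1; the percentages reported in the abstract are this kind of ratio expressed per market.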


STRUCTURAL MODEL OF CAUSES OF CONDUCT PROBLEM - RELATIONSHIP AMONG CONDUCT PROBLEMS, DEPRESSION, ANXIETY, FAMILY ENVIRONMENT, SELF-CONCEPT, AND TODDLER TEMPERAMENT - (행동문제 원인의 구조적 모델에 관한 연구 - 행동문제, 우울, 불안, 가정환경, 자기개념, 걸음마기 기질의 관계 -)

  • Cho, Soo-Churl;Shin, Min-Sup;Roh, Myoung-Sun
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.10 no.1
    • /
    • pp.3-14
    • /
    • 1999
  • Objective: This study was designed to investigate the difference between the executive function of an Attention-Deficit/Hyperactivity Disorder (ADHD) group and that of a neurotic group, and to investigate the developmental aspects of the ADHD group's executive function. Method: Executive function in the ADHD (N=87) and neurotic (N=19) groups was evaluated through performance on the Wisconsin Card Sorting Test. The results were analyzed by two-way ANOVA and t-test. Results: The results revealed group differences between the ADHD and neurotic groups in total correct responses, total error responses, nonperseverative errors, number of categories completed, and conceptual-level responses. There was no significant difference between the performance of the 8-12 age group and the 13-15 age group, but the 7-8 age group showed significantly poorer performance than the 8-12 age group in total responses, total error responses, perseverative responses, perseverative error responses, and nonperseverative error responses. Conclusions: Compared with the neurotic group, the children in the ADHD group appear to lack the ability to correct their responses according to external feedback and probably respond randomly without self-control. However, as there was no difference in perseverative errors and perseverative responses, this finding warrants cautious interpretation. The results also suggest that developmental aspects should be considered in studies of executive function, because performance differs by age.


An Analysis of Problem Posing in the 5th and 6th Grade Mathematics Textbooks and Errors in Problem Posing of 6th Graders (5, 6학년 수학교재의 문제만들기 내용 및 6학년 학생들의 문제만들기에서의 오류 분석)

  • Kim, Gyeong Tak;Ryu, Sung Rim
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.17 no.2
    • /
    • pp.321-350
    • /
    • 2013
  • The purpose of this study is to analyze problem posing in the 5th and 6th grade mathematics textbooks and to examine the errors 6th graders in elementary school make in problem-posing activities. To address the research problems, problem-posing content was extracted from the mathematics textbooks and practice books for the 5th and 6th grades under the 2007 revised national curriculum and analyzed by grade, domain, and type. Based on the analysis, 10 problem-posing questions were extracted and developed, then modified and supplemented through a pre-examination, producing a questionnaire in which problem-posing questions are evenly distributed across grade, domain, and type. The examination was conducted with 129 6th graders, and the types of error in their problem posing were analyzed from the collected data. The implications of the results are as follows. First, there was a large numerical difference in problem-posing questions between the 5th and 6th grades, and some domains and types were not properly represented because of the heavy concentration in particular grades, types, and domains. Textbooks developed in the future should therefore present more varied and systematic problem-posing teaching and learning activities for each domain and type. Second, the 'error resulting from the lack of information' occurred most often in the problems the 6th graders posed, followed by the 'error in the understanding of problems', 'technical errors', 'logical errors', and 'others'. This implies that a majority of students omitted conditions necessary for problem solving because they are accustomed to finding answers to given questions only. For this reason, there should be an environment in which students can pose problems by themselves, breaking from learning that only solves given problems.


The Impacts of Smoking Bans on Smoking in Korea (금연법 강화가 흡연에 미치는 영향)

  • Kim, Beomsoo;Kim, Ahram
    • KDI Journal of Economic Policy
    • /
    • v.31 no.2
    • /
    • pp.127-153
    • /
    • 2009
  • There is growing concern about the potentially harmful effects of second-hand, or environmental, tobacco smoke. As a result, workplace smoking bans have become more prevalent worldwide. In Korea, workplace smoking-ban policy became more restrictive in 2003, when the National Health Promotion Act was amended. The new law requires all office buildings larger than 3,000 square meters (multi-purpose buildings larger than 2,000 square meters) to be smoke-free, so many indoor offices became non-smoking areas. Previous studies in other countries often reached contradictory conclusions about the effects of workplace smoking bans on smoking behavior, and no study in Korea had yet examined the causal impact of smoking bans on smoking behavior; the situation in Korea might differ from that in other countries. Using the 2001 and 2005 Korea National Health and Nutrition Examination Surveys, which are representative of the Korean population, we examine the impact of the law change on current smoking and on cigarettes smoked per day. The amended law affected the whole country at the same time, and the smoking rate was already declining before the legislative update, so the challenge is to tease out the true impact alone. We compare indoor occupations, which are constrained by the law change, with outdoor occupations, which are less affected. Since the data were collected before (2001) and after (2005) the law change for the treated (indoor occupations) and control (outdoor occupations) groups, we use the difference-in-differences method. We restrict the sample to working ages (20 to 65), the population relevant to workplace smoking-ban policy. We further restrict it to indoor occupations (executive or administrative, and administrative support) and outdoor occupations (sales and low-skilled workers), dropping the unemployed and military workers, since it is unclear whether those occupations belong to the treated or control group. This classification was supported by the answers on workplace smoking-ban policy available only in the 2005 survey: sixty-eight percent of indoor occupations reported an office smoking ban, compared with forty percent of outdoor occupations. The estimated impact on current smoking is a 4.1 percentage-point decline, and cigarettes per day show a statistically significant decline of 2.5 cigarettes. Given average consumption of sixteen cigarettes per day among smokers, that is about a sixteen percent decline, which is substantial. We tested robustness using the same sample across the two surveys and using a tobit model; the results are robust to both concerns. Our measure of the treated and control groups may contain measurement error, which would cause attenuation bias; however, since we still find statistically significant impacts, they may be a lower bound on the true estimates. The magnitude of our finding is in line with previous work: earlier estimates ranged from 1.37 to 3.9 cigarettes per day and from 1 to 7.8 percentage points for current smoking.
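
The difference-in-differences estimator used above can be sketched in a few lines. The smoking rates below are hypothetical numbers chosen for illustration, not the survey's actual group means:

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group
    minus the change in the control group over the same period."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical smoking rates (%): indoor occupations (treated) vs. outdoor (control).
smoking = {
    ("indoor", 2001): 60.0, ("indoor", 2005): 52.0,
    ("outdoor", 2001): 58.0, ("outdoor", 2005): 54.1,
}
effect = diff_in_diff(smoking[("indoor", 2001)], smoking[("indoor", 2005)],
                      smoking[("outdoor", 2001)], smoking[("outdoor", 2005)])
print(f"DiD estimate: {effect:.1f} percentage points")
```

Subtracting the control group's change nets out the nationwide downward trend that was already underway before the 2003 amendment, which is exactly the identification problem the abstract describes.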


Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database and is one of the essential data mining tasks, widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, which makes it easy to find simple sequential patterns but limits the discovery of the more interesting sequential patterns used in real-world applications. One essential research topic that compensates for this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered to obtain more interesting sequential patterns. In recent years, data in various application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, memory usage should remain finitely bounded even as new elements are continuously generated, and newly generated elements should be processed as fast as possible so that up-to-date analysis results can be produced instantly on request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis results by allowing some error. Considering these changes in the form of real-world data, much research has been actively performed to find the various kinds of knowledge embedded in data streams, mainly focusing on efficient mining of frequent itemsets and sequential patterns, which have proven useful in conventional data mining over finite data sets. Mining algorithms have also been proposed to efficiently reflect the changes of data streams over time in their mining results. However, they have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel patterns that better express the characteristics of the target data streams. It can therefore be a valuable research topic in the field of mining data streams to define novel interesting patterns and to develop a mining method that finds them, for effective analysis of recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams using this approach. A gap-based weight of a sequential pattern can be computed from the gaps between data elements in the pattern, without any pre-defined weight information. That is, the gaps between data elements in each sequential pattern, as well as their generation orders, are used to obtain the pattern's weight, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
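
The core idea of deriving a pattern's weight from the gaps between its elements, with no pre-defined weight table, might be sketched as follows. The inverse-average-gap formula here is my own illustrative choice; the paper's exact weighting function may differ:

```python
def gap_based_weight(positions):
    """Weight a sequential pattern by the gaps between the positions
    (generation orders) at which its elements occur in the stream:
    elements occurring closer together yield a larger weight.
    """
    if len(positions) < 2:
        return 1.0  # a single-element pattern has no gaps
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return 1.0 / avg_gap  # inverse of the average gap, in (0, 1]

# A pattern matched at stream positions 3, 4, 5 (gap 1 between elements)
# outweighs the same pattern matched at positions 3, 7, 11 (gap 4).
tight = gap_based_weight([3, 4, 5])
loose = gap_based_weight([3, 7, 11])
print(tight, loose)
```

The appeal of this style of weighting in a stream setting is that it needs no external weight information: everything required is already observable from the element positions as they arrive.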

A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.188-198
    • /
    • 2005
  • The Goldschmidt iterative algorithm finds a floating point square root by performing a fixed number of multiplications. In this paper, a variable-latency Goldschmidt square root algorithm is proposed that performs a variable number of multiplications until the error becomes smaller than a given value. To find the square root of a floating point number $F$, the algorithm repeats the operations $R_i=\frac{3-e_r-X_i}{2}$, $X_{i+1}=X_i{\times}R_i^2$, $Y_{i+1}=Y_i{\times}R_i$, $i{\in}\{0,1,2,{\ldots},n-1\}$, with the initial values $X_0=Y_0=T^2{\times}F$, where $T=\frac{1}{\sqrt{F}}+e_t$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single-precision and 58 for double-precision floating point. Let $X_i=1{\pm}e_i$; then $X_{i+1}=1-e_{i+1}$, where $e_{i+1}<\frac{3e_i^2}{4}{\mp}\frac{e_i^3}{4}+4e_r$. If $|X_i-1|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable in floating point, so $\sqrt{F}$ is approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($T=\frac{1}{\sqrt{F}}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error falls below a given value, it can be used to improve the performance of a square root unit and to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
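
The variable-latency idea above can be sketched in software: iterate the Goldschmidt recurrence only until the convergence test is met, rather than a fixed number of times. This is a minimal illustration, not the paper's hardware design; the seed below is a deliberately imperfect 1/√F estimate standing in for the paper's table lookup, and the truncation of intermediate products is not modeled:

```python
import math

def goldschmidt_sqrt(F, p=28, max_iter=20):
    """Variable-latency Goldschmidt square root of F > 0.

    Repeats R = (3 - X) / 2, X *= R*R, Y *= R until |X - 1| falls
    below 2**((-p + 2) / 2), so the iteration count depends on F.
    Returns (approximate sqrt(F), iterations performed).
    """
    threshold = 2.0 ** ((-p + 2) / 2)
    T = (1.0 / math.sqrt(F)) * (1.0 + 2.0 ** -8)  # imperfect seed: 1/sqrt(F) + e_t
    X = Y = T * T * F                              # X_0 = Y_0 = T^2 * F, close to 1
    iters = 0
    while abs(X - 1.0) >= threshold and iters < max_iter:
        R = (3.0 - X) / 2.0
        X = X * R * R   # X converges quadratically toward 1
        Y = Y * R       # Y converges toward T * sqrt(F)
        iters += 1
    return Y / T, iters   # sqrt(F) ~= Y / T

root, n = goldschmidt_sqrt(2.0)
print(root, n)
```

A better seed (a larger table, in hardware terms) makes the loop exit sooner, which is exactly the table-size vs. average-multiplication-count trade-off the abstract evaluates.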

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Square Root Computation (가변 시간 뉴톤-랍손 부동소수점 역수 제곱근 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.5 s.95
    • /
    • pp.413-420
    • /
    • 2005
  • The Newton-Raphson iterative algorithm finds a floating point reciprocal square root by performing a fixed number of multiplications. In this paper, a variable-latency Newton-Raphson reciprocal square root algorithm is proposed that performs a variable number of multiplications until the error becomes smaller than a given value. To find the reciprocal square root of a floating point number $F$, the algorithm repeats the operation $X_{i+1}=\frac{X_i(3-e_r-FX_i^2)}{2}$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{\sqrt{F}}{\pm}e_0$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single-precision and 58 for double-precision floating point. Let $X_i=\frac{1}{\sqrt{F}}{\pm}e_i$; then $X_{i+1}=\frac{1}{\sqrt{F}}-e_{i+1}$, where $e_{i+1}<\frac{3\sqrt{F}e_i^2}{2}{\mp}\frac{Fe_i^3}{2}+2e_r$. If $\left|\frac{3-e_r-FX_i^2}{2}-1\right|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable in floating point, so $X_{i+1}$ approximates $\frac{1}{\sqrt{F}}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($X_0=\frac{1}{\sqrt{F}}{\pm}e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error falls below a given value, it can be used to improve the performance of a reciprocal square root unit and to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
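
The recurrence can be sketched in software with the same early-exit idea: stop once the correction factor $(3 - FX_i^2)/2$ is close enough to 1. As with the square-root sketch, the imperfect seed stands in for a table lookup and truncation of intermediates is not modeled:

```python
import math

def nr_rsqrt(F, p=28, max_iter=20):
    """Variable-latency Newton-Raphson reciprocal square root of F > 0.

    Repeats X = X * (3 - F * X*X) / 2 until the correction factor is
    within 2**((-p + 2) / 2) of 1, so the iteration count depends on F.
    Returns (approximate 1/sqrt(F), iterations performed).
    """
    threshold = 2.0 ** ((-p + 2) / 2)
    X = (1.0 / math.sqrt(F)) * (1.0 + 2.0 ** -8)  # imperfect seed: 1/sqrt(F) + e_0
    for i in range(max_iter):
        corr = (3.0 - F * X * X) / 2.0   # correction factor, converges to 1
        if abs(corr - 1.0) < threshold:
            return X, i                   # error already below the target bound
        X = X * corr
    return X, max_iter

rsq, n = nr_rsqrt(2.0)
print(rsq, n)
```

Each loop body costs three multiplications (F*X, that product times X, then X*corr), so exiting even one iteration early saves a meaningful fraction of the fixed-latency cost.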

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm finds a floating point reciprocal, which is widely used for floating point division, by performing a fixed number of multiplications. In this paper, a variable-latency Newton-Raphson reciprocal algorithm is proposed that performs a variable number of multiplications until the error becomes smaller than a given value. To find the reciprocal of a floating point number $F$, the algorithm repeats the operation $X_{i+1}=X_i{\times}(2-e_r-F{\times}X_i)$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{F}{\pm}e_0$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 27 for single-precision and 57 for double-precision floating point. Let $X_i=\frac{1}{F}{\pm}e_i$; then $X_{i+1}=\frac{1}{F}-e_{i+1}$, where $e_{i+1}$ shrinks quadratically with $e_i$ until it is less than the smallest number representable in floating point, so $X_{i+1}$ approximates $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0=\frac{1}{F}{\pm}e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error falls below a given value, it can be used to improve the performance of a reciprocal unit and to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
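
A minimal software sketch of this reciprocal iteration with the same variable-latency exit test, again with an imperfect seed standing in for the paper's table lookup and without modeling the truncation of intermediate products:

```python
def nr_reciprocal(F, p=27, max_iter=20):
    """Variable-latency Newton-Raphson reciprocal of F != 0.

    Repeats X = X * (2 - F * X) until the correction factor 2 - F*X
    is within 2**((-p + 2) / 2) of 1, so the iteration count depends
    on how good the seed is for this particular F.
    Returns (approximate 1/F, iterations performed).
    """
    threshold = 2.0 ** ((-p + 2) / 2)
    X = (1.0 / F) * (1.0 + 2.0 ** -8)  # imperfect seed: 1/F + e_0
    for i in range(max_iter):
        corr = 2.0 - F * X             # correction factor, converges to 1
        if abs(corr - 1.0) < threshold:
            return X, i                # error already below the target bound
        X = X * corr                   # quadratic convergence: e -> F * e^2
    return X, max_iter

recip, n = nr_reciprocal(3.0)
print(recip, n)
```

Each iteration costs two multiplications, so the average multiplication count over many inputs, which the paper measures against table size, directly determines the average latency of a division built on this unit.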

Using High Resolution Satellite Imagery for New Address System (도로명 및 건물번호 부여사업에서 고해상도 위성영상의 활용)

  • Bae, Sun-Hak;Kim, Chang-Hwan;Shin, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.6 no.4
    • /
    • pp.109-121
    • /
    • 2003
  • The point of this research is the use of high resolution satellite imagery in local governments' new address system projects, including support for field investigation and the detection of base map errors. Most local governments use 1/1,000- and 1/5,000-scale digital maps as the base map for field investigation, but field investigators' insufficient local knowledge and the base maps' lack of currency make the work difficult from the start of the project. To solve this problem, this research proposes using high resolution satellite imagery in the new address system together with cadastral data from the digital base map. Until recently, satellite imagery was unsuitable for this purpose because of its low resolution, but this problem was solved with 1 m spatial resolution imagery, which is being applied ever more widely, and vector and raster data are now integrated to complement each other's weak points. The use of high resolution satellite imagery in the new address system is expected to improve the quality of the results and reduce expenses; in addition, the imagery can serve as fundamental data for local governments.
