Title/Summary/Keyword: Error level

Search results: 2,511

Estimation of Instantaneous Sea Level Using SAR Interferometry

  • Kim, Sang-Wan;Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.18 no.5 / pp.255-261 / 2002
  • Strong and coherent radar backscattering signals are observed over oyster sea farms that consist of artificial structures installed on the seabed. We successfully obtained 21 coherent interferograms from 11 JERS-1 SAR data sets, even though the orbital baselines (up to 2 km) and temporal baselines (up to 1 year) were relatively large. The coherent phases preserved over the sea farms are probably formed by double bouncing between the sea surface and the farming structures, and consequently they are correlated with tide height (or instantaneous sea level). Phase unwrapping is required to restore the absolute sea level. We show that radar backscattering intensity is roughly correlated with sea surface height, and we use this fact to determine the wrapping counts. While the SAR image intensity gives a rough range of the absolute sea level, the interferometric phases provide the detailed relative height variations within one 2π cycle (or 15.3 cm) with respect to the sea level at the moment of the master data acquisition. A combined estimation yields the instantaneous sea level. The radar measurements were verified against tide gauge records, yielding a correlation coefficient of 0.96 and an r.m.s. error of 6.0 cm. The results demonstrate that radar interferometry is a promising approach to sea level measurement in near-coastal regions. (A numerical sketch of this combined estimate follows below.)
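
Below is a minimal numerical sketch of the combined estimate described above: a coarse, intensity-derived height fixes the integer wrap count, and the wrapped interferometric phase supplies the fine height. Only the 15.3 cm height-per-cycle figure comes from the abstract; the function name and example values are illustrative assumptions.

    import numpy as np

    # Sea-level change corresponding to one 2*pi phase cycle (from the abstract)
    HEIGHT_PER_CYCLE = 0.153  # metres

    def estimate_sea_level(wrapped_phase, coarse_height, master_height=0.0):
        """Combine a wrapped interferometric phase (radians) with a coarse,
        intensity-derived height (metres) to recover absolute sea level."""
        # Relative height encoded by the phase, ambiguous modulo 15.3 cm
        fractional = (wrapped_phase / (2.0 * np.pi)) * HEIGHT_PER_CYCLE
        # Integer wrap count that brings the result closest to the coarse estimate
        n = np.round((coarse_height - master_height - fractional) / HEIGHT_PER_CYCLE)
        return master_height + n * HEIGHT_PER_CYCLE + fractional

    # A quarter-cycle phase with a coarse intensity-based estimate of ~0.80 m
    print(estimate_sea_level(np.pi / 2.0, 0.80))  # -> 0.803 m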

An analysis of errors in problem solving in the function unit in the first grade of high school (고등학교 1학년 함수단원 문제해결에서의 오류에 대한 분석)

  • Mun, Hye-Young;Kim, Yung-Hwan
    • Journal of the Korean School Mathematics Society / v.14 no.3 / pp.277-293 / 2011
  • The purpose of mathematics education is to develop the ability to transform various problems in general situations into mathematical problems and then solve them mathematically. Various teaching-learning methods for improving mathematical problem-solving ability can be tried, but it is necessary to choose an appropriate method after figuring out students' level of understanding and their problem-solving strategies. Error analysis is helpful for mathematics learning: it provides teachers with more efficient teaching strategies, and it lets students know the causes of their failures and find a correct approach. The following research tasks were set up and analyzed. First, an error classification scheme was established. Second, errors in the solving process of function problems were analyzed according to that scheme. For this study, a survey was administered to 90 first-grade students of ◯◯ High School in Chung-nam, who were asked to solve 8 problems on functions. The following error categories were set up by referring to preceding studies on errors and to the error patterns shown in the survey: (1) misused data, (2) misinterpreted language, (3) logically invalid inference, (4) distorted theorem or definition, (5) unverified solution, (6) technical errors, and (7) discontinuance of the solving process. The analysis according to this classification gave the following results. First, students do not understand the concept of a function completely, and even when they do, they lack the ability to apply it. Second, students make many mistakes when they translate a mathematics problem into other representations such as equations, symbols, graphs, and figures. Third, students misuse or ignore the data given in the problem. Fourth, students often give up or never attempt the solving process. Further research on error analysis is needed because it provides useful information for the teaching-learning process.

A Study of Six Sigma and Total Allowable Error in the Clinical Chemistry Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science / v.37 no.2 / pp.65-70 / 2005
  • The CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma quality management. Six Sigma analysis determines performance quality from bias and precision statistics, and shows whether a method meets the criteria for six sigma performance. Performance standards calculate allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, or about 3.4 failures per million opportunities. The sigma quality level is an indicator of process centering and process variation relative to the total allowable error. The tolerance specification is replaced by a total error specification, a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa, so there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma quality management sets a precision goal of TEa/6 and an accuracy goal of 1.5(TEa/6). This concept is based on the proficiency testing specification of target value ± 3s, on TEa derived from reference intervals and biological variation, and on peer group median CV surveys; rules can also be formulated to calculate TEa as a fraction of a reference interval. We studied the development of total allowable error from peer group survey results and the US CLIA '88 rules for 19 clinical chemistry items: TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus (IP), UA, and TG. The sigma level versus TEa for each item, computed from the peer group median CV and assessed for fit within the six sigma tolerance limits, was: TP (6.1σ / 9.3%), ALB (6.9σ / 11.3%), T.B (3.4σ / 25.6%), ALP (6.8σ / 31.5%), AST (4.5σ / 16.8%), ALT (1.6σ / 19.3%), CL (4.6σ / 8.4%), LD (11.5σ / 20.07%), K (2.5σ / 0.39 mmol/L), Na (3.6σ / 6.87 mmol/L), CRE (9.9σ / 21.8%), BUN (4.3σ / 13.3%), UA (5.9σ / 11.5%), T.C (2.2σ / 10.7%), GLU (4.8σ / 10.2%), GGT (7.5σ / 27.3%), CA (5.5σ / 0.87 mg/dL), IP (8.5σ / 13.17%), and TG (9.6σ / 17.7%). Peer group median CVs in the Korean external assessment that exceeded the CLIA criteria were CL (8.45% vs 5%), BUN (13.3% vs 9%), CRE (21.8% vs 15%), T.B (25.6% vs 20%), and Na (6.87 mmol/L vs 4 mmol/L); those below the CLIA criteria were TP (9.3% vs 10%), AST (16.8% vs 20%), ALT (19.3% vs 20%), K (0.39 mmol/L vs 0.5 mmol/L), UA (11.5% vs 17%), CA (0.87 mg/dL vs 1 mg/dL), and TG (17.7% vs 25%). Of the 17 items compared, 14 (82.35%) had the same TEa. We found that the sigma level rises as the total allowable error is enlarged, and concluded that the goal set for total allowable error affects the evaluation of the sigma metrics of a process even when the process itself is unchanged. (The sigma-metric calculation behind these figures is sketched below.)
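
The sigma levels quoted above follow from the standard laboratory sigma-metric formula, sigma = (TEa − |bias|) / CV. The sketch below shows that calculation; the example numbers are illustrative, not taken from the paper.

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma level of an assay from its total allowable error (TEa),
        bias, and imprecision (CV), all expressed in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Example: TEa = 10%, bias = 1.5%, CV = 2%  ->  (10 - 1.5) / 2 = 4.25 sigma
    print(sigma_metric(10.0, 1.5, 2.0))

The formula also makes the paper's concluding point concrete: enlarging TEa raises the computed sigma level even when the bias and CV of the process are unchanged.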

Using Utterance and Semantic Level Confidence for Interactive Spoken Dialog Clarification

  • Jung, Sang-Keun;Lee, Cheong-Jae;Lee, Gary Geunbae
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.1-25 / 2008
  • Spoken dialog tasks incur many errors, including speech recognition errors, understanding errors, and even dialog management errors. These errors create a large gap between the user's intention and the system's understanding, which eventually results in misinterpretation. To fill this gap, people in human-to-human dialogs try to clarify the major causes of misunderstanding and selectively correct them. This paper brings such clarification techniques to human-to-machine spoken dialog systems. We view the clarification dialog as a two-step problem: belief confirmation and clarification strategy establishment. To confirm the belief, we organized the clarification process into three systematic phases. In the belief confirmation phase, we consider the dialog system's overall processes, including speech recognition, language understanding, and semantic slot-value pairs, for clarification dialog management. A clarification expert was developed for establishing the clarification dialog strategy. In addition, we propose a new design for plugging a clarification dialog module into a given expert-based dialog system. The experimental results demonstrate that the error verifiers effectively catch word- and utterance-level semantic errors and that the clarification experts actually increase the dialog success rate and dialog efficiency. (A minimal sketch of confidence-driven clarification follows below.)
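
The sketch below illustrates the general idea of confidence-driven clarification: accept a hypothesis when both utterance- and slot-level confidences are high, reprompt when recognition itself is doubtful, and otherwise clarify the least certain slot. The thresholds, action names, and slots are illustrative assumptions, not the paper's actual three-phase design.

    ACCEPT_THRESHOLD = 0.8   # assumed values; tuned per system in practice
    REJECT_THRESHOLD = 0.3

    def clarification_action(utterance_conf, slot_confs):
        """Pick a dialog action from utterance- and slot-level confidences."""
        if utterance_conf < REJECT_THRESHOLD:
            return ("reprompt", None)            # likely a speech recognition error
        weakest = min(slot_confs, key=slot_confs.get)
        if utterance_conf >= ACCEPT_THRESHOLD and slot_confs[weakest] >= ACCEPT_THRESHOLD:
            return ("accept", None)              # trust the hypothesis as-is
        return ("clarify", weakest)              # confirm the least certain slot

    # Confident utterance overall, but the 'destination' slot is doubtful
    print(clarification_action(0.9, {"destination": 0.4, "time": 0.95}))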

Parallel LDPC Decoding on a Heterogeneous Platform using OpenCL

  • Hong, Jung-Hyun;Park, Joo-Yul;Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2648-2668 / 2016
  • Modern mobile devices are equipped with various accelerated processing units to handle computationally intensive applications; therefore, the Open Computing Language (OpenCL) has been proposed to take full advantage of the computational power in heterogeneous systems. This article introduces a parallel software decoder of Low Density Parity Check (LDPC) codes on an embedded heterogeneous platform using an OpenCL framework. LDPC codes are among the most popular and most powerful error-correcting codes for mobile communication systems. Each step of LDPC decoding has different parallelization characteristics. In the proposed LDPC decoder, steps suitable for task-level parallelization are executed on the multi-core central processing unit (CPU), and steps suitable for data-level parallelization are processed by the graphics processing unit (GPU). To improve the performance of the OpenCL kernels for the LDPC decoding operations, explicit thread scheduling, vectorization, and effective data transfer techniques are applied. The proposed LDPC decoder achieves high performance and high power efficiency by using heterogeneous multi-core processors under a unified computing framework. (A toy bit-flipping decoder illustrating the data-parallel structure of LDPC decoding follows below.)
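
To illustrate why LDPC decoding maps well onto data-parallel hardware, here is a toy hard-decision bit-flipping decoder in NumPy: the syndrome of all parity checks and the per-bit failure counts are each computed in one bulk operation, which is exactly the kind of step that suits GPU work-items. The parity-check matrix and received word are small illustrative examples; the paper's own OpenCL decoder is not reproduced here.

    import numpy as np

    # A small, illustrative parity-check matrix (4 checks, 6 bits)
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1]], dtype=np.uint8)

    def decode_bit_flip(y, max_iters=20):
        x = y.copy()
        for _ in range(max_iters):
            syndrome = H @ x % 2          # evaluate every parity check at once
            if not syndrome.any():
                break                     # all checks satisfied: codeword found
            fails = H.T @ syndrome        # failed-check count for every bit
            x[fails == fails.max()] ^= 1  # flip the most suspicious bits
        return x

    received = np.array([1, 0, 1, 0, 1, 1], dtype=np.uint8)
    print(decode_bit_flip(received))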

A Risk Analysis on the Error Code of Vehicle Inspection Utilizing Portfolio Analysis (Portfolio 분석을 활용한 자동차 검사의 부적합항목에 대한 위험도분석)

  • Choi, Kyung-Im;Kim, Tae-Ho;Lee, Soo-Il
    • Journal of the Korean Society of Safety / v.27 no.4 / pp.121-127 / 2012
  • The vehicle inspection system examines the condition of vehicles regularly, at the national level, to protect the lives and property of the public from traffic accidents caused by vehicle faults. However, the inspection method, criteria, period, and effectiveness have become controversial, because drivers also manage the safety of their vehicles themselves, regardless of the regular inspection. The aim of this study is therefore to investigate the timeliness of vehicle inspection and the risk level of inspection items through a basic statistical survey and a portfolio analysis. The results of the analysis are: (1) inspection failure rates tend to increase between model years 3 and 6; (2) failures of safety-related inspection items have a strong impact on the traffic accident rate in terms of accident risk; and (3) according to the portfolio analysis, the faulty items located in the first quadrant are the riding device, driveline system, controlling device, steering actuator, and fuel system. (A sketch of the quadrant classification follows below.)
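
A minimal sketch of the quadrant classification behind a portfolio analysis is shown below: each inspection item is placed on two axes (failure rate and accident-risk impact) and assigned a quadrant by comparing it with the mean of each axis. The item names and numbers are illustrative assumptions, not the paper's data.

    # (failure rate %, accident-risk impact) per inspection item -- made-up values
    items = {
        "steering actuator": (4.2, 0.8),
        "fuel system":       (3.9, 0.7),
        "lighting":          (5.1, 0.2),
        "exhaust emission":  (1.2, 0.1),
    }

    mean_fail = sum(f for f, _ in items.values()) / len(items)
    mean_risk = sum(r for _, r in items.values()) / len(items)

    # Quadrant 1 = high failure rate AND high risk (the critical items)
    QUADRANT = {(True, True): 1, (False, True): 2,
                (False, False): 3, (True, False): 4}

    for name, (fail, risk) in items.items():
        q = QUADRANT[(fail >= mean_fail, risk >= mean_risk)]
        print(f"{name}: quadrant {q}")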

Orbit Determination of KOMPSAT-1 and Cryosat-2 Satellites Using Optical Wide-field Patrol Network (OWL-Net) Data with Batch Least Squares Filter

  • Lee, Eunji;Park, Sang-Young;Shin, Bumjoon;Cho, Sungki;Choi, Eun-Jung;Jo, Junghyun;Park, Jang-Hyun
    • Journal of Astronomy and Space Sciences / v.34 no.1 / pp.19-30 / 2017
  • The optical wide-field patrol network (OWL-Net) is a Korean optical surveillance system that tracks and monitors domestic satellites. In this study, a batch least squares algorithm was developed for optical measurements and verified by Monte Carlo simulation and covariance analysis. Potential error sources of OWL-Net, such as noise, bias, and clock errors, were analyzed. There is a linear relation between the estimation accuracy and the noise level, and the accuracy depends significantly on the declination bias. In addition, the time-tagging error significantly degrades the observation accuracy, while the time-synchronization offset corresponds to the orbital motion. The Cartesian state vector and measurement bias were determined using the OWL-Net tracking data of the KOMPSAT-1 and Cryosat-2 satellites. A comparison with known orbital information based on two-line elements (TLE) and the consolidated prediction format (CPF) shows that the orbit determination accuracy is similar to that of TLE. Furthermore, the precision and accuracy of the OWL-Net observation data were determined to be tens of arcseconds and sub-degree level, respectively. (One batch least-squares correction step is sketched below.)
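
The core of a batch least squares estimator is the weighted normal-equation correction dx = (HᵀWH)⁻¹HᵀW·dy, iterated until the state converges. The sketch below shows one such step; the matrices are random stand-ins for the real measurement partials and observed-minus-computed residuals.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 12, 6                     # 12 scalar measurements, 6 orbital state elements
    H = rng.normal(size=(m, n))      # partials of measurements w.r.t. the state
    dy = rng.normal(size=m) * 1e-3   # observed-minus-computed residuals
    W = np.eye(m) / (5e-4) ** 2      # weights from the measurement noise sigma

    # One normal-equation solve per iteration of the batch filter
    dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ dy)
    covariance = np.linalg.inv(H.T @ W @ H)   # formal covariance of the estimate
    print(dx)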

Information Technology System-on-Chip (정보기술 시스템온칩)

  • Park, Chun-Myoung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.769-770 / 2011
  • This paper presents a method for constructing an ITSoC (Information Technology System-on-Chip). To implement an ITSoC, designers are increasingly relying on the reuse of intellectual property (IP) blocks. Since IP blocks are pre-designed and pre-verified, the designer can concentrate on the complete system without having to worry about the correctness or performance of the individual components. Also, access mechanisms are required to test the embedded cores of an ITSoC at the system level. That is the goal, in theory. In practice, assembling an ITSoC from IP blocks is still an error-prone, labor-intensive, and time-consuming process. This paper discusses the main challenges in ITSoC design using IP blocks and elaborates on the methodology and tools being put in place to address them. It explains the ITSoC architecture and gives algorithmic details on the high-level tools being developed for ITSoC design.

A Study on the Evaluation Method of ACC Test Using Monocular Camera (단안카메라를 활용한 ACC 시험평가 방법에 관한 연구)

  • Kim, Bong-Ju;Lee, Seon-Bong
    • Journal of Auto-vehicle Safety Association / v.12 no.3 / pp.43-51 / 2020
  • Currently, the second of the six levels of self-driving technology defined by SAE is commercialized, and the third level is being prepared for commercialization. ACC is intended to help prevent accidents and minimize driver fatigue through longitudinal speed control and relative distance control of the vehicle. In this regard, to study safety evaluation methods for ACC in a practical environment, a distance measurement method using a monocular camera and data acquisition equipment such as DGPS were utilized. Based on evaluation scenarios that consider the domestic road environment, proposed in a preceding study, safety was assessed by comparing the relative distance obtained from equipment such as DGPS with the relative distance obtained from the monocular camera in actual tests. The comparison by scenario showed a minimum error rate of 3.83% in scenario 1 and a maximum of 14.61% in scenario 6. The causes of the maximum error are inaccurate lane recognition in the camera image and irregular driving conditions, such as surrounding vehicles cutting in or pulling out. It is expected that safety evaluation using a monocular camera will become possible for other ADAS systems in the future. (A sketch of the distance estimation and error-rate comparison follows below.)
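
A minimal sketch of monocular distance estimation with the pinhole model, d = f·W/w, and of the error-rate comparison against a DGPS reference is given below. The focal length, vehicle width, and measured values are illustrative assumptions, not the paper's calibration or data.

    FOCAL_PX = 1200.0        # camera focal length in pixels (assumed)
    VEHICLE_WIDTH_M = 1.8    # assumed real width of the lead vehicle

    def mono_distance(bbox_width_px):
        """Distance to the lead vehicle from its image width in pixels."""
        return FOCAL_PX * VEHICLE_WIDTH_M / bbox_width_px

    def error_rate(measured, reference):
        """Percentage error of the camera estimate against a reference."""
        return abs(measured - reference) / reference * 100.0

    d_cam = mono_distance(54.0)   # e.g. a 54-pixel-wide bounding box -> 40.0 m
    d_dgps = 39.2                 # reference relative distance from DGPS (metres)
    print(f"camera: {d_cam:.1f} m, error rate: {error_rate(d_cam, d_dgps):.2f} %")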

Development of a Material Switching System for Microstructures with Multiple Materials in Projection Microstereolithography (전사방식 마이크로 광 조형에서 복합 재료의 미세구조물 제작을 위한 수지 교환 시스템 개발)

  • Jo, Kwang-Ho;Park, In-Baek;Ha, Young-Myoung;Kim, Min-Sub;Lee, Seok-Hee
    • Journal of the Korean Society for Precision Engineering / v.28 no.8 / pp.1000-1007 / 2011
  • To enlarge the applications of microstereolithography, the use of diverse materials is required. In this study, a material switching system (MSS) for a projection microstereolithography apparatus is proposed. The MSS consists of three parts: resin level control, resin dispensing control, and vat level control. The curing characteristics of the materials used in fabrication were identified. Through repeated fabrication of test models, the critical fabrication error was investigated, and a possible solution to this error is suggested. The developed system can be applied to improve the strength of microstructures and can be extended to fabricate arrays of microstructures with multiple materials. (The standard working curve used to characterize resin curing is sketched below.)
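
Resin curing characteristics in stereolithography are conventionally identified with the working-curve equation Cd = Dp·ln(E/Ec), relating cure depth to exposure. The sketch below evaluates it; the Dp and Ec values are illustrative, not the paper's measured constants.

    import math

    def cure_depth(exposure, penetration_depth, critical_exposure):
        """Cured depth (same units as penetration_depth) for a given
        exposure energy density; below critical_exposure nothing cures."""
        return penetration_depth * math.log(exposure / critical_exposure)

    # Example: Dp = 100 um, Ec = 10 mJ/cm^2, E = 60 mJ/cm^2  ->  ~179 um
    print(cure_depth(60.0, 100.0, 10.0))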