• Title/Summary/Keyword: Error level

Search Results: 2,511

Errors of Surface Image Due to the Different Tip of Nano-Indenter (나노인덴터 압입팁의 특성에 따른 표면 이미지 오차 연구)

  • Kim, Soo-In;Lee, Chan-Mi;Lee, Chang-Woo
    • Journal of the Korean Vacuum Society
    • /
    • v.18 no.5
    • /
    • pp.346-351
    • /
    • 2009
  • As device line widths decrease and integration levels increase, the 'bottom-up' method is expected to replace the currently used 'top-down' method, and research on bottom-up device production, such as nanowires and nanobelts, is being carried out widely. To use these technologies in devices, material properties must be measured precisely. Nano-indenters are used to measure the properties of nano-scale structures; in addition, they provide an AFM (Atomic Force Microscopy) function that images the surface so that physical properties can be measured at an exact position on a nano-structure. However, because nano-indenter tips are much larger than ordinary AFM probes, the surface image obtained with a nano-indenter contains considerable error. Accordingly, this study used a 50 nm Berkovich tip and a 1 um $90^{\circ}$ conical tip, which are commonly used in nano-indenters. To characterize each tip, we indented a thin-film surface with each tip and compared the resulting surface images and indentation depths. We then imaged a 100 nm structure by surface scanning with the nano-indenter, compared it with the image obtained by conventional AFM, calculated the error between the two images, and compared it with the theoretical error.
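
The theoretical error referred to above is commonly estimated from simple tip-sample dilation geometry. The sketch below (not from the paper) computes the one-sided lateral broadening of a feature of height h for a spherical tip apex of radius R and for a conical tip of a given half-angle; the 100 nm feature height and the tip parameters are illustrative assumptions only.

```python
import math

def spherical_tip_broadening(tip_radius_nm, feature_height_nm):
    """One-sided lateral broadening of a step of height h imaged by a spherical
    tip apex of radius R: w = sqrt(2*R*h - h**2) for h <= R. For h > R only the
    spherical cap is modeled (w = R); the tip sidewall would add more."""
    R, h = tip_radius_nm, feature_height_nm
    return math.sqrt(2 * R * h - h * h) if h <= R else R

def conical_tip_broadening(half_angle_deg, feature_height_nm):
    """One-sided lateral broadening of a step of height h imaged by a cone of
    half-angle theta: w = h * tan(theta)."""
    return feature_height_nm * math.tan(math.radians(half_angle_deg))

# Illustrative numbers only: a 100 nm-high structure imaged by a 50 nm-radius
# apex and by a 90-degree (45-degree half-angle) conical tip.
h = 100.0
print(f"50 nm spherical apex : +{spherical_tip_broadening(50.0, h):.0f} nm per side")
print(f"90-degree conical tip: +{conical_tip_broadening(45.0, h):.0f} nm per side")
```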

A Development of Numerical Model on the Estimation of the Long-term Run-off for the Design of Riverheads Works -With Special Reference to Small and Medium Sized Catchment Areas- (제수원공 설계를 위한 장기간 연속수수량 추정모형의 개발 - 중심유역을 중심으로)

  • 엄병현
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.29 no.4
    • /
    • pp.59-72
    • /
    • 1987
  • Although long-term runoff analysis is as important as flood analysis in the design of water works, its technology is relatively less developed, so a precise model for estimating successive runoff volumes should be developed as soon as possible. Until now, Gajiyama's formula has been widely used for long-term runoff analysis in Korea, but it has many problems in practical application, whereas the unit hydrograph method has been used almost exclusively for flood analysis. This study therefore attempts to apply the unit hydrograph method to long-term runoff analysis to improve its estimation. Four test catchment areas were selected: Maesan on the Namlum river as a representative area of the Han river system, Cheongju on the Musim river for the Geum river system, Hwasun on the Hwasun river for the Yongsan river system, and Supyung on the Geum river for the Nakdong river system. In the unit hydrograph analysis, effective rainfall was separated first. Considering that effective rainfall and the moisture condition of the catchment area are two aspects of the same phenomenon, and that the latter is not considered in the analysis, the initial base flow (qb) was selected as an index of the moisture condition. At the same time, a basic equation (Eq. 7) was established in which qb acts as a parameter relating cumulative rainfall (P) to cumulative rainfall loss (Ld). Based on this equation, a computer program for the qb estimation model was developed separately for each range of qb. The developed model was applied to measured hydrographs and hyetographs covering a total of 10 years in the 4 test areas, and the effective rainfall was estimated; the estimation precision is shown in Tab. 6 and Fig. 8. Next, based on the estimated effective rainfall (R) and runoff (Qd), a runoff distribution ratio was calculated for each test area by a computerized least-squares method and used to construct the unit hydrograph for each area. The significance of the derived hydrographs was tested by checking the relative errors between estimated and measured runoff volumes (Tab. 9, 10). According to the results, the runoff estimation error due to the unit hydrograph itself was only 2-3%, while another 2-3% of error proved to be transferred from the separation of effective rainfall. A notable finding is that, despite the different river systems and forest conditions of the test areas, their standardized unit hydrographs have very similar shapes, which can be explained by similar catchment characteristics such as stream length, catchment area, slope, and vegetation intensity. This fact should be treated as an important factor in generalizing the unit hydrograph method.
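
As a rough illustration of the computerized least-squares step mentioned in the abstract, the sketch below estimates unit-hydrograph ordinates from an effective-rainfall series and a direct-runoff series by solving the discrete convolution in a least-squares sense; the rainfall and runoff numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np

def derive_unit_hydrograph(effective_rainfall, direct_runoff, n_ordinates):
    """Least-squares estimate of unit-hydrograph ordinates u from the discrete
    convolution Q = P @ u, where P is the Toeplitz matrix of effective rainfall."""
    m = len(direct_runoff)
    P = np.zeros((m, n_ordinates))
    for j in range(n_ordinates):
        P[j:j + len(effective_rainfall), j] = effective_rainfall[:m - j]
    u, *_ = np.linalg.lstsq(P, direct_runoff, rcond=None)
    return u

# Hypothetical example: 3 pulses of effective rainfall (mm) and an observed
# direct-runoff series; 5 unit-hydrograph ordinates are estimated.
rain = np.array([10.0, 20.0, 5.0])
runoff = np.array([2.0, 8.0, 14.0, 10.0, 6.0, 3.0, 1.0])
u = derive_unit_hydrograph(rain, runoff, n_ordinates=5)
print("Estimated UH ordinates:", np.round(u, 3))
```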


Development of a New Automatic Image Quality Optimization System for Mobile TFT-LCD Applications (모바일 TFT-LCD 응용을 위한 새로운 형태의 자동화질 최적화 시스템 개발)

  • Ryu, Jee-Youl;Noh, Seok-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.1
    • /
    • pp.17-28
    • /
    • 2010
  • This paper presents, for the first time, an automatic TFT-LCD image-quality optimization system using a DSP. The conventional manual method depends on the experience of LCD-module developers, is highly labor-intensive, and requires several correction steps, resulting in a large gamma-correction error. The proposed system automatically optimizes the gamma-adjustment and power-setting registers in a mobile TFT-LCD driver IC to reduce gamma-correction error, adjusting time, and flicker. It consists of the module under test (MUT, a TFT-LCD module), a PC running the control program, a multimedia display tester for measuring luminance and flicker, and a control board interfacing the PC with the TFT-LCD module. We developed a new algorithm that uses a 6-point programmable matching technique against a reference gamma curve and applies an automatic power-setting sequence. The developed algorithm and program are applicable to most TFT-LCD modules. The system calibrates gamma values of 1.8, 2.0, 2.2, and 3.0 and reduces the flicker level. The control board is designed with a DSP and an FPGA and supports various interfaces such as RGB and CPU. The developed automatic image-quality optimization system showed significantly reduced gamma-adjusting time, reduced flicker, and much smaller average gamma error than the conventional manual method. We believe the proposed system is very useful for providing high-quality TFT-LCDs and for improving the development process through optimized gamma-curve settings and automatic power settings.
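
A minimal sketch of the kind of 6-point gamma matching described above, assuming a hypothetical luminance-measurement callback and register-write interface; the authors' actual DSP/FPGA implementation and register map are not reproduced here.

```python
import numpy as np

def reference_luminance(gray_levels, gamma, l_max):
    """Target luminance of the ideal curve L = Lmax * (g / g_max) ** gamma."""
    return l_max * (gray_levels / gray_levels.max()) ** gamma

def match_gamma_registers(measure, write_register, gray_points,
                          gamma=2.2, l_max=300.0, codes=64):
    """For each matching gray level, sweep the (hypothetical) gamma-register
    codes and keep the one whose measured luminance is closest to the target.
    `measure(gray)` and `write_register(point_index, code)` stand in for the
    luminance meter and the driver-IC register interface."""
    targets = reference_luminance(np.asarray(gray_points, dtype=float), gamma, l_max)
    chosen = []
    for idx, target in enumerate(targets):
        errs = []
        for code in range(codes):
            write_register(idx, code)
            errs.append(abs(measure(gray_points[idx]) - target))
        best = int(np.argmin(errs))
        write_register(idx, best)
        chosen.append(best)
    return chosen

# Usage would look like: match_gamma_registers(meter.read, ic.write_gamma,
#     gray_points=[0, 51, 102, 153, 204, 255], gamma=2.2)   # 6 matching points
```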

Low Complexity Video Encoding Using Turbo Decoding Error Concealments for Sensor Network Application (센서네트워크상의 응용을 위한 터보 복호화 오류정정 기법을 이용한 경량화 비디오 부호화 방법)

  • Ko, Bong-Hyuck;Shim, Hyuk-Jae;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.11-21
    • /
    • 2008
  • In conventional video coding, the complexity of the encoder is much higher than that of the decoder. However, as the need grows for an extremely simple encoder in energy-constrained environments such as sensor networks, much investigation has gone into eliminating motion prediction/compensation, which accounts for most of the complexity and energy in the encoder. Wyner-Ziv coding, one of the representative schemes for this problem, reconstructs video at the decoder by correcting the noise in the side information with a channel-coding technique such as a turbo code. Since the encoder generates only parity bits, without any process that extracts correlation information between frames, it has an extremely simple structure. However, turbo-decoding errors occur when the side information is noisy. When there is high motion or occlusion between frames, more turbo-decoding errors appear in the reconstructed frame and look like salt-and-pepper noise. Even though such noise occurs rarely, it severely degrades subjective video quality. In this paper, we propose a computationally very light encoder based on a symbol-level Wyner-Ziv coding technique and a corresponding decoder that, based on a per-pixel decision of whether an error has occurred, applies a median filter selectively in order to minimize the loss of texture detail caused by filtering. The proposed method has extremely low encoder complexity and shows improvements in both subjective quality and PSNR; our experiments verified an average PSNR gain of up to 0.8 dB.
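
The selective filtering idea can be sketched as follows: a median filter is applied only at pixels flagged as turbo-decoding errors, so the rest of the frame keeps its texture. The error map is assumed to come from the decoder's per-pixel error decision; here it is a random placeholder.

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median_filter(frame, error_map, size=3):
    """Replace only the pixels flagged in `error_map` (True = suspected
    turbo-decoding error) with the local median; all other pixels keep
    their original values, preserving texture detail."""
    filtered = median_filter(frame, size=size)
    out = frame.copy()
    out[error_map] = filtered[error_map]
    return out

# Hypothetical usage: a reconstructed 8-bit frame and a boolean error map
# produced by the decoder's per-pixel error decision.
frame = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
error_map = np.random.rand(144, 176) < 0.01   # ~1 % of pixels flagged
cleaned = selective_median_filter(frame, error_map)
```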

Design and Implementation of High Efficiency Transceiver Module for Active Phased Arrays System of IMT-Advanced (IMT-Advanced 능동위상배열 시스템용 고효율 송수신 모듈 설계 및 구현)

  • Lee, Suk-Hui;Jang, Hong-Ju
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.7
    • /
    • pp.26-36
    • /
    • 2014
  • The need for active phased-array antenna systems is increasing to improve IMT-Advanced system efficiency. An active phased-array structure consists of many small transceivers and radiating elements, and a miniaturized, high-efficiency transceiver module is the key to system implementation; the transmitter's power amplifier determines the efficiency of the base station. In this paper, we design and implement a miniaturized, high-efficiency transceiver module for an IMT-Advanced active phased-array system. A temperature-compensation circuit in the transceiver reduces gain error, and an analog pre-distortion linearizer reduces the implemented size. For minimal size and high efficiency, the power amplifier is implemented as a GaN MMIC Doherty structure. The implemented module measures $40mm{\times}90mm{\times}50mm$, and its output power is 47.65 dBm in LTE band 7. The power-amplifier efficiency is 40.7%, and the ACLR improvement provided by the linearizer is above 12 dB at the operating power level of 37 dBm. The noise figure of the transceiver is under 1.28 dB, and the amplitude and phase errors of the 6-bit control are 0.38 dB and 2.77 degrees, respectively.

Lightweight video coding using spatial correlation and symbol-level error-correction channel code (공간적 유사성과 심볼단위 오류정정 채널 코드를 이용한 경량화 비디오 부호화 방법)

  • Ko, Bong-Hyuck;Shim, Hiuk-Jae;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.13 no.2
    • /
    • pp.188-199
    • /
    • 2008
  • In conventional video coding, encoder complexity is much higher than decoder complexity. Recently, however, lightweight encoders that eliminate motion prediction/compensation, which accounts for most of the encoder complexity, have become an important research issue. Wyner-Ziv coding is one of the representative schemes for this problem; since the encoder generates only the parity bits of the current frame, without any process that extracts correlation information between frames, it has an extremely simple structure compared to conventional coding techniques. In Wyner-Ziv coding, however, channel-decoding errors occur when noisy side information is used in the channel-decoding process. These errors appear more frequently when there is not enough correlation between frames to generate accurate side information, and they look like salt-and-pepper noise in the reconstructed frame. Because this noise severely degrades subjective video quality even though it occurs rarely, we previously proposed a computationally very light encoding method based on a selective median filter that corrects such noise using the spatial correlation within a frame. In that method, however, for video sequences with complex texture, the loss of texture caused by filtering may exceed the gain from error correction. Therefore, in this paper we propose an improved lightweight encoding method that minimizes the loss of texture detail by letting the selective median filter use both the texture information and the noise information in the side information. Our experiments verified an average PSNR gain of up to 0.84 dB over the previous method.

Reconfiguration of Physical Structure of Vegetation by Voxelization Based on 3D Point Clouds (3차원 포인트 클라우드 기반 복셀화에 의한 식생의 물리적 구조 재구현)

  • Ahn, Myeonghui;Jang, Eun-kyung;Bae, Inhyeok;Ji, Un
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.6
    • /
    • pp.571-581
    • /
    • 2020
  • Vegetation affects water-level change and flow resistance in rivers and impacts waterway ecosystems as a whole, so it is important to have accurate information about the species, shape, and size of river vegetation. However, it is not easy to collect complete vegetation data on-site, so recent studies have attempted to obtain large amounts of vegetation data using terrestrial laser scanning (TLS). Because of the complex shape of vegetation, it is also difficult to obtain accurate information about the canopy area, and a complex range of variables imposes further limitations. In this study, therefore, the physical structure of vegetation was analyzed by reconfiguring high-resolution point-cloud data collected by 3-dimensional terrestrial laser scanning (3D TLS) into voxels. The physical structure was analyzed under three conditions: a simple vegetation formation without leaves, a complete formation with leaves, and a patch-scale vegetation formation. In the raw data, outliers and unnecessary points were filtered out by Statistical Outlier Removal (SOR), which removed 17%, 26%, and 25% of the data, respectively. The vegetation volume at each voxel size was then reconstructed from the post-processed point clouds and compared with the reference vegetation volume; the margin of error was 8%, 25%, and 63% for the three conditions, respectively, with larger target samples showing larger errors. The vegetation surface looked visually similar when the voxel was resized; however, the volume of the entire vegetation was susceptible to error.
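
A minimal sketch of the SOR-plus-voxelization pipeline described above, assuming the Open3D library and a hypothetical input file; the voxel sizes and filter parameters are illustrative, not the study's settings.

```python
import open3d as o3d

def voxelized_volume(ply_path, voxel_size=0.01, nb_neighbors=20, std_ratio=2.0):
    """Filter a TLS point cloud with statistical outlier removal (SOR),
    voxelize it, and estimate vegetation volume as n_voxels * voxel_size**3."""
    pcd = o3d.io.read_point_cloud(ply_path)
    filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors,
                                                 std_ratio=std_ratio)
    grid = o3d.geometry.VoxelGrid.create_from_point_cloud(filtered,
                                                          voxel_size=voxel_size)
    return len(grid.get_voxels()) * voxel_size ** 3

# Hypothetical usage: volume estimates at two voxel sizes for comparison.
# print(voxelized_volume("vegetation_scan.ply", voxel_size=0.005))
# print(voxelized_volume("vegetation_scan.ply", voxel_size=0.02))
```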

Method of Earthquake Acceleration Estimation for Predicting Damage to Arbitrary Location Structures based on Artificial Intelligence (임의 위치 구조물의 손상예측을 위한 인공지능 기반 지진가속도 추정방법 )

  • Kyeong-Seok Lee;Young-Deuk Seo;Eun-Rim Baek
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.3
    • /
    • pp.71-79
    • /
    • 2023
  • It is not efficient to install a monitoring system that measures seismic acceleration and displacement on every bridge and building in order to evaluate structural safety after an earthquake; instead, on-site investigations are conducted, which take a long time when the scope of the investigation is wide, and secondary damage may occur in the meantime, so the safety of individual structures needs to be predicted quickly. Estimating the earthquake damage of a structure typically relies on finite-element analysis using measured seismic information and a structural analysis model, so the seismic information at an arbitrary location must be predicted in order to determine structural damage quickly. In this study, methods for predicting the ground response spectrum and the acceleration time history at an arbitrary location were proposed, using linear estimation methods and artificial-neural-network learning methods based on seismic observation data, and their applicability was evaluated. With the linear estimation method, the error was small when the nearby observation stations were clustered together but increased significantly when they were dispersed. The artificial-neural-network learning method achieved a lower level of error under the same conditions.
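
The two estimation approaches compared above can be sketched roughly as follows, with an inverse-distance-weighted estimate standing in for the linear method and a small scikit-learn MLP standing in for the neural-network method; the coordinates, features, and training data are random placeholders, not the study's observation records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def idw_estimate(station_xy, station_values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate of a ground-motion quantity
    (e.g. a response-spectrum ordinate) at target_xy from nearby stations."""
    d = np.linalg.norm(np.asarray(station_xy, float) - np.asarray(target_xy, float), axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return float(np.sum(w * np.asarray(station_values)) / np.sum(w))

# Placeholder station layout and values (e.g. Sa at T = 1 s, in g).
stations = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0)]
values = [0.21, 0.35, 0.28]
print("Linear (IDW) estimate:", idw_estimate(stations, values, (1.5, 1.5)))

# ANN alternative: features might be station values, distances, and site
# parameters; the training data here are random placeholders.
X_train, y_train = np.random.rand(200, 6), np.random.rand(200)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("ANN estimate for one placeholder input:", model.predict(np.random.rand(1, 6))[0])
```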

Factor Analysis Affecting on the Charterage of Capesize Bulk Carriers (케이프사이즈 용선료에 미치는 영향 요인분석)

  • Ahn, Young-Gyun;Lee, Min-Kyu
    • Korea Trade Review
    • /
    • v.43 no.3
    • /
    • pp.125-145
    • /
    • 2018
  • The Baltic Exchange publishes the Baltic Dry Index (BDI), which represents the average charter rate for bulk carriers transporting major cargoes such as iron ore, coal, and grain; the current BDI is weighted 40% capesize, 30% panamax, and 30% supramax. Capesize vessels thus play a major role among the various sizes of bulk carriers, and this study analyzes the factors influencing the charter rate of capesize carriers, whose major cargoes are iron ore and coal. For this purpose, the study verified the causality between variables using a Vector Error Correction Model (VECM) and derived a long-run equilibrium model between the dependent variable and the independent variables. The regression analysis showed that all six independent variables have a significant effect on the capesize charter rate, even at the 1% significance level. The charter rate decreases by 0.08% when the total capesize fleet increases by 1%, increases by 0.04% when the bunker oil price increases by 1%, and decreases by 0.01% when the Yen/Dollar rate increases by 1%; it increases by 0.02% when global GDP increases by one unit (1%). In addition, increases in the cargo volumes of iron ore and coal, the major cargoes carried by capesize vessels, also raise charter rates: the charter rate increases by 0.11% for a 1% increase in iron ore volume and by 0.09% for a 1% increase in coal volume. Although some past studies have analyzed the factors affecting bulk-carrier charter rates, few have examined vessels of a specific size. At a time when ships are getting larger, this study examines capesize vessels, the largest bulk carriers, whose utilization is continuously increasing; it is also expected to contribute to the establishment of trade policies for specific cargoes such as iron ore and coal.
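
A minimal sketch of a VECM fit of the kind described, using statsmodels; the column names, lag order, and synthetic random-walk data are hypothetical assumptions, not the study's actual specification or data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Placeholder monthly panel (random walks); in practice these columns would be
# the log-transformed charter rate and its drivers loaded from real data.
rng = np.random.default_rng(0)
cols = ["ln_charter", "ln_fleet", "ln_bunker", "ln_jpy_usd",
        "ln_gdp", "ln_iron_ore", "ln_coal"]
df = pd.DataFrame(rng.normal(size=(120, len(cols))).cumsum(axis=0), columns=cols)

# Choose a cointegration rank (Johansen trace test), then fit the VECM and
# inspect the long-run (beta) and adjustment (alpha) coefficients.
rank = select_coint_rank(df, det_order=0, k_ar_diff=2).rank
res = VECM(df, k_ar_diff=2, coint_rank=max(rank, 1), deterministic="ci").fit()
print(res.summary())
```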


Developing the speech screening test for 4-year-old children and application of Korean speech sound analysis tool (KSAT) (4세 말소리발달 선별검사 개발과 한국어말소리분석도구(Korean Speech Sound Analysis Tool, KSAT)의 활용)

  • Soo-Jin Kim;Ki-Wan Jang;Moon-Soo Chang
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.49-55
    • /
    • 2024
  • This study developed a three-sentence speech screening test to evaluate speech development in 4-year-old children and to provide norms for comparison with peers. The screening test was administered to 24 children in the first half and 24 children in the second half of age four. The screening results showed a correlation of .7 with results from an existing speech-sound-disorder assessment. We also examined whether the two groups of 4-year-olds differed in the phonological development indicators and error patterns obtained from the screening test; the developmental indicators of the children in the second half of the year were higher, but the differences were not statistically significant. The Korean Speech Sound Analysis Tool (KSAT) was used for all analyses, and its automatic analysis results were compared with the clinician's manual analysis; the agreement between the automatic and manual error-pattern analyses was 93.63%. The significance of this study is that it provides norms for 4-year-old speech based on a three-sentence screening test at the elicited-sentence level and reviews the applicability of the KSAT in both clinical and research settings.
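
The reported agreement figure is a simple percent-agreement statistic; the sketch below shows how such a figure could be computed between automatic (KSAT) and manual error-pattern codings. The labels are hypothetical placeholders, not the study's coding scheme.

```python
def percent_agreement(auto_labels, manual_labels):
    """Share of items on which the automatic and manual error-pattern
    codings assign the same label."""
    assert len(auto_labels) == len(manual_labels)
    matches = sum(a == m for a, m in zip(auto_labels, manual_labels))
    return 100.0 * matches / len(auto_labels)

# Hypothetical codings for a few target segments (labels are placeholders).
auto = ["stopping", "none", "fronting", "none", "cluster_reduction"]
manual = ["stopping", "none", "fronting", "gliding", "cluster_reduction"]
print(f"Agreement: {percent_agreement(auto, manual):.2f}%")   # 80.00%
```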