• Title/Summary/Keyword: Bottom-up correction

Search Results: 9

A Correction Method of the Error in the Survey of Topography Using an Ultrasound Altitude Sonar (초음파 고도계를 이용한 지형지물 측정에 있어서의 잡음에 의한 오차 보정 방법)

  • Kim, Sea-Moon;Choi, Jong-Su;Lee, Chong-Moo;Hong, Sup
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference
    • /
    • 2001.10a
    • /
    • pp.26-31
    • /
    • 2001
  • In order to measure the distance from the sea bottom, ultrasound altitude sonars are used in the ocean. The manganese nodule pick-up device developed by KRISO also uses an altitude sonar to control the gap between the pick-up head and the sea bottom. This paper evaluates the performance of the altitude sonar experimentally. The experiment was performed with four ground models in a small basin; manganese nodule models and a water-bentonite mixture were used to set up the ground models. A Butterworth filter was applied to remove the noise caused by a servo motor and its controller. The results show that the altitude sonar gives a good estimate of the type and slope of the bottom as well as the distance.
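The abstract does not give the filter parameters; a minimal sketch of this kind of motor-noise removal with SciPy, where the sampling rate, cutoff, and filter order are illustrative assumptions rather than values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_motor_noise(altitude, fs, cutoff=5.0, order=4):
    """Low-pass Butterworth filter for an altitude-sonar time series.

    fs     : sampling rate in Hz
    cutoff : cutoff frequency in Hz (illustrative value, not from the paper)
    """
    b, a = butter(order, cutoff / (0.5 * fs), btype="low")
    # filtfilt applies the filter forward and backward, giving zero phase
    # shift so the filtered altitude stays aligned with the raw measurement.
    return filtfilt(b, a, altitude)

# Synthetic example: a slow 1 Hz bottom profile plus 50 Hz servo-motor noise.
fs = 200.0
t = np.arange(0, 2, 1 / fs)
clean = 1.0 + 0.1 * np.sin(2 * np.pi * 1.0 * t)
noisy = clean + 0.05 * np.sin(2 * np.pi * 50.0 * t)
filtered = remove_motor_noise(noisy, fs)
```

The zero-phase `filtfilt` pass matters here: a causal filter would delay the altitude signal relative to the gap-control loop.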


Linear Feature Extraction from Satellite Imagery using Discontinuity-Based Segmentation Algorithm

  • Niaraki, Abolghasem Sadeghi;Kim, Kye-Hyun;Shojaei, Asghar
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.643-646
    • /
    • 2006
  • This paper addresses an approach to extracting linear features from satellite imagery using an efficient segmentation method. The extraction of linear features from satellite images has been the main concern of many scientists, and there is a need for a more capable and cost-effective method for Iranian map revision tasks: the conventional approaches to producing, maintaining, and updating GIS maps are time-consuming and costly. Hence, this research investigates how to obtain linear features from SPOT satellite imagery. This was accomplished using a discontinuity-based segmentation technique that encompasses four stages: low-level bottom-up processing, middle-level bottom-up processing, edge thinning, and accuracy assessment. The first step is geometric correction and noise removal using a suitable operator. The second step includes choosing an appropriate edge-detection method, finding its proper threshold, and designing the built-up image. The next step implements edge thinning using a mathematical-morphology technique. Lastly, a geometric accuracy assessment of the extracted features, as well as an assessment of the built-up result, has been carried out. Overall, this approach has been applied successfully for linear feature extraction from a SPOT image.
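The paper does not specify which edge-detection operator it uses; a NumPy-only sketch of the detect-and-threshold step, assuming a Sobel operator and a fixed gradient-magnitude threshold:

```python
import numpy as np

def sobel_edges(img, threshold):
    """Gradient-magnitude edge detection with a fixed threshold.

    img is a 2-D float array; returns a boolean edge map (valid region only,
    so the output is 2 pixels smaller in each dimension).
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):            # correlate with the two 3x3 Sobel kernels
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# A vertical step edge: a bright band on the right half of the image.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img, threshold=2.0)
```

The subsequent thinning stage would reduce the two-pixel-wide response around the step to a single-pixel line, e.g. with morphological skeletonization.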


Correction of Depth Perception in Virtual Environment Using Spatial Components and Perceptual Clues (공간 구성요소 및 지각단서를 활용한 가상환경 내 깊이지각 보정)

  • Chae, Byung-Hoon;Lee, In-Soo;Chae, U-Ri;Lee, Joo-Yeoun
    • Journal of Digital Convergence
    • /
    • v.17 no.8
    • /
    • pp.205-219
    • /
    • 2019
  • As education and training in virtual environments are applied to more and more fields, their potential uses keep expanding. However, depth is underestimated in virtual training environments. Previous work has tried to solve this problem by applying a top-down correction method, but it is then difficult to attribute the result to a learning effect rather than a change in perception. In this study, it was confirmed that the proportion of spatial components had a significant effect on depth perception, and that size perception was corrected along with it. We therefore propose a correction method that uses spatial components and perceptual cues to improve the accuracy of depth perception.

Effect of Corrected Hydrostatic Pressure in Shallow-Water Flow over Large Slope (대경사를 지나는 천수 흐름에서 수정된 정수압의 효과)

  • Hwang, Seung-Yong
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.12
    • /
    • pp.1177-1185
    • /
    • 2014
  • This study suggests a new hydrostatic pressure distribution corrected for nonuniform flow over a channel of large slope. To analyze shallow-water flows over a large slope accurately, a finite-volume model is developed that incorporates the corrected pressure distribution into the shallow water equations. With the pressure correction, the traveling speed of the hydraulic jump downstream of a parabolic bump in the drain case is considerably reduced by the weakened bottom-gradient source term. In simulating dam-break flow over a triangular sill, the model with the pressure correction captures the water surface measured by digital imaging better than the model without it. Because the pressure correction decreases the flow reflected off the sill and increases the overflow over it, the simulation agrees well with the experiment. This model is therefore expected to be applicable to practical problems such as flow over a dam spillway or run-up on a beach.
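The abstract does not reproduce the corrected pressure distribution itself; a standard slope correction from open-channel hydraulics, with the flow depth $d$ measured vertically and $\theta$ the bed slope angle, has the form

```latex
p = \rho\, g\, d \cos^{2}\theta
```

so that on a steep bed the pressure, and with it the bottom-gradient source term of the shallow water equations, is weakened relative to the flat-bed hydrostatic value $p = \rho g d$. Whether the paper's correction takes exactly this form cannot be confirmed from the abstract.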

NANOCAD Framework for Simulation of Quantum Effects in Nanoscale MOSFET Devices

  • Jin, Seong-Hoon;Park, Chan-Hyeong;Chung, In-Young;Park, Young-June;Min, Hong-Shick
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.6 no.1
    • /
    • pp.1-9
    • /
    • 2006
  • We introduce our in-house program, NANOCAD, for the modeling and simulation of carrier transport in nanoscale MOSFET devices including quantum-mechanical effects, which implements two kinds of modeling approaches: the top-down approach based on the macroscopic quantum correction model and the bottom-up approach based on the microscopic non-equilibrium Green’s function formalism. We briefly review these two approaches and show their applications to the nanoscale bulk MOSFET device and silicon nanowire transistor, respectively.

CHART PARSER FOR ILL-FORMED INPUT SENTENCES (잘못 형성된 입력문장에 대한 CHART PARSER)

  • Kyongho Min
    • Korean Journal of Cognitive Science
    • /
    • v.4 no.1
    • /
    • pp.177-212
    • /
    • 1993
  • My research is based on Mellish's parser for ill-formed input (Proceedings of the 27th ACL Meeting, 1989). My system is composed of two parsers, WFCP and IFCP. When WFCP fails to give a parse tree for the input sentence, the sentence is identified as ill-formed and is parsed by IFCP for error detection and recovery at the syntactic level. My system is independent of the grammatical rules; it does not take semantic ill-formedness into account. It uses a grammar composed of 25 context-free rules and consists of two major parsing strategies: top-down expectation and bottom-up satisfaction. With top-down expectation, rules are retrieved under the inference condition and expanded by inactive arcs. In bottom-up parsing, my parser uses two modes: Left-to-Right parsing and Right-to-Left parsing. The system repairs errors successfully when the input contains an omitted word or an unknown word substituted for a valid word. Left-corner and right-corner errors are more easily detected and repaired than ill-formed sentences where the error is in the middle. The deviance note, with repair details, is kept in new inactive arcs generated by the error-correction procedure. The implementation of my system is quite different from Mellish's. When rules are invoked, my system invokes all rules with minimal inference. My bottom-up parsing strategy uses Left-to-Right mode and Right-to-Left mode, and the system is bottom-up-parsing-oriented like the chart parser. Errors are repaired in two ways: using top-down hypotheses, and using the Need-Chart, which keeps the information about the expectation and completion of goals expanded by rules. To reduce the number of top-down cycles, all rules are invoked simultaneously, and this invocation information is kept in the Need-Chart. This idea will be extended for the implementation of a multiple-error recovery system.
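The bottom-up chart idea the abstract builds on can be sketched in a few lines. This is a plain CKY-style bottom-up chart parser for a toy grammar, not Min's WFCP/IFCP system or its 25-rule grammar; grammar, lexicon, and sentence are all illustrative:

```python
from collections import defaultdict

# A tiny context-free grammar in Chomsky normal form (illustrative only,
# not the 25-rule grammar from the paper).
GRAMMAR = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

def cky_parse(words):
    """Bottom-up chart parsing: chart[(i, j)] holds the nonterminals
    that can derive words[i:j]."""
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):
        chart[(i, i + 1)].add(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):       # try every split point
                for left in chart[(i, k)]:
                    for right in chart[(k, j)]:
                        parent = GRAMMAR.get((left, right))
                        if parent:
                            chart[(i, j)].add(parent)
    return "S" in chart[(0, n)]

ok = cky_parse("the dog saw the cat".split())
```

An ill-formed-input parser such as IFCP extends this picture by keeping the failed expectations in the chart and using them to hypothesize the omitted or substituted word.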

An Efficient Feature Point Extraction and Comparison Method through Distorted Region Correction in 360-degree Realistic Contents

  • Park, Byeong-Chan;Kim, Jin-Sung;Won, Yu-Hyeon;Kim, Young-Mo;Kim, Seok-Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.1
    • /
    • pp.93-100
    • /
    • 2019
  • One of the critical issues in dealing with 360-degree realistic contents is the performance degradation in the search and recognition process, since such contents support up to 4K UHD quality and cover all viewing angles, including the front, back, left, right, top, and bottom parts of a screen. To solve this problem, this paper proposes an efficient search and comparison method for 360-degree realistic contents. The proposed method first corrects the distortion in the less distorted regions, such as the front, left, and right parts of the image, excluding the severely distorted upper and lower parts; it then extracts feature points in the corrected region and selects representative images through sequence classification. When a query image is input, search results are provided through feature-point comparison. The experimental results show that the proposed method avoids the performance deterioration seen when 360-degree realistic contents are recognized in the same way as traditional 2D contents.
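The abstract does not say which feature descriptors are compared; a NumPy sketch of the comparison step, assuming ORB-like 256-bit binary descriptors matched by brute-force Hamming distance (the descriptor format and threshold are assumptions, not from the paper):

```python
import numpy as np

def match_descriptors(query, reference, max_distance=40):
    """Brute-force Hamming matching between two sets of binary descriptors.

    query, reference: uint8 arrays of shape (n, 32), i.e. 256-bit
    descriptors as produced by ORB-like features (an assumption here).
    Returns (query_idx, reference_idx) pairs whose distance is small enough.
    """
    # XOR exposes the differing bits; unpackbits + sum counts them.
    diff = np.bitwise_xor(query[:, None, :], reference[None, :, :])
    dist = np.unpackbits(diff, axis=2).sum(axis=2)   # pairwise Hamming
    best = dist.argmin(axis=1)
    return [(q, r) for q, r in enumerate(best)
            if dist[q, r] <= max_distance]

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
qry = ref.copy()
qry[0, 0] ^= 0b00000111          # flip 3 bits in the first descriptor
matches = match_descriptors(qry, ref)
```

Random 256-bit descriptors differ in roughly 128 bits on average, so a threshold of around 40 separates genuine matches from chance collisions.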

A Sensitivity Test on the Minimum Depth of the Tide Model in the Northeast Asian Marginal Seas (동북아시아 조석 모델의 최소수심에 대한 민감도 분석)

  • Lee, Ho-Jin;Seo, Ok-Hee;Kang, Hyoun-Woo
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.19 no.5
    • /
    • pp.457-466
    • /
    • 2007
  • The effect of depth correction in the coastal sea has been investigated through a series of tide simulations in the area of 115–150°E, 20–52°N of the northwestern Pacific with 1/12° resolution. Comparing solutions with the minimum depth varied from 10 m to 35 m at 5 m intervals shows that the amplitude accuracies of the M2, S2, and K1 tides with a minimum depth of 25 m improve by up to 42%, 32%, and 26%, respectively, relative to a minimum depth of 10 m. The discrepancy between model results using different minimum depths reaches up to 20 cm for the M2 tidal amplitude around Cheju Island, and the positions of the amphidromes change dramatically in the Bohai Sea. The calculated ARE (Averaged Relative Error) is minimized when the bottom frictional coefficient and the minimum depth are 0.0015 and 25 m, respectively.
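The abstract does not define ARE precisely; a common form averages the station-wise relative amplitude error, sketched here with made-up M2 amplitudes (the station values are illustrative, not data from the paper):

```python
import numpy as np

def averaged_relative_error(model, observed):
    """Averaged Relative Error over tide-gauge stations.

    A sketch of one common definition, mean(|model - observed| / observed);
    the paper's exact formula is not given in the abstract.
    """
    model = np.asarray(model, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.mean(np.abs(model - observed) / observed)

# Illustrative M2 amplitudes (cm) at three stations for two minimum-depth runs.
obs     = [120.0, 80.0, 45.0]
run_10m = [138.0, 70.0, 52.0]   # minimum depth 10 m
run_25m = [124.0, 78.0, 47.0]   # minimum depth 25 m
are_10 = averaged_relative_error(run_10m, obs)
are_25 = averaged_relative_error(run_25m, obs)
```

Sweeping the minimum depth and the bottom friction coefficient, and picking the pair that minimizes this scalar, is the sensitivity test the paper describes.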

Current Status of Hyperspectral Data Processing Techniques for Monitoring Coastal Waters (연안해역 모니터링을 위한 초분광영상 처리기법 현황)

  • Kim, Sun-Hwa;Yang, Chan-Su
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.1
    • /
    • pp.48-63
    • /
    • 2015
  • In this study, we introduce various hyperspectral data processing techniques for the monitoring of shallow and coastal waters, to enlarge the application range and to improve the accuracy of the end results in Korea. Unlike on land, more accurate atmospheric correction is needed in coastal regions, which show relatively low reflectance at visible wavelengths. Sun glint, which occurs due to the sun-sea surface-sensor geometry, is another issue in the ocean application of hyperspectral imagery. After preprocessing the hyperspectral data, a semi-analytical algorithm based on a radiative transfer model and a spectral library can be used for bathymetry mapping in coastal areas, for type classification and status monitoring of benthos, and for substrate classification. In general, semi-analytical algorithms using spectral information obtained from hyperspectral imagery show higher accuracy than empirical methods using multispectral data. Water depth and water quality are the constraining factors in the ocean application of optical data. Although a radiative transfer model suggests a theoretical limit of about 25 m in depth for bathymetry and bottom classification, hyperspectral data have in practice been used at depths of up to 10 m in shallow and coastal waters. This means we have to focus on the maximum water depth and the water-quality conditions that affect the coastal applicability of hyperspectral data, and define a spectral library of coastal waters to classify the types of benthos and substrates.
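The semi-analytical bathymetry idea rests on exponential attenuation of bottom-reflected light through the water column. A minimal single-band sketch of that inversion (a textbook simplification, not the full semi-analytical algorithm the review discusses; all coefficient values are illustrative):

```python
import numpy as np

def invert_depth(r_obs, r_bottom, r_deep, k_d):
    """Invert water depth from reflectance using a simple attenuation model:

        R(z) = r_deep + (r_bottom - r_deep) * exp(-2 * k_d * z)

    r_deep   : optically deep-water reflectance
    r_bottom : bottom reflectance (from a spectral library)
    k_d      : diffuse attenuation coefficient (1/m); the factor 2 accounts
               for the down- and upward paths through the water column.
    """
    return -np.log((r_obs - r_deep) / (r_bottom - r_deep)) / (2.0 * k_d)

# Forward-model a 7 m water column, then recover the depth.
k_d, r_bottom, r_deep, z_true = 0.15, 0.30, 0.02, 7.0
r_obs = r_deep + (r_bottom - r_deep) * np.exp(-2 * k_d * z_true)
z_est = invert_depth(r_obs, r_bottom, r_deep, k_d)
```

The model also makes the depth limit in the text concrete: as z grows, r_obs approaches r_deep and the logarithm becomes numerically unstable, which is why turbid coastal water caps the usable depth well below the theoretical ~25 m.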