• Title/Summary/Keyword: Noise removing


Patent Trend of Unmanned and Automated Agricultural Production - Open Field Operation -

  • Kim, YongJoo;Chung, SunOk;Lee, ChoongHan;Lee, DaeHyun;Lee, KyeongHwan
    • Agribusiness and Information Management / v.6 no.1 / pp.30-36 / 2014
  • This study was conducted to identify major patents and analyze the patent trend of unmanned and automated agricultural production for open field operations. From a search of patent applications related to these technologies, 1,080 valid patents were selected by evaluating the relevance of each patent and removing noise patents. Country-based analysis of the valid patents found that the largest number of applications were filed in the United States with 541 cases, followed by Japan with 326 cases, the European Union with 128 cases, and Korea with 85 cases. Classifying the valid patents by core technology, path generation and tracking accounts for 33% with 353 cases; implement control under environmental conditions for 22% with 236 cases; robot design for 21% with 228 cases; plant and environment sensing for 19% with 206 cases; and yield and quality monitoring for 5% with 58 cases. Finally, 10 core patents were selected through a patent index evaluation, all of which were registered in the United States. The results show that Korea lags behind other advanced agricultural countries in open field-related unmanned and automated agricultural production.

Development of Contact Pressure Analysis Model of Automobile Wiper Blades (차량용 와이퍼 블레이드의 접촉압력 해석모델 개발)

  • Lee, Sangjin;Noh, Yoojeong;Kim, Kyungnam;Kim, Keunwoo;Jang, Youngkeun;Kim, Kwanhee;Lee, Jaecheon
    • Transactions of the Korean Society of Automotive Engineers / v.23 no.3 / pp.292-298 / 2015
  • A wiper is a safety device that removes rain and debris from the windshield and ensures driver visibility. If the contact pressure between the blade rubber and the windshield is unevenly distributed, unwanted noise, vibration, and abrasion of the blade can occur, sometimes leading to fatal accidents. To improve wiper safety, much research has addressed contact pressure analysis of the wiper, but the analyses either failed to converge or required long computation times because of the material nonlinearity of the rubber and the contact conditions between the blade rubber and the windshield. In this research, a simplified model with 1D beam and 2D shell elements was used for the contact pressure analysis instead of a 3D blade model. The simplified model reduced computation time and resolved the convergence problems. The accuracy of the analysis was verified by comparing its results with experiments for different rail spring curvatures.

Estimation of the Flood Area Using Multi-temporal RADARSAT SAR Imagery

  • Sohn, Hong-Gyoo;Song, Yeong-Sun;Yoo, Hwan-Hee;Jung, Won-Jo
    • Korean Journal of Geomatics / v.2 no.1 / pp.37-46 / 2002
  • Accurate classification of water area is a preliminary step in analyzing flooded area and flood damage, and is especially useful for monitoring regions where flooding recurs annually. Accurate estimation of the flooded area can ultimately serve as a primary source of information for policy decisions. Although SAR (Synthetic Aperture Radar) imagery, with its own energy source, is sensitive to water area, radar shadow produces a signature similar to that of water and must be checked carefully before classification. Especially for identifying small flooded areas in mountainous terrain, removing the shadow effect is essential for accurately classifying water in SAR imagery. In this paper, the flood area was classified and monitored using multi-temporal RADARSAT SAR images of Ok-Chun and Bo-Eun, located in Chung-Book Province, acquired on 12 August 1998 (during the flood) and 19 August 1998 (after the flood). Several geometric and radiometric processing steps were applied to the SAR imagery. First, we reduced the speckle noise of the two SAR images and then calculated the radar backscattering coefficient $(\sigma^0)$. We then performed ortho-rectification via a satellite orbit model developed in this study, using the ephemeris information of the satellite images and ground control points. We also corrected radiometric distortion caused by terrain relief. Finally, the water area was identified in both images and the flood area calculated accordingly. The identified flood area was analyzed by overlaying it on the existing land use map.
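The classification chain described above (speckle reduction, then thresholding dark water pixels) can be sketched as follows; the boxcar filter, the -14 dB threshold, and the toy scene are illustrative assumptions, not values from the paper:

```python
import numpy as np

def boxcar_despeckle(img, k=3):
    """Simple k x k moving-average speckle filter (a stand-in for the
    Lee/Frost filters commonly applied to SAR imagery)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def classify_water(sigma0_db, threshold_db=-14.0):
    """Label pixels whose backscatter falls below a dB threshold as water;
    smooth open water scatters the pulse away from the sensor and so
    appears dark in the image."""
    return sigma0_db < threshold_db

# Toy scene: bright land (-8 dB) with a dark flooded patch (-18 dB),
# plus additive speckle-like noise.
rng = np.random.default_rng(0)
scene = np.full((20, 20), -8.0)
scene[5:15, 5:15] = -18.0
noisy = scene + rng.normal(0, 2.0, scene.shape)

water_mask = classify_water(boxcar_despeckle(noisy))
flood_area_pixels = int(water_mask.sum())
```

Despeckling before thresholding matters here: without the filter, individual land pixels dip below the threshold and inflate the estimated flood area.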


Improved Method of License Plate Detection and Recognition Facilitated by Fast Super-Resolution GAN (Fast Super-Resolution GAN 기반 자동차 번호판 검출 및 인식 성능 고도화 기법)

  • Min, Dongwook;Lim, Hyunseok;Gwak, Jeonghwan
    • Smart Media Journal / v.9 no.4 / pp.134-143 / 2020
  • Vehicle license plate recognition supports transportation and traffic safety applications such as traffic control, speed limit enforcement, and runaway vehicle tracking. Although it has been studied for decades, it is attracting renewed attention owing to recent advances in deep learning and the resulting performance gains. The task is broadly divided into license plate detection and recognition. In this study, experiments were conducted to improve license plate detection performance using various object detection methods and the WPOD-Net (Warped Planar Object Detection Network) model. Accuracy was improved by first detecting the vehicle(s) and then detecting the license plate(s) within each vehicle region, instead of the conventional approach of detecting plates directly with an object detection model. In particular, final performance was further improved by removing noise in the image with the Fast-SRGAN model, one of the super-resolution methods. As a result, this experiment showed that performance improved by an average of 4.34 percentage points over previous studies, from 92.38% to 96.72%.
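The two-stage cascade (vehicles first, then plates within each vehicle crop) can be sketched roughly as below; `vehicle_detector`, `plate_detector`, and the stub boxes are hypothetical stand-ins for the trained models, not the paper's code:

```python
def detect_plates(image_w, image_h, vehicle_detector, plate_detector):
    """Run the plate detector only inside each detected vehicle region,
    then shift the plate boxes back into full-image coordinates."""
    plates = []
    for (vx, vy, vw, vh) in vehicle_detector(image_w, image_h):
        # Plate boxes come back relative to the vehicle crop.
        for (px, py, pw, ph) in plate_detector(vw, vh):
            plates.append((vx + px, vy + py, pw, ph))
    return plates

# Stub detectors for illustration only; real models would take pixels.
def vehicle_detector(w, h):
    return [(100, 50, 400, 300)]      # one vehicle box (x, y, w, h)

def plate_detector(w, h):
    return [(150, 220, 120, 40)]      # one plate box inside that crop

boxes = detect_plates(1280, 720, vehicle_detector, plate_detector)
```

Restricting the second-stage search to vehicle crops is what raises accuracy here: the plate detector sees a larger, less cluttered view of each plate than it would on the full frame.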

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.714-720 / 2021
  • ITL (information theoretic learning), based on kernel density estimation, has been applied successfully to machine learning and signal processing, but it is severely sensitive to the choice of kernel size. For maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied for adapting the remaining kernel size σ after removing the leading 1/σ² factor of the gradient. In this paper, it is shown that the main cause of the sensitivity in choosing the kernel size derives from that factor, and that adaptively adjusting σ in the remaining terms drives it toward the absolute value of the error, which stalls the weight adjustment. It is therefore proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. Experiments show that, compared to the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment on channel models, the proposed method improves performance by 4 dB, making it better suited to more complex or adverse conditions.
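For context, a minimal sketch of an MCC-type adaptive update, in which a Gaussian kernel of size σ down-weights impulsive errors; the step size, kernel size, and toy system identification setup are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mcc_lms_step(w, x, d, mu=0.05, sigma=1.0):
    """One maximum-correntropy (MCC) adaptive-filter update. Compared
    with plain LMS, the error is weighted by exp(-e^2 / (2 sigma^2)),
    so large (impulsive) errors contribute almost nothing and the
    update is robust to outliers. sigma is the kernel size whose
    choice the paper studies."""
    e = d - w @ x
    g = np.exp(-e**2 / (2.0 * sigma**2))   # kernel weight in (0, 1]
    return w + mu * g * e * x, e

# Toy system identification with 5% impulsive outliers in the target.
rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.3, 0.1])
w = np.zeros(3)
for _ in range(3000):
    x = rng.normal(size=3)
    d = w_true @ x + rng.normal(0, 0.01)
    if rng.random() < 0.05:
        d += rng.choice([-20.0, 20.0])     # impulsive outlier
    w, _ = mcc_lms_step(w, x, d)

err = float(np.linalg.norm(w - w_true))
```

A plain LMS update (g fixed at 1) would be dragged off the true weights by the ±20 impulses; the kernel weight suppresses them.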

Deep learning-based target distance and velocity estimation technique for OFDM radars (OFDM 레이다를 위한 딥러닝 기반 표적의 거리 및 속도 추정 기법)

  • Choi, Jae-Woong;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.104-113 / 2022
  • In this paper, we propose a deep learning-based target distance and velocity estimation technique for OFDM radar systems. In the proposed technique, a 2D periodogram is obtained via a 2D fast Fourier transform (FFT) of the reflected signal after removing the modulation effect. The periodogram is the input to both the conventional and the proposed estimators. A peak in the 2D periodogram represents a target, and the constant false alarm rate (CFAR) algorithm is the most popular conventional technique for estimating the target's distance and speed. In contrast, the proposed method uses a multiple-output convolutional neural network (CNN). Unlike conventional CFAR, the proposed estimator is easier to use because it requires no additional information such as the noise power. According to the simulation results, the proposed CNN improves the mean square error (MSE) by a factor of more than five compared with conventional CFAR, and the proposed estimator becomes more accurate as the number of transmitted OFDM symbols increases.
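The 2D-periodogram step described above can be sketched as follows, assuming unit-modulus OFDM symbols and a single point target; the grid sizes and the true delay/Doppler bins are illustrative:

```python
import numpy as np

N, M = 64, 32            # subcarriers, OFDM symbols
l0, d0 = 10, 5           # true delay bin (range) and Doppler bin (velocity)

rng = np.random.default_rng(2)
tx = np.exp(1j * 2 * np.pi * rng.random((N, M)))   # unit-modulus symbols

# Received symbols: delay -> linear phase across subcarriers,
# Doppler -> linear phase across symbols, plus complex noise.
n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
rx = tx * np.exp(-2j * np.pi * n * l0 / N) * np.exp(2j * np.pi * m * d0 / M)
rx += 0.1 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))

# Remove the modulation by element-wise division, then form the 2D
# periodogram: IFFT over subcarriers (delay axis), FFT over symbols
# (Doppler axis).
F = rx / tx
P = np.abs(np.fft.fft(np.fft.ifft(F, axis=0), axis=1)) ** 2

delay_hat, doppler_hat = np.unravel_index(np.argmax(P), P.shape)
```

A CFAR detector or (as in the paper) a CNN would then operate on `P`; the peak's bin indices map to the target's distance and radial velocity.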

ESTIMATION OF NITROGEN-TO-IRON ABUNDANCE RATIOS FROM LOW-RESOLUTION SPECTRA

  • Kim, Changmin;Lee, Young Sun;Beers, Timothy C.;Masseron, Thomas
    • Journal of The Korean Astronomical Society / v.55 no.2 / pp.23-36 / 2022
  • We present a method to determine nitrogen abundance ratios with respect to iron ([N/Fe]) from molecular CN-band features observed in low-resolution (R ~ 2000) stellar spectra obtained by the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). Various tests are carried out to check the systematic and random errors of our technique and the impact of the signal-to-noise (S/N) ratio of the stellar spectra on the determined [N/Fe]. We find that the uncertainty of our derived [N/Fe] is less than 0.3 dex for S/N ratios above 10 in the ranges Teff = [4000, 6000] K, log g = [0.0, 3.5], [Fe/H] = [-3.0, 0.0], [C/Fe] = [-1.0, +4.5], and [N/Fe] = [-1.0, +4.5], the parameter space of interest for identifying N-enhanced stars in the Galactic halo. A star-by-star comparison with a sample of stars having [N/Fe] estimates from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) suggests a similar level of uncertainty in our measured [N/Fe] after its systematic error is removed. Based on these results, we conclude that our method can recover [N/Fe] from low-resolution spectroscopic data with an uncertainty small enough to discover N-rich stars that presumably originated from disrupted Galactic globular clusters.

Improvement of early prediction performance of under-performing students using anomaly data (이상 데이터를 활용한 성과부진학생의 조기예측성능 향상)

  • Hwang, Chul-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1608-1614 / 2022
  • As competition between universities intensifies with the recent decline in student numbers, identifying under-performing students early and making varied efforts to prevent dropout are recognized as essential tasks for universities. This requires a high-performance model that accurately predicts student performance. This paper proposes a method to improve prediction performance by removing or amplifying anomalous data in a classification model for identifying under-performing students. Existing approaches to anomalous data have mainly focused on deleting or ignoring it; this paper instead presents a criterion for distinguishing noise from indicators of change, and improves the performance of predictive models by deleting or amplifying data accordingly. In experiments on open learning-performance data, we found a number of cases in which the proposed method improves classification performance over the existing method.
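One simple way to realize the delete-or-amplify choice for anomalous records is a z-score rule, sketched below; the 2.5-sigma cutoff and the toy score vector are illustrative assumptions, not the paper's criterion:

```python
import numpy as np

def flag_anomalies(scores, z_cut=2.5):
    """Flag records whose z-score exceeds z_cut. Whether a flagged
    record is deleted (treated as noise) or duplicated (treated as a
    genuine early indicator of change) is the design choice the paper
    explores."""
    z = (scores - scores.mean()) / scores.std()
    return np.abs(z) > z_cut

scores = np.array([72.0, 75, 74, 71, 73, 70, 76, 74, 72, 20])  # one outlier
mask = flag_anomalies(scores)

cleaned = scores[~mask]                  # option 1: delete as noise
amplified = np.concatenate(              # option 2: oversample as signal
    [scores, np.repeat(scores[mask], 2)])
```

A classifier trained on `cleaned` ignores the outlier entirely, while one trained on `amplified` weights it more heavily, matching the two treatments the abstract contrasts.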

Digital Filter Algorithm based on Local Steering Kernel and Block Matching in AWGN Environment (AWGN 환경에서 로컬 스티어링 커널과 블록매칭에 기반한 디지털 필터 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.910-916 / 2021
  • In modern society, a wide range of digital communication equipment is in use under the influence of the 4th industrial revolution. Accordingly, interest in removing the noise generated during data transmission is increasing, and research is being conducted on efficiently reconstructing images. In this paper, we propose a filtering algorithm to remove the AWGN generated during digital image transmission. To suppress AWGN that appears strongly in the image, the proposed algorithm uses block matching to select regions around the input pixel with similar patterns, thereby identifying pixels with high similarity. The selected pixels determine an estimate using weights obtained from a local steering kernel, and the final output is obtained by adding or subtracting the input pixel value according to the standard deviation of the center mask. To evaluate the proposed algorithm, it was simulated against existing AWGN removal algorithms, and comparative analysis was performed using enlarged images and PSNR.
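A rough sketch of the block-matching side of such a filter is shown below, using non-local-means-style similarity weighting; the patch and window sizes and the smoothing constant are illustrative assumptions (the paper's filter additionally applies a local steering kernel and a center-mask correction):

```python
import numpy as np

def block_match_denoise(img, patch=3, search=7, h=30.0):
    """For each pixel, compare its patch with patches in a search
    window and average the candidate centre pixels, weighted by patch
    similarity. Similar patterns get high weight; dissimilar ones are
    suppressed, so edges are preserved while AWGN averages out."""
    pr, sr = patch // 2, search // 2
    pad = sr + pr
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = den = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = padded[ci + di - pr:ci + di + pr + 1,
                                  cj + dj - pr:cj + dj + pr + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h**2)
                    num += w * padded[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out

# Flat test image corrupted by AWGN with sigma = 15.
rng = np.random.default_rng(3)
clean = np.full((16, 16), 100.0)
noisy = clean + rng.normal(0, 15.0, clean.shape)
denoised = block_match_denoise(noisy)

mse_noisy = float(((noisy - clean) ** 2).mean())
mse_denoised = float(((denoised - clean) ** 2).mean())
```

On this flat patch the filter behaves like a wide weighted average and the MSE drops sharply; near real edges the similarity weights keep dissimilar patches from blurring across the boundary.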

SAVITZKY-GOLAY DERIVATIVES : A SYSTEMATIC APPROACH TO REMOVING VARIABILITY BEFORE APPLYING CHEMOMETRICS

  • Hopkins, David W.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1041-1041 / 2001
  • Removing variability from spectral data before chemometric modeling generally yields simpler (and presumably more robust) models. Particularly for sparsely sampled data, such as that typically produced by diode array instruments, Savitzky-Golay (S-G) derivatives offer an effective way to remove the effects of shifting baselines and the sloping or curving apparent baselines often observed with scattering samples. Applying these convolution functions is equivalent to fitting a selected polynomial to a number of points in the spectrum, usually 5 to 25. The value of the polynomial, or of its derivative, evaluated at the mid-point of the wavelength window is taken as the (smoothed) spectrum or its derivative at that point, and the process is repeated for successive windows along the spectrum. The original paper, published in 1964 [1], presented these convolution functions as integer multipliers for the spectral values at equal intervals in the window, with a normalization integer to divide the sum of the products at each point. Steinier et al. [2] published corrections to errors in the original presentation [1] and a vector formulation for obtaining the coefficients. The choice of polynomial degree and number of points in the window determines whether closely spaced bands and shoulders are resolved in the derivatives. Furthermore, the actual noise reduction in the derivatives may be estimated from the square root of the sum of the squared coefficients, divided by the NORM value. A simple technique will be presented for checking the actual convolution factors the software employs; it has been found that some software packages do not properly account for the sampling interval of the spectral data (Equation VII in [1]). While this does not hinder the construction and use of chemometric models, it becomes noticeable when comparing models at different spectral resolutions. The effects of choosing various polynomials and window sizes on the parameters of PLS models will also be presented.
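As an illustration of the sampling-interval point, SciPy's `savgol_filter` exposes the interval via its `delta` argument so derivatives come out in physical units; the synthetic two-band spectrum below is an assumption for demonstration:

```python
import numpy as np
from scipy.signal import savgol_filter

# Two Gaussian bands on a sloping linear baseline, sampled every 2 nm.
wl = np.arange(1100.0, 1300.0, 2.0)
spec = (0.8 * np.exp(-((wl - 1180) / 12) ** 2)
        + 0.5 * np.exp(-((wl - 1240) / 10) ** 2)
        + 0.002 * wl)                     # baseline, slope 0.002 per nm

# 2nd-degree polynomial, 11-point window, first derivative. Passing the
# sampling interval via `delta` scales the derivative correctly; leaving
# it at the default of 1.0 would give the derivative per *point*, the
# pitfall noted above for software ignoring the sampling interval.
d1 = savgol_filter(spec, window_length=11, polyorder=2, deriv=1, delta=2.0)
```

At a symmetric band maximum the band's own derivative vanishes, so `d1` there reduces to the baseline slope (0.002), showing how the first derivative removes constant offsets while leaving a linear baseline as a flat, easily ignored constant.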
