• Title/Summary/Keyword: Noise Generation Algorithm (잡음생성 알고리즘)


Lane Detection in Complex Environment Using Grid-Based Morphology and Directional Edge-link Pairs (복잡한 환경에서 Grid기반 모폴리지와 방향성 에지 연결을 이용한 차선 검출 기법)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.786-792 / 2010
  • This paper presents a real-time lane detection method that can accurately find lane-mark boundaries in complex road environments. Unlike many existing methods, which devote much attention to the post-processing stage to fit the lane-mark position among a great number of outliers, the proposed method aims at removing those outliers as much as possible at the feature extraction stage, so that the search space at the post-processing stage can be greatly reduced. To achieve this goal, a grid-based morphology operation is first used to generate regions of interest (ROI) dynamically. Within these ROIs, a directional edge-linking algorithm with directional edge-gap closing links edge pixels into edge-links lying in valid directions, and these directional edge-links are then grouped into pairs by checking the valid lane-mark width at a certain height of the image. Finally, lane-mark colors are checked inside the edge-link pairs in the YUV color space, and lane-mark types are estimated using a Bayesian probability model. Experimental results show that the proposed method is effective in identifying lane-mark edges among heavy clutter edges in complex road environments, and the whole algorithm achieves an accuracy rate of around 92% at an average speed of 10 ms/frame for images of size $320{\times}240$.
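The width-based pairing step described above can be illustrated with a small pure-Python sketch that groups edge x-positions at one image row into candidate lane-mark pairs; the linear width model, image height, and tolerance are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of pairing directional edge-links by plausible lane-mark
# width at a given image row. Width model and thresholds are assumptions.

def expected_width(row, img_h=240, w_bottom=20.0):
    """Lane-mark pixel width shrinks linearly toward the horizon (assumed model)."""
    return w_bottom * row / img_h

def pair_edges(edge_xs, row, tol=0.3):
    """Group edge x-positions at one row into (left, right) pairs whose
    separation is close to the expected lane-mark width at that row."""
    w = expected_width(row)
    pairs = []
    xs = sorted(edge_xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            d = xs[j] - xs[i]
            if abs(d - w) <= tol * w:
                pairs.append((xs[i], xs[j]))
    return pairs

print(pair_edges([100, 118, 200, 300], row=216))  # [(100, 118)]
```

Edges whose separation does not match the expected width at that row are rejected, which is how pairing prunes clutter edges before post-processing.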

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging / v.18 no.4 / pp.303-313 / 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model for the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The quantitative evaluation was repeated over several parameters, such as Larmor frequency, object radius, and SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from them. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate an accurate conductivity value for the targeted tissue, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the conductivity reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through all these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
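The reported proportionalities can be written down as a minimal parametric sketch. The proportionality constant `k` and the exact (linear) dependence on each factor are assumptions taken literally from the abstract; the paper fits the actual model.

```python
def conductivity_to_noise_ratio(snr_b1, larmor_hz, sigma, n_avg, k=1.0):
    """Parametric CNR model per the abstract: CNR grows with the SNR of the
    B1+ map, the Larmor frequency, the conductivity value itself, and the
    number of averaged pixels. k is an unknown proportionality constant
    (assumption); exponents are taken as linear, also an assumption."""
    return k * snr_b1 * larmor_hz * sigma * n_avg
```

The useful consequence is a design rule: halving the $B_1{^+}$ map SNR must be offset by, e.g., averaging over more pixels (a larger filtering size) to keep the conductivity estimate equally reliable.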

Indoor Environment Control System based EEG Signal and Internet of Things (EEG 신호 및 사물인터넷 기반 실내 환경 제어 시스템)

  • Jeong, Haesung;Lee, Sangmin;Kwon, Jangwoo
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.1 / pp.45-52 / 2017
  • EEG signals can be acquired from people with disabilities in the same way as from people without them, so EEG is attracting attention as a next-generation interface. In this paper, we propose an Internet of Things system that controls the indoor environment using EEG signals. The proposed system consists of an EEG measurement device, EEG simulation software, and an indoor environment control device. We use EEG signal data recorded under an emotional-imagination condition in a comfortable state and a logical-imagination condition in a concentrated state. The noise in the measured signal is removed by the ICA algorithm, and beta waves are extracted from it; the data then go through learning and testing with an SVM. The subjects were trained to improve EEG signal accuracy through the EEG simulation software, and the average accuracy was 87.69%. The EEG signal from the measurement device is transmitted to the EEG simulation software through serial communication, and a control command is generated by classifying the emotional-imagination and logical-imagination conditions. The generated control command is transmitted to the indoor environment control device through Zigbee communication. In the emotional-imagination condition, soft lighting and classical music are output; in the logical-imagination condition, white noise for studying and bright lighting are output. The proposed system can be applied to BCI-based software and device control.
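The beta-wave extraction step can be illustrated with a minimal DFT-based band-power computation. This is a pure-Python stand-in for that one step only; the paper's pipeline also includes ICA denoising and SVM classification, which are not reproduced here.

```python
import math

def beta_band_power(signal, fs, f_lo=13.0, f_hi=30.0):
    """Power in the beta band (13-30 Hz) summed over DFT bins -- an
    illustrative stand-in for beta-wave extraction, not the authors' code."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs, n = 128, 128
beta = [math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]  # 20 Hz: inside beta band
slow = [math.sin(2 * math.pi * 5 * t / fs) for t in range(n)]   # 5 Hz: outside beta band
```

A classifier would then use per-channel band powers like this as features for distinguishing the two imagination conditions.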

Double Encryption of Digital Hologram Based on Phase-Shifting Digital Holography and Digital Watermarking (위상 천이 디지털 홀로그래피 및 디지털 워터마킹 기반 디지털 홀로그램의 이중 암호화)

  • Kim, Cheol-Su
    • Journal of Korea Society of Industrial Information Systems / v.22 no.4 / pp.1-9 / 2017
  • In this paper, a double encryption technique based on phase-shifting digital holography and digital watermarking is proposed. For this purpose, we first set a logo image to be used as the digital watermark and design a binary phase computer-generated hologram (CGH) for this logo image using an iterative algorithm. A randomly generated binary phase mask is set as the watermark, and a key image is obtained through an XOR operation between the binary phase CGH and the random binary phase mask. The object image is phase-modulated to a constant amplitude and multiplied by the binary phase mask to generate the object wave. This object wave can be regarded as a first encrypted image, having a noise-like pattern that includes the watermark information. Finally, we interfere the first encrypted image with a reference wave using 2-step PSDH and obtain a well-visible interference pattern, called the second encrypted image. The decryption process proceeds by Fresnel transform and the inverse of the first encryption process, after appropriate arithmetic operations on the two encrypted images. The proposed encryption and decryption processes are confirmed through computer simulations.
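The XOR key-generation step lends itself to a small sketch, with 0/1 standing in for the two binary phase values (0 and pi); the sizes and random values are illustrative.

```python
import random

random.seed(7)  # reproducible illustration
n = 8
# Stand-ins: the binary-phase CGH of the logo and the random watermark mask.
cgh = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
mask = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Key image: element-wise XOR of CGH and mask, as described in the abstract.
key = [[c ^ m for c, m in zip(rc, rm)] for rc, rm in zip(cgh, mask)]

# XOR is its own inverse, so the mask recovers the CGH from the key.
recovered = [[k ^ m for k, m in zip(rk, rm)] for rk, rm in zip(key, mask)]
assert recovered == cgh
```

This self-inverting property is what makes the binary phase mask usable both as the watermark carrier and as part of the decryption key.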

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has reached a level where the source data itself can be adjusted. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, but their features may be expressed differently depending on the photographing environment. This means that not only the differing environmental information of each image but also the same information is represented by different features, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose an adversarial learning-based method that improves the performance of an image color constancy model by simultaneously using image data generated in heterogeneous environments. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
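The angular error used for evaluation is the standard color constancy metric: the angle between the estimated and ground-truth illumination RGB vectors. A minimal definition (not specific to this paper):

```python
import math

def angular_error(est, gt):
    """Angular error in degrees between estimated and ground-truth
    illumination RGB vectors (standard color constancy metric)."""
    dot = sum(a * b for a, b in zip(est, gt))
    ne = math.sqrt(sum(a * a for a in est))
    ng = math.sqrt(sum(b * b for b in gt))
    cosv = max(-1.0, min(1.0, dot / (ne * ng)))  # clamp for float safety
    return math.degrees(math.acos(cosv))

print(round(angular_error([1, 1, 0], [1, 0, 0]), 1))  # 45.0
```

Because only the direction of the illumination vector matters, the metric is invariant to overall brightness, which is why it is preferred over raw RGB differences.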

An Efficient Coding Technique of Holographic Video Signal using 3D Segment Scanning (분할영역의 3차원 스캐닝을 이용한 홀로그래픽 비디오 신호의 효율적인 부호화 기술)

  • Seo, Young-Ho;Choi, Hyun-Jun;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2C / pp.132-140 / 2007
  • In this paper, we propose a new technique to encode and decode digital holograms. Since a digital hologram (or fringe pattern) is generated by the interference of light, its properties differ greatly from those of natural 2D (two-dimensional) images. First, we acquire an optically sensed or computer-generated hologram in digital form and extract a chrominance component. The extracted digital hologram is separated into segments for coding, in order to exploit its multi-view properties. The segmented hologram shows characteristics similar to picturing an object with 2D cameras from various points of view. Since the fringe pattern is visually noise-like, it is expected to have poor coding efficiency. To obtain high efficiency, each segment is transformed with the DCT (Discrete Cosine Transform), which resembles the hologram generation process and performs well. Each transformed segment passes through a 3D scanning process according to temporal and spatial correlation and is organized into a video stream. Since each segment, which corresponds to a frame of the video stream, consists of transformed coefficients with a wide range of values, it is classified and re-normalized. Finally, it is compressed with coding tools. The proposed algorithm showed better reconstruction properties at compression rates 16 times higher than previous research.
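The segment transform can be sketched with a 1-D orthonormal DCT-II in pure Python (for brevity; the paper transforms 2-D segments, and this is an illustration of the transform itself, not the authors' codec):

```python
import math

def dct_ii(x):
    """Orthonormal 1-D DCT-II, a stand-in for the 2-D segment transform."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A smooth (low-frequency) segment compacts its energy into few coefficients,
# which is what makes a transform useful for coding.
print(round(dct_ii([1.0, 1.0, 1.0, 1.0])[0], 6))  # 2.0
```

Energy compaction is exactly what noise-like raw fringe patterns lack, which motivates transforming the segments before scanning and re-normalizing the coefficients.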

Water body extraction using block-based image partitioning and extension of water body boundaries (블록 기반의 영상 분할과 수계 경계의 확장을 이용한 수계 검출)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.32 no.5 / pp.471-482 / 2016
  • This paper presents a water body extraction method that uses block-based image partitioning and extension of water body boundaries to improve the performance of supervised classification for water body extraction. The Mahalanobis distance image is created by computing the spectral information of the Normalized Difference Water Index (NDWI) and Near Infrared (NIR) band images over a training site within the water body, in order to extract an initial water body area. To reduce the effect of noise contained in the Mahalanobis distance image, we apply mean curvature diffusion, which controls diffusion coefficients based on the connectivity strength between adjacent pixels, and then extract the initial water body area. After partitioning the extracted water body image into non-overlapping blocks of the same size, we update the water body area using the water body information on the water body boundaries. The update is performed repeatedly under the condition that the statistical distance between the water body area on the boundaries and the training site does not exceed a threshold value. The accuracy of the proposed algorithm was assessed using KOMPSAT-2 images for block sizes between $11{\times}11$ and $19{\times}19$. The overall accuracy and Kappa coefficient of the algorithm varied from 99.47% to 99.53% and from 95.07% to 95.80%, respectively.
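The Mahalanobis distance image described above is computed per pixel from the training-site statistics; a minimal two-band (NDWI, NIR) version in pure Python (the values and covariance here are illustrative, not from the study):

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a pixel's (NDWI, NIR) vector from training-site
    statistics; cov is the 2x2 covariance matrix of the training pixels."""
    dx = (x[0] - mean[0], x[1] - mean[1])
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

# With unit covariance the distance reduces to the Euclidean distance.
print(mahalanobis_2d((3.0, 4.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]))  # 5.0
```

Thresholding this distance against the training-site statistics is also the stopping criterion used when the boundary blocks are updated iteratively.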

Diurnal Effect Compensation Algorithm for a Backup and Substitute Navigation System of GPS (GPS 백업 및 대체 항법을 위한 지상파 신호의 일변효과 보상 방안)

  • Lee, Young-Kyu;Lee, Chang-Bok;Yang, Sung-Hoon;Lee, Jong-Koo;Kong, Hyun-Dong
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.12A / pp.1225-1232 / 2008
  • In this paper, we describe a method for compensating the diurnal effect, one of the factors that most affects performance when ground-wave signals such as Loran-C are used as a backup and substitute navigation system for a global navigation satellite system such as GPS; this topic is currently being actively researched in the USA and Europe. To compensate for the diurnal effect, we first find the periodic frequency components using the Least Squares Spectral Analysis (LSSA) method, and then compensate for the effect by subtracting the estimated compensation signal, obtained from the estimated amplitude and phase of each frequency component, from the original signal. We propose a simple compensation algorithm and analyze its performance through simulations. The results show that the amplitude and phase can be estimated with errors under 5% and 0.17%, respectively, even in a somewhat poor receiving situation with a 0 dB signal-to-noise ratio (SNR). We also analyze the performance improvement obtainable after compensation using measured Loran-C data, and observe an improvement of about 22% when a moving average with a 5-minute interval is employed.
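The per-component amplitude/phase estimation at the heart of the LSSA-based compensation can be sketched as an ordinary least-squares fit at a known frequency (an illustrative sketch, not the authors' implementation; noise-free samples are used here):

```python
import math

def fit_sinusoid(t, y, f):
    """Least-squares fit of y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) at a known
    frequency f; returns (amplitude, phase) with y ~ amp*cos(2*pi*f*t - phase)."""
    c = [math.cos(2 * math.pi * f * ti) for ti in t]
    s = [math.sin(2 * math.pi * f * ti) for ti in t]
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    syc = sum(yi * ci for yi, ci in zip(y, c))
    sys_ = sum(yi * si for yi, si in zip(y, s))
    det = scc * sss - scs * scs  # 2x2 normal equations
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return math.hypot(a, b), math.atan2(b, a)

# Recover a known periodic component from clean samples.
t = [i / 100.0 for i in range(200)]
y = [2.0 * math.cos(2 * math.pi * 3.0 * ti - 0.5) for ti in t]
amp, phase = fit_sinusoid(t, y, 3.0)
```

The compensation signal is then the sum of such fitted sinusoids over the detected diurnal frequencies, subtracted from the raw measurement.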

Comparison of Ultrasound Image Quality using Edge Enhancement Mask (경계면 강조 마스크를 이용한 초음파 영상 화질 비교)

  • Son, Jung-Min;Lee, Jun-Haeng
    • Journal of the Korean Society of Radiology / v.17 no.1 / pp.157-165 / 2023
  • Ultrasound imaging uses high-frequency sound waves, which undergo physical interactions such as reflection, absorption, refraction, and transmission at the boundaries between different tissues. Improvement is needed because the data generated by ultrasound equipment contain a lot of noise, and it is difficult to grasp the shape of the tissue to be observed because the edges are vague. Edge enhancement is used to address cases where the boundary appears blurred due to reduced image quality. In this paper, as a way to strengthen the boundaries, quality improvement was confirmed by enhancing the boundary, i.e., the high-frequency part, of each image using an unsharpening mask and a high-boost mask. The mask filtering applied to each image was evaluated by measuring PSNR and SNR. Abdominal, head, heart, liver, kidney, breast, and fetal images were obtained from Philips epiq5g and affiniti70g and Alpinion E-cube 15 ultrasound systems. The algorithm was implemented in MATLAB R2022a from MathWorks. The unsharpening and high-boost mask array size was set to 3*3, and the Laplacian filter, a spatial filter used to create outline-enhanced images, was applied equally to both masks. The ImageJ program was used for the quantitative evaluation of image quality. As a result of applying the mask filters to various ultrasound images, the subjective image quality showed that the overall contour lines of the image were clearly visible when the unsharpening and high-boost masks were applied to the original image. In the quantitative comparison, images to which the unsharpening mask and the high-boost mask were applied were rated higher in quality than the original images. In the portal vein, head, gallbladder, and kidney images, the SNR, PSNR, RMSE, and MAE of the image with the high-boost mask applied were measured to be high. Conversely, for images of the heart, breast, and fetus, the SNR, PSNR, RMSE, and MAE values were higher for images with the unsharpening mask applied. Using the optimal mask for each image is thought to help improve image quality, and contour information was provided to improve image quality.
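The unsharpening / high-boost idea can be sketched with a 3*3 Laplacian in pure Python. This follows one common formulation (output = A*f + highpass, where A = 1 gives Laplacian-based sharpening and A > 1 gives high boost); the boost factor and the simple border handling are illustrative assumptions, not the study's settings.

```python
# 3x3 Laplacian kernel: estimates the high-frequency (edge) component.
LAPLACIAN = [[0, -1, 0],
             [-1, 4, -1],
             [0, -1, 0]]

def high_boost(img, A=1.5):
    """Sharpen a 2-D list-of-lists image: out = A*f + Laplacian highpass.
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hp = sum(LAPLACIAN[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = A * img[y][x] + hp
    return out

flat = [[5.0] * 3 for _ in range(3)]
print(high_boost(flat, A=1.0)[1][1])  # 5.0 (no edges, nothing to boost)
```

On flat regions the Laplacian response is zero, so only edges are amplified, which is why these masks strengthen tissue boundaries without altering homogeneous areas.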

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model performed somewhat better. In the case of LR (Logistic Regression), our model exhibited superior performance, although the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model shows the possibility of performance improvement. This study features an experimental application of extractive summarization in fake news detection research, employing various machine-learning algorithms. Its limitations are essentially the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
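A generic frequency-based extractive summarizer illustrates the kind of preprocessing applied before detection (an illustrative baseline in pure Python, not the study's exact summarization method):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the document-level frequency of its words and
    keep the top-scoring sentences in their original order."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r'\w+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    return ' '.join(s for _, i, s in sorted(top, key=lambda t: t[1]))

print(extractive_summary("Cats sleep. Cats eat fish. Dogs bark."))  # Cats eat fish.
```

The detection model then trains on these extracted sentences instead of the full text; the study's finding is that this preserves enough of the article's core information for classification.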