• Title/Summary/Keyword: Noise Removal Algorithm (노이즈 제거 알고리즘)


Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
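The corner-density idea behind the candidate-extraction step can be sketched in NumPy; the window sizes and thresholds below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map from image gradients (minimal sketch)."""
    Ix = np.gradient(img.astype(float), axis=1)
    Iy = np.gradient(img.astype(float), axis=0)

    def box(a, r=2):
        # Box-filter sum over a (2r+1)x(2r+1) neighborhood (wraps at edges)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    # Smoothed structure-tensor entries
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def text_candidates(img, corner_thresh=0.01, density_thresh=0.05, win=8):
    """Mark pixels whose neighborhood has a high density of Harris corners."""
    R = harris_response(img)
    corners = (R > corner_thresh * R.max()).astype(float)
    # Corner density: fraction of corner pixels in a (2*win+1)^2 window
    density = np.zeros_like(corners)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            density += np.roll(np.roll(corners, dy, 0), dx, 1)
    density /= (2 * win + 1) ** 2
    return density > density_thresh
```

Superimposed text produces tight clusters of corners, so the density map stays near zero over smooth background and rises sharply over text-like regions.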

Development of Registration Post-Processing Technology to Homogenize the Density of the Scan Data of Earthwork Sites (토공현장 스캔데이터 밀도 균일화를 위한 정합 후처리 기술 개발)

  • Kim, Yonggun;Park, Suyeul;Kim, Seok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.42 no.5
    • /
    • pp.689-699
    • /
    • 2022
  • Recently, productivity has improved considerably in various industries due to the application of advanced technologies, but in the construction industry, productivity improvements have been relatively low. Research on advanced technology for the construction industry is being conducted rapidly to overcome this low productivity. Among advanced technologies, 3D scan technology is widely used for creating 3D digital terrain models at construction sites. In particular, the 3D digital terrain model provides basic data for construction automation processes, such as earthwork machine guidance and control. The quality of the 3D digital terrain model is strongly influenced not only by the performance of the 3D scanner and the acquisition environment, but also by the denoising, registration, and merging processes, which are pre-processing steps for creating a 3D digital terrain model after acquiring terrain scan data. Therefore, it is necessary to improve terrain scan data processing performance. This study addresses the problem of density inhomogeneity in terrain scan data that arises during the pre-processing step. The study proposes a 'pixel-based point cloud comparison algorithm' and verifies its performance using terrain scan data obtained at an actual earthwork site.
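One way to realize a pixel-based density homogenization, sketched below, is to divide the ground plane into fixed-size cells and cap the number of points kept per cell; the cell size and per-cell cap are hypothetical parameters, not the paper's values:

```python
import numpy as np

def homogenize_density(points, cell=0.5, max_per_cell=10, seed=0):
    """Downsample a 3-D point cloud so that every ground-plane cell
    (cell x cell, in the same units as the coordinates) retains at most
    max_per_cell points, chosen at random (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Assign each point to a 2-D grid cell by its (x, y) coordinates
    keys = np.floor(points[:, :2] / cell).astype(int)
    buckets = {}
    for i, k in enumerate(map(tuple, keys)):
        buckets.setdefault(k, []).append(i)
    keep = []
    for idx in buckets.values():
        if len(idx) > max_per_cell:
            # Randomly thin over-dense cells down to the cap
            idx = list(rng.choice(idx, max_per_cell, replace=False))
        keep.extend(idx)
    return points[np.sort(keep)]
```

Cells scanned from close range lose their excess points while sparse cells are left untouched, flattening the density gradient across the site.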

A Road Feature Extraction and Obstacle Localization Based on Stereo Vision (스테레오 비전 기반의 도로 특징 정보 추출 및 장애 물체 검출)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.6
    • /
    • pp.28-37
    • /
    • 2009
  • In this paper, we propose an obstacle localization method using a road feature extracted from a V-disparity map binarized by the maximum frequency value. In conventional methods, detection performance is severely affected by the size, number, and type of obstacles; it is especially difficult to extract a large obstacle or a continuous obstacle such as a median strip. We therefore use a road feature as a new decision criterion to localize obstacles irrespective of the external environment. The road feature is suitable as a decision criterion because it retains its rough shape in the V-disparity map even in environments with many obstacles. First, we create a binary V-disparity map using the maximum frequency value so that the road feature can be extracted easily. We then filter the binary V-disparity map with a median value to remove noise. Finally, we apply linear interpolation to rows that have no value. By comparing this road feature with each column value in the disparity map, we can localize obstacles robustly. We also propose a post-processing technique to remove noise produced in the obstacle localization stage. Results from real road tests show that the proposed algorithm outperforms a conventional method.
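The per-row maximum-frequency selection and the linear interpolation of empty rows can be sketched as follows (the median filtering step is omitted; the V-disparity map is assumed to be a rows-by-disparity histogram):

```python
import numpy as np

def road_profile(vdisp):
    """For each image row of a V-disparity map, keep the disparity bin
    with the maximum frequency, then linearly interpolate rows that
    received no votes (minimal sketch of the road-feature extraction)."""
    rows, _ = vdisp.shape
    profile = np.full(rows, np.nan)
    for v in range(rows):
        if vdisp[v].max() > 0:
            profile[v] = vdisp[v].argmax()   # max-frequency disparity
    # Fill empty rows by linear interpolation between valid rows
    valid = ~np.isnan(profile)
    profile[~valid] = np.interp(np.flatnonzero(~valid),
                                np.flatnonzero(valid), profile[valid])
    return profile
```

Comparing each disparity-map column against this profile then flags pixels whose disparity rises above the road surface, i.e. obstacles.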

MORPHEUS: A More Scalable Comparison-Shopping Agent (MORPHEUS: 확장성이 있는 비교 쇼핑 에이전트)

  • Yang, Jae-Yeong;Kim, Tae-Hyeong;Choe, Jung-Min
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.2
    • /
    • pp.179-191
    • /
    • 2001
  • Comparison shopping is a merchant brokering process that finds the best price for a desired product from several Web-based online stores. To build a scalable comparison shopper, we need an agent that automatically constructs a simple information extraction procedure, called a wrapper, for each semi-structured store. Automatic construction of wrappers for HTML-based Web stores is difficult because HTML only defines how information is displayed, not what it means, and different stores employ different ways of handling customer queries and different presentation formats for product descriptions. Wrapper induction has been suggested as a promising strategy for overcoming this heterogeneity. However, previous scalable comparison-shoppers such as ShopBot rely on a strong bias in the product descriptions, and as a result, many stores that do not conform to this bias cannot be recognized. This paper proposes a more scalable comparison-shopping agent named MORPHEUS. MORPHEUS uses a simple but robust inductive learning algorithm that automatically constructs wrappers. The main idea of the algorithm is to recognize the position and structure of a product description unit by finding the most frequent pattern in the sequence of logical-line information in output HTML pages. MORPHEUS successfully constructs correct wrappers for most stores by weakening the bias assumed in previous systems. It also tolerates noise that may be present in product descriptions, such as missing attributes. MORPHEUS generates wrappers rapidly by excluding the pre-processing phase of removing redundant fragments in a page, such as a header, a footer, and advertisements. Finally, MORPHEUS provides a framework in which a customized comparison-shopping agent can be organized for a user by facilitating the dynamic addition of new stores.


Development of Learning Algorithm using Brain Modeling of Hippocampus for Face Recognition (얼굴인식을 위한 해마의 뇌모델링 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.55-62
    • /
    • 2005
  • In this paper, we propose a face recognition system using the HNMA (Hippocampal Neuron Modeling Algorithm), which models the cerebral cortex and hippocampal neurons in engineering terms, following the working principles of the human brain; it can learn the feature vectors of face images very quickly and construct optimized features for each image. The system is composed of two parts: feature extraction, and learning and recognition. In the feature extraction part, well-separated features are constructed by applying PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) in order. In the learning part, the features of the input image data, ordered according to the hippocampal neuron structure, are mapped to reaction patterns in the dentate gyrus region, and noise is removed through associative memory in the CA3 region. The CA1 region, receiving the information from CA3, forms long-term memories learned by its neurons. Experiments measure recognition rates under face changes, pose changes, and low-quality images. The experimental results show that the feature extraction and learning method proposed in this paper is superior to existing methods.
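The PCA-then-LDA feature extraction stage can be sketched with NumPy; the two-class Fisher discriminant below is a simplification of the multi-class LDA a face recognizer would use:

```python
import numpy as np

def pca(X, k):
    """Project data onto the top-k principal components via SVD."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def fisher_lda(X, y):
    """Two-class Fisher discriminant direction on PCA-reduced features:
    w ~ Sw^{-1} (m1 - m0), with a small ridge for stability."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T)          # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)
```

PCA first removes noise-dominated directions; LDA then rotates the reduced space so that class separation, rather than raw variance, is maximized.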

Detection of Toluene Hazardous and Noxious Substances (HNS) Based on Hyperspectral Remote Sensing (초분광 원격탐사 기반 위험·유해물질 톨루엔 탐지)

  • Park, Jae-Jin;Park, Kyung-Ae;Foucher, Pierre-Yves;Kim, Tae-Sung;Lee, Moonjin
    • Journal of the Korean earth science society
    • /
    • v.42 no.6
    • /
    • pp.623-631
    • /
    • 2021
  • The increased transport of marine hazardous and noxious substances (HNS) has resulted in frequent HNS spill accidents both domestically and internationally. There are about 6,000 HNS species internationally, and most of them have toxic properties. When an accidental HNS spill occurs, it can destroy the marine ecosystem and can damage life and property through explosion and fire. Constructing a spectral library of HNS by wavelength and developing a detection algorithm would help prepare for such accidents. In this study, a ground HNS spill experiment was conducted in France. The toluene spectrum was determined through hyperspectral sensor measurements. HNS present in the hyperspectral images were detected by applying a spectral mixture algorithm. As pre-processing, principal component analysis (PCA) removed noise and performed dimensional compression. The endmember spectra of toluene and seawater were extracted through the N-FINDR technique. By calculating the abundance fractions of toluene and seawater from the spectra, the detection accuracy of HNS in each pixel was expressed as a probability. The probability was compared with radiance images at a wavelength of 418.15 nm to select the abundance fraction with maximum detection accuracy; the accuracy exceeded 99% at a ratio of approximately 42%. Response to marine HNS spills is presently impeded by restricted access to the site because of the high risk of exposure to toxic compounds. The present experimental and detection results could help estimate the area contaminated with HNS based on hyperspectral remote sensing.
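The abundance-fraction step can be sketched as unconstrained least-squares unmixing followed by clipping and sum-to-one normalization; this is a simplification of fully constrained unmixing, and the spectra in the usage example are synthetic, not measured toluene or seawater spectra:

```python
import numpy as np

def abundance_fractions(pixels, endmembers):
    """Linear spectral unmixing: model each pixel spectrum as a mixture
    of endmember spectra, then clip negatives and renormalize so the
    fractions sum to one (minimal sketch)."""
    E = np.asarray(endmembers, dtype=float).T      # bands x endmembers
    P = np.asarray(pixels, dtype=float).T          # bands x pixels
    A, *_ = np.linalg.lstsq(E, P, rcond=None)      # least-squares mixing
    A = np.clip(A.T, 0, None)                      # non-negativity
    return A / A.sum(1, keepdims=True)             # sum-to-one
```

With toluene and seawater endmembers from N-FINDR, each pixel's toluene fraction can then be thresholded to produce a per-pixel detection probability map.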

Statistical Techniques to Detect Sensor Drifts (센서드리프트 판별을 위한 통계적 탐지기술 고찰)

  • Seo, In-Yong;Shin, Ho-Cheol;Park, Moon-Ghu;Kim, Seong-Jun
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.103-112
    • /
    • 2009
  • In a nuclear power plant (NPP), periodic sensor calibrations are required to ensure that sensors are operating correctly. However, only a few of the calibrated sensors are actually found to be faulty. For the safe operation of an NPP and the reduction of unnecessary calibration, on-line calibration monitoring is needed. In this paper, principal-component-based auto-associative support vector regression (PCSVR) is proposed for sensor signal validation in an NPP. It combines the attractive merits of principal component analysis (PCA) for extracting predominant feature vectors with AASVR, which easily represents complicated processes that are difficult to capture with analytical and mechanistic models. Using real plant startup data from Kori Nuclear Power Plant Unit 3, the SVR hyperparameters were optimized by response surface methodology (RSM). Moreover, statistical techniques are integrated with PCSVR for failure detection. The residuals between the estimated and measured signals are tested by the Shewhart control chart, exponentially weighted moving average (EWMA), cumulative sum (CUSUM), and generalized likelihood ratio test (GLRT) to determine whether the sensors have failed. This study shows that the GLRT is a strong candidate for detecting sensor drift.
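Two of the residual tests, the EWMA statistic and the two-sided tabular CUSUM, can be sketched as follows; the reference value k and decision interval h are expressed in units of the residual standard deviation, and the values used are illustrative:

```python
import numpy as np

def ewma(residuals, lam=0.2):
    """Exponentially weighted moving average of a residual sequence."""
    z = np.zeros(len(residuals))
    for i, r in enumerate(residuals):
        z[i] = lam * r + (1 - lam) * (z[i - 1] if i else 0.0)
    return z

def cusum(residuals, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on residuals; returns the index of the
    first alarm (upper or lower sum exceeding h), or -1 if none."""
    cp = cm = 0.0
    for i, r in enumerate(residuals):
        cp = max(0.0, cp + r - k)   # accumulates upward drift
        cm = max(0.0, cm - r - k)   # accumulates downward drift
        if cp > h or cm > h:
            return i
    return -1
```

A slowly drifting sensor produces residuals with a small persistent bias; the CUSUM accumulates that bias until it crosses the decision interval, whereas a Shewhart chart on the raw residuals would miss it.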

Rock Joint Trace Detection Using Image Processing Technique (영상 처리를 이용한 암석 절리 궤적의 추적)

  • 이효석;김재동;김동현
    • Tunnel and Underground Space
    • /
    • v.13 no.5
    • /
    • pp.373-388
    • /
    • 2003
  • The investigation of rock discontinuity geometry has usually been carried out by direct measurement on rock exposures. However, this kind of field work has disadvantages, such as restriction of the surveying area and excessive consumption of time and labor. To overcome these disadvantages, image processing can be regarded as an alternative, with the additional advantages of being automatic and objective when used with an adequate computerized algorithm. This study focused on recognizing rock discontinuities captured in digital-camera images of rock exposures and producing the discontinuity map automatically. The whole process was written using macro commands built into the image analyzer Image-Pro Plus ver. 4.1 (Media Cybernetics). The image processing procedure developed in this research can be divided into three steps: enhancement, recognition, and extraction of discontinuity traces from the digital image. Enhancement consists of combining and applying several filters to remove or relieve various types of noise in the image of the rock surface. In the next step, discontinuity traces are recognized using local topographic features characterized by gray-scale differences between discontinuities and rock. The segments of discontinuity traces extracted from the image are then reformulated using computerized decision criteria and linked to form complete discontinuity traces. To verify the image processing algorithms and their sequence, digitally photographed discontinuity traces on a rock slope were analyzed; about 75-80% of the discontinuities could be detected. The algorithms and computer codes developed in this research need further improvement, especially in combining digital filters to produce images more suitable for extracting discontinuity traces, and in setting seed pixels automatically when linking trace segments into a complete discontinuity trace.
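The gray-scale-difference recognition step can be approximated by flagging pixels that are markedly darker than their local neighborhood mean; the window size and threshold below are illustrative, not the study's calibrated values:

```python
import numpy as np

def trace_mask(gray, win=5, diff_thresh=30):
    """Mark pixels noticeably darker than the mean of their win x win
    neighborhood; a crude stand-in for recognizing dark discontinuity
    traces against brighter rock (illustrative sketch)."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode='edge')
    h, w = gray.shape
    local = np.zeros((h, w))
    # Sliding-window sum via shifted views of the padded image
    for dy in range(win):
        for dx in range(win):
            local += padded[dy:dy + h, dx:dx + w]
    local /= win * win
    return (local - gray) > diff_thresh
```

Because the test is relative to the local mean rather than a global gray level, it tolerates slow illumination changes across the rock face; linking the resulting segments into complete traces would be the next step.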

Analysis of Mass Transport in PEMFC GDL (연료전지 가스확산층(GDL) 내의 물질거동에 대한 연구)

  • Jeong, Hee-Seok;Kim, Jeong-Ik;Lee, Seong-Ho;Lim, Cheol-Ho;Ahn, Byung-Ki;Kim, Charn-Jung
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.36 no.10
    • /
    • pp.979-988
    • /
    • 2012
  • The 3D structure of GDL for fuel cells was measured using high-resolution X-ray tomography in order to study material transport in the GDL. A computational algorithm has been developed to remove noise in the 3D image and construct 3D elements representing carbon fibers of GDL, which were used for both structural and fluid analyses. Changes in the pore structure of GDL under various compression levels were calculated, and the corresponding volume meshes were generated to evaluate the anisotropic permeability of gas within GDL as a function of compression. Furthermore, the transfer of liquid water and reactant gases was simulated by using the volume of fluid (VOF) and pore-network model (PNM) techniques. In addition, the simulation results of liquid water transport in GDL were validated by analogous experiments to visualize the diffusion of fluid in porous media. Through this research, a procedure for simulating the material transport in deformed GDL has been developed; this will help in optimizing the clamping force of fuel cell stacks as well as in determining the design parameters of GDL, such as thickness and porosity.
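As a minimal illustration of the quantities read off the voxelized X-ray data, porosity can be computed directly from a binary volume, and its change under compression can be estimated assuming incompressible carbon fibers and uniaxial strain (an illustrative assumption, not the paper's structural FE model):

```python
import numpy as np

def porosity(voxels):
    """Porosity of a binary voxel volume (1 = fiber, 0 = pore)."""
    v = np.asarray(voxels, dtype=float)
    return 1.0 - v.mean()

def compressed_porosity(phi0, strain):
    """Porosity after uniaxial compression by the given strain,
    assuming the solid phase is incompressible:
    phi = 1 - (1 - phi0) / (1 - strain)."""
    return 1.0 - (1.0 - phi0) / (1.0 - strain)
```

This kind of porosity-versus-compression curve is what feeds the permeability and two-phase (VOF / pore-network) transport simulations described above.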

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System (음성인식을 위한 혼돈시스템 특성기반의 종단탐색 기법)

  • Zang, Xian;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.5
    • /
    • pp.8-14
    • /
    • 2009
  • In speech recognition, pinpointing the endpoints of a speech utterance even in the presence of background noise is of great importance. Noise present during recording introduces disturbances that complicate matters, since what we want are the stationary parameters corresponding to each speech section. One major cause of error in the automatic recognition of isolated words is inaccurate detection of the beginning and end boundaries of the test and reference templates; hence the need for an effective method of removing the unnecessary regions of a speech signal. Conventional methods for speech endpoint detection are based on two linear time-domain measurements: short-time energy and short-time zero-crossing rate. They perform well for clean speech, but their precision is not guaranteed in the presence of noise, since the high energy and zero-crossing rate of the noise are mistaken for parts of the uttered speech. This paper proposes a novel approach to finding a clear threshold between noise and speech based on Lyapunov exponents (LEs). The proposed method adopts nonlinear features to analyze the chaotic characteristics of the speech signal instead of depending on the unreliable energy feature. Its advantage over conventional methods lies in detecting endpoints through the nonlinearity of the speech signal, which we believe is an important characteristic neglected by conventional methods. The proposed method extracts features based only on the time-domain waveform of the speech signal, illustrating its low complexity. Simulations show the effective performance of the proposed method in a noisy environment, with an average recognition rate of up to 92.85% for unspecified speakers.
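A crude Rosenstein-style estimate of the largest Lyapunov exponent for one analysis frame might look like the following; the embedding dimension, delay, and divergence horizon are illustrative, and the paper's exact estimator may differ:

```python
import numpy as np

def largest_lyapunov(frame, dim=4, tau=2, horizon=5):
    """Crude largest-LE estimate for one signal frame: delay-embed the
    frame, find each point's nearest neighbor, and average the log
    divergence rate over a short horizon (illustrative sketch)."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame) - (dim - 1) * tau
    # Delay embedding: rows are points in dim-dimensional phase space
    emb = np.stack([frame[i * tau:i * tau + n] for i in range(dim)], 1)
    d0 = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
    np.fill_diagonal(d0, np.inf)        # exclude self-matches
    rates = []
    for i in range(n - horizon):
        j = int(np.argmin(d0[i, :n - horizon]))
        b = d0[i, j]                    # initial neighbor distance
        a = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        if a > 0 and b > 0:
            rates.append(np.log(a / b) / horizon)
    return float(np.mean(rates)) if rates else 0.0
```

Nearby trajectories in a noise-dominated frame diverge quickly (large positive estimate), while in quasi-periodic voiced speech they stay locked (estimate near zero), which is the contrast an LE-based endpoint detector exploits.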