• Title/Summary/Keyword: Edge detection

Reduction of Radiographic Quantum Noise Using Adaptive Weighted Median Filter (적응성 가중메디안 필터를 이용한 방사선 투과영상의 양자 잡음 제거)

  • Lee, Hoo-Min;Nam, Moon-Hyon
    • Journal of the Korean Society for Nondestructive Testing / v.22 no.5 / pp.465-473 / 2002
  • Images are easily corrupted by noise during data transmission, capture, and processing. A method for analyzing noise and for adaptive filtering to reduce quantum noise in radiography is presented. By adjusting the characteristics of the filter according to the local statistics around each pixel as a window moves over the image, it is possible to suppress noise sufficiently while preserving edges and other significant information required for reading. We propose adaptive weighted median (AWM) filters based on local statistics and show two ways of realizing them. One is a simple type of AWM filter whose weights are given by a simple non-linear function of three local characteristics. The other is an AWM filter constructed from a homogeneous factor (HF). The HF, derived from quantum noise models, enables the filter to recognize the local structure of the image, and an algorithm is proposed for determining the HF fitted to detection systems with various inner statistical properties. Experiments show that the proposed method outperforms other filters and models in preserving small details while suppressing noise in homogeneous regions. The proposed algorithms were implemented in Visual C++ on an IBM-PC (Pentium 550) for testing, and the filtering results were evaluated by comparison with images produced by existing filtering methods.
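
As a rough illustration of the "simple type" of AWM filter described in the abstract, the sketch below weights each pixel in a moving window by a non-linear function of local statistics (local mean, local variance, and distance from the window centre) before taking a weighted median. The window size and the constants `w_c` and `c` are illustrative assumptions, not the paper's values.

```python
# A minimal sketch of an adaptive weighted median (AWM) filter: window
# weights shrink with distance from the centre in noisy, inhomogeneous
# areas. Constants w_c and c are illustrative assumptions.
import numpy as np

def awm_filter(img, win=5, w_c=9, c=10.0):
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    dist = np.sqrt(yy**2 + xx**2)          # distance from window centre
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            # weights as a non-linear function of local statistics
            weights = np.maximum(
                np.rint(w_c - c * dist * np.sqrt(var) / (mean + 1e-9)),
                0).astype(int)
            samples = np.repeat(w.ravel(), weights.ravel())
            out[i, j] = np.median(samples) if samples.size else w[pad, pad]
    return out
```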

Automatic On-Chip Glitch-Free Backup Clock Changing Method for MCU Clock Failure Protection in Unsafe I/O Pin Noisy Environment (안전하지 않은 I/O핀 노이즈 환경에서 MCU 클럭 보호를 위한 자동 온칩 글리치 프리 백업 클럭 변환 기법)

  • An, Joonghyun;Youn, Jiae;Cho, Jeonghun;Park, Daejin
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.99-108 / 2015
  • Embedded microcontrollers, which are driven by logic gates synchronized to a clock pulse, are increasingly used as the main controllers of mission-critical systems. Severe electrical conditions such as high voltage/frequency surges may cause the clock source to malfunction. Tolerant system operation against various kinds of external electrical noise is required, so robust design techniques for system clock failure are becoming an increasingly important issue. In this paper, we propose an on-chip backup clock change architecture for automatic clock failure detection. To this end, we adopt an edge detector, noise-canceller logic, and a glitch-free clock changer circuit. The edge detector unit detects an abnormally low frequency of the clock source, and the delay-chain circuit of the noise canceller can cancel out glitches on the clock pulse. When the emergency status is detected, the externally invalid clock source is switched to the backup clock source by the glitch-free clock changer circuit. The proposed circuits are evaluated by Verilog simulation, and the fabricated IC is validated using electrical-field radiation noise test equipment.
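
The failure-detection idea can be illustrated behaviourally: a counter running on a reference clock is cleared by each edge of the monitored clock, and a missing edge for too long latches the switch to the backup source. The Python sketch below models only that logic; the paper's actual design is an on-chip circuit evaluated in Verilog, and the timeout value here is an assumption.

```python
# A behavioural sketch (not the paper's circuit) of clock-failure
# detection: clear a counter on each rising edge of the monitored clock;
# if the counter exceeds a timeout, latch the switch to the backup clock.
def clock_monitor(main_clk_samples, timeout=8):
    """main_clk_samples: 0/1 levels sampled on the reference clock."""
    use_backup, counter, prev = False, 0, 0
    selections = []
    for level in main_clk_samples:
        if level == 1 and prev == 0:      # rising edge detected
            counter = 0
        else:
            counter += 1
        if counter > timeout:             # abnormal low frequency
            use_backup = True             # latch: stay on backup clock
        prev = level
        selections.append(use_backup)
    return selections

# Example: the clock stops toggling halfway through the trace
trace = [0, 1, 0, 1, 0, 1] + [0] * 20
print(clock_monitor(trace))   # switches to backup after the timeout
```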

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.544-553 / 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers better visual understanding. Since the content of the scene or the editor's intention can be well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to localize overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated. Candidate regions are then extracted using the transition map, and overlay text is finally determined based on the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of overlay text, and it is also language-independent. Text region updating between frames is exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
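
A minimal sketch of the transition-map idea, assuming transition pixels are those with a large intensity change toward both horizontal neighbours and candidate regions are blocks with a high density of such pixels; the threshold, block size, and density cutoff are assumptions rather than the paper's exact formulation.

```python
# Transition map sketch: mark pixels whose intensity changes sharply on
# both sides, then keep blocks dense in such pixels as text candidates.
import numpy as np

def transition_map(gray, thresh=30):
    g = gray.astype(int)
    # horizontal intensity changes to the left and right neighbours
    dl = np.abs(g[:, 1:-1] - g[:, :-2])
    dr = np.abs(g[:, 1:-1] - g[:, 2:])
    tmap = np.zeros_like(g, dtype=bool)
    tmap[:, 1:-1] = (dl > thresh) & (dr > thresh)
    return tmap

def candidate_regions(tmap, min_density=0.1, block=16):
    """Keep blocks whose transition-pixel density is high enough."""
    h, w = tmap.shape
    mask = np.zeros_like(tmap)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if tmap[y:y + block, x:x + block].mean() > min_density:
                mask[y:y + block, x:x + block] = True
    return mask
```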

Extraction of Renal Glomeruli Region using Genetic Algorithm (유전적 알고리듬을 이용한 신장 사구체 영역의 추출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.30-39 / 2009
  • Extraction of the glomeruli region plays a very important role in diagnosing nephritis automatically. However, it is not easy to extract the glomeruli region correctly, because the difference between the glomeruli region and other regions is not obvious and because of the unevenness introduced in the sampling and imaging processes. In this study, a new method for extracting the renal glomeruli region using a genetic algorithm is proposed. First, low- and high-resolution images are obtained using a Laplacian-of-Gaussian filter with ${\sigma}=2.1$ and ${\sigma}=1.8$, and binary images are then obtained by setting the threshold value to zero. Border edges are then detected from the low-resolution image, and the border of a glomerulus is expressed as a closed B-spline curve. The parameters that determine this closed curve are searched for with a genetic algorithm, which keeps noise and gaps from breaking the border line in the middle. Next, to obtain more precise border edges of the glomeruli, the number of node points is increased stepwise from eight to sixteen to thirty-two and corrected using the high-resolution image. Finally, the proposed method is shown to be effective by applying it to real images.
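
The pre-processing stage lends itself to a short sketch: Laplacian-of-Gaussian filtering at the two quoted scales, thresholded at zero to yield binary edge images. The genetic-algorithm search over B-spline parameters is omitted, and the sign convention of the zero threshold is an assumption.

```python
# Laplacian-of-Gaussian filtering at two scales, thresholded at zero to
# obtain binary images (coarse image for the GA contour search, fine
# image for node-point refinement).
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_binary(img, sigma):
    response = gaussian_laplace(img.astype(float), sigma=sigma)
    return response > 0          # threshold at zero (sign assumed)

def multiresolution_edges(img):
    low_res = log_binary(img, sigma=2.1)   # coarse: GA contour search
    high_res = log_binary(img, sigma=1.8)  # fine: node-point correction
    return low_res, high_res
```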

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and from complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language-independent and can be applied to text of various colors. Text region updating between frames is exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
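
A rough sketch of the four-step pipeline using OpenCV's Harris implementation; the corner-response threshold, density window, and size filter used as post-processing are illustrative assumptions.

```python
# Four steps: Harris corner map, corner-density candidates, connected-
# component labelling, and a simple size filter as post-processing.
import cv2
import numpy as np

def text_regions(gray):
    # 1) corner map via the Harris detector
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corner_map = (corners > 0.01 * corners.max()).astype(np.float32)
    # 2) candidate mask from local corner density (normalised box filter)
    density = cv2.boxFilter(corner_map, -1, (16, 16))
    candidates = (density > 0.05).astype(np.uint8)
    # 3) labelling of connected candidate regions
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    boxes = []
    # 4) post-processing: drop regions too small to be a subtitle
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if w > 30 and h > 8:
            boxes.append((x, y, w, h))
    return boxes
```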

Analysis of size distribution of riverbed gravel through digital image processing (영상 처리에 의한 하상자갈의 입도분포 분석)

  • Yu, Kwonkyu;Cho, Woosung
    • Journal of Korea Water Resources Association / v.52 no.7 / pp.493-503 / 2019
  • This study presents a new method of estimating the size distribution of river-bed gravel through image processing. The analysis was done in two steps: first, images of individual grains were analyzed, and then grain particles in river-bed images were segmented. In the first part of the analysis, grain features obtained from images (long axes, intermediate axes, and projective areas) were compared with those measured directly. For this analysis, 240 gravel particles were collected at three river stations; all particles were measured with vernier calipers and weighed with scales. The measured data showed that the river gravel had shape factors of 0.514~0.585, and that the weight of a gravel particle correlates more strongly with its projective area than with its long or intermediate axis. Using these results, we established an area-weight formula. In the second step, we calculated the projective areas of the river-bed gravels by detecting their edge lines with the ImageJ program, and converted these areas to a grain-size distribution using the formula established earlier. The proposed method was applied to three small and medium-sized rivers in Korea. Comparison of the analyzed size distributions with measured ones showed that the proposed method can estimate the median diameter within a fair error range. However, the estimated distributions deviated slightly from the observed values, which needs improvement in future work.
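
The second step can be sketched as follows, using OpenCV in place of the paper's ImageJ workflow: segment grains by their edge lines, measure projective areas, and convert areas to weights via a power law standing in for the paper's area-weight formula. The coefficients `a` and `b` below are placeholders, not fitted values.

```python
# Edge-based grain segmentation, projective-area measurement, and an
# assumed area-weight power law W = a * A**b to build a size distribution.
import cv2
import numpy as np

def grain_size_distribution(gray, px_per_mm, a=1.0, b=1.5):
    # 1) detect grain edge lines and close small gaps
    edges = cv2.Canny(gray, 50, 150)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))
    # 2) trace closed outlines and measure each projective area
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    areas_px = np.array([cv2.contourArea(c) for c in contours])
    areas_mm2 = areas_px[areas_px > 50] / px_per_mm**2  # drop specks
    # 3) convert areas to weights and accumulate the distribution
    weights = a * areas_mm2**b
    diam = 2 * np.sqrt(areas_mm2 / np.pi)   # equivalent diameter (mm)
    order = np.argsort(diam)
    cum = np.cumsum(weights[order]) / weights.sum()
    d50 = np.interp(0.5, cum, diam[order])  # median diameter by weight
    return diam[order], cum, d50
```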

A Study on the remote acquisition of HejHome Air Cloud artifacts (스마트 홈 헤이 홈 Air의 클라우드 아티팩트 원격 수집 방안 연구)

  • Kim, Ju-eun;Seo, Seung-hee;Cha, Hae-seong;Kim, Yeok;Lee, Chang-hoon
    • Journal of Internet Computing and Services / v.23 no.5 / pp.69-78 / 2022
  • As the use of Internet of Things (IoT) devices has expanded, the digital forensics coverage of the National Police Agency has extended to smart home areas. Accordingly, most existing studies on acquiring smart home platform data have focused on analyzing local data from mobile devices or on network-level analysis. However, the data most meaningful for evidence analysis is mainly stored in cloud storage on smart home platforms. Therefore, in this paper, we study how to acquire data stored in the cloud in a HejHome Air environment by extracting the accessToken of a user account from the cookie databases of browsers such as Microsoft Edge, Google Chrome, Mozilla Firefox, and Opera, which are recorded on a PC when the user uses the HejHome app-based "HejHome Square" service. The test environment was configured with smart temperature/humidity sensors, smart door sensors, and smart motion sensors, and artifacts such as temperature and humidity data by date and place, the list of devices used, and motion detection records were collected. Information such as the temperature and humidity at the time of an incident can be obtained from the artifact analysis and used in the forensic investigation process. In addition, the cloud data acquisition method using OpenAPI proposed in this paper excludes the possibility of tampering during the data collection process, and because it works through the API it follows the principles of integrity and reproducibility, which are core principles of digital forensics.
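
A heavily hedged sketch of the acquisition flow: read the accessToken from a browser cookie database, then query the cloud with it. The cookie database path, the cookie name, and the API endpoint below are all assumptions (and modern Chrome encrypts cookie values, which this sketch ignores).

```python
# Sketch only: token extraction from a browser cookie DB plus an
# authenticated cloud query. Paths, names, and endpoints are assumed.
import os
import sqlite3
import requests

# assumed Chrome cookie DB location on Windows; other browsers differ
COOKIE_DB = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Network\Cookies")
API_BASE = "https://api.hejhome.example/openapi"   # placeholder endpoint

def extract_access_token(db_path=COOKIE_DB):
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT value FROM cookies WHERE name = 'accessToken'").fetchone()
    con.close()
    return row[0] if row else None

def fetch_device_artifacts(token):
    # e.g. the device list; per-device sensor history would follow the
    # same pattern (endpoints assumed, not documented here)
    headers = {"Authorization": f"Bearer {token}"}
    return requests.get(f"{API_BASE}/devices", headers=headers).json()
```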

Development of a PTV Algorithm for Measuring Sediment-Laden Flows (유사 흐름 측정을 위한 입자추적유속계 알고리듬의 개발)

  • Yu, Kwon-Kyu;Muste, Marian;Ettema, Robert;Yoon, Byung-Man
    • Journal of Korea Water Resources Association / v.38 no.10 s.159 / pp.841-849 / 2005
  • Two-phase flows, e.g., sediment-laden flows and bubbly flows, have two distinct velocity profiles: the flow velocity and the sediment velocity. To measure the velocity distributions of two-phase flows, sophisticated instruments that can separate the velocity profiles of the two phases are necessary. For bubbly flows, PIV (Particle Image Velocimetry) and PTV (Particle Tracking Velocimetry) have given fairly good velocity profiles of the two phases. For sediment-laden flows, however, applications of PIV or PTV have not been as successful, because the sediment particles introduced into the flow keep the images from being analyzed. A new algorithm consisting of several image-analysis methods is proposed for analyzing sediment-laden flows. For particle detection, thresholding, edge detection, and thinning methods are adopted, and for finding matching pairs, PIV and PTV routines are combined. The proposed method can (1) detect sediment particles with irregular boundaries, (2) remove reflected and scattered images, and (3) discriminate tracer particles from reflected images of sediment particles.
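
The detection stage named in the abstract (thresholding, edge detection, thinning, and small-blob removal) might look roughly like this; parameter values are illustrative assumptions.

```python
# Detection sketch: Otsu threshold, Canny edges for irregular particle
# boundaries, skeletonization (thinning), and removal of small blobs
# standing in for reflected/scattered images.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def detect_particles(gray, min_area=20):
    # 1) threshold: separate particle images from the background
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2) edge detection: recover irregular particle boundaries
    edges = cv2.Canny(gray, 50, 150)
    # 3) thinning: reduce boundaries to one-pixel-wide outlines
    outlines = skeletonize(edges > 0)
    # remove small blobs (reflected and scattered images)
    n, _, stats, cents = cv2.connectedComponentsWithStats(binary)
    keep = [i for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return outlines, [tuple(cents[i]) for i in keep]  # particle centroids
```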

Automated Detecting and Tracing for Plagiarized Programs using Gumbel Distribution Model (굼벨 분포 모델을 이용한 표절 프로그램 자동 탐색 및 추적)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • The KIPS Transactions:PartA / v.16A no.6 / pp.453-462 / 2009
  • Studies on software plagiarism detection, prevention, and judgment have become widespread due to growing interest in, and the importance of, protecting and authenticating software intellectual property. Many previous studies focused on comparing all pairs of submitted codes using attribute counting, token patterns, program parse trees, and similarity-measuring algorithms. It is important to provide a clear-cut model for distinguishing plagiarism from collaboration. This paper proposes a source code clustering algorithm using a probability model on an extreme value distribution. First, we propose an asymmetric distance measure pdist($P_a$, $P_b$) to measure the similarity of $P_a$ and $P_b$. We then construct the Plagiarism Direction Graph (PDG) for a given program set using pdist($P_a$, $P_b$) as edge weights, and transform the PDG into a Gumbel Distance Graph (GDG) model, since we found that the distribution of pdist($P_a$, $P_b$) scores is similar to the well-known Gumbel distribution. Second, we newly define pseudo-plagiarism, a kind of virtual plagiarism forced by a very strong functional requirement in the specification. We conducted experiments with 18 groups of programs (more than 700 source codes) collected from the ICPC (International Collegiate Programming Contest) and KOI (Korean Olympiad in Informatics) programming contests. The experiments showed that most plagiarized codes could be detected with high sensitivity and that our algorithm successfully separates real plagiarism from pseudo-plagiarism.
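
As a sketch of the overall scheme, the snippet below approximates pdist($P_a$, $P_b$) with an asymmetric token n-gram containment score (the paper's actual measure differs), builds the directed edge set of a plagiarism direction graph, and fits a Gumbel distribution to the score population to flag outlier edges. The n-gram length and the tail cutoff are assumptions.

```python
# Asymmetric pdist stand-in, directed edge weights, and a Gumbel fit
# over all pairwise scores to flag improbably similar pairs.
from itertools import permutations
from scipy.stats import gumbel_r

def ngrams(tokens, n=4):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pdist(pa_tokens, pb_tokens, n=4):
    """Asymmetric: how much of Pa is contained in Pb."""
    a, b = ngrams(pa_tokens, n), ngrams(pb_tokens, n)
    return len(a & b) / len(a) if a else 0.0

def plagiarism_graph(programs):   # programs: {name: token list}
    edges = {(x, y): pdist(programs[x], programs[y])
             for x, y in permutations(programs, 2)}
    loc, scale = gumbel_r.fit(list(edges.values()))
    # flag edges in the extreme tail of the fitted Gumbel model
    cutoff = gumbel_r.ppf(0.99, loc, scale)
    return {e: s for e, s in edges.items() if s > cutoff}
```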

Development of Signal Processing Circuit for Side-absorber of Dual-mode Compton Camera (이중 모드 컴프턴 카메라의 측면 흡수부 제작을 위한 신호처리회로 개발)

  • Seo, Hee;Park, Jin-Hyung;Park, Jong-Hoon;Kim, Young-Su;Kim, Chan-Hyeong;Lee, Ju-Hahn;Lee, Chun-Sik
    • Journal of Radiation Protection and Research / v.37 no.1 / pp.16-24 / 2012
  • In the present study, a gamma-ray detector and an associated signal processing circuit were developed for the side-absorber of a dual-mode Compton camera. The gamma-ray detector was made by optically coupling a CsI(Tl) scintillation crystal to a silicon photodiode. The developed signal processing circuit consists of two parts: a slow part for energy measurement and a fast part for timing measurement. The fast part has three components: (1) a fast shaper, (2) a leading-edge discriminator, and (3) a TTL-to-NIM logic converter. An AC-coupled configuration between the detector and the front-end electronics (FEE) was used. Because the noise properties of the FEE can significantly affect the overall performance of the detection system, some design criteria are presented. The performance of the developed system was evaluated in terms of energy and timing resolution: the energy resolution was 12.0% and 15.6% FWHM for the 662 and 511 keV peaks, respectively, and the timing resolution was 59.0 ns. Finally, methods to improve the performance are discussed, since the developed gamma-ray detection system showed performance that is applicable, but not yet satisfactory, for the Compton camera application.
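
The leading-edge discriminator in the fast part can be illustrated on sampled data: it timestamps the first threshold crossing of the shaped pulse. The threshold and sampling period below are assumptions, not the circuit's actual values.

```python
# Leading-edge discrimination on a sampled pulse: report the time of the
# first fixed-threshold crossing (the source of the timing signal).
import numpy as np

def leading_edge_discriminator(samples, threshold, dt_ns=4.0):
    """Return the timestamp (ns) of the first threshold crossing, or None."""
    above = np.asarray(samples) >= threshold
    idx = np.argmax(above)        # index of the first True sample
    if not above[idx]:
        return None               # pulse never crossed the threshold
    return idx * dt_ns

# Example: a toy shaped pulse sampled every 4 ns
t = np.arange(0, 200, 4.0)
pulse = 100 * np.exp(-(t - 60)**2 / (2 * 15**2))
print(leading_edge_discriminator(pulse, threshold=50))  # crossing time (ns)
```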