• Title/Summary/Keyword: noise removal techniques (잡음 제거 기술)

Digital watermarking algorithm for authentication and detection of manipulated positions in MPEG-2 bit-stream (MPEG-2비트열에서의 인증 및 조작위치 검출을 위한 디지털 워터마킹 기법)

  • 박재연;임재혁;원치선
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.378-387
    • /
    • 2003
  • Digital watermarking is a technique that embeds invisible signals, including owner identification information, specific codes, or patterns, into multimedia data such as images, video, and audio. Watermarking techniques can be classified into two groups: robust watermarking and fragile (semi-fragile) watermarking. The main purpose of robust watermarking is the protection of copyright, whereas fragile (semi-fragile) watermarking protects image or video data from illegal modification. To achieve this goal, the watermark should survive unintentional modifications such as random noise or compression, but remain fragile to malicious manipulation. In this paper, an invertible semi-fragile watermarking algorithm for authentication and detection of manipulated locations in an MPEG-2 bit-stream is proposed. The proposed algorithm embeds two kinds of watermarks, both into the quantized DCT coefficients, so it can be applied directly to the compressed bit-stream. The first watermark is used for authentication of the video data; the second for detection of malicious manipulation. It can distinguish transcoding in the bit-stream domain from malicious manipulation and detect the block-wise locations of manipulations in the video data. Also, since the proposed algorithm is invertible, the original video data can be recovered if the watermarked video is authentic.
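
As a rough illustration of embedding in quantized DCT coefficients, the sketch below forces the parity of one coefficient per 8×8 block to carry a watermark bit. The coefficient position and the random stand-in block are assumptions for illustration; this is not the paper's two-watermark, invertible scheme.

```python
# Minimal parity-embedding sketch on a quantized DCT block (hypothetical
# position and data; the paper's actual embedding rule is not reproduced here).
import numpy as np

def embed_bit(qblock, bit, pos=(4, 4)):
    """Force the parity of one quantized coefficient to carry `bit`."""
    out = qblock.copy()
    c = int(out[pos])
    if (c & 1) != bit:
        c += 1 if c <= 0 else -1   # minimal change; logging it enables inversion
    out[pos] = c
    return out

def extract_bit(qblock, pos=(4, 4)):
    return int(qblock[pos]) & 1

block = np.random.randint(-8, 9, size=(8, 8))   # stand-in quantized DCT block
marked = embed_bit(block, bit=1)
assert extract_bit(marked) == 1
```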

Eyelid Detection Algorithm Based on Parabolic Hough Transform for Iris Recognition (홍채 인식을 위한 포물 허프 변환 기반 눈꺼풀 영역 검출 알고리즘)

  • Jang, Young-Kyoon;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.94-104
    • /
    • 2007
  • Iris recognition is a biometric technology that identifies a person using the unique iris pattern of the user. Iris images captured by conventional iris recognition cameras are often occluded by the eyelids, which cover iris information. Because the eyelids are extraneous information that degrades recognition performance, this paper proposes a robust eyelid detection algorithm. This research has the following three advantages over previous work. First, we remove the detected eyelashes and specular reflections by linear interpolation, because they act as noise when locating the eyelid. Second, we detect candidate eyelid points using a mask within a limited eyelid search area, which is determined by locating the intersection of the eyelid and the outer boundary of the iris; our algorithm then detects the eyelid by a parabolic Hough transform based on the detected candidate points. Third, although there have been many studies on eyelid detection, they did not consider the rotation of the eyelid in the iris image; we incorporate a rotation factor into the parabolic Hough transform to overcome this problem. We tested our algorithm on the CASIA database. The detection accuracy was 90.82% for the upper eyelid and 96.47% for the lower eyelid.
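
To make the voting idea concrete, here is a minimal parabolic Hough transform that accumulates votes for vertex/curvature triples (h, k, a) of y = a(x - h)^2 + k. The parameter grids are hypothetical, and the rotation sweep described in the paper is omitted for brevity.

```python
# Sketch of a parabolic Hough transform: each candidate edge point votes for
# the (h, k, a) triples consistent with y = a*(x - h)**2 + k.
import numpy as np

def parabolic_hough(points, h_range, k_range, a_range):
    acc = np.zeros((len(h_range), len(k_range), len(a_range)), dtype=int)
    a_step = a_range[1] - a_range[0]          # assumes a uniform, ascending grid
    for x, y in points:
        for i, h in enumerate(h_range):
            if x == h:
                continue
            for j, k in enumerate(k_range):
                a = (y - k) / (x - h) ** 2    # curvature passing through (x, y)
                m = np.argmin(np.abs(a_range - a))
                if abs(a_range[m] - a) < a_step:
                    acc[i, j, m] += 1
    i, j, m = np.unravel_index(acc.argmax(), acc.shape)
    return h_range[i], k_range[j], a_range[m]
```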

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.355-364
    • /
    • 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of cyber attack with existing pattern- or signature-based intrusion detection methods. Therefore, research on anomaly detection using data-driven artificial intelligence is increasing. Supervised anomaly detection methods are difficult to use in real environments because they require sufficient labeled data for learning, so unsupervised methods that learn from normal data and detect anomalies by finding patterns in the data itself have been actively studied. This study aims to extract a latent vector that preserves useful sequence information from sequential log data and to develop an anomaly detection model using the extracted latent vector. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and an unsupervised autoencoder was developed to extract latent vectors from the sequences expressed as dense vectors. Three autoencoder variants were built: a denoising autoencoder based on the GRU (Gated Recurrent Unit) recurrent network, which suits sequence data; a one-dimensional convolutional autoencoder, which mitigates the limited short-term memory that a GRU can suffer from; and an autoencoder combining the GRU with one-dimensional convolution. The experiments used the time-series NGIDS (Next Generation IDS Dataset) data. The autoencoder combining the GRU with one-dimensional convolution was more efficient than either the GRU-based or the convolution-based model in terms of training time for extracting useful latent patterns, and it showed stable anomaly detection performance with smaller fluctuations.
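
A hedged sketch of the pipeline described above: Word2Vec turns each log event into a dense vector, and a Conv1D-plus-GRU autoencoder reconstructs a noised version of the sequence, so the reconstruction error can serve as the anomaly score. The toy sequences, layer sizes, and training settings are placeholders, not the paper's NGIDS configuration.

```python
# Word2Vec embedding of event sequences + a combined Conv1D/GRU denoising
# autoencoder (illustrative sizes only).
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

sequences = [["open", "read", "write", "close"]] * 100   # stand-in event logs
w2v = Word2Vec(sequences, vector_size=32, window=3, min_count=1)
X = np.stack([[w2v.wv[t] for t in s] for s in sequences])  # shape (N, T, 32)

inp = tf.keras.Input(shape=(X.shape[1], 32))
h = tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu")(inp)
h = tf.keras.layers.GRU(16)(h)                              # latent vector
h = tf.keras.layers.RepeatVector(X.shape[1])(h)
h = tf.keras.layers.GRU(64, return_sequences=True)(h)
out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(32))(h)

ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X + np.random.normal(0, 0.1, X.shape), X, epochs=5)  # denoising objective
# At test time: score each sequence by its reconstruction error.
```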

Improving target recognition of active sonar multi-layer processor through deep learning of a small amount of imbalanced data (소수 불균형 데이터의 심층학습을 통한 능동소나 다층처리기의 표적 인식성 개선)

  • Young-Woo Ryu;Jeong-Goo Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.225-233
    • /
    • 2024
  • Active sonar transmits sound waves to detect covertly maneuvering underwater objects and receives the signals reflected back from the target. However, in addition to the target's echo, the received signal is mixed with seafloor and sea-surface reverberation, biological noise, and other noise, making target recognition difficult. Conventional techniques that detect signals above a threshold not only cause false detections or missed targets depending on the chosen threshold, but also require an appropriate threshold to be set for each underwater environment. To overcome this, automatic threshold calculation through techniques such as Constant False Alarm Rate (CFAR) processing and the application of advanced tracking filters and association techniques have been studied, but these have limitations in environments where a large number of detections occur. With the recent development of deep learning, efforts have been made to apply it to underwater target detection, but active sonar data for training a discriminator is very difficult to acquire, so the data is not only scarce but also imbalanced, with very few targets and a relatively large number of non-targets. In this paper, images of the energy distribution of the detection signal are used to train a classifier in a way that accounts for the data imbalance when distinguishing targets from non-targets, and the classifier is added to the existing processing chain. The proposed technique minimized target misclassification and eliminated non-targets, making target recognition easier for active sonar operators, and its effectiveness was verified on sea-trial data obtained in the East Sea.
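
One standard way to account for such imbalance during training, offered here as an assumption rather than the paper's exact recipe, is to weight the rare target class inversely to its frequency, as in the sketch below with a stand-in CNN over detection-energy images.

```python
# Class-weighted training on (synthetic) energy-distribution images with ~5%
# targets; the network and data are placeholders, not the paper's classifier.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 32, 32, 1).astype("float32")   # stand-in energy maps
y = (np.random.rand(1000) < 0.05).astype("int32")       # rare target class

n_pos = int(y.sum())
n_neg = len(y) - n_pos
class_weight = {0: len(y) / (2 * n_neg), 1: len(y) / (2 * n_pos)}

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, class_weight=class_weight)
```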

Measurement of Backscattering Coefficients of Rice Canopy Using a Ground Polarimetric Scatterometer System (지상관측 레이다 산란계를 이용한 벼 군락의 후방산란계수 측정)

  • Hong, Jin-Young;Kim, Yi-Hyun;Oh, Yi-Sok;Hong, Suk-Young
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.2
    • /
    • pp.145-152
    • /
    • 2007
  • The polarimetric backscattering coefficients of a wetland rice field, an experimental plot belonging to the National Institute of Agricultural Science and Technology in Suwon, are measured using ground-based polarimetric scatterometers at 1.8 and 5.3 GHz throughout a growth season, from transplanting to harvest (May to October 2006). The polarimetric scatterometers consist of a vector network analyzer with a time-gating function and a polarimetric antenna set, and are calibrated with a single-target calibration technique using a trihedral corner reflector to obtain VV-, HV-, VH-, and HH-polarized backscattering coefficients. The polarimetric backscattering coefficients are measured at $30^{\circ}$, $40^{\circ}$, $50^{\circ}$, and $60^{\circ}$, with 30 independent samples for each incidence angle at each frequency. During the measurement periods, ground truth data including fresh and dry biomass, plant height, stem density, leaf area, specific leaf area, and moisture contents are also collected at each measurement. The temporal variations of the measured backscattering coefficients, as well as the measured plant height, LAI (leaf area index), and biomass, are analyzed, and the measured polarimetric backscattering coefficients are compared with the rice growth parameters. The measured plant height increases monotonically, while the measured LAI increases only until the ripening period and decreases afterwards. The measured backscattering coefficients are fitted with polynomial expressions as functions of growth age, LAI, and plant height for each polarization, frequency, and incidence angle. At larger incidence angles, the L-band signatures correlated more strongly with rice growth than the C-band signatures. The HH-polarized backscattering coefficients are found to be more sensitive than the VV-polarized coefficients to growth age and the other input parameters. To derive functions for estimating rice growth, it is necessary to divide the data according to growth stages that mark qualitative changes, such as panicle initiation, flowering, or heading.
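
The polynomial fitting step reduces to a few lines; the LAI and backscatter values below are invented placeholders, not the measured Suwon data.

```python
# Fit a polynomial to (hypothetical) HH-polarized backscattering coefficients
# as a function of LAI, then use it as a growth-estimation function.
import numpy as np

lai = np.array([0.5, 1.2, 2.8, 4.1, 5.3, 4.6, 3.2])        # hypothetical LAI
sigma_hh = np.array([-18, -15, -12, -10, -9, -9.5, -11])   # hypothetical dB

coeffs = np.polyfit(lai, sigma_hh, deg=2)   # 2nd order, one possible choice
predict = np.poly1d(coeffs)
print(predict(3.0))                         # estimated sigma0 (dB) at LAI = 3.0
```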

Baseline Survey Seismic Attribute Analysis for CO2 Monitoring on the Aquistore CCS Project, Canada (캐나다 아퀴스토어 CCS 프로젝트의 이산화탄소 모니터링을 위한 Baseline 탄성파 속성분석)

  • Cheong, Snons;Kim, Byoung-Yeop;Bae, Jaeyu
    • Economic and Environmental Geology
    • /
    • v.46 no.6
    • /
    • pp.485-494
    • /
    • 2013
  • $CO_2$ Monitoring, Mitigation and Verification (MMV) is an essential part of any Carbon Capture and Storage (CCS) project, assuring storage permanence both economically and environmentally. In large-scale CCS projects worldwide, time-lapse seismic surveying is a key technology for monitoring the behavior of injected $CO_2$. In this study, we developed a basic processing procedure for the 3-D seismic baseline data from the Aquistore project, Estevan, Canada. The major target formations of the Aquistore CCS project are the Winnipeg and Deadwood sandstone formations, located between 1,800 and 1,900 ms in traveltime. Trace-energy and similarity attribute analyses of the seismic data, followed by spectral decomposition, are carried out to characterize the $CO_2$ injection zone. High trace energies are concentrated in the northern part of the survey area at 1,800 ms and in the southern part at 1,850 ms in traveltime, and the sandstone-dominant regions are well recognized by their high reflectivity in the trace-energy analysis. Similarity attributes show two structural discontinuities trending NW-SE at the target depth. Spectral decomposition at 5, 20, and 40 Hz discriminated successive E-W depositional events at the center of the research area. Additional noise rejection and stratigraphic interpretation of the baseline data, followed by an appropriate imaging technique, will help investigate the differences between the baseline data and multi-vintage monitor data.
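
In its simplest form, a trace-energy attribute like the one used here is the mean squared amplitude in a time window around the target interval; the sketch below assumes synthetic traces and a 2 ms sample interval.

```python
# Windowed trace-energy attribute over the 1,800-1,900 ms target interval.
import numpy as np

def trace_energy(traces, dt_ms, t0_ms, t1_ms):
    """traces: (n_traces, n_samples); returns energy in [t0, t1] per trace."""
    i0, i1 = int(t0_ms / dt_ms), int(t1_ms / dt_ms)
    return np.mean(traces[:, i0:i1] ** 2, axis=1)

data = np.random.randn(100, 1000)   # stand-in traces, dt = 2 ms (2,000 ms long)
energy = trace_energy(data, dt_ms=2.0, t0_ms=1800, t1_ms=1900)
```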

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.171-182
    • /
    • 2018
  • We processed seismic field data using open-source software (Madagascar) to verify whether it is applicable to field data, which has a low signal-to-noise ratio and high uncertainties in velocities. Madagascar, whose workflows are scripted in Python, is generally considered well suited to developing processing technologies because of its multidimensional data analysis capabilities and reproducibility. However, it has not been widely used for field data processing because of its complicated interfaces and data structures. To verify its effectiveness on field data, we applied it to a typical seismic processing flow comprising data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The test data were acquired in the Gunsan Basin, Yellow Sea, using a 480-channel streamer and four air-gun arrays. The results at each processing step are compared with those from Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management, as well as reproducibility, and performs quick and exact calculations in automated procedures such as stacking velocity analysis. There were no remarkable differences in the results after applying the signal enhancement flows of the two packages. For the deeper part of the subsurface image, however, the commercial software shows better results, mainly because it offers various demultiple flows and interactive environments for delicate processing work. Considering that many researchers around the world are developing data processing algorithms for Madagascar, open-source software of this kind can be expected to see wide use even in commercial-level processing, given its expandability, cost effectiveness, and reproducibility.
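
For readers unfamiliar with Madagascar's reproducible style, a minimal SConstruct might look like the following. The SEG-Y file name and filter band are placeholders, and the paper's actual flow contains many more steps (geometry build-up, deconvolution, velocity analysis, NMO, stack, migration).

```python
# Minimal Madagascar SConstruct sketch (rsf.proj): load SEG-Y data, apply a
# bandpass filter, and plot the result. File name and parameters are assumed.
from rsf.proj import *

# Read local SEG-Y field data; the second target receives the trace headers.
Flow('shots tshots', 'gunsan_line01.segy',
     'segyread tfile=${TARGETS[1]}')

# One representative signal-enhancement step: bandpass filtering.
Flow('filtered', 'shots', 'bandpass flo=5 fhi=80')

Result('filtered', 'grey title="Bandpass-filtered shot gathers"')

End()
```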

Computer Aided Diagnosis System for Evaluation of Mechanical Artificial Valve (기계식 인공판막 상태 평가를 위한 컴퓨터 보조진단 시스템)

  • 이혁수
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.5
    • /
    • pp.421-430
    • /
    • 2004
  • Clinically, it is almost impossible for a physician to distinguish subtle changes in the frequency spectrum using a stethoscope alone, especially in the early stage of thrombus formation. Considering that the reliability of a mechanical valve is paramount, since failure can end in the patient's death, early detection of valve thrombus by a noninvasive technique is important. This study was therefore designed to provide a tool for early noninvasive detection of valve thrombus by observing shifts in the frequency spectrum of acoustic signals with a computer-aided diagnosis system. A thrombus model was constructed on commercial mechanical valves using polyurethane or silicone: polyurethane coating on the valve surface, and silicone coating on the sewing ring. To simulate pannus formation, a fibrous tissue overgrowth obstructing the valve orifice, the silicone coating on the sewing ring was varied to obstruct 20%, 40%, or 60% of the orifice. In the experimental system, acoustic signals from the valve were measured using a microphone and an amplifier; the microphone was attached to a coupler to remove environmental noise. The acoustic signals were sampled by an A/D converter, and the frequency spectrum was obtained by a spectral analysis algorithm. To quantitatively distinguish the frequency peaks of the normal valve from those of the thrombosed valves, a neural network analysis was employed, and a return map was applied to continuously monitor the valve motion cycle. In-vivo data were also obtained from animals with mechanical valves in circulatory devices, as well as from patients who had undergone mechanical valve replacement one year or longer earlier. Each spectrum showed a primary and a secondary peak, and the secondary peak changed according to the thrombus model. In both the mock circulation and the animal study, spectral analysis and a 3-layer neural network could differentiate normal from thrombosed valves. In the human study, one of ten patients showed a shift in the frequency spectrum; however, the presence of valve thrombus had yet to be confirmed. In conclusion, acoustic signal measurement is suggested as a noninvasive diagnostic tool for the early detection of mechanical valve thrombosis.
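
The spectral-analysis step can be sketched as follows; the synthetic two-tone signal and sampling rate stand in for the microphone recordings, and the peak frequencies are illustrative only.

```python
# FFT-based spectrum of a synthetic valve sound; locate the primary and
# secondary peaks that the study tracks for thrombus-related shifts.
import numpy as np
from scipy.signal import find_peaks

fs = 8000                                    # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
sound = np.sin(2 * np.pi * 150 * t) + 0.4 * np.sin(2 * np.pi * 620 * t)

spectrum = np.abs(np.fft.rfft(sound * np.hanning(len(sound))))
freqs = np.fft.rfftfreq(len(sound), 1 / fs)

peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1, distance=50)
print(freqs[peaks])                          # primary and secondary peaks (Hz)
```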

The Effect of PET/CT Images on SUV with the Correction of CT Image by Using Contrast Media (PET/CT 영상에서 조영제를 이용한 CT 영상의 보정(Correction)에 따른 표준화섭취계수(SUV)의 영향)

  • Ahn, Sha-Ron;Park, Hoon-Hee;Park, Min-Soo;Lee, Seung-Jae;Oh, Shin-Hyun;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.77-81
    • /
    • 2009
  • Purpose: The PET component of PET/CT (Positron Emission Tomography/Computed Tomography) quantitatively shows biological and chemical information about the body, but is limited in presenting clear anatomical structure. Combining PET with CT not only offers higher resolution but also effectively shortens the scanning time and reduces noise by using the CT data for attenuation correction. Because contrast media make it easier to delineate the exact extent of a lesion and to distinguish normal organs on CT, their use is increasing. However, contrast media affect the semi-quantitative measures of PET/CT images. In this study, therefore, we aim to establish the reliability of the SUV (Standardized Uptake Value) with CT data correction, so as to support more accurate diagnosis. Materials and Methods: A total of 30 subjects (age range 27 to 72, average age 49.6) were studied on a DSTe scanner (General Electric Healthcare, Milwaukee, MI, USA). $^{18}F$-FDG (370~555 MBq, depending on body weight) was injected, and a whole-body scan was acquired after about 60 minutes of rest. The CT scan was set to 140 kV and 210 mA, and the contrast media dose was 2 cc per kg of body weight. From the raw data, we obtained images showing the effect of the contrast media through attenuation correction with both the corrected and the uncorrected CT data. ROIs (Regions of Interest) were then drawn in each area to measure the SUV and analyze the differences. Results: According to the analysis, contrast media correction decreases the SUV in the liver and heart, which have a richer blood supply than other organs, whereas there is no difference in the lungs. Conclusions: While contrast-enhanced CT images in PET/CT increase the contrast of the target region and thereby improve diagnostic efficiency, they also inflate the SUV, a semi-quantitative measure. In this study, we measured the variation in SUV after correcting for the influence of the contrast media and compared the differences. By revising the SUV that is inflated when attenuation correction uses contrast-enhanced CT, high-resolution anatomical images can still be expected, and this more trustworthy semi-quantitative measure should enhance the diagnostic value.
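
The SUV discussed above is conventionally computed as the tissue activity concentration divided by the injected dose per unit body weight; the sketch below uses that common definition with made-up numbers, omitting decay correction.

```python
# Body-weight-normalized SUV (common definition; decay correction omitted).

def suv(roi_kbq_per_ml, injected_mbq, weight_kg):
    dose_kbq = injected_mbq * 1000.0
    weight_g = weight_kg * 1000.0        # assumes tissue density of 1 g/mL
    return roi_kbq_per_ml / (dose_kbq / weight_g)

print(suv(roi_kbq_per_ml=12.0, injected_mbq=450.0, weight_kg=70.0))  # ~1.87
```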

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.83-89
    • /
    • 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. (US) improves image resolution by eliminating the effect of the collimator's line spread function and suppressing noise; it controls the resolution and noise level automatically and yields excellent image quality. The aim of this study is to evaluate the usefulness of WBR for whole-body bone scans in clinical application. Materials and Methods: Spatial resolution measurements with a standard line source and with reconstructed single photon emission computed tomography (SPECT) were performed on an INFINA (GE, Milwaukee, WI) gamma camera equipped with low-energy high-resolution (LEHR) collimators. Line-source measurements were acquired with total counts of 200 kcps and 300 kcps, and the SPECT phantom studies analyzed spatial resolution while varying the matrix size. A clinical evaluation was also performed with forty-three patients referred for bone scans. In the first group, the scan speed was varied between 20 and 30 cm/min with an administered dose of 740 MBq (20 mCi) of $^{99m}Tc$-HDP; in the second group, the dose of $^{99m}Tc$-HDP was varied between 740 and 1,110 MBq (20 and 30 mCi) at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol and the WBR protocol. Patient information was removed and a blind reading was done for each reconstruction method; for each reading, the reader completed a questionnaire scoring the images on a scale of 1-5 points. Results: Planar WBR data improved resolution by more than 10%; the Full Width at Half Maximum (FWHM) improved by about 16% (Standard: 8.45, WBR: 7.09). SPECT WBR data improved resolution by about 50% in FWHM (Standard: 3.52, WBR: 1.65). In the clinical evaluation, there was no statistically significant difference between the two methods, including the bone-to-soft-tissue ratio and the image resolution (first group p=0.07, second group p=0.458). Conclusion: The WBR method shortens the acquisition time of bone scans while simultaneously providing improved image quality, and it allows the radiopharmaceutical dosage, and hence the radiation dose, to be reduced. Therefore, WBR can be applied to a wide range of clinical applications, providing clinical value as well as image quality.
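
The FWHM figures reported above can be computed from a line-source profile as sketched below; the Gaussian profile is a stand-in for the measured line spread function.

```python
# FWHM by linear interpolation at the half-maximum crossings of a profile.
import numpy as np

def fwhm(x, profile):
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i, j = above[0], above[-1]
    left = np.interp(half, [profile[i - 1], profile[i]], [x[i - 1], x[i]])
    right = np.interp(half, [profile[j + 1], profile[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-20, 20, 401)           # position (mm), 0.1 mm sampling
lsf = np.exp(-x**2 / (2 * 3.0**2))      # Gaussian line-spread, sigma = 3 mm
print(fwhm(x, lsf))                     # ~7.06 mm = 2.355 * sigma
```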
