• Title/Summary/Keyword: wavelet.


Identification of Subsurface Discontinuities via Analyses of Borehole Synthetic Seismograms (시추공 합성탄성파 기록을 통한 지하 불연속 경계면의 파악)

  • Kim, Ji-Soo;Lee, Jae-Young;Seo, Yong-Seok;Ju, Hyeon-Tae
    • The Journal of Engineering Geology / v.23 no.4 / pp.457-465 / 2013
  • We integrated and correlated datasets from surface and subsurface geophysics, drilling cores, and engineering geology to identify geological interfaces and characterize the joints and fracture zones within the rock mass. The regional geometry of a geologically weak zone was investigated via a fence projection of electrical resistivity data and a borehole image-processing system. Subsurface discontinuities and intensive fracture zones within the rock mass are delineated by cross-hole seismic tomography and analyses of dip directions in rose diagrams. The dynamic elastic modulus is studied in terms of the P-wave velocity and Poisson's ratio. Subsurface discontinuities, which are conventionally identified using the N value and from core samples, can now be identified from anomalous reflection coefficients (i.e., acoustic impedance contrasts) calculated from a pair of well logs, comprising seismic velocity from suspension-PS logging and density from borehole logging. Intensive fracture zones identified in the synthetic seismogram are matched to core-loss zones in the drilling-core data and to high concentrations of joints in the borehole imaging system. The upper boundaries of fracture zones correlate with strongly negative amplitudes in the synthetic trace, which is constructed by convolving the optimal Ricker wavelet with the reflection-coefficient series. The standard deviations of the dynamic elastic moduli are higher for fracture zones than for a compact rock mass, owing to the wide range of velocities resulting from the large number of joints and fractures within the zone.
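
As a brief illustration of the synthetic-trace construction described in this abstract, the sketch below convolves a Ricker wavelet with a reflection-coefficient series derived from velocity and density logs. It is a minimal approximation rather than the authors' code; the toy log values, the 30 Hz peak frequency, and the 1 ms sampling interval are assumptions.

```python
# Minimal sketch (assumed parameters): synthetic seismogram from velocity/density logs
# by convolving a Ricker wavelet with the normal-incidence reflection coefficients.
import numpy as np

def ricker(f_peak, dt, length=0.064):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(vp, rho, f_peak=30.0, dt=0.001):
    """Reflection coefficients from acoustic impedance, convolved with the wavelet."""
    z = vp * rho                                    # acoustic impedance
    rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])        # reflection-coefficient series
    return np.convolve(rc, ricker(f_peak, dt), mode="same")

# toy logs (assumed): a low-velocity fracture zone between stiffer units produces a
# strong negative reflection at its upper boundary, as noted in the abstract
vp  = np.array([3500.0] * 50 + [2200.0] * 10 + [3600.0] * 50)   # m/s
rho = np.array([2650.0] * 50 + [2350.0] * 10 + [2670.0] * 50)   # kg/m^3
trace = synthetic_trace(vp, rho)
print("strongest negative amplitude near sample", int(np.argmin(trace)))
```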

Seismic AVO Analysis, AVO Modeling, AVO Inversion for understanding the gas-hydrate structure (가스 하이드레이트 부존층의 구조파악을 위한 탄성파 AVO 분석 AVO모델링, AVO역산)

  • Kim Gun-Duk;Chung Bu-Heung
    • 한국신재생에너지학회:학술대회논문집 / 2005.06a / pp.643-646 / 2005
  • In gas hydrate exploration using seismic reflection data, detection of the BSR (Bottom Simulating Reflector) on the seismic section is the most important step, because the BSR is interpreted as forming at the base of the gas hydrate zone. A BSR usually shows several dominant qualitative characteristics on a seismic section: a wavelet phase reversal relative to the sea-bottom signal, parallelism with the sea bottom, strong amplitude, masking above the BSR, and cross-cutting of other geological layers. Even though a candidate BSR can be picked on the seismic section with this guidance, that is not enough to confirm it as a true BSR; quantitative methods such as interval velocity analysis and AVO (Amplitude Variation with Offset) analysis are needed for reliable verification. AVO work can usually be divided into three main parts: AVO analysis, AVO modeling, and AVO inversion. AVO analysis is a unique method for directly detecting free-gas zones on a seismic section, so it is useful for discriminating a true BSR, which arises from the Poisson's ratio contrast between the high-velocity, partially hydrate-bearing sediment above and the low-velocity, gas-saturated sediment below. During AVO interpretation, because the AVO response changes with water saturation, it can be difficult to distinguish the response of a gas layer from that of a dry layer. In that case, AVO modeling is necessary to generate a synthetic seismogram for comparison with the real data; conclusions can then be drawn from the correspondence, or lack of correspondence, between the two seismograms. AVO inversion derives a geological model by iteration, updating the model until the resulting synthetic seismogram matches the real seismogram within some tolerance level. AVO inversion is a topic of current research, and for now there is no general consensus on how the process should be done, or even whether it is valid for standard seismic data. Unfortunately, no well-log data have been acquired from the gas hydrate exploration area in Korea; instead, well-log and seismic data acquired from a gas-sand area located near the gas hydrate exploration area were used for the AVO analysis. As a result of the AVO modeling, a type III AVO anomaly was confirmed on the gas sand layer. The constants of Castagna's equation for estimating the S-wave velocity were evaluated as A = 0.86190 and B = -3845.14431, with a water saturation of 50%. The Zoeppritz equations were used to calculate the reflection coefficients of the synthetic seismogram. For the AVO inversion, the dataset provided by Hampson-Russell Co. was used.

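The paper computes reflection coefficients with the exact Zoeppritz equations; the sketch below instead uses the simpler three-term Shuey/Aki-Richards approximation to show how an AVO curve is modeled for an interface such as a BSR. All layer properties are illustrative assumptions, not values from the study.

```python
# Minimal sketch: approximate P-P reflection coefficient versus incidence angle
# (Shuey's three-term form of the Aki-Richards approximation, not full Zoeppritz).
import numpy as np

def shuey_avo(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """AVO response of the interface between layer 1 (upper) and layer 2 (lower)."""
    theta = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)                                      # intercept
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)   # gradient
    f = 0.5 * dvp / vp
    return r0 + g * np.sin(theta) ** 2 + f * (np.tan(theta) ** 2 - np.sin(theta) ** 2)

# assumed properties: hydrate-bearing (faster) sediment over free-gas (slower) sediment;
# the amplitude grows more negative with offset, a class III-style response
angles = np.arange(0, 41, 5)
rc = shuey_avo(2100, 800, 2.00, 1500, 780, 1.90, angles)
print(np.round(rc, 3))
```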

Classification of Multi-temporal SAR Data by Using Data Transform Based Features and Multiple Classifiers (자료변환 기반 특징과 다중 분류자를 이용한 다중시기 SAR자료의 분류)

  • Yoo, Hee Young;Park, No-Wook;Hong, Sukyoung;Lee, Kyungdo;Kim, Yeseul
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.205-214 / 2015
  • In this study, a novel land-cover classification framework for multi-temporal SAR data is presented that combines multiple features extracted through data transforms with multiple classifiers. First, data transforms using principal component analysis (PCA) and the 3D wavelet transform are applied to the multi-temporal SAR dataset to extract new features that differ from the original dataset. Then, three different classifiers, namely the maximum likelihood classifier (MLC), a neural network (NN), and a support vector machine (SVM), are applied to three different datasets consisting of the data-transform-based features and the original backscattering coefficients, generating diverse preliminary classification results. These results are combined via a majority voting rule to generate the final classification result. In an experiment with a multi-temporal ENVISAT ASAR dataset, each preliminary classification result showed very different classification accuracy depending on the feature and classifier used. The final classification result, which combines the nine preliminary classification results, showed the best classification accuracy because each preliminary result provided complementary information on land cover. The improvement in classification accuracy was mainly attributable to the diversity gained from combining not only different data-transform-based features but also different classifiers. Therefore, the land-cover classification framework presented in this study can be effectively applied to the classification of multi-temporal SAR data and can also be extended to multi-sensor remote sensing data fusion.
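
The final fusion step described above is a per-pixel majority vote over the preliminary classification maps. The sketch below shows one plausible form of that vote; the random label maps merely stand in for the nine MLC/NN/SVM results and are assumptions.

```python
# Minimal sketch: majority-vote combination of several preliminary label maps.
import numpy as np

def majority_vote(label_maps):
    """Per-pixel majority vote over integer label maps (ties resolved to the lowest label)."""
    stack = np.stack(label_maps)                           # (n_results, H, W)
    n_classes = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                            # (H, W) fused classification

rng = np.random.default_rng(0)
preliminary = [rng.integers(0, 5, size=(100, 100)) for _ in range(9)]   # 9 stand-in results
fused = majority_vote(preliminary)
print(fused.shape, np.bincount(fused.ravel()))
```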

Block-Based Transform-Domain Measurement Coding for Compressive Sensing of Images (영상 압축센싱을 위한 블록기반 변환영역 측정 부호화)

  • Nguyen, Quang Hong;Nguyen, Viet Anh;Trinh, Chien Van;Dinh, Khanh Quoc;Park, Younghyeon;Jeon, Byeungwoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.12 / pp.746-755 / 2014
  • Compressive sensing (CS) has drawn much interest as a new sampling technique that enables signals to be sampled at a rate much lower than the Nyquist rate. Noting that block-based compressive sensing still preserves spatial correlation in the measurement domain, in this paper we propose a novel encoding technique for the measurement data obtained from block-based CS of natural images. We apply the discrete wavelet transform (DWT) to decorrelate the CS measurements and then assign a proper quantization scheme to the DWT coefficients. Thus, the redundancy of the CS measurements and the bitrate of the system are reduced remarkably. Experimental results show improvements in rate-distortion performance by the proposed method over two existing methods, scalar quantization (SQ) and differential pulse-code modulation (DPCM). In the best case, the proposed method gains up to 4 dB, 0.9 dB, and 2.5 dB compared with Block-based CS-Smoothed Projected Landweber plus SQ, Block-based CS-Smoothed Projected Landweber plus DPCM, and Multihypothesis Block-based CS-Smoothed Projected Landweber plus DPCM, respectively.
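
To illustrate the idea of decorrelating block-based CS measurements with a DWT before quantization, here is a rough sketch. The block size, subsampling rate, quantization step, and the arrangement of measurements into planes are all assumptions and do not reproduce the paper's codec.

```python
# Minimal sketch (assumed layout): block-based CS measurements, then a DWT and uniform
# quantization of the per-index measurement planes to exploit inter-block correlation.
import numpy as np
import pywt

rng = np.random.default_rng(0)
B, subrate = 16, 0.25                                   # block size and subrate (assumed)
M = int(subrate * B * B)
phi = rng.standard_normal((M, B * B)) / np.sqrt(M)      # block measurement matrix

image = rng.random((128, 128))                          # stand-in for a natural image
blocks = [image[i:i + B, j:j + B].reshape(-1)
          for i in range(0, 128, B) for j in range(0, 128, B)]
y = np.stack([phi @ b for b in blocks])                 # (n_blocks, M) measurements

# neighbouring blocks are correlated, so each measurement index forms a plane over the
# block grid; decorrelate those planes with a DWT and quantize the coefficients
planes = y.T.reshape(M, 128 // B, 128 // B)
step = 0.05                                             # assumed quantization step
coded = []
for plane in planes:
    cA, (cH, cV, cD) = pywt.dwt2(plane, "haar")
    coded.append([np.round(c / step).astype(int) for c in (cA, cH, cV, cD)])
print(len(coded), coded[0][0].shape)
```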

Object-Based Video Segmentation Using Spatio-temporal Entropic Thresholding and Camera Panning Compensation (시공간 엔트로피 임계법과 카메라 패닝 보상을 이용한 객체 기반 동영상 분할)

  • 백경환;곽노윤
    • Journal of the Korea Academia-Industrial cooperation Society / v.4 no.3 / pp.126-133 / 2003
  • This paper presents a morphological segmentation method for extracting moving objects from video sequences using global motion compensation and two-dimensional spatio-temporal entropic thresholding. First, global motion compensation is performed with a camera panning vector estimated in a hierarchical pyramid structure constructed by the wavelet transform. Second, regions likely to contain the moving object between two consecutive frames are extracted block by block from the globally motion-compensated image using two-dimensional spatio-temporal entropic thresholding. A look-up table (LUT) is then built that classifies each block as changed, uncertain, or stationary according to the thresholding results. Next, by adaptively selecting the initial search layer and the search range with reference to the LUT, the proposed hierarchical block-matching algorithm (HBMA) effectively carries out fast motion estimation and extracts the object-included region within the hierarchical pyramid structure. Finally, a thresholded gradient image is defined in the object-included region, and the morphological segmentation method is applied to that region pixel by pixel to extract the moving object in the video sequence. Computer simulations show that the proposed method provides relatively good segmentation results for moving objects and, in particular, reasonable results in low-contrast edge areas.

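The camera-panning estimation above relies on a hierarchical pyramid built with the wavelet transform. The sketch below shows one plausible coarse-to-fine global-motion search over Haar approximation images; it is not the authors' implementation, and the pyramid depth, search radius, and test frames are assumptions.

```python
# Minimal sketch: wavelet (Haar LL) pyramid plus a coarse-to-fine search for a single
# global panning vector, in the spirit of the hierarchical structure described above.
import numpy as np
import pywt

def wavelet_pyramid(frame, levels=3):
    """Approximation (LL) images from fine (index 0) to coarse (index `levels`)."""
    pyr = [frame]
    for _ in range(levels):
        cA, _ = pywt.dwt2(pyr[-1], "haar")
        pyr.append(cA)
    return pyr

def estimate_panning(prev, curr, levels=3, radius=2):
    """Coarse-to-fine search for one global shift (dy, dx) aligning curr with prev."""
    p_prev, p_curr = wavelet_pyramid(prev, levels), wavelet_pyramid(curr, levels)
    dy = dx = 0
    for level in range(levels, -1, -1):                  # start at the coarsest layer
        a, b = p_prev[level], p_curr[level]
        best, best_err = (dy, dx), np.inf
        for ddy in range(-radius, radius + 1):
            for ddx in range(-radius, radius + 1):
                cand = (dy + ddy, dx + ddx)
                err = np.mean(np.abs(a - np.roll(b, cand, axis=(0, 1))))
                if err < best_err:
                    best, best_err = cand, err
        dy, dx = best
        if level > 0:
            dy, dx = 2 * dy, 2 * dx                      # carry the estimate to the finer layer
    return dy, dx

x = np.linspace(0, 6, 128)
frame1 = np.add.outer(np.sin(x), np.cos(1.5 * x))        # smooth synthetic frame
frame2 = np.roll(frame1, (0, 6), axis=(0, 1))            # simulated 6-pixel horizontal pan
print(estimate_panning(frame1, frame2))                  # compensating shift, about (0, -6)
```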

Pipeline Structural Damage Detection Using Self-Sensing Technology and PNN-Based Pattern Recognition (자율 감지 및 확률론적 신경망 기반 패턴 인식을 이용한 배관 구조물 손상 진단 기법)

  • Lee, Chang-Gil;Park, Woong-Ki;Park, Seung-Hee
    • Journal of the Korean Society for Nondestructive Testing / v.31 no.4 / pp.351-359 / 2011
  • In a structure, damage can occur at several scales, from micro-cracking to corrosion or loose bolts, which makes damage identification difficult with a single mode of sensing. Hence, a multi-mode actuated sensing system is proposed based on a self-sensing circuit using a piezoelectric sensor. In this self-sensing-based multi-mode actuated sensing, one mode provides a wide-frequency-band structural response from the self-sensed impedance measurement, and the other mode provides a specific frequency-induced structural wavelet response from the self-sensed guided wave measurement. In this study, an experimental study on a pipeline system is carried out to verify the effectiveness and robustness of the proposed structural health monitoring approach. Different types of structural damage are artificially inflicted on the pipeline system. To classify the multiple types of structural damage, supervised-learning-based statistical pattern recognition is implemented by composing a two-dimensional space using the damage indices extracted from the impedance and guided wave features. For more systematic damage classification, several control parameters that determine the optimal decision boundary for the supervised-learning-based pattern recognition are optimized. Finally, further research issues are discussed for real-world implementation of the proposed approach.
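
The supervised pattern recognition above can be sketched as a probabilistic neural network (PNN), i.e. a Parzen-window classifier, over a two-dimensional damage-index space. The training clusters, class labels, and smoothing parameter below are assumptions for illustration, not the experimental values.

```python
# Minimal sketch: PNN (Parzen-window) classification of 2-D damage indices.
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.1):
    """Assign x to the class with the largest summed Gaussian kernel response."""
    scores = {}
    for label in np.unique(train_y):
        d2 = np.sum((train_x[train_y == label] - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# hypothetical damage indices: (impedance-based index, guided-wave-based index)
rng = np.random.default_rng(0)
centers = {0: (0.1, 0.1), 1: (0.8, 0.2), 2: (0.3, 0.9)}        # e.g. intact / loose bolt / crack (assumed)
train_x = np.vstack([rng.normal(c, 0.05, size=(30, 2)) for c in centers.values()])
train_y = np.repeat(list(centers.keys()), 30)
print(pnn_classify(np.array([0.75, 0.25]), train_x, train_y))  # lands in class 1
```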

Characteristics of the flow in the Usan Trough in the East Sea (동해 우산해곡 해수 유동 특성)

  • Baek, Gyu Nam;Seo, Seongbong;Lee, Jae Hak;Hong, Chang Su;Kim, Yun-Bae
    • The Sea: Journal of the Korean Society of Oceanography / v.19 no.2 / pp.99-108 / 2014
  • One-year-long time-series current data were obtained beginning in September 2006 at two stations (K1 and K2) located in the Usan Trough, in the area north of Ulleungdo in the East Sea. The observed data reveal enhanced seafloor flows at both stations with a variability of about 20 days, which is possibly governed by topographic Rossby waves. After February 2007, strong flow in the upper layer at St. K1 appears throughout the remaining mooring period; comparison with satellite sea surface temperature data shows that this is due to the passage of a warm eddy. During this period, no significant correlation is found between the current in the upper layer and those in the two deep layers, indicating that the eddy does not affect flows in the deep ocean. It is also observed that the flow direction rotates clockwise with depth at both stations, except for the upper layer of K1. This implies that the deep flow is not exactly parallel to the isobaths and has a downwelling velocity component. The possibility of flow from the Japan Basin to the Ulleung Basin across the Usan Trough is not supported by the data.

Digital Video Watermarking Based on SPIHT Coding Using Motion Vector Analysis (움직임 벡터 정보를 이용한 SPIHT 부호화 기반의 디지털 비디오 워터마킹)

  • Kwon, Seong-Geun;Hwang, Eui-Chang;Lee, Mi-Hee;Jeong, Tai-Il;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1427-1438 / 2007
  • Video watermarking technologies can be classified into four types. The first type embeds the watermark into the raw video signal and then codes the watermarked signal; most video watermarking technologies fall into this category. The second type applies watermarking within the coding process, for example at the block DCT and quantization stages. The third type embeds the watermark directly into the compressed bitstream itself; this is generally referred to as labelling rather than watermarking. Finally, the fourth type embeds the watermark into the MPEG motion vectors; this type is difficult to run in real time because of its high complexity, and it suffers from blocking effects because of the DCT-based coder. In this paper, we propose a digital video watermarking scheme that embeds the watermark into the SPIHT-coded I-frames using motion vector analysis. This approach removes the blocking effects that occur with DCT-based coders and yields video data with a progressive transmission property. The proposed method selects the watermark-embedding region in an I-frame using the motion vectors estimated from the previous P or B frames. It then performs a DWT and embeds an HVS-based watermark into the wavelet coefficients of the DWT subband corresponding to the motion vector direction. Finally, the watermarked video bitstream is obtained with the SPIHT coder. The experimental results verify that the proposed method is invisible in terms of objective and subjective image quality and is robust against various SPIHT compression rates and MPEG re-coding.

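The embedding idea above (a DWT of the I-frame and an HVS-weighted watermark in a selected detail subband) can be sketched as follows. The additive spread-spectrum rule, the choice of subband, and the strength alpha are assumptions rather than the paper's exact parameters.

```python
# Minimal sketch: spread-spectrum watermark embedding in one DWT detail subband,
# with strength proportional to local coefficient magnitude (HVS-motivated).
import numpy as np
import pywt

def embed_watermark(frame, watermark_bits, alpha=0.05, seed=7):
    """Additively embed +/-1 chips, scaled by coefficient magnitude, in one subband."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    rng = np.random.default_rng(seed)
    chips = rng.choice([-1.0, 1.0], size=cH.shape)           # pseudo-random carrier
    bits = np.resize(np.where(watermark_bits, 1.0, -1.0), cH.shape)
    cH_marked = cH + alpha * np.abs(cH) * chips * bits       # larger coefficients hide more
    return pywt.idwt2((cA, (cH_marked, cV, cD)), "haar")

frame = np.random.default_rng(0).integers(0, 256, size=(64, 64))
wm = np.random.default_rng(1).integers(0, 2, size=128).astype(bool)
marked = embed_watermark(frame, wm)
print("max pixel change:", float(np.max(np.abs(marked - frame))))
```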

An Improved Algorithm for Building Multi-dimensional Histograms with Overlapped Buckets (중첩된 버킷을 사용하는 다차원 히스토그램에 대한 개선된 알고리즘)

  • 문진영;심규석
    • Journal of KIISE: Databases / v.30 no.3 / pp.336-349 / 2003
  • Histograms have been receiving a lot of attention recently. They are commonly used in commercial database systems to capture attribute-value distributions for query optimization. With the advent of research on approximate query answering and stream data, interest in histograms is spreading further. The simplest approach assumes that the attributes in a relational table are independent (the attribute value independence, or AVI, assumption). However, this assumption is generally not valid for real-life datasets. To alleviate the problem of approximating multi-dimensional data with multiple one-dimensional histograms, several techniques such as wavelets, random sampling, and multi-dimensional histograms have been proposed. Among them, GENHIST is a multi-dimensional histogram designed to approximate the distribution of data with real-valued attributes. It uses overlapping buckets, which allow a more efficient approximation of the data distribution. In this paper, we propose a scheme, OPT, that determines the optimal frequencies of the overlapping buckets, minimizing the SSE (sum of squared errors). A histogram with overlapping buckets is first generated by GENHIST, and OPT then improves the histogram by calculating the optimal frequency for each bucket. Our experimental results confirm that the proposed technique significantly improves the accuracy of histograms generated by GENHIST.
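
The core of OPT is choosing overlapping-bucket frequencies that minimize the SSE; because the estimate at any point is the sum of the densities of all buckets covering it, this amounts to a least-squares fit. The toy one-dimensional example below illustrates only that idea; the bucket layout and target density are assumptions, and the actual scheme operates on multi-dimensional GENHIST buckets.

```python
# Toy 1-D illustration: SSE-optimal frequencies for overlapping buckets via least squares.
import numpy as np

points = np.linspace(0, 1, 201)
true_density = np.exp(-(points - 0.3) ** 2 / 0.01) + 0.5 * np.exp(-(points - 0.7) ** 2 / 0.02)

buckets = [(0.0, 0.5), (0.2, 0.4), (0.4, 1.0), (0.6, 0.8)]     # overlapping intervals (assumed)
# design matrix: A[i, j] = 1/width_j if point i lies inside bucket j, else 0
A = np.array([[1.0 / (hi - lo) if lo <= p <= hi else 0.0 for (lo, hi) in buckets]
              for p in points])

freq, *_ = np.linalg.lstsq(A, true_density, rcond=None)        # frequencies minimizing SSE
estimate = A @ freq
print("SSE:", float(np.sum((estimate - true_density) ** 2)))
```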

An Adaptive Information Hiding Technique of JPEG2000-based Image using Chaotic System (카오스 시스템을 이용한 JPEG2000-기반 영상의 적응적 정보 은닉 기술)

  • 김수민;서영호;김동욱
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.9-21 / 2004
  • In this paper, we propose an image hiding method that reduces the amount of computation by encrypting only part of the data, using the discrete wavelet transform and linear scalar quantization adopted as the main frequency-transform techniques in the JPEG2000 standard. We also use a chaotic system, which requires less computation than other encryption algorithms, and thereby reduce the computational load dramatically. The method performs encryption between the quantization and entropy-coding stages to preserve the compression ratio of the images, and it uses a subband selection method and a random change method based on the chaotic system. To encrypt the quantization indices, we use a novel image encryption algorithm that cyclically shifts them to the right or left and encrypts the two quantization assignment methods (top-down/reflection code), which changes the data less. We also adapt the proposed encryption method to JPEG2000 progressive transmission. The experiments were performed with the proposed methods implemented in software on about 500 images. The results show that the proposed methods are efficient image encryption methods that achieve a high encryption effect with only a small amount of encryption. They also reveal a trade-off between execution time and encryption effect, which means the proposed methods can be selected according to the application area. Moreover, because the proposed methods operate in the application layer, they are expected to be a good solution to the end-to-end security problem, which is emerging as one of the important problems in networks with both wired and wireless sections.
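
As a rough illustration of chaotic selective encryption applied between quantization and entropy coding, the sketch below scrambles the quantization indices of one wavelet subband with a logistic-map keystream. The XOR rule, the map parameters, and the quantization step are assumptions; the paper's own scheme relies on cyclic shifts and changes to the quantization assignment codes rather than this exact rule.

```python
# Minimal sketch: logistic-map keystream used to scramble quantization indices of a
# selected wavelet subband (selective encryption before entropy coding).
import numpy as np
import pywt

def logistic_keystream(n, x0=0.3141592, r=3.99):
    """Chaotic logistic map x -> r*x*(1-x), quantized to bytes."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def scramble_subband(indices, key_x0):
    """XOR the quantization indices of one subband with the chaotic keystream."""
    flat = indices.astype(np.int32).ravel()
    ks = logistic_keystream(flat.size, x0=key_x0).astype(np.int32)
    return (flat ^ ks).reshape(indices.shape)

image = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
step = 4.0                                               # assumed quantization step
q_cH = np.round(cH / step).astype(np.int32)              # indices of one selected subband
enc = scramble_subband(q_cH, key_x0=0.123456)
dec = scramble_subband(enc, key_x0=0.123456)             # XOR with the same keystream inverts it
print(np.array_equal(dec, q_cH))
```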