• Title/Summary/Keyword: Process Noise


A 10b 250MS/s $1.8mm^2$ 85mW 0.13um CMOS ADC Based on High-Accuracy Integrated Capacitors (높은 정확도를 가진 집적 커패시터 기반의 10비트 250MS/s $1.8mm^2$ 85mW 0.13um CMOS A/D 변환기)

  • Sa, Doo-Hwan;Choi, Hee-Cheol;Kim, Young-Lok;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.11 s.353
    • /
    • pp.58-68
    • /
    • 2006
  • This work proposes a 10b 250MS/s $1.8mm^2$ 85mW 0.13um CMOS A/D Converter (ADC) for high-performance integrated systems, such as next-generation DTV and WLAN, that simultaneously require low voltage, low power, and small area at high speed. The proposed 3-stage pipeline ADC minimizes chip area and power dissipation at the target resolution and sampling rate. The input SHA maintains 10b resolution with either gate-bootstrapped sampling switches or nominal CMOS sampling switches. The SHA and two MDACs, based on a conventional 2-stage amplifier, employ optimized trans-conductance ratios of the two amplifier stages to achieve the required DC gain, bandwidth, and phase margin. The proposed signal-insensitive 3-D fully symmetric capacitor layout reduces the device mismatch of the two MDACs. The low-noise on-chip current and voltage references can optionally be replaced by off-chip voltage references. The prototype ADC is implemented in a 0.13um 1P8M CMOS process. The measured DNL and INL are within 0.24LSB and 0.35LSB, while the ADC shows a maximum SNDR of 54dB and 48dB and a maximum SFDR of 67dB and 61dB at 200MS/s and 250MS/s, respectively. The ADC, with an active die area of $1.8mm^2$, consumes 85mW at 250MS/s from a 1.2V supply.

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance. It is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of the microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated, insulated gold connection lanes terminating in an 8$\times$8 array (spacing 200 $\mu$m, electrode diameter 30 $\mu$m) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool. Spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in the waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
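
  As a minimal sketch of the PCA-based sorting step described in this abstract (the synthetic waveform matrix, cluster count, and use of k-means are illustrative assumptions, not the authors' code), spike waveforms from one MEA channel can be projected onto PC1 and PC2 and then grouped in that two-dimensional plane:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# waveforms: one row per threshold-detected spike, n time points per row.
# A random matrix stands in here for real MEA-channel recordings.
rng = np.random.default_rng(0)
waveforms = rng.normal(size=(500, 40))

# Project every waveform onto its first two principal components (PC1, PC2).
pca = PCA(n_components=2)
scores = pca.fit_transform(waveforms)

# Separate clusters in the PC1-PC2 plane; k-means is one simple choice,
# since the abstract only states that clusters were separable in two dimensions.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(scores.shape, np.bincount(labels))
```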


Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou;Kasahara, Junzo;Murase, Kei;Mochizuki, Kimihiro;Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.1
    • /
    • pp.26-33
    • /
    • 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, it is extremely difficult to develop an appropriate velocity structure model directly from the observed data, and the structure model has to be improved step by step, because crustal structure analysis is an intrinsically non-linear problem. There are several subjective processes in wide-angle crustal structure modelling, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resultant models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates the plotting of record sections, analysis of wide-angle seismic data, and picking of phases. PASTEUP is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and help phase identification. MODELING is an interactive application for editing velocity models and ray tracing. Synthetic traveltimes computed by the MODELING application can be directly compared with the observed waveforms in the PASTEUP application. This reduces subjectivity in crustal structure modelling because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can also convert an editable layered structure model into two-way traveltimes, which can be compared with time sections of Multi Channel Seismic (MCS) reflection data. Direct comparison of the structure model from wide-angle data with the reflection data gives the model more credibility. In addition, both PASTEUP and MODELING are efficient tools for handling large datasets. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
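
  The traveltime comparison that MODELING automates can be illustrated with a much simpler hand-rolled example (the flat two-layer model, velocities, and depth below are illustrative assumptions, not the PASTEUP/MODELING code): for a flat two-layer crust, the direct-wave and head-wave branches follow textbook formulas and can be compared against picked or observed arrivals.

```python
import numpy as np

def direct_time(x, v1):
    """Direct wave travelling along the surface layer (time = offset / v1)."""
    return x / v1

def head_wave_time(x, v1, v2, h):
    """Refracted (head) wave from a flat interface at depth h, with v2 > v1."""
    return x / v2 + 2.0 * h * np.sqrt(1.0 / v1**2 - 1.0 / v2**2)

offsets = np.linspace(1.0, 60.0, 60)               # source-receiver offsets in km
t_direct = direct_time(offsets, v1=5.0)            # 5 km/s upper crust (assumed)
t_head = head_wave_time(offsets, 5.0, 6.5, 10.0)   # 6.5 km/s below a 10 km interface (assumed)

# The first arrival at each offset is whichever branch is earlier; these synthetic
# times are what would be overlaid on the observed record section.
t_first = np.minimum(t_direct, t_head)
print(t_first[:5])
```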

Enhancement of the Deformable Image Registration Accuracy Using Image Modification of MV CBCT (Megavoltage Cone-beam CT 영상의 변환을 이용한 변환 영상 정합의 정확도 향상)

  • Kim, Min-Joo;Chang, Ji-Na;Park, So-Hyun;Kim, Tae-Ho;Kang, Young-Nam;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.28-34
    • /
    • 2011
  • To perform Adaptive Radiation Therapy (ART), a high degree of deformable registration accuracy is essential. The purpose of this study is to determine whether changing the MV CBCT intensity, using a predefined modification level and a filtering process, can improve registration accuracy. To obtain the modification level, cheese phantom images were acquired with both kilovoltage CT (kV CT) and megavoltage cone-beam CT (MV CBCT). From the cheese phantom images, the modification level of MV CBCT was defined from the relationship between the Hounsfield Units (HUs) of the kV CT and MV CBCT images. A Gaussian smoothing filter was added to reduce the noise of the MV CBCT images. The intensity of the MV CBCT image was changed toward the intensity of the kV CT image so that the two images shared the same intensity range, as if they had been obtained from the same modality. Demons deformable registration, which is efficient and easy to apply, was used. A deformable lung phantom, intentionally built in the laboratory to imitate changes over the breathing cycle, was imaged with kV CT and MV CBCT, and these lung phantom images were then processed with the proposed method. As a result of deformable image registration, the correlation coefficient, used for quantitative evaluation of the result, increased by 6.07% in the cheese phantom and 18% in the deformable lung phantom. For additional evaluation of the deformable lung phantom registration, the center coordinates of markers inserted inside the phantom were measured to calculate the vector difference. The vector differences were 2.23 and 1.39 mm with and without modification of the intensity of the MV CBCT images, respectively. In summary, our method quantitatively improved the accuracy of deformable registration and could be a useful way to improve image registration accuracy. A further study is also suggested in this paper.
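
  A hedged sketch of the intensity-modification step (the linear HU mapping coefficients, array shapes, and stand-in volumes below are assumptions for illustration; the paper's actual mapping comes from the cheese phantom calibration, and the subsequent demons registration is not reproduced here): map MV CBCT values toward the kV CT intensity range, smooth with a Gaussian filter to suppress noise, and score the agreement with a correlation coefficient.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modify_mv_cbct(mv_cbct, slope, intercept, sigma=1.0):
    """Map MV CBCT intensities toward the kV CT HU range, then denoise."""
    mapped = slope * mv_cbct + intercept      # calibration assumed linear here
    return gaussian_filter(mapped, sigma=sigma)

def correlation_coefficient(a, b):
    """Pearson correlation used as the similarity score between two volumes."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(1)
# Stand-in volumes: a spatially smooth "kV CT" and a scaled, offset, noisy "MV CBCT".
kv_ct = gaussian_filter(rng.normal(size=(64, 64, 32)), sigma=3.0) * 400.0
mv_cbct = 0.8 * kv_ct + 20.0 + rng.normal(0.0, 30.0, kv_ct.shape)

mv_modified = modify_mv_cbct(mv_cbct, slope=1.25, intercept=-25.0)
# Smoothing suppresses the uncorrelated noise, so the second score should be higher.
print(correlation_coefficient(kv_ct, mv_cbct), correlation_coefficient(kv_ct, mv_modified))
```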

Baseline Survey Seismic Attribute Analysis for CO2 Monitoring on the Aquistore CCS Project, Canada (캐나다 아퀴스토어 CCS 프로젝트의 이산화탄소 모니터링을 위한 Baseline 탄성파 속성분석)

  • Cheong, Snons;Kim, Byoung-Yeop;Bae, Jaeyu
    • Economic and Environmental Geology
    • /
    • v.46 no.6
    • /
    • pp.485-494
    • /
    • 2013
  • $CO_2$ Monitoring, Mitigation and Verification (MMV) is an essential part of a Carbon Capture and Storage (CCS) project, needed to assure storage permanence economically and environmentally. In large-scale CCS projects worldwide, the seismic time-lapse survey is a key technology for monitoring the behavior of injected $CO_2$. In this study, we developed a basic processing procedure for 3-D seismic baseline data from the Aquistore project, Estevan, Canada. The major target formations of the Aquistore CCS project are the Winnipeg and Deadwood sandstone formations, located between 1,800 and 1,900 ms in traveltime. Analyses of trace energy and similarity attributes of the seismic data, followed by spectral decomposition, are carried out to characterize the $CO_2$ injection zone. High trace energies are concentrated in the northern part of the survey area at 1,800 ms and in the southern part at 1,850 ms in traveltime. Sandstone-dominant regions are well recognized by their high reflectivity in the trace energy analysis. Similarity attributes show two structural discontinuities trending in the NW-SE direction at the target depth. Spectral decomposition at 5, 20, and 40 Hz discriminated successive E-W depositional events in the center of the research area. Additional noise rejection and stratigraphic interpretation of the baseline data, followed by an appropriate imaging technique, will help investigate the differences between the baseline data and multi-vintage monitor data.
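
  A hedged sketch of two of the attributes named above (the synthetic traces, sample interval, and window lengths are assumptions; this is not the processing flow actually applied to the Aquistore data): trace energy summed over the target time window, and spectral decomposition via a short-time Fourier transform with single-frequency slices extracted at 5, 20, and 40 Hz.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(2)
dt = 0.002                                    # 2 ms sample interval (assumed)
traces = rng.normal(size=(100, 1500))         # 100 traces, 3 s of synthetic data

# Trace energy: sum of squared amplitudes inside the 1.8-1.9 s target window.
i0, i1 = int(1.8 / dt), int(1.9 / dt)
energy = np.sum(traces[:, i0:i1] ** 2, axis=1)

# Spectral decomposition: STFT magnitude, then extract single-frequency slices.
freqs, times, spec = stft(traces, fs=1.0 / dt, nperseg=128)
for f in (5.0, 20.0, 40.0):
    idx = np.argmin(np.abs(freqs - f))
    amp = np.abs(spec[:, idx, :])             # amplitude of that frequency per trace and time
    print(f, amp.shape)
```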

A Design of PLL and Spread Spectrum Clock Generator for 2.7Gbps/1.62Gbps DisplayPort Transmitter (2.7Gbps/1.62Gbps DisplayPort 송신기용 PLL 및 확산대역 클록 발생기의 설계)

  • Kim, Young-Shin;Kim, Seong-Geun;Pu, Young-Gun;Hur, Jeong;Lee, Kang-Yoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.2
    • /
    • pp.21-31
    • /
    • 2010
  • This paper presents the design of a PLL and an SSCG that reduce EMI in electronic equipment for DisplayPort applications. The system is composed of the essential PLL elements plus a second charge pump (Charge-Pump2) and a reference clock divider to implement the SSCG operation. In this paper, a 270MHz/162MHz dual-mode PLL that provides 10 phases and a 1.35GHz/810MHz PLL that reduces jitter are designed for 2.7Gbps/1.62Gbps DisplayPort applications. Jitter can be reduced drastically by combining the 270MHz/162MHz PLL with a 2-stage 5-to-1 serializer and the 1.35GHz PLL with a 2-to-1 serializer. This paper proposes a frequency divider topology that shares the divider between modes and guarantees a 50% duty ratio. In addition, the output current mismatch can be reduced by using the proposed charge-pump topology. The design is implemented in a 0.13 um CMOS process, and the die areas of the 270MHz/162MHz PLL and the 1.35GHz/810MHz PLL are $650um \times 500um$ and $600um \times 500um$, respectively. The VCO tuning range of the 270MHz/162MHz PLL is 330 MHz and the phase noise is -114 dBc/Hz at 1 MHz offset. The measured SSCG down-spread amplitude is 0.5% and the modulation frequency is 31kHz. The total power consumption is 48mW.
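
  To make the spread-spectrum figures above concrete (this is an illustrative sketch of a triangular down-spread profile using the abstract's 0.5% amplitude and 31 kHz modulation frequency, not the chip's modulator), the instantaneous frequency of a 270 MHz clock under down-spreading can be generated as a frequency-versus-time profile:

```python
import numpy as np

f_nom = 270e6        # nominal clock frequency in Hz
spread = 0.005       # 0.5% down-spread amplitude
f_mod = 31e3         # modulation frequency in Hz

t = np.linspace(0.0, 2.0 / f_mod, 2000, endpoint=False)      # two modulation periods
# Triangular profile between 0 and 1: 0 at the start of each period, 1 at mid-period.
tri = 2.0 * np.abs(t * f_mod - np.floor(t * f_mod + 0.5))
# Down-spreading keeps the instantaneous frequency at or below the nominal value.
f_inst = f_nom * (1.0 - spread * tri)
print(f_inst.min() / 1e6, f_inst.max() / 1e6)                 # about 268.65 and 270.0 MHz
```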

A Study on the Factors affecting the Utilization of Waterscape Facilities in Apartment Complexes based upon Resident Perception - Focused on the Factors of Planning·Design, Maintenance and Usage - (주민인식에 기반한 아파트단지 내 수경시설 이용 영향 요인 분석 - 계획·설계, 유지·관리, 이용 행태를 중심으로 -)

  • Park, Do-Hwan;Cho, Se-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.6
    • /
    • pp.62-75
    • /
    • 2017
  • This study analyzes the multiple effects of three aspects of waterscape facilities within apartment complexes, planning/design, maintenance/management, and usage, and provides basic data for methods to promote the introduction of waterscape facilities. A survey and analysis were conducted in some of the most representative private apartment complexes in Seoul, grouped according to whether their waterscape facilities were in operation. The analysis used frequency analysis, descriptive statistics, reliability tests, t-tests, and PLS regression analysis. The research findings are as follows. First, the degree of use of waterscape facilities was found to be low regardless of the level of operation, but residents' preference for the facilities was high, indicating that residents still have high expectations. Second, regardless of whether the facilities are operated, the two items of location and display method under planning and design and the two items of aptitude and convenience under usage were found to positively affect the operation and use of waterscape facilities. In particular, the item of freshness/cleanliness was directly and indirectly correlated with obsolescence, administration costs, and noise, which negatively affect operation. Third, administration cost itself, which previous research had identified as the most negative factor in operating waterscape facilities, did not cause problems in complexes where the facilities are not operated. This finding suggests that administration costs as such are not the obstacle; rather, they become a problem when they prevent the introduction of the experience- and entertainment-type facilities that residents want. Fourth, the various aspects of planning, design, maintenance, and usage were found to interconnect and affect one another in the process of operating and using waterscape facilities, so a comprehensive approach to the three factors of planning/design, maintenance/management, and usage is needed. This study proposes that the needs and values of residents should be reflected in order to promote the introduction of waterscape facilities in apartment complexes.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods used to handle big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require more computation, which can eventually cause high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, ranging from simply reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Beyond this, the expression and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies words that are not important, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews. Yelp only shows the number of helpful votes, so we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe embeddings that used all the words. One of the proposed methods outperformed the embeddings built on all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered performance.
    For future research, diverse preprocessing approaches and in-depth analysis of word co-occurrence should be considered for measuring similarity among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding and elimination methods.
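
  A compact sketch of the elimination-then-embedding idea described above (the toy corpus, labels, percentile cutoff, similarity threshold, and helper names are illustrative assumptions, not the authors' implementation): score words by information gain, approximated here with mutual information against the label, drop the lowest-scoring words plus words that are highly similar to them, and train Word2Vec on the filtered text.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great kindle battery life", "battery died fast terrible",
        "loved the screen and battery", "terrible screen awful purchase"]
labels = [1, 0, 1, 0]                      # helpful vs. not helpful (toy labels)

# Information gain approximated by mutual information between word counts and label.
vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
vocab = np.array(vec.get_feature_names_out())
low_ig = set(vocab[ig < np.percentile(ig, 25)])          # lowest quartile (assumed cutoff)

# Word2Vec on the raw text, used here only to find words similar to the low-IG ones.
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=16, window=2, min_count=1, seed=0)
also_drop = {s for w in low_ig if w in w2v.wv
             for s, sim in w2v.wv.most_similar(w, topn=2) if sim > 0.9}

# Remove both sets of words, then rebuild the embedding on the filtered sentences.
filtered = [[w for w in sent if w not in low_ig | also_drop] for sent in tokens]
w2v_filtered = Word2Vec(filtered, vector_size=16, window=2, min_count=1, seed=0)
print(sorted(low_ig | also_drop))
```

  The filtered sentences and `w2v_filtered` vectors would then feed the CNN or attention-based BiLSTM classifier mentioned in the abstract.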

Assessment of Attenuation Correction Techniques with a $^{137}Cs$ Point Source ($^{137}Cs$ 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.1
    • /
    • pp.57-68
    • /
    • 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a $^{137}Cs$ point source for the brain positron emission tomography (PET) imaging process. Materials & Methods: Four different types of phantoms were used to test the various attenuation correction techniques. Transmission data with the $^{137}Cs$ point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter corrections were performed with a background tail-fitting algorithm. Emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapped (RAC) attenuation correction, respectively. Reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were also compared. Results: ELAC, SAC, and RAC provided uniform images with less noise for a cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed in the MAC result. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly noticeable in images of the normal subject, and attenuation correction that ignored the skull resulted in artificial defects in images of the brain. Conclusion: More sophisticated and improved attenuation correction methods are needed to obtain better accuracy in quantitative brain PET images.
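
  A much-simplified sketch of measured attenuation correction from transmission data (the sinogram shapes and count levels are stand-ins, and the paper's scatter tail fitting and the specific ELAC/SAC/RAC remappings are not reproduced): correction factors are taken as the ratio of blank-scan to transmission-scan counts, following Beer-Lambert attenuation, and applied multiplicatively to the emission data before reconstruction.

```python
import numpy as np

rng = np.random.default_rng(3)
blank = rng.poisson(2000.0, size=(96, 128)).astype(float)        # blank-scan sinogram
mu_d = rng.uniform(0.0, 1.5, size=(96, 128))                     # assumed line integrals of mu
transmission = rng.poisson(2000.0 * np.exp(-mu_d)).astype(float) # transmission-scan sinogram

# Measured attenuation correction factors: blank / transmission.
acf = blank / np.clip(transmission, 1.0, None)

emission = rng.poisson(50.0, size=(96, 128)).astype(float)
emission_corrected = emission * acf          # applied prior to iterative reconstruction
print(acf.mean(), emission_corrected.sum())
```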

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.6
    • /
    • pp.445-456
    • /
    • 2005
  • Land surface parameters should be determined with sufficient accuracy, because they play an important role in climate change near the ground. As surface reflectance exhibits strong anisotropy, off-nadir viewing results in a strong dependency of observations on the Sun-target-sensor geometry, and these angular effects contribute random noise to the observations. The principal objective of this study is to provide a database of accurate surface reflectance, with angular effects removed, from MODIS 250m reflective channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. Successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest-neighbor resampling using a second-order polynomial obtained from the geolocation information of the MODIS dataset. To correct the surface anisotropy effects, this paper applies a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model for the angular components, such as viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, from the reflectance observed by the satellite. First, sets of observations over a 31-day period are used to fit the BRDF model. Next, nadir-view reflectance normalization is carried out by adjusting the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images for 2001 over Korea.
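
  As a hedged sketch of the kernel-driven inversion and nadir normalization described above (the kernel values below are synthetic placeholders; computing the actual volumetric and geometric kernels from the Sun-target-sensor geometry, and the MODIS preprocessing, are omitted): stack the 31-day observations for one pixel, solve for the three kernel weights by linear least squares, and evaluate the fitted model with nadir-view kernels to obtain the normalized reflectance.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs = 31                                     # one observation per day in the 31-day window

# Volumetric and geometric kernel values for each observation. In practice these come
# from the sun/view zenith and azimuth angles; random placeholders are used here
# just to exercise the inversion.
k_vol = rng.uniform(-0.3, 0.6, n_obs)
k_geo = rng.uniform(-1.5, 0.0, n_obs)

f_true = np.array([0.25, 0.10, 0.05])          # synthetic iso / vol / geo weights
A = np.column_stack([np.ones(n_obs), k_vol, k_geo])
reflectance = A @ f_true + rng.normal(0.0, 0.005, n_obs)   # noisy observed reflectance

# Invert the linear kernel-driven model: R = f_iso + f_vol*K_vol + f_geo*K_geo.
f_hat, *_ = np.linalg.lstsq(A, reflectance, rcond=None)

# Normalize to nadir view: re-evaluate the fitted model with kernels computed for a
# nadir-viewing geometry (placeholder nadir kernel values used here).
k_vol_nadir, k_geo_nadir = -0.02, -1.1
r_nadir = f_hat[0] + f_hat[1] * k_vol_nadir + f_hat[2] * k_geo_nadir
print(f_hat, r_nadir)
```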