• Title/Summary/Keyword: image decomposition


Enhanced Block Matching Scheme for Denoising Images Based on Bit-Plane Decomposition of Images (영상의 이진화평면 분해에 기반한 확장된 블록매칭 잡음제거)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.3 / pp.321-326 / 2019
  • Image denoising methods based on block matching rest on the empirical observation that neighboring patches or blocks in an image share similar features, and they have proved effective at removing various kinds of noise. These methods, however, consider only neighboring blocks when searching for similar blocks and ignore the characteristic features of the reference block itself. Consequently, denoising performance suffers when the reference block to be denoised contains outliers of the Gaussian distribution. In this paper, we propose an expanded block matching method in which a noisy image is first decomposed into a number of bit-planes, the range of the true signal is then estimated from the distribution of pixels on those bit-planes, and finally outliers are replaced by neighboring pixels that fall within the estimated range. In this way, the advantages of the conventional Gaussian filter are added to the block matching method. We tested the proposed method in extensive experiments on well-known test-bed images and observed that it achieves a performance gain.
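The bit-plane decomposition that the method starts from can be sketched in a few lines of NumPy. This is an illustrative round-trip check (decompose, then reconstruct), not the authors' code:

```python
import numpy as np

# Hypothetical 8-bit image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# planes[k] holds bit k of every pixel (k = 0 is the least significant bit)
planes = [(img >> k) & 1 for k in range(8)]

# Reconstruction: weight each plane by 2^k and sum
recon = sum((p.astype(np.uint16) << k) for k, p in enumerate(planes)).astype(np.uint8)
```

The per-bit pixel distributions on `planes` are what the paper uses to estimate the range of the true signal before replacing outliers.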

Air passenger demand forecasting for the Incheon airport using time series models (시계열 모형을 이용한 인천공항 이용객 수요 예측)

  • Lee, Jihoon;Han, Hyerim;Yoon, Sanghoo
    • Journal of Digital Convergence / v.18 no.12 / pp.87-95 / 2020
  • Incheon Airport is a gateway to and from the Republic of Korea and has a great influence on the image of the country. It is therefore necessary to predict the number of airport passengers over the long term in order to maintain the quality of service at the airport. In this study, we compared the predictive performance of various time series models for forecasting air passenger demand at Incheon Airport. The passenger data from 2002 to 2019 exhibit both trend and seasonality. We considered the naive method, the decomposition method, exponential smoothing, SARIMA, and PROPHET. To compare the future capacity and passenger volume of Incheon Airport, short-, mid-, and long-term demand was forecast with each model. For the short-term forecast, the exponential smoothing model, which weights recent data more heavily, performed best, putting the number of annual passengers in 2020 at about 73.5 million. For the medium-term forecast, the SARIMA model, which accounts for stationarity, performed best, putting the annual number of air passengers in 2022 at around 79.8 million. The PROPHET model was best for long-term prediction, with the annual number of passengers expected to reach about 99.0 million in 2024.
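The core of the exponential smoothing idea the short-term forecast relies on can be sketched minimally (made-up numbers; the paper's fitted model would also need trend and seasonal components):

```python
import numpy as np

def ses_forecast(y, alpha):
    """Simple exponential smoothing: the smoothed level is a weighted
    average that gives geometrically more weight to recent observations."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level  # one-step-ahead forecast

# Hypothetical monthly passenger counts (millions)
y = np.array([5.1, 5.3, 5.0, 5.6, 5.8, 6.0])
forecast = ses_forecast(y, alpha=0.5)
```

With `alpha = 1` the forecast is just the last observation; smaller `alpha` averages over more history, which is why this family tracks recent data well over short horizons.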

Analysis on the Changes in Abandoned Paddy Wetlands as Carbon Absorption Sources and Topographic Hydrological Environment (탄소흡수원으로서의 묵논습지 변화와 지형수문 환경 분석)

  • Park, Miok;Hong, Sungwon;Koo, Bonhak
    • Land and Housing Review / v.14 no.1 / pp.83-97 / 2023
  • This study aims to provide an academic basis for preserving and restoring abandoned paddy wetlands and for enhancing their carbon accumulation function. First, the temporal change of the wetlands was analyzed, and a typological classification system for wetlands was developed with the goal of carbon reduction. Wetland types were classified on three variables, with special attention to the carbon accumulation function: a hydrological variable (high or low water inflow potential), a vegetation variable (dominance of aquatic or terrestrial plants), and three carbon accumulation variables (organic matter production, soil organic carbon accumulation, and decomposition), yielding 12 categories of abandoned paddy wetland. Aerial photographs provided by the National Geographic Information Institute showed that the abandoned paddies developed between 2010 and 2015. In the case of the wetland in Daejeon 1 (DJMN01), farming had stopped by 1990, and after 2010 its structure appeared similar to that of natural wetlands. Over the past 40 years, the abandoned paddy wetland changed to a high proportion of forest and agricultural land; over time, these forests and agricultural lands tended to decrease rapidly and were replaced by artificial grassland and other types of forest.
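The 12-category typology follows directly from crossing the three variable axes (2 hydrological × 2 vegetation × 3 carbon variables); a trivial sketch with paraphrased labels:

```python
from itertools import product

hydrology = ["high inflow potential", "low inflow potential"]
vegetation = ["aquatic plants dominant", "terrestrial plants dominant"]
carbon = ["organic matter production", "soil organic carbon accumulation", "decomposition"]

# Every combination of the three axes defines one wetland type
wetland_types = list(product(hydrology, vegetation, carbon))
```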

Athermalization and Narcissus Analysis of Mid-IR Dual-FOV IR Optics (이중 시야 중적외선 광학계 비열화·나르시서스 분석)

  • Jeong, Do Hwan;Lee, Jun Ho;Jeong, Ho;Ok, Chang Min;Park, Hyun-Woo
    • Korean Journal of Optics and Photonics / v.29 no.3 / pp.110-118 / 2018
  • We have designed a dual-field-of-view (FOV) mid-infrared optical system for an airborne electro-optical targeting system. The optics consists of a beam reducer, a zoom lens group, a relay lens group, cold-stop conjugation optics, and an IR detector. The IR detector is an f/5.3 cooled detector with a resolution of 1280 × 1024 square pixels and a pixel size of 15 × 15 μm. The optics provides two stepwise FOVs (1.50° × 1.20° and 5.40° × 4.23°) by the insertion of two lenses into the zoom lens group. The system was designed so that the working f-number (f/5.3) of the cold stop internally provided by the IR detector is maintained over the entire FOV when changing the zoom. We performed two analyses to investigate thermal effects on image quality: athermalization analysis and Narcissus analysis. The athermalization analysis examined the image focus shift and residual high-order wavefront aberrations as the working temperature changes from −55 °C to 50 °C. We first identified the best compensator for the thermal focus drift using the Zernike polynomial decomposition method. With the selected compensator, the optics was shown to maintain the on-axis MTF at the detector's Nyquist frequency above 10% throughout the temperature range. The Narcissus analysis examined thermal ghost images of the cold detector formed by the optics itself, quantified by the Narcissus-Induced Temperature Difference (NITD). The reported design was shown to have an NITD of less than 1.5 °C.
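The Zernike-decomposition step used to isolate the thermal focus shift can be sketched as a least-squares fit over the pupil. This is an assumed illustration with a toy four-term basis and a synthetic wavefront, not the authors' design data:

```python
import numpy as np

# Unit-disk pupil grid
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
mask = r2 <= 1.0

# Minimal Zernike basis: piston, x-tilt, y-tilt, defocus
basis = np.stack([np.ones_like(x), 2 * x, 2 * y, np.sqrt(3) * (2 * r2 - 1)], axis=-1)

# Synthetic wavefront: pure defocus with coefficient 0.25 waves
w = 0.25 * np.sqrt(3) * (2 * r2 - 1)

A = basis[mask]  # (npix, 4) design matrix over the pupil
coeffs, *_ = np.linalg.lstsq(A, w[mask], rcond=None)
# coeffs[3] recovers the defocus term a focus compensator would remove
```

Separating the defocus coefficient from the higher-order terms in this way is what lets a single compensator absorb the thermal focus drift while the residual aberrations are tracked separately.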

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintaining ICT infrastructure, and preventing failures through anomaly detection, is becoming increasingly important. System monitoring data are multidimensional time series, which makes it difficult to account simultaneously for the characteristics of multidimensional data and those of time series data. For multidimensional data, correlations between variables must be considered, and existing probabilistic, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, are typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural networks are now actively being applied. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula under parametric statistics and flag anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contain noise or outliers, and they require training data free of such contamination. An autoencoder, an artificial neural network trained to reproduce its input at its output, has many advantages over probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfy neither probability-distribution nor linearity assumptions, and it can be trained without labeled data.
However, autoencoders still struggle to identify local outliers in multidimensional data, and the characteristics of time series data greatly inflate the data dimension. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitations of local outlier identification in multidimensional data. Multimodal models are commonly used to learn different types of input, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the data dimension. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of each autoencoder over 41 variables was measured for the proposed and comparison models. Restoration performance differs by variable: the loss is small for the Memory, Disk, and Network modals in all three models, so restoration works well there; the Process modal showed no significant difference across the three models, while the CPU modal showed excellent performance under CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators the ranking was CMAE, then MAE, then UAE.
In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The model's accuracy improved to 87.12%, and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model offers advantages beyond performance: techniques such as time series decomposition and sliding windows impose extra procedures to manage, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply in practice with respect to inference speed and model management.
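The conditioning idea, per the description above, feeds time in as an extra input instead of expanding the data with sliding windows. One common way to encode time cyclically (an assumed illustration, not necessarily the paper's exact encoding) is:

```python
import numpy as np

def time_condition(hour_of_day):
    """Map an hour (0-23) onto the unit circle so that 23:00 and 00:00
    end up close together, letting the model learn daily periodicity."""
    angle = 2 * np.pi * hour_of_day / 24.0
    return np.array([np.sin(angle), np.cos(angle)])

# Hypothetical CPU-modal metrics observed at 14:00; the 2-dimensional
# condition is concatenated with the modal input, instead of inflating
# the input into a long sliding window of past observations
cpu_metrics = np.array([0.42, 0.37, 0.51])
encoder_input = np.concatenate([cpu_metrics, time_condition(14)])
```

The input grows by only two dimensions regardless of the period length, which is the practical advantage the abstract contrasts with window-based preprocessing.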

Study on Affecting Variables Appearing through Chemical Pretreatments of Poplar Wood (Populus euramericana) to Enzymatic Hydrolysis (이태리 포플러의 화학적 전처리 공정을 통한 효소가수분해 영향 인자 분석)

  • Koo, Bon-Wook;Park, Nahyun;Yeo, Hwanmyeong;Kim, Hoon;Choi, In-Gyu
    • Journal of the Korean Wood Science and Technology / v.37 no.3 / pp.255-264 / 2009
  • To evaluate the effects of chemical pretreatments of lignocellulosic biomass on the enzymatic hydrolysis process, Populus euramericana was pretreated for 1 h with 1% sulfuric acid (H2SO4) at 150 °C and with 1% sodium hydroxide (NaOH) at 160 °C, respectively. Before enzymatic hydrolysis, each pretreated sample was either dried or left undried, yielding four subgroups (dried and non-dried acid-pretreated samples, dried and non-dried alkali-pretreated samples), and their chemical and physical properties were analyzed. Biomass degradation by acid pretreatment was about 6% higher than by alkali pretreatment: acid dissolved ca. 24.5% of the biomass into solution, while alkali degraded ca. 18.6%. The reverse was observed for delignification, with alkali pretreatment releasing 2% more lignin fragments from the biomass into solution than acid pretreatment. Unexpectedly, samples after both pretreatments showed somewhat higher crystallinity than untreated samples. This may be explained by selective disruption of the amorphous regions of cellulose during pretreatment, which would concentrate crystalline cellulose in the pretreated samples. SEM images revealed that pretreated samples had relatively rough and partly cracked surfaces due to the decomposition of components, though the images of dried acid-pretreated samples resembled the control. In pore size distribution, dried acid-pretreated samples were similar to the control, whereas in alkali-pretreated samples pore volume gradually increased with pore diameter. The pore volume gained by acid pretreatment decreased rapidly on drying. Alkali pretreatment was much more effective for enzymatic digestibility than acid pretreatment: the alkali-pretreated sample was enzymatically hydrolyzed up to 45.8%, while only 26.9% of the acid-pretreated sample was digested under the same conditions. The samples' high digestibility also raised the yields of monomeric sugars during enzymatic hydrolysis. In addition, drying the pretreated samples was detrimental both to digestibility and to the yields of monomeric sugars.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, with remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. It includes binary classification (one label from two classes), multi-class classification (one label from several classes), and multi-label classification (multiple labels from several classes). Multi-label classification in particular requires a different training method because each instance carries multiple labels, and since the number of labels to predict grows with the number of labels and classes, prediction difficulty rises and performance is hard to improve. To overcome these limitations, research on label embedding is actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that adequately preserves the information of the original labels.
Recently there have been increasing attempts to improve performance by applying deep learning to label embedding, most notably with the autoencoder, a deep learning model effective for data compression and restoration. Traditional autoencoder-based label embedding, however, loses a large amount of information when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is tied to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: adding a layer's input to its output prevents gradient loss during backpropagation, making efficient learning possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still lacking. In this study, we therefore propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that faithfully reflects the information of the high-dimensional label space. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. We then conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, and multi-label classification was evaluated by restoring the predicted keyword vector to the original label space. The accuracy, precision, recall, and F1-score used as performance indicators showed far superior multi-label classification performance for the proposed methodology compared with traditional methods.
This suggests that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which in turn improves the performance of multi-label classification itself. In addition, the utility of the proposed methodology was assessed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
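The skip-connection mechanism the abstract describes, adding a layer's input back to its output so gradients have a direct path, can be sketched as a residual block (a minimal NumPy illustration under assumed shapes, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 16  # width of one encoder layer (hypothetical)

W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def residual_block(x):
    """y = x + f(x): the identity path carries gradients through unchanged,
    so stacking many such blocks avoids vanishing gradients."""
    h = np.tanh(W2 @ np.tanh(W1 @ x))
    return x + h

x = rng.normal(size=d)   # e.g. an intermediate label representation
y = residual_block(x)
```

With `W1 = W2 = 0` the block is exactly the identity map, which is why deep stacks of such blocks remain trainable from initialization.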