• Title/Summary/Keyword: 필터링 알고리즘 (Filtering Algorithm)

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.793-810 / 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different intensities of water and land make SAR adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images reduce detection accuracy. In this study, we propose two modules, named the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which reproduce the effect of morphological transformations. The edge-enhanced module is composed of convolution layers whose weights are fixed to those of a traditional edge detection algorithm. After comparing the accuracy of various versions of each module on U-Net, we found that the optimal combination feeds the morphology module (min-pooling followed by successive min-pooling and max-pooling layers) and the edge-enhanced module (Scharr filter) into conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model can detect small waterbodies and fine water boundaries, which are the distinct advances of the model presented in this research compared to the original U-Net.
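
As a rough illustration of the two modules, below is a minimal PyTorch sketch: min-pooling implemented through negated max-pooling (mimicking morphological erosion/dilation) and a convolution whose weights are fixed to the Scharr kernels. The layer counts, kernel sizes, and how the outputs would be concatenated into conv9 are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def min_pool2d(x, kernel_size=3):
    # Min-pooling expressed through max-pooling: min(x) = -max(-x).
    return -F.max_pool2d(-x, kernel_size, stride=1, padding=kernel_size // 2)

class MorphologyModule(nn.Module):
    """Erosion-then-dilation-like smoothing built from min- and max-pooling."""
    def forward(self, x):
        eroded = min_pool2d(x)                               # analogous to erosion
        return F.max_pool2d(eroded, 3, stride=1, padding=1)  # then dilation

class ScharrEdgeModule(nn.Module):
    """Convolution with fixed (non-trainable) Scharr edge-detection weights."""
    def __init__(self, channels=1):
        super().__init__()
        gx = torch.tensor([[ 3., 0.,  -3.],
                           [10., 0., -10.],
                           [ 3., 0.,  -3.]])
        weight = torch.stack([gx, gx.t()]).unsqueeze(1).repeat(1, channels, 1, 1)
        self.register_buffer("weight", weight)  # fixed weights, never updated

    def forward(self, x):
        edges = F.conv2d(x, self.weight, padding=1)
        return edges.abs().sum(dim=1, keepdim=True)  # combined edge magnitude

# Features of the kind that could be concatenated with a U-Net decoder stage.
x = torch.randn(1, 1, 64, 64)
feats = torch.cat([MorphologyModule()(x), ScharrEdgeModule()(x)], dim=1)
```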

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it computes similarities from direct connections and common features among customers. For this reason, hybrid techniques were designed that additionally use content-based filtering. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks. This approach calculates similarities indirectly, through the similar customers placed between two customers: a customer network is created from purchasing data, and the similarity between two customers is computed from the features of the network paths that indirectly connect them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of networks can be utilized to calculate it. Different centrality metrics are important in that they may have different effects on recommendation performance; furthermore, the effect of these metrics may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By considering a customer's purchase of an item as a link between the customer and the item on the network, predicting user acceptance of a recommendation reduces to predicting whether a new link will be created between them. As classification models fit this binary link-prediction problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the study. The performance evaluation used order data collected from an online shopping mall over four years and two months. The first three years and eight months of records were organized into the social network, and the following four months' records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that recommendation acceptance rates differ significantly across algorithms. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show broadly similar performance across models. Degree centrality ranks in the middle across models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct differences in performance depending on the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings, with low performance levels, in the support vector machine and KNN models. As the experimental results reveal, network centrality metrics over the subnetwork connecting two nodes can effectively predict connectivity between the nodes in a social network in a classification model, and each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality provides a high level of performance in any model, and introducing closeness centrality could yield higher performance for certain models.
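
To make the link-prediction setup concrete, here is a minimal sketch (not the paper's exact pipeline) that computes the four centrality metrics with networkx and feeds endpoint centralities into one of the five classifier types; the toy graph, negative sampling, and feature pairing are illustrative assumptions.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()  # toy stand-in for the customer purchase network

# The four centrality metrics analyzed in the study.
cent = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def pair_features(u, v):
    # Represent a candidate link by the centralities of its two endpoints.
    return [cent[m][u] for m in cent] + [cent[m][v] for m in cent]

# Positive examples: existing links. Negative examples: sampled non-links.
rng = np.random.default_rng(0)
pos = list(G.edges())
nodes = list(G.nodes())
neg = []
while len(neg) < len(pos):
    u, v = rng.choice(nodes, size=2, replace=False)
    if not G.has_edge(u, v):
        neg.append((u, v))

X = np.array([pair_features(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LogisticRegression(max_iter=1000).fit(X, y)  # one of the five model types
print("training accuracy:", clf.score(X, y))
```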

A Study on the Selection of Parameter Values of FUSION Software for Improving Airborne LiDAR DEM Accuracy in Forest Area (산림지역에서의 LiDAR DEM 정확도 향상을 위한 FUSION 패러미터 선정에 관한 연구)

  • Cho, Seungwan;Park, Joowon
    • Journal of Korean Society of Forest Science / v.106 no.3 / pp.320-329 / 2017
  • This study evaluates whether the accuracy of a LiDAR DEM is affected by changes across five input levels ('1', '3', '5', '7', and '9') of the median parameter ($F_{md}$) and mean parameter ($F_{mn}$) of the Filtering Algorithm (FA) in the GroundFilter module, and the median parameter ($I_{md}$) and mean parameter ($I_{mn}$) of the Interpolation Algorithm (IA) in the GridSurfaceCreate module of FUSION, in order to identify the combination of parameter levels producing the most accurate LiDAR DEM. Accuracy is measured by the residuals between field-surveyed elevation values and the corresponding DEM elevation values. A multi-way ANOVA is used to statistically examine whether parameter level changes affect the mean residuals, with the Tukey HSD conducted as a post-hoc test. The multi-way ANOVA results show that changes in the levels of $F_{md}$, $F_{mn}$, and $I_{mn}$ have significant effects on DEM accuracy, with a significant interaction effect between $F_{md}$ and $F_{mn}$. Therefore, the levels of $F_{md}$ and $F_{mn}$ and their interaction, as well as the level of $I_{mn}$, are considered factors affecting the accuracy of the LiDAR DEM. In the Tukey HSD test on the combination levels of $F_{md} \ast F_{mn}$, the mean residual of the '$9 \ast 3$' combination provides the highest accuracy while the '$1 \ast 1$' combination provides the lowest. Regarding $I_{mn}$ levels, both levels '3' and '1' provide the highest accuracy. This study can contribute to improving the accuracy of forest attributes as well as topographic information extracted from LiDAR data.
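
For readers unfamiliar with the test setup, the following is a minimal sketch of a multi-way ANOVA with an interaction term plus a Tukey HSD post-hoc test using statsmodels; the synthetic residuals and column names are assumptions standing in for the study's field-surveyed data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
levels = [1, 3, 5, 7, 9]
rows = []
for f_md in levels:
    for f_mn in levels:
        for _ in range(5):  # replicate observations per combination (assumed)
            resid = rng.normal(loc=0.1 * f_md - 0.05 * f_mn, scale=0.3)
            rows.append({"F_md": f_md, "F_mn": f_mn, "residual": resid})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, mirroring the F_md * F_mn analysis.
model = ols("residual ~ C(F_md) * C(F_mn)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post-hoc test over the F_md * F_mn combination levels.
df["combo"] = df["F_md"].astype(str) + "*" + df["F_mn"].astype(str)
print(pairwise_tukeyhsd(df["residual"], df["combo"]))
```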

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging / v.18 no.4 / pp.303-313 / 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters, such as the Larmor frequency, object radius, and SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a targeted tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the conductivity reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
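
A toy Monte Carlo version of the described evaluation might look like the sketch below: complex Gaussian noise is added to a simulated $B_1{^+}$ map and the spread of the conductivity estimate is measured. The reconstruction uses the standard phase-based EPT approximation $\sigma \approx \nabla^2 \varphi / (2 \mu_0 \omega)$, which is an assumption for illustration and not necessarily the authors' exact method; the pixel size, SNR, and filter size are also assumed.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

mu0 = 4e-7 * np.pi          # vacuum permeability (H/m)
omega = 2 * np.pi * 128e6   # Larmor angular frequency at 3T (rad/s)
dx = 2e-3                   # pixel size (m), assumed
sigma_true = 0.5            # true conductivity (S/m) of a homogeneous toy phantom

# Synthetic transceive phase whose Laplacian is consistent with sigma_true.
y, x = np.mgrid[-32:32, -32:32] * dx
phase = sigma_true * mu0 * omega * (x**2 + y**2) / 2
b1 = np.exp(1j * phase)     # unit-magnitude complex B1+ map

snr = 50.0
rng = np.random.default_rng(0)
estimates = []
for _ in range(200):
    noise = (rng.normal(size=b1.shape) + 1j * rng.normal(size=b1.shape)) / (snr * np.sqrt(2))
    noisy_phase = np.angle(b1 + noise)
    sigma_map = laplace(noisy_phase) / dx**2 / (2 * mu0 * omega)
    sigma_map = uniform_filter(sigma_map, size=7)     # filtering size matters, as noted above
    estimates.append(sigma_map[24:40, 24:40].mean())  # average interior pixels only
print("mean estimate:", np.mean(estimates), "std:", np.std(estimates))
```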

Implementation of a portable pulse oximeter for SpO2 using Compact Flash Interface (컴팩트 플래쉬 방식의 휴대용 산소포화도 측정 시스템 구현)

  • Lee, Han;Kim, Young-Kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.678-681 / 2003
  • In this paper, we aim to develop a microcontroller-based portable pulse oximeter using a Compact Flash interface. First, the portable pulse oximeter system is designed to record two channels of biosignals simultaneously: one channel of SpO2 and one channel of pulse rate. It is very small and portable, and the system makes it possible to measure a patient's condition without additional medical equipment. We tried to solve the problems caused by a patient's motion: we added an analog circuit to a traditional pulse oximeter in order to eliminate baseline drift, and we used a 2D sector algorithm. At present, the SpO2 modules are complete, but further development is still needed to enhance functionality; in particular, the Compact Flash interface remains largely incomplete. Second, the ECG monitoring system is almost the same as a present-day 3-lead ECG system, but we focus on the analog part, especially the filters. The proposed filtering is composed of two parts: one filter removes power-line interference, and the other removes baseline drift. Filters removing power-line interference and baseline drift are necessarily used in ECG systems. The implemented filters minimize distortion of the DC component and remove the harmonic components of the power-line frequency. Using the Compact Flash interface, we can easily transfer a patient's personal information and the measured signal data to a network-based server environment, which makes it possible to implement a patient monitoring system at low cost.
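
The two-part filtering described for the ECG channel can be sketched with scipy.signal as below; the sampling rate, 60 Hz power-line frequency, high-pass cutoff, and filter orders are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy import signal

fs = 500.0  # sampling rate in Hz (assumed)

# Notch filter at the power-line frequency.
b_notch, a_notch = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)

# High-pass filter to remove baseline drift while preserving the ECG band.
b_hp, a_hp = signal.butter(2, 0.5, btype="highpass", fs=fs)

def clean_ecg(ecg):
    # filtfilt runs each filter forward and backward (zero phase),
    # which helps keep distortion of the DC/low-frequency component small.
    x = signal.filtfilt(b_notch, a_notch, ecg)
    return signal.filtfilt(b_hp, a_hp, x)

# Toy usage: ECG-like signal + 60 Hz interference + slow baseline wander.
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)  # crude stand-in for heartbeats
noisy = ecg + 0.3 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
cleaned = clean_ecg(noisy)
```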

Smoothed Group-Sparsity Iterative Hard Thresholding Recovery for Compressive Sensing of Color Image (컬러 영상의 압축센싱을 위한 평활 그룹-희소성 기반 반복적 경성 임계 복원)

  • Nguyen, Viet Anh;Dinh, Khanh Quoc;Van Trinh, Chien;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.173-180 / 2014
  • Compressive sensing is a new signal acquisition paradigm that enables sparse/compressible signals to be sampled below the Nyquist rate. To fully benefit from its much-simplified acquisition process, considerable effort has gone into improving the performance of compressive sensing recovery. However, for color images, compressive sensing recovery has rarely addressed image characteristics such as energy distribution or the human visual system. To overcome this problem, this paper proposes a new group-sparsity hard thresholding process that preserves RGB-grouped coefficients important in terms of both energy and perceptual sensitivity. Moreover, a smoothed group-sparsity iterative hard thresholding algorithm for compressive sensing of color images is proposed by incorporating a frame-based filter into the group-sparsity hard thresholding process. In this way, the proposed method pursues not only sparsity of the image in the transform domain but also smoothness of the image in the spatial domain. Experimental results show average PSNR gains of up to 2.7 dB over the state-of-the-art group-sparsity smoothed recovery method.
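
A minimal numpy sketch of iterative hard thresholding with a group-sparsity constraint (keep the K groups of largest energy) follows; the RGB-style group definition, step size, and measurement setup are assumptions, and the paper's frame-based smoothing filter is omitted.

```python
import numpy as np

def group_hard_threshold(x, group_size, k):
    groups = x.reshape(-1, group_size)
    energy = (groups ** 2).sum(axis=1)
    mask = np.zeros(len(groups), dtype=bool)
    mask[np.argsort(energy)[-k:]] = True  # keep the k strongest groups
    return (groups * mask[:, None]).reshape(x.shape)

def group_iht(y, A, group_size=3, k=10, iters=300):
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2  # conservative gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2
        x = group_hard_threshold(x, group_size, k)  # project onto the group-sparse set
    return x

# Toy usage: recover a group-sparse signal from random measurements.
rng = np.random.default_rng(0)
n, m = 300, 150
x_true = np.zeros(n)
for g in rng.choice(n // 3, size=10, replace=False):
    x_true[3 * g:3 * g + 3] = rng.normal(size=3)  # active RGB-like triplets
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = group_iht(A @ x_true, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```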

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • The quality control of coarse aggregate, one of the main ingredients of concrete, is currently carried out by statistical process control (SPC) through sampling. To build a smart factory for manufacturing innovation, we change the quality control of coarse aggregates from the current sieve analysis to an inspection based on images acquired by a camera. First, the acquired images are preprocessed, and HED (Holistically-Nested Edge Detection), an edge-detection filter learned by deep learning, segments each object. After image processing of the segmentation result, each aggregate particle is analyzed, and the fineness modulus and aggregate shape rate are determined from the analysis. The quality of aggregate captured on video was examined by calculating the fineness modulus and aggregate shape rate, and the algorithm agreed with sieve analysis to better than 90% accuracy. Furthermore, while the aggregate shape rate could not be examined by conventional methods, the approach in this paper also allows it to be measured; verified against models of known length, it showed a difference of ±4.5%. In the case of measuring the length of the aggregate, the algorithm result and the actual length differed by ±6%. Estimating actual three-dimensional quantities from two-dimensional video introduces deviations from the actual data, which requires further research.
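
The measurement stage can be sketched with OpenCV as below: particles are extracted from a binary segmentation mask and their length, width, and a shape ratio are estimated from minimum-area bounding rectangles. The mask source, pixel scale, and shape-ratio definition are assumptions; the paper derives its segmentation from the HED edge detector.

```python
import cv2
import numpy as np

def measure_particles(mask, px_per_mm=4.0, min_area=50):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue  # skip specks/noise
        (_, _), (w, h), _ = cv2.minAreaRect(cnt)
        length, width = max(w, h), min(w, h)
        results.append({
            "length_mm": length / px_per_mm,
            "width_mm": width / px_per_mm,
            "shape_ratio": width / length,  # one plausible 'shape rate' definition
        })
    return results

# Toy usage with a synthetic mask containing two rectangular 'particles'.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(mask, (20, 30), (80, 50), 255, -1)
cv2.rectangle(mask, (120, 100), (150, 170), 255, -1)
for p in measure_particles(mask):
    print(p)
```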

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE: Computer Systems and Theory / v.33 no.3 / pp.157-165 / 2006
  • Changes in the external and internal factors of the cell require specific biological functions to maintain life, and such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied IFSA, a linear decomposition model that derives hidden variables, called 'expression modes', corresponding to these functions. To interpret gene interaction/regulation, we used a cross-correlation method given an expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful in analyzing high-dimensional DNA microarray data compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated/independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction rather than at grouping data; we propose a new scoring function essential to the IFSA method. In this paper we show that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigate multiple-gene interaction information.
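
A minimal numpy sketch of a shift-tolerant cross-correlation similarity, in the spirit of the analysis described, follows; the toy expression profiles and lag range are illustrative assumptions.

```python
import numpy as np

def shift_tolerant_similarity(a, b, max_lag=5):
    # Standardize, then take the best normalized correlation over all lags.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        r = np.mean(x * y)  # correlation of the overlapping segments
        if r > best:
            best, best_lag = r, lag
    return best, best_lag

# Toy usage: the second profile is the first one delayed by 3 time points.
t = np.arange(30)
g1 = np.sin(2 * np.pi * t / 15)
g2 = np.roll(g1, 3) + 0.1 * np.random.default_rng(0).normal(size=30)
print(shift_tolerant_similarity(g1, g2))  # high similarity found at lag -3
```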

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.1-12 / 2005
  • Image registration is a process of establishing the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors with different modalities, here EO (electro-optic) and IR (infrared). Two approaches, feature-based and intensity-based, are generally possible for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between two images is global. Unfortunately, since EO and IR images often do not satisfy this assumption, registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
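
For reference, normalized mutual information can be computed from a joint histogram as in the sketch below, using the common normalization $NMI = (H(A)+H(B))/H(A,B)$; the bin count and toy images are assumptions.

```python
import numpy as np

def nmi(img_a, img_b, bins=64):
    # Joint intensity histogram -> joint and marginal probability estimates.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Toy usage: an image is more similar (higher NMI) to a contrast-remapped
# version of itself than to unrelated noise.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
print(nmi(a, 1.0 - a ** 2))            # deterministic relation -> higher NMI
print(nmi(a, rng.random((128, 128))))  # independent noise -> lower NMI
```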

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE: Computing Practices and Letters / v.8 no.4 / pp.479-488 / 2002
  • In recent years, the amount of digital video used has risen dramatically to keep pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed text and embedded scene text, appearing in digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from freeze-frames of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image, contrast stretching is applied to enhance the contrast of the input image, and a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step comprises three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of its bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The method also performs well on various kinds of images when the size of the structuring element is adjusted.
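
The (OpenClose+CloseOpen)/2 operation from the second step can be sketched with OpenCV as below; the structuring-element size and the synthetic thresholded input are illustrative assumptions.

```python
import cv2
import numpy as np

def open_close_close_open_avg(binary, ksize=3):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    open_close = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # OpenClose
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    close_open = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)    # CloseOpen
    # Pixel-wise average of the two smoothed results.
    avg = (open_close.astype(np.uint16) + close_open.astype(np.uint16)) // 2
    return avg.astype(np.uint8)

# Toy usage on a synthetic thresholded frame with a text-like region.
img = np.zeros((60, 120), dtype=np.uint8)
cv2.putText(img, "NEWS", (5, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)
smoothed = open_close_close_open_avg(img)
```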