• Title/Summary/Keyword: Filtering efficiency


Application of the Recursive Contract Net Protocol for the Threshold Value Determination in Wireless Sensor Networks (무선 센서 네트워크에서 경계값 결정을 위한 재귀적 계약망 프로토콜의 적용)

  • Seo, Hee-Suk
    • Journal of the Korea Institute of Information Security & Cryptology / v.19 no.4 / pp.41-49 / 2009
  • In ubiquitous sensor networks, sensor nodes can be compromised by an adversary because they are deployed in hostile environments. False sensing reports can be injected into the network through these compromised nodes, which may cause not only false alarms but also the depletion of the network's limited energy resources. In security solutions that filter out false reports, the choice of the security threshold value, which determines the security level, is important. In existing adaptive solutions, a newly determined threshold value is broadcast to all nodes, so extra energy may be consumed unnecessarily. In this paper, we propose applying the recursive contract net protocol to determine a threshold value that provides both energy efficiency and a sufficient security level. To manage the network more efficiently, the network is hierarchically grouped and the contract net protocol is applied to each group. Through the protocol, the threshold value determined by the base station using fuzzy logic is applied only where the security attack occurs.
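
A minimal sketch (not taken from the paper) of the core idea: instead of broadcasting a new threshold to every node, the base-station decision is applied only to the group where the attack was detected. The group structure and the toy fuzzy rule below are illustrative assumptions.

```python
# Hypothetical sketch: propagate a new security threshold only to the group
# under attack, instead of broadcasting it to every node in the network.

def fuzzy_threshold(attack_rate, energy_level):
    """Toy stand-in for the base station's fuzzy-logic decision:
    a higher attack rate raises the threshold, low energy lowers it."""
    base = 3
    if attack_rate > 0.5:
        base += 2
    if energy_level < 0.3:
        base -= 1
    return max(1, base)

def apply_threshold(groups, attacked_group_id, attack_rate, energy_level):
    """groups: dict mapping group id -> list of node ids (the hierarchical
    grouping is flattened here for brevity). Only the attacked group is updated."""
    threshold = fuzzy_threshold(attack_rate, energy_level)
    updates = {}
    for node in groups[attacked_group_id]:
        updates[node] = threshold          # contract awarded only inside this group
    return updates                          # all other groups keep their old value

# Example: three groups, attack detected in group 1
groups = {0: ["n0", "n1"], 1: ["n2", "n3"], 2: ["n4"]}
print(apply_threshold(groups, attacked_group_id=1, attack_rate=0.7, energy_level=0.8))
```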

Baseline Correction in Computed Radiography Images with 1D Morphological Filter (CR 영상에서 기저선 보정을 위한 1차원 모폴로지컬 필터의 이용에 관한 연구)

  • Kim, Yong-Gwon;Ryu, Yeunchul
    • Journal of radiological science and technology / v.45 no.5 / pp.397-405 / 2022
  • Computed radiography (CR) systems, which convert an analog signal recorded on a cassette into a digital image, combine the characteristics of analog and digital imaging systems. Compared to digital radiography (DR) systems, CR systems have been difficult to evaluate for system performance because of their lower detective quantum efficiency, lower signal-to-noise ratio (SNR), and lower modulation transfer function (MTF). During the energy-storage and read-out steps, a baseline offset occurs in the edge area and leads to overestimation of the low-frequency response. This low-frequency offset component in the line spread function (LSF) critically affects the MTF and other image-analysis or qualification processes. In this study, we developed a baseline-correction method using mathematical morphology to determine the LSF and MTF of CR systems accurately. We present a baseline correction that uses a morphological filter to effectively remove the low-frequency offset from the LSF, and we evaluated the MTF of a CR system to demonstrate the effectiveness of the correction. The MTF obtained with a 3-pixel structuring element (SE) fluctuated because it overestimated the low-frequency component; this overestimation led the algorithm to over-compensate in the low-frequency region, so high-frequency components appeared relatively strong. The MTFs obtained with 11- to 15-pixel SEs showed little variation. Compared to spatial or frequency filtering that eliminates baseline effects in the edge spread function, our algorithm located the edge position more precisely, and the averaged LSF was narrower.
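
A hedged sketch (not the authors' code) of the technique the abstract describes: estimate the slowly varying baseline of the LSF with a 1D grey-level morphological opening, subtract it, and compute the MTF from the corrected LSF. The 13-pixel SE and the synthetic LSF are assumptions chosen only to illustrate the 11- to 15-pixel range mentioned above.

```python
# Sketch: 1D morphological baseline correction of an LSF, then MTF computation.
import numpy as np
from scipy.ndimage import grey_opening

def correct_baseline(lsf, se_size=13):
    """Estimate the low-frequency baseline as the grey-level opening of the LSF
    with a 1D structuring element of se_size pixels, then subtract it."""
    baseline = grey_opening(lsf, size=se_size)
    return lsf - baseline

def mtf_from_lsf(lsf):
    """MTF = magnitude of the Fourier transform of the LSF, normalized at zero frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# Synthetic example: a narrow Gaussian LSF riding on a slowly varying offset
x = np.arange(256)
lsf = np.exp(-0.5 * ((x - 128) / 3.0) ** 2) + 0.05 * np.sin(x / 60.0) + 0.1
mtf = mtf_from_lsf(correct_baseline(lsf, se_size=13))
```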

Properties of chi-square statistic and information gain for feature selection of imbalanced text data (불균형 텍스트 데이터의 변수 선택에 있어서의 카이제곱통계량과 정보이득의 특징)

  • Mun, Hye In;Son, Won
    • The Korean Journal of Applied Statistics / v.35 no.4 / pp.469-484 / 2022
  • Since a large text corpus contains hundreds of thousands of unique words, text data are a typical example of high-dimensional data, and various feature selection methods have been proposed for dimension reduction. Feature selection can improve prediction accuracy and, by reducing the data size, also improves computational efficiency. The chi-square statistic and the information gain are two of the most popular measures for identifying interesting terms in text data. In this paper, we investigate the theoretical properties of the chi-square statistic and the information gain. We show that the two filtering metrics share theoretical properties such as non-negativity and convexity. However, they differ in that, for imbalanced text data, the information gain is more prone to selecting negative features than the chi-square statistic.
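
For reference, both filtering metrics can be computed for a single term from a 2x2 term-by-class contingency table, as in the illustrative sketch below; the counts in the example are made up to mimic an imbalanced corpus, and this is not the paper's code.

```python
# Illustrative sketch: chi-square statistic and information gain for one term
# from a 2x2 term/class contingency table.
import math

def chi_square(a, b, c, d):
    """a: term present & positive class, b: present & negative,
    c: absent & positive, d: absent & negative."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(a, b, c, d):
    """IG(term) = H(class) - H(class | term), i.e. the mutual information."""
    n = a + b + c + d
    h_class = entropy([(a + c) / n, (b + d) / n])
    p_t = (a + b) / n
    h_given_t = entropy([a / (a + b), b / (a + b)]) if a + b else 0.0
    h_given_not_t = entropy([c / (c + d), d / (c + d)]) if c + d else 0.0
    return h_class - (p_t * h_given_t + (1 - p_t) * h_given_not_t)

# Imbalanced example: 100 positive vs 900 negative documents, term mostly in negatives
print(chi_square(5, 95, 95, 805), information_gain(5, 95, 95, 805))
```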

A study on the improvement of concrete defect detection performance through the convergence of transfer learning and k-means clustering (전이학습과 k-means clustering의 융합을 통한 콘크리트 결함 탐지 성능 향상에 대한 연구)

  • Younggeun Yoon;Taekeun Oh
    • The Journal of the Convergence on Culture Technology / v.9 no.2 / pp.561-568 / 2023
  • Various defects occur in concrete structures due to internal and external environmental factors. Because defects compromise the structural safety of concrete, it is important to identify and maintain them efficiently. However, recent deep learning research has focused on cracks in concrete, and studies on exfoliation and contamination are lacking. In this study, focusing on exfoliation and contamination, which are difficult to label, four models were developed and their performance was evaluated using an unlabeled method, a filtering method, and the convergence of transfer learning with k-means clustering. The analysis showed that the convergence model classified the defects in the most detail and could increase efficiency compared to direct labeling. It is hoped that the results of this study will contribute to the development of deep learning models for the various types of defects that are difficult to label.
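
A hedged sketch of the kind of pipeline the abstract points to: a pretrained CNN backbone (transfer learning) extracts features from unlabeled concrete-surface patches, and k-means groups them so that clusters, rather than individual images, can be labeled. The backbone choice, preprocessing, and cluster count are assumptions, not the authors' configuration.

```python
# Sketch: pretrained-CNN feature extraction + k-means clustering of unlabeled patches.
import torch
import torchvision
from torchvision import transforms
from sklearn.cluster import KMeans

# Pretrained backbone with the classification head removed -> feature extractor
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """images: list of PIL images of concrete patches (loading is assumed elsewhere)."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

def cluster_defects(features, n_clusters=3):
    """Group feature vectors; a cluster might correspond to exfoliation, contamination,
    or sound surface, to be named by inspecting a few samples per cluster."""
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
```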

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.159-172 / 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, much remains unknown about how CF works. Furthermore, the relative performance of CF algorithms is known to be domain and data dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology measures such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than the minimum needed to keep the group even indirectly connected. We use these social network measures as input variables of the ANN model; as the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and 40%, 40%, and 20% of them were used for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has 92.61% estimated accuracy and an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
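
A hedged sketch of how such a prediction model could be assembled with common open-source tools (networkx for the topology measures, scikit-learn for the neural network). The graphs, F1 values, and the degree-centralization helper are illustrative assumptions rather than the paper's data or software (the paper used NetMiner, UCINET, and Clementine), and Krackhardt's efficiency is omitted for brevity.

```python
# Sketch: topology features of a customer purchase network -> small ANN that
# predicts CF recommendation accuracy (F1).
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def degree_centralization(g):
    """Freeman degree centralization: concentration of links on a few nodes."""
    n = g.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in g.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

def topology_features(g):
    n = g.number_of_nodes()
    inclusiveness = (n - nx.number_of_isolates(g)) / n   # share of non-isolated nodes
    return [
        nx.density(g),                # proportion of possible links present
        inclusiveness,
        nx.average_clustering(g),     # localized pockets of dense connectivity
        degree_centralization(g),
    ]

# Train on (features, F1) pairs gathered from past CF experiments (synthetic here)
graphs = [nx.gnp_random_graph(50, p, seed=i) for i, p in enumerate([0.05, 0.1, 0.2, 0.3])]
X = np.array([topology_features(g) for g in graphs])
y = np.array([0.42, 0.55, 0.63, 0.66])                   # made-up F1 values
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[:1]))
```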

A Study on the Method for the Removal of Radioactive Corrosion Products Using Permanent and Electric Magnets

  • Kong Tae-Young;Song Min-Chul;Lee Kun-Jai
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.3 no.2 / pp.113-123 / 2005
  • The removal of radioactive corrosion products from the reactor coolant through a magnetic filter system is one of the many approaches being investigated as a means to reduce radiation sources and the exposure of operational and maintenance personnel in a nuclear power plant. Many research activities in water chemistry have therefore been performed to provide a filtration system with high reliability and feasibility, and they are still in progress. In this study, a magnetic filter system with permanent and electric magnets was devised to remove corrosion products from the coolant stream, taking advantage of the magnetic properties of the corrosion particles. Permanent magnets were used for the separation of corrosion products, and electric magnets were used for the flocculation of colloidal particles to increase their size. In a previous study, experiments using only permanent magnets showed satisfactory filtering of corrosion products and indicated a removal efficiency of more than 90% for particles larger than 5 μm. Experiments using electric magnets also showed good flocculation performance without chemical agents, with most corrosion particles flocculating into larger aggregates of about 5 μm or more in diameter. It is thus expected that a magnetic filter system combining permanent and electric magnets will be an effective way to remove radioactive corrosion products with considerably high removal efficiency.

Encounter of Lattice-type coding with Wiener's MMSE and Shannon's Information-Theoretic Capacity Limits in Quantity and Quality of Signal Transmission (신호 전송의 양과 질에서 위너의 MMSE와 샤논의 정보 이론적 정보량 극한 과 격자 코드 와의 만남)

  • Park, Daechul;Lee, Moon Ho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.83-93 / 2013
  • By comparing Wiener's MMSE for stochastic signal transmission with the mutual information that C.E. Shannon first established in information theory, connections between the two approaches were investigated. In signal transmission over a noisy channel, Wiener sought to capture fundamental limits on signal quality in signal estimation. Shannon, on the other hand, was interested in fundamental limits on signal quantity, maximizing the mutual information by means of the entropy concept for a noisy channel. The first concern of this paper is to show that, in deriving Shannon's fundamental point-to-point channel capacity limits, the mutual information obtained by exploiting an MMSE combiner and the Wiener filter's MMSE are interrelated by an integro-differential equation. Then, at the meeting point of Wiener's MMSE and Shannon's mutual information, the upper bound on spectral efficiency and the lower bound on energy efficiency were computed. Choosing a proper lattice-type code for a mod-${\Lambda}$ AWGN channel model together with MMSE estimation of the scaling factor ${\alpha}$ was confirmed to lead to the fundamental Shannon capacity limits.
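
For context, the standard relations this abstract alludes to can be written as follows: the I-MMSE identity linking mutual information and MMSE in the scalar Gaussian channel (due to Guo, Shamai, and Verdú), and the AWGN spectral-efficiency/energy-efficiency limits. These are textbook results, not equations quoted from the paper.

```latex
% I-MMSE identity for Y = \sqrt{\mathrm{snr}}\,X + N (mutual information in nats):
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I(\mathrm{snr}) = \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad
I(\mathrm{snr}) = \tfrac{1}{2}\int_{0}^{\mathrm{snr}} \mathrm{mmse}(\gamma)\,\mathrm{d}\gamma .

% AWGN spectral efficiency C (bits/s/Hz) and the resulting energy-efficiency bound:
C = \log_{2}(1 + \mathrm{SNR}),
\qquad
\frac{E_b}{N_0} \ \ge\ \frac{2^{C} - 1}{C}
\ \xrightarrow{\ C \to 0\ }\ \ln 2 \approx -1.59\ \mathrm{dB}.
```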

Current status and future plans of KMTNet microlensing experiments

  • Chung, Sun-Ju;Gould, Andrew;Jung, Youn Kil;Hwang, Kyu-Ha;Ryu, Yoon-Hyun;Shin, In-Gu;Yee, Jennifer C.;Zhu, Wei;Han, Cheongho;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Yongseok
    • The Bulletin of The Korean Astronomical Society / v.43 no.1 / pp.41.1-41.1 / 2018
  • We introduce the current status and future plans of the Korea Microlensing Telescope Network (KMTNet) microlensing experiments, including the observational strategy, pipeline, event-finder, and collaborations with Spitzer. The KMTNet experiments were initiated in 2015. Since 2016, KMTNet has observed 27 fields, comprising 6 main fields and 21 subfields. In 2017, we finished the DIA photometry for all 2016 and 2017 data, so real-time DIA photometry is possible from 2018. The DIA photometric data are used by the KMTNet event-finder to find events. The event-finder has been improved relative to the previous version, which had already found 857 events in the 4 main fields of 2015. Applying the improved version to all 2016 data, we found 2597 events, of which 265 lie in the KMTNet-K2C9 overlapping fields. To increase the detection efficiency of the event-finder, we are working on filtering out false events with a machine-learning method. In 2018, we plan to measure the event detection efficiency of KMTNet by injecting fake events into the pipeline near the image level. Thanks to high-cadence observations, KMTNet has found many interesting events, including exoplanets and brown dwarfs, that were not found by other groups. The masses of these exoplanets and brown dwarfs are measured through collaborations with Spitzer and other groups. In particular, KMTNet has cooperated closely with Spitzer since 2015 and observes the Spitzer fields; as a result, we could measure the microlens parallaxes for many events. The automated KMTNet PySIS pipeline was developed before the 2017 Spitzer season and played a very important role in selecting Spitzer targets. For the 2018 Spitzer season, we will improve the PySIS pipeline to obtain better photometric results.
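
A minimal, generic sketch of the injection-recovery approach mentioned above, in which fake events are injected and the detection efficiency is the fraction that the event-finder recovers; the event parameterization and the toy detection model are assumptions for illustration, not KMTNet code.

```python
# Generic injection-recovery sketch: detection efficiency as the fraction of
# injected fake events that a (toy) event-finder recovers.
import random

def inject_fake_events(n, rng):
    """Toy fake events: (time of peak t0, impact parameter u0, timescale tE in days)."""
    return [(rng.uniform(0, 365), rng.uniform(0, 1), rng.uniform(5, 100)) for _ in range(n)]

def toy_event_finder(event, rng):
    """Stand-in for a real event-finder: low-magnification, short events are
    harder to detect (purely illustrative detection model)."""
    t0, u0, tE = event
    p_detect = (1 - u0) * min(1.0, tE / 50)
    return rng.random() < p_detect

def detection_efficiency(n_injected=10000, seed=1):
    rng = random.Random(seed)
    events = inject_fake_events(n_injected, rng)
    recovered = sum(toy_event_finder(e, rng) for e in events)
    return recovered / n_injected

print(f"toy detection efficiency: {detection_efficiency():.3f}")
```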

Flotation for Recycling of a Waste Water Filtered from Molybdenite Tailings (몰리브덴 선광광미 응집여과액 재활용을 위한 부유선별 특성)

  • Park, Chul-Hyun;Jeon, Ho-Seok;Han, Oh-Hyung;Kim, Byoung-Gon;Baek, Sang-Ho;Kim, Hak-Sun
    • Journal of the Mineralogical Society of Korea / v.23 no.3 / pp.235-242 / 2010
  • Froth flotation reusing the residual water from the end of the flotation process was performed with pH control. The isoelectric points (IEP) of molybdenite and quartz in distilled water were below pH 3 and pH 2.7, respectively, and the stabilized range was pH 5~10. For a suspension in the reused water, the zeta potential of molybdenite decreased to -10 mV or less above pH 4 due to residual flocculants. Under alkaline conditions, flotation efficiency deteriorated because of flocculation caused by expanded polymer chains, ion bridging by divalent metal cations ($Ca^{2+}$), and hydrophobic interactions between the nonpolar sites of the polymer and the hydrophobic areas of the particle surfaces. However, weakly acidic conditions (pH 5.5~6) improved flotation efficiency, as hydrogen ions neutralized the polymer chains and weakened their function. In cleaner flotation after rougher flotation, a Mo grade of 52.7% and a recovery of 90.1% were obtained under the conditions of 20 g/t kerosene, 50 g/t AF65, 300 g/t $Na_2SiO_3$, pH 5.5, and two cleaning stages. Hence, we developed a technique that can continuously supply waste water filtered from tailings into the grinding-rougher-cleaning processes.

Hardware Design of High Performance In-loop Filter in HEVC Encoder for Ultra HD Video Processing in Real Time (UHD 영상의 실시간 처리를 위한 고성능 HEVC In-loop Filter 부호화기 하드웨어 설계)

  • Im, Jun-seong;Dennis, Gookyi;Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.401-404 / 2015
  • This paper proposes a high-performance in-loop filter for an HEVC (High Efficiency Video Coding) encoder for real-time Ultra HD video processing. HEVC uses an in-loop filter consisting of a deblocking filter and SAO (Sample Adaptive Offset) to mitigate the quantization errors that cause image degradation. In the proposed in-loop filter encoder hardware architecture, the deblocking filter and SAO form a 2-level hybrid pipeline based on the $32{\times}32$ CTU to reduce execution time. The deblocking filter is performed by a 6-stage pipeline and, using the proposed efficient filtering order, minimizes memory accesses and simplifies the reference memory structure. The SAO is implemented as a 2-stage pipeline for pixel classification and application of the SAO parameters, and it uses two three-layered parallel buffers to simplify pixel processing and reduce operation cycles. The proposed in-loop filter encoder architecture is designed in Verilog HDL and implemented with 205K logic gates in a TSMC 0.13 um process. At 110 MHz, the proposed in-loop filter encoder can support real-time 4K Ultra HD video encoding at 30 fps.
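
As background to the SAO stage mentioned above, the sketch below models HEVC's SAO band-offset mode in software: 8-bit samples are classified into 32 bands, and offsets are applied to four consecutive bands starting at a signaled band position. The band position and offset values are illustrative assumptions; the paper's hardware pipeline is not reproduced here.

```python
# Illustrative software model of HEVC SAO band offset (not the paper's hardware):
# samples are classified into 32 bands; offsets apply to 4 consecutive bands.
import numpy as np

def sao_band_offset(block, band_position, offsets, bit_depth=8):
    """block: 2D array of reconstructed samples; offsets: 4 signed offsets for the
    bands band_position .. band_position+3 (values here are assumptions)."""
    shift = bit_depth - 5                      # 32 bands -> band index = sample >> shift
    bands = block >> shift
    out = block.astype(np.int32)
    for i, off in enumerate(offsets):
        out[bands == (band_position + i) % 32] += off
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(block.dtype)

# Example on a random 32x32 CTU-sized block
ctu = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
filtered = sao_band_offset(ctu, band_position=10, offsets=[1, 2, -1, 0])
```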
