• Title/Summary/Keyword: Similarity reduction


Application of LiDAR Data & High-Resolution Satellite Image for Calculate Forest Biomass (산림바이오매스 산정을 위한 LiDAR 자료와 고해상도 위성영상 활용)

  • Lee, Hyun-Jik;Ru, Ji-Ho
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.1
    • /
    • pp.53-63
    • /
    • 2012
  • As a result of the economic loss caused by unusual climate change driven by excessive emissions of greenhouse gases such as carbon dioxide, which is expected to account for 5~20% of world GDP by 2100, research on technologies for reducing carbon dioxide emissions is being pursued worldwide as part of a high value-added industry. As one of the Annex II countries of the 1997 Kyoto Protocol that should keep its average CO₂ emission rate to 5% by 2013, South Korea is also dedicated to research and industries for CO₂ emission reduction. In this study, LiDAR data and KOMPSAT-2 satellite imagery were applied to calculate forest biomass. Comparing the tree counts and tree heights extracted from raw LiDAR data with field survey data showed about 90% agreement in detected objects and an average tree-height difference of 0.3 m. Calculating the forest biomass from forest-type information classified from the KOMPSAT-2 image and tree heights from the LiDAR data enabled the estimation of forest biomass and CO₂ absorption by forest type, and the agreement with the field survey averaged 90% or higher.
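
The accuracy figures above (about 90% object agreement and a 0.3 m mean height difference) come from comparing LiDAR-derived tree counts and heights against field-survey measurements. A minimal sketch of such a comparison is given below; the function name and the assumption that matched tree heights are supplied as aligned arrays are illustrative, not taken from the paper.

```python
import numpy as np

def compare_with_field_survey(n_lidar_trees, n_field_trees, lidar_heights, field_heights):
    """Rough agreement metrics between LiDAR-derived trees and a field survey.

    lidar_heights / field_heights: heights (m) of trees matched between the two
    data sets, in the same order (an illustrative assumption).
    """
    detection_rate = 100.0 * min(n_lidar_trees, n_field_trees) / max(n_lidar_trees, n_field_trees)
    mean_height_diff = float(np.mean(np.abs(np.asarray(lidar_heights) - np.asarray(field_heights))))
    return detection_rate, mean_height_diff
```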

Identification of electron beam-resistant bacteria in the microbial reduction of dried laver (Porphyra tenera) subjected to electron beam treatment (전자선 처리에 따른 마른 김(Porphyra tenera)의 미생물 저감화 효과와 저항성 세균의 동정)

  • Kim, You Jin;Oh, Hui Su;Kim, Min Ji;Kim, Jeong Hoon;Goh, Jae Baek;Choi, In Young;Park, Mi-Kyung
    • Food Science and Preservation
    • /
    • v.23 no.1
    • /
    • pp.139-143
    • /
    • 2016
  • This study investigated the effect of electron beam (EB) treatment on the microbial reduction of dried laver (Porphyra tenera) and identified EB-resistant bacteria from the treated dried laver. After EB treatments of 4 kGy and 7 kGy, the numbers of total bacteria and EB-resistant bacteria were measured using tryptic soy agar and mannitol salt agar, respectively. The morphological and biochemical characteristics of each isolated EB-resistant bacterium were investigated, and the bacteria were identified. Compared to the control ((1.5±0.2)×10⁶ CFU/g), the total bacterial count decreased significantly to (5.4±0.5)×10⁴ CFU/g and (1.1±0.6)×10⁴ CFU/g after EB treatments of 4 kGy and 7 kGy, respectively. With the higher EB dosage, the number of red colonies remained almost the same, whereas the number of yellow colonies decreased significantly to (3.3±1.2)×10³ CFU/g at 4 kGy and to 0 CFU/g at 7 kGy. All red and yellow colonies were gram-positive cocci, catalase-positive, and resistant to 3% and 5% NaCl media. From 16S rDNA sequence analysis, the yellow colonies were identified as either Micrococcus flavus or M. luteus with 99% similarity, and the red colonies as Deinococcus proteolyticus and D. piscis with 99% and 97% similarity, respectively.
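
For reference, the reported plate counts correspond to the following approximate log reductions relative to the control (a standard way to express microbial reduction; the paper itself reports raw counts rather than log reductions):

```latex
\log_{10}\!\frac{1.5\times10^{6}}{5.4\times10^{4}} \approx 1.4 \ \text{(4 kGy)}, \qquad
\log_{10}\!\frac{1.5\times10^{6}}{1.1\times10^{4}} \approx 2.1 \ \text{(7 kGy)}
```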

Toward 6 Degree-of-Freedom Video Coding Technique and Performance Analysis (6 자유도 전방위 몰입형 비디오의 압축 코덱 개발 및 성능 분석)

  • Park, Hyeonsu;Park, Sang-hyo;Kang, Je-Won
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1035-1052
    • /
    • 2019
  • Recently, as the demand for immersive video increases, efficient video processing techniques for omnidirectional immersive video are being actively developed in MPEG-I. While omnidirectional video provides a larger degree of freedom for free-viewpoint viewing, the size of the video increases significantly. Furthermore, in order to compress 6 degree-of-freedom (6 DoF) videos that support motion parallax, a codec with better coding efficiency is required. In this paper, we develop a 6 DoF codec using Versatile Video Coding (VVC), the next-generation video coding standard. To the authors' best knowledge, this is the first VVC-based 6 DoF video codec toward the future ISO/IEC 23090 Part 7 (Metadata for Immersive Media (Video)) MPEG-I standardization. The experiments were conducted on the seven test video sequences specified in the Common Test Conditions (CTC) in two operation modes of the TMIV (Test Model for Immersive Video) software. It is demonstrated that the proposed codec improves coding performance by around a 33.8% BD-rate reduction in the MIV (Metadata for Immersive Video) mode and a 30.2% BD-rate reduction in the MIV view mode compared to the state-of-the-art TMIV reference software. We also show performance comparisons using Immersive Video PSNR (IV-PSNR) and Mean Structural Similarity (MSSIM).
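
The BD-rate figures quoted above compare rate-distortion curves of the proposed codec against the TMIV anchor. The sketch below shows the standard Bjøntegaard-delta rate computation over two four-point RD curves (e.g., bitrate vs. IV-PSNR); it is a generic illustration, not code from the paper.

```python
import numpy as np

def bd_rate(rate_anchor, quality_anchor, rate_test, quality_test):
    """Bjontegaard delta rate (%): average bitrate difference of the test curve
    relative to the anchor over the overlapping quality range.
    rate_*: bitrates (e.g., kbps); quality_*: PSNR/IV-PSNR values (dB), 4 points each."""
    log_ra, log_rt = np.log(rate_anchor), np.log(rate_test)
    # cubic fit of log-rate as a function of quality, as in the standard BD metric
    p_a = np.polyfit(quality_anchor, log_ra, 3)
    p_t = np.polyfit(quality_test, log_rt, 3)
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0  # negative => bitrate saving (BD-rate reduction)
```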

Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation (Wavelet 변환과 결합한 잔차 학습을 이용한 희박뷰 전산화단층영상의 인공물 감소)

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.3
    • /
    • pp.295-302
    • /
    • 2022
  • The sparse-view computed tomography (CT) imaging technique is able to reduce radiation dose, ensure the uniformity of image characteristics among projections, and suppress noise. However, images reconstructed with the sparse-view CT technique suffer from severe artifacts, resulting in the distortion of image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional, and inverse wavelet transformation layers, and the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the loss function was calculated using mean squared error (MSE), and the Adam algorithm was used as the optimizer. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model is able to improve the spatial resolution of the result images as well as effectively reduce artifacts in sparse-view CT images. The trained model also increased the PSNR and SSIM by 8.18% and 19.71%, respectively, compared to a model trained without wavelet transformation and residual learning. Therefore, the imaging model proposed in this study can restore the image quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
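
The core of the described approach is residual learning: the network predicts the artifact (residual) image, which is then subtracted from the sparse-view input. The sketch below shows such a training loop with MSE loss and the Adam optimizer in PyTorch; the network body is a placeholder (the paper's wavelet-transform layers are omitted), so treat it as an illustrative skeleton rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualArtifactNet(nn.Module):
    """Placeholder CNN that predicts the artifact (residual) image from a sparse-view CT image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = ResidualArtifactNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()

def train_step(sparse_view, full_view):
    """sparse_view / full_view: (N, 1, H, W) tensors; the residual target is their difference."""
    residual_target = sparse_view - full_view
    optimizer.zero_grad()
    loss = mse(model(sparse_view), residual_target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Restoration: subtract the predicted residual from the sparse-view image.
# restored = sparse_view - model(sparse_view)
```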

An Efficient Application of XML Schema Matching Technique to Structural Calculation Document of Bridge (XML 스키마 매칭 기법의 교량 구조계산서 적용 방안)

  • Park, Sang Il;Kim, Bong-Geun;Lee, Sang-Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.32 no.1D
    • /
    • pp.51-59
    • /
    • 2012
  • An efficient method for applying an XML schema matching technique to the document structure of structural calculation documents (SCDs) of bridges is proposed. Through 30 case studies, a parametric study was performed on the weightings of the name, sibling, child, and parent elements of the XML schema components used in the similarity measure of the XML schema matching technique, and weightings suitable for analyzing the document structure of SCDs are suggested. A simplified formula for quantifying similarity is also introduced to reduce computation time for the large-scale document structures of SCDs. Numerical experiments show that the suggested method can increase the accuracy of XML schema matching by 10% with suitable weighting parameters, and can maintain almost the same accuracy without weighting parameters compared to previous studies. In addition, computation time can be reduced dramatically when the proposed simplified formula for quantifying similarity is used. In numerical experiments on 20 practical SCDs of bridges, the suggested method is superior to previous studies in the accuracy of analyzing document structure and is 4 to 460 times faster in computation time.
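
A plausible form of the weighted element similarity referred to above combines the name, sibling, child, and parent similarities of two schema elements e₁ and e₂; the exact formula and weight values are in the paper, so the expression below is only a hedged restatement of the idea:

```latex
\mathrm{sim}(e_1, e_2) \;=\; w_n\,\mathrm{sim}_{\mathrm{name}} + w_s\,\mathrm{sim}_{\mathrm{sibling}} + w_c\,\mathrm{sim}_{\mathrm{child}} + w_p\,\mathrm{sim}_{\mathrm{parent}},
\qquad w_n + w_s + w_c + w_p = 1
```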

SOM-Based R*-Tree for Similarity Retrieval (자기 조직화 맵 기반 유사 검색 시스템)

  • O, Chang-Yun;Im, Dong-Ju;O, Gun-Seok;Bae, Sang-Hyeon
    • The KIPS Transactions:PartD
    • /
    • v.8D no.5
    • /
    • pp.507-512
    • /
    • 2001
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects, but the performance of conventional multidimensional data structures tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-Tree is the most successful variant of the R-Tree. In this paper, we propose the SOM-based R*-Tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-Tree combines a SOM and an R*-Tree to achieve search performance that scales better to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space. The map, called a topological feature map, preserves the mutual relationships (similarity) in the feature space of the input data, clustering mutually similar feature vectors in neighboring nodes; each node of the topological feature map holds a codebook vector. We experimentally compare the retrieval time cost of the SOM-based R*-Tree with those of a SOM and an R*-Tree using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-Tree outperforms both the SOM and the R*-Tree, owing to the reduced number of nodes needed to build the R*-Tree and the lower retrieval time cost.
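
The idea of combining a SOM with a spatial index is to cluster similar feature vectors into map nodes first, so that a query only has to search a small candidate set. The sketch below illustrates that reduction with a plain SOM bucket index (using the third-party MiniSom package, assumed available); the R*-Tree built over codebook vectors in the paper is not reproduced here.

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation, assumed available

def build_som_index(features, grid=10, iterations=5000):
    """Train a SOM on (N, d) feature vectors and bucket each vector by its winning node."""
    som = MiniSom(grid, grid, features.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(features, iterations)
    buckets = {}
    for i, v in enumerate(features):
        buckets.setdefault(som.winner(v), []).append(i)
    return som, buckets

def query(som, buckets, features, q, k=5):
    """Search only the candidates mapped to the query's winning node."""
    candidates = list(buckets.get(som.winner(q), []))
    candidates.sort(key=lambda i: np.linalg.norm(features[i] - q))
    return candidates[:k]
```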


An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the maintainer's knowledge and experience make a difference in the time and quality of the work needed to solve the problem, and hence in the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose and act by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of failure text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary covering essential terminology and failure codes was built separately from the case base to reflect the specialized vocabulary of the railway rolling stock sector. Based on the deployed case base, a new failure is matched against past cases, the three most similar failure cases are retrieved, and their actual actions are proposed as a diagnostic guide. To compensate for the limitations of keyword-matching case retrieval in earlier case-based-reasoning expert systems for rolling stock failures, various dimensionality reduction techniques that account for the semantic relationships among failure descriptions were applied to calculate similarity, and their usefulness was verified through experiments. Specifically, similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of a failure and measure the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, analysis of variance confirmed that the performance differences among the five algorithms were statistically significant, including a baseline that randomly extracts failure cases with identical failure codes and a baseline that applies cosine similarity directly to word vectors. In addition, performance differences depending on the number of dimensions used for dimensionality reduction were verified to derive settings suitable for practical application. The analysis showed that direct cosine similarity outperformed NMF and LSA, and that the Doc2Vec-based algorithm performed best; for the dimensionality reduction techniques, performance improved as the number of dimensions grew to an appropriate level. Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured text when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study first presents an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
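
As a rough illustration of the retrieval step described above, the sketch below vectorizes failure-description texts, reduces their dimensionality with NMF or LSA (TruncatedSVD), and returns the three most similar past cases by cosine similarity. Names, parameters, and the plain TF-IDF tokenizer are illustrative; the paper's domain dictionary, failure codes, and Doc2Vec variant are not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF, TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def top3_similar_cases(case_texts, query_text, n_topics=50, method="nmf"):
    """Return indices of the 3 past failure cases most similar to the new failure description."""
    vectorizer = TfidfVectorizer()                    # a railway-specific dictionary/tokenizer would plug in here
    X = vectorizer.fit_transform(case_texts + [query_text])
    if method == "nmf":
        Z = NMF(n_components=n_topics, init="nndsvda", max_iter=400).fit_transform(X)
    else:                                             # "lsa"
        Z = TruncatedSVD(n_components=n_topics).fit_transform(X)
    sims = cosine_similarity(Z[-1:], Z[:-1]).ravel()  # query vector vs. all stored cases
    return np.argsort(sims)[::-1][:3]
```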

Impact Analysis of Traffic Patterns on Energy Efficiency and Delay in Ethernet with Rate Adaptation (적응적 전송률 기법을 이용한 이더넷에서 트래픽 패턴이 에너지 절약률 및 지연 시간에 미치는 영향)

  • Yang, Won-Hyuk;Kang, Dong-Ki;Kim, Young-Chon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.7B
    • /
    • pp.1034-1042
    • /
    • 2010
  • As interest in Green IT has grown, Energy Efficient Ethernet (EEE) with rate adaptation has recently begun to receive considerable attention. However, the rate adaptation scheme can yield different energy efficiency and delay depending on the characteristics of the traffic pattern. Therefore, in this paper, we analyze the impact of different traffic patterns on energy efficiency and delay in Ethernet with rate adaptation. To do this, we first design a rate adaptation simulator consisting of a Poisson-based traffic generator, a Pareto-distribution-based ON-OFF generator, and an Ethernet node with rate adaptation, using OPNET Modeler. Using this simulator, we evaluate the total number of rate switchings, the transmission rate reduction, the energy saving ratio, and the average queueing delay. The simulation results show that IP traffic patterns with high self-similarity affect the number of switchings, the rate reduction, and the energy saving ratio, and that highly self-similar traffic also incurs transition overhead.
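
The heavy-tailed ON-OFF source mentioned above is what makes the aggregate traffic self-similar. A minimal numpy sketch of such a Pareto ON-OFF generator is shown below; the parameter values are illustrative and not taken from the paper, which uses an OPNET Modeler simulator.

```python
import numpy as np

def pareto_on_off(alpha_on=1.4, alpha_off=1.2, mean_on=1.0, mean_off=1.0, n=10000, seed=0):
    """Generate alternating ON/OFF durations with heavy-tailed (Pareto) lengths.
    Shape parameters alpha < 2 give the long-range dependence behind self-similar traffic."""
    rng = np.random.default_rng(seed)
    # classical Pareto with shape a and scale xm has mean a*xm/(a-1); solve xm for a target mean
    xm_on = mean_on * (alpha_on - 1.0) / alpha_on
    xm_off = mean_off * (alpha_off - 1.0) / alpha_off
    on = xm_on * (1.0 + rng.pareto(alpha_on, n))
    off = xm_off * (1.0 + rng.pareto(alpha_off, n))
    return on, off

on, off = pareto_on_off()
# crude proxy for the energy-saving opportunity: fraction of time the link sits idle
idle_fraction = off.sum() / (on.sum() + off.sum())
```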

Search Space Reduction Techniques in Small Molecular Docking (소분자 도킹에서 탐색공간의 축소 방법)

  • Cho, Seung Joo
    • Journal of Integrative Natural Science
    • /
    • v.3 no.3
    • /
    • pp.143-147
    • /
    • 2010
  • Since it is of great importance to know how a ligand binds to a receptor, there have been many efforts to improve the quality of docking pose prediction. Earlier efforts focused on improving the search algorithm and scoring function within a docking program, which yielded partial improvements with considerable variation. Although these components are fundamentally important, more tangible improvements have come from reducing the search space. In a typical docking study, the approximate active site is assumed to be known; after defining the active site, scoring functions and search algorithms are used to locate the expected binding pose within this search space, and a good search algorithm samples wisely toward the correct binding pose. By careful study of the receptor structure, it is possible to prioritize sub-spaces of the active site using "receptor-based pharmacophores" or "hot spots"; in a sense, these techniques reduce the search space from the beginning. Further improvements are possible when a bound ligand structure is available, i.e., the search can be directed by molecular similarity using ligand information, which can be very helpful for increasing the accuracy of the binding pose. In addition, if biological activity data are available, docking programs can become useful for affinity prediction across a series of congeneric ligands. Since the number of co-crystal structures in the Protein Data Bank is increasing, "ligand-guided docking" that reduces the search space will become more important for improving the accuracy of docking pose prediction and the efficiency of virtual screening. Further improvements in this area would help produce more reliable docking programs.
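
As a concrete, hedged example of ligand-guided search-space reduction, the search box for a docking run can simply be defined around the coordinates of a co-crystallized ligand, so that nothing outside the box is ever sampled. The helper below is illustrative only; the padding value and the array-based input are assumptions, not taken from the paper.

```python
import numpy as np

def ligand_guided_box(ligand_xyz, padding=4.0):
    """Define a docking search box around a bound ligand.
    ligand_xyz: (N, 3) heavy-atom coordinates in Angstroms."""
    lo = ligand_xyz.min(axis=0) - padding
    hi = ligand_xyz.max(axis=0) + padding
    center = (lo + hi) / 2.0
    size = hi - lo                      # box edge lengths to pass to a grid-based docking setup
    return center, size
```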

Mathematical explanation on the POD applications for wind pressure fields with or without mean value components

  • Zhang, Jun-Feng;Ge, Yao-Jun;Zhao, Lin;Chen, Huai
    • Wind and Structures
    • /
    • v.23 no.4
    • /
    • pp.367-383
    • /
    • 2016
  • The mechanism by which mean value components, denoted $P_0$, influence POD applications for complete random fields $P_C(t)$ and fluctuating random fields $P_F(t)$ is illustrated mathematically. The key step of the illustration is the introduction of a new matrix, defined as the correlation function matrix of $P_0$, which connects the correlation function matrices of $P_C(t)$ and $P_F(t)$ and their POD results. POD analyses of several different wind pressure fields are then presented comparatively as validation. It is mathematically inevitable that the first eigenmode of $P_C(t)$ resembles the distribution of $P_0$ and that the first eigenvalue of $P_C(t)$ is close to the energy of $P_0$, owing to the similarity of the correlation function matrices of $P_C(t)$ and $P_0$. However, it is not mathematically rigorous to view the first mode as representing the mean pressure and the following modes as representing the fluctuating pressure when $P_C(t)$ is used in a POD application. When $P_C(t)$ is used, the POD results of all modes are distorted by the mean value components, and it is impossible to identify $P_0$ and $P_F(t)$ separately. Consequently, the characteristics of the fluctuating component, which is usually the primary concern in wind pressure field analysis, can only be precisely identified when $P_0$ is excluded from the POD.
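
One way to state the connection described above, assuming the fluctuating field has zero mean so that the cross terms vanish and writing $R$ for the correlation matrices, is that the correlation matrix of the complete field is the sum of the mean-value matrix and the fluctuating-field matrix:

```latex
P_C(t) = P_0 + P_F(t), \qquad \mathbb{E}[P_F(t)] = 0
\;\;\Longrightarrow\;\;
R_C \;=\; \mathbb{E}\!\left[P_C P_C^{\mathsf T}\right]
     \;=\; P_0 P_0^{\mathsf T} + \mathbb{E}\!\left[P_F P_F^{\mathsf T}\right]
     \;=\; R_0 + R_F
```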