• Title/Summary/Keyword: combined algorithm


Robust Image Fusion Using Stationary Wavelet Transform (정상 웨이블렛 변환을 이용한 로버스트 영상 융합)

  • Kim, Hee-Hoon;Kang, Seung-Hyo;Park, Jea-Hyun;Ha, Hyun-Ho;Lim, Jin-Soo;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics / v.24 no.6 / pp.1181-1196 / 2011
  • Image fusion is the process of combining information from two or more source images of a scene into a single composite image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging and defense. The most common wavelet-based fusion is discrete wavelet transform fusion, in which the high frequency sub-bands and the low frequency sub-band are combined according to activity measures computed over local windows, such as the standard deviation and the mean, respectively. However, the discrete wavelet transform is not translation-invariant and often yields block artifacts in the fused image. In this paper, we propose a robust image fusion method based on the stationary wavelet transform to overcome this drawback of the discrete wavelet transform. We use the interquartile range as a robust activity measure, in place of the variance, in the high frequency sub-bands, and combine the low frequency sub-band based on the interquartile range information present in the high frequency sub-bands. We evaluate the proposed method quantitatively and qualitatively and compare it with existing fusion methods. Experimental results indicate that the proposed method is more effective and provides satisfactory fusion results.
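The coefficient-selection rule described above (keep, per position, the high-frequency coefficient whose local window has the larger interquartile range) can be sketched roughly as follows; the 3x3 window size and the tie-breaking rule are assumptions, not taken from the abstract:

```python
import numpy as np

def iqr(window):
    """Interquartile range: a robust spread estimate, unlike the variance."""
    q75, q25 = np.percentile(window, [75, 25])
    return q75 - q25

def fuse_highband(a, b, half=1):
    """Per coefficient, pick the source whose local window has the larger IQR."""
    fused = np.empty_like(a)
    rows, cols = a.shape
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            wa, wb = a[r0:r1, c0:c1], b[r0:r1, c0:c1]
            fused[i, j] = a[i, j] if iqr(wa) >= iqr(wb) else b[i, j]
    return fused
```

In a full pipeline this rule would be applied to each stationary-wavelet high-frequency sub-band before the inverse transform.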

Design of Discriminant Function for White and Yellow Coating with Multi-dimensional Color Vectors (다차원 컬러벡터 기반 백태 및 황태 분류 판별함수 설계)

  • Lee, Jeon;Choi, Eun-Ji;Ryu, Hyun-Hee;Lee, Hae-Jung;Lee, Yu-Jung;Park, Kyung-Mo;Kim, Jong-Yeol
    • Korean Journal of Oriental Medicine / v.13 no.2 s.20 / pp.47-52 / 2007
  • In Oriental medicine, the condition of the tongue is an important diagnostic indicator because it reflects physiological and clinicopathological changes in the inner parts of the body. Tongue diagnosis is both convenient and non-invasive, and is therefore one of the most widely used methods in Oriental medicine. However, it is strongly affected by examination circumstances: the light source, the viewing angle, the doctor's condition and so on, which makes an objective, standardized tongue diagnosis difficult. As a step toward solving this problem, we designed a discriminant function for white and yellow coatings based on multi-dimensional color vectors. Sixty-two subjects were involved in this study: 48 diagnosed with white-coated tongues and 14 with yellow-coated tongues by Oriental medical doctors. Their tongue images were acquired with a purpose-built Digital Tongue Diagnosis System. From the acquired images, each coating section was extracted by Oriental medical doctors, and the mean values of the multi-dimensional color vectors in each coating section were calculated. Statistical analysis showed that two vectors, R in RGB space and H in HSV space, describe the difference between white and yellow coating sections very well. Using these two values, we designed a discriminant function for coating classification and examined its performance; the overall accuracy of coating classification was 98.4%. We expect that discriminant functions for other coatings can be obtained in a similar way. Furthermore, if an automated segmentation algorithm for tongue coating is combined with these discriminant functions, fully automated tongue coating diagnosis becomes possible.
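A nearest-class-mean rule in the (R, H) feature space illustrates the idea of a two-feature coating classifier; the paper's actual discriminant coefficients are not given in the abstract, so this stand-in rule and the sample values below are hypothetical:

```python
import numpy as np

def fit_nearest_mean(white, yellow):
    """Class means in (R, H) feature space, one per coating class."""
    return np.mean(white, axis=0), np.mean(yellow, axis=0)

def classify(sample, mu_white, mu_yellow):
    """Assign the sample to the class whose mean is nearer (Euclidean)."""
    d_w = np.linalg.norm(np.asarray(sample, dtype=float) - mu_white)
    d_y = np.linalg.norm(np.asarray(sample, dtype=float) - mu_yellow)
    return "white" if d_w <= d_y else "yellow"
```

A proper replication would fit the discriminant on per-section mean color vectors, as the study does.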


Fragment Combination From DNA Sequence Data Using Fuzzy Reasoning Method (퍼지 추론기법을 이용한 DNA 염기 서열의 단편결합)

  • Kim, Kwang-Baek;Park, Hyun-Jung
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2329-2334 / 2006
  • In this paper, we propose a method that remedies the fragment-combination failures that are a defect of conventional contig assembly programs. In the proposed method, very long DNA sequence data are divided into prototype fragments of about 700 bases, the length an automatic sequence analyzer can read at one time, and matching ratios are calculated by comparing a standard prototype with three fragmented clones of about 700 bases generated by the PCR method. The time needed to calculate the matching ratios is reduced by the Compute Agreement algorithm. Two candidate combined fragments are extracted for every prototype according to the degree of overlap of the calculated fragment pairs, and the degree of combination is then decided by a fuzzy reasoning method that uses the matching ratio of each extracted fragment, the A, C, G, T membership degrees of each DNA sequence, and the prior frequencies of A, C, G and T. DNA sequence combination is completed by iterating this process, combining the selected optimal fragments until no fragment remains. For the experiments, fragments of about 700 bases were generated from sequences of 10,000 and 100,000 bases extracted from the complete genome of 'PCC6803'. Experiments applying random mutations to these fragments showed that the proposed method was faster than the FAP program and that the combination failures afflicting conventional contig assembly programs did not occur.
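The overlap scoring between two fragments can be illustrated with a simple exact suffix-prefix match; the minimum overlap length and the ratio definition below are assumptions (the paper computes matching ratios with the Compute Agreement algorithm and applies fuzzy reasoning on top of them):

```python
def best_overlap(left, right, min_len=3):
    """Longest suffix of `left` exactly matching a prefix of `right`;
    returns (overlap length, ratio relative to the shorter fragment)."""
    best = 0
    for k in range(min(len(left), len(right)), min_len - 1, -1):
        if left[-k:] == right[:k]:
            best = k
            break
    return best, best / min(len(left), len(right))

def merge(left, right, k):
    """Join two fragments over their k-base overlap."""
    return left + right[k:]
```

Iterating such merges over the highest-ratio pairs, as the paper does with fuzzy-selected candidates, reassembles the full sequence.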

Making a Science Map of Korea (국내 광역 과학 지도 생성 연구)

  • Lee, Jae-Yun
    • Journal of the Korean Society for Information Management / v.24 no.3 / pp.363-383 / 2007
  • A global map of science, visualizing large scientific domains, can be used to analyze visually the structural relationships between major areas of science. This paper reviews previous efforts on global science maps and then attempts to make a science map of Korea with some new methods. Several research groups have worked on global maps of science, including Dr. Small and Dr. Garfield at ISI (now Thomson Scientific), the SCImago research group at the University of Granada, and Dr. Borner's InfoVis Lab at Indiana University. They call their maps science maps or scientograms, and the activity of mapping science scientography. Most previous work is based on citations between scientific articles; however, a citation database for Korean journal articles is still under construction. This research therefore makes a Korean science map from the text of proposals submitted for funding to the Korea Research Foundation. Two methods for generating networks of scientific fields are used. One is the Pathfinder network (PFNet) algorithm, which has been used in several published bibliometric studies. The other is the clustering-based network (CBNet) algorithm, recently proposed as an alternative to PFNet. To take the views of both algorithms into account, the resulting maps are combined into a final science map of Korea.
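The PFNet variant most often used in bibliometrics, PFNet(r = ∞, q = n − 1), keeps a link only when no indirect path between its endpoints is shorter; a minimal sketch using Floyd-Warshall shortest paths (the dense-matrix input format is an assumption):

```python
import math

def pathfinder(dist):
    """PFNet(r = inf, q = n-1): retain edge (i, j) only if its direct
    distance is no longer than the shortest indirect path."""
    n = len(dist)
    sp = [row[:] for row in dist]
    for k in range(n):                       # Floyd-Warshall all-pairs
        for i in range(n):
            for j in range(n):
                sp[i][j] = min(sp[i][j], sp[i][k] + sp[k][j])
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] < math.inf and dist[i][j] <= sp[i][j]]
```

On term-similarity data, distances would first be derived from similarities (e.g. dist = 1 − sim) before pruning.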

Feature Analysis of Endoscopic Ultrasonography Images (내시경 초음파 영상의 특징 분석)

  • Kim, Kwang-Baek; Kang, Hyo-Joo; Kim, Mi-Jeong; Kim, Gwang-Ha
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.390-397 / 2009
  • Endoscopic ultrasonography is a medical procedure combining endoscopy with ultrasound to obtain images of the internal organs. It is useful for predicting pathological manifestations, since a doctor can observe tumors under the mucosa; however, judging the degree of malignant degeneration of a tumor is often subjective. In this paper, we therefore propose a feature analysis procedure that makes the pathological manifestation more objective, improving the accuracy and recall of diagnosis. We first extract the ultrasound region from the image obtained by endoscopic ultrasonography. The intensity of this region must be standardized against the intensity of the water region as a baseline, since the small intensity differences that frequently occur would otherwise make the analysis inefficient. We then analyze high-echo spot regions and calcium-deposited regions by applying the LVQ algorithm and a bit-plane partitioning procedure to tumor regions selected by a medical expert. For detailed analysis, features such as the intensity values, the intensity information between two points chosen by the medical expert within the tumor region, and the slope of the tumor region's outline are extracted to decide the degree of malignant degeneration. This procedure proved helpful to medical experts in tumor analysis.
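Bit-plane partitioning, one of the two techniques named above, splits an 8-bit intensity image into binary planes in which the high planes carry coarse structure (such as bright echogenic spots) and the low planes mostly noise; a minimal sketch over a flat pixel list rather than a 2-D image:

```python
def bit_planes(gray):
    """Split 8-bit pixel values into 8 binary planes, LSB (index 0) first."""
    return [[(pixel >> b) & 1 for pixel in gray] for b in range(8)]
```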


The viterbi decoder implementation with efficient structure for real-time Coded Orthogonal Frequency Division Multiplexing (실시간 COFDM시스템을 위한 효율적인 구조를 갖는 비터비 디코더 설계)

  • Hwang Jong-Hee;Lee Seung-Yerl;Kim Dong-Sun;Chung Duck-Jin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.2 s.332 / pp.61-74 / 2005
  • Digital Multimedia Broadcasting (DMB) is a reliable multi-service system for reception by mobile and portable receivers. The DMB system allows interference-free reception under multipath propagation and transmission errors using the COFDM modulation scheme, and accordingly needs powerful channel error correction. The Viterbi decoder in a DMB receiver uses a punctured convolutional code and requires a great deal of computation for real-time operation, so a high-speed, low-power hardware scheme for the Viterbi decoder is desirable. This paper proposes a combined add-compare-select (ACS) and path metric normalization (PMN) unit to reduce computation power. The proposed PMN architecture alleviates the critical-path problem by applying a fixed value in the selection algorithm, avoiding the comparison tree whose structure is a weak point for high-speed operation. The proposed ACS uses decomposition and pre-computation techniques to reduce the complexity of the adder, the comparator and the multiplexer. Simulation results show reductions of 3.78% in area, 12.22% in power consumption and 23.80% in maximum gate delay compared with a conventional punctured Viterbi decoder for the DMB system.
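One add-compare-select step with path-metric normalization, the pair of operations the proposed unit combines in hardware, can be sketched in software as follows; the radix-2 trellis layout with two predecessors per state is an assumption:

```python
def acs_step(path_metrics, branch_metrics, predecessors):
    """One ACS step: for each state, add each predecessor's path metric to
    its branch metric, keep the smaller sum, record the surviving
    predecessor. Subtracting the minimum normalizes metric growth."""
    new_metrics, survivors = [], []
    for preds, branches in zip(predecessors, branch_metrics):
        cands = [path_metrics[p] + b for p, b in zip(preds, branches)]
        best = min(range(len(cands)), key=cands.__getitem__)
        new_metrics.append(cands[best])
        survivors.append(preds[best])
    m = min(new_metrics)                     # path-metric normalization
    return [x - m for x in new_metrics], survivors
```

The hardware contribution of the paper lies in restructuring exactly this add/compare/select/normalize sequence to shorten the critical path.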

Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention / v.15 no.21 / pp.9367-9373 / 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neuron specific enolase (NSE) and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified on the basis of these biomarkers (measured in 120 NSCLC and 60 SCLC patients) by building optimal joint biomarker models with a powerful computerized tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GA); it focuses on the relationships between variables in data sets, builds models to explain those relationships, and has been used successfully in formula finding and function mining. As a basis for defining a GEP environment for SCLC and NSCLC prediction, three explicit predictive models were constructed. CEA and NSE are frequently used lung cancer markers in clinical trials, and CRP, LDH and Cyfra21-1 are significant in lung cancer, so on the basis of CEA and NSE we set up three GEP models: GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH) and GEP3 (CEA, NSE, CRP). The best classification result was obtained when CEA, NSE and Cyfra21-1 were combined: 128 of 135 subjects in the training set and 40 of 45 subjects in the test set were classified correctly, for accuracy rates of 94.8% on the training set and 88.9% on the test set. With GEP2 the accuracy decreased by 1.5% and 6.6% on the training and test sets respectively; with GEP3 the decreases were 0.82% and 4.45%. Serum Cyfra21-1 is thus a useful and sensitive biomarker for discriminating between NSCLC and SCLC, and GEP modeling is a promising tool in the diagnosis of lung cancer.
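The reported accuracy percentages follow directly from the classification counts given in the abstract; a minimal check:

```python
def accuracy_pct(correct, total):
    """Classification accuracy as a percentage, one decimal place."""
    return round(100.0 * correct / total, 1)

# 128/135 correct on training, 40/45 correct on test (from the abstract)
train_acc = accuracy_pct(128, 135)
test_acc = accuracy_pct(40, 45)
```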

A Study on the Horizontal Drainage Method Using Plastic Drain Board (플라스틱 배수재를 이용한 수평배수공법에 관한 연구)

  • 황정규;김홍택;김석열;강인규;김승욱
    • Geotechnical Engineering / v.14 no.6 / pp.93-112 / 1998
  • In the present study, a 2-D consolidation theory for dredged clay improved by the horizontal drain method is proposed. The horizontal drain method, which installs drains such as plastic drain board within the dredged clay, is a soil improvement technique that accelerates consolidation by expelling pore water vertically toward the horizontal drains. Based on the finite strain consolidation theory of Gibson et al., the partial differential equation of 2-D consolidation with horizontal drains is derived; the consolidation can be described as the combined effect of self-weight consolidation and consolidation due to the horizontal drains. For predicting consolidation settlement and degree of consolidation, a numerical analysis based on the Dufort-Frankel finite difference algorithm is suggested. The analytical procedures proposed in this study are verified by model tests, and the predicted consolidation settlement and degree of consolidation are compared with test results for dredged clay sampled at the Siwha site in Ansan, Korea. For the predictions, the void ratio-effective stress and permeability-void ratio relationships of the dredged clay were obtained from oedometer tests. Additionally, a parametric study of consolidation settlement under variations of the design parameters of the horizontal drain method was carried out, and design charts for preliminary design are proposed on the basis of its results.
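The Dufort-Frankel scheme named above is an explicit three-level finite-difference method; a minimal 1-D sketch for a diffusion-type consolidation equation u_t = c·u_zz with drained (u = 0) boundaries, noting that the paper's 2-D finite-strain self-weight formulation is considerably more involved:

```python
def dufort_frankel(u_prev, u_curr, alpha):
    """One Dufort-Frankel step for u_t = c * u_zz on a 1-D grid.

    alpha = c * dt / dz**2; uses levels n-1 (u_prev) and n (u_curr) to
    produce level n+1. Boundary values are held fixed (drained ends)."""
    n = len(u_curr)
    u_next = list(u_prev)
    for i in range(1, n - 1):
        u_next[i] = ((1 - 2 * alpha) * u_prev[i]
                     + 2 * alpha * (u_curr[i + 1] + u_curr[i - 1])) / (1 + 2 * alpha)
    return u_next
```

Unlike simple explicit schemes, Dufort-Frankel remains stable for large alpha, which is why it suits long consolidation time scales.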


3D Tile Application Method for Improvement of Performance of V-world 3D Map Service (브이월드 3D 지도 서비스 성능 향상을 위한 3D 타일 적용 방안 연구)

  • Kim, Tae Hoon;Jang, Han Sol;Yoo, Sung Hwan;Go, Jun Hee
    • Journal of Korean Society for Geospatial Information Science / v.25 no.1 / pp.55-61 / 2017
  • The V-world, the Korean spatial information open platform, provides various services for easy use of 2D and 3D maps and national administrative information. Among them, the V-world 3D map service, modeled at the level of individual buildings, requires a request for each building model file and a draw call to render each model on screen. The resulting large number of model requests and draw calls increases the latency of the transmission and conversion process between the central processing unit (CPU) and the graphics processing unit (GPU), which degrades the performance of the 3D map service. In this paper, we propose a plan to reduce this performance degradation. We reduce the number of model-file requests and draw calls by applying a 3D tile model that combines multiple building models into a single tile. In addition, we apply a quadtree algorithm to shorten model retrieval time and thereby the time required to load model files. This is expected to contribute to improving the performance of the V-world 3D map service.
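The quadtree lookup used to shorten tile retrieval can be sketched as repeated halving of the map extent toward a query point; the quadkey digit convention and unit extent below are assumptions, not taken from the paper:

```python
def tile_for_point(x, y, level, extent=(0.0, 0.0, 1.0, 1.0)):
    """Descend a quadtree `level` times, halving the extent toward (x, y);
    returns the quadkey path string identifying the containing tile."""
    x0, y0, x1, y1 = extent
    key = ""
    for _ in range(level):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        q = (2 if y >= my else 0) + (1 if x >= mx else 0)  # quadrant digit
        key += str(q)
        x0, x1 = (mx, x1) if x >= mx else (x0, mx)
        y0, y1 = (my, y1) if y >= my else (y0, my)
    return key
```

Each quadkey then maps directly to one combined tile file, so a viewport lookup touches a handful of tiles instead of thousands of building models.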

Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun;Min, Kyong-Pil;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a new face recognition method based on an LVQ neural network for constructing a real-time face recognition system. Previous approaches combining PCA or LDA with a neural network usually need much training time; the supervised LVQ network trains much faster and can maximize the separability between classes. In the proposed method, the input face image is transformed by PCA and LDA sequentially into low-dimensional feature vectors, and the face is recognized by the LVQ network. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as preprocessing; the PCA and LDA transformations are then applied to the normalized face image to produce low-dimensional feature vectors. To determine the initial centers of the LVQ network and speed up its convergence, the K-Means clustering algorithm is adopted, and the class representative vectors are produced by LVQ2 training from these initial center vectors. Recognition is achieved using the Euclidean distance between the class center vectors and the feature vector of the input image. Experiments on still images from the ORL database and on sequential images show that the proposed method achieves a better recognition ratio than conventional PCA and a hybrid PCA-LDA method.
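Two of the steps above, max-min light compensation and an LVQ codebook update, can be sketched as follows; the learning rate and the basic LVQ1 update rule are simplifications (the paper refines K-Means-initialized centers with LVQ2):

```python
import numpy as np

def maxmin_normalize(img):
    """Max-min light compensation: rescale intensities to [0, 1]."""
    img = np.asarray(img, dtype=float)
    return (img - img.min()) / (img.max() - img.min())

def lvq1_step(codebooks, labels, x, y, lr=0.1):
    """One LVQ update: pull the nearest codebook vector toward the sample
    when its class matches, push it away otherwise."""
    i = int(np.argmin([np.linalg.norm(x - c) for c in codebooks]))
    sign = 1.0 if labels[i] == y else -1.0
    codebooks[i] = codebooks[i] + sign * lr * (x - codebooks[i])
    return codebooks
```

At recognition time the trained codebook vectors play the role of the class centers against which the Euclidean distance is measured.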
