• Title/Summary/Keyword: Database Algorithm

Fingerprint Image Quality Assessment for On-line Fingerprint Recognition (온라인 지문 인식 시스템을 위한 지문 품질 측정)

  • Lee, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.2
    • /
    • pp.77-85
    • /
    • 2010
  • Fingerprint image quality checking is one of the most important issues in on-line fingerprint recognition because recognition performance is largely affected by the quality of the fingerprint images. In the past, most related fingerprint quality checking methods have considered only the local quality of the fingerprint. However, it is necessary to estimate the global quality of a fingerprint to judge whether it can be used in an on-line recognition system. Therefore, in this paper, we propose both local and global methods to calculate fingerprint quality. The local fingerprint quality checking algorithm considers both the condition of the input fingerprints and orientation estimation errors. The 2D gradients of the fingerprint images are first separated into two sets of 1D gradients. Then, the shapes of the PDFs (Probability Density Functions) of these gradients are measured in order to determine fingerprint quality. The global fingerprint quality checking method uses a neural network to estimate the global fingerprint quality from the local quality values. We also analyze matching performance using the FVC2002 database. Experimental results show that the proposed quality checking method yields better matching performance than the NFIQ (NIST Fingerprint Image Quality) method.
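
A minimal, hypothetical sketch of the kind of block-wise gradient statistic the local quality step above relies on. The paper measures the shapes of the PDFs of the two 1D gradient sets; here the standard gradient-coherence measure is used as a stand-in, and the block size and the synthetic test image are assumptions.

```python
# Hypothetical sketch: block-wise fingerprint quality from gradient statistics.
import numpy as np

def local_quality_map(image, block=16):
    """Per-block quality in [0, 1] derived from gradient coherence.

    The paper measures the PDF shapes of the two 1D gradient sets; the
    coherence used here is only a stand-in that also reflects how strongly
    oriented the ridge pattern in each block is.
    """
    gy, gx = np.gradient(image.astype(float))      # the two 1D gradient sets
    qmap = np.zeros((image.shape[0] // block, image.shape[1] // block))
    for i in range(qmap.shape[0]):
        for j in range(qmap.shape[1]):
            bx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            by = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            gxx, gyy, gxy = (bx*bx).sum(), (by*by).sum(), (bx*by).sum()
            denom = gxx + gyy
            qmap[i, j] = 0.0 if denom < 1e-9 else np.sqrt((gxx - gyy)**2 + 4*gxy**2) / denom
    return qmap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                   # stand-in for a fingerprint image
    print(local_quality_map(img).round(2))
```

A global score in the spirit of the paper would then feed these local values into a small neural network; that part is omitted here.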

Surveying and Optimizing the Predictors for Ependymoma Specific Survival using SEER Data

  • Cheung, Min Rex
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.2
    • /
    • pp.867-870
    • /
    • 2014
  • Purpose: This study used receiver operating characteristic (ROC) curve analysis of Surveillance, Epidemiology and End Results (SEER) ependymoma data to identify predictive models and potential disparities in outcome. Materials and Methods: This study analyzed the socio-economic, staging and treatment factors available in the SEER database for ependymoma. For the risk modeling, each factor was fitted by a generalized linear model to predict the outcome ('brain and other nervous system' specific death, yes/no). The area under the ROC curve was computed. Similar strata were combined to construct the most parsimonious models. A random sampling algorithm was used to estimate the modeling errors. The risk of ependymoma death was computed for the predictors for comparison. Results: A total of 3,500 patients diagnosed from 1973 to 2009 were included in this study. The mean (S.D.) follow-up time was 79.8 (82.3) months. Some 46% of the patients were female. The mean (S.D.) age was 34.4 (22.8) years. Age was the most predictive factor of outcome. Unknown grade demonstrated a 15% risk of cause-specific death, compared to 9% for grades I and II and 36% for grades III and IV. A 5-tiered grade model (with an ROC area of 0.48) was optimized to a 3-tiered model (with an ROC area of 0.53); this ROC area tied for second with that for surgery. African-American patients had a 21.5% risk of death compared with 16.6% for the others. Some 72.7% of patients who did not receive RT had cerebellar or spinal ependymoma. Patients undergoing surgery had a 16.3% risk of death, compared to 23.7% among those who did not have surgery. Conclusion: Grading ependymoma may dramatically improve modeling of the data. RT is underused for cerebellar and spinal cord ependymoma, and wider use may be a potential way to improve outcomes.
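
The per-factor modeling described above can be pictured with a small, self-contained sketch: a one-predictor logistic GLM, its ROC area, and a bootstrap (random-sampling) estimate of the modeling error. The data below are synthetic placeholders, not SEER records, and the age-risk relationship is invented for illustration.

```python
# Illustrative sketch on synthetic data: one-factor GLM + ROC area + bootstrap
# error estimate, mirroring the per-factor modeling described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
age = rng.uniform(1, 80, 1000)                  # hypothetical predictor
p_death = 1 / (1 + np.exp(-(age - 40) / 15))    # invented age-risk relationship
death = rng.binomial(1, p_death)                # cause-specific death (yes/no)

X = age.reshape(-1, 1)
model = LogisticRegression().fit(X, death)
auc = roc_auc_score(death, model.predict_proba(X)[:, 1])

# random-sampling (bootstrap) estimate of the error on the ROC area
boot = []
for _ in range(200):
    idx = rng.integers(0, len(age), len(age))
    boot.append(roc_auc_score(death[idx], model.predict_proba(X[idx])[:, 1]))
print(f"ROC area {auc:.2f} +/- {np.std(boot):.3f}")
```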

A Study on Multimedia Database Transmission Algorithm (멀티미디어 데이터베이스 전송 알고리즘에 관한 연구)

  • 최진탁
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.7
    • /
    • pp.921-926
    • /
    • 2002
  • The B+-tree is the most popular indexing method used by DBMSs to manage large volumes of data efficiently. However, the conventional B+-tree has two shortcomings: it incurs heavy disk I/O overhead when the database or its index is first constructed, and frequent delete operations reduce concurrency because the index structure must be reorganized repeatedly. To address these problems, most DBMSs adopt a batch construction method and a lazy deletion method. Applying a B+-tree that uses batch construction and lazy deletion in a DBMS, however, requires concurrency control and recovery techniques, and because research on these techniques is still insufficient, it is difficult to apply them to real systems. In this paper, I propose concurrency control and recovery techniques for implementing the batch construction method and the lazy deletion method in an actual DBMS. With the proposed techniques, cascading rollback is avoided by using a pending list, concurrency is enhanced by allowing insertions and deletions on the base table during every reconstruction, and transaction response time for the user is shortened by using a system queue so that batch construction is processed at the system transaction level rather than within the user's transaction.
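
As a rough illustration of the two techniques named above, the sketch below bulk-loads B+-tree leaves from pre-sorted keys (batch construction) and marks deleted keys instead of reorganizing the structure (lazy deletion). The concurrency control, recovery, and pending-list mechanism that are the paper's actual contribution are not reproduced here.

```python
# Simplified sketch: batch (bulk) construction of B+-tree leaves and lazy
# deletion via tombstones. Concurrency control, recovery, and the pending-list
# mechanism from the paper are deliberately omitted.
from bisect import bisect_left

class LeafLevel:
    def __init__(self, keys, fanout=4):
        keys = sorted(keys)
        # batch construction: slice the sorted keys straight into leaf nodes
        self.leaves = [keys[i:i + fanout] for i in range(0, len(keys), fanout)]
        self.deleted = set()          # lazy deletion: keys are marked, not removed

    def search(self, key):
        if key in self.deleted:
            return False
        for leaf in self.leaves:
            if leaf and leaf[0] <= key <= leaf[-1]:
                pos = bisect_left(leaf, key)
                return pos < len(leaf) and leaf[pos] == key
        return False

    def delete(self, key):
        self.deleted.add(key)         # physical removal happens at the next rebuild

index = LeafLevel(range(20))
index.delete(7)
print(index.search(7), index.search(8))   # False True
```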

An Effective Incremental Text Clustering Method for the Large Document Database (대용량 문서 데이터베이스를 위한 효율적인 점진적 문서 클러스터링 기법)

  • Kang, Dong-Hyuk;Joo, Kil-Hong;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.10D no.1
    • /
    • pp.57-66
    • /
    • 2003
  • With the development of the Internet and computers, the amount of information available through the Internet is increasing rapidly, and much of it is managed in document form. For this reason, research on methods for managing a large number of documents effectively is necessary. Document clustering groups a set of documents into subjects according to the similarity among them; consequently, it can be used for exploring and searching documents and can increase search accuracy. This paper proposes an efficient incremental clustering method for a set of documents that grows gradually. The incremental document clustering algorithm assigns new documents to the legacy clusters that have been identified in advance. In addition, to improve the correctness of the clustering, stop words are removed and the weight of each word is calculated by the proposed TF×NIDF function.
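
A hedged sketch of the incremental assignment step: an incoming document is weighted and compared with the centroids of the legacy clusters. The exact definition of NIDF is not given in the abstract, so the sketch assumes an IDF normalized by its maximum value; the stop-word list, threshold, and toy data are likewise placeholders.

```python
# Hedged sketch: assign an incoming document to a legacy cluster using a
# TF x NIDF style weight (NIDF assumed here to be a max-normalized IDF).
import math
from collections import Counter

STOP_WORDS = {"the", "a", "of", "and", "to", "in"}

def tf_nidf(doc, df, n_docs):
    tf = Counter(w for w in doc.lower().split() if w not in STOP_WORDS)
    max_idf = math.log(n_docs + 1)
    return {w: c * (math.log((n_docs + 1) / (df.get(w, 0) + 1)) / max_idf)
            for w, c in tf.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign(doc, centroids, df, n_docs, threshold=0.2):
    vec = tf_nidf(doc, df, n_docs)
    best = max(centroids, key=lambda c: cosine(vec, centroids[c]), default=None)
    if best is not None and cosine(vec, centroids[best]) >= threshold:
        return best                   # joins an existing (legacy) cluster
    return None                       # otherwise treated as a new topic

df = {"database": 3, "cluster": 2, "query": 4}          # toy document frequencies
centroids = {"db": {"database": 0.9, "query": 0.7}, "bio": {"gene": 0.8}}
print(assign("database query optimization", centroids, df, n_docs=10))   # db
```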

Efficient R Wave Detection based on Subtractive Operation Method (차감 동작 기법 기반의 효율적인 R파 검출)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.4
    • /
    • pp.945-952
    • /
    • 2013
  • The R wave of the QRS complex is the most prominent feature in the ECG because of its specific shape; therefore, it is taken as a reference in ECG feature extraction. R wave detection, however, suffers from the fact that the frequency bands of noise and of other components such as the P and T waves overlap with that of the QRS complex. ECG signal processing must also use the available hardware and software resources efficiently to support miniaturization and low power consumption. In other words, an algorithm is needed that exactly detects the QRS region with minimal computation while taking the person's physical condition and environment into account. Therefore, efficient QRS detection based on a SOM (Subtractive Operation Method) is presented in this paper. For this purpose, we detect the R wave through a preprocessing stage that uses a morphological filter, an empirical threshold, and a subtractive signal. We also apply a dynamic backward searching method for efficient detection. The performance of R wave detection is evaluated using the MIT-BIH arrhythmia database; the achieved scores indicate an average R wave detection rate of 99.41%.
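
The sketch below shows only the general flavor of difference-signal based R-peak picking with a refractory period; the paper's morphological filter, empirically tuned threshold, and dynamic backward search are not reproduced, and the threshold, sampling rate, and toy signal are assumptions.

```python
# Minimal sketch: R-peak picking on a squared difference (subtractive) signal
# with a simple refractory period. Illustration only; not the paper's SOM.
import numpy as np

def detect_r_peaks(ecg, fs=360, refractory_s=0.25):
    feature = np.diff(ecg) ** 2              # emphasize the steep QRS slopes
    threshold = 0.5 * feature.max()          # crude stand-in for the empirical threshold
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i, v in enumerate(feature):
        if v > threshold and i - last > refractory:
            peaks.append(i)
            last = i
    return peaks

# toy signal: flat baseline with sharp spikes standing in for R waves
ecg = np.zeros(3600)
ecg[np.arange(180, 3600, 360)] = 1.0
print(detect_r_peaks(ecg))
```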

Efficient QRS Detection and PVC(Premature Ventricular Contraction) Classification based on Profiling Method (효율적인 QRS 검출과 프로파일링 기법을 통한 심실조기수축(PVC) 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.3
    • /
    • pp.705-711
    • /
    • 2013
  • QRS detection in the ECG is the most popular and straightforward way to detect cardiac disease, but the ECG signal is difficult to analyze because of various types of noise. In a healthcare system that must continuously monitor a person's condition, the ECG signal also has to be processed in real time. In other words, an algorithm is needed that exactly detects the QRS wave with minimal computation and classifies PVCs while taking the person's physical condition and environment into account. Thus, efficient QRS detection and PVC classification based on a profiling method are presented in this paper. For this purpose, we detect the QRS complex through a preprocessing stage that uses a morphological filter, an adaptive threshold, and a search window. We also apply a profiling method that characterizes each patient's normal cardiac behavior through a hash function. The performance of R wave detection and of normal beat and PVC classification is evaluated using the MIT-BIH arrhythmia database. The achieved scores indicate an average R wave detection rate of 99.77%, a normal beat classification error rate of 0.65%, and a PVC classification rate of 93.29%.
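
One way a hash-based profile of normal beats might look is sketched below: each quantized normal beat shape is stored under a hash, and a beat that misses the profile or arrives prematurely is flagged as a PVC candidate. The quantization, premature-RR ratio, and toy beats are assumptions, not the paper's actual profiling scheme.

```python
# Hedged sketch of hash-based beat profiling: quantized normal beat shapes are
# hashed into a per-patient profile; misses or premature beats become PVC
# candidates. Details here are illustrative placeholders.
import numpy as np

def beat_key(beat, levels=8):
    """Quantize a fixed-length beat window into a hashable signature."""
    norm = (beat - beat.min()) / (np.ptp(beat) + 1e-9)
    return tuple(np.round(norm * (levels - 1)).astype(int))

def build_profile(normal_beats):
    return {hash(beat_key(b)) for b in normal_beats}

def classify(beat, rr, mean_rr, profile, premature_ratio=0.8):
    if hash(beat_key(beat)) in profile and rr > premature_ratio * mean_rr:
        return "normal"
    return "PVC candidate"

template = np.sin(np.linspace(0, np.pi, 16))       # toy normal beat shape
profile = build_profile([template] * 5)
print(classify(template, rr=0.85, mean_rr=0.9, profile=profile))    # normal
print(classify(-template, rr=0.55, mean_rr=0.9, profile=profile))   # PVC candidate
```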

Visualization of Path Expressions with Set Attributes and Methods in Graphical Object Query Languages (그래픽 객체 질의어에서 집합 속성과 메소드를 포함한 경로식의 시각화)

  • 조완섭
    • Journal of KIISE:Databases
    • /
    • v.30 no.2
    • /
    • pp.109-124
    • /
    • 2003
  • Although most commercial relational DBMSs provide a graphical query language as a user-friendly database interface, little research has been done on graphical query languages for object databases. Expressing complex query conditions in a concise and intuitive way has been an important issue in the design of graphical query languages. Since the object data model and object query languages are more complex than their relational counterparts, a graphical object query language needs a concise and intuitive representation method. We propose a graphical object query language called GOQL (Graphical Object Query Language) for object databases. By employing simple graphical notations, advanced features of object queries such as path expressions that include set attributes, quantifiers, and/or methods can be represented concisely. GOQL has excellent expressive power compared with previous graphical object query languages; we show that the path expressions of XSQL(1,2) can be represented by the simple graphical notations of GOQL. We also propose an algorithm that translates a graphical query in GOQL into a textual object query with the same semantics. Finally, we describe the results of implementing GOQL in an Internet environment.
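
The translation step can be pictured with a toy example. The function below flattens a hypothetical in-memory description of a path expression (root class, path, optional set attribute with a quantifier) into an OQL-like text query; the data structures and the output syntax are invented for illustration and are not GOQL's actual notation.

```python
# Purely illustrative sketch: flatten a toy path-expression description
# (with a set attribute and a quantifier) into an OQL-like textual query.
def to_oql(root_class, path, set_attr=None, quantifier="exists", condition=""):
    alias = root_class[0].lower()
    text = f"SELECT {alias} FROM {root_class} {alias}"
    dotted = ".".join([alias] + path)
    if set_attr:                      # quantify over the set-valued attribute
        text += (f" WHERE {quantifier} s IN {dotted}.{set_attr}: "
                 f"{condition.format(var='s')}")
    elif condition:
        text += f" WHERE {condition.format(var=dotted)}"
    return text

# "employees whose department has some project with a budget over 10000"
print(to_oql("Employee", ["dept"], set_attr="projects",
             condition="{var}.budget > 10000"))
```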

CNVR Detection Reflecting the Properties of the Reference Sequence in HLA Region (레퍼런스 시퀀스의 특성을 고려한 HLA 영역에서의 CNVR 탐지)

  • Lee, Jong-Keun;Hong, Dong-Wan;Yoon, Jee-Hee
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.6
    • /
    • pp.712-716
    • /
    • 2010
  • In this paper, we propose a novel shape-based approach for detecting CNV regions (CNVRs) by analyzing the coverage graph obtained by aligning giga-sequencing data onto the human reference sequence. The proposed algorithm proceeds in two steps: a filtering step and a post-processing step. The filtering step takes several shape parameters as input and extracts candidate CNVRs of various depths and widths. The post-processing step revises the candidate regions to compensate for errors potentially included in the reference sequence and the giga-sequencing data, filters out regions with a high ratio of GC content, and returns the final result set from the candidate CNVRs. To verify the superiority of our approach, we performed extensive experiments using giga-sequencing data publicly released by the 1000 Genomes Project and verified the accuracy by comparing our results with those registered in the DGV database. The results show that our approach successfully finds CNVRs of various shapes (gains and losses) in the HLA (Human Leukocyte Antigen) region.
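
A minimal sketch of the two-step idea, under assumed thresholds: a shape filter over the coverage graph (depth deviation and minimum width) followed by a GC-content post-filter. The depth ratio, width, and GC cutoff below are placeholders, not the paper's parameters.

```python
# Hedged sketch: shape-based filtering of a coverage graph followed by a
# GC-content post-filter. Thresholds are illustrative placeholders.
import numpy as np

def candidate_cnvrs(coverage, mean_depth, depth_ratio=1.5, min_width=50):
    """Return (start, end) spans whose depth deviates strongly from the mean."""
    flagged = (coverage > depth_ratio * mean_depth) | (coverage < mean_depth / depth_ratio)
    spans, start = [], None
    for i, f in enumerate(flagged):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_width:
                spans.append((start, i))
            start = None
    if start is not None and len(flagged) - start >= min_width:
        spans.append((start, len(flagged)))
    return spans

def gc_filter(spans, sequence, max_gc=0.65):
    keep = []
    for s, e in spans:
        seg = sequence[s:e]
        gc = (seg.count("G") + seg.count("C")) / max(len(seg), 1)
        if gc <= max_gc:
            keep.append((s, e))
    return keep

coverage = np.full(1000, 30.0)
coverage[200:300] = 80.0                  # a putative copy-number gain
sequence = "A" * 1000                     # toy reference segment
print(gc_filter(candidate_cnvrs(coverage, mean_depth=30.0), sequence))   # [(200, 300)]
```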

Multiple Classifier Fusion Method based on k-Nearest Templates (k-최근접 템플릿기반 다중 분류기 결합방법)

  • Min, Jun-Ki;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.4
    • /
    • pp.451-455
    • /
    • 2008
  • In this paper, a k-nearest templates method is proposed to combine multiple classifiers effectively. First, the method decomposes the training samples of each class into several subclasses based on the outputs of the classifiers, so that a class is represented by multiple models, and estimates a localized template by averaging the outputs for each subclass. The distances between a test sample and the templates are then calculated, and the test sample is assigned to the class that is most frequently represented among the k most similar templates. The C-means clustering algorithm is used as the decomposition method, and k is chosen automatically according to the intra-class compactness and inter-class separation of the given data set. Since the proposed method uses multiple models per class and refers to k models rather than matching only the most similar one, it obtains stable and high accuracy. Experiments on the UCI and ELENA databases showed that the proposed method performed better than conventional fusion methods.
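
The fusion rule itself is compact enough to sketch. Below, scikit-learn's hard k-means stands in for the C-means decomposition, the number of subclasses and k are fixed by hand rather than derived from intra-class compactness and inter-class separation, and the classifier score vectors are synthetic.

```python
# Sketch of the k-nearest-templates fusion rule: per-class subclass templates
# from (hard) k-means over classifier output vectors, then majority vote among
# the k nearest templates. Hyper-parameters here are fixed by hand.
import numpy as np
from sklearn.cluster import KMeans

def build_templates(outputs, labels, n_subclasses=2, seed=0):
    """outputs: (n_samples, n_outputs) stacked classifier score vectors."""
    templates, template_labels = [], []
    for c in np.unique(labels):
        member = outputs[labels == c]
        k = min(n_subclasses, len(member))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(member)
        templates.extend(km.cluster_centers_)      # localized templates
        template_labels.extend([c] * k)
    return np.array(templates), np.array(template_labels)

def knt_predict(sample, templates, template_labels, k=3):
    dist = np.linalg.norm(templates - sample, axis=1)
    nearest = template_labels[np.argsort(dist)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]               # most frequent class wins

rng = np.random.default_rng(0)
scores = np.vstack([rng.normal(0.2, 0.05, (20, 4)),   # class-0 score vectors
                    rng.normal(0.8, 0.05, (20, 4))])  # class-1 score vectors
labels = np.array([0] * 20 + [1] * 20)
T, Tl = build_templates(scores, labels)
print(knt_predict(np.full(4, 0.75), T, Tl))            # expected: 1
```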

A Study on LED Energy Efficiency in Buildings through Cloud-based User Authentication in IoT Network Environment (IoT 네트워크 환경에서 클라우드 기반의 사용자 인증을 통한 건물 내의 LED에너지 효율화 연구)

  • Ahn, Ye-Chan;Lee, Keun-Ho
    • Journal of Digital Convergence
    • /
    • v.15 no.5
    • /
    • pp.235-240
    • /
    • 2017
  • Recently, the Internet of Things has been applied to various fields and is becoming common in our everyday life. The objects around us are networked and interact with one another, providing more useful services. In this paper, we implement an algorithm that, with a single user authentication at an authentication device connected to the network, turns on the LED lighting only in the places the authorized person uses. Once a user is authenticated in a building, the system uses the authenticated information to search the database for the location of the user's laboratory, and the LEDs are lit up to the laboratory where the user works. In this way, unnecessary energy consumption is avoided in areas that people do not pass through frequently.
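
A hypothetical sketch of the control flow described above: after one successful authentication, the user's registered lab is looked up in a database and only the LED zones leading to and inside that lab are switched on. All identifiers and the in-memory tables are placeholders for the cloud-side components.

```python
# Hypothetical sketch of the authenticate-then-light flow. The dictionaries
# stand in for the cloud database; zone names and users are invented.
USER_LAB_DB = {"alice": "lab-301", "bob": "lab-214"}          # cloud user registry
ROUTE_ZONES = {"lab-301": ["lobby", "corridor-3", "lab-301"],
               "lab-214": ["lobby", "corridor-2", "lab-214"]}

def on_authenticated(user_id, led_controller):
    lab = USER_LAB_DB.get(user_id)
    if lab is None:
        return []                                  # unknown user: nothing is lit
    zones = ROUTE_ZONES[lab]
    for zone in zones:
        led_controller(zone, on=True)              # light only the zones in use
    return zones

def fake_controller(zone, on):
    print(f"LED zone {zone}: {'ON' if on else 'OFF'}")

print(on_authenticated("alice", fake_controller))
```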