• Title/Summary/Keyword: Normal database

Search Results: 335

An Approach to Automatic Generation of Fourth Normal Form for Relational Database

  • Park, Sung-Joo;Lee, Young-Gun;Cho, Hyung-Rae
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.13 no.1
    • /
    • pp.51-64
    • /
    • 1988
  • A new approach to the logical design of a 4NF database scheme, which can be easily automated, is proposed. The main features of the approach are: the introduction of a single-attribute right-hand side, an extension of the concept of independent relations, semantic analysis, and the adoption of a dependency matrix. The approach's underlying view of functional relationships differs from Fagin's in that functional and multivalued dependencies are distinguished in terms of cardinality. An algorithm for automatic generation of fourth normal form is presented and implemented. (An illustrative decomposition sketch follows this entry.)

  • PDF
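
As context for the fourth-normal-form goal above, here is a minimal, hedged sketch of a generic MVD-driven 4NF decomposition in Python. It is not the authors' dependency-matrix algorithm, and the relation, MVD, and candidate key below are invented for illustration; checking keys only against the original relation is a deliberate simplification.

```python
# Generic 4NF decomposition: repeatedly split the scheme on a nontrivial
# MVD X ->> Y whose left-hand side X is not a superkey.

def decompose_4nf(attrs, mvds, keys):
    """attrs: frozenset of attribute names; mvds: list of (X, Y) pairs;
    keys: candidate keys of the original relation (frozensets)."""
    for x, y in mvds:
        x, y = frozenset(x), frozenset(y)
        rest = attrs - x - y
        applies = x <= attrs and y <= attrs and y and rest  # nontrivial here
        if applies and not any(k <= x for k in keys):
            # Split into (X ∪ Y) and (X ∪ rest), then recurse on each part.
            return (decompose_4nf(x | y, mvds, keys)
                    + decompose_4nf(x | rest, mvds, keys))
    return [attrs]

# Hypothetical example: in COURSE-TEACHER-BOOK, teachers and books vary
# independently per course, i.e. the MVD COURSE ->> TEACHER holds.
schemes = decompose_4nf(
    frozenset({"COURSE", "TEACHER", "BOOK"}),
    mvds=[({"COURSE"}, {"TEACHER"})],
    keys=[frozenset({"COURSE", "TEACHER", "BOOK"})],
)
print(schemes)  # two schemes: {COURSE, TEACHER} and {COURSE, BOOK}
```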

Data Mining for Uncertain Data Based on Difference Degree of Concept Lattice

  • Qian Wang;Shi Dong;Hamad Naeem
    • Journal of Information Processing Systems
    • /
    • v.20 no.3
    • /
    • pp.317-327
    • /
    • 2024
  • With the rapid development of database technology, database management systems have become larger and more widely deployed. Data mining is now applied in scientific research, financial investment, marketing, insurance, medical care and other fields, so research on new data mining methods is of practical significance. Some previous work does not consider the differences between attributes, which leads to redundancy when constructing concept lattices. This paper proposes a new method for mining uncertain data based on a concept lattice with a connotation difference degree (c_diff). The method defines two rules: construction of the concept lattice is accelerated by excluding attributes with poor discriminative power, and a new technique computes c_diff without scanning the full database at each layer, reducing the number of database scans. The experimental outcomes show that the proposed method saves considerable time and improves data mining accuracy compared with the U-Apriori algorithm.
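
The abstract does not give the exact definition of c_diff, so the following Python sketch is only a rough illustration of the filtering idea: derive each attribute's extent from a (simplified, binary) context and skip attributes whose extents barely differ from ones already kept. The context, the difference measure, and the 0.25 threshold are all assumptions.

```python
# Illustrative concept-lattice pre-filtering: keep an attribute only if its
# extent differs enough from every extent already retained.

context = {  # hypothetical objects -> attribute sets (uncertainty simplified away)
    "o1": {"a", "b"}, "o2": {"a", "c"}, "o3": {"a", "b", "c"},
}

def extent(attr):
    return frozenset(o for o, attrs in context.items() if attr in attrs)

def diff_degree(e1, e2):
    # Symmetric-difference ratio: a stand-in for the paper's c_diff measure.
    return len(e1 ^ e2) / len(e1 | e2) if e1 | e2 else 0.0

kept = []  # (attribute, extent) pairs retained for lattice construction
for attr in sorted({a for s in context.values() for a in s}):
    e = extent(attr)
    if all(diff_degree(e, e2) > 0.25 for _, e2 in kept):  # assumed threshold
        kept.append((attr, e))

print(kept)  # low-discrimination attributes never enter the construction
```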

Improve the Performance of Semi-Supervised Side-channel Analysis Using HWFilter Method

  • Hong Zhang;Lang Li;Di Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.3
    • /
    • pp.738-754
    • /
    • 2024
  • Side-channel analysis (SCA) is a cryptanalytic technique that exploits physical leakages, such as power consumption or electromagnetic emanations, from cryptographic devices to extract secret keys used in cryptographic algorithms. Recent studies have shown that training SCA models with semi-supervised learning can effectively overcome the problem of scarce labeled power traces. However, training SCA models with semi-supervised learning generates many pseudo-labels, and some of these pseudo-labels can reduce the model's performance. To solve this issue, we propose the HWFilter method to improve semi-supervised SCA. This method uses a Hamming Weight Pseudo-label Filter (HWPF) to filter the pseudo-labels generated by the semi-supervised SCA model, which enhances the model's performance. Furthermore, we introduce a normal distribution method for constructing the HWPF: the Hamming weights (HWs) of power traces are obtained from the normal distribution of power points, then filtered and combined into an HWPF. The HWFilter was tested on the ASCADv1 database and the AES_HD dataset. The experimental results demonstrate that the HWFilter method can significantly enhance the performance of semi-supervised SCA models. On the ASCADv1 database, the model with HWFilter requires only 33 power traces to recover the key. On the AES_HD dataset, the model with HWFilter outperforms the current best semi-supervised SCA model by 12%.
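
A minimal sketch of the Hamming-weight filtering idea described above, not the authors' implementation: pseudo-labels whose Hamming weight disagrees with the HW estimated from a trace's power point are discarded. The leakage model, function names, and parameters are assumptions.

```python
# Filter pseudo-labeled traces whose label's Hamming weight (HW) disagrees
# with the HW estimated from a leakage point, assuming Gaussian leakage.

import numpy as np

def hamming_weight(v: int) -> int:
    return bin(v).count("1")

def estimate_hw(leak, mu, sigma):
    # Assumed linear power model: leakage varies roughly linearly with HW,
    # so quantize the standardized leakage to the nearest HW class in [0, 8].
    z = (leak - mu) / sigma
    return int(np.clip(round(4 + z), 0, 8))  # 8-bit intermediate: mean HW is 4

def hw_filter(traces, pseudo_labels, poi, mu, sigma):
    """Keep only (trace, label) pairs whose pseudo-label's HW matches the HW
    estimated from the power point of interest `poi` (all names assumed)."""
    return [(t, lab) for t, lab in zip(traces, pseudo_labels)
            if hamming_weight(lab) == estimate_hw(t[poi], mu, sigma)]
```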

KBUD: The Korea Brain UniGene Database

  • Jeon, Yeo-Jin;Oh, Jung-Hwa;Yang, Jin-Ok;Kim, Nam-Soon
    • Genomics & Informatics
    • /
    • v.3 no.3
    • /
    • pp.86-93
    • /
    • 2005
  • Human brain EST data provide important clues for understanding the molecular biology of normal brain function and the molecular pathophysiology of brain disorders. To systematically and efficiently study the function and disorders of the human brain, 45,773 human brain ESTs were collected from 27 human brain cDNA libraries constructed from normal brains and from brain disorders such as brain tumors, Parkinson's disease (PD) and epilepsy. Analysis of the 45,773 human brain ESTs using our EST analysis pipeline resulted in 38,396 high-quality ESTs; 35,906 of these were coalesced into 8,246 unique gene clusters showing significant similarity to known genes in the human RefSeq, human mRNA and UniGene databases. In addition, among the 8,246 gene clusters, 4,287 genes (52%) were found to contain full-length cDNA clones. To facilitate the extraction of useful information from these collected human brain ESTs, we developed a user-friendly interface system, the Korea Brain UniGene Database (KBUD). The KBUD web interface allows access to our human brain data through three major search modes: BioCarta pathway, keyword and BLAST searches. Each result viewed in KBUD offers comprehensive information on the analyzed human brain ESTs from our data, as well as links to various other public databases. KBUD, the first web interface for human brain EST data covering brain disorders as well as normal brains, will be a helpful system for developing a better understanding of the underlying mechanisms of the normal brain as well as of brain disorders. The KBUD system is freely accessible at http://kugi.kribb.re.kr/KU/cgi-bin/brain.pl.

Human Recognition based on Retinal Bifurcations and Modified Correlation Function

  • Amin Dehghani
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.8
    • /
    • pp.169-173
    • /
    • 2024
  • High security is now an important concern for most secure facilities, and recent advances have increased the demand for high-security systems. The growing need to control access and admit only authorized people to such places exceeds what conventional recognition methods can provide. Therefore, a novel identification method using retinal images is proposed in this paper. For this purpose, new mathematical functions are applied to corners and bifurcations. To evaluate the proposed method we use 40 retinal images from the DRIVE database, 20 normal retinal images from the STARE database and 140 normal retinal images from a locally collected database; the accuracy rate is 99.34 percent.
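
The "modified correlation function" itself is not spelled out in the abstract; the sketch below shows only a generic normalized cross-correlation over bifurcation-point neighborhoods, the kind of matching step such methods build on. Every name, the patch size, and the threshold are assumptions.

```python
# Generic normalized cross-correlation (NCC) between feature-point
# neighborhoods, as a placeholder for the paper's modified correlation.

import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_score(bifs_query, bifs_enrolled, image_q, image_e, half=7, thr=0.8):
    """Fraction of query bifurcation points whose neighborhood correlates
    strongly with some enrolled point's neighborhood (points are assumed
    to lie away from the image borders; all parameters assumed)."""
    def patch(img, y, x):
        return img[y - half:y + half + 1, x - half:x + half + 1]
    hits = 0
    for (yq, xq) in bifs_query:
        if any(ncc(patch(image_q, yq, xq), patch(image_e, ye, xe)) > thr
               for (ye, xe) in bifs_enrolled):
            hits += 1
    return hits / max(len(bifs_query), 1)
```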

An Efficient Database Design Method for Mobile Multimedia Services on Home Network Systems (홈 네트워크 시스템 상에서 모바일 멀티미디어 서비스를 위한 효과적인 데이타베이스 설계 방안)

  • Song, Hye-Ju;Park, Young-Ho;Kim, Jung-Tae;Paik, Eui-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.615-622
    • /
    • 2007
  • Recently, under the influence of multimedia content, the number of users who want multimedia content delivered to mobile devices connected to the wireless internet, such as PDP, PMP and IPTV devices, has been increasing. In this paper, we propose an efficient database design method for managing mobile multimedia services on home network systems. To this end, we build relations using the attributes required in providing multimedia services and then design a database. In particular, we propose a database design method based on normalization theory to eliminate the redundancies and update anomalies caused by nontrivial multivalued dependencies in relations. In the experiments, we compare and analyze the occurrence frequencies of data redundancies and update anomalies through query executions on the relations decomposed into normal forms. The results reveal that our database design is fairly effective.
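
As a worked illustration of the anomaly this paper targets (the relation and values are invented): when a service determines its supported devices and its content formats independently, one flat relation must store every device-format combination, and the 4NF projection removes that redundancy.

```python
# One relation with the nontrivial MVD SERVICE ->> DEVICE (and, symmetrically,
# SERVICE ->> FORMAT) must materialize the device x format cross product.
flat = [
    ("vod", "pmp", "mp4"), ("vod", "pmp", "avi"),
    ("vod", "iptv", "mp4"), ("vod", "iptv", "avi"),
]

# 4NF decomposition: project onto (SERVICE, DEVICE) and (SERVICE, FORMAT).
service_device = sorted({(s, d) for s, d, _ in flat})
service_format = sorted({(s, f) for s, _, f in flat})
print(service_device)  # [('vod', 'iptv'), ('vod', 'pmp')]
print(service_format)  # [('vod', 'avi'), ('vod', 'mp4')]

# Adding a new format now touches one row instead of one row per device,
# which is exactly the update anomaly the decomposition eliminates.
service_format.append(("vod", "mkv"))
```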

Analysis of Molecular Pathways in Pancreatic Ductal Adenocarcinomas with a Bioinformatics Approach

  • Wang, Yan;Li, Yan
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.6
    • /
    • pp.2561-2567
    • /
    • 2015
  • Pancreatic ductal adenocarcinoma (PDAC) is a leading cause of cancer death worldwide. Our study aimed to reveal the molecular mechanisms underlying the disease. Microarray data from GSE15471 (39 matched pairs of pancreatic tumor tissues and patient-matched normal tissues) were downloaded from the Gene Expression Omnibus (GEO) database. We identified differentially expressed genes (DEGs) in PDAC tissues compared with normal tissues using the limma package in R. GO and KEGG pathway enrichment analyses were then conducted with the online tool DAVID. In addition, principal component analysis was performed, and a protein-protein interaction (PPI) network was constructed with the STRING database to study relationships between the DEGs. A total of 532 DEGs were identified in the 38 PDAC tissues compared with 33 normal tissues. Principal component analysis of the top 20 DEGs could directly differentiate the PDAC tissues from the normal tissues. In the PPI network, 8 of the 20 DEGs were key genes of the collagen family. Additionally, FN1 (fibronectin 1) was a hub node in the network. The collagen-family genes as well as FN1 were significantly enriched in the complement and coagulation cascades, ECM-receptor interaction and focal adhesion pathways. Our results suggest that collagen-family genes and FN1 may play an important role in PDAC progression, and that these DEGs and enriched pathways may represent important molecular mechanisms involved in the development and progression of PDAC.
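
The study used the limma package in R; as a loose analogue only, this Python sketch ranks genes by a per-gene Welch t-test with Benjamini-Hochberg correction (SciPy and statsmodels standing in for limma's moderated statistics; data shapes and thresholds are assumptions).

```python
# Per-gene differential expression: Welch t-test tumor vs. normal, then
# Benjamini-Hochberg FDR control (a simple stand-in for limma's moderated t).

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def find_degs(expr_tumor, expr_normal, genes, fdr=0.05, min_lfc=1.0):
    """expr_*: arrays of shape (n_genes, n_samples) of log2 expression.
    Returns (gene, log-fold-change, q-value) for significant genes."""
    _, pvals = stats.ttest_ind(expr_tumor, expr_normal, axis=1, equal_var=False)
    lfc = expr_tumor.mean(axis=1) - expr_normal.mean(axis=1)
    reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return [(g, l, q) for g, l, q, r in zip(genes, lfc, qvals, reject)
            if r and abs(l) >= min_lfc]
```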

A Transformation of XML DTD to Relational Database Schema Using Functional Dependency (함수적 종속관계를 이용한 XML DTD의 관계형 스키마 변환)

  • Lee Jung-hwa;Lee Man-sik;Yun Hong-won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.7
    • /
    • pp.1604-1609
    • /
    • 2004
  • To store XML documents in a relational database, an XML DTD must be converted into a relational database schema. The hybrid inlining algorithm is commonly used for this conversion, but it has a problem: when N:N relationships are present, the relational schemas it creates do not satisfy third normal form. In this paper, we therefore propose an extended hybrid inlining algorithm to solve this problem.
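
A toy sketch of the inlining idea the paper extends (not the authors' extended algorithm; the DTD and naming scheme are invented): single leaf children become columns, while repeatable or complex children, the source of the problematic cases, get their own relation with a foreign key.

```python
# Toy DTD-to-schema inlining: leaf children with multiplicity "1" become
# columns; repeatable ("*"/"+") or complex children become separate relations.

dtd = {  # hypothetical DTD: element -> list of (child, multiplicity)
    "book":   [("title", "1"), ("author", "*")],
    "author": [("name", "1"), ("email", "1")],
}

def to_schema(root, schema=None):
    schema = {} if schema is None else schema
    cols = [f"{root}_id"]
    for child, mult in dtd.get(root, []):
        if mult == "1" and child not in dtd:    # single leaf: inline as a column
            cols.append(child)
        else:                                    # repeatable or complex: own relation
            to_schema(child, schema)
            schema[child].append(f"{root}_id")   # foreign key back to the parent
    schema[root] = cols
    return schema

print(to_schema("book"))
# {'author': ['author_id', 'name', 'email', 'book_id'], 'book': ['book_id', 'title']}
```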

SPATIAL DISTRIBUTION OF THE SPIN VECTORS OF THE DISK GALAXIES IN THE VIRGO CLUSTER

  • YUAN Q. R.;HU F. X.;HE X. T.
    • Journal of The Korean Astronomical Society
    • /
    • v.29 no.spc1
    • /
    • pp.55-56
    • /
    • 1996
  • In order to investigate the spatial orientation of the spin vectors of galaxies in the Virgo cluster, we carried out a detailed identification of all the certain and possible member disk galaxies on four UK Schmidt Telescope (UKST) IIIa-J direct plates digitized by the Automated Plate Measuring (APM) system. As a result, a relatively large and complete database of member galaxies, free of selection effects, has been established. We provide the APM-measured values of the position angle (P.A.) and the diameters at the isophotal level of 24.5 $m_J / arcsec^2$. Based on this newly generated database, an initial study of the spatial orientation of the spin vectors of galaxies in the Virgo cluster is presented. (A sketch of how inclination and spin orientation follow from such measurements appears after this entry.)

  • PDF
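
One standard way to turn such P.A. and diameter measurements into spin-vector orientations, not necessarily the authors' exact procedure, is to estimate the inclination from the apparent axis ratio with an assumed intrinsic flatness and combine it with the position angle.

```python
# Estimate a disk galaxy's inclination from its apparent axis ratio and,
# with the position angle, the spin-axis direction on the sky.

import math

Q0 = 0.2  # assumed intrinsic flatness of disk galaxies

def inclination(a_diam, b_diam, q0=Q0):
    """Holmberg-style formula: cos^2 i = (q^2 - q0^2) / (1 - q0^2)."""
    q = b_diam / a_diam
    c2 = max((q * q - q0 * q0) / (1 - q0 * q0), 0.0)
    return math.degrees(math.acos(math.sqrt(c2)))

# Example with assumed APM-style measurements: diameters in arcsec, P.A. in deg.
i = inclination(a_diam=120.0, b_diam=60.0)
pa = 35.0
print(f"inclination = {i:.1f} deg; spin axis along P.A. {(pa + 90) % 180:.0f} deg,"
      f" tilted {90 - i:.1f} deg out of the sky plane (sign ambiguous)")
```

The sign ambiguity noted in the output is intrinsic to the method: the axis ratio does not tell which edge of the disk is nearer, so each galaxy yields two possible spin vectors.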

Development of a Prototype Expert System for Intelligent Operation Aids in Rod Consolidation Process (핵연료 밀집공정의 지능적 조업을 위한 전문가시스템 모형의 개발)

  • Kim, Ho-Dong;Kim, Ki-Joo;Yoon, Wan-Ki
    • Nuclear Engineering and Technology
    • /
    • v.25 no.1
    • /
    • pp.1-7
    • /
    • 1993
  • This paper describes a prototype expert system to aid operation of the rod consolidation process. The knowledge base is composed of three database groups and 60 production rules, with object-oriented techniques correlating the database groups. The expert system is designed to track the transitions of nuclear materials through the operation areas of the rod consolidation process, to diagnose the current status under any operating condition, normal or off-normal, and to advise operators on how to properly recover from off-normal conditions. The expert system supports efficient management of nuclear material accountability and of process operation in rod consolidation. (A generic production-rule sketch follows this entry.)

  • PDF
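
As a generic illustration of the production-rule style described, not the system's actual 60-rule knowledge base (facts, rules, and advice strings are invented): a forward-chaining pass matches conditions against a working memory of process facts and emits operator advice.

```python
# Minimal production-rule loop over a working memory of process facts,
# illustrating the diagnose-and-advise behavior described above.

facts = {"glovebox_pressure": "high", "rod_count_delta": 0}

rules = [  # (name, condition over facts, advice) -- all hypothetical
    ("off-normal pressure",
     lambda f: f.get("glovebox_pressure") == "high",
     "Stop transfer; check ventilation before resuming consolidation."),
    ("material imbalance",
     lambda f: f.get("rod_count_delta", 0) != 0,
     "Re-verify rod transfer records for material accountability."),
]

def diagnose(facts):
    fired = [(name, advice) for name, cond, advice in rules if cond(facts)]
    return fired or [("normal", "All monitored conditions nominal.")]

for name, advice in diagnose(facts):
    print(f"[{name}] {advice}")
```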