• Title/Summary/Keyword: Data-Warehouse


Airport Punctuality Analysis Using Multi-Dimensional Visual Analysis Method (다차원 시각적 분석방법을 이용한 공항 정시운항 분석에 관한 연구)

  • Cho, Jae-Hee;Li, De-Kui
    • Journal of Information Technology Services
    • /
    • v.10 no.1
    • /
    • pp.167-176
    • /
    • 2011
  • Punctuality is one of the key performance indicators of the airline industry and an important service differentiator, especially for valuable customers. In addition, improving on-time performance can yield cost savings reported to range from 0.6% to 2.9% of airlines' operating revenues. Therefore, efficient management of punctuality is crucial for the industry. This study overcomes the limitations of existing punctuality analyses and develops a multi-dimensional model for airport punctuality analysis. In addition to the punctuality analysis itself, a visual analysis method is proposed. The analysis is based on actual flight data from Incheon International Airport. Using the new visual analysis method, the study discovered punctuality patterns that had not been studied before.
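
The abstract does not detail how the multi-dimensional analysis was carried out. Purely as a hedged illustration of the kind of roll-up such a model implies, the Python sketch below pivots invented flight records by airline and departure hour to compute on-time rates; the column names, the 15-minute threshold, and the data are assumptions, not taken from the paper.

```python
import pandas as pd

# Invented flight records; the actual study used Incheon International Airport data.
flights = pd.DataFrame({
    "airline": ["KE", "KE", "OZ", "OZ", "7C", "7C"],
    "dep_hour": [8, 18, 8, 18, 8, 18],
    "delay_min": [3, 25, 0, 40, 12, 5],
})

# Count a flight as punctual if it leaves within 15 minutes of schedule
# (a common industry convention, assumed here).
flights["on_time"] = flights["delay_min"] <= 15

# Multi-dimensional roll-up: on-time rate by airline and hour of day.
punctuality_cube = flights.pivot_table(
    index="airline", columns="dep_hour", values="on_time", aggfunc="mean"
)
print(punctuality_cube)
```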

Sharing Product Data among Heterogeneous PDM Systems Using OpenPDM (서로 다른 PDM 시스템 간에 OpenPDM을 이용한 제품데이터의 교환)

  • Yang, Jeong-Sam;Han, Soon-Hung;Mun, Du-Hwan
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.2
    • /
    • pp.89-97
    • /
    • 2008
  • Today's manufacturing environment is becoming a distributed manufacturing process in which a unique and specialized technological background is required in specific domains, rather than a single company executing all the manufacturing processes. This is especially true in the automotive industry, where the sharing of product data between companies is widespread; however, this kind of data sharing raises many interoperability problems. When each company has its own method of managing product data, sharing that data in a distributed environment becomes a major problem. A data translator or data mapping module has to be developed for exchanging data between heterogeneous product data management (PDM) systems; moreover, such a module must be continually changed and improved because PDM systems themselves change for many reasons. In addition, the growth in corporate partnerships increases the burden of developing and maintaining such modules and creates further data exchange problems as system complexity grows. This paper introduces a way of exchanging product data among heterogeneous PDM systems through the use of OpenPDM, a kind of virtual data warehouse. The implementation of a PDM integration system is also discussed with respect to the requirement of logically integrating product data that are physically distributed.
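
The abstract describes OpenPDM only as a kind of virtual data warehouse between PDM systems. As a hedged sketch of that general adapter idea, assuming nothing about OpenPDM's actual interfaces, per-vendor mappings onto a neutral product model might look like this; every class and field name here is hypothetical.

```python
from dataclasses import dataclass

# Neutral product-data record, standing in for the virtual warehouse's common model.
@dataclass
class PartRecord:
    part_id: str
    name: str
    revision: str

class PdmAdapter:
    """Maps one vendor-specific PDM representation to the neutral model."""
    def to_neutral(self, native: dict) -> PartRecord:
        raise NotImplementedError

class VendorAAdapter(PdmAdapter):
    def to_neutral(self, native: dict) -> PartRecord:
        return PartRecord(native["PartNo"], native["Title"], native["Rev"])

class VendorBAdapter(PdmAdapter):
    def to_neutral(self, native: dict) -> PartRecord:
        return PartRecord(native["id"], native["description"], native["version"])

# The "virtual warehouse" sees only neutral records, whatever the source system.
adapters = {"A": VendorAAdapter(), "B": VendorBAdapter()}
record = adapters["A"].to_neutral({"PartNo": "P-100", "Title": "Bracket", "Rev": "C"})
print(record)
```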

A Comparison of Data Extraction Techniques and an Implementation of Data Extraction Technique using Index DB -S Bank Case- (원천 시스템 환경을 고려한 데이터 추출 방식의 비교 및 Index DB를 이용한 추출 방식의 구현 -ㅅ 은행 사례를 중심으로-)

  • 김기운
    • Korean Management Science Review
    • /
    • v.20 no.2
    • /
    • pp.1-16
    • /
    • 2003
  • Previous research on data extraction and integration for data warehousing has concentrated mainly on relational DBMSs or, in part, on object-oriented DBMSs. It mostly addresses issues related to change data (delta) capture and incremental updates using the triggering mechanism of active database systems. Little attention has been paid to data extraction from other types of source systems, such as hierarchical DBMSs, or from source systems without triggering capability. This paper argues, from a practical point of view, that in order to identify data extraction techniques appropriate for different source systems we need to consider not only the types of information sources and the capabilities of ETT tools but also other factors of the source systems, such as operational characteristics (e.g., whether they provide a DBMS log, a user log, or no log, and whether timestamps are available) and DBMS characteristics (e.g., whether they support triggers). Having applied several different data extraction techniques (DBMS log, user log, triggering, timestamp-based extraction, and file comparison) to S bank's source systems (IMS, DB2, ORACLE, and SAM files), we found that the data extraction techniques available in a commercial ETT tool do not fully support extraction from the DBMS log of the IMS system. For such IMS systems, a new data extraction technique is proposed that first creates an index database and then updates the data warehouse using that index database. We illustrate this technique with an example application.
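
The abstract only outlines the Index DB technique. As a minimal sketch of the general idea, assuming the index database simply keeps a fingerprint per source record so that new or changed rows can be detected without DBMS logs or triggers, it might look like this; all names and the fingerprinting choice are assumptions.

```python
import hashlib

# Hypothetical "index database": record key -> fingerprint of its contents at the last load.
index_db = {}

def fingerprint(record):
    return hashlib.sha1(repr(sorted(record.items())).encode()).hexdigest()

def extract_deltas(source_records):
    """Return records that are new or changed since the previous load and
    refresh the index database (a stand-in for the paper's Index DB)."""
    deltas = []
    for rec in source_records:
        fp = fingerprint(rec)
        if index_db.get(rec["key"]) != fp:
            deltas.append(rec)
            index_db[rec["key"]] = fp
    return deltas

# First load: every record is a delta; second load: only the changed row is extracted.
print(extract_deltas([{"key": "1", "balance": 100}, {"key": "2", "balance": 200}]))
print(extract_deltas([{"key": "1", "balance": 100}, {"key": "2", "balance": 250}]))
```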

Pre-aggregation Index Method Based on the Spatial Hierarchy in the Spatial Data Warehouse (공간 데이터 웨어하우스에서 공간 데이터의 개념계층기반 사전집계 색인 기법)

  • Jeon, Byung-Yun;Lee, Dong-Wook;You, Byeong-Seob;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.11
    • /
    • pp.1421-1434
    • /
    • 2006
  • Spatial data warehouses provide analytical information for decision support through SOLAP (Spatial On-Line Analytical Processing) operations. Many studies have tried to reduce the analysis cost of SOLAP operations using pre-aggregation methods. These methods use an index composed of fixed-size nodes to support the concept hierarchy, so they leave many unused entries in sparse data areas and cannot support the concept hierarchy in dense data areas. In this paper, we propose a dynamic pre-aggregation index method based on the spatial hierarchy. The proposed method uses the level of the index to support the concept hierarchy. In sparse data areas, if sibling nodes hold only a few used entries, those entries are integrated into one node and the parent entries share that node. In dense data areas, if a node holds many objects, it is extended with a linked list of additional nodes and the data are stored in the linked nodes. The proposed method therefore saves the space of unused entries by integrating nodes, and it can still support the concept hierarchy because an overflowing node is extended through linked nodes rather than being split. Experimental results show that the proposed method saves both space and aggregation search cost while its building cost is similar to that of other methods.
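
The abstract describes the index only at a high level. Purely as a hedged illustration of the two ideas it mentions, overflow chaining in dense areas instead of node splits, and merging lightly used sibling nodes in sparse areas, a toy node structure could look like the following; the capacity and all names are assumptions.

```python
class AggNode:
    """Toy pre-aggregation index node: fixed capacity, extended by overflow
    chaining instead of splitting so its hierarchy level stays unchanged."""
    CAPACITY = 4

    def __init__(self):
        self.entries = []      # (region_key, aggregate_value) pairs
        self.overflow = None   # linked overflow node for dense areas

    def insert(self, key, value):
        node = self
        while len(node.entries) >= AggNode.CAPACITY:
            if node.overflow is None:
                node.overflow = AggNode()
            node = node.overflow
        node.entries.append((key, value))

def merge_sparse_siblings(siblings):
    """Sparse area: fold lightly used sibling nodes into one shared node."""
    merged = AggNode()
    for sib in siblings:
        for key, value in sib.entries:
            merged.insert(key, value)
    return merged

# Six insertions overflow a single node into one linked overflow node.
node = AggNode()
for i in range(6):
    node.insert(f"cell-{i}", i)
print(len(node.entries), node.overflow is not None)  # 4 True
```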


A Meta Analysis of the Edible Insects (식용곤충 연구 메타 분석)

  • Yu, Ok-Kyeong;Jin, Chan-Yong;Nam, Soo-Tai;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.182-183
    • /
    • 2018
  • Big data analysis is the process of discovering meaningful correlations, patterns, and trends in large data sets stored in existing data warehouse management tools and of creating new value from them. It also refers to the technology for extracting new value from large volumes of structured and unstructured data and analyzing the results. Most big data analysis methods, such as data mining, machine learning, natural language processing, and pattern recognition, come from existing statistics and computer science. Global research institutes have identified big data as one of the most notable new technologies since 2011.


Perspectives on Clinical Informatics: Integrating Large-Scale Clinical, Genomic, and Health Information for Clinical Care

  • Choi, In Young;Kim, Tae-Min;Kim, Myung Shin;Mun, Seong K.;Chung, Yeun-Jun
    • Genomics & Informatics
    • /
    • v.11 no.4
    • /
    • pp.186-190
    • /
    • 2013
  • The advances in electronic medical records (EMRs) and bioinformatics (BI) represent two significant trends in healthcare. The widespread adoption of EMR systems and the completion of the Human Genome Project drove the development of technologies for data acquisition, analysis, and visualization in the two domains. The massive amount of data from both the clinical and biology domains is expected to enable personalized, preventive, and predictive healthcare services in the near future. The integrated use of EMR and BI data needs to consider four key informatics areas: data modeling, analytics, standardization, and privacy. Bioclinical data warehouses integrating heterogeneous patient-related clinical or omics data should be considered. The representative standardization effort, the Clinical Bioinformatics Ontology (CBO), aims to provide uniquely identified concepts that include molecular pathology terminologies. Since individual genome data can easily be used to predict current and future health status, different safeguards to ensure confidentiality should be considered. In this paper, we focus on the informatics aspects of integrating the EMR community and the BI community by identifying opportunities, challenges, and approaches to provide the best possible care for our patients and the population.

Development of the Performance Benchmark Tool for Data Stream Management Systems Combined with DBMS (DBMS와 결합된 데이터스트림관리시스템을 위한 성능 평가 도구 개발)

  • Kim, Gyoung-Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.8
    • /
    • pp.1-11
    • /
    • 2010
  • Many applications of a DSMS (Data Stream Management System) require not only efficient processing of real-time stream data but also high-quality services, such as data mining and data warehousing, provided to users by combining the DSMS with a DBMS (Database Management System). In this paper we benchmark the performance of such a combined DSMS-DBMS system developed to provide high-quality services. We use stream data from a network monitoring application and combine representative DSMSs and DBMSs into a single system for the performance tests. We developed the overall performance benchmark tool in Java for our testing. For the performance tests, we combine the DSMSs STREAM and Coral8 with the DBMSs MySQL and Oracle 10g, respectively.
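
The paper's benchmark tool is written in Java and its interfaces are not given in the abstract. The following Python sketch is only a hedged illustration of the measurement idea, driving one tuple batch through a memory-window consumer and an append-only store consumer and comparing throughput; all names and the workload are invented.

```python
import time
from collections import deque

def benchmark(consume, tuples):
    """Measure throughput of a consumer over one batch of tuples
    (a stand-in for the paper's Java benchmark tool)."""
    start = time.perf_counter()
    for t in tuples:
        consume(t)
    elapsed = time.perf_counter() - start
    return len(tuples) / elapsed

# Hypothetical consumers: a sliding in-memory window (DSMS-like path) and an
# append-only persistent store (DBMS-like path; a real run would INSERT into MySQL/Oracle).
window = deque(maxlen=1000)
archive = []

packets = [{"src": i % 256, "bytes": 64} for i in range(50_000)]
print("DSMS-like path:", round(benchmark(window.append, packets)), "tuples/s")
print("DBMS-like path:", round(benchmark(archive.append, packets)), "tuples/s")
```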

Suggestions for the Study of Acupoint Indications in the Era of Artificial Intelligence (인공지능시대의 경혈 주치 연구를 위한 제언)

  • Chae, Youn Byoung
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.35 no.5
    • /
    • pp.132-138
    • /
    • 2021
  • Artificial intelligence technology sheds light on new ways of innovating acupuncture research. As acupoint selection is specific to target diseases, each acupoint is generally believed to have a specific indication. However, the specificity of acupoint selection is not always the same as the specificity of acupoint indication. In this review, we propose that the specificity of acupoint indication can be inferred from clinical data using reverse inference. Using forward inference, the acupoints prescribed for each disease can be quantified to give the specificity of acupoint selection. Using reverse inference, the diseases targeted by each acupoint can be quantified to give the specificity of acupoint indication. It is noteworthy that the selection of an acupoint for a particular disease does not imply that the acupoint has a specific indication for that disease. Electronic medical records include various symptoms and the acupoint combinations chosen for them. A data mining approach can be useful for revealing the complex relationships between diseases and acupoints in clinical data. Combining this clinical information with the bodily sensation map, the spatial patterns of acupoint indication can be further estimated. Interoperable medical data should be collected for medical knowledge discovery and clinical decision support systems. In the era of artificial intelligence, machine learning can reveal the associations between diseases and prescribed acupoints from large-scale clinical data warehouses.
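
The review describes forward versus reverse inference only in words. As a hedged numerical sketch, assuming a simple disease-by-acupoint prescription count table (the diseases, acupoints, and counts are invented), the two specificities can be computed as conditional probabilities in opposite directions.

```python
import numpy as np

# Hypothetical prescription counts (rows: diseases, columns: acupoints).
counts = np.array([
    [30, 5, 0],    # headache
    [10, 20, 5],   # low back pain
    [5, 0, 25],    # dyspepsia
], dtype=float)

# Forward inference: P(acupoint | disease), the specificity of acupoint selection.
selection = counts / counts.sum(axis=1, keepdims=True)

# Reverse inference: P(disease | acupoint), the specificity of acupoint indication.
indication = counts / counts.sum(axis=0, keepdims=True)

print(selection.round(2))
print(indication.round(2))
```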

A GML-based Schema for Data Cube Construction in a Spatial Data Warehouse (공간 데이터 웨어하우스에서 데이터큐브 구축을 위한 GML 기반의 스키마)

  • Kwak Dong-Uk;You Byeong-Seob;Lee Dong-Uk;Lee Jae-Dong;Bae Hae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.05a
    • /
    • pp.93-96
    • /
    • 2006
  • This paper proposes a schema for building a spatial data warehouse based on GML, the OGC standard specification for encoding spatial information. A GML-based schema can define spatial as well as non-spatial information. Using XML Schema, we show example definitions of the overall cube schema, the dimension schemas, and the fact table schema. The proposed approach therefore makes it easy to integrate data across heterogeneous systems through GML and allows spatial as well as non-spatial information to be exploited. It also makes the concept hierarchy relationships of the spatial data warehouse easy to express and its structure easy to understand.
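
The abstract states the schema idea without reproducing the actual XML Schema definitions. As a hedged sketch only, the following Python snippet emits a toy cube definition with one spatial dimension whose levels reference GML geometry property types and one measure; the element and type names are illustrative, not taken from the paper.

```python
import xml.etree.ElementTree as ET

# Toy cube definition: one spatial dimension and one measure.
cube = ET.Element("Cube", name="SalesCube")

dim = ET.SubElement(cube, "Dimension", name="Region")
ET.SubElement(dim, "Level", name="City", type="gml:PointPropertyType")
ET.SubElement(dim, "Level", name="Province", type="gml:PolygonPropertyType")

fact = ET.SubElement(cube, "FactTable", name="Sales")
ET.SubElement(fact, "Measure", name="amount", type="xs:decimal")

print(ET.tostring(cube, encoding="unicode"))
```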


The AH Index for Efficient Query Processing in ORDBMS-based Data Warehouses (ORDBMS 기반 데이터 웨어하우스에서 효율적인 질의 처리를 위한 AH 인덱스)

  • 장혜경;이정남;조완섭;이충세;김홍기
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.137-139
    • /
    • 2000
  • This paper proposes the AH (Attribute Hierarchy) index, which improves query processing performance in data warehouses built on object-relational DBMSs (ORDBMSs), currently in the spotlight as next-generation DBMSs, together with a query processing technique that uses it. To date, little research has been done on improving the performance of ORDBMS-based data warehouses. Since a data warehouse assumes volumes of data incomparably larger than those of conventional databases, guaranteeing adequate performance is essential even when the warehouse is built on an ORDBMS. By using the proposed AH index, the join and grouping operations frequently used in analytical data warehouse queries are replaced with inexpensive index access operations, and query processing cost remains nearly constant regardless of the data volume.
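
The AH index itself is not specified in the abstract. As a hedged sketch of the effect it claims, replacing joins and grouping with index lookups, one can imagine an index that maps each attribute value at each hierarchy level directly to fact row ids; the data, level names, and structure below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical fact rows: (row_id, city, sales amount); "city" rolls up to "province".
facts = [(0, "Suwon", 100), (1, "Suwon", 150), (2, "Busan", 80), (3, "Jeonju", 60)]
city_to_province = {"Suwon": "Gyeonggi", "Busan": "Busan", "Jeonju": "Jeonbuk"}

# Attribute-hierarchy-style index: each attribute value at each level points
# straight to fact row ids, so grouping needs no join with dimension tables.
ah_index = {"city": defaultdict(list), "province": defaultdict(list)}
for row_id, city, _ in facts:
    ah_index["city"][city].append(row_id)
    ah_index["province"][city_to_province[city]].append(row_id)

def total_sales(level, value):
    """Aggregate a measure via index access only (no join, no full scan)."""
    return sum(facts[row_id][2] for row_id in ah_index[level][value])

print(total_sales("province", "Gyeonggi"))  # 250
```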
