• Title/Summary/Keyword: SQL-database


A Nonunique Composite Foreign Key-Based Approach to Fact Table Modeling and MDX Query Composing (비유일 외래키 조합 복합키 기반의 사실테이블 모델링과 MDX 쿼리문 작성법)

  • Yu, Han-Ju; Lee, Duck-Sung; Choi, In-Soo
    • Journal of the Korea Society of Computer and Information, v.12 no.1 s.45, pp.177-188, 2007
  • A star schema consists of a central fact table surrounded by one or more dimension tables. Each row in the fact table contains a multi-part primary key (or a composite foreign key) along with one or more columns containing various facts about the data stored in the row. Each of the composite foreign key components is related to a dimension table. The combination of keys in the fact table creates a composite foreign key that is unique to the fact table record. In real-world applications, however, particularly financial ones, the composite foreign key is rarely unique to the fact table record. In order to make the composite foreign key a determinant in real-world applications, some precalculation might be performed in the SQL relational database and cached in the OLAP database. However, this approach has many drawbacks and in some cases gives users the wrong results. In this paper, an approach to fact table modeling and related MDX query composing is proposed that can be used in real-world applications without any precalculation and gives users correct results.

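To make the star-schema terms in the abstract above concrete, the minimal sketch below builds a fact table whose composite foreign key (date_id, account_id) is deliberately not unique, which is the situation the paper addresses. The table and column names are illustrative only, and SQLite stands in for the SQL relational database; the aggregate query hints at the kind of roll-up an MDX query against the cube would express.

```python
import sqlite3

# Minimal, illustrative star schema: two dimension tables and one fact table.
# The fact table's composite foreign key (date_id, account_id) is NOT unique,
# so it cannot serve as a primary key on its own.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, cal_date TEXT);
CREATE TABLE dim_account (account_id INTEGER PRIMARY KEY, name     TEXT);
CREATE TABLE fact_txn (
    date_id    INTEGER REFERENCES dim_date(date_id),
    account_id INTEGER REFERENCES dim_account(account_id),
    amount     REAL
);  -- no PRIMARY KEY: the composite foreign key may repeat
""")
con.executemany("INSERT INTO dim_date VALUES (?, ?)", [(1, "2007-01-01")])
con.executemany("INSERT INTO dim_account VALUES (?, ?)", [(10, "Cash")])
# Two fact rows share the same (date_id, account_id) combination.
con.executemany("INSERT INTO fact_txn VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 10, 250.0)])

# Aggregating over the dimensions still yields one figure per key combination,
# which is the kind of roll-up an MDX query against the cube would express.
for row in con.execute("""
    SELECT d.cal_date, a.name, SUM(f.amount)
    FROM fact_txn f
    JOIN dim_date d    ON d.date_id    = f.date_id
    JOIN dim_account a ON a.account_id = f.account_id
    GROUP BY d.cal_date, a.name
"""):
    print(row)   # ('2007-01-01', 'Cash', 350.0)
```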

A Study on the Construction of Database, Online Management System, and Analysis Instrument for Biological Diversity Data (생물다양성 자료의 데이터베이스화와 온라인 관리시스템 및 분석도구 구축에 관한 연구)

  • Bec Kee-Yul; Jung Jong-Chul; Park Seon-Joo; Lee Jong-Wook
    • Journal of Environmental Science International, v.14 no.12, pp.1119-1127, 2005
  • The management of data on biological diversity is presently complex and confusing. This study was initiated to construct a database in which such data can be stored, together with an online management system and an analysis instrument, to correct the problems inherent in the current incoherent storage methods. MySQL was used as the DBMS (DataBase Management System), and the program was built primarily with Java technology. The program was also designed to adapt to constantly changing requirements; we aimed to accomplish this by making the programming technology and patterns easy and quick to modify. To this end, an effective and flexible database schema was devised to store and analyze diversity data. Even users with no knowledge of databases should be able to access this management instrument and easily manage the database through the World Wide Web. On the basis of data stored in this manner, the analysis instrument supplied on the World Wide Web can be used routinely for various databases. By supplying the derived results in a simple table and making them visible with simple charts, researchers can easily adapt these methods to various data analyses. As the diversity data are stored in a database rather than in ordinary files, this study makes precise, error-free, and high-quality storage possible in a consistent manner. The methods proposed here should also minimize the errors that might appear in data search, data movement, or data conversion by supplying the management instrument on the Web. The study also sought to derive the various results to the required level and to execute comparative analyses without the lengthy time otherwise necessary, by supplying an analytical instrument yielding results similar to those of other methods of analysis. The results of this research may be summarized as follows: 1) This study suggests methods of storage that give consistency to diversity data. 2) This study prepared a foundation for comparative analysis of various data. 3) It may suggest further research, which could lead to better standardization of diversity data and to better methods for predicting changes in species diversity.

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee; Jang, Heewon; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.253-266, 2018
  • This paper proposes a methodology that applies sequence tagging to improve the performance of NER (Named Entity Recognition) used in a QA system. In order to retrieve the correct answers stored in the database, the user's query must be translated into a database language such as SQL (Structured Query Language) so that the computer can interpret it. This is the process of identifying the class or data name contained in the database. The existing method, which looks up the words of the query in the database and recognizes the matching objects, cannot distinguish homophones or word phrases because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the calculation becomes large. To overcome these problems, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address a weakness of neural network models, which cannot identify untrained words, by using an ontology knowledge based feature. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. In order to evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, to test whether words that appear in the database but were never seen in training could still be identified correctly. As a result, objects could be recognized in context, untrained words could be recognized without re-training the Bidirectional LSTM-CRF model, and the overall performance of object recognition was confirmed to improve.
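
A minimal sketch of the core idea, concatenating an ontology-derived feature with word embeddings in a bidirectional LSTM tagger, is given below, assuming PyTorch. The tag set, the binary "token appears in the ontology" feature, and all dimensions are placeholders, and the CRF layer that the paper places on top of the emission scores is omitted to keep the sketch short.

```python
import torch
import torch.nn as nn

class BiLSTMTaggerWithOntology(nn.Module):
    """BiLSTM emission model whose input is a word embedding concatenated
    with a small ontology feature (e.g., 1.0 if the token matches an
    ontology entry, else 0.0). A CRF layer would normally decode the
    emissions; it is omitted here to keep the sketch short."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128, onto_dim=1):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + onto_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids, onto_feats):
        # token_ids: (batch, seq_len); onto_feats: (batch, seq_len, onto_dim)
        x = torch.cat([self.emb(token_ids), onto_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)            # (batch, seq_len, num_tags) emission scores

# Toy usage: a 5-token query, with the 3rd token flagged as an ontology match.
model = BiLSTMTaggerWithOntology(vocab_size=1000, num_tags=5)
tokens = torch.randint(0, 1000, (1, 5))
onto = torch.tensor([[[0.0], [0.0], [1.0], [0.0], [0.0]]])
emissions = model(tokens, onto)
print(emissions.shape)                # torch.Size([1, 5, 5])
```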

Design of the Web based Mini-PACS (웹(Web)을 기반으로 한 Mini-PACS의 설계)

  • 안종철; 신현진; 안면환; 박복환; 김성규; 안현수
    • Progress in Medical Physics, v.14 no.1, pp.43-50, 2003
  • PACS has mostly been used in large-scale hospitals due to the expensive initial cost of setting up the system. The PACS network is independent of other networks: the user's PC has to be connected physically to the PACS network, and the image viewer has to be installed. The web-based mini-PACS can inexpensively store, manage, and search the large quantity of radiologic images acquired in a hospital. An authorized user can search and diagnose radiologic images using a web browser anywhere an Internet connection is available. The implemented image viewer, used to diagnose radiologic images, supports the DICOM standard and was implemented with Java programming technology. The Java programming language is cross-platform, which makes the system easier to upgrade than others. An image filter was added to the viewer so that radiologic images can be examined in detail. In order to access the database, the user points the web browser to the URL of the web-based PACS. The invoked Perl script generates an HTML file that displays a query form with two fields, patient name and patient ID. The user fills out the form and submits the request via the Perl script, which runs the search against the relational database to determine which patients correspond to the input criteria. The user then selects a patient and obtains a list of the patient's studies and images.

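The query flow described in the abstract (web form with patient name and patient ID, a script, a relational lookup, then a study list) can be sketched roughly as below. The original system uses a Perl CGI script against its own schema, so this Python/SQLite version only illustrates the lookup step, and the table and column names are hypothetical.

```python
import sqlite3

def find_studies(db_path, patient_name=None, patient_id=None):
    """Return (patient, study_date, image_count) rows matching the submitted
    form fields, mirroring the two-field query form (name / ID)."""
    clauses, params = [], []
    if patient_name:
        clauses.append("p.name LIKE ?")
        params.append(f"%{patient_name}%")
    if patient_id:
        clauses.append("p.patient_id = ?")
        params.append(patient_id)
    where = " AND ".join(clauses) or "1=1"
    sql = f"""
        SELECT p.name, s.study_date, COUNT(i.image_id)
        FROM patients p
        JOIN studies s ON s.patient_id = p.patient_id
        LEFT JOIN images i ON i.study_id = s.study_id
        WHERE {where}
        GROUP BY p.name, s.study_date
    """
    with sqlite3.connect(db_path) as con:
        return con.execute(sql, params).fetchall()
```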

Development of new on-line statistical program for the Korean Society for Radiation Oncology

  • Song, Si Yeol; Ahn, Seung Do; Chung, Weon Kuu; Shin, Kyung Hwan; Choi, Eun Kyung; Cho, Kwan Ho
    • Radiation Oncology Journal, v.33 no.2, pp.142-148, 2015
  • Purpose: To develop a new on-line statistical program for the Korean Society for Radiation Oncology (KOSRO) to collect and extract medical data in radiation oncology more efficiently. Materials and Methods: The statistical program is a web-based program. The directory was placed in a sub-folder of the KOSRO homepage, and its web address is http://www.kosro.or.kr/asda. The server operating system is Linux and the web server is the Apache HTTP server. MySQL is adopted as the database (DB) server and PHP is the dedicated scripting language. Each ID and password are controlled independently, and all screen pages for data input or analysis are designed to be user-friendly. Scroll-down menus are used extensively for the convenience of users and the consistency of data analysis. Results: Year of data is one of the top categories, and the main topics include human resources, equipment, clinical statistics, specialized treatment, and research achievement. Each topic or category has several subcategorized topics. A real-time on-line report of the analysis is produced immediately after each data entry, and the administrator is able to monitor the status of data input for each hospital. Backups of the data as spreadsheets can be accessed by the administrator and used for academic work by any member of the KOSRO. Conclusion: The new on-line statistical program was developed to collect data from departments of radiation oncology nationwide. The intuitive screens and consistent input structure are expected to encourage member hospitals to enter data, and the annual statistics should be a cornerstone of advances in radiation oncology.
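
The enter-then-report workflow described above (per-hospital data entry followed by an immediately regenerated summary) might look roughly like the following. The real system is PHP on MySQL; Python with SQLite is used here purely for illustration, and the category and column names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE annual_stats (
    hospital TEXT, year INTEGER, category TEXT, value INTEGER)""")

def enter(hospital, year, category, value):
    """One data-entry action from a member hospital's input screen."""
    con.execute("INSERT INTO annual_stats VALUES (?, ?, ?, ?)",
                (hospital, year, category, value))

def realtime_report(year):
    """Society-wide totals per category, regenerated after every entry."""
    return con.execute("""SELECT category, SUM(value)
                          FROM annual_stats WHERE year = ?
                          GROUP BY category""", (year,)).fetchall()

enter("Hospital A", 2015, "patients_treated", 1200)
enter("Hospital B", 2015, "patients_treated", 800)
print(realtime_report(2015))   # [('patients_treated', 2000)]
```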

Multi-type and shape data meta management and dynamic user configurable interface method (다종다형 자료 메타 관리 및 사용자 동적 구성 가능한 검색 인터페이스 제공 방안)

  • Choi, Myungjin; Kim, Taeyoung; Lee, Minseob; Yang, Yunjung; Yoon, Kyoungwon; Kim, Moongi
    • Journal of Satellite, Information and Communications, v.12 no.1, pp.81-87, 2017
  • In this paper, we present a system in which users can search and manage data through a unified interface and define search fields dynamically. The first feature of this system is that it can manage meta information for multiple heterogeneous data types and formats. Second, it provides a database integration bus that supports easy integration between various systems. Third, search items can be configured per user, so that each user can customize how the heterogeneous data are searched. The system studied in this paper is expected to be capable of managing big data, a topic currently receiving much attention in the field of ICT. In addition, it should make it possible to effectively manage heterogeneous, multi-format data in various fields in the future and to easily integrate systems running in different environments.
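
One plausible way to realize per-user configurable search fields over heterogeneous metadata is to build the WHERE clause from a whitelist of fields assigned to each user, as sketched below. The field names and storage layout are assumptions, not the paper's actual design.

```python
import sqlite3

# Per-user search configuration: only whitelisted metadata fields are exposed.
USER_SEARCH_FIELDS = {
    "alice": ["title", "sensor"],
    "bob":   ["title", "acquired_on", "format"],
}

def build_search(user, criteria):
    """Turn a dict of {field: value} into a parameterized query, keeping only
    the fields this user has configured (which also guards against injection)."""
    allowed = USER_SEARCH_FIELDS.get(user, [])
    clauses, params = [], []
    for field, value in criteria.items():
        if field in allowed:
            clauses.append(f"{field} LIKE ?")
            params.append(f"%{value}%")
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT * FROM metadata WHERE {where}", params

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE metadata (title TEXT, sensor TEXT, acquired_on TEXT, format TEXT)")
con.execute("INSERT INTO metadata VALUES ('Seoul scene', 'SAR', '2017-01-03', 'GeoTIFF')")
# 'format' is ignored for alice because it is not in her configured fields.
sql, params = build_search("alice", {"title": "Seoul", "format": "GeoTIFF"})
print(con.execute(sql, params).fetchall())
```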

Study on development of vessel shore report management system for IMO MSP 8

  • Rind, Sobia; Mo, Soo-Jong; Yu, Yung-Ho
    • Journal of Advanced Marine Engineering and Technology, v.40 no.5, pp.418-428, 2016
  • In this study, a Vessel Shore Report Management System (VSRMS) is developed for the International Maritime Organization (IMO) Maritime Service Portfolio (MSP) Number 8, which comprises vessel shore reporting. Several documents have to be completed before the arrival or departure of a vessel at a port, as each national port has its own reporting format and data. The present vessel reporting system is inefficient, time-consuming, and involves excessive paperwork, which results in duplications and errors. Moreover, the reporting documents required by the current IMO standard include insufficient information compared with what national ports request. To solve this problem, the vessel reporting formats and data contents of various national ports are investigated in this study. Initially, the vessel reporting information of various national ports is collected and analyzed. Subsequently, a database structure for managing vessel reporting data for ports worldwide is devised. To make the transfer of data and the exchange of vessel report information more reliable, efficient, and paper-free, VSRMS, a software application for the simplification and facilitation of vessel report formalities, is developed. The application is developed with the Microsoft C#.NET programming language in Microsoft Visual Studio with Framework 4.5. It provides a user interface and a backend MySQL server used for database management. SAP Crystal Reports 2013 is used for designing and generating vessel reports in the original report formats. The VSRMS can facilitate vessel reporting and improve data accuracy through the reduction of input data, efficient data exchange, and reduction of communication costs. Adoption of the VSRMS will allow the vessel shore reporting system to be automated, resulting in enhanced work efficiency for shipping companies. Based on this information system and architecture, the consensus of international organizations such as the IMO, the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA), the Federation of National Associations of Ship Brokers and Agents (FONASBA), and the Baltic and International Maritime Council (BIMCO) is required so that vessel reporting can be standardized internationally.
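
A very small sketch of what a database structure for "vessel reporting data for ports worldwide" could look like is given below. The real VSRMS is written in C# against MySQL; these Python/SQLite tables and column names are hypothetical and only show how per-port report requirements and submitted reports might be kept separate.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Which fields each national port requires (formats differ per port).
CREATE TABLE port_report_fields (
    port_code  TEXT,      -- e.g. a UN/LOCODE
    field_name TEXT,
    required   INTEGER    -- 1 = mandatory for this port
);
-- One submitted arrival/departure report and its field values.
CREATE TABLE vessel_reports (
    report_id  INTEGER PRIMARY KEY,
    imo_number TEXT,
    port_code  TEXT,
    field_name TEXT,
    value      TEXT
);
""")

# A port can then be asked which mandatory fields are still missing
# from a given report before it is sent ashore.
missing = con.execute("""
    SELECT f.field_name
    FROM port_report_fields f
    WHERE f.port_code = ? AND f.required = 1
      AND f.field_name NOT IN (
          SELECT r.field_name FROM vessel_reports r WHERE r.report_id = ?)
""", ("KRPUS", 1)).fetchall()
print(missing)
```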

Development of a Web-based Geovisualization System using Google Earth and Spatial DBMS (구글어스와 공간데이터베이스를 이용한 웹기반 지리정보 표출시스템 개발)

  • Im, Woo-Hyuk; Lee, Yang-Won; Suh, Yong-Cheol
    • Spatial Information Research, v.18 no.4, pp.141-149, 2010
  • One of the recent trends in Web-based GIS is system development using FOSS (Free and Open Source Software). Open source software is independent of the technologies of commercial software and can increase the reusability and extensibility of existing systems. In this study, we developed a Web-based GIS for interactive visualization of geographic information using Google Earth and a spatial DBMS (database management system). The Google Earth Plug-in and the Google Earth API (application programming interface) were used to embed a geo-browser in the Web browser. In order to integrate Google Earth with a spatial DBMS, we implemented a KML (Keyhole Markup Language) generator for transmitting server-side data according to the user's query and converting the data into various forms of KML for geovisualization on the Web. Our prototype system was tested using a time series of LAI (leaf area index), a forest map, and crop yield statistics. The demonstration included the geovisualization of raster and vector data in the form of an animated map and a 3-D choropleth map. We anticipate that our KML generator and system framework will be extended to a more comprehensive geospatial analysis system on the Web.
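
The KML generator described above, which turns the result of a server-side spatial query into KML for the Google Earth plug-in, might be sketched as follows. The placemark fields and the in-line rows are assumptions; in the actual system the rows would come from the spatial DBMS according to the user's query.

```python
import xml.etree.ElementTree as ET

def rows_to_kml(rows):
    """Convert (name, lon, lat, value) rows -- e.g. crop yield per district
    centroid -- into a minimal KML document string."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat, value in rows:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        ET.SubElement(pm, "description").text = f"value = {value}"
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

# In the real system the rows would be fetched from the spatial DBMS per query.
print(rows_to_kml([("District A", 129.05, 35.18, 4.2)]))
```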

Web-Based Data Processing and Model Linkage Techniques for Agricultural Water-Resource Analysis (농촌유역 물순환 해석을 위한 웹기반 자료 전처리 및 모형 연계 기법 개발)

  • Park, Jihoon; Kang, Moon Seong; Song, Jung-Hun; Jun, Sang Min; Kim, Kyeung; Ryu, Jeong Hoon
    • Journal of The Korean Society of Agricultural Engineers, v.57 no.5, pp.101-111, 2015
  • Establishment of appropriate data in certain formats is essential for agricultural water cycle analysis, which involves complex interactions and uncertainties such as climate change, social and economic change, and watershed environmental change. The main objective of this study was to develop web-based Data processing and Model linkage Techniques for Agricultural Water-Resource analysis (AWR-DMT). The developed techniques consist of database development, a data processing technique, and a model linkage technique. The study watersheds were the upper Cheongmi stream and Geunsam-Ri. The database was constructed using MS SQL with data codes, watershed characteristics, reservoir information, weather station information, meteorological data, processed data, hydrological data, and paddy field information. The AWR-DMT was developed using Python. The data processing technique generates probable rainfall data using non-stationary frequency analysis, as well as evapotranspiration data. The model linkage technique builds input data for agricultural watershed models such as the TANK and Agricultural Watershed Supply (AWS) models. This study is expected to contribute to the development of intelligent water cycle analysis by providing data processing and model linkage techniques for agricultural water-resource analysis.
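
Since the AWR-DMT itself is written in Python, a small sketch in the same spirit is given below. It computes reference evapotranspiration with the Hargreaves equation as one common stand-in (the abstract does not say which ET method the tool uses), and the station and column layout are hypothetical.

```python
import math

def hargreaves_et0(tmin, tmax, ra_mm):
    """Reference evapotranspiration (mm/day) by the Hargreaves equation.
    tmin/tmax in deg C; ra_mm is extraterrestrial radiation expressed as
    mm/day of evaporation equivalent."""
    tmean = (tmin + tmax) / 2.0
    return 0.0023 * ra_mm * (tmean + 17.8) * math.sqrt(max(tmax - tmin, 0.0))

# Hypothetical daily weather records as they might come from the meteorological
# table of the database: (station, date, tmin, tmax, Ra).
records = [
    ("Cheongmi", "2015-07-01", 21.4, 30.2, 16.8),
    ("Cheongmi", "2015-07-02", 22.0, 28.5, 16.7),
]
processed = [(st, d, round(hargreaves_et0(tmin, tmax, ra), 2))
             for st, d, tmin, tmax, ra in records]
print(processed)   # processed ET0 series, ready for the model-linkage step
```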

Applicability of Geo-spatial Processing Open Sources to Geographic Object-based Image Analysis (GEOBIA)

  • Lee, Ki-Won; Kang, Sang-Goo
    • Korean Journal of Remote Sensing, v.27 no.3, pp.379-388, 2011
  • At present, GEOBIA (Geographic Object-based Image Analysis), the heir of OBIA (Object-based Image Analysis), is regarded as an important methodology in the object-oriented paradigm for remote sensing. It deals with geo-objects in image segmentation and classification from a viewpoint different from pixel-based processing, and it links directly to GIS applications. Thus, GEOBIA software is booming. The main theme of this study is to look into the applicability of geo-spatial processing open sources to GEOBIA. However, there are few fully featured open sources for GEOBIA, which requires complicated schemes and algorithms, so a preliminary GEOBIA system running in an integrated and user-oriented environment was implemented. This work was performed using various open sources such as OTB and PostgreSQL/PostGIS. Some points differ from the widely used proprietary GEOBIA software. In this system, geo-objects are not file-based, but tightly linked with GIS layers in a spatial database management system. The mean shift algorithm, with parameters associated with spatial similarities or homogeneities, is used for image segmentation. For the classification process, a tree-based model of a hierarchical network composed of parent and child nodes is implemented by attribute join in semi-automatic mode, unlike traditional image-based classification. This integrated GEOBIA system is still at a developing stage, and further work is necessary. It is expected that this approach will help develop and extend new applications such as urban mapping or change detection linked to GIS data sets using GEOBIA.
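
The attribute-join classification step, in which segmented geo-objects stored as GIS layers in the spatial DBMS receive class labels from a table of parent and child nodes, might be expressed roughly as the PostGIS-style query below, issued here through psycopg2. The table and column names, and the band-range rule, are illustrative rather than the ones used in the paper's system.

```python
import psycopg2  # PostgreSQL/PostGIS driver

# Assign a class to each image segment by joining its attributes (mean band
# values, area, ...) to a hierarchical class-rule table of parent/child nodes.
SQL = """
SELECT s.segment_id,
       c.class_name,
       ST_Area(s.geom) AS area
FROM   segments s
JOIN   class_nodes c
       ON  s.mean_band1 BETWEEN c.band1_min AND c.band1_max
       AND (c.parent_id IS NULL OR s.parent_class = c.parent_id)
"""

def classify_segments(dsn="dbname=geobia user=gis"):
    """Run the attribute join inside the spatial DBMS and return labeled segments."""
    with psycopg2.connect(dsn) as con, con.cursor() as cur:
        cur.execute(SQL)
        return cur.fetchall()
```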