Title/Summary/Keyword: mapping database

Search results: 375

MAPPING SOIL ORGANIC MATTER CONTENT IN FLOODPLAINS USING A DIGITAL SOIL DATABASE AND GIS TECHNIQUES: A CASE STUDY WITH A TOPOGRAPHIC FACTOR IN NORTHEAST KANSAS

  • Park, Sunyurp
    • Spatial Information Research
    • /
    • v.10 no.4
    • /
    • pp.533-550
    • /
    • 2002
  • Soil organic matter (SOM) content and other physical soil properties were extracted from a digital soil database, the Soil Survey Geographic (SSURGO) database, to map the amount of SOM and determine its relationship with topographic positions in floodplain areas along a river basin in Douglas County, Kansas. In the floodplains, results showed that slope and SOM content had a significant negative relationship. Soils near river channels were deep and nearly level, and they had the greatest SOM content in the floodplain areas. For the whole county, SOM content was influenced primarily by soil depth and percent SOM by weight. Among different slope areas, soils on mid-range slopes (10-15%) and ridgetops had the highest SOM content because they had relatively high percent SOM content by weight and very deep soils, respectively. SOM content was also significantly variable among different land cover types. Forest/woodland had significantly higher SOM content than others, followed by cropland, grassland, and urban areas.
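A minimal sketch of how such an analysis might be reproduced from SSURGO-style tabular attributes, using a simple depth-weighted SOM proxy; the column names and values are hypothetical illustrations, not the paper's actual GIS workflow:

```python
# A minimal sketch (not from the paper): correlate slope with a simple
# SOM-content proxy built from SSURGO-style attributes.
# The column names and values below are hypothetical illustrations.
import pandas as pd
from scipy.stats import pearsonr

units = pd.DataFrame({
    "map_unit":         ["A", "B", "C", "D"],
    "slope_pct":        [1.0, 3.5, 8.0, 14.0],     # surface slope
    "soil_depth_cm":    [180, 150, 90, 60],        # depth of the soil profile
    "om_pct_by_weight": [3.2, 2.8, 2.1, 1.6],      # percent SOM by weight
})

# Rough proxy for SOM "amount": deeper soils with a higher percent SOM by
# weight hold more organic matter per unit area.
units["som_content"] = units["soil_depth_cm"] * units["om_pct_by_weight"] / 100.0

r, p = pearsonr(units["slope_pct"], units["som_content"])
print(f"slope vs. SOM content: r = {r:.2f}")  # the paper reports a negative relationship in floodplains
```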

IMPLEMENTATION OF SUBSEQUENCE MAPPING METHOD FOR SEQUENTIAL PATTERN MINING

  • Trang, Nguyen Thu;Lee, Bum-Ju;Lee, Heon-Gyu;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.627-630
    • /
    • 2006
  • Sequential pattern mining addresses the problem of discovering the maximal frequent sequences that exist in a given database. Sequential data are available and used everywhere in daily and scientific life, appearing as text, weather data, satellite data streams, business transactions, telecommunication records, experimental runs, DNA sequences, histories of medical records, etc. Discovering sequential patterns can assist users or scientists in predicting upcoming activities, interpreting recurring phenomena, or extracting similarities. To that end, the core of sequential pattern mining is finding the sequences that are contained frequently in the data sequences. Besides the discovery of frequent itemsets, sequential pattern mining requires arranging those itemsets into sequences and discovering which of those sequences are frequent. So before mining sequences, the main task is checking whether one sequence is a subsequence of another sequence in the database. In this paper, we implement a subsequence matching method as the preprocessing step for sequential pattern mining. The sequences matched in our implementation are normalized sequences in the form of number chains. The result given by this method is the matching information between the input mapped sequences.
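The preprocessing step described here, checking whether one normalized number chain is a subsequence of another, can be sketched as follows; this is a minimal illustration, not the authors' implementation:

```python
def is_subsequence(candidate, sequence):
    """Return True if `candidate` occurs in `sequence` with order preserved
    (elements need not be contiguous)."""
    it = iter(sequence)
    return all(item in it for item in candidate)

def match_against_database(candidate, database):
    """Report, for each normalized sequence in the database, whether the
    candidate number chain is contained in it."""
    return {i: is_subsequence(candidate, seq) for i, seq in enumerate(database)}

# Example: sequences normalized into number chains
db = [[3, 1, 4, 1, 5, 9, 2, 6], [2, 7, 1, 8, 2, 8]]
print(match_against_database([1, 5, 2], db))   # {0: True, 1: False}
```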

Implementation of Subsequence Mapping Method for Sequential Pattern Mining

  • Trang Nguyen Thu;Lee Bum-Ju;Lee Heon-Gyu;Park Jeong-Seok;Ryu Keun-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.5
    • /
    • pp.457-462
    • /
    • 2006
  • Sequential pattern mining addresses the problem of discovering the maximal frequent sequences that exist in a given database. Sequential data are available and used everywhere in daily and scientific life, appearing as text, weather data, satellite data streams, business transactions, telecommunication records, experimental runs, DNA sequences, histories of medical records, etc. Discovering sequential patterns can assist users or scientists in predicting upcoming activities, interpreting recurring phenomena, or extracting similarities. To that end, the core of sequential pattern mining is finding the sequences that are contained frequently in the data sequences. Besides the discovery of frequent itemsets, sequential pattern mining requires arranging those itemsets into sequences and discovering which of those sequences are frequent. So before mining sequences, the main task is checking whether one sequence is a subsequence of another sequence in the database. In this paper, we implement a subsequence matching method as the preprocessing step for sequential pattern mining. The sequences matched in our implementation are normalized sequences in the form of number chains. The result given by this method is the matching information between the input mapped sequences.

The LMOF Preprocessing Tool for Mapping Laboratory Vocabulary to LOINC in Clinical Document Architecture (임상문서표준규격내 검사실 용어의 LOINC 매핑을 위한 LMOF 전처리 도구)

  • Do, Hyoung-Ho;Kim, Il-Kon;Lee, Sung-Kee;Kwak, Yun-Sik
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.4
    • /
    • pp.158-165
    • /
    • 2008
  • LOINC (Logical Observation Identifiers Names and Codes) is a database and universal standard for identifying laboratory and clinical test results, developed and maintained by the Regenstrief Institute. Exchanging laboratory test results is one of the most important areas in EHR systems, and the terminology for laboratory test results has to be standardized. In this paper, we present a preprocessing tool that converts a local database in a healthcare organization to the LMOF format. The LMOF format is required by RELMA, and our work helps map laboratory test results to LOINC efficiently. The proposed tool provides a user-friendly interface and achieves a 15% keyword reduction in RELMA searches compared with RELMA searches without preprocessing.
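As a rough illustration of the kind of keyword reduction such preprocessing might perform before a RELMA search; the stopword and abbreviation tables below are hypothetical stand-ins, and the actual LMOF format and rules are those defined in the paper:

```python
import re

# Hypothetical stopword/abbreviation tables; the real LMOF rules are defined in the paper.
STOPWORDS = {"test", "serum", "routine"}
ABBREVIATIONS = {"hgb": "hemoglobin", "wbc": "leukocytes"}

def reduce_keywords(local_term: str) -> list[str]:
    """Normalize a local laboratory term into a reduced keyword list for search."""
    tokens = re.findall(r"[a-z0-9]+", local_term.lower())
    expanded = [ABBREVIATIONS.get(t, t) for t in tokens]   # expand local abbreviations
    seen, keywords = set(), []
    for t in expanded:
        if t not in STOPWORDS and t not in seen:           # drop stopwords and duplicates
            seen.add(t)
            keywords.append(t)
    return keywords

print(reduce_keywords("HGB test, serum (routine) HGB"))    # ['hemoglobin']
```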

Interoperability between NoSQL and RDBMS via Auto-mapping Scheme in Distributed Parallel Processing Environment (분산병렬처리 환경에서 오토매핑 기법을 통한 NoSQL과 RDBMS와의 연동)

  • Kim, Hee Sung;Lee, Bong Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.11
    • /
    • pp.2067-2075
    • /
    • 2017
  • Big data processing has lately become an emerging issue. As huge amounts of data are generated, data processing capability is becoming important. In big data processing, both the Hadoop distributed file system and NoSQL data stores for unstructured data are receiving a lot of attention. However, there are still problems and inconveniences in using NoSQL. For low-volume data, NoSQL MapReduce normally consumes unnecessary processing time and requires considerably more data retrieval time than an RDBMS. In order to address this NoSQL problem, this paper proposes an interworking scheme between NoSQL and a conventional RDBMS. The developed auto-mapping scheme chooses an appropriate database (NoSQL or RDBMS) depending on the amount of data, which results in fast search times. The experimental results for a specific data set show that the database interworking scheme reduces data searching time by up to 35%.
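A minimal sketch of the volume-based routing idea, with a hypothetical row-count threshold and placeholder back-end calls; the paper's actual auto-mapping scheme in a distributed parallel environment is more involved:

```python
# Minimal sketch of volume-based database selection (hypothetical threshold/APIs;
# the paper's auto-mapping scheme between NoSQL and RDBMS is more involved).
ROW_THRESHOLD = 1_000_000   # hypothetical cut-off between "low volume" and "big" data

def choose_backend(estimated_rows: int) -> str:
    """Route low-volume queries to the RDBMS and large ones to NoSQL/MapReduce."""
    return "rdbms" if estimated_rows < ROW_THRESHOLD else "nosql"

def run_query(query: str, estimated_rows: int) -> str:
    backend = choose_backend(estimated_rows)
    if backend == "rdbms":
        return f"executing on RDBMS: {query}"              # e.g. a plain SQL call
    return f"submitting MapReduce job on NoSQL store: {query}"

print(run_query("SELECT * FROM sensor WHERE id = 42", estimated_rows=5_000))
```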

A Suggestion of a Spatial Data Model for the National Geographic Institute in Korea (지도제작을 수용하는 GIS 데이타모델에 관한 연구)

  • Kim, Eun-Hyung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.3 no.2 s.6
    • /
    • pp.115-130
    • /
    • 1995
  • The National Geographic Institute (NGI), Korea's national mapping agency, has begun to digitize the national base maps to vitalize nation-wide GIS implementations. However, the NGI's cartographic database design reflects only paper map production and is considered inflexible for various applications. In order to suggest an appropriate data model and database implementation method, the approaches of two mapping agencies are analyzed: the United States Geological Survey and the Ordnance Survey of the United Kingdom. One important finding from the analysis is that each data model is designed to serve two production purposes at the same time: maps and data. By taking advantageous features from the two approaches, an ideal model is proposed. To adapt the ideal model to the present situation of the Korean GIS community, a realistic model is derived, which is an 'SDTS-oriented' data model. Because SDTS will be a Korean data transfer standard, it will be a common basis for developing other data models for different purposes.

Mapping Poverty Distribution of Urban Area using VIIRS Nighttime Light Satellite Imageries in D.I Yogyakarta, Indonesia

  • KHAIRUNNISAH;Arie Wahyu WIJAYANTO;Setia, PRAMANA
    • Asian Journal of Business Environment
    • /
    • v.13 no.2
    • /
    • pp.9-20
    • /
    • 2023
  • Purpose: This study aims to map the spatial distribution of poverty using nighttime light satellite images as a proxy indicator of economic activity and infrastructure distribution in D.I Yogyakarta, Indonesia. Research design, data, and methodology: This study uses official poverty statistics (the National Socio-economic Survey (SUSENAS) and the Poverty Database 2015) to compare satellite imagery's ability to identify poor urban areas in D.I Yogyakarta. SUSENAS, as poverty statistics at the macro level, uses expenditure to determine the poor in a region. The Poverty Database 2015 (BDT 2015), as poverty statistics at the micro level, uses asset ownership to determine the poor population in an area. Pearson correlation is used to identify the correlations among variables, and a Support Vector Regression (SVR) model is constructed to estimate the poverty level on a granular 1 km x 1 km grid. Results: The macro poverty level and moderate annual nighttime light intensity have a Pearson correlation of 74 percent, which is more significant than that of micro poverty, whose Pearson correlation was 49 percent in 2015. The SVR prediction model achieves a root mean squared error (RMSE) of up to 8.48 percent on SUSENAS 2020 poverty data. Conclusion: Nighttime light satellite imagery has potential as alternative data to support regional poverty mapping, especially in urban areas. Satellite imagery is better at predicting regional poverty based on expenditure than on asset ownership at the micro level. Nighttime light intensity better describes electricity consumption for economic activities at night, which is captured in spending on electricity rather than in asset ownership.
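A minimal sketch of the modelling pipeline described above, with synthetic placeholder data and scikit-learn's SVR standing in for the authors' exact configuration:

```python
# Hedged sketch of the pipeline: Pearson correlation between nighttime-light
# intensity and poverty rate, then an SVR model over grid cells.
# The data below are made-up placeholders; variable names are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
ntl = rng.uniform(0, 60, size=200)                  # VIIRS-like radiance per 1 km grid cell
poverty = 30 - 0.3 * ntl + rng.normal(0, 3, 200)    # synthetic poverty rate (%)

r, _ = pearsonr(ntl, poverty)
print(f"Pearson correlation: {r:.2f}")

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(ntl.reshape(-1, 1), poverty)

rmse = mean_squared_error(poverty, model.predict(ntl.reshape(-1, 1))) ** 0.5
print(f"in-sample RMSE: {rmse:.2f} percentage points")
```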

Geovisualization of Migration Statistics Using Flow Mapping Based on Web GIS (Web GIS 기반 유선도 작성을 통한 인구이동통계의 지리적 시각화)

  • Kim, Kam-Young;Lee, Sang-Il
    • Journal of the Korean Geographical Society
    • /
    • v.47 no.2
    • /
    • pp.268-281
    • /
    • 2012
  • In spite of the usefulness of migration statistics for spatially understanding social processes and identifying the social effects of spatial processes, services and analyses of these statistics have been restricted due to the complexity of their data structure. In addition, flow mapping functionality, a useful method for exploring and visualizing migration statistics, has yet to be fully represented in modern GIS applications. Given this, the purpose of this research is to demonstrate the possibility of flow mapping and exploratory spatial analysis of migration statistics in a Web GIS environment. To this end, the characteristics of the statistics were examined from database, GIS, and cartographic perspectives. Then, based on these characteristics, the O-D structure of the migration statistics was converted to spatial data appropriate for flow mapping. The Web GIS interface is specialized for the migration statistics and provides exploratory visualization through dynamic interactions such as spatial focusing and attribute filtering.
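One way to picture the conversion from an O-D migration table into flow-mapping geometry is sketched below; the field names and centroid lookup are hypothetical, and the paper implements this as a Web GIS service rather than a script:

```python
# Hedged sketch: convert an O-D migration table into GeoJSON line features for
# flow mapping. Field names and the centroid lookup are hypothetical.
import json

centroids = {"Seoul": (126.98, 37.57), "Busan": (129.08, 35.18)}   # region -> (lon, lat)
od_rows = [{"origin": "Seoul", "destination": "Busan", "migrants": 1520}]

features = []
for row in od_rows:
    o, d = centroids[row["origin"]], centroids[row["destination"]]
    features.append({
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": [list(o), list(d)]},
        "properties": {"origin": row["origin"], "destination": row["destination"],
                       "migrants": row["migrants"]},   # drives line width / attribute filtering
    })

flow_layer = {"type": "FeatureCollection", "features": features}
print(json.dumps(flow_layer, indent=2)[:200])
```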

Classification of Underwater Transient Signals Using MFCC Feature Vector (MFCC 특징 벡터를 이용한 수중 천이 신호 식별)

  • Lim, Tae-Gyun;Hwang, Chan-Sik;Lee, Hyeong-Uk;Bae, Keun-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.8C
    • /
    • pp.675-680
    • /
    • 2007
  • This paper presents a new method for the classification of underwater transient signals, which employs frame-based decisions with Mel Frequency Cepstral Coefficients (MFCC). MFCC feature vectors are extracted on a frame-by-frame basis from an input signal detected as a transient, and Euclidean distances are calculated between each of them and all MFCC feature vectors in the reference database. Each frame of the detected input signal is then mapped to the class having the minimum Euclidean distance in the reference database. Finally, the input signal is classified as the class with the maximum mapping rate. Experimental results demonstrate that the proposed method is very promising for the classification of underwater transient signals.
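The decision rule reads as nearest-neighbour voting over frames; a minimal sketch, assuming the MFCC frames are already extracted and a labelled reference set is available (the toy data and class names are invented):

```python
# Minimal sketch of the frame-based decision rule: each input frame votes for the
# class of its nearest reference MFCC vector; the majority class wins.
# Assumes MFCC frames are already extracted (shape [n_frames, n_coeffs]).
import numpy as np
from collections import Counter

def classify_transient(input_mfcc, reference_mfcc, reference_labels):
    """input_mfcc: (n_frames, d); reference_mfcc: (n_ref, d); reference_labels: (n_ref,)."""
    votes = []
    for frame in input_mfcc:
        dists = np.linalg.norm(reference_mfcc - frame, axis=1)   # Euclidean distances
        votes.append(reference_labels[int(np.argmin(dists))])    # class of nearest reference frame
    return Counter(votes).most_common(1)[0][0]                   # class with maximum mapping rate

# Toy example with 2-D "MFCC" vectors and two invented classes
ref = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]])
labels = np.array(["whale", "whale", "ship"])
frames = np.array([[0.05, 0.0], [0.12, 0.08], [4.9, 5.1]])
print(classify_transient(frames, ref, labels))                   # -> "whale"
```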

A Study on the Quality Assurance of National Basemap Digital Mapping Database (국가기본도 수치지도제작 데이터베이스의 품질 확보에 관한 연구)

  • 이현직;최석근;신동빈;박경열
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.15 no.1
    • /
    • pp.117-129
    • /
    • 1997
  • In recent years, the 1:5,000-scale Digital National Basemap (DNB) has been generated under the National Geographic Information System (NGIS) project. The generated DNB database will serve as the backdrop data for thematic maps, underground facility maps, and so on. The DNB database will be distributed to the government and private sectors in the near future, so it should meet the requirements of basic data. In order to assure the quality of the DNB database, the establishment of a quality assurance process for the database was greatly needed. In this study, we were mainly concerned with improving the quality of the digital national basemap database in the geometric aspect, as well as with the processing time required by the amount of digital data generated. As a result of this study, a quality assurance process for the DNB database is established and an automatic quality assurance program is developed. The program developed in this study contributes to the quality assurance of the DNB database as well as to economic efficiency.
