• Title/Summary/Keyword: Paper Repository


The Development of a Spatial Middleware for Efficient Retrieval of Mass Spatial Data (대용량 공간 데이타의 효율적인 검색을 위한 공간 미들웨어의 개발)

  • Lee, Ki-Young;Kim, Dong-Oh;Shin, Jung-Su;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.1
    • /
    • pp.1-14
    • /
    • 2008
  • Recently, because of the need for wide-area spatial data for spatial analysis and military purposes, demand for the efficient retrieval of mass spatial data has been increasing in the Geographic Information System (GIS) field. Oracle Spatial and ESRI ArcSDE, which are GIS software, manage mass spatial data stably and support various services, but they are inefficient at retrieving mass spatial data because of the complexity of their spatial data models and spatial operations. Therefore, in this paper, we developed a spatial middleware that can retrieve mass spatial data efficiently. The spatial middleware uses Oracle, a representative commercial DBMS, as a repository for the stable management of spatial data, and utilizes OCCI (Oracle C++ Call Interface) for efficient access to the mass spatial data in Oracle. In addition, various spatial operation methods and the Array Fetch method were used in the spatial middleware to perform efficient spatial operations and efficient retrieval of mass spatial data in Oracle, respectively. Finally, by comparing the spatial middleware with Oracle Spatial and ESRI ArcSDE through a performance evaluation, we proved its excellent retrieval and storage performance for mass spatial data.
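The middleware itself accesses Oracle through the C++ OCCI, but the benefit of the Array Fetch method is language-neutral: fetching rows in batches cuts the number of client-server round trips. A minimal Python simulation (the record contents and the batch size of 500 are invented for illustration):

```python
def array_fetch(rows, array_size):
    """Fetch all rows in batches of `array_size`, returning the rows and
    the number of simulated server round trips."""
    fetched, trips = [], 0
    for start in range(0, len(rows), array_size):
        fetched.extend(rows[start:start + array_size])  # one batch per trip
        trips += 1
    return fetched, trips

# 10,000 spatial records: row-by-row needs 10,000 round trips,
# while an array fetch of 500 needs only 20.
records = [{"id": i, "geom": (i, i)} for i in range(10_000)]
row_by_row, t1 = array_fetch(records, array_size=1)
batched, t2 = array_fetch(records, array_size=500)
```

The same rows arrive either way; only the trip count (and hence network latency) differs.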


Review of Site Characterization Methodology for Deep Geological Disposal of Radioactive Waste (방사성폐기물의 심층 처분을 위한 부지특성조사 방법론 해외 사례 연구)

  • Park, Kyung-Woo;Kim, Kyung-Su;Koh, Yong-Kwon;Jo, Yeonguk;Ji, Sung-Hoon
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.15 no.3
    • /
    • pp.239-256
    • /
    • 2017
  • In the process of site selection for radioactive waste disposal, site characterization must be carried out to obtain the input parameters needed to assess the safety and feasibility of a deep geological repository. In this paper, methodologies of site characterization for radioactive waste disposal in Korea are suggested based on foreign cases of site characterization. The IAEA recommends that site characterization for radioactive waste disposal be performed as a stepwise process, in which the site characterization period is divided into preliminary and detailed stages, in sequence. This methodology has been followed by several foreign countries in their geological disposal programs. General properties related to the geological environment are obtained at the preliminary site characterization stage; more detailed site characteristics are investigated during the detailed stage. The results of investigating the geology, hydrogeology, geochemistry, rock mechanics, solute transport, and thermal properties at a site have to be combined and constructed in the form of a site descriptive model. Based on this site descriptive model, the site characteristics can be evaluated to assess the suitability of the site for radioactive waste disposal. According to foreign site characterization cases, 7 or 8 years are expected to be needed for site characterization; however, the time required may increase if no proper national strategy is provided.

The Archival Heritage in China : Preservation, Digitalization and Standardization (중국의 당안유산(檔案遺産) 보존과 디지털화 방향)

  • Feng, Huiling
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.5 no.2
    • /
    • pp.153-165
    • /
    • 2005
  • China is a country with a long history; Chinese culture dates back thousands of years. Those thousands of years of history have left a huge quantity of archival heritage, which constitutes the memory of China. From knotted cords, tortoise shells, bronze, and bamboo to paper, film, and CDs, mankind's history has been kept and continued through the evolution of documenting media and documenting methods. In the information era, immersed in a sea of information technologies, archivists, as guardians of human memory, have to find a balance between new and old, between the unchanged and the changed. On one hand, archivists should do their best to preserve traditional archives in a usable, authentic form over the long term; on the other hand, they must face the challenges posed by electronic records. The information age is a stage in the social development of mankind, and the digitalization of archives is an important step in human history. This report is composed of three parts: first, it introduces the state of preservation of Chinese archival heritage, focusing on the "China archival heritage program" and the construction of the "Special archives repository"; second, the process of digitalizing traditional archives; third, the framework of electronic record standards.

A Study on Costs of Digital Preservation (디지털 보존의 비용요소에 관한 연구)

  • Chung, Hye-Kyung
    • Journal of the Korean Society for Information Management
    • /
    • v.22 no.1 s.55
    • /
    • pp.47-64
    • /
    • 2005
  • To guarantee long-term access to digital material, digital preservation needs to be systematized, and a detailed investigation of the cost elements of digital preservation should be carried out to secure continued budget support. To meet the needs in this area, this paper categorizes digital preservation costs into direct and indirect costs by deriving the common elements used in prior research on this issue. For the case analysis, two institutions currently undergoing large-scale digitization, a domestic university library and the National Library of Korea, were selected to analyze the current status of digital preservation and estimate the preservation cost. The case analysis shows that systematic preservation functions should be performed to guarantee long-term access to digital material, even though basic digital preservation is currently conducted. It was projected that the digital preservation cost for the two libraries, amounting to 11.8% and 8.6% of the digitization cost, respectively, would have to be injected every year. However, the estimated figures are very conservative, because the cost of preservation functions such as installing a digital repository and producing metadata was excluded from the estimation. This shows that digital preservation is a synthetic activity linked directly and indirectly to activities ranging from the production of a digital object to access to it, and an essential cost that should be considered from the beginning of a digitization project.
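The two annual rates reported in the abstract (11.8% and 8.6% of digitization cost) can be turned into a toy budget estimator. The split into direct and indirect cost follows the paper's categorization, but the 60/40 share and the one-million-unit digitization cost below are illustrative assumptions, not figures from the paper:

```python
def annual_preservation_cost(digitization_cost, rate, direct_share=0.6):
    """Estimate the yearly preservation budget as a fraction of the
    one-off digitization cost, split into direct and indirect cost."""
    total = digitization_cost * rate
    return {"total": round(total, 2),
            "direct": round(total * direct_share, 2),       # staff, storage, migration
            "indirect": round(total * (1 - direct_share), 2)}  # overhead, administration

university = annual_preservation_cost(1_000_000, 0.118)  # 11.8% case
national = annual_preservation_cost(1_000_000, 0.086)    # 8.6% case
```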

Service Level Agreement Specification Model of Software and Its Mediation Mechanism for Cloud Service Broker (클라우드 서비스 브로커를 위한 소프트웨어의 서비스 수준 합의 명세 모델과 중개 방법)

  • Nam, Taewoo;Yeom, Keunhyuk
    • Journal of KIISE
    • /
    • v.42 no.5
    • /
    • pp.591-600
    • /
    • 2015
  • An SLA (Service Level Agreement) is an essential factor that must be guaranteed to provide reliable and consistent services to users in a cloud computing environment. In particular, a contract between user and service provider based on an SLA is important in an environment that uses a cloud service brokerage. Cloud services are classified into IaaS, PaaS, and SaaS according to the IT resources they provide. Existing SLAs have difficulty reflecting the quality factors of software services, because they only consider factors of the physical network environment and lack a methodological approach. In this paper, we suggest a method to specify the quality characteristics of software and propose a mechanism and structure for exchanging SLA specifications between the service provider and the consumer. We define a meta-model for SLA specification at the SaaS level, and the quality requirements of SaaS are described in the proposed specification language. Through case studies, we verified that the proposed specification language can express a variety of software quality factors. Using a UDDI-based mediation process and architecture to interchange this specification, it is stored in a repository of quality specifications and exchanged at service binding time.
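A minimal sketch of what broker-side SLA mediation can look like: the consumer states bounds on quality factors, and the broker checks a provider's offer against them. The quality factors (availability, response time, recovery time) and their values are invented for illustration; the paper's meta-model and UDDI-based mediation are far richer than this:

```python
# A provider's offered quality levels (hypothetical values).
PROVIDER_OFFER = {"availability": 99.9, "response_ms": 200, "recovery_min": 30}

def mediate(consumer_req, offer):
    """Return the quality factors the offer fails to satisfy.
    An empty dict means the SLA can be agreed."""
    violations = {}
    for factor, (op, bound) in consumer_req.items():
        value = offer.get(factor)
        ok = value is not None and (value >= bound if op == "min" else value <= bound)
        if not ok:
            violations[factor] = (value, op, bound)
    return violations

# The consumer needs at least 99.5% availability and at most 300 ms response.
req = {"availability": ("min", 99.5), "response_ms": ("max", 300)}
agreed = mediate(req, PROVIDER_OFFER)
```

At service binding time a real broker would run this check against every specification stored in the repository, not a single hard-coded offer.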

Real-time and Parallel Semantic Translation Technique for Large-Scale Streaming Sensor Data in an IoT Environment (사물인터넷 환경에서 대용량 스트리밍 센서데이터의 실시간·병렬 시맨틱 변환 기법)

  • Kwon, SoonHyun;Park, Dongwan;Bang, Hyochan;Park, Youngtack
    • Journal of KIISE
    • /
    • v.42 no.1
    • /
    • pp.54-67
    • /
    • 2015
  • Nowadays, studies on the fusion of Semantic Web technologies are being carried out to promote the interoperability and value of sensor data in IoT environments. To accomplish this, the semantic translation of sensor data is essential for convergence with service domain knowledge. The existing semantic translation technique, however, translates static metadata into semantic data (RDF) and cannot properly handle the real-time, large-scale characteristics of an IoT environment. Therefore, in this paper, we propose a technique for translating the large-scale streaming sensor data generated in an IoT environment into semantic data using real-time, parallel processing. In this technique, we define rules for semantic translation and store them in a semantic repository. The sensor data are translated in real time with parallel processing, using these pre-defined rules and an ontology-based semantic model. To improve performance, we use Apache Storm, a real-time big data analysis framework, for parallel processing. The proposed technique was subjected to performance testing with the AWS observation data of the Meteorological Administration, which are large-scale streaming sensor data, for demonstration purposes.
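The core of rule-based semantic translation is a mapping from raw record fields to RDF predicates. A minimal sketch, assuming invented field names and SOSA-style predicates (the paper's actual rules live in a semantic repository and are applied in parallel by Storm bolts):

```python
# Pre-defined translation rules: raw field name -> RDF predicate (CURIE form).
# The predicates below are assumptions for illustration.
RULES = {
    "temp": "sosa:hasSimpleResult",
    "time": "sosa:resultTime",
}

def to_triples(record):
    """Translate one raw sensor record into a list of RDF triples
    (subject, predicate, object) using the pre-defined rules."""
    subject = f"ex:obs/{record['sensor_id']}/{record['time']}"
    return [(subject, pred, record[field])
            for field, pred in RULES.items() if field in record]

triples = to_triples({"sensor_id": "aws-101",
                      "time": "2015-01-01T00:00Z",
                      "temp": -3.2})
```

In the streaming setting, each worker applies `to_triples` to its share of the incoming records independently, which is what makes the translation parallelizable.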

Apache NiFi-based ETL Process for Building Data Lakes (데이터 레이크 구축을 위한 Apache NiFi기반 ETL 프로세스)

  • Lee, Kyoung Min;Lee, Kyung-Hee;Cho, Wan-Sup
    • The Journal of Bigdata
    • /
    • v.6 no.1
    • /
    • pp.145-151
    • /
    • 2021
  • In recent years, digital data have been generated in all areas of human activity, and there are many attempts to store and process the data safely in order to develop useful services. A data lake refers to a data repository that is independent of both the source of the data and the analytical framework that leverages the data. In this paper, we designed and implemented a tool that safely stores the various big data generated by smart cities in a data lake and applies ETL so that the data can be used in services, along with the web-based tool needed to use it effectively. The series of processes (ETL) that quality-checks and refines source data, stores it safely in a data lake, and manages it according to data life cycle policies often demands costly infrastructure, development, and maintenance, and is a labor-intensive technology. The proposed tool makes it possible to set up and execute ETL jobs, monitor them, and manage the data life cycle visually and efficiently, without specialized knowledge of the IT field. Separately, a data quality checklist guide is needed to store and use reliable data in the data lake. In addition, data migration and deletion cycles should be set and scheduled using the data life cycle management tool, to reduce data management costs.
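The ETL-plus-life-cycle flow described above can be sketched in a few lines: quality-check, refine, load into the lake, then apply a retention policy. The record fields and the 30-day retention period are illustrative assumptions, not the paper's configuration (in practice each step would be a NiFi processor):

```python
from datetime import date, timedelta

def etl(records, lake, today, retention_days=30):
    """Quality-check and refine records, load them into `lake`,
    then drop records older than the retention window."""
    for rec in records:
        if rec.get("value") is None:          # quality check: drop bad rows
            continue
        rec["value"] = float(rec["value"])    # refine: normalize the type
        lake.append(rec)                      # load into the data lake
    cutoff = today - timedelta(days=retention_days)
    lake[:] = [r for r in lake if r["date"] >= cutoff]  # life cycle policy
    return lake

lake = []
etl([{"value": "3.5", "date": date(2021, 5, 20)},   # kept
     {"value": None, "date": date(2021, 5, 21)},    # fails quality check
     {"value": "1.0", "date": date(2021, 1, 1)}],   # expired by retention
    lake, today=date(2021, 6, 1))
```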

A reuse recommendation framework of artifacts based on task similarity to improve R&D performance (연구개발 생산성 향상을 위한 태스크 유사도 기반 산출물 재사용 추천 프레임워크)

  • Nam, Seungwoo;Daneth, Horn;Hong, Jang-Eui
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.2
    • /
    • pp.23-33
    • /
    • 2019
  • Research and development (R&D) activities include analytical surveys and the writing of state-of-the-art reports on technical information. As R&D activities become more concrete, researchers often refer to related technical documents that were created in previous steps or in earlier, similar projects. This paper proposes a research-task-based reuse recommendation framework (RTRF), a reuse recommendation system that enables researchers to reuse existing artifacts efficiently. In addition to existing keyword-based retrieval and reuse, the proposed framework recommends reusable artifacts based on task similarity: documents used by other researchers in tasks similar to the researcher's own work can be recommended for reuse. A case study was performed to show researchers' efficiency in writing a technology trend report by reusing existing documents. When reuse was performed using RTRF, documents from different stages or other research fields were reused more frequently than when RTRF was not used. RTRF may contribute to the efficient reuse of desired artifacts among the huge number of R&D documents stored in a repository.
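One simple way to realize task-similarity-based recommendation is to describe each task by a keyword set and rank other tasks by set overlap. The Jaccard measure, the task descriptions, and the artifact names below are illustrative assumptions; the paper's similarity model may differ:

```python
def jaccard(a, b):
    """Set-overlap similarity between two keyword sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(my_keywords, tasks, top=1):
    """Rank other researchers' tasks by keyword similarity and
    return the artifacts of the `top` most similar tasks."""
    ranked = sorted(tasks, key=lambda t: jaccard(my_keywords, t["keywords"]),
                    reverse=True)
    return [doc for t in ranked[:top] for doc in t["artifacts"]]

tasks = [
    {"keywords": {"iot", "survey", "sensor"}, "artifacts": ["iot-trend-report.doc"]},
    {"keywords": {"compiler", "testing"}, "artifacts": ["test-plan.doc"]},
]
docs = recommend({"iot", "sensor", "trend"}, tasks)
```

Because similarity is computed over tasks rather than individual keywords, a relevant artifact can surface even when its title shares no keyword with the query.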

A proposal on a proactive crawling approach with analysis of state-of-the-art web crawling algorithms (최신 웹 크롤링 알고리즘 분석 및 선제적인 크롤링 기법 제안)

  • Na, Chul-Won;On, Byung-Won
    • Journal of Internet Computing and Services
    • /
    • v.20 no.3
    • /
    • pp.43-59
    • /
    • 2019
  • Today, with the spread of smartphones and the development of social networking services, structured and unstructured big data have been accumulating exponentially. If we analyze them well, we can obtain useful information for predicting the future. Large amounts of data must be collected first in order to analyze big data, and the web is the repository where most of these data are stored. However, because the data are so large, there are as many pages carrying unnecessary information as pages carrying useful information. This makes it important to collect data efficiently, filtering out pages with unnecessary information and collecting only those with useful information. Web crawlers cannot download all pages due to constraints such as network bandwidth, operational time, and data storage, so they should avoid visiting pages that are not relevant to what we want and download only important pages as soon as possible. This paper seeks to help resolve these issues. First, we introduce basic web crawling algorithms; for each algorithm, the time complexity and the pros and cons are described, compared, and analyzed. Next, we introduce state-of-the-art web crawling algorithms that improve on the shortcomings of the basic ones. In addition, recent research trends show that web crawling algorithms with special purposes, such as collecting sentiment words, are being actively studied. As a study of such special-purpose algorithms, we introduce a sentiment-aware web crawling technique, which is a proactive web crawling technique. The results showed that the larger the data, the higher the performance and the more storage space is saved.
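Downloading "only important pages as soon as possible" is the idea behind best-first crawling: the frontier is a priority queue keyed by a predicted relevance score, so promising pages are fetched before the page budget runs out. A minimal in-memory sketch; the link graph and the scores are invented, and a real crawler would compute scores from page content rather than look them up:

```python
import heapq

def crawl(graph, scores, seed, budget):
    """Best-first crawl: visit at most `budget` pages, always expanding
    the highest-scoring page on the frontier first."""
    frontier = [(-scores[seed], seed)]   # max-heap via negated scores
    seen, visited = {seed}, []
    while frontier and len(visited) < budget:
        _, page = heapq.heappop(frontier)   # most promising page first
        visited.append(page)                # "download" the page
        for link in graph.get(page, []):    # enqueue newly discovered links
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-scores[link], link))
    return visited

graph = {"seed": ["a", "b"], "a": ["c"], "b": []}
scores = {"seed": 1.0, "a": 0.9, "b": 0.1, "c": 0.8}
order = crawl(graph, scores, "seed", budget=3)
```

With a budget of 3, the low-scoring page `b` is never downloaded even though it was discovered first, which is exactly the behavior a focused crawler wants.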

Evaluation of Soil-Water Characteristic Curve for Domestic Bentonite Buffer (국내 벤토나이트 완충재의 함수특성곡선 평가)

  • Yoon, Seok;Jeon, Jun-Seo;Lee, Changsoo;Cho, Won-Jin;Lee, Seung-Rae;Kim, Geon-Young
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.17 no.1
    • /
    • pp.29-36
    • /
    • 2019
  • High-level radioactive waste (HLW) such as spent fuel is inevitably produced when nuclear power plants are operated. A geological repository has been considered one of the most adequate options for the disposal of HLW; it will be constructed in host rock at a depth of 500~1,000 meters below ground level with the concept of an engineered barrier system (EBS) and a natural barrier system. The compacted bentonite buffer is one of the most important components of the EBS. As the compacted bentonite buffer is located between the disposal canisters holding spent fuel and the host rock, it can restrain the release of radionuclides and protect the canisters from the inflow of groundwater. Because groundwater flows into the compacted bentonite buffer, it is essential to investigate the soil-water characteristic curve (SWCC) of the compacted bentonite buffer in order to evaluate the overall safety performance of the EBS. Therefore, this paper conducted laboratory experiments to analyze the SWCC of a Korean Ca-type compacted bentonite buffer, considering dry density, confined or unconfined condition, and drying or wetting path. There was no significant difference in the SWCC with respect to dry density under the unconfined condition. Furthermore, water suction was found to be higher in the unconfined condition than in the confined condition, and higher during the drying path than during the wetting path.
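Measured SWCC data are commonly parameterized with the van Genuchten (1980) model, which relates volumetric water content to matric suction. A sketch of that relation; the parameter values below are placeholders for illustration, not the fitted values for the Korean Ca-type bentonite in this paper:

```python
def van_genuchten(suction_kpa, theta_r, theta_s, alpha, n):
    """van Genuchten SWCC:
    theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*psi)^n)^(1 - 1/n)
    theta_r/theta_s: residual/saturated water content, alpha [1/kPa], n > 1."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * suction_kpa) ** n) ** m

# Water content drops monotonically from theta_s toward theta_r as suction rises.
wet = van_genuchten(10.0, theta_r=0.05, theta_s=0.45, alpha=0.01, n=1.6)
dry = van_genuchten(10_000.0, theta_r=0.05, theta_s=0.45, alpha=0.01, n=1.6)
```

Fitting `theta_r`, `theta_s`, `alpha`, and `n` separately to the drying-path and wetting-path measurements is one way to capture the hysteresis the experiments observed.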