• Title/Summary/Keyword: sources of complexity


Open GIS Component Software Ensuring an Interoperability of Spatial Information (공간정보 상호운용성 지원을 위한 컴포넌트 기반의 개방형 GIS 소프트웨어)

  • Choe, Hye-Ok;Kim, Gwang-Su;Lee, Jong-Hun
    • The KIPS Transactions:PartD
    • /
    • v.8D no.6
    • /
    • pp.657-664
    • /
    • 2001
  • Information technology has progressed toward open architectures, components, and multimedia services over the Internet, ensuring interoperability, reusability, and real-time processing. A GIS is a system for processing geo-spatial information such as natural resources, buildings, roads, and many kinds of facilities. Spatial information, characterized by complexity and diversity, requires interoperability and reusability of pre-built databases under an open architecture. This paper describes the development of component-based open GIS software. The goal of the open GIS component software is a GIS middleware combining open-architecture and component technology, ensuring interoperability of spatial information and reusability of elementary pieces of GIS software. The open GIS component conforms to the distributed open architecture for spatial information proposed by the OGC (Open GIS Consortium). The system consists of data provider components, kernel (MapBase) components, clearinghouse components, and five kinds of GIS applications for local governments. The data provider component places a unique OLE DB interface to connect to and access diverse data sources independently of their formats and locations. The MapBase component supports core, common GIS technology suitable for various applications. The clearinghouse component provides discovery of and access to spatial information over the Internet. The system was implemented using ATL/COM and Visual C++ under Microsoft's Windows environment and consists of more than 20 components. A case study on KSDI (Korea Spatial Data Infrastructure), sharing spatial information between local governments, demonstrated the advantages of component-based open GIS software. Another case study, on sharing seven kinds of underground facilities using the open GIS component software, is now underway.
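The data provider idea in this abstract — one uniform interface that hides each source's format and location from the GIS kernel — can be sketched in a few lines. This is a minimal illustration only; the class and method names are invented for the sketch and are not from the paper's OLE DB design.

```python
from abc import ABC, abstractmethod

class DataProvider(ABC):
    """Uniform access interface over heterogeneous spatial data sources,
    analogous in spirit to the paper's OLE DB data provider component."""
    @abstractmethod
    def get_features(self, layer: str) -> list:
        ...

class ShapefileProvider(DataProvider):
    """Stands in for a file-format-specific source."""
    def __init__(self, store: dict):
        self.store = store
    def get_features(self, layer: str) -> list:
        return list(self.store.get(layer, []))

class DatabaseProvider(DataProvider):
    """Stands in for a DBMS-backed source."""
    def __init__(self, tables: dict):
        self.tables = tables
    def get_features(self, layer: str) -> list:
        return list(self.tables.get(layer, ()))

def render_layer(provider: DataProvider, layer: str) -> list:
    # The kernel (MapBase-like) code sees only the common interface,
    # independent of where or how the features are stored.
    return provider.get_features(layer)

shp = ShapefileProvider({"roads": ["road-1", "road-2"]})
db = DatabaseProvider({"roads": ("road-3",)})
print(render_layer(shp, "roads") + render_layer(db, "roads"))
```

The same calling code works against either source, which is the interoperability property the component architecture is after.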


Recent Technological Advances in Optical Instruments and Future Applications for in Situ Stable Isotope Analysis of CH4 in the Surface Ocean and Marine Atmosphere (표층해수 내 용존 메탄 탄소동위원소 실시간 측정을 위한 광학기기의 개발 및 활용 전망)

  • PARK, MI-KYUNG;PARK, SUNYOUNG
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.23 no.1
    • /
    • pp.32-48
    • /
    • 2018
  • The mechanisms of $CH_4$ uptake into and release from the ocean are not well understood, due mainly to the complexity of the biogeochemical cycle and to the lack of regional-scale and/or process-scale observations in the marine boundary layer. Without a complete understanding of the oceanic mechanisms that control the carbon balance and cycles on various spatial and temporal scales, however, it is difficult to predict future perturbations of oceanic carbon levels and their influence on global and regional climates. High-frequency, high-precision continuous measurements of the carbon isotopic composition of dissolved $CH_4$ in the surface ocean and marine atmosphere can provide additional information about the flux pathways and the production/consumption processes occurring at the boundary of these two large reservoirs. This paper introduces recent advances in optical instruments for real-time $CH_4$ isotope analysis and assesses their potential for in situ, continuous measurement of the carbon isotopic composition of dissolved $CH_4$. Three commercially available laser absorption spectrometers - quantum cascade laser absorption spectroscopy (QCLAS), off-axis integrated cavity output spectroscopy (OA-ICOS), and cavity ring-down spectroscopy (CRDS) - are discussed in comparison with conventional isotope ratio mass spectrometry (IRMS). Details of the functioning and performance of a CRDS isotope instrument for atmospheric ${\delta}^{13}C-CH_4$ are also given, showing its capability to detect localized methane emission sources.
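The ${\delta}^{13}C$ notation used throughout this abstract is the standard per-mil deviation of a sample's $^{13}C/^{12}C$ ratio from a reference standard. A minimal sketch of that definition, using a commonly cited value for the VPDB reference ratio (the exact constant here is an assumption for illustration):

```python
# Commonly cited 13C/12C ratio of the VPDB reference standard (illustrative).
R_VPDB = 0.011180

def delta13C(r_sample: float) -> float:
    """delta-13C in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A sample depleted in 13C relative to the standard gives a negative delta,
# as is typical of biogenic methane.
print(round(delta13C(0.0105), 1))
```

An instrument such as the CRDS analyzer discussed in the paper effectively measures the two isotopologue abundances and reports this delta value.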

Modified Empirical Formula of Dynamic Amplification Factor for Wind Turbine Installation Vessel (해상풍력발전기 설치선박의 수정 동적증폭계수 추정식)

  • Ma, Kuk-Yeol;Park, Joo-Shin;Lee, Dong-Hun;Seo, Jung-Kwan
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.6
    • /
    • pp.846-855
    • /
    • 2021
  • Eco-friendly and renewable energy sources are being actively researched, and offshore wind power generation requires advanced design technologies for increasing the capacities of wind turbines and enlarging wind turbine installation vessels (WTIVs). The WTIV keeps the hull elevated to a height at which it is not affected by waves. The most important part of the WTIV is the leg structure, which must respond dynamically to wave, current, and wind loads. In particular, the wave load consists of irregular waves, so it is important to know the exact dynamic response. Dynamic response analysis with a single-degree-of-freedom (SDOF) method is a simplified approach, but it is limited in its ability to account for random waves. Therefore, in industrial practice, time-domain analysis of random waves is based on the multi-degree-of-freedom (MDOF) method. Although the MDOF method provides high-precision results, its convergence is sensitive and it is difficult to apply owing to design complexity. Therefore, a dynamic amplification factor (DAF) estimation formula is developed in this study to express the dynamic response characteristics of random waves through time-domain analyses over different variables. It is confirmed that the calculation time can be shortened and the accuracy enhanced compared with existing MDOF methods. The developed formula will be used in the initial design of WTIVs and similar structures.
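For context on the quantity being estimated: for the simplified SDOF case under harmonic loading, the DAF has the classical closed form shown below. This is the textbook SDOF expression, not the paper's modified empirical formula for random waves.

```python
import math

def daf_sdof(freq_ratio: float, damping_ratio: float) -> float:
    """Classical SDOF dynamic amplification factor for harmonic loading:
    DAF = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2),
    where r is forcing frequency / natural frequency and zeta is the
    damping ratio."""
    r = freq_ratio
    z = damping_ratio
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * z * r) ** 2)

# Quasi-static loading (r -> 0) gives DAF -> 1; at resonance (r = 1) the
# response is limited only by damping, DAF = 1 / (2 * zeta).
print(daf_sdof(0.0, 0.05))   # 1.0
print(daf_sdof(1.0, 0.05))   # 10.0
```

The paper's contribution is precisely that this harmonic SDOF picture is insufficient for irregular (random) wave loading, motivating an empirical DAF formula calibrated against time-domain MDOF analyses.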

Sources of Pioneering Advantage in High-tech Industries: The Mediating Role of Knowledge Management Competence (하이테크산업에서 선두이점의 원천에 관한 연구: 지식경영역량의 매개효과를 중심으로)

  • Cho, Yeonjin;Park, Kyungdo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.10 no.4
    • /
    • pp.113-131
    • /
    • 2015
  • Decision effectiveness depends on the type of knowledge generated among team members during the decision-making process; organizations thus achieve their desired outcomes in accordance with their teams' experience and capability. However, previous research has not addressed the mediating role between different knowledge types in decision making and product competitive advantages (pioneering advantage and product quality superiority). Based on the knowledge-based view, we model how different knowledge characteristics in decision making affect the ability to acquire each type of knowledge effectively and then to apply the acquired knowledge. Anchored in the source-position-performance (SPP) framework (Day and Wensley, 1988), we shed light on the effects of three dimensions of knowledge characteristics in the decision-making process on knowledge management competences in decision making for a new product development (NPD) project. We also examine the relationship between two dimensions of NPD knowledge management competences and product competitive advantages, which consist of market pioneering advantage and product quality superiority. To test these relationships, empirical analyses are conducted using a sample of team managers who participated in NPD projects. This study suggests that managers should increase the acquirability and applicability of knowledge by integrating diverse and new knowledge despite its complexity, developing the codifiability of well-documented knowledge, and creating shared common knowledge among NPD team members. In this way, they can outrun major competitors in terms of pioneering advantage and product quality superiority.


A Genetic Algorithm for Materialized View Selection in Data Warehouses (데이터웨어하우스에서 유전자 알고리즘을 이용한 구체화된 뷰 선택 기법)

  • Lee, Min-Soo
    • The KIPS Transactions:PartD
    • /
    • v.11D no.2
    • /
    • pp.325-338
    • /
    • 2004
  • A data warehouse stores information collected from multiple, heterogeneous information sources for the purpose of complex querying and analysis. Information in the warehouse is typically stored in the form of materialized views, which represent pre-computed portions of frequently asked queries. One of the most important tasks in designing a warehouse is the selection of the materialized views to be maintained in it. The goal is to select a set of views so that the total query response time over all queries is minimized while only a limited amount of time is available for maintaining the views (the maintenance-cost view selection problem). In this paper, we propose an efficient solution to the maintenance-cost view selection problem using a genetic algorithm that computes a near-optimal set of views. Specifically, we explore the problem in the context of OR view graphs. We show that our approach represents a dramatic improvement in time complexity over existing search-based approaches that use heuristics. Our analysis shows that the algorithm consistently yields a solution whose query cost is only about 10% above the optimal query cost, while exhibiting an impressive performance of only a linear increase in execution time. We have implemented a prototype version of our algorithm that is used to evaluate our approach.
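The shape of such a genetic algorithm can be sketched compactly: each chromosome is a bitstring over candidate views, fitness rewards query-cost savings, and selections that exceed the maintenance budget are penalized. The candidate views, costs, penalty weight, and GA parameters below are all invented for illustration; the paper's actual cost model is defined over OR view graphs.

```python
import random

random.seed(7)

# Illustrative candidate views: (query-cost saving, maintenance cost).
VIEWS = [(30, 8), (22, 6), (18, 9), (40, 15), (12, 3), (25, 10)]
MAINT_BUDGET = 25  # total maintenance-time constraint

def fitness(bits):
    saving = sum(s for b, (s, _) in zip(bits, VIEWS) if b)
    maint = sum(m for b, (_, m) in zip(bits, VIEWS) if b)
    # Selections over budget are penalized heavily per unit of excess.
    return saving if maint <= MAINT_BUDGET else saving - 100 * (maint - MAINT_BUDGET)

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

# Seed the population with the empty selection plus random bitstrings.
pop = [[0] * len(VIEWS)] + [[random.randint(0, 1) for _ in VIEWS] for _ in range(19)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # elitism: the best individuals survive unchanged
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]

best = max(pop, key=fitness)
print([i for i, b in enumerate(best) if b])  # indices of the selected views
```

Elitism makes the best fitness non-decreasing across generations, which matches the abstract's observation of consistent near-optimal solutions with only linear growth in execution time.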

Study on Developing the Information System for ESG Disclosure Management (ESG 정보공시 관리를 위한 정보시스템 개발에 관한 연구)

  • Kim, Seung-wook
    • Journal of Venture Innovation
    • /
    • v.7 no.1
    • /
    • pp.77-90
    • /
    • 2024
  • While discussions on ESG are active in Europe and elsewhere, the number of countries mandating the disclosure of ESG-related non-financial information by listed companies is rapidly increasing. However, as companies respond to mandatory global ESG disclosure, problems are emerging, such as the stringent requirements of global ESG disclosure standards, the complexity of data management, and a lack of understanding of and preparation for the ESG regime itself. Disclosure also requires a reasoned analysis of how business opportunities and risk factors arising from climate change affect a company's finances, so producing results that meet the disclosure standards is expected to be quite difficult. Tasks such as ESG management activities and information disclosure require data of various types from various sources, and an information system is necessary to measure the data transparently, collect them without error, and manage them without omission. Therefore, in this study, we designed an integrated ESG data management model that consolidates the related indicators and data so that a company's ESG activities can be conveyed transparently and efficiently to stakeholders through ESG disclosure, and we developed a framework for implementing an information system to support this management. These results can help companies facing practical difficulties with ESG disclosure to manage it efficiently. In addition, the integrated data management model derived from an analysis of the ESG disclosure work process, and the information system developed to support ESG disclosure, are academically significant for future ESG research.

Design of Video Encoder activating with variable clocks of CCDs for CCTV applications (CCTV용 CCD를 위한 가변 clock으로 동작되는 비디오 인코더의 설계)

  • Kim, Joo-Hyun;Ha, Joo-Young;Kang, Bong-Soon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.1
    • /
    • pp.80-87
    • /
    • 2006
  • SONY Corporation holds 80% of the market for CCDs used in CCTV systems, and its CCDs offer a level of quality that competing devices have not matched. However, the clock frequency used by these CCDs differs from the frequency used by common video encoders. To use a common video encoder, the system therefore needs a scaler to adjust the image size and a PLL to synchronize the CCD's clock with the encoder's. This paper proposes a video encoder that operates at the same clock as the CCD, without a scaler or PLL. The encoder converts ITU-R BT.601 4:2:2 or ITU-R BT.656 inputs from various video sources into NTSC or PAL signals in CVBS. Because the clock is variable, the characteristics of the filters used in the encoder change automatically with the clock, and the filters adopt multiplier-free structures to reduce hardware complexity. The hardware bit widths of the programmable digital filters for the luminance and chrominance signals, along with the other operating blocks, are carefully determined to produce high-quality digital video signals with an error of ${\pm}1$ LSB or less. The proposed encoder is experimentally demonstrated using the Altera Stratix EP1S80B953C6ES device.
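The "multiplier-free" filter structure mentioned here is a standard hardware trick: coefficients are restricted to sums of powers of two, so each multiplication becomes a few shifts and adds. A minimal sketch with an illustrative 3-tap low-pass (coefficients 1/4, 1/2, 1/4); the actual coefficients and filter orders in the paper are not reproduced here.

```python
def mulfree_scale(x: int, shifts) -> int:
    """Multiply x by a coefficient expressed as a sum of powers of two,
    using only shifts and adds, e.g. shifts=(0, -2) -> x * (1 + 1/4)."""
    acc = 0
    for s in shifts:
        acc += (x << s) if s >= 0 else (x >> -s)
    return acc

def fir_mulfree(samples):
    """3-tap symmetric low-pass with coefficients (1/4, 1/2, 1/4),
    realized purely as right-shifts on integer samples."""
    padded = [0] + list(samples) + [0]
    return [(padded[i - 1] >> 2) + (padded[i] >> 1) + (padded[i + 1] >> 2)
            for i in range(1, len(padded) - 1)]

# An impulse is smeared into the (1/4, 1/2, 1/4) kernel shape.
print(fir_mulfree([0, 0, 100, 0, 0]))  # [0, 25, 50, 25, 0]
```

Because shifts cost almost no logic compared with hardware multipliers, this is a common way to keep filter datapaths small when, as in this encoder, the filter characteristics must track a variable clock.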

Transform Skip Mode Decision and Signaling Method for HEVC Screen Content Coding (HEVC 스크린 콘텐츠의 고속 변환 생략 결정 및 변환 생략 시그널링 방법)

  • Lee, Dahee;Yang, Seungha;Shim, HiukJae;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.130-136
    • /
    • 2016
  • The HEVC (High Efficiency Video Coding) extensions consider screen content one of the main candidate sources for encoding. Among the tools already included in HEVC version 1, transform skip mode allows the transform to be skipped so that only the quantization process is performed. It is known to improve coding efficiency for screen content, which is characterized by much high-frequency energy. However, encoding complexity increases, since the encoder must decide whether the transform should be applied for each $4{\times}4$ transform block. Based on the statistical correlation between IBC (intra block copy) and transform skip modes, both of which are known to be effective for screen content, this paper proposes a combined method of fast transform skip mode decision and a modified transform skip signaling that signals transform_skip_flag at the CU level as a representative transform skip signal. Simulations show that the proposed method reduces the encoding time of $4{\times}4$ transform blocks by about 32%.
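The signaling change can be illustrated with a toy bit count: HEVC version 1 sends one transform_skip_flag per 4x4 transform block, whereas a CU-level representative flag can cover all of them at once. The aggregation rule below (one flag when all TBs agree, per-TB flags otherwise) is an illustrative simplification, not the paper's exact syntax.

```python
def signal_bits_per_tb(tb_flags) -> int:
    """Baseline signaling: one transform_skip_flag per 4x4 TB."""
    return len(tb_flags)

def signal_bits_cu_level(tb_flags) -> int:
    """Sketch of CU-level representative signaling: a single flag when all
    TBs in the CU share the same decision, per-TB flags plus an escape flag
    otherwise (illustrative rule, not the actual HEVC syntax)."""
    if all(tb_flags) or not any(tb_flags):
        return 1
    return 1 + len(tb_flags)

# A 16x16 CU holds sixteen 4x4 TBs; if all of them use transform skip,
# CU-level signaling needs 1 flag instead of 16.
uniform_cu = [True] * 16
print(signal_bits_per_tb(uniform_cu), signal_bits_cu_level(uniform_cu))
```

Screen-content CUs, and IBC-coded CUs in particular, tend to make uniform transform skip decisions, which is why a representative CU-level flag pays off in both bits and decision time.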

On Adaptive Narrowband Interference Cancellers for Direct-Sequence Spread-Spectrum Communication Systems (주파수대역 직접 확산 통신시스템에서 협대역 간섭 신호 제거를 위한 적응 간섭제거기에 관한 연구)

  • 장원석;이재천
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.10C
    • /
    • pp.967-983
    • /
    • 2003
  • In wireless spread-spectrum communication systems utilizing PN (pseudo-noise) sequences, a variety of channel noise sources affect data reception performance. Among them, this paper is concerned with narrowband interference, which may arise from spectral bands overlapping those of existing narrowband users or from intentional jammers, as in military communication. The effect of this interference can be reduced to some extent at the receiver through the processing gain of PN demodulation. It is known, however, that when the interferers are strong, this reduction is insufficient, requiring additional narrowband interference cancellers (NICs) at the receiver. A class of adaptive NICs is studied here based on two different cost functions. One is the chip mean-squared error (MSE), computed prior to PN demodulation and used in conventional cancellers; since these conventional cancellers must operate at the chip rate, their computational requirements are enormous. The other is the symbol MSE, computed after PN demodulation, in which case the weights of the NIC can be updated at the much lower symbol rate. To compare the performance of these NICs, we derive a common performance measure, the symbol MSE after PN demodulation. The analytical results are verified by computer simulation. The results show that the cancellation capability of the symbol-rate NIC is similar to or better than that of the conventional canceller, while the computational complexity is reduced substantially.
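A common adaptive NIC structure underlying this kind of study is an LMS one-step predictor: a narrowband interferer is highly predictable from its past samples, while the wideband PN signal is not, so subtracting the prediction removes mostly interference. The sketch below shows that standard structure on a pure sinusoidal interferer; it is a generic illustration, not the paper's specific chip-rate or symbol-rate canceller, and the tap count and step size are arbitrary.

```python
import math

def lms_nic(received, num_taps: int = 8, mu: float = 0.01):
    """One-step LMS linear predictor used as a narrowband interference
    canceller: predict the current sample from past samples and output
    the prediction error (the unpredictable, wideband part)."""
    w = [0.0] * num_taps      # adaptive filter weights
    buf = [0.0] * num_taps    # delay line of past samples
    out = []
    for x in received:
        pred = sum(wi * bi for wi, bi in zip(w, buf))
        err = x - pred        # prediction error ~ interference removed
        out.append(err)
        w = [wi + mu * err * bi for wi, bi in zip(w, buf)]  # LMS update
        buf = [x] + buf[:-1]
    return out

# A strong sinusoidal (narrowband) interferer is predictable, so the
# canceller learns to suppress it almost entirely.
n = 2000
tone = [2.0 * math.cos(0.2 * math.pi * k) for k in range(n)]
cleaned = lms_nic(tone)
tail_power = sum(v * v for v in cleaned[-200:]) / 200
in_power = sum(v * v for v in tone[-200:]) / 200
print(tail_power < 0.1 * in_power)
```

The paper's comparison is essentially about where this adaptation runs: updating such weights at the chip rate is expensive, whereas driving the update from the post-despreading symbol MSE allows the same structure to adapt at the far lower symbol rate.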

A Study on the Usefulness of Backend Development Tools for Web-based ERP Customization (Web기반 ERP 커스터마이징을 위한 백엔드 개발도구의 유용성 연구)

  • Jung, Hoon;Lee, KangSu
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.12
    • /
    • pp.53-61
    • /
    • 2019
  • The risk of project failure has increased recently as ERP systems have moved to Web environments and task complexity has grown. Although low-code platform development tools are being used to address this problem, they are limited by their focus on the UI. Overcoming this requires back-end development tools that support quick and easy development not only of the front end but also of the various development artifacts produced in the ERP development process, including back-end business services. In addition, the development tools included in existing ERP products demand a long learning time from beginner and intermediate developers because of their high entry barriers. To address these shortcomings, this paper studies ways to overcome the limitations of existing in-ERP development tools by providing customized tool functions matched to each developer's skills and role, based on the requirements of ERP development tools: reducing the time required for querying, automatically binding test data for service-based units, and checking source code quality.