• Title/Summary/Keyword: Dynamic Information

Renaissance of Geographic Education in the United States since 1980: Its Dynamic Process and Implications to Geographic Education in Korea (1980년대 이후 美國 地理敎育 復興運動의 展開過程과 그 示唆點: 地理學, 地理敎育, 그리고 敎育政策의 關係)

  • Seo, Tae-Yeol
    • Journal of the Korean Geographical Society
    • /
    • v.28 no.2
    • /
    • pp.163-178
    • /
    • 1993
  • The purpose of this paper is to provide a better understanding of the unprecedented reform movement of geographic education in the United States since 1980 and to extract some implications from this movement for geographic education in Korea. To this end, the history of the movement was reviewed in the following three stages. In the first stage (1980~1984: from "HSGP" to "Guidelines"), a voluntary improvement movement appeared in California, and organizational movements such as the Committee on Geography and International Knowledge began in 1982. The national educational reform imperatives presented in "A Nation at Risk" and the "Back to Basics" movement provided good opportunities to resurrect geography as a basic subject. As a foundation for the real resurrection movement, the pivotal document "Guidelines for Geographic Education" was published in 1984. In the second stage (1985~1989: from "Guidelines" to "Public"), the "Guidelines" gave powerful motives and foci for reconstructing the contents of geography, especially through the five fundamental themes (Location, Place, Relationships within Places, Movement, and Region). GENIP, as the symbol of the unity of all four major geography organizations (AAG, NCGE, NGS, AGS), contributed to expanding and strengthening geography education. The Geography Education Program of the NGS was also a well-organized program to improve geographic education through its five strategies: grass-roots organization (Alliances), teacher education, public awareness, educational materials development, and targeted outreach to education decision-makers. In the late 1980s, the final foci of the movement were public awareness and education decision-making. In the third stage (1990~present: from "Public" to "Core Subject"), the initiative pendulum swung from the geography organizations to the national curriculum. In this National Curriculum, geography was approved as a "Core Subject", and the 1994 National Geography Assessment Framework was constructed to assess the outcomes of students' education in geography in grades 4, 8, and 12. Some implications extracted from the process and contents of the renaissance movement of geographic education in the United States since 1980 are as follows. First, it shows the importance of unity and the assignment of targets among the geography organizations. Second, an interactive relationship between academic geography and school geography develops both fields. Third, teacher education, including both pre-service and in-service education, is a key element in improving the quality of geography, and teacher organizations are a good clearing house for exchanging information for good geography. Fourth, a positive and active response to changes in society, such as globalization and internationalization, national education policy, and trends in pedagogy, is needed to rejuvenate geographic education. Above all, we need to establish a well-organized and powerful program, sophisticated activity strategies, and a long-term implementation plan if we want more and better school geography.

An Empirical Study on the Asymmetric Correlation and Market Efficiency Between International Currency Futures and Spot Markets with Bivariate GJR-GARCH Model (이변량 GJR-GARCH모형을 이용한 국제통화선물시장과 통화현물시장간의 비대칭적 인과관계 및 시장효율성 비교분석에 관한 연구)

  • Hong, Chung-Hyo
    • The Korean Journal of Financial Management
    • /
    • v.27 no.1
    • /
    • pp.1-30
    • /
    • 2010
  • This paper tested the lead-lag relationship as well as the symmetric and asymmetric volatility spillover effects between international currency futures markets and spot markets. We use five pairs of currency spot and futures markets, covering the British pound, the Australian and Canadian dollars, the Brazilian real, and the won/dollar, with daily closing prices from September 15, 2003 to July 30, 2009. For this purpose we employed dynamic time series models such as Granger causality based on a VAR and a time-varying MA(1)-GJR-GARCH(1, 1)-M model. The main empirical results are as follows. First, according to the Granger causality test, we find a bilateral lead-lag relationship between the five countries' currency spot and futures markets; the price discovery effect from currency futures markets to spot markets is relatively stronger than that from spot to futures markets. Second, based on the time-varying GARCH model, we find bilateral conditional mean spillover effects between the five currency spot and futures markets. Third, we also find bilateral asymmetric volatility spillover effects between the British pound, Canadian dollar, Brazilian real, and won/dollar spot and futures markets. However, there is a unilateral asymmetric volatility spillover effect from the Australian dollar futures market to the cash market, but not vice versa. From these empirical results we infer that most currency futures markets have a much better price discovery function than the corresponding cash markets, and that the markets are not fully efficient with respect to information.
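
A full bivariate MA(1)-GJR-GARCH(1,1)-M estimation needs specialized code, but the two building blocks of this analysis can be sketched with standard Python libraries. A minimal sketch, assuming a hypothetical file gbp_usd.csv with hypothetical spot and futures closing-price columns; the univariate GJR-GARCH fit below stands in for the paper's bivariate model:

```python
# Granger causality between spot and futures returns, plus a GJR-GARCH(1,1)
# fit capturing asymmetric volatility. File and column names are invented.
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.stattools import grangercausalitytests

prices = pd.read_csv("gbp_usd.csv", parse_dates=["date"], index_col="date")
rets = 100 * np.log(prices[["spot", "futures"]]).diff().dropna()

# Granger causality: does the futures return lead the spot return?
# (tests whether the second column Granger-causes the first)
grangercausalitytests(rets[["spot", "futures"]], maxlag=5)

# GJR-GARCH(1,1): o=1 adds the leverage term that lets negative shocks
# raise conditional variance more than positive shocks of equal size.
res = arch_model(rets["spot"], vol="GARCH", p=1, o=1, q=1).fit(disp="off")
print(res.summary())
```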

A Taxonomy of Workflow Architectures

  • Kim, Kwang-Hoon;Paik, Su-Ki
    • The Journal of Information Technology and Database
    • /
    • v.5 no.1
    • /
    • pp.97-108
    • /
    • 1998
  • This paper proposes a conceptual taxonomy of architectures for workflow management systems. The systematic classification is based on a framework for workflow architectures. The framework, consisting of generic-level, conceptual-level, and implementation-level architectures, provides common architectural principles for designing a workflow management system. We define the taxonomy by considering the possibilities for centralization or distribution of data, control, and execution. That is, we take into account three criteria: How are the major components of a workflow model and system, such as activities, roles, actors, and workcases, concretized in the workflow architecture? Which of the components are represented as software modules of the workflow architecture? And how are they configured and operated in the architecture? The workflow components might be embodied as active (process or thread) modules or as passive (data) modules in the software architecture of a workflow management system. One component, or a combination of components, might become a software module in the software architecture. Finally, they might be centralized or distributed, with distribution broken into three kinds: vertically, horizontally, and fully distributed. Through the combination of these aspects, we can conceptually generate about 64 software architectures for a workflow management system; that is, it should be possible to comprehend and characterize all kinds of software architectures for workflow management systems, current as well as future, as enumerated in the sketch below. We believe this taxonomy is a significant contribution because it adds clarity, completeness, and global perspective to workflow architectural discussions. The vocabulary suggested here includes workflow levels and aspects, allowing very different architectures to be discussed, compared, and contrasted. Added clarity is obtained because similar architectures from different vendors that use different terminology and techniques can now be seen to be identical at the higher level, and much of the complexity can be removed by thinking of workflow systems in terms of these levels and aspects. The taxonomy can therefore be used to categorize existing workflow architectures and to suggest a plethora of new ones. Finally, it can be used for sorting out gems and stones among the architectures generated. Thus, it can serve as a guideline not only for characterizing existing workflow management systems, but also for addressing the long-term and short-term architectural research issues proposed in the literature, such as dynamic changes in workflow, transactional workflow, dynamically evolving workflow, and large-scale workflow.
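
The combinatorial generation of architectures can be made concrete with a small enumeration. A sketch under assumed axis values, one plausible reading of the abstract (four component kinds, two embodiments, four placement options); the paper's exact encoding, which reaches about 64 by also allowing combinations of components, may differ:

```python
# Enumerate one reading of the taxonomy's design space.
from itertools import product

components = ["activity", "role", "actor", "workcase"]
embodiment = ["active (process/thread) module", "passive (data) module"]
placement = ["centralized", "vertically distributed",
             "horizontally distributed", "fully distributed"]

architectures = list(product(components, embodiment, placement))
print(len(architectures))       # 32 single-component choices; component
                                # combinations push this toward ~64
for arch in architectures[:3]:  # a few sample points of the taxonomy
    print(arch)
```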

Interactive Navigational Structures

  • Czaplewski, Krzysztof;Wisniewski, Zbigniew
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.1
    • /
    • pp.495-500
    • /
    • 2006
  • Satellite positioning systems have proved indispensable for performing the basic tasks of maritime navigation, understood as safely and effectively conducting a vehicle from one point to another within a specific physical-geographical environment [Kopacz, Urbański, 1998]. However, these systems have not solved the problem of access to reliable and highly accurate information about an object's position, especially when surveying relative to on-shore navigational marks or at depth, and this is of considerable significance for many navigational tasks carried out within the framework of special works and submarine navigation. In addition, precisely positioning objects other than vessels during hydrographic works is not always possible with a satellite system. Difficulties with GPS application also show up while positioning such off-lying dangers as wrecks, underwater and above-water rocks, and other natural and artificial obstacles, because surveyors cannot approach such objects directly while positioning them. Moreover, determining the positions of vessels mutually (their mutual geometrical relations) by teams carrying out a common task at sea demands navigational techniques other than satellite ones. Vessels staying precisely on specified positions is of special importance in, among others, the following cases: surveying vessels carrying out bathymetric works or wire dragging; special-task watercraft carrying out scientific research, sea bottom exploration, etc. These problems are essential for the maritime economy and national defence readiness. Resolving them requires applying not only satellite navigation methods but also terrestrial ones. The condition for implementing geo-navigation methods is their further development, both in their techniques and technologies and in survey data evaluation. At present, classical geo-navigation comprises procedures that meet out-of-date accuracy standards. To meet present-day requirements, the methods should draw on the well-recognised and still developing methods of contemporary geodesy. Moreover, in a time of computerized and automated calculation, it is feasible to create software that could be applied in integrated navigational systems, allowing navigation with combinatory systems as well as with the new positioning methods. As regards data evaluation, the most advanced achievements in the subject should be applied; first of all the newest, though theoretically well-recognised, estimation methods, including robust estimation [Hampel et al. 1986; Wiśniewski 2005; Yang 1997; Yang et al. 1999]. Such an approach to positioning a vehicle in motion and the solid objects under observation creates the opportunity for dynamic and interactive navigational structures. The main subject of the theoretical proposal in this paper is the Interactive Navigational Structure. In this paper, the Structure stands for the existing systems of navigational marks, any observed solid objects, and the vehicles carrying out navigation (submarines included), whose mutual geometrical and physical dependencies allow the coordinates of new elements of the Structure to be determined and the already known coordinates of other elements to be corrected.
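
The kind of adjustment such a structure relies on can be illustrated with a small example. A minimal sketch, assuming invented beacon coordinates and range observations: least-squares (Gauss-Newton) positioning of an unknown point from distances to elements with known coordinates; the robust estimators cited in the abstract would replace the plain least-squares step.

```python
# Gauss-Newton fix of an unknown position from three range observations.
import numpy as np

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # known points
ranges = np.array([70.7, 70.7, 70.7])                         # observed distances

x = np.array([10.0, 10.0])                    # initial position guess
for _ in range(10):
    d = np.linalg.norm(beacons - x, axis=1)   # predicted ranges
    J = (x - beacons) / d[:, None]            # Jacobian of range w.r.t. x
    dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
    x += dx
print(x)  # converges near (50, 50) for these observations
```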

Call-Site Tracing-based Shared Memory Allocator for False Sharing Reduction in DSM Systems (분산 공유 메모리 시스템에서 거짓 공유를 줄이는 호출지 추적 기반 공유 메모리 할당 기법)

  • Lee, Jong-Woo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.7
    • /
    • pp.349-358
    • /
    • 2005
  • False sharing results from the co-location of unrelated data in the same unit of memory coherency, and is one source of unnecessary overhead that does nothing to help keep memory coherent in multiprocessor systems. Moreover, the damage caused by false sharing grows in proportion to the granularity of memory coherency. To reduce false sharing in a page-based DSM system, unrelated data objects with different access patterns must be allocated to separate shared pages. In this paper we propose a call-site tracing-based shared memory allocator, CSTallocator for short. CSTallocator expects that data objects requested from different call sites are likely to have different access patterns in the future. It therefore places each data object requested from a different call site into a separate shared page, so that data objects allocated from the same call site tend to be grouped into the same shared pages. We use execution-driven simulation of real parallel applications to evaluate the effectiveness of CSTallocator. Our observations show that by using CSTallocator a considerable number of false sharing misses can be additionally reduced in comparison with existing techniques.
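
The placement policy can be sketched independently of its C-level implementation. A toy sketch, assuming an invented Arena class and page size: allocations are keyed by the call site that requested them, so objects from different call sites never share a page; the real allocator works on DSM coherency pages rather than Python objects.

```python
# Group allocations by call site so each site gets its own run of pages.
import inspect
from collections import defaultdict

PAGE_SIZE = 4096

class Arena:
    """A run of private pages serving exactly one call site."""
    def __init__(self):
        self.used = 0
        self.pages = 1

    def alloc(self, size):
        # Grow this arena rather than spill into another site's pages.
        while self.used + size > self.pages * PAGE_SIZE:
            self.pages += 1
        offset = self.used
        self.used += size
        return offset

arenas = defaultdict(Arena)

def cst_alloc(size):
    frame = inspect.stack()[1]          # identify the requesting call site
    call_site = (frame.filename, frame.lineno)
    return call_site, arenas[call_site].alloc(size)
```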

The Static and Dynamic Customization Technique of Component (컴포넌트 정적/동적 커스터마이제이션 기법)

  • Kim, Chul-Jin;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.9
    • /
    • pp.605-618
    • /
    • 2002
  • CBD (Component-Based Development) is a requisite technique for meeting time-to-market, and highly reusable components should be provided so that a variety of domain applications can be developed from them. To increase the reusability of components, they should be developed by analyzing the requirements of many different kinds of domains. However, analyzing the requirements of all the domains related to a component and building them into the component burdens developers. Likewise, providing only general components with facilities common to several domains does not achieve time-to-market either, since the domain-specific parts still have to be developed. Thus, developing common components through the analysis of several domains at component development (CD) time does not always guarantee high reusability, and it burdens developers with additional development because such components carry only common functions. Considering this, this paper proposes a component customization technique that allows common components as well as specialized components to be reused. The reusability of a component can be increased by making its attributes, behavior, and message flow changeable. The customization technique can change the message flow to integrate developed components or to provide new functions within a component. It also provides a way to replace a class inside a component with another class, or to exchange an integrated component for one with a different function, so that requirements from a variety of domains can be satisfied. In this way, the technique accommodates the requirements of several domains, securing the reusability not only of components with common functions but also of components in specialized domains.
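
The two customization levers named in the abstract, replacing a class inside a component and changing its message flow, can be illustrated compactly. A minimal sketch with invented class and method names, not the paper's notation:

```python
# A component whose behavior class and message flow are replaceable at runtime.
class DefaultPricing:
    def price(self, base):
        return base

class DiscountPricing:
    def price(self, base):
        return base * 0.9            # domain-specific variant swapped in later

class OrderComponent:
    def __init__(self, pricing=None):
        self.pricing = pricing or DefaultPricing()
        self.pipeline = [self.validate, self.charge]  # customizable message flow

    def validate(self, order):
        assert order["amount"] > 0
        return order

    def charge(self, order):
        order["total"] = self.pricing.price(order["amount"])
        return order

    def handle(self, order):
        for step in self.pipeline:   # messages flow through replaceable steps
            order = step(order)
        return order

comp = OrderComponent()
comp.pricing = DiscountPricing()     # dynamic customization: swap the class
print(comp.handle({"amount": 100})["total"])  # 90.0
```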

Hilbert Cube for Spatio-Temporal Data Warehouses (시공간 데이타웨어하우스를 위한 힐버트큐브)

  • Choi, Won-Ik;Lee, Suk-Ho
    • Journal of KIISE:Databases
    • /
    • v.30 no.5
    • /
    • pp.451-463
    • /
    • 2003
  • Recently, there have been various research efforts to develop strategies for accelerating OLAP operations on huge amounts of spatio-temporal data. Most of the work is based on multi-tree structures consisting of a single R-tree variant for the spatial dimension and numerous B-trees for the temporal dimension. The multi-tree based frameworks, however, are hardly applicable to spatio-temporal OLAP in practice, due mainly to high management cost and low query efficiency. To overcome the limitations of such multi-tree based frameworks, we propose a new approach called the Hilbert Cube (H-Cube), which employs fractals to impose a total order on cells. In addition, the H-Cube takes advantage of the traditional prefix-sum approach to improve query efficiency significantly. The H-Cube partitions an embedding space into a set of cells that are clustered on disk by Hilbert ordering, and then composes a cube by arranging the grid cells in chronological order. The H-Cube refines cells adaptively to handle regional data skew, which may change location over time. In short, the H-Cube is an adaptive, total-ordered, and prefix-summed cube for spatio-temporal data warehouses. Our approach focuses on indexing dynamic point objects in static spatial dimensions. Through extensive performance studies, we observed that the H-Cube consumed at most 20% of the space required by multi-tree based frameworks, and achieved higher query performance than multi-tree structures.
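
The prefix-sum idea the H-Cube builds on is worth seeing concretely. A minimal sketch over a plain 2-D grid of cell measures: after one preprocessing pass, any axis-aligned range sum is answered in O(1) by inclusion-exclusion. The Hilbert ordering of cells, which additionally clusters nearby cells on disk, is not shown.

```python
# O(1) range aggregation over a grid via 2-D prefix sums.
import numpy as np

cells = np.random.randint(0, 100, size=(8, 8))   # per-cell measures
P = cells.cumsum(axis=0).cumsum(axis=1)          # 2-D prefix sums

def range_sum(r1, c1, r2, c2):
    """Sum of cells[r1:r2+1, c1:c2+1] via four prefix-sum lookups."""
    total = P[r2, c2]
    if r1 > 0: total -= P[r1 - 1, c2]
    if c1 > 0: total -= P[r2, c1 - 1]
    if r1 > 0 and c1 > 0: total += P[r1 - 1, c1 - 1]
    return total

assert range_sum(2, 2, 5, 5) == cells[2:6, 2:6].sum()
```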

Parallel Computation For The Edit Distance Based On The Four-Russians' Algorithm (4-러시안 알고리즘 기반의 편집거리 병렬계산)

  • Kim, Young Ho;Jeong, Ju-Hui;Kang, Dae Woong;Sim, Jeong Seop
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.2
    • /
    • pp.67-74
    • /
    • 2013
  • Approximate string matching problems have been studied in diverse fields. Recently, fast approximate string matching algorithms are being used to reduce the time and cost of next-generation sequencing. To measure the amount of error between two strings, we use a distance function such as the edit distance. Given two strings X (|X| = m) and Y (|Y| = n) over an alphabet Σ, the edit distance between X and Y is the minimum number of edit operations needed to convert X into Y. The edit distance can be computed using the well-known dynamic programming technique in O(mn) time and space. It can also be computed using the Four-Russians' algorithm, whose preprocessing step runs in O((3|Σ|)^{2t}t^2) time and O((3|Σ|)^{2t}t) space and whose computation step runs in O(mn/t) time and O(mn) space, where t is the block size. In this paper, we present a parallelized version of the computation step of the Four-Russians' algorithm. Our algorithm computes the edit distance between X and Y in O(m+n) time using m/t threads. We implemented both the sequential version and our parallelized version of the Four-Russians' algorithm in CUDA to compare the execution times. When t = 1 and t = 2, our algorithm runs about 10 times and 3 times faster than the sequential algorithm, respectively.
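
For reference, a minimal sketch of the O(mn) dynamic-programming edit distance the paper starts from; the Four-Russians method precomputes t x t blocks of this table, and the paper's CUDA version fills independent blocks in parallel.

```python
# Classic edit-distance DP, kept to two rows for O(n) working space.
def edit_distance(x, y):
    m, n = len(x), len(y)
    prev = list(range(n + 1))                # D[0][j] = j insertions
    for i in range(1, m + 1):
        cur = [i] + [0] * n                  # D[i][0] = i deletions
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # delete x[i-1]
                         cur[j - 1] + 1,     # insert y[j-1]
                         prev[j - 1] + (x[i - 1] != y[j - 1]))  # substitute
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))    # 3
```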

Evaluation of Distributed Intrusion Detection System Based on MongoDB (MongoDB 기반의 분산 침입탐지시스템 성능 평가)

  • Han, HyoJoon;Kim, HyukHo;Kim, Yangwoo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.8 no.12
    • /
    • pp.287-296
    • /
    • 2019
  • Due to the development and increased usage of Internet services such as IoT and cloud computing, a large number of packets are generated on the Internet. To create a safe Internet environment, malicious data that may exist among these packets must be processed and detected quickly. In this paper, we apply MongoDB, which is specialized for unstructured data analysis and big data processing, to an intrusion detection system (IDS) for rapid processing of big data security events. In addition, by building the IDS with some of the private cloud resources that are themselves the target of protection, elastic and dynamic reconfiguration of the IDS becomes possible as the number of security events increases or decreases. To evaluate the performance of the MongoDB-based IDS proposed in this paper, we constructed prototype IDSs based on MongoDB as well as on an existing relational database, and compared their performance. Moreover, the number of virtual machines was increased to find out how performance changes as the IDS is distributed. The results show that performance improves as the number of virtual machines hosting the distributed IDS in the MongoDB environment increases, while the overall system performance remains unaffected. The security event insertion rate of the distributed MongoDB-based IDS was up to 60% faster, and its intrusion detection rate up to 100% faster, than those of the IDS based on a relational database.
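
Storing and querying security events in MongoDB is straightforward to sketch with pymongo. A minimal sketch, assuming a hypothetical connection string, database, collection, and event schema; the paper's system additionally distributes the collection across several virtual machines.

```python
# Insert and query security events in MongoDB via pymongo.
from datetime import datetime, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["ids"]["security_events"]
events.create_index([("src_ip", ASCENDING), ("ts", ASCENDING)])

events.insert_many([
    {"ts": datetime.now(timezone.utc), "src_ip": "10.0.0.5",
     "signature": "SQL injection attempt", "severity": 3},
    {"ts": datetime.now(timezone.utc), "src_ip": "10.0.0.9",
     "signature": "port scan", "severity": 1},
])

# Pull high-severity events, newest first.
for ev in events.find({"severity": {"$gte": 3}}).sort("ts", -1):
    print(ev["src_ip"], ev["signature"])
```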

A Level One Cache Organization for Chip-Size Limited Single Processor (칩의 크기가 제한된 단일칩 프로세서를 위한 레벨 1 캐시구조)

  • Ju YoungKwan;Kim Sukil
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.127-136
    • /
    • 2005
  • This paper measured, by simulation, a proper ratio between the size of the demand-fetch cache L1 and that of the prefetch cache LP when their combined size is fixed, for organizing the space-limited level 1 cache of a single microprocessor chip. The analysis of our experiment showed that when the total size of L1 and LP is 16 KB, the best performance is obtained by giving LP 4 KB and employing OBL as the prefetch technique and FIFO as the cache replacement policy. The analysis also showed that when the total size of L1 and LP is 32 KB or more, employing dynamic filtering as the prefetch technique for LP is more advantageous, and that splitting the level 1 cache into a 28 KB L1 and a 4 KB LP when 32 KB of space is available, and into a 48 KB L1 and a 16 KB LP when 64 KB is available, elicited the best performance.
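
The evaluated configuration, a demand-fetch cache plus a small prefetch cache with one-block lookahead (OBL) and FIFO replacement, can be mimicked in a toy simulation. A minimal sketch with an invented block-address trace and block counts standing in for the byte sizes; it is not the paper's simulator.

```python
# Toy trace-driven simulation of an L1/LP split with OBL prefetch and FIFO.
from collections import deque

class FifoCache:
    def __init__(self, capacity):
        self.capacity, self.fifo, self.blocks = capacity, deque(), set()

    def touch(self, block):
        """Return True on hit; on miss, insert with FIFO eviction."""
        hit = block in self.blocks
        if not hit:
            if len(self.blocks) >= self.capacity:
                self.blocks.discard(self.fifo.popleft())  # FIFO eviction
            self.fifo.append(block)
            self.blocks.add(block)
        return hit

l1, lp = FifoCache(12), FifoCache(4)         # demand-fetch and prefetch caches
trace = [0, 1, 2, 3, 9, 4, 5, 9, 6, 7]       # invented block-address trace
hits = 0

for block in trace:
    if l1.touch(block):                      # demand fetch into L1 on miss
        hits += 1
    elif block in lp.blocks:                 # hit in the prefetch cache
        hits += 1
    lp.touch(block + 1)                      # OBL: prefetch the next block
print(f"hit ratio: {hits / len(trace):.2f}")
```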