• Title/Summary/Keyword: schema

Search results: 1,440

Comparative Analysis and Implications of Command and Control (C2)-related Information Exchange Models (지휘통제 관련 정보교환모델 비교분석 및 시사점)

  • Kim, Kunyoung; Park, Gyudong; Sohn, Mye
    • Journal of Internet Computing and Services / v.23 no.6 / pp.59-69 / 2022
  • For effective battlefield situation awareness and command decision-making, seamless information exchange between systems is essential. However, since each system was developed independently for its own purposes, interoperability between systems must be ensured for information to be exchanged effectively. In the case of the Korean military, semantic interoperability is pursued by utilizing a common message format for data exchange. However, simply standardizing the data exchange format cannot sufficiently guarantee interoperability between systems. Currently, the U.S. and NATO are developing and utilizing information exchange models to achieve semantic interoperability beyond merely standardizing the exchange format. An information exchange model is a common vocabulary or reference model used to ensure that systems exchange information at the level of content and meaning. The information exchange models developed and utilized in the United States initially focused on exchanging information directly related to the battlefield situation, but they have developed into a universal form that can be used across government departments and related organizations. NATO, on the other hand, focused on strictly expressing the concepts necessary to carry out joint military operations among member countries, and the scope of its models was limited to concepts related to command and control. In this paper, the background, purpose, and characteristics of the information exchange models developed and used in the United States and NATO are identified and comparatively analyzed. Through this, we present implications for the future development of a Korean information exchange model.

A Study on Spatial Data Integration using Graph Database: Focusing on Real Estate (그래프 데이터베이스를 활용한 공간 데이터 통합 방안 연구: 부동산 분야를 중심으로)

  • Ju-Young KIM; Seula PARK; Ki-Yun YU
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.3 / pp.12-36 / 2023
  • Graph databases, which store different types of data and their relationships modeled as a graph, can be effective in managing and analyzing real estate spatial data linked by complex relationships. However, they are not widely used for this purpose because of their limited spatial functionality. In this study, we propose a uniform grid-based approach to managing real estate spatial data in a graph database, so that various real estate-related spatial questions can be answered. By analyzing the real estate community to identify relevant data, and by using national point numbers as unit grids, we construct a graph schema that links diverse real estate data and build a test database. We then test basic topological relationships and spatial functions using the Jackpine benchmark, and further conduct query tests on various scenarios to verify the appropriateness of the proposed method. The results show that the proposed method successfully executed 25 of 29 spatial topological relationships and spatial functions, and achieved about 97% accuracy over the 25 functions and 15 scenarios. The significance of this study lies in proposing an efficient data integration method that can answer real estate-related spatial questions despite the limited spatial operation capabilities of graph databases. However, limitations remain, such as incorrect spatial topological relationships arising from the grid-based index and query inefficiency due to list comparisons, which need to be addressed in follow-up studies.
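
To make the grid-hopping idea concrete, here is a minimal sketch of how such a query could look with the official `neo4j` Python driver. The schema is a hypothetical one inspired by the paper's approach: `Parcel` and `Building` nodes attached to shared `Grid` nodes (the unit grids derived from national point numbers), so "what shares a grid with X" becomes a pure graph traversal rather than a geometric computation. The labels, properties, URI, and credentials are illustrative assumptions, not taken from the paper.

```python
# Sketch only: assumes a local Neo4j instance and a hypothetical
# Parcel/Building/Grid schema inspired by the paper's approach.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def buildings_sharing_grid(tx, parcel_id):
    # Hop from a parcel to its unit grid, then to buildings in that grid.
    query = (
        "MATCH (p:Parcel {id: $parcel_id})-[:LOCATED_IN]->(g:Grid)"
        "<-[:LOCATED_IN]-(b:Building) "
        "RETURN b.id AS building_id"
    )
    return [record["building_id"] for record in tx.run(query, parcel_id=parcel_id)]

with driver.session() as session:
    print(session.execute_read(buildings_sharing_grid, "PARCEL-0001"))
driver.close()
```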

XQuery Query Rewriting for Query Optimization in Distributed Environments (분산 환경에 질의 최적화를 위한 XQuery 질의 재작성)

  • Park, Jong-Hyun; Kang, Ji-Hoon
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.1-11 / 2009
  • XQuery, proposed by the W3C, is one of the standard query languages for XML data and is widely accepted by many applications. Efficient processing of XQuery queries has therefore become an important research topic, and XQuery optimization is one of the new issues in this area. However, previous research has focused on optimization techniques for specific XML data management systems, and those techniques cannot be applied across arbitrary XML data management systems. Also, some previous work relies on predefined XML structure information such as an XML Schema or DTD for optimization. In practice, however, not all applications can refer to structure information for their XML data. Therefore, this paper analyzes only the XQuery query itself and optimizes it using nothing but the query. We propose three optimization methods that consider the characteristics of XQuery. The first method removes redundant expressions in the query; the second reorders the processing of operations and clauses; and the third rewrites the query based on the FOR clause. We focus on the FOR clause because it generally introduces a loop, and the loop often leads to redundant execution of operations. Through a performance evaluation, we show that the processing time for rewritten queries is less than for the original queries. Each method in our XQuery optimizer can also be used separately, because the methods are independent of one another.
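
As an illustration of the third rule, consider a FOR clause whose body repeatedly evaluates a loop-invariant expression. The sketch below is my own toy example, not the paper's rewriter; the bookstore.xml document and element names are hypothetical. It hoists the invariant into a LET clause so it is evaluated once instead of once per iteration; a real rewriter would operate on the query's parse tree rather than on strings.

```python
# Toy demonstration of hoisting a loop-invariant expression out of a
# FOR clause. Queries are plain strings; no XQuery engine is invoked.

ORIGINAL = """
for $b in doc("bookstore.xml")//book
where $b/price > avg(doc("bookstore.xml")//book/price)
return $b/title
"""

def hoist_invariant(query: str, invariant: str, var: str) -> str:
    """Bind `invariant` to `var` in a LET clause and replace its
    occurrences inside the loop body. Naive string-level version;
    a real rewriter would work on the query's AST."""
    body = query.strip().replace(invariant, var)
    return f"let {var} := {invariant}\n{body}\n"

# avg(...) does not depend on $b, so it can be evaluated once up front.
print(hoist_invariant(ORIGINAL,
                      'avg(doc("bookstore.xml")//book/price)', "$avg"))
```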

The Effect of Color Incongruity on Brand Attitude: Moderating Effect of Self-Image Congruence (컬러 불일치가 브랜드 태도에 미치는 영향: 자아이미지 일치성의 조절효과를 고려하여)

  • Lee, Sang Eun; Kim, Sang Yong
    • Asia Marketing Journal / v.11 no.4 / pp.69-93 / 2010
  • In this research, we show through experiments that color incongruity between media has a positive influence on brand attitude from the perspective of integrated brand management. We also show that 'brand-consumer' self-image congruence moderates this effect of color incongruity. The media were limited to those that magnify visual influence, in order to isolate the influence of color. For the same reason, visual factors other than color were held constant, and we chose brands with either low familiarity or no previous exposure. As a result, we find that brand attitude under color incongruity between media was higher than brand attitude under color congruence. For consumers with lower brand-consumer self-image congruence, we observe a larger attitude change than for consumers with higher self-image congruence. Our findings are interesting in that a brand may be enhanced by forming positive brand attitude through brand expression, i.e., the color of visual factors. In addition, we suggest that the feasible range of congruence and diversity in brand expression is in fact deeper and wider than brand managers' intuition suggests. Incongruity, which has been studied as a strategy for repositioning mature brands, can thus also be a way of improving recognition of new brands.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed to process a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating continually after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it hard to expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the representative document-oriented data store, MongoDB, which has a schema-free structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions, and the aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log insertion performance evaluation over various chunk sizes.
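
As a minimal sketch of why a schema-free document store suits such heterogeneous bank logs, the following assumes a reachable MongoDB instance and the `pymongo` driver; the database, collection name, and log fields are hypothetical. Records of different shapes land in one collection without schema migrations, and a timestamp index supports the per-unit-time aggregation described above.

```python
# Sketch only: hypothetical log shapes, not the paper's actual system.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
logs = client["banklogs"]["raw_logs"]

# Two log records with different fields coexist in one collection.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer",
     "account": "123-456", "amount": 50000, "channel": "mobile"},
    {"ts": datetime.now(timezone.utc), "type": "login_failure",
     "user": "jdoe", "reason": "bad_password", "ip": "10.0.0.7"},
])

# A timestamp index supports per-unit-time aggregation of the logs.
logs.create_index([("ts", ASCENDING)])

# Count logs per type, the kind of summary a graph module would plot.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```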

Radiation Absorbed Dose Calculation Using Planar Images after Ho-166-CHICO Therapy (Ho-166-CHICO 치료 후 평면 영상을 이용한 방사선 흡수선량의 계산)

  • 조철우; 박찬희; 원재환; 왕희정; 김영미; 박경배; 이병기
    • Progress in Medical Physics / v.9 no.3 / pp.155-162 / 1998
  • Ho-166 was produced by neutron activation in a reactor at the Korea Atomic Energy Institute (Taejon, Korea). Ho-166 emits high-energy beta particles with a maximum energy of 1.85 MeV and a small proportion of gamma rays (80 keV). The radiation absorbed dose estimation could therefore be based on in-vivo quantification of the activity in tumors from gamma camera images. Approximately 1 mCi of Ho-166 in solution was mixed into a flood phantom, and planar scintigraphic images were acquired with and without the patient interposed between the phantom and the scintillation camera. The transmission factor over an area of interest was calculated from the ratio of counts in selected regions of the two images described above. A dual-head gamma camera (Multispect2, Siemens, Hoffman Estates, IL, USA) equipped with medium-energy collimators was used for imaging (80 keV ± 10%). A fifty-nine-year-old female patient with hepatoma was enrolled in the therapeutic protocol after informed consent was obtained. Thirty millicuries (1110 MBq) of Ho-166-CHICO was injected into the right hepatic arterial branch supplying the hepatoma. When the injection was completed, anterior and posterior scintigraphic views of the chest and pelvic regions were obtained on 3 successive days. Regions of interest (ROIs) were drawn over the organs in both the anterior and posterior views. The activity in those ROIs was estimated from the geometric mean of the paired views, the calibration factor, and the transmission factors. Absorbed dose was calculated using the Marinelli formula and the Medical Internal Radiation Dose (MIRD) schema. The tumor dose for the patient treated with 1110 MBq (30 mCi) of Ho-166 was calculated to be 179.7 Gy. The dose to normal liver, spleen, lung, and bone was 9.1, 10.3, 3.9, and 5.0% of the tumor dose, respectively. In conclusion, the tumor dose and the absorbed dose to surrounding structures were calculated by daily external imaging after Ho-166 therapy for hepatoma. Absorbed dose calculation provides useful information for keeping the dose to each surrounding organ below its threshold.
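
The activity estimation step lends itself to a worked example. The sketch below is my own illustration of the standard conjugate-view method, not the paper's code, and all numbers are made-up placeholders: the geometric mean of anterior and posterior ROI counts is corrected by the transmission factor and the camera calibration factor mentioned above.

```python
# Conjugate-view activity quantification from paired planar images.
import math

def conjugate_view_activity(counts_ant, counts_post, transmission, calib_cpm_per_mbq):
    """Estimate organ activity (MBq) from paired planar views.

    counts_ant / counts_post : ROI counts in the anterior / posterior image
    transmission             : fraction of photons transmitted through the
                               patient (from the flood-phantom ratio image)
    calib_cpm_per_mbq        : system sensitivity, counts per MBq
    """
    geometric_mean = math.sqrt(counts_ant * counts_post)
    # Dividing by sqrt(transmission) corrects for attenuation at organ depth.
    return geometric_mean / (math.sqrt(transmission) * calib_cpm_per_mbq)

# Placeholder numbers: 1.2e6 / 0.9e6 ROI counts, 40% transmission,
# 5000 counts per MBq.
a = conjugate_view_activity(1.2e6, 0.9e6, 0.40, 5000.0)
print(f"Estimated organ activity: {a:.1f} MBq")
```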

TV Anytime and MPEG-21 DIA based Ubiquitous Consumption of TV Contents in Digital Home Environment (TV Anytime 및 MPEG-21 DIA 기반 콘텐츠 이동성을 이용한 디지털 홈 환경에서의 유비쿼터스 TV 콘텐츠 소비)

  • Kim Munjo; Yang Chanseok; Lim Jeongyeon; Kim Munchurl; Park Sungjin; Kim Kwanlae; Oh Yunje
    • Journal of Broadcast Engineering / v.10 no.4 s.29 / pp.557-575 / 2005
  • Much research on core technologies has been done to make ubiquitous video services possible over various kinds of user information terminals, anytime and anywhere, in the way users want to consume them. In this paper, we design a prototype system architecture for ubiquitous TV program consumption based on user preference via various kinds of intelligent information terminals in a digital home environment, and present an implementation and testing results for the prototype system. For the system design, we utilize the TV Anytime specification for the consumption of TV program contents based on user preferences, and we use the MPEG-21 DIA (Digital Item Adaptation) tools, which provide representation schema formats for describing the context information of user environments, user terminal characteristics, and user characteristics for universal access and consumption of preferred TV program contents. The proposed ubiquitous content mobility prototype system is designed to allow a single user or multiple users to seamlessly consume, via various kinds of user terminals, the TV program contents they watch together. The prototype system in the digital home environment consists of a home server, a display TV terminal, and an intelligent information terminal. We used 42 TV program contents in eight genres from four TV channels to test our prototype system.

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo; Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 has changed who creates content: in the earlier web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users, improving content quality and thereby increasing the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated by social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. The first is the insufficient representational power for objects in the social network. The second is the inability to express the diverse connections among users. The third is the difficulty of reflecting dynamic changes in the social network caused by changing user interests. The last is the lack of a method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. However, solving the second and third problems requires a novel technique to reflect dynamic changes in user interests and relations. In this paper, we propose a novel method that overcomes these problems by applying FOAF (a tool for describing user profiles) and RSS (a web content syndication mechanism) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes such as the flow of time and shifts in user interests. RSS provides a standard vocabulary for syndicating web site contents in the form of RDF/XML. We collect users' personal information and relations with FOAF, collect user contents with RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the 'Dynamic FOAF Management Algorithm' processes the generated cube. The algorithm consists of two functions: find_id_interest() and find_relation(). find_id_interest() extracts user interests during the input period, and find_relation() extracts the users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The implemented result shows that users' foaf:interest increased by an average of 19 percent over four weeks.
In proportion to the foaf:interest change, the number of users' foaf:knows relations grew by an average of 9 percent over four weeks. Because we use FOAF and RSS, which are widely supported in Web 2.0 and social network services, as the basic data, we have a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and computer type. Using the suggested method, better services can be provided that cope with the rapid change of user interests through the automatic updating of FOAF.
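
As a sketch of the algorithm's shape (in Python rather than the paper's C#/MS-SQL stack, over a toy in-memory cube with made-up data and thresholds), the two functions named above could look like this: extract the interests a user showed in a period, then find the users who share them.

```python
# Toy (user, interest, week) -> count cube standing in for the OLAP cube.
from collections import defaultdict

cube = {
    ("alice", "jazz", 1): 3, ("alice", "jazz", 2): 5,
    ("alice", "hiking", 2): 1,
    ("bob", "jazz", 2): 4, ("bob", "chess", 1): 2,
}

def find_id_interest(user, week_from, week_to, min_posts=2):
    """Interests `user` expressed in [week_from, week_to] (>= min_posts)."""
    totals = defaultdict(int)
    for (u, interest, week), n in cube.items():
        if u == user and week_from <= week <= week_to:
            totals[interest] += n
    return {i for i, n in totals.items() if n >= min_posts}

def find_relation(user, week_from, week_to):
    """Other users sharing at least one of `user`'s current interests."""
    mine = find_id_interest(user, week_from, week_to)
    return {u for (u, interest, _w), _n in cube.items()
            if u != user and interest in mine}

# Rebuild the FOAF view for alice: new foaf:interest and foaf:knows edges.
print(find_id_interest("alice", 1, 2))   # {'jazz'}
print(find_relation("alice", 1, 2))      # {'bob'}
```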

On Method for LBS Multi-media Services using GML 3.0 (GML 3.0을 이용한 LBS 멀티미디어 서비스에 관한 연구)

  • Jung, Kee-Joong; Lee, Jun-Woo; Kim, Nam-Gyun; Hong, Seong-Hak; Choi, Beyung-Nam
    • 한국공간정보시스템학회 학술대회논문집 (Proceedings of the Korea Spatial Information System Society) / 2004.12a / pp.169-181 / 2004
  • SK Telecom constructed the GIMS system as the common base framework of its LBS/GIS service system, based on the OGC (OpenGIS Consortium) international standards, for the first mobile vector map service in 2002. However, as service content has grown more complex, a renovation has been needed to satisfy multi-purpose, multi-function, and maximum-efficiency requirements. This research prepares a GML 3-based platform to upgrade the service from the GML 2-based GIMS system, making it possible for a variety of application services to provide location and geographic data easily and freely. From GML 3.0, animation, event handling, resources for style mapping, 3D topology specification, and telematics services were selected for the mobile LBS multimedia service, and the schema and transfer protocol were developed and organized to optimize data transfer to the MS (Mobile Station). The upgrade to the GML 3.0-based GIMS system provides an innovative framework, in terms of both construction and service, relative to the previously implemented research and system. Also, a GIMS channel interface has been implemented to simplify access to the GIMS system, and the internal GIMS service components, the WFS and WMS, have been enhanced and extended.
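
As a small illustration of one GML 2 to GML 3 encoding change relevant to such an upgrade, the sketch below uses Python's `lxml` to build a point encoded with gml:pos, which in GML 3 replaces the GML 2 gml:coordinates style. The coordinates are hypothetical, and the paper's actual application schema is not reproduced here.

```python
# Build a minimal GML 3 style point element with lxml.
from lxml import etree

GML = "http://www.opengis.net/gml"

# gml:pos carries the coordinate tuple in GML 3; GML 2 used gml:coordinates.
point = etree.Element("{%s}Point" % GML, srsName="EPSG:4326", nsmap={"gml": GML})
pos = etree.SubElement(point, "{%s}pos" % GML)
pos.text = "37.5665 126.9780"  # hypothetical coordinates (Seoul)

print(etree.tostring(point, pretty_print=True).decode())
```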

The Correlation between Dependency Grammar and GB Theory (Die Korrelation zwischen der DG und GB-Theorie; 의존문법과 생성문법 간의 상호관계)

  • Rhie Jum-Chool
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft / v.5 / pp.143-166 / 2002
  • The aim of this paper is to describe the correlation between dependency grammar (DG) and GB theory (GB) on the basis of the theories of several grammarians. According to Eroms (2000), VPs are to be separated into an inner phrase, which binds the complements in oblique cases, and an outer one, which binds off the subject valency. The INFL phrase head of GB corresponds to the finiteness morpheme in DG. In traditional DG the verb stands at the top of the sentence, but Eroms (2000) places S at the top: the S (with S., S?, S!) is the highest element of the sentence, on which all other words depend. Conjunctions govern their clause, i.e. the clause is controlled by them; with this, the analogy to X-bar syntax is complete. According to Vater (1996), the structure of the German VP can be represented systematically according to the X-bar schema, whereby the assignment of objects allows several possibilities. X-bar rules alone cannot generate the variety of possible VP structures; here V-bar rules and valency interlock. Vater (1996) proposes taking into account the distinction between valency potential and valency realization proposed by Ágel (1993) in order to come to grips with the different syntactic and morphological realizations across languages. Following László (1988), Ágel (1993, 2000) assumes two levels of valency realization: the micro level (that of the morphological actants) and the macro level (that of the syntactic actants). The carrier on the micro level is the micro-actant, i.e. the part of the verbal inflection; the carrier on the macro level is the macro-actant, i.e. the syntactic subject. The terms 'micro level' and 'macro level' are interpreted both statically and dynamically. On the basis of these terms, Ágel (1993, 2000) searches for parallels of valency realization in S and NP. However, the investigation of the valency realization conditions in the NP leads to an apparent typological contrast between S and NP. To resolve this contrast, the concept of the 'finite noun' is introduced, analogous to the concept of the 'finite verb'; in the process, the so-called strong adjective inflection is reinterpreted as part of the noun inflection. GB theory defines itself as a theory of so-called mental grammar, of the language faculty. Since the subject matter of DG can be regarded as so-called objective grammar, the fundamental difference between the subject matters of the two theories seems to lie in the opposition 'internal (= mental) vs. external (= objective)'. Since GB is a theory organized more along the lines of the natural sciences than the empirical orientation of DG, a basis for comparison is indeed not easy to establish. Although the two theories start from completely different premises and objectives, there are overlaps and increasing convergences between DG and GB. In general it is necessary for partial insights from one theory to be adopted into another, and in this paper, by examining the correlation between the two theories, we were able to identify partial convergences. According to Engel (1994), there is no grammar 'in itself'; it is made by grammarians. Grammars are the work of humans and of linguists. We are in search of 'a better grammar.'
