A Study on the Design of the Appraisal System of Permanent Archival Institutions: Focused on the Seoul Metropolitan Archives (영구기록물관리기관의 재평가체계 설계 연구 서울기록원을 중심으로)

  • Lee, Eunjung; Kim, Dabeen; Kim, Sunyou; Kim, Heejin; Ryu, Hanjo
    • The Korean Journal of Archival Studies / no.76 / pp.5-37 / 2023
  • This study aimed to design a reappraisal system applicable to permanent archival institutions, focusing on the Seoul Metropolitan Archives. To this end, appraisal areas for evidential, administrative, and historical value were established and detailed appraisal factors were derived. To apply these factors effectively, the appraisal procedure was designed in three stages. In the first stage, a law-based appraisal, long-term preservation is determined by identifying the position and legal form of the policymaker, criteria that can be applied immediately against clear standards. Records not designated for long-term preservation in the first stage proceed to the second stage, a business-function-based appraisal, in which factors such as records management standards, official document classification tables, pledges, and policies are applied comprehensively to review whether the held records warrant long-term preservation. Records still not designated after the second stage are assessed in the third stage, a subject-based appraisal that considers historical events, cultural assets, and collection policies. The designed system is significant in that it minimizes the arbitrariness reflected in appraisal decisions and increases the efficiency of appraisal, and it was confirmed that the system can appraise records while comprehensively reflecting their various contexts and values. In addition, a reappraisal system suited to permanent archival institutions was established by combining macro-appraisal and micro-appraisal in a balanced way.
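
To make the three-stage flow concrete, here is a minimal illustrative sketch in Python of a staged rule pipeline of this kind. The record fields, rank lists, and function categories are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not from the paper): the three-stage appraisal
# flow described above, expressed as a simple rule pipeline.

from dataclasses import dataclass, field

@dataclass
class Record:
    title: str
    creator_rank: str            # e.g., "mayor", "division head" (hypothetical)
    business_function: str       # e.g., "urban planning", "routine admin"
    subjects: set = field(default_factory=set)

LEGAL_RANKS = {"mayor", "vice-mayor"}                 # stage 1: law-based criteria
LONG_TERM_FUNCTIONS = {"urban planning", "housing"}   # stage 2: business functions
HISTORIC_SUBJECTS = {"historical event", "cultural asset", "collection policy"}

def appraise(rec: Record) -> str:
    # Stage 1: clear legal criteria that can be applied immediately.
    if rec.creator_rank in LEGAL_RANKS:
        return "long-term (stage 1: law-based)"
    # Stage 2: records management standards / classification of business functions.
    if rec.business_function in LONG_TERM_FUNCTIONS:
        return "long-term (stage 2: business function)"
    # Stage 3: subject-based review of historical and cultural value.
    if rec.subjects & HISTORIC_SUBJECTS:
        return "long-term (stage 3: subject-based)"
    return "candidate for disposal review"

print(appraise(Record("2019 housing policy file", "division head", "housing")))
```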

A Method for Extracting Equipment Specifications from Plant Documents and Cross-Validation Approach with Similar Equipment Specifications (플랜트 설비 문서로부터 설비사양 추출 및 유사설비 사양 교차 검증 접근법)

  • Jae Hyun Lee; Seungeon Choi; Hyo Won Suh
    • Journal of Korea Society of Industrial Information Systems / v.29 no.2 / pp.55-68 / 2024
  • Plant engineering companies create or refer to requirements documents for each related field, such as plant process/equipment/piping/instrumentation, in different engineering departments. The process-related requirements document includes not only a description of the process but also requirements for the equipment and related facilities that will operate it. Because the authors and reviewers of these requirements documents differ, inconsistencies may arise between equipment or part design specifications described in different documents. Ensuring consistency in these matters can increase the reliability of the overall plant design information. However, the volume of documents and the fact that requirements for the same equipment and parts are scattered across different documents make it challenging for engineers to trace and manage requirements. This paper proposes a method to analyze requirement sentences and calculate their similarity in order to identify semantically identical sentences. To calculate the similarity of requirement sentences, we propose a named entity recognition method that identifies the compound words for the parts and properties that are semantically central to each requirement. A method to calculate the similarity of the identified compound words for parts and properties is also proposed. The proposed method is explained using sentences from practical documents, and experimental results are described.
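
As an illustration of the general idea (not the authors' models), the sketch below pairs requirement sentences by the overlap of their part/property compound terms. A naive capitalization heuristic stands in for the paper's named entity recognition step, and the sentences, IDs, and threshold are all hypothetical.

```python
# Minimal sketch: flag requirement sentences that talk about the same
# parts/properties so an engineer can cross-check them for consistency.

from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "and", "or", "shall", "be", "not", "less", "than"}

def compound_terms(sentence: str) -> set[str]:
    """Hypothetical stand-in for NER: treat consecutive capitalized or
    hyphenated tokens as part/property compound words."""
    terms, current = set(), []
    for raw in sentence.replace(",", " ").split():
        tok = raw.strip(".();")
        is_term_token = (tok[:1].isupper() or "-" in tok) and tok.lower() not in STOPWORDS
        if is_term_token:
            current.append(tok)
        else:
            if current:
                terms.add(" ".join(current).lower())
            current = []
    if current:
        terms.add(" ".join(current).lower())
    return terms

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

requirements = [
    ("PROC-001", "The Feed Pump design pressure shall be 25 barg."),
    ("MECH-014", "Design pressure of the Feed Pump shall not be less than 25 barg."),
    ("PIPE-203", "Cooling Water piping shall be carbon steel."),
]

# Report candidate pairs of semantically related requirements for review.
for (id1, s1), (id2, s2) in combinations(requirements, 2):
    score = jaccard(compound_terms(s1), compound_terms(s2))
    if score >= 0.4:  # threshold chosen arbitrarily for the sketch
        print(f"{id1} <-> {id2}: similarity {score:.2f}")
```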

Exploring Development Achievement of the 2022 Revised High School Earth Science Curriculum to Cultivate Transformative Competency (변혁적 역량 함양을 위한 2022 개정 고등학교 과학과 지구과학 교육과정 개발 성과 탐색)

  • Youngsun Kwak; Jong-Hee Kim; Hyunjong Kim
    • Journal of the Korean Society of Earth Science Education / v.17 no.1 / pp.49-59 / 2024
  • In this study, we investigated the philosophical background and development process of the 2022 revised high school Earth science curriculum. Topics not covered in the official research report include its relevance to the transformative competencies of the OECD Education 2030 Learning Compass; the organization of core ideas and achievement standards around the three dimensions of knowledge-understanding, processes-skills, and values-attitudes that make up the learning compass; the composition of core ideas and Earth science electives in light of understanding-centered curriculum design; and IB-style inquiry-based teaching and learning. The main findings are that the 2022 revised Earth science curriculum emphasizes student agency to foster transformative competencies and scientific literacy, and that the curriculum document system in Earth science adopts the learning compass framework. In addition, core ideas of Earth science were derived on the basis of the understanding-centered curriculum, and elective courses were organized to help students reach these core ideas. IB-style inquiry-based teaching and learning was also emphasized to foster student agency together with knowledge construction competency. Based on these results, we suggest slimming the national and general-level curriculum, developing process-centered assessment methods for values and attitudes, applying backward curriculum design, and cultivating student agency through inquiry-based teaching and learning.

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method for integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents simple relationships between the integrated data, such as project-output, document-author, and document-topic relationships. The knowledge map then allows further relationships, such as co-author and co-topic relationships, to be inferred. To extract the relationships between the integrated data, a relational-data-to-triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge; the other is used for analyzing and representing knowledge extracted from science and technology documents. This research focuses on the latter. A knowledge map service is introduced for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services for national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; it lets us represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of a knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by author relationships and project-performer relationships. A knowledge map displaying the researchers' network is created, where the network is derived from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and an ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and loaded into an integrated database.
The knowledge base is constructed by transforming the relational data into triples that reference the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords that represent them. The topic modeling approach extracts these relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and we introduce the knowledge map services created on top of the knowledge base.
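
A minimal sketch of the two processing steps described above, under assumptions of our own (the namespace, predicates, and toy rows are hypothetical and not the paper's R&D ontology): relational rows are turned into RDF triples with rdflib, and an LDA topic model from scikit-learn links each document to topic keywords.

```python
# Sketch of relational-data-to-triples conversion plus document-topic linking.

from rdflib import RDF, Graph, Literal, Namespace
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

EX = Namespace("http://example.org/rnd/")  # hypothetical ontology namespace

# Toy relational rows: (project_id, output_type, output_id, author, abstract text)
rows = [
    ("P001", "paper",  "D101", "Kim", "ontology knowledge map semantic triples network"),
    ("P001", "patent", "D102", "Kim", "storage device patent hardware design circuit"),
    ("P002", "paper",  "D103", "Lee", "topic modeling latent dirichlet allocation text corpus"),
]

# Relational-data-to-triples step: project -> output, output -> author.
g = Graph()
for project, otype, doc, author, _ in rows:
    g.add((EX[project], RDF.type, EX.Project))
    g.add((EX[doc], RDF.type, EX[otype.capitalize()]))
    g.add((EX[project], EX.hasOutput, EX[doc]))
    g.add((EX[doc], EX.hasAuthor, Literal(author)))

# Topic-modeling step: link each document to topic keywords based on semantics.
abstracts = [r[4] for r in rows]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for (_, _, doc, _, _), topic in zip(rows, lda.transform(X).argmax(axis=1)):
    top_terms = [terms[i] for i in lda.components_[topic].argsort()[-3:]]
    g.add((EX[doc], EX.hasTopicKeyword, Literal(", ".join(top_terms))))

print(g.serialize(format="turtle"))
```

The resulting triples can be loaded into any triple store, and co-author or co-topic relationships then fall out of simple graph queries over the shared `hasAuthor` and `hasTopicKeyword` values.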

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data and executing the many functions needed to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic recovery functions that allow it to continue operating after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to distribute the stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a schema-free structure, is used in the proposed system. MongoDB is adopted because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log data insert performance evaluation over various chunk sizes.
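
The following is a rough sketch, under stated assumptions, of the kind of collector-plus-aggregation flow described above; the collection names, log fields, and classification rule are hypothetical and do not come from the paper.

```python
# Sketch: route schema-free bank log records into MongoDB by type, then
# aggregate counts per minute (the per-unit-time summary a graph module might plot).

from datetime import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["bank_logs"]

def collect(log: dict) -> None:
    """Classify a raw log record by type and store it without a fixed schema."""
    collection = "transaction_logs" if log.get("type") == "transaction" else "system_logs"
    log["received_at"] = datetime.utcnow()
    db[collection].insert_one(log)

collect({"type": "transaction", "branch": "Seoul-01", "amount": 50000})
collect({"type": "error", "branch": "Seoul-02", "message": "timeout"})

# Per-minute aggregation over the unstructured documents.
pipeline = [
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d %H:%M", "date": "$received_at"}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]
for bucket in db["transaction_logs"].aggregate(pipeline):
    print(bucket["_id"], bucket["count"])
```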

Odysseus/Parallel-OOSQL: A Parallel Search Engine using the Odysseus DBMS Tightly-Coupled with IR Capability (오디세우스/Parallel-OOSQL: 오디세우스 정보검색용 밀결합 DBMS를 사용한 병렬 정보 검색 엔진)

  • Ryu, Jae-Joon; Whang, Kyu-Young; Lee, Jae-Gil; Kwon, Hyuk-Yoon; Kim, Yi-Reun; Heo, Jun-Suk; Lee, Ki-Hoon
    • Journal of KIISE: Computing Practices and Letters / v.14 no.4 / pp.412-429 / 2008
  • As the number of electronic documents increases rapidly with the growth of the Internet, a parallel search engine capable of handling a large number of documents is becoming ever more important. To implement a parallel search engine, we need to partition the inverted index and search through the partitioned index in parallel. There are two methods of partitioning the inverted index: 1) document-identifier based partitioning and 2) keyword-identifier based partitioning. However, each method alone has drawbacks. The former is convenient for inserting documents and has high throughput, but has poor performance for top-k query processing. The latter has good performance for top-k query processing, but is inconvenient for inserting documents and has low throughput. In this paper, we propose a hybrid partitioning method to compensate for the drawbacks of each method. We design and implement a parallel search engine that supports the hybrid partitioning method using the Odysseus DBMS tightly coupled with information retrieval capability. We first introduce the architecture of the parallel search engine, Odysseus/Parallel-OOSQL. We then show the effectiveness of the proposed system through systematic experiments. The experimental results show that the query processing time of the document-identifier based partitioning method is approximately inversely proportional to the number of blocks in the partition of the inverted index. The results also show that the keyword-identifier based partitioning method has good performance in top-k query processing. The proposed parallel search engine can be optimized for performance by customizing the methods of partitioning the inverted index according to the application environment. The Odysseus/Parallel-OOSQL parallel search engine is capable of indexing, storing, and querying 100 million web documents per node or tens of billions of web documents for the entire system.
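
The contrast between the two partitioning schemes can be illustrated with a toy sketch (assumptions only; this is unrelated to the Odysseus code base):

```python
# Two ways to split an inverted index across nodes.

def doc_id_partition(postings: dict[str, list[int]], n_nodes: int):
    """Document-identifier partitioning: each node indexes a disjoint set of
    documents, so every node holds part of the postings for every keyword."""
    nodes = [dict() for _ in range(n_nodes)]
    for term, docs in postings.items():
        for doc in docs:
            nodes[doc % n_nodes].setdefault(term, []).append(doc)
    return nodes

def keyword_id_partition(postings: dict[str, list[int]], n_nodes: int):
    """Keyword-identifier partitioning: each keyword's full posting list lives
    on exactly one node, which favors top-k query processing on that term."""
    nodes = [dict() for _ in range(n_nodes)]
    for term, docs in postings.items():
        nodes[hash(term) % n_nodes][term] = list(docs)
    return nodes

index = {"parallel": [1, 2, 5], "search": [2, 3], "engine": [1, 3, 4, 5]}

# A query for "engine" touches every node in the first scheme
# but only a single node in the second.
print([n.get("engine", []) for n in doc_id_partition(index, 2)])
print([n.get("engine", []) for n in keyword_id_partition(index, 2)])
```

The hybrid method the paper proposes combines the two layouts so that insertion throughput and top-k query performance can be traded off per application environment.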

The Relationship Between Social Security Network and Security Life Satisfaction in Community Residents: Scale Development and Application of Social Security Network (사회안전망과 지역사회주민의 안전생활만족의 관계: 사회안전망 척도개발과 적용)

  • Kim, Chan-Sun
    • The Journal of the Korea Contents Association / v.14 no.6 / pp.108-118 / 2014
  • The purpose of this study is to develop a scale for measuring the social security network, verify its validity and reliability, and apply it to investigate its relationship with security life satisfaction. The study targeted general residents of Seoul in 2013 and analyzed a total of 203 cases collected using stratified cluster random sampling. The social security network scale was developed through document research, conceptual definition and survey drafting, an experts' conference, preliminary and main examinations, and verification of the survey's validity and reliability. An experts' conference took place to verify the validity of the survey, and exploratory factor analysis extracted six factors: crime prevention design, street CCTV facilities, volunteer neighborhood patrols, local government security education, police public peace services, and private security services. The collected data were analyzed with SPSSWIN 18.0 using frequency analysis, F-tests, factor analysis, reliability analysis, correlation analysis, and multiple regression analysis. The conclusions are as follows. First, the validity of the social security network scale is very high: the factors constituting the social security network were crime prevention design, street CCTV facilities, volunteer neighborhood patrols, local government security education, police public peace services, and private security services, with crime prevention design being the most explanatory factor. Second, the reliability of the scale is very high: the correlations between the items and their subscales, and between the items and the overall social security network score, were very high, and internal consistency showed Cronbach's α values above 0.865. Third, the establishment of a social security network had the largest effect on people in their forties: when crime prevention design, street CCTV facilities, local government security education, and police public peace services are systematically established, citizens' social anxiety is reduced.
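
For reference, the internal-consistency figure cited above (Cronbach's α) can be computed as follows; the sample scores are invented, and only the formula corresponds to the abstract's reliability analysis.

```python
# Cronbach's alpha for a respondents x items matrix of Likert scores.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five hypothetical respondents answering four 5-point items of one factor.
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```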

Geographic Conditions and Garden Designs of Byeol-seo Scenic Site of Gimcheon Bangcho-Pavilion and Mrs Choi's Pond (별서 명승 김천 방초정(芳草亭)과 최씨담(崔氏潭)의 입지 및 조영 특성)

  • Rho, Jae-Hyun; Lee, Hyun-Woo
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.34 no.1 / pp.71-82 / 2016
  • Through a literature review and an on-site survey of Gimcheon Bangcho pavilion(芳草亭), the features of its garden design(庭園意匠), including geographic conditions, the landscape modeling of the Nujeong(樓亭) and the Jidang(池塘, pond), and the scenic interpretations in Nujeong Jeiyoung poetry(樓亭題詠詩), were carefully researched, and the findings are as follows. Bangcho pavilion is located in a village called Wonteomaeul, which in feng shui terms takes the form of a golden hairpin and a floating lotus, and it has long been the cultural hub of communication and social interaction among the villagers. The Head House of Jeongyanggong(靖襄公宗宅), the main house(本第) of the Yeonan Yi Clan(延安李氏), is about 150m away from Bangcho pavilion, an artistic space whose landscape modeling takes the form called Nujeong. The name 'Bangcho' reflects the noble man(君子)'s determination: "I yearn for the place where honey parrots fly and the fragrant grass grows." From the two-story structure of the pavilion, where an additional floor with a central ondol room enclosed by four-sided subdivision doors is installed, one can detect the builders' aspiration for an open view. One can also observe the effort to design the room for multiple purposes, from a private place to an office for the periodic publication of a family lineage document called "Garyejunghae(家禮增解)". Bangcho pavilion's main sight of interest is Mrs Choi's Pond(崔氏潭), the one and only garden structure among the existing Jidangs in Korea that takes the form of a square pond with twin round islands(方池雙圓島). In this special Jidang, the two coexisting islands represent a well-thought-out garden facility symbolizing conjugal affection and the unyielding fidelity between master and servant(主從). In addition, the three inflows and one outflow facing the Ramcheon valley are regarded as an ideal garden design optimized for the function of a village bangjuk, a site for undercurrent creation and ecological reprocessing. At present, a giant pussy willow is the only original vegetation identified in the area of Bangcho pavilion, although this tree is about to wither away, judging from the signs of decrepitude that persist over roughly two-thirds of it. The old pine tree that appears in the 1872 Jeiyoung poetry of Byeongseon Song(宋秉璿) no longer exists. Anjae(安齋) Jang Yoo(張瑠)'s "Eight Scenes of Bangcho pavilion(芳草亭八詠)" and its expanded counterpart "Ten Scenes of Bangcho pavilion(芳草亭十景)" by Gwagang(守岡) Lee Manyoung(李晩永) vividly depict the pastoral scenery of an idyll(田園景) that stretches across the natural and cultural landscape of Gimcheon and Guseong surrounding Bangcho pavilion. These poems on Bangcho pavilion aim to establish the pavilion and the village of Wonteomaeul as the centre of a microcosm by dividing and allocating its scenic features according to the four seasons and times of day(四季四時), the eight directions(八方), and meteorological phenomena, and they are the manifestation(顯現) of landscape perception of the Byeol-seo scenic site of Bangcho pavilion, the cultural hub of the region.

A Design and Implementation of XML DTDs for Integrated Medical Information System (통합의료정보 시스템을 위한 XML DTD 설계 및 구현)

  • 안철범; 나연묵
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.6 / pp.106-117 / 2003
  • Advanced medical information systems usually consist of loosely coupled, independently operating systems such as HIS/RIS and PACS. To support easier information exchange between these systems and between hospitals, and to support new types of medical service such as teleradiology, it becomes essential to integrate the separated medical information and allow it to be exchanged and retrieved over the Internet. This paper proposes an integrated medical information system using XML. We analyzed the HL7 and DICOM standard formats and designed an integrated XML DTD. We extracted information from HL7 messages and DICOM files and generated XML document instances and XSL stylesheets based on the proposed XML DTD. We also implemented a web interface for the integrated medical information system that supports data sharing, information exchange, and retrieval across the two standard formats. The proposed XML-based integrated medical information system can help solve the problems of current medical information systems by integrating separated medical information and allowing data exchange and sharing over the Internet. The proposed system is also more robust than web-based medical information systems built with HTML, because XML provides more flexibility and extensibility than HTML.
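
As a rough illustration of the approach (the element names, fields, and DTD below are hypothetical and not the DTD proposed in the paper), one can define a small integrated DTD and build a conforming XML instance from fields extracted from HL7 messages and DICOM headers:

```python
# Toy "integrated" DTD plus an XML instance built from simulated HL7/DICOM fields.

from io import StringIO
from lxml import etree

DTD_TEXT = """
<!ELEMENT PatientRecord (Patient, Study)>
<!ELEMENT Patient (Name, BirthDate)>
<!ELEMENT Name (#PCDATA)>
<!ELEMENT BirthDate (#PCDATA)>
<!ELEMENT Study (Modality, Description)>
<!ELEMENT Modality (#PCDATA)>
<!ELEMENT Description (#PCDATA)>
"""

# Fields as they might arrive from the two standards (simulated extraction).
hl7_fields = {"Name": "HONG^GILDONG", "BirthDate": "19800101"}              # e.g., from an HL7 PID segment
dicom_fields = {"Modality": "CT", "Description": "Chest CT w/o contrast"}   # e.g., from DICOM tags

root = etree.Element("PatientRecord")
patient = etree.SubElement(root, "Patient")
for tag, value in hl7_fields.items():
    etree.SubElement(patient, tag).text = value
study = etree.SubElement(root, "Study")
for tag, value in dicom_fields.items():
    etree.SubElement(study, tag).text = value

dtd = etree.DTD(StringIO(DTD_TEXT))
print(etree.tostring(root, pretty_print=True).decode())
print("valid against DTD:", dtd.validate(root))
```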

Sports Celebrities as a Determinant of Sport Media Distribution Contents: Focusing on Tacit Premise of Agenda Setting Theory (스포츠미디어의 유통 콘텐츠 결정요인으로서 스포츠 스타: 의제설정 이론의 암묵적 전제를 중심으로)

  • YOO, Sang-Keon; KIM, Yong-Eun; SEO, Won-Jae
    • Journal of Distribution Science / v.17 no.10 / pp.83-91 / 2019
  • Purpose - Media is a significant distributional channel in sport. In terms of determining the influencer in building sport media contents, recent sport media studies have employed agenda-setting theory, assuming media itself as the agenda provider. In a real-world situation, however, sports stars have been deemed key factor determining distribution contents in sport. The starting point of this study is the "tacit premise" of agenda-setting theory. Given the agenda-setting theory, the current study attempted to explore the function of sport stars as an agenda provider, which is a key determinant of sport distribution. Research design, data, and methodology - This study has reviewed articles of Yuna Kim, Sang-hwa Lee, and Hyun-jin Ryu from daily newspapers including as dong-a ilbo and joongang ilbo (2013 to 2017). The study collected data, portable document format (PDF), from the online archive of dong-a ilbo and joongang ilbo. We coded the length of the article, the frequency, the size of the picture, and the structural form of the article. Inter-coder reliability was compared with data previously investigated by the researcher. Inter-coder reliabilities for study 1 and 2 was .89 and .85. To examine hypotheses, descriptive analysis, correlations, and cross-tap analysis were performed. Results - The results partially supported the hypotheses proposing the significant role of sports stars as the agenda setters in distributing sport media contents. In specific, the study found that the number of articles about sports stars prevailed the number of articles about regular athletes. Besides, studies found that the use of photos was more frequent in articles of sports starts than that of regular athletes. In sports newspaper articles, featured story articles were used more than straight-articles for news relating to sports stars. Also, sports newspaper of sports stars contained more information associated within an event rather than outside of an event. Conclusions - In sports journalism, this study challenges the current theory that the media affects the composition and the content of sports coverages. As the principle of the agenda-setting of sports media, the influence of sports stars must be continuously studied along with a follow-up study.