• Title/Summary/Keyword: Computing System (컴퓨팅 시스템)


Changes in Meteorological Variables by SO2 Emissions over East Asia using a Linux-based U.K. Earth System Model (리눅스 기반 U.K. 지구시스템모형을 이용한 동아시아 SO2 배출에 따른 기상장 변화)

  • Youn, Daeok;Song, Hyunggyu;Lee, Johan
    • Journal of the Korean Earth Science Society
    • /
    • v.43 no.1
    • /
    • pp.60-76
    • /
    • 2022
  • This study presents the full software setup of the United Kingdom Earth System Model (UKESM) on a Linux cluster, together with the resulting test execution times, and then compares results from control and experimental simulations of the UKESM against various observations. Despite its low resolution, the latest version of the UKESM can simulate tropospheric chemistry-aerosol processes and stratospheric ozone chemistry using the United Kingdom Chemistry and Aerosol (UKCA) module. The UKESM with UKCA (UKESM-UKCA) can treat atmospheric chemistry-aerosol-cloud-radiation interactions throughout the whole atmosphere. In addition to the control UKESM run with the default CMIP5 SO2 emission dataset, an experimental run was conducted to evaluate the aerosol effects on meteorology by changing the atmospheric SO2 loading with the newest REAS data over East Asia. The simulation period of the two model runs was 28 years, from January 1, 1982 to December 31, 2009. Spatial distributions of monthly mean aerosol optical depth, 2-m temperature, and precipitation intensity from the model simulations and observations over East Asia were compared. The spatial patterns of surface temperature and precipitation from the two model simulations were generally in reasonable agreement with the observations. The simulated ozone concentrations and total column ozone also agreed reasonably well with the ERA5 reanalysis. Comparisons of spatial patterns and linear trends led to the conclusion that the model simulation with the newest SO2 emission dataset over East Asia reproduced the temporal changes in temperature and precipitation over the western Pacific and inland China better. Our results are in line with previous findings that SO2 emissions over East Asia are an important factor for the atmospheric environment and climate change. This study confirms that the UKESM can be installed and operated in a Linux cluster-computing environment. Thus, researchers in various fields would have better access to the UKESM, which can handle the carbon cycle and atmospheric environment on Earth with interactions between the atmosphere, ocean, sea ice, and land.
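The trend comparison described in the abstract can be illustrated with a minimal sketch: given gridded monthly means (e.g., 2-m temperature) for the 336 months of January 1982 to December 2009, a per-grid-cell least-squares linear trend can be computed and compared between runs and observations. This is an editorial illustration only; the array shapes, variable names, and NumPy-based approach are assumptions, not taken from the paper.

```python
import numpy as np

def linear_trend(monthly_means: np.ndarray) -> np.ndarray:
    """Least-squares linear trend per grid cell.

    monthly_means: shape (n_months, n_lat, n_lon), e.g. 336 months
    covering Jan 1982 - Dec 2009. Returns the slope for each grid
    cell, in units of the field per month.
    """
    n = monthly_means.shape[0]
    t = np.arange(n, dtype=float)
    t_centered = t - t.mean()
    anomalies = monthly_means - monthly_means.mean(axis=0)
    # slope = cov(t, y) / var(t), evaluated for every grid cell at once
    return np.tensordot(t_centered, anomalies, axes=(0, 0)) / (t_centered ** 2).sum()

# Usage (illustrative random data, not model output):
# rng = np.random.default_rng(0)
# control_run = rng.normal(size=(336, 73, 96))
# print(linear_trend(control_run).shape)  # -> (73, 96)
```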

A Study on the Serialized Event Sharing System for Multiple Telecomputing User Environments (원격.다원 사용자 환경에서의 순차적 이벤트 공유기에 관한 연구)

  • 유영진;오용선
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.344-350
    • /
    • 2003
  • In this paper, we propose a novel sharing method that orders the events occurring among users collaborating in a common telecomputing environment. We realize the sharing method with multimedia data to improve the effect of coworking over a teleprocessing network. This sharing method improves the efficiency of communication-oriented projects such as remote education, teleconferencing, and co-authoring of multimedia contents by offering conveniences of presentation, group authoring, common management, and transient event production to the users. In a conventional shared whiteboard system, all multimedia content segments must be authored by a dedicated program, and no existing contents or programs can be used. Moreover, ordering errors occur in teleprocessing operation because there is no technology for lining up the input ordering of commands. Therefore, we develop a method of retrieving input and output events from the Windows system, together with a message-hooking technology that transmits them between programs in the operating system. In addition, we realize a technology that delivers the processing results to all sharing users of the distributed computing environment without error. Our sharing technology should contribute to improving face-to-face coworking efficiency for multimedia contents authoring, common blackboard systems in remote education, and presentation display in visual conferences.
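A common way to serialize events from multiple remote users is a central sequencer that stamps each incoming event with a global sequence number and lets every client replay events in that order. The sketch below is a hedged, standard-library-only illustration of that idea; the class and field names are hypothetical and do not come from the paper, which hooks Windows messages rather than using Python.

```python
import itertools
import queue
from dataclasses import dataclass, field

@dataclass(order=True)
class SharedEvent:
    seq: int                              # global order assigned by the sequencer
    user: str = field(compare=False)
    payload: dict = field(compare=False)

class EventSequencer:
    """Assigns a single global order to events arriving from many users."""
    def __init__(self) -> None:
        self._counter = itertools.count()
        self._outbox: "queue.Queue[SharedEvent]" = queue.Queue()

    def submit(self, user: str, payload: dict) -> SharedEvent:
        ev = SharedEvent(seq=next(self._counter), user=user, payload=payload)
        self._outbox.put(ev)              # every client replays events in seq order
        return ev

# Usage: two users submit events; the sequencer fixes one global order.
seq = EventSequencer()
seq.submit("alice", {"type": "draw", "x": 10, "y": 20})
seq.submit("bob", {"type": "text", "value": "hello"})
```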

Odysseus/Parallel-OOSQL: A Parallel Search Engine using the Odysseus DBMS Tightly-Coupled with IR Capability (오디세우스/Parallel-OOSQL: 오디세우스 정보검색용 밀결합 DBMS를 사용한 병렬 정보 검색 엔진)

  • Ryu, Jae-Joon;Whang, Kyu-Young;Lee, Jae-Gil;Kwon, Hyuk-Yoon;Kim, Yi-Reun;Heo, Jun-Suk;Lee, Ki-Hoon
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.14 no.4
    • /
    • pp.412-429
    • /
    • 2008
  • As the amount of electronic documents increases rapidly with the growth of the Internet, a parallel search engine capable of handling a large number of documents is becoming ever more important. To implement a parallel search engine, we need to partition the inverted index and search through the partitioned index in parallel. There are two methods of partitioning the inverted index: 1) document-identifier-based partitioning and 2) keyword-identifier-based partitioning. However, each method alone has the following drawbacks. The former is convenient for inserting documents and has high throughput, but has poor performance for top-k query processing. The latter has good performance for top-k query processing, but is inconvenient for inserting documents and has low throughput. In this paper, we propose a hybrid partitioning method to compensate for the drawbacks of each method. We design and implement a parallel search engine that supports the hybrid partitioning method using the Odysseus DBMS tightly coupled with information retrieval capability. We first introduce the architecture of the parallel search engine, Odysseus/Parallel-OOSQL. We then show the effectiveness of the proposed system through systematic experiments. The experimental results show that the query processing time of the document-identifier-based partitioning method is approximately inversely proportional to the number of blocks in the partition of the inverted index. The results also show that the keyword-identifier-based partitioning method has good performance in top-k query processing. The proposed parallel search engine can be optimized for performance by customizing the methods of partitioning the inverted index according to the application environment. The Odysseus/Parallel-OOSQL parallel search engine is capable of indexing, storing, and querying 100 million web documents per node, or tens of billions of web documents for the entire system.
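The two partitioning schemes contrasted in the abstract can be sketched in a few lines: document-identifier-based partitioning splits the document collection across nodes (each node indexes its own documents), while keyword-identifier-based partitioning splits the vocabulary (each node holds complete posting lists for its own keywords). The toy example below illustrates the general technique only; it is not the Odysseus/Parallel-OOSQL implementation, and all names are hypothetical.

```python
from collections import defaultdict

docs = {
    1: ["parallel", "search", "engine"],
    2: ["inverted", "index", "search"],
    3: ["parallel", "index", "partition"],
}

def build_inverted_index(docs):
    index = defaultdict(list)             # keyword -> posting list of doc ids
    for doc_id, terms in sorted(docs.items()):
        for term in set(terms):
            index[term].append(doc_id)
    return index

def partition_by_document(docs, n_nodes):
    """Each node indexes a disjoint subset of documents (postings are split)."""
    shards = [dict() for _ in range(n_nodes)]
    for doc_id, terms in docs.items():
        shards[doc_id % n_nodes][doc_id] = terms
    return [build_inverted_index(s) for s in shards]

def partition_by_keyword(docs, n_nodes):
    """Each node holds complete postings for a disjoint subset of keywords."""
    full = build_inverted_index(docs)
    shards = [defaultdict(list) for _ in range(n_nodes)]
    for term, postings in full.items():
        shards[hash(term) % n_nodes][term] = postings
    return shards

print(partition_by_document(docs, 2))
print(partition_by_keyword(docs, 2))
```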

A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society
    • /
    • v.3 no.2 s.6
    • /
    • pp.21-43
    • /
    • 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years with respect to the complexity and diversity of software, distributed processing, network computing, and so on. This leads the paradigm of software development toward CBD (Component-Based Development) based on object-oriented technology. As an effort to support these movements, OGC has released abstract and implementation standards that enable access to services for heterogeneous geographic information processing. It is also a common trend in the domestic field to develop GIS applications for municipal governments based on component technology. Therefore, it is imperative to adopt component technology in light of these movements, yet little related research has been carried out. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its applicability through a case study. ATOM can be used as a methodology for developing components themselves as well as enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each process. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the development methodology. The major characteristic of ATOM is that it is a component-based development methodology that considers the numerous features of the GIS domain in order to generate components with a single function, the smallest size, and maximum reusability. The case study conducted to validate the applicability of ATOM showed that it is an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development. Therefore, ATOM would promote the quality and productivity of application GIS software development and eventually contribute to the automatic production of GIS software, our final goal.

An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.129-141
    • /
    • 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All the products in the warehouse are put into boxes by brand, and the boxes are stored in a row on shelves installed in the warehouse. To make picking and management easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes and those boxes are placed adjacent to the existing boxes of the same brand. If there is not enough space for the newly arriving boxes, however, some boxes of other brands must be moved away, and the new boxes are then placed adjacently in the resulting vacant spaces. We want to minimize the movement of the existing boxes of other brands to other places on the shelves during the warehousing of newly arriving boxes, while keeping all the boxes of the same brand side by side on the shelves. Firstly, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of spaces for storing boxes, i.e., cells, tagging the series of cells with a series of numbers starting from one, and considering any two boxes stored in the cells to be adjacent to each other if their cell numbers are consecutive. We then formulate the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with a Branch-and-Bound technique may not be tractable for this problem, because it would take too long to solve given the number of cells and boxes in the warehouse and the computing power of the Internet shopping mall. As an alternative approach, we designed a fast heuristic method for this reallocation problem by focusing only on the unused spaces, i.e., the empty cells on the shelves, which results in an assignment problem model. In this approach, the newly arriving boxes are assigned to the empty cells, and those boxes are then reorganized so that the boxes of a brand are adjacent to each other. The objective of this new approach is to minimize the movement of boxes during the reorganization process while keeping the boxes of a brand adjacent to each other. The approach, however, does not ensure the optimality of the solution in terms of the original problem, that is, the problem of minimizing the movement of existing boxes while keeping boxes of the same brand adjacent to each other. Even though this heuristic method may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, which is acceptable to real-world experts. In order to assess the quality of the heuristic solutions, we randomly generated 100 problems, in which the number of cells spans from 2,000 to 4,000, solved the problems with both our heuristic approach and the original integer programming approach using a commercial optimization software package, and then compared the heuristic solutions with their corresponding optimal solutions in terms of solution time and the number of box movements. We also implemented our heuristic approach in a storage location assignment system for the Internet shopping mall.
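The empty-cell heuristic described above maps naturally onto a classic assignment problem: build a cost matrix between newly arriving boxes and empty cells and solve for a minimum-cost assignment. The sketch below is an illustrative reading of that idea using SciPy's Hungarian-algorithm solver; the toy shelf layout, the distance-based cost, and all names are hypothetical, and the paper's actual heuristic and its reorganization step are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy shelf: cell index -> brand of the stored box (None = empty cell).
shelf = {0: "A", 1: "A", 2: None, 3: "B", 4: None, 5: "B", 6: None}
new_boxes = ["A", "A", "B"]                     # brands of newly arriving boxes

empty_cells = [c for c, b in shelf.items() if b is None]
brand_cells = {}
for c, b in shelf.items():
    if b is not None:
        brand_cells.setdefault(b, []).append(c)

# Cost of putting a new box of a given brand into an empty cell:
# distance from that brand's existing block, a proxy for how much
# later reshuffling would be needed to restore brand adjacency.
cost = np.array([[min(abs(cell - c) for c in brand_cells[brand])
                  for cell in empty_cells]
                 for brand in new_boxes])

rows, cols = linear_sum_assignment(cost)        # minimum-total-cost assignment
for box_idx, cell_idx in zip(rows, cols):
    print(f"box of brand {new_boxes[box_idx]} -> cell {empty_cells[cell_idx]}")
```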

A Construction of TMO Object Group Model for Distributed Real-Time Services (분산 실시간 서비스를 위한 TMO 객체그룹 모델의 구축)

  • 신창선;김명희;주수종
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.307-318
    • /
    • 2003
  • In this paper, we design and construct a TMO object group that provides guaranteed real-time services in distributed object computing environments, and we verify that the model executes correct distributed real-time services. The TMO object group we suggest is based on TINA's object group concept. The model consists of TMO objects having real-time properties and of components that support the object management service and the real-time scheduling service within the TMO object group. TMO objects can be duplicated or non-duplicated on distributed systems. Our model can execute guaranteed distributed real-time services on COTS middleware without requiring a special ORB or operating system. To achieve the goals of our model, we defined the concept of the TMO object and the structure of the TMO object group. We also designed and implemented the functions and interactions of the components in the object group. The TMO object group includes the Dynamic Binder object and the Scheduler object, which support the object management service and the real-time scheduling service, respectively. The Dynamic Binder object supports the dynamic binding service that selects the appropriate one among the duplicated TMO objects for clients' requests. The Scheduler object supports the real-time scheduling service that determines the priority of tasks executed by an arbitrary TMO object for clients' service requests. Then, in order to verify the execution of our model, we implemented the Dynamic Binder object and the Scheduler object, adopting the binding priority algorithm for the dynamic binding service and the EDF algorithm for the real-time scheduling service by extending existing, well-known algorithms. Finally, from the numerical analysis results, we verified that our TMO object group model can support the dynamic binding service for duplicated or non-duplicated TMO objects, as well as the real-time scheduling service for arbitrary TMO objects requested by clients.
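The Scheduler object's real-time scheduling service is based on the EDF (Earliest Deadline First) policy, which always runs the ready task whose absolute deadline is nearest. The sketch below shows generic EDF ordering with a heap; it is a simplified, hypothetical Python illustration, not the paper's TMO implementation, and all names are invented.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                          # absolute deadline; earliest runs first
    name: str = field(compare=False)
    exec_time: float = field(compare=False)

class EDFScheduler:
    """Earliest Deadline First: always run the ready task with the nearest deadline."""
    def __init__(self) -> None:
        self._ready: list[Task] = []

    def submit(self, task: Task) -> None:
        heapq.heappush(self._ready, task)

    def run(self, now: float = 0.0) -> None:
        while self._ready:
            task = heapq.heappop(self._ready)
            finish = now + task.exec_time
            status = "meets" if finish <= task.deadline else "MISSES"
            print(f"{task.name}: runs {now:.1f}-{finish:.1f}, {status} deadline {task.deadline}")
            now = finish

# Usage: the request with the earlier deadline (client B) is scheduled first.
sched = EDFScheduler()
sched.submit(Task(deadline=5.0, name="client-A request", exec_time=2.0))
sched.submit(Task(deadline=3.0, name="client-B request", exec_time=1.0))
sched.run()
```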

Deep Learning OCR based document processing platform and its application in financial domain (금융 특화 딥러닝 광학문자인식 기반 문서 처리 플랫폼 구축 및 금융권 내 활용)

  • Dongyoung Kim;Doohyung Kim;Myungsung Kwak;Hyunsoo Son;Dongwon Sohn;Mingi Lim;Yeji Shin;Hyeonjung Lee;Chandong Park;Mihyang Kim;Dongwon Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.143-174
    • /
    • 2023
  • With the development of deep learning technologies, Artificial-Intelligence-powered Optical Character Recognition (AI-OCR) has evolved to read multiple languages accurately from various forms of images. For the financial industry, where a large number of diverse documents are processed manually, the potential for using AI-OCR is great. In this study, we present the configuration and design of an AI-OCR model for use in the financial industry and discuss the platform construction with application cases. Since the use of financial domain data is prohibited under the Personal Information Protection Act, we developed a deep learning-based data generation approach and used it to train the AI-OCR models. The AI-OCR models are trained for image preprocessing, text recognition, and language processing, and they are configured as a microservice-architected platform to process a broad variety of documents. We have demonstrated the AI-OCR platform by applying it to the financial domain tasks of document sorting, document verification, and typing assistance. The demonstrations confirm increased work efficiency and convenience.
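The platform trains separate models for image preprocessing, text recognition, and language processing and composes them as microservices. The sketch below illustrates only that staged-pipeline idea; the stage functions are placeholders and every name is hypothetical, not an API of the described platform.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    image: bytes
    text: str = ""
    fields: dict = None

def preprocess(doc: Document) -> Document:
    # placeholder: deskew/denoise the scanned image before recognition
    return doc

def recognize_text(doc: Document) -> Document:
    # placeholder: call the text-recognition service and fill doc.text
    doc.text = "<recognized text>"
    return doc

def postprocess_language(doc: Document) -> Document:
    # placeholder: extract structured fields (dates, amounts, names) from doc.text
    doc.fields = {"raw": doc.text}
    return doc

PIPELINE: List[Callable[[Document], Document]] = [
    preprocess, recognize_text, postprocess_language,
]

def run_pipeline(doc: Document) -> Document:
    for stage in PIPELINE:            # each stage could be a separate microservice call
        doc = stage(doc)
    return doc

result = run_pipeline(Document(image=b"..."))
print(result.fields)
```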

Development of a Testing Environment for Parallel Programs based on MSC Specifications (MSC 명세를 기반으로 한 병렬 프로그램 테스팅 환경의 개발)

  • Kim, Hyeon-Soo;Bae, Hyun-Seop;Chung, In-Sang;Kwon, Yong-Rae;Chung, Young-Sik;Lee, Byung-Sun;Lee, Dong-Gil
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.6 no.2
    • /
    • pp.135-149
    • /
    • 2000
  • Most prior work on testing parallel programs has concentrated on how to guarantee reproducibility by employing event traces exercised during executions of a program. Consequently, little work has been done on generating test cases, especially from the specifications produced during the software development process. In this research, we devise techniques for deriving test cases automatically from specifications written in Message Sequence Charts (MSCs), which are widely used in the telecommunications area, and we develop a testing environment for performing module testing of parallel programs with the derived test cases. To derive test cases from MSCs, we have to uncover the causality relations among events embedded implicitly in the MSCs. For this, we devise methods for adapting vector time stamping to MSCs. Then, valid event sequences satisfying the causality relations are generated and used as test cases. The generated test cases, written in TTCN, are translated into CHILL source code, which interacts with a target module to be tested and tests the validity of the module's behavior. Since the testing method developed in this research extracts test cases from the MSC specifications produced during the telecommunications software development process, it is not necessary to describe auxiliary specifications for testing. In addition, since adapting vector time stamping generates the event sequences automatically, the generated event sequences, which cover the whole system, can also be used for individual testing purposes.
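Vector time stamps make the causality (happens-before) relation between MSC events explicit: event a precedes event b if a's vector is componentwise less than or equal to b's and strictly smaller in at least one component, and any total order that never places a causal successor before its predecessor is a valid event sequence. The toy sketch below illustrates this check only; the three-event example and all names are hypothetical, and the paper's TTCN/CHILL generation is not shown.

```python
from itertools import permutations

# Vector timestamps per event, one counter per process (two processes here).
# A hypothetical three-event MSC: p0 sends m to p1, p1 receives m, then p1 acts.
events = {
    "send_m":    (1, 0),
    "recv_m":    (1, 1),
    "p1_action": (1, 2),
}

def happens_before(a, b):
    """a -> b iff a's vector is componentwise <= b's and strictly < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def valid_sequences(events):
    """All total orders that respect the causal (happens-before) partial order."""
    names = list(events)
    for order in permutations(names):
        ok = all(not happens_before(events[later], events[earlier])
                 for i, earlier in enumerate(order)
                 for later in order[i + 1:])
        if ok:
            yield order

for seq in valid_sequences(events):
    print(seq)   # only ('send_m', 'recv_m', 'p1_action') respects causality
```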

The Role of Home Economics Education in the Fourth Industrial Revolution (4차 산업혁명시대 가정과교육의 역할)

  • Lee, Eun-hee
    • Journal of Korean Home Economics Education Association
    • /
    • v.31 no.4
    • /
    • pp.149-161
    • /
    • 2019
  • At present, we are at the turning point of the 4th industrial revolution era due to the development of artificial intelligence (AI) and rapid technological innovation that no one could have predicted until now. This study started from the question, 'What role should home economics education play in the era of the Fourth Industrial Revolution?'. The Fourth Industrial Revolution is characterized by AI, cloud computing, the Internet of Things (IoT), big data, and Online-to-Offline (O2O) services. It will drastically change the social system, science and technology, and the structure of professions. Since dehumanization through robots and artificial intelligence may occur, education for the 4th Industrial Revolution should seek to foster future human resources with humanity and citizenship for the future community. In addition, the implication of education in the fourth industrial revolution, which will bring about a change to a super-intelligent and hyper-connected society, is that the role of education should be emphasized so that humans internalize their values as human beings. Character education should be established as a generalized and internalized consciousness, with its concept embedded across the integrated curriculum, and concrete practical strategies should be prepared. In conclusion, home economics education in the 4th industrial revolution era should play a leading role in character education and in the intrinsic improvement of various aspects of human life. The fourth industrial revolution will change not only what we do, that is, human mental and physical activities, but also who we are, that is, human identity. In the information and digital society, what matters is how quickly and accurately scattered knowledge can be acquired, and it is necessary to learn how to use knowledge for human beings amid rapid change. As such, the fourth industrial revolution seeks to lead the family, organization, and community positively by influencing the systems that shape our lives. Home economics education should take the lead in this role.

Analysis on the Cooling Efficiency of High-Performance Multicore Processors according to Cooling Methods (기계식 쿨링 기법에 따른 고성능 멀티코어 프로세서의 냉각 효율성 분석)

  • Kang, Seung-Gu;Choi, Hong-Jun;Ahn, Jin-Woo;Park, Jae-Hyung;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.7
    • /
    • pp.1-11
    • /
    • 2011
  • Many researchers have studied methods to improve processor performance. However, the highly integrated semiconductor technology used to improve processor performance causes many problems, such as reduced battery life, high power density, and hotspots. In particular, as hotspots have a critical impact on the reliability of a chip, thermal problems should be considered together with performance and power consumption when designing high-performance processors. There have been various studies on alleviating the thermal problems of processors. In the past, mechanical cooling methods were used to control the temperature of processors. However, up-to-date microprocessors cause severe thermal problems, resulting in increased cooling costs. Therefore, recent studies have focused on architecture-level thermal-aware design techniques rather than mechanical cooling methods. Even though architecture-level thermal-aware design techniques are efficient at reducing the temperature of processors, they inevitably cause performance degradation. Therefore, if mechanical cooling methods can manage the thermal problems of processors efficiently, performance can be improved by reducing the degradation caused by architecture-level thermal-aware design techniques such as dynamic thermal management. In this paper, we analyze the cooling efficiency of high-performance multicore processors according to mechanical cooling methods. According to our experiments using an air cooler and a liquid cooler, the liquid cooler consumes more power than the air cooler but reduces the temperature more efficiently. In particular, the cost of reducing the temperature by 1°C varies with the environment. Therefore, if mechanical cooling methods are used appropriately, the temperature of high-performance processors can be managed more efficiently.
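The notion of a cooling cost per degree can be made concrete with simple arithmetic: divide the cooling power a method consumes by the temperature reduction it achieves. The numbers below are purely hypothetical placeholders for illustration, not measurements from the paper.

```python
# Hypothetical numbers purely for illustration; not data from the paper.
baseline_temp_c = 80.0            # processor temperature with no extra cooling

coolers = {
    "air":    {"power_w": 4.0,  "temp_c": 65.0},
    "liquid": {"power_w": 12.0, "temp_c": 52.0},
}

for name, c in coolers.items():
    reduction = baseline_temp_c - c["temp_c"]
    cost_per_degree = c["power_w"] / reduction   # watts spent per 1 degree C removed
    print(f"{name:6s}: {reduction:.1f} C reduction, {cost_per_degree:.2f} W per degree C")
```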