• Title/Summary/Keyword: Scientific Computing


Design and manufacture of carrying along style HRV operational bioinstrumentation system that apply AVR MCU(II) (AVR MCU를 적용한 휴대형 HRV 생체 계측시스템의 설계 및 제작(II))

  • Kim, Whi-Young;Park, Doo-Yul
    • Journal of the Korea Computer Industry Society
    • /
    • v.8 no.4
    • /
    • pp.295-302
    • /
    • 2007
  • Mobile computing combines wireless communication, portable information terminals, Internet-connected computers, and information technology applied to the human body, so physiological monitoring can in principle be offered anytime, anywhere, and to anyone; it also allows established techniques to be reconsidered and reconstructed creatively. In particular, it opens the possibility of intervening in the course of a disease before symptoms develop, which matters increasingly in an aging society. Difficulties remain, however: many parameters must be processed, data standardization is still vague, and simultaneous data collection is hard. This study therefore removes the constraints of time and place by exploiting mobile computing: a suitable bioelectric-signal measurement method was selected, and a system was implemented and experimentally analyzed. The resulting m-HSS (mobile Hardware-Software System) serves as a model for mobile biosignal analysis devices and enables portable biosignal measurement through a scientific approach.

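The HRV system above centers on measuring beat-to-beat (RR) intervals. Purely as an illustration of what such a device computes downstream (not the paper's m-HSS implementation), two standard time-domain HRV metrics, SDNN and RMSSD, can be derived from a list of RR intervals:

```python
import math
from statistics import mean, pstdev

def sdnn(rr_ms):
    """Standard deviation of the RR (NN) intervals, in milliseconds."""
    return pstdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(mean(d * d for d in diffs))

# Example RR intervals (ms) from a short hypothetical recording
rr = [812, 830, 801, 795, 842, 818, 806]
metrics = (sdnn(rr), rmssd(rr))
```

A constant RR series gives SDNN = 0, and a single step of 10 ms gives RMSSD = 10, which makes the two functions easy to sanity-check.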

Multi-platform Visualization System for Earth Environment Data (지구환경 데이터를 위한 멀티플랫폼 가시화 시스템)

  • Jeong, Seokcheol;Jung, Seowon;Kim, Jongyong;Park, Sanghun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.3
    • /
    • pp.36-45
    • /
    • 2015
  • Creating continuous high-definition images from very large volume data is an important research subject in engineering and the natural sciences. Visualization techniques that effectively present high-resolution data as visual images have increased the need for software that helps analysts extract useful information from data. In this paper, we design a client-server based multi-platform visualization system to effectively analyze and present earth environment data constructed from observation and prediction. The visualization server, composed of a cluster, transfers data to clients through parallel/distributed computing, and the client is developed to run on various platforms and visualize the data. In addition, we aim for a user-friendly program through multi-touch and sensor input, and produce realistic simulation images with image-based lighting techniques.

A Multi-objective Optimization Approach to Workflow Scheduling in Clouds Considering Fault Recovery

  • Xu, Heyang;Yang, Bo;Qi, Weiwei;Ahene, Emmanuel
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.3
    • /
    • pp.976-995
    • /
    • 2016
  • Workflow scheduling is one of the challenging problems in cloud computing, especially when service reliability is considered. To improve cloud service reliability, fault tolerance techniques such as fault recovery can be employed. In practice, fault recovery affects the performance of workflow scheduling, and this impact deserves detailed study; only a few works on workflow scheduling consider it. In this paper, we investigate the problem of workflow scheduling in clouds, considering the probability that cloud resources may fail during execution. We formulate the problem as a multi-objective optimization model whose first objective is to minimize the overall completion time and whose second is to minimize the overall execution cost. Based on this model, we develop a heuristic algorithm called Min-min based time and cost tradeoff (MTCT). We perform extensive simulations with four different real-world scientific workflows to verify the validity of the model and evaluate the performance of the algorithm. The results show that, as expected, fault recovery has a significant impact on both performance criteria, and that the proposed MTCT algorithm is useful for real-life workflow scheduling when both optimization objectives are considered.
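The Min-min heuristic with a time/cost tradeoff can be sketched under simplifying assumptions (independent tasks, linear cost, no failures; this is an illustration of the general idea, not the paper's MTCT algorithm):

```python
def min_min_tradeoff(tasks, resources, slack=0.1):
    """Greedy Min-min scheduling with a simple time/cost tradeoff.

    tasks: {name: workload}; resources: {name: (speed, cost_per_time)}.
    Each step schedules the task with the smallest achievable completion
    time (Min-min); among resources finishing within (1 + slack) of that
    optimum, the cheapest one is chosen.
    """
    ready = {r: 0.0 for r in resources}        # when each resource frees up
    schedule, remaining = {}, dict(tasks)
    while remaining:
        # smallest achievable completion time for every remaining task
        best = {t: min(ready[r] + w / resources[r][0] for r in resources)
                for t, w in remaining.items()}
        t = min(best, key=best.get)            # Min-min task choice
        limit = best[t] * (1 + slack)          # time slack for the tradeoff
        cands = [r for r in resources
                 if ready[r] + remaining[t] / resources[r][0] <= limit]
        r = min(cands, key=lambda c: resources[c][1])   # cheapest candidate
        finish = ready[r] + remaining[t] / resources[r][0]
        cost = (finish - ready[r]) * resources[r][1]
        schedule[t] = (r, finish, cost)
        ready[r] = finish
        del remaining[t]
    return schedule
```

With a tight slack the fastest resource wins; widening the slack lets a slower but cheaper resource take the task, which is the essence of the time/cost tradeoff.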

An Adaptive Grid Resource Selection Method Using Statistical Analysis of Job History (작업 이력의 통계 분석을 통한 적응형 그리드 자원 선택 기법)

  • Hur, Cin-Young;Kim, Yoon-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.3
    • /
    • pp.127-137
    • /
    • 2010
  • As large-scale computational applications in various scientific domains have come to run over many integrated sets of grid computing resources, managing and controlling their execution has become more difficult. Referring to the job history generated from many application executions helps identify an application's characteristics and decide grid resource selection policies meaningfully. In this paper, we apply a statistical technique, Plackett-Burman design with fold-over (PBDF), to analyze grid environments and application execution history. The PBDF design identifies the main factors in grid environments and applications and ranks them by how strongly they affect execution time. The effective factors are used to select reference job profiles, and a preferable resource is then chosen based on those profiles. An application is executed on the selected resource, its result is added to the job history, and each factor's credit is adjusted according to the actual execution time. As a proof of concept, we analyzed job history from an aerospace research grid system to characterize its resources and applications. We built the JARS algorithm and simulated it with the analyzed job history. The simulation shows good reliability and considerable performance in a grid environment with frequently crashing resources.
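A Plackett-Burman design with fold-over can be constructed mechanically; the sketch below uses the textbook construction (cyclic shifts of a generating row, an all-minus run, then the negated fold-over rows) and the standard main-effect contrast. It is not taken from the paper:

```python
def pb_foldover_design(gen):
    """Plackett-Burman design from a cyclic generating row, plus fold-over.

    gen: first row of +1/-1 levels for k factors. Cyclic shifts give k
    runs, an all-minus row completes the base design, and the fold-over
    appends every row negated: 2 * (k + 1) runs in total.
    """
    k = len(gen)
    rows = [gen[i:] + gen[:i] for i in range(k)]   # cyclic shifts
    rows.append([-1] * k)                          # the all-low run
    rows += [[-v for v in row] for row in rows]    # fold-over: negate all
    return rows

def main_effects(design, y):
    """Effect of factor j = mean(y at +1) - mean(y at -1)."""
    n, k = len(design), len(design[0])
    return [sum(design[r][j] * y[r] for r in range(n)) / (n / 2)
            for j in range(k)]
```

For 7 factors the standard generating row is `[+1, +1, +1, -1, +1, -1, -1]`; ranking the absolute effect values is then exactly the "how strongly does each factor affect execution time" step described in the abstract.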

Big Wave in R&D in Quantum Information Technology -Quantum Technology Flagship (양자정보기술 연구개발의 거대한 물결)

  • Hwang, Y.;Baek, C.H.;Kim, T.;Huh, J.D.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.1
    • /
    • pp.75-85
    • /
    • 2019
  • Quantum technology is undergoing a revolution. Theoretically, strange phenomena of quantum mechanics, such as superposition and entanglement, can enable high-performance computing, unconditionally secure communication, and high-precision sensing. Such theoretical possibilities have been examined over the last few decades; the goal now is to bring these quantum advantages into daily life. Europe, where quantum mechanics was born about a century ago, is striving to stand at the front of this quantum revolution. The European Commission has therefore decided to invest 1 billion EUR over 10 years and has initiated the ramp-up phase with 20 projects in the fields of communication, simulation, sensing and metrology, computing, and fundamental science. This program is called the "Quantum Technology Flagship" program. Its first objective is to consolidate and expand European scientific leadership and excellence in quantum research. Its second is to kick-start a competitive European industry in quantum technology and develop future global industrial leaders. Its final objective is to make Europe a dynamic and attractive region for innovative and collaborative research and business in quantum technology. The program also trains next-generation quantum engineers to achieve a world-leading position in quantum technology. Its most important principle, however, is to realize quantum technology and introduce it to the market; to this end, it emphasizes that European academic institutes and industries must collaborate on research and development, on the belief that without commercialization no technology can be developed to its full potential. In this study, we review the strategy of the Quantum Technology Flagship program and the 20 projects of its ramp-up phase.

Simulation and Analysis of Wildfire for Disaster Planning and Management

  • Yang, Fan;Zhang, Jiansong
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.443-449
    • /
    • 2022
  • With climate change and global population growth, the frequency and scope of wildfires are constantly increasing, threatening people's lives and property. For example, according to the California Department of Forestry and Fire Protection, in 2020 a total of 9,917 wildfire incidents were reported in California, with an estimated burned area of 4,257,863 acres, resulting in 33 fatalities and 10,488 structures damaged or destroyed. At the same time, ongoing technological development provides new tools to simulate and analyze the spread of wildfires, and how to use new technology to reduce wildfire losses is an important research topic. A potentially feasible strategy is to simulate and analyze wildfire spread through computing technology, explore the impact of different factors (such as weather and terrain) on the spread, figure out how to take preemptive/responsive measures that minimize potential losses, and thereby achieve better wildfire management support. In preparation for pursuing these goals, the authors used Spark, a powerful computing framework developed by the Commonwealth Scientific and Industrial Research Organization (CSIRO), to study the effects of different weather factors (wind speed, wind direction, air temperature, and relative humidity) on the spread of wildfires. The test results showed that wind is a key factor in determining spread, that stable weather conditions (stable wind and air) help limit spread, and that jointly considering weather factors and environmental obstacles can help limit the threat of wildfires.

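The interplay of wind direction and obstacles that the study examines can be caricatured with a deterministic cellular automaton. This toy rule (fire always spreads downwind; with calm wind it spreads to all four neighbours) is illustrative only and is unrelated to CSIRO's Spark solver:

```python
def spread_fire(grid, ignition, wind, steps):
    """Toy wind-driven fire spread on a grid of burnable cells (1 = fuel).

    With a non-zero wind vector (dr, dc) fire spreads only downwind;
    with calm wind (0, 0) it spreads to all four neighbours. Cells with
    0 fuel act as obstacles that block the spread.
    """
    rows, cols = len(grid), len(grid[0])
    burning = {ignition}
    dirs = [wind] if wind != (0, 0) else [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        new = set()
        for (r, c) in burning:
            for dr, dc in dirs:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                    new.add((nr, nc))
        burning |= new
    return burning
```

Even this caricature reproduces the abstract's two qualitative findings: the wind vector dictates the spread direction, and a fuel break (a 0 cell) downwind stops the fire entirely.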

The SAR Chart Viewer design and Implementation in mobile web (모바일 웹에서의 SAR Chart Viewer 설계 및 구현)

  • Lim, Il-Kwon;Kim, Young-Hyuk;Lee, Jae-Gwang;Lee, Jae-Pil;Jang, Haeng-Jin;Lee, Jae-Kwang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.9
    • /
    • pp.2097-2104
    • /
    • 2013
  • Scientific and technological research data are increasing exponentially, and KISTI (Korea Institute of Science and Technology Information) built and operates the GSDC (Global Science experimental Data hub Center) to meet the need for large-scale data computing and storage. As mobile devices spread rapidly, the mobile web standards-based MUPS was built for the global community and services of the GSDC. To analyze the operational status and system resources of the GSDC, this paper researches and implements a method for monitoring GSDC system resources in a mobile web environment. The research support system of the GSDC runs on Scientific Linux. The sysstat resource monitoring tools create a daily report through sar (system activity report) after sadc (system activity data collector) collects system resource utilization information. In this paper, sar reports are designed and implemented on the mobile web so that they can be visualized in a mobile environment. Because the implementation is based on the mobile web, it does not depend on a specific OS and is available on a variety of mobile OSes. Through the visual graphs provided, the system can be monitored more easily and conveniently than with the existing system.
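The first step of such a viewer is turning a textual sar report into chart-ready records. The sketch below parses hypothetical `sar -u` CPU lines; real sysstat output varies by version and locale, so the sample text and the pattern are assumptions, not the paper's parser:

```python
import re

# Hypothetical `sar -u` report fragment (format is an assumption).
SAMPLE = """\
12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
12:10:01 AM  all   3.12   0.00     1.05     0.21    0.00  95.62
12:20:01 AM  all   4.40   0.00     1.33     0.18    0.00  94.09
"""

LINE = re.compile(
    r"^(\d{2}:\d{2}:\d{2}) (AM|PM)\s+all\s+"
    r"([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_cpu_report(text):
    """Turn all-CPU sar lines into dicts ready for a chart library."""
    points = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:                      # header and per-core lines are skipped
            points.append({
                "time": f"{m.group(1)} {m.group(2)}",
                "user": float(m.group(3)),
                "idle": float(m.group(8)),
            })
    return points
```

The resulting list of dicts is the kind of structure a mobile web chart component can consume directly as a time series.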

A study of grid network performance management system through web service platform-independent (플랫폼 독립적인 웹서비스를 이용한 그리드 네트워크 성능 관리 시스템에 대한 연구)

  • Song, Ji-Hyun;Ahn, Seong-Jin;Chung, Jin-Wook
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.10 no.2
    • /
    • pp.81-88
    • /
    • 2006
  • The advent of supercomputers has contributed greatly to overcoming scientific and academic problems that were previously difficult to solve. However, supercomputers come at considerable cost. In response, the concept of grid computing, which uses the resources of distributed computers connected to one another, was created. Such systems use connection-oriented protocols to integrate and manage the resources of heterogeneous distributed systems, yet they suffer from compatibility problems between different protocols. In this paper, a platform-independent system for managing grid network performance through an XML-based SOAP web service is proposed.
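The platform independence of SOAP comes from reducing every call to a plain XML envelope that any stack can produce and consume. A minimal sketch with Python's standard library (the method name `getLinkStats` and the `urn:grid-perf` namespace are hypothetical, not from the paper):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method, params, ns="urn:grid-perf"):
    """Serialize a SOAP 1.1 request envelope for the given method call."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

def parse_request(xml_text):
    """Extract the method name and parameters back out of an envelope."""
    env = ET.fromstring(xml_text)
    call = env.find(f"{{{SOAP_NS}}}Body")[0]
    method = call.tag.split("}")[-1]            # strip the namespace
    return method, {child.tag: child.text for child in call}
```

Because both sides only exchange this XML text, a Java server and a Python client (or any other pairing) interoperate without sharing a platform, which is exactly the property the paper relies on.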


Extraction of Protein-Protein Interactions based on Convolutional Neural Network (CNN) (Convolutional Neural Network (CNN) 기반의 단백질 간 상호 작용 추출)

  • Choi, Sung-Pil
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.3
    • /
    • pp.194-198
    • /
    • 2017
  • In this paper, we propose a revised Deep Convolutional Neural Network (DCNN) model to extract Protein-Protein Interactions (PPIs) from the scientific literature. The proposed method has the merit of improving performance by applying various global features in addition to the simple lexical features used in conventional relation extraction approaches. In experiments on AIMed, the best-known collection used for PPI extraction, the proposed model achieves a state-of-the-art score (78.0 F-score), the best performance so far in this domain. The paper also shows that, without feature engineering based on complicated language processing, convolutional neural networks with embeddings can achieve superior PPI extraction performance.
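The core operation such models rely on is convolution over token embeddings followed by max-over-time pooling. A single filter of that pipeline can be shown in plain Python (a toy sketch of the generic technique, far from the paper's full DCNN):

```python
def conv_max_pool(embeddings, filt, bias=0.0):
    """One convolution filter over a token-embedding sequence, followed
    by ReLU and max-over-time pooling.

    embeddings: one d-dimensional vector per token.
    filt: a window of w d-dimensional weight vectors.
    Returns the single pooled feature this filter contributes.
    """
    w = len(filt)
    feats = []
    for i in range(len(embeddings) - w + 1):
        window = embeddings[i:i + w]
        # dot product of the flattened window with the flattened filter
        s = bias + sum(v * f
                       for vec, fvec in zip(window, filt)
                       for v, f in zip(vec, fvec))
        feats.append(max(0.0, s))       # ReLU activation
    return max(feats)                   # max-over-time pooling
```

A real model applies hundreds of such filters and feeds the pooled feature vector to a classifier; pooling over time is what makes the output independent of sentence length.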

Compressing Method of NetCDF Files Based on Sparse Matrix (희소행렬 기반 NetCDF 파일의 압축 방법)

  • Choi, Gyuyeun;Heo, Daeyoung;Hwang, Suntae
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.11
    • /
    • pp.610-614
    • /
    • 2014
  • Like many types of scientific data, results from simulations of volcanic ash diffusion take the form of a clustered sparse matrix in the netCDF format. Since these data sets are large, they incur high storage and transmission costs. In this paper, we suggest a new method that reduces the size of volcanic ash diffusion simulation data by converting the multi-dimensional index to a single dimension and keeping only the starting point and length of each run of consecutive zeros. The method compresses almost as well as ZIP compression but does not destroy the netCDF structure. It is expected to use storage space efficiently by reducing both data size and network transmission time.
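The compression idea, flatten the multi-dimensional index and store only the start and length of each zero run alongside the surviving values, can be sketched directly. The encoding layout below is an assumption for illustration, not the paper's exact on-disk format:

```python
def flatten_index(indices, shape):
    """Map a multi-dimensional index to a single row-major linear index."""
    flat = 0
    for i, dim in zip(indices, shape):
        flat = flat * dim + i
    return flat

def compress(flat):
    """Encode a flattened array as (zero_runs, values, length): each run
    of consecutive zeros becomes a (start, length) pair, and only the
    non-zero values are kept, in order."""
    zero_runs, values = [], []
    i, n = 0, len(flat)
    while i < n:
        if flat[i] == 0:
            start = i
            while i < n and flat[i] == 0:
                i += 1
            zero_runs.append((start, i - start))
        else:
            values.append(flat[i])
            i += 1
    return zero_runs, values, n

def decompress(zero_runs, values, n):
    """Rebuild the flat array from the zero runs and non-zero values."""
    out = [None] * n
    for start, length in zero_runs:
        for j in range(start, start + length):
            out[j] = 0
    it = iter(values)
    for j in range(n):
        if out[j] is None:
            out[j] = next(it)
    return out
```

For clustered sparse data the (start, length) pairs are few, so the encoding shrinks roughly in proportion to how much of the array is zero while staying trivially reversible.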