• Title/Summary/Keyword: Data Requirement

Search Results: 1,800

Definition of Security Requirement in Encryption (암호화에서 보안 요건 정의)

  • Shin, Seong-Yoon; Kim, Chang-Ho; Jang, Dai-Hyun; Lee, Hyun Chang; Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.187-188 / 2014
  • Encryption is the process of encoding messages or information in such a way that only authorized parties can read them. Encryption does not prevent hacking, but it reduces the likelihood that a hacker will be able to read the data that is encrypted. The confidentiality and integrity of important information (data) must be guaranteed during transmission and storage. Both one-way and two-way encryption are applied, and the safety of the encryption key must be guaranteed.
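As a rough illustration of the abstract's distinction between one-way and two-way encryption, the following minimal Python sketch (not from the paper) uses `hashlib` for the one-way case and the `cryptography` package's Fernet for the symmetric, two-way case.

```python
# Minimal sketch (assumption, not from the paper): one-way vs. two-way encryption.
import hashlib
from cryptography.fernet import Fernet  # symmetric (two-way) encryption

message = b"important data"

# One-way: a hash can be verified but cannot be reversed to recover the message.
digest = hashlib.sha256(message).hexdigest()

# Two-way: ciphertext can be decrypted by anyone holding the key,
# so the safety of the key itself must be guaranteed.
key = Fernet.generate_key()
cipher = Fernet(key)
token = cipher.encrypt(message)
assert cipher.decrypt(token) == message
```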


Telemedicine Cooperation Experience of Nurses Working in Remote Areas (의료취약지 근무 간호인력의 원격협진 수행 경험)

  • Chin, Young Ran; Kim, Hyun
    • Journal of Korean Academy of Rural Health Nursing / v.17 no.2 / pp.43-49 / 2022
  • Purpose: This study was conducted to explore the telemedicine cooperation experience of nurses working in remote areas. Methods: Focus group interviews were used to collect data. All interviews were recorded and transcribed, and content analysis was used to analyze the data. Results: Three main categories and seven sub-categories of telemedicine cooperation experience emerged: 1) the requirement for education on remote support services, 2) consideration of the recipients of medical support services and the characteristics of the area, and 3) difficulties in conducting telemedicine cooperation. Conclusion: Legal protection should be given priority, and it is necessary to select areas where remote cooperation is essential, to identify suitable subjects, and to reduce the work burden through a clear division of manpower and duties.

A Privacy-preserving Image Retrieval Scheme in Edge Computing Environment

  • Yiran, Zhang; Huizheng, Geng; Yanyan, Xu; Li, Su; Fei, Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.450-470 / 2023
  • Traditional cloud computing faces challenges such as huge energy consumption, network delay, and single points of failure. Edge computing is a typical distributed processing platform that includes multiple edge servers closer to the users; it is therefore more robust and can provide real-time computing services. Although outsourcing data to edge servers brings great convenience, it also brings serious security threats. In order to provide image retrieval while ensuring users' data privacy, and considering the distributed characteristics of the edge computing environment and its requirement for lightweight computing, we present a privacy-preserving image retrieval scheme in which two or more "honest but curious" servers retrieve images quickly and accurately without divulging the image content. Compared with other traditional schemes, the scheme consumes fewer computing resources and has higher computing efficiency, which makes it more suitable for resource-constrained edge computing environments. Experimental results show the algorithm has high security, retrieval accuracy, and efficiency.
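The abstract does not spell out the construction; as a loose illustration of how two non-colluding, "honest but curious" servers can each hold image data without seeing it, the sketch below additively secret-shares a feature vector. This is an assumed, generic technique, not the paper's retrieval protocol.

```python
# Illustrative sketch only: additive secret sharing of an image feature
# vector between two "honest but curious" servers. Neither share alone
# reveals the features; a full retrieval protocol over such shares needs
# machinery that is not shown here.
import numpy as np

MOD = 2 ** 31

def share_features(features):
    feats = np.asarray(features, dtype=np.int64) % MOD
    share_a = np.random.randint(0, MOD, size=feats.shape, dtype=np.int64)
    share_b = (feats - share_a) % MOD
    return share_a, share_b            # one share per edge server

def reconstruct(share_a, share_b):
    return (share_a + share_b) % MOD

a, b = share_features([12, 7, 301, 55])
assert np.array_equal(reconstruct(a, b), np.array([12, 7, 301, 55]))
```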

A Controllable Parallel CBC Block Cipher Mode of Operation

  • Ke Yuan; Keke Duanmu; Jian Ge; Bingcai Zhou; Chunfu Jia
    • Journal of Information Processing Systems / v.20 no.1 / pp.24-37 / 2024
  • To address the requirement for high-speed encryption of large amounts of data, this study improves the widely adopted cipher block chaining (CBC) mode and proposes a controllable parallel cipher block chaining (CPCBC) block cipher mode of operation. The mode consists of two phases: extension and parallel encryption. In the extension phase, the degree of parallelism n is determined as needed. In the parallel encryption phase, the n cipher blocks generated in the extension phase are used as initialization vectors to open n parallel encryption chains. The security analysis demonstrates that CPCBC mode can enhance resistance to byte-flipping attacks and padding oracle attacks if the parallelism n is kept secret, improving security over the traditional CBC mode. Performance analysis reveals that the scheme achieves an almost linear speedup when encrypting large amounts of data, so the encryption speed is significantly faster than that of the conventional CBC mode.
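Based only on the two-phase description above, a hedged Python sketch of the general idea might look like the following; the exact way chain IVs are derived and the way plaintext blocks are split across chains are assumptions here, not the paper's construction.

```python
# Hedged sketch of the two-phase CPCBC idea described in the abstract.
# The chain-IV derivation and the plaintext partitioning across chains
# are assumptions, not the paper's exact construction.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def cpcbc_encrypt_sketch(key, iv, plaintext, n):
    assert len(plaintext) % BLOCK == 0, "assume the input is already padded"
    # Extension phase: derive n chain IVs by CBC-encrypting n zero blocks.
    ext = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    chain_ivs = [ext.update(b"\x00" * BLOCK) for _ in range(n)]
    ext.finalize()
    # Parallel phase: block i of the message goes to chain i mod n, and each
    # chain runs ordinary CBC with its own IV, so the chains can run in parallel.
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    chains = [b"".join(blocks[i::n]) for i in range(n)]
    ciphertexts = []
    for chain_iv, chain in zip(chain_ivs, chains):
        enc = Cipher(algorithms.AES(key), modes.CBC(chain_iv)).encryptor()
        ciphertexts.append(enc.update(chain) + enc.finalize())
    return ciphertexts

parts = cpcbc_encrypt_sketch(os.urandom(16), os.urandom(16), os.urandom(64), n=4)
```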

Ensemble trading algorithm Using Dirichlet distribution-based model contribution prediction (디리클레 분포 기반 모델 기여도 예측을 이용한 앙상블 트레이딩 알고리즘)

  • Jeong, Jae Yong; Lee, Ju Hong; Choi, Bum Ghi; Song, Jae Won
    • Smart Media Journal / v.11 no.3 / pp.9-17 / 2022
  • Algorithmic trading, which uses algorithms to trade financial products, has the problem that its results are unstable because of the many factors at work in the market. To alleviate this problem, ensemble techniques that combine trading algorithms have been proposed. However, this ensemble approach has several problems. First, the trading algorithms included in the ensemble may not be selected so as to satisfy the minimum performance requirement (better than random), which is a necessary condition for the ensemble. Second, there is no guarantee that an ensemble model that performed well in the past will perform well in the future. To solve these problems, a method for selecting the trading algorithms included in the ensemble model is proposed as follows. Based on past data, we measure the contribution of the trading algorithms included in high-performing ensemble models. However, because the available past data are limited and contributions computed from them alone do not reflect their uncertainty, the contribution distribution is approximated using the Dirichlet distribution, and contribution values are sampled from this distribution to reflect the uncertainty. Based on the contribution distribution obtained from the past data, a Transformer is trained to predict the future contribution. Trading algorithms with high predicted future contributions are selected and included in the ensemble model. Experiments showed that the proposed ensemble method performs better than existing ensemble methods.
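The Dirichlet-based contribution sampling described above can be illustrated with a short Python sketch; the concentration scaling, the helper name `sample_contributions`, and the example scores are assumptions, and the Transformer prediction step is omitted.

```python
# Sketch (assumption): approximate the contribution distribution of the
# trading algorithms with a Dirichlet distribution and sample from it to
# reflect the uncertainty of limited historical data.
import numpy as np

def sample_contributions(past_scores, n_samples=1000, scale=10.0):
    """Return the mean and spread of sampled contribution vectors."""
    scores = np.asarray(past_scores, dtype=float)
    alpha = scale * scores / scores.sum() + 1e-3           # concentration per algorithm
    samples = np.random.dirichlet(alpha, size=n_samples)   # shape (n_samples, n_algos)
    return samples.mean(axis=0), samples.std(axis=0)

# Example scores are invented; higher past contribution -> higher concentration.
mean_contrib, contrib_std = sample_contributions([0.8, 0.5, 0.2, 0.6])
```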

Design and Implementation of Data Binder for Dynamic Data Delivery in Healthcare Service (헬스케어 서비스에서 동적인 데이터 전달을 위한 데이터 결합기 설계 및 구현)

  • Kang, Kyu-Chang; Lee, Jeun-Woo; Choi, Hoon
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.891-898 / 2009
  • This paper suggests a producer/consumer-based Data Binder that enables applications and biomedical devices developed by mutually different vendors to transfer data dynamically. Data Binder is implemented as a bundle on the OSGi platform, which provides a component-based programming model and a service-oriented operation architecture. Data Binder complements the OSGi WireAdmin service, which supports only static data delivery between a producer and a consumer of data. Data Binder normalizes an application requirement as an application descriptor and a device capability as a device descriptor, so that it enables dynamic data delivery by pairing data producers and consumers at runtime. Therefore, Data Binder can be used for connection management of the data link between a data producer and a data consumer in sensor-based application development. The goal of this paper is to facilitate healthcare service development by separating a data producer, such as a biomedical device, from a data consumer, such as a healthcare application.
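The pairing idea can be illustrated with a small, hypothetical Python sketch (the actual Data Binder is an OSGi bundle); the descriptor fields and the `bind` helper are illustrative assumptions, not the paper's API.

```python
# Hypothetical illustration (not the paper's OSGi bundle): pair data
# producers and consumers at runtime by matching device descriptors
# (capabilities) against application descriptors (requirements).
from dataclasses import dataclass

@dataclass
class DeviceDescriptor:          # what a biomedical device can produce
    device_id: str
    data_type: str               # e.g. "heart_rate"

@dataclass
class ApplicationDescriptor:     # what a healthcare application needs
    app_id: str
    required_type: str

def bind(devices, applications):
    """Return producer/consumer pairs whose data types match."""
    return [(d.device_id, a.app_id)
            for a in applications
            for d in devices
            if d.data_type == a.required_type]

pairs = bind([DeviceDescriptor("hrm-01", "heart_rate")],
             [ApplicationDescriptor("monitor-app", "heart_rate")])
```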

SENSOR DATA MINING TECHNIQUES AND MIDDLEWARE STRUCTURE FOR USN ENVIRONMENT

  • Jin, Cheng-Hao; Lee, Yong-Mi; Kim, Hi-Seok; Pok, Gou-Chol; Ryu, Keun-Ho
    • Proceedings of the KSRS Conference / 2007.10a / pp.353-356 / 2007
  • With advances in sensor technology, current research on the pertinent techniques is actively directed toward enabling USN computing services. For many applications using sensor networks, the incoming data are by nature high-speed, continuous, real-time, and infinite. Because of these characteristics, in some instances a finite-sized buffer cannot accommodate the entire incoming data stream, which leads to inevitable loss of data, and the requirement for fast processing makes a thorough investigation of the data impossible. In addition to the potential loss of data, incoming data in raw form may exhibit a high degree of complexity that evades simple query or alerting services for capturing and extracting useful information. Furthermore, as traditional mining techniques were developed to handle fixed, static historical data, they are not directly applicable to analyzing sensor data. In this paper, we (1) describe how three mining techniques (sensor data outlier analysis, sensor pattern analysis, and sensor data prediction analysis) fit the USN middleware structure, with their application to stream data in the ocean environment, and (2) propose a middleware structure for the USN environment adapted to the above mining techniques. This middleware structure includes sensor nodes, a sensor network common interface, a sensor data processor, a sensor query processor, a database, a sensor data mining engine, a user interface, and so on.
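As a minimal, assumed illustration of mining an unbounded sensor stream with a finite buffer (not the paper's mining engine), the following sketch flags outliers with a z-score over a sliding window.

```python
# Minimal sketch (assumption, not the paper's engine): streaming outlier
# detection over a bounded window, so an unbounded sensor stream never
# outgrows a finite buffer.
from collections import deque
import math

def stream_outliers(readings, window=100, threshold=3.0):
    """Yield (value, is_outlier) using a z-score over the last `window` readings."""
    buf = deque(maxlen=window)          # finite buffer for an infinite stream
    for value in readings:
        if len(buf) >= 10:
            mean = sum(buf) / len(buf)
            std = math.sqrt(sum((v - mean) ** 2 for v in buf) / len(buf))
            is_outlier = std > 0 and abs(value - mean) > threshold * std
        else:
            is_outlier = False          # not enough history yet
        buf.append(value)
        yield value, is_outlier
```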


An Advanced Embedded SRAM Cell with Expanded Read/Write Stability and Leakage Reduction

  • Chung, Yeon-Bae
    • Journal of IKEEE / v.16 no.3 / pp.265-273 / 2012
  • Data stability and leakage power dissipation have become critical issues in scaled SRAM design. In this paper, an advanced 8T SRAM cell that improves the read and write stability of the data storage elements while reducing the leakage current in the idle mode is presented. During the read operation, the bit-cell keeps the noise-vulnerable data 'low' node voltage close to the ground level, thus producing the near-ideal voltage transfer characteristics essential for robust read functionality. In the write operation, a negative bias on the cell facilitates changing the contents of the bit. Unlike the conventional 6T cell, there is no conflicting read and write requirement on sizing the transistors. In the standby mode, the built-in stacked device in the 8T cell reduces the leakage current significantly. The 8T SRAM cell implemented in a 130 nm CMOS technology demonstrates almost 100% higher read stability and 20% better write-ability at the 1.2 V typical condition, along with a 45% reduction in leakage power consumption compared to the standard 6T cell. The stability enhancement and leakage power reduction provided by the proposed bit-cell are confirmed under process, voltage, and temperature variations.

Software Design and Verification Method of Flight Data Recorder for Unmanned Aerial Vehicle (무인항공기용 비행자료 기록장치 소프트웨어 설계 및 검증 방안)

  • Yang, Seo-hee
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.163-172 / 2020
  • A flight data recorder (FDR) for accident investigation is required to comply with the EUROCAE ED-112 standard so that flight data can be restored even after exposure to the extreme conditions of an aircraft crash. Since the ED-112 standard defines general requirements for all aircraft, it is essential to analyze the detailed requirements for FDR software so that appropriate requirements can be applied selectively according to the configuration and operational concept of a specific aircraft. In this paper, the software requirements applicable to unmanned aircraft are analyzed and an FDR software design is proposed. A software verification method for each requirement is also presented to verify that the implemented software satisfies all requirements.

Improvement of OPW-TR Algorithm for Compressing GPS Trajectory Data

  • Meng, Qingbin; Yu, Xiaoqiang; Yao, Chunlong; Li, Xu; Li, Peng; Zhao, Xin
    • Journal of Information Processing Systems / v.13 no.3 / pp.533-545 / 2017
  • Massive volumes of GPS trajectory data bring challenges to storage and processing. These issues can be addressed by compression algorithms that reduce the size of the trajectory data. A key requirement for a GPS trajectory compression algorithm is to reduce the size of the trajectory data while minimizing the loss of information. Synchronized Euclidean distance (SED) is the error measure adopted by most of the existing algorithms. In order to further reduce the SED error, an improved version of the open window time ratio (OPW-TR) algorithm, called local optimum open window time ratio (LO-OPW-TR), is proposed. To make the SED error smaller, the anchor points are selected by calculating each point's accumulated synchronized Euclidean distance (ASED). A variety of error metrics are used to evaluate the algorithm. The experimental results show that the errors of our algorithm are smaller than those of existing algorithms in terms of SED and speed error under the same compression ratio.
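The synchronized Euclidean distance used as the error measure can be sketched as follows; this is the commonly used SED definition for (x, y, t) trajectory points, and the LO-OPW-TR anchor-selection logic itself is not reproduced.

```python
# Sketch of the synchronized Euclidean distance (SED) error measure for
# trajectory compression; points are (x, y, t) tuples. The LO-OPW-TR
# accumulated-SED anchor selection is not shown here.
def sed(anchor, last, p):
    """Distance between p and its time-synchronized position on the anchor->last segment."""
    (x1, y1, t1), (x2, y2, t2), (x, y, t) = anchor, last, p
    r = 0.0 if t2 == t1 else (t - t1) / (t2 - t1)    # time ratio along the segment
    sx, sy = x1 + r * (x2 - x1), y1 + r * (y2 - y1)  # synchronized (interpolated) point
    return ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5

# Example: a point sampled halfway in time but off the straight-line path.
error = sed((0.0, 0.0, 0), (10.0, 0.0, 10), (5.0, 2.0, 5))   # -> 2.0
```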