• Title, Summary, Keyword: 무결성 (integrity)


A Proposal for a Mobile Gallery Auction Method Using NFC-based FIDO and 2-Factor Technology and a Permissioned Distributed-Ledger Blockchain (NFC 기반 FIDO(Fast IDentity Online) 및 2 Factor 기술과 허가형 분산원장 블록체인을 이용한 모바일 갤러리 경매 방안 제안)

  • Noh, Sun-Kuk
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.129-135
    • /
    • 2019
  • Recently, with the increase in the number of NFC-equipped smartphones, studies have been conducted to improve the m-commerce process in the NFC-based mobile environment. Since authentication is important in mobile electronic payment, FIDO (Fast IDentity Online) and a 2-factor electronic payment system are applied. In addition, blockchains built on distributed ledgers have emerged as a representative technology of the fourth industrial revolution. In this study, for mobile gallery auctions in which an unspecified small group of traders participates using NFC-embedded terminals (smartphones), password-based authentication and biometric (fingerprint) authentication were applied to record the transaction details and ownership transfers of auction participants during electronic payment. To reduce cost and ensure the integrity of data related to the gallery auction, a private (permissioned) distributed-ledger blockchain was constructed and used. Domestic and foreign cases applying blockchain in the auction field were also surveyed and compared. Future work will study the implementation of blockchain networks and smart contracts, and the integration of blockchain with artificial intelligence, to apply the proposed method.
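The integrity property the abstract relies on can be illustrated with a minimal hash-linked ledger sketch. This is not the authors' system; the field names (`bidder`, `item`, `bid`) are hypothetical, and it only shows why tampering with a recorded bid is detectable once blocks are chained by hash.

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    """Build a ledger block whose hash covers the previous block's hash,
    so any later tampering with recorded auction data is detectable."""
    body = {"prev_hash": prev_hash, "transactions": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Recompute every block hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "transactions": block["transactions"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, [{"bidder": "A", "item": "gallery-1", "bid": 100}])
block2 = make_block(genesis["hash"], [{"bidder": "B", "item": "gallery-1", "bid": 120}])
chain = [genesis, block2]
print(verify_chain(chain))                 # True
chain[0]["transactions"][0]["bid"] = 999   # tamper with a past bid
print(verify_chain(chain))                 # False
```

A permissioned deployment would add access control and a consensus protocol on top of this linking; the hash chain alone only provides the tamper-evidence.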

Research Trends of SCADA Digital Forensics and Future Research Proposal (SCADA 디지털포렌식 동향과 향후 연구 제안)

  • Shin, Jiho;Seo, Jungtaek
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.6
    • /
    • pp.1351-1364
    • /
    • 2019
  • When SCADA systems are exposed to cyber threats and attacks, serious disasters can occur throughout society, because various security threats were not considered when the systems were built. A bigger problem is that vulnerabilities are difficult to patch quickly because of availability requirements. Digital forensics procedures and techniques are needed to analyze and investigate vulnerabilities in SCADA systems in order to respond quickly to cyber threats and prevent incidents. This paper addresses a SCADA forensics taxonomy and research trends for effective digital forensic investigation of SCADA systems. We did not find any research whose procedures and methodology go far beyond traditional digital forensics, but it is meaningful to develop an approach that exploits the characteristics of the SCADA system, or a dedicated tool for SCADA. Analysis techniques have mainly focused on PLCs and SCADA network protocols, because the cyber threats and attacks targeting SCADA are mostly related to them; such research seems likely to continue. Unfortunately, past research lacks discussion of 'evidence capability', such as the preservation and integrity of evidence extracted from SCADA systems.
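The 'evidence capability' gap the survey points out centers on integrity of acquired artifacts. A minimal sketch of the standard practice, hashing evidence at acquisition and re-verifying later, is shown below; the `plc_memory_dump` value is a hypothetical stand-in for an acquired image, not data from any real system.

```python
import hashlib

def acquisition_hash(data: bytes) -> str:
    """Hash evidence at acquisition time; the digest is recorded
    in the chain-of-custody documentation."""
    return hashlib.sha256(data).hexdigest()

def verify_evidence(data: bytes, recorded_digest: str) -> bool:
    """Re-hash the evidence and compare against the digest recorded
    at acquisition to prove it was not altered in the meantime."""
    return hashlib.sha256(data).hexdigest() == recorded_digest

plc_memory_dump = b"ladder-logic-image-v1"   # stand-in for an acquired PLC memory image
digest = acquisition_hash(plc_memory_dump)
print(verify_evidence(plc_memory_dump, digest))          # True
print(verify_evidence(plc_memory_dump + b"x", digest))   # False
```

For live SCADA acquisition the harder problem is that the source keeps changing while it is imaged, which is exactly why the survey calls for SCADA-specific procedures rather than this generic mechanism alone.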

Adaptive Consensus Bound PBFT Algorithm Design for Eliminating Interference Factors of Blockchain Consensus (블록체인 합의 방해요인 제거를 위한 Adaptive Consensus Bound PBFT 알고리즘 설계)

  • Kim, Hyoungdae;Yun, Jusik;Goh, Yunyeong;Chung, Jong-Moon
    • Journal of Internet Computing and Services
    • /
    • v.21 no.1
    • /
    • pp.17-31
    • /
    • 2020
  • With the rapid development of blockchain technology, attempts have been made to put it into practical use in various fields such as finance and logistics, and also in the public sector, where data integrity is very important. In defense operations as well, strengthening security and ensuring the complete integrity of the command communication network is crucial under the network-centric operational environment (NCOE), and for this purpose a command communication network applying a blockchain network needs to be constructed. However, blockchain technology to date cannot solve security issues such as the 51% attack. In particular, the Practical Byzantine Fault Tolerance (PBFT) algorithm, now widely used in blockchains, has no penalty factor for nodes that behave maliciously, and fails to reach consensus when malicious nodes make up more than 33% of all nodes. In this paper, we propose an Adaptive Consensus Bound PBFT (ACB-PBFT) algorithm that incorporates a penalty mechanism for anomalous behavior by combining a trust model, improving the security of PBFT, the main consensus algorithm of the blockchain.
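The 33% bound mentioned above comes from standard PBFT arithmetic: with n = 3f + 1 replicas the protocol tolerates at most f faulty nodes, and each quorum needs 2f + 1 matching messages. A short sketch of that arithmetic (not the paper's ACB-PBFT extension):

```python
def max_tolerated_faults(n: int) -> int:
    """PBFT tolerates f faulty nodes out of n = 3f + 1, i.e. f = (n - 1) // 3."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """A prepare/commit quorum needs 2f + 1 matching messages."""
    return 2 * max_tolerated_faults(n) + 1

def consensus_possible(n: int, faulty: int) -> bool:
    """Consensus is only guaranteed while faulty nodes stay within f (< 1/3 of n)."""
    return faulty <= max_tolerated_faults(n)

print(max_tolerated_faults(4))     # 1
print(quorum_size(4))              # 3
print(consensus_possible(9, 2))    # True
print(consensus_possible(9, 3))    # False: 3 of 9 (33%) malicious breaks PBFT
```

ACB-PBFT's contribution, per the abstract, is a trust-based penalty that pushes misbehaving nodes out of the voting set so the effective fault fraction stays below this bound.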

CFI Approach to Defend against GOT Overwrite Attacks (CFI(Control Flow Integrity) 적용을 통한 GOT(Global Offset Table) 변조 공격 방지 방안 연구)

  • Jeong, Seunghoon;Hwang, Jaejoon;Kwon, Hyukjin;Shin, Dongkyoo
    • Journal of Internet Computing and Services
    • /
    • v.21 no.1
    • /
    • pp.179-190
    • /
    • 2020
  • In Unix-like system environments, the GOT overwrite attack is one of the traditional control-flow hijacking techniques for exploiting software privileges. Several defenses have been proposed, and among them the Full RELRO (Relocation Read-Only) technique, which blocks GOT overwrites at runtime by mapping the GOT section read-only at program startup, has been known as the most effective. However, it entails loading delay, which limits its application to programs sensitive to startup performance, and it is not currently applied to libraries due to problems including chained loading delays caused by nested library dependencies. Also, many compilers, including LLVM, do not apply Full RELRO by default, so programs at runtime are still vulnerable to GOT attacks. In this paper, we propose a GOT protection scheme using the Control Flow Integrity (CFI) technique, currently recognized as the most suitable defense against code-reuse attacks. We implemented this scheme based on LLVM and applied it to the binutils-gdb program group to evaluate its security, performance, and compatibility. The CFI-based GOT protection scheme is difficult to bypass, fast, and compatible with existing library programs.
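The core CFI idea, checking that an indirect branch only reaches a pre-approved target, can be modeled at a high level. The sketch below is a conceptual Python analogy (the real scheme instruments compiled code via LLVM); all names, including the simulated GOT dictionary and the gadget function, are hypothetical.

```python
# Conceptual model of coarse-grained CFI: a call through a GOT slot may only
# transfer control to a target validated at instrumentation time.

def checked_indirect_call(target, allowed_targets, *args):
    """Abort if the resolved target is outside the valid-target set,
    as a CFI-instrumented call site would."""
    if target not in allowed_targets:
        raise RuntimeError("CFI violation: illegal control-flow target")
    return target(*args)

def legit_printf(msg):      # legitimate library function
    return f"printed: {msg}"

def attacker_gadget(msg):   # function an attacker redirects the GOT entry to
    return "shell spawned"

allowed = {legit_printf}                  # target set fixed before execution
got = {"printf": legit_printf}            # simulated writable GOT
print(checked_indirect_call(got["printf"], allowed, "hi"))  # printed: hi

got["printf"] = attacker_gadget           # simulated GOT overwrite
try:
    checked_indirect_call(got["printf"], allowed, "hi")
except RuntimeError as e:
    print(e)  # CFI violation: illegal control-flow target
```

Unlike Full RELRO, which makes the GOT unwritable, this approach leaves the GOT writable but validates the resolved target at every use, which is why it avoids the startup-time cost.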

Implementation of Secure System for Blockchain-based Smart Meter Aggregation (블록체인 기반 스마트 미터 집계 보안 시스템 구축)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of The Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2020
  • As an important building block of the smart grid environment, the smart meter provides real-time electricity consumption information to the utility. However, ensuring information security and privacy in the smart meter data aggregation process is a non-trivial task. Even though secure data aggregation for smart meters has received considerable attention from both academic and industry researchers in recent years, most of these studies are not secure against internal attackers or cannot provide data integrity. Besides, their computation costs are unsatisfactory because the bilinear pairing or hash-to-point operation is performed on the smart meter itself. Recently, blockchains, or distributed ledgers, have emerged as a technology drawing considerable interest from energy supply firms, startups, technology developers, financial institutions, national governments, and the academic community; in particular, blockchains are identified as having the potential to bring significant benefits and innovation to the electricity consumption network. This study suggests a distributed, privacy-preserving, and simple secure smart meter data aggregation system, backed by blockchain technology. Smart meter data are aggregated and verified by a hierarchical Merkle tree, with the consensus protocol supported by the practical Byzantine fault tolerance algorithm.
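A minimal sketch of the Merkle-tree aggregation step may help: readings are hashed pairwise up to a single root, so altering any one reading changes the root and is detected. This is an illustration of the general technique, not the paper's system; the reading strings are invented.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Aggregate meter readings into a Merkle root; any altered reading
    changes the root, so the aggregate is integrity-protected."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

readings = [b"meter1:3.2kWh", b"meter2:1.7kWh", b"meter3:4.5kWh", b"meter4:0.9kWh"]
root = merkle_root(readings)
print(root == merkle_root(readings))     # True: same data, same root
tampered = readings[:2] + [b"meter3:9.9kWh"] + readings[3:]
print(root == merkle_root(tampered))     # False: one altered reading is detected
```

In a hierarchical deployment, aggregators would publish only the root on the ledger, and a meter can prove membership of its reading with a logarithmic-size path of sibling hashes.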

A Quality-control Experiment Involving an Optical Televiewer Using a Fractured Borehole Model (균열모형시추공을 이용한 광학영상화검층 품질관리 시험)

  • Jeong, Seungho;Shin, Jehyun;Hwang, Seho;Kim, Ji-Soo
    • The Journal of Engineering Geology
    • /
    • v.30 no.1
    • /
    • pp.17-30
    • /
    • 2020
  • An optical televiewer is a geophysical logging device that produces continuous high-resolution full-azimuth images of a borehole wall using a light-emitting diode and a complementary metal-oxide-semiconductor image sensor, providing valuable information on subsurface discontinuities. Recently, borehole imaging logging has been applied in many fields, including ground subsidence monitoring, rock mass integrity evaluation, stress-induced fracture detection, and glacial annual-layer measurement in polar regions. Widely used commercial borehole imaging systems typically have limitations depending on equipment specifications, so it is necessary to clearly verify their scope of application while maintaining appropriate quality control for various borehole conditions. However, because images derived from an optical televiewer constitute in situ data, it is difficult to directly check their accuracy, implementation, and reliability. In this study, we designed and constructed a modular fractured borehole model with conditions similar to a real borehole environment and report previously unavailable results on reliable data acquisition and processing. We investigate sonde magnetometer accuracy, color realization, and fracture resolution, and suggest data processing methods to obtain accurate aperture measurements. The fractured-borehole-model experiment should enhance not only measurement quality but also the interpretation of high-resolution, reliable optical imaging logs.

Analysis of S/W Test Coverage Automated Tool & Standard in Railway System (철도시스템 소프트웨어 테스트 커버리지 자동화 도구 및 기준 분석)

  • Jo, Hyun-Jeong;Hwang, Jong-Gyu;Shin, Seung-Kwon;Oh, Suk-Mun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.11
    • /
    • pp.4460-4467
    • /
    • 2010
  • Recent advances in computer technology have made railway systems more dependent on software and changed them into computer systems. Hence, assuring the reliability and safety of the vital software running on embedded railway systems is becoming a very critical task. Accordingly, various software test and validation activities are highly recommended in the international standards for railway software. In this paper, we present an automated analysis tool and criteria for software test coverage in railway systems, together with implementation results. We developed a control-flow analysis tool that estimates test coverage as an important quantitative item for software safety verification in railway software. We also propose judgment criteria according to the railway Software Safety Integrity Level (SWSIL), based on an analysis of standards in other fields, so that the developed tool can be used widely at real railway industrial sites. The tool has the advantage of effectively measuring various test coverages, so we can expect it to advance railway software development and testing technology at real railway industrial sites in Korea.
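The kind of coverage measurement the tool automates can be sketched in miniature: trace which lines of a function execute under a test set and report the covered fraction. This is an illustrative stand-in, not the paper's tool, and `interlock` is a hypothetical toy function.

```python
import dis
import sys

def statement_coverage(func, test_inputs):
    """Run `func` on each argument tuple in `test_inputs` while tracing which
    source lines execute, then return the fraction of the function's
    executable lines that were covered."""
    executable = {line for _, line in dis.findlinestarts(func.__code__) if line}
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return len(executed & executable) / len(executable)

def interlock(speed, signal_green):        # toy vital function with two branches
    if signal_green and speed < 80:
        return "proceed"
    return "brake"

partial = statement_coverage(interlock, [(60, True)])            # one branch taken
full = statement_coverage(interlock, [(60, True), (100, True)])  # both branches taken
print(full > partial)  # True: the added test case raises measured coverage
```

Safety standards for the higher integrity levels typically demand stronger criteria than statement coverage (branch, condition, MC/DC), which is where a control-flow analysis tool earns its keep.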

eMRA: Extension of MRA Considering the Relationships Between MDR Concepts (eMRA: MDR의 개념간 관계성을 고려한 MRA 확장)

  • Joo, Young-Min;Kim, Jangwon;Jeong, Dongwon;Baik, Doo-Kwon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.3
    • /
    • pp.161-172
    • /
    • 2013
  • The metadata registry (MDR) is an international standard developed by ISO/IEC for exchanging and sharing data between databases. Many MDR systems are used in diverse domains such as medical services, bibliography, and environmental data for sharing and integrating data. However, these systems have individually different physical structures, because the MDR standard defines only the metamodel for registering and storing metadata. This causes heterogeneity between system structures and requires additional cost to maintain interoperability. ISO/IEC 13249-8 Metadata Registry Access (MRA) is being developed as an international standard to provide a consistent access facility to data stored in different metadata registries. However, MRA does not consider the relationships between the concepts (classes) defined in the MDR specification, which causes incorrect query results to be returned from MDR systems and requires additional modeling and query-rewriting cost to reflect each physical model. Therefore, this paper suggests eMRA, which considers the relationships between the concepts in MDR. Comparative evaluations show the advantages of eMRA: by being defined over the relationships between MDR concepts, it outperforms MRA in query modeling and referential integrity.

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Chun Jonghoon;Choi Dong-Hoon
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.102-116
    • /
    • 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces problems not only for the classification itself but also for the product database in general. A classification needs to meet diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently, without breaking consistency, when new products are introduced, existing products disappear, and classes are reorganized or specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To satisfy these requirements, a classification scheme should be dynamic enough to absorb such changes within acceptable time and cost. The classification schemes widely used today, such as UNSPSC and eClass, however, have many limitations with respect to these dynamic requirements. In this paper, we try to understand what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information implies plenty of semantics, such as class attributes like material, time, and place, as well as integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and describe a semantic classification model that satisfies the requirements for the dynamic features of product databases. It provides a means to express more semantics for product classes explicitly and formally, and organizes class relationships into a graph. We believe the model proposed in this paper satisfies the requirements and challenges raised by previous works.
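The idea of organizing class relationships into a graph, with semantics such as attributes attached to classes, can be sketched minimally. This is a generic illustration of attribute inheritance along class edges, not the paper's model; all class and attribute names are hypothetical.

```python
class ProductClass:
    """Node in a classification graph: a product class carrying its own
    attributes plus those inherited along edges to parent classes."""
    def __init__(self, name, attributes=None, parents=()):
        self.name = name
        self.own_attributes = set(attributes or ())
        self.parents = list(parents)

    def attributes(self):
        """Collect attributes from this class and, transitively, its parents."""
        attrs = set(self.own_attributes)
        for parent in self.parents:
            attrs |= parent.attributes()
        return attrs

goods = ProductClass("Goods", {"price"})
furniture = ProductClass("Furniture", {"material"}, parents=[goods])
chair = ProductClass("Chair", {"seat_height"}, parents=[furniture])
print(sorted(chair.attributes()))  # ['material', 'price', 'seat_height']
```

A graph (rather than a strict tree) permits a class to sit under multiple parents, which is one way a model can serve the diverse user views the abstract calls for.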

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but its rigidity in operation causes several problems. First, SSL encrypts all data transferred between a server and a client, placing a considerable burden on the CPU and lowering the performance of the security service in encrypted transactions. Second, SSL can be vulnerable to cryptanalysis because a fixed algorithm and key are used. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn the cryptography APIs (Application Program Interfaces) for the SSL protocol. Hence, we need a secure and convenient way to operate the SSL protocol and handle data efficiently while addressing these problems. In this paper, we propose an SSL component designed and implemented using the CBD (Component-Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Furthermore, because it can be reused, the SSL component improves productivity and reduces development cost, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component provides the SSL protocol service at the application layer. We first elicit the requirements, then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, enabling efficient data handling by encrypting/decrypting only selected data and improving usability by letting the user choose the data and mechanism. In conclusion, our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because its increase in processing time is lower than the SSL protocol's.
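The role of the integrity component described above can be illustrated with a minimal HMAC sketch: a tag is attached to outgoing data and verified on receipt. This is a generic Python analogy, not the paper's EJB implementation, and the class and payload names are hypothetical.

```python
import hashlib
import hmac
import os

class IntegrityComponent:
    """Toy stand-in for an integrity component: attaches an HMAC-SHA256
    tag to outgoing data and verifies it on receipt."""
    def __init__(self, key: bytes):
        self.key = key

    def protect(self, data: bytes):
        """Return the data together with its authentication tag."""
        tag = hmac.new(self.key, data, hashlib.sha256).digest()
        return data, tag

    def verify(self, data: bytes, tag: bytes) -> bool:
        """Recompute the tag; constant-time compare defeats timing attacks."""
        expected = hmac.new(self.key, data, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

key = os.urandom(32)                 # shared secret, e.g. from the handshake
comp = IntegrityComponent(key)
payload, tag = comp.protect(b"order: 3 units")
print(comp.verify(payload, tag))             # True
print(comp.verify(b"order: 9 units", tag))   # False: modification detected
```

Protecting only selected fields this way, rather than the whole stream, reflects the selective-encryption rationale the component-based design uses to cut CPU cost.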