• Title/Summary/Keyword: decision redundancy

Search results: 46

SOCMTD: Selecting Optimal Countermeasure for Moving Target Defense Using Dynamic Game

  • Hu, Hao;Liu, Jing;Tan, Jinglei;Liu, Jiang
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.10, pp.4157-4175, 2020
  • Moving target defense, as a 'game-changing' security technique for network warfare, realizes proactive defense by increasing network dynamics, uncertainty and redundancy. Selecting the best countermeasure from the candidates so as to maximize the defense payoff is one of the core issues. To improve the dynamic analysis of existing decision-making approaches, a novel approach to selecting the optimal countermeasure using game theory is proposed. Based on signaling game theory, a multi-stage adversary model for dynamic defense is established. The payoffs of the candidate attack-defense strategies are then quantified from the viewpoint of attack surface transfer, and the perfect Bayesian equilibrium is calculated. The attacker type is inferred through signal reception and recognition. Finally, a countermeasure mechanism for selecting the optimal defense strategy is designed around the tradeoff between defense cost and benefit in a dynamic network. A case study of attack-defense confrontation in a small-scale LAN shows that the proposed approach is correct and efficient.
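The core decision in this abstract, choosing the countermeasure that maximizes expected defense payoff under a belief about the attacker's type, can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation; the strategy names, costs and benefit values are hypothetical:

```python
# Illustrative sketch: pick the defense strategy maximizing expected payoff
# (benefit minus cost), weighted by a Bayesian belief over attacker types
# inferred from observed signals. All numbers below are made up.

def expected_payoff(strategy, belief):
    """Expected defense payoff of one strategy under a belief over attacker types."""
    return sum(p * (strategy["benefit"][atk] - strategy["cost"])
               for atk, p in belief.items())

def select_countermeasure(strategies, belief):
    """Return the candidate strategy with the highest expected payoff."""
    return max(strategies, key=lambda s: expected_payoff(s, belief))

# Hypothetical candidate countermeasures: benefit depends on the attacker type.
candidates = [
    {"name": "rotate-ip",    "cost": 2.0, "benefit": {"novice": 8.0, "advanced": 3.0}},
    {"name": "shuffle-port", "cost": 1.0, "benefit": {"novice": 4.0, "advanced": 2.0}},
    {"name": "full-morph",   "cost": 6.0, "benefit": {"novice": 9.0, "advanced": 9.0}},
]

# Posterior belief about the attacker type after signal recognition.
belief = {"novice": 0.3, "advanced": 0.7}
best = select_countermeasure(candidates, belief)
```

Against a mostly advanced attacker, the expensive but type-robust countermeasure wins here; shifting the belief toward the novice type flips the choice toward the cheaper options.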

A Study on Fault-Tolerant System Construction Algorithm in General Network (일반적 네트워크에서의 결함허용 시스템 구성 알고리즘에 관한 연구)

  • 문윤호;김병기
    • The Journal of Korean Institute of Communications and Information Sciences, v.23 no.6, pp.1538-1545, 1998
  • System reliability has been a major concern since the beginning of the electronic digital computer era. One modest way of increasing reliability is to design a fault-tolerant system. This paper proposes a construction mechanism for fault-tolerant systems on general graph topologies. The system has several spare nodes. To date, fault-tolerant system design has been applied only to loop and tree networks, which are very limited cases. The new algorithm in this paper can be applied to any kind of topology without such restrictions. The algorithm consists of several steps: minimal-diameter spanning tree extraction, optimal node decision, original connectivity restoration, and finally redundancy graph construction.
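One plausible way to realize the first step listed above, minimal-diameter spanning tree extraction, is to root a BFS tree at a graph center (a node of minimum eccentricity); such a tree's diameter is at most twice the graph's radius. A minimal sketch, assuming a connected, unweighted graph (the paper's exact construction may differ):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def center_spanning_tree(adj):
    """BFS spanning tree rooted at a graph center (assumes a connected graph).
    Its diameter is at most twice the graph's radius."""
    # Eccentricity of each node = distance to its farthest node.
    ecc = {u: max(bfs_dist(adj, u).values()) for u in adj}
    root = min(ecc, key=ecc.get)
    tree, seen, q = [], {root}, deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                q.append(v)
    return root, tree
```

For a small graph such as `{0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}`, the center is node 1 and the returned tree has the required n-1 edges.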


Design of video encoder using Multi-dimensional DCT (다차원 DCT를 이용한 비디오 부호화기 설계)

  • Jeon, S.Y.;Choi, W.J.;Oh, S.J.;Jeong, S.Y.;Choi, J.S.;Moon, K.A.;Hong, J.W.;Ahn, C.B.
    • Journal of Broadcast Engineering, v.13 no.5, pp.732-743, 2008
  • In H.264/AVC, a 4×4 block transform is used for intra and inter prediction instead of an 8×8 block transform. Using small-block-size coding, H.264/AVC obtains high temporal prediction efficiency; however, it is limited in exploiting spatial redundancy. Motivated by these points, we propose a multi-dimensional transform that achieves both accurate temporal prediction and effective use of spatial redundancy. In preliminary experiments, the proposed multi-dimensional transform achieves higher energy compaction than the 2-D DCT used in H.264. We designed an integer-based transform and quantization coder for the multi-dimensional coder. Moreover, several additional methods for the multi-dimensional coder are proposed: cube forming, scan order, mode decision and parameter updating. The context-based adaptive variable-length coding (CAVLC) used in H.264 was employed for the entropy coder. Simulation results show that the performance of the multi-dimensional codec is similar to that of H.264 at lower bit rates, although the rate-distortion curves of the multi-dimensional DCT measured by entropy and by the number of non-zero coefficients show remarkably higher performance than those of H.264/AVC. This implies that an entropy coder optimized to the statistics of multi-dimensional DCT coefficients and improved rate-distortion operation are needed to take full advantage of the multi-dimensional DCT. Many issues remain as future work to improve the coding efficiency of the multi-dimensional coder over H.264/AVC.
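A multi-dimensional DCT is separable: applying a 1-D DCT along each axis of a video cube in turn gives the full transform, and for smooth content the energy collapses into a few low-frequency coefficients. A minimal floating-point sketch (the paper uses an integer-based transform; this is only an illustration of the separability idea):

```python
import math

def dct1d(x):
    """Naive orthonormal 1-D DCT-II (O(n^2), for illustration only)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct3d(cube):
    """Separable 3-D DCT of a time x height x width cube (nested lists):
    a 1-D DCT along each of the three axes in turn."""
    # Axis 2 (innermost: width).
    cube = [[dct1d(row) for row in plane] for plane in cube]
    # Axis 1 (height): transpose within each plane, transform, transpose back.
    cube = [[dct1d(list(col)) for col in zip(*plane)] for plane in cube]
    cube = [[list(row) for row in zip(*plane)] for plane in cube]
    # Axis 0 (time): transform across planes at each (y, x) position.
    n0, n1, n2 = len(cube), len(cube[0]), len(cube[0][0])
    for y in range(n1):
        for x in range(n2):
            col = dct1d([cube[t][y][x] for t in range(n0)])
            for t in range(n0):
                cube[t][y][x] = col[t]
    return cube

# A constant 2x2x2 cube: all energy moves into the single DC coefficient.
flat_cube = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
coeffs = dct3d(flat_cube)
```

For the constant cube the DC term equals sqrt(8) and every other coefficient is (numerically) zero, which is the energy-compaction behavior the abstract measures.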

Aggregated Encoder Control Exploiting Interlayer Statistical Characteristics for Advanced Terrestrial-DMB (지상파 DMB 고도화망에서 계층간 통계적 특성을 이용한 통합 부호기 제어)

  • Kim, Jin-Soo;Park, Jong-Kab;Seo, Kwang-Deok;Kim, Jae-Gon
    • Journal of the Korea Institute of Information and Communication Engineering, v.13 no.8, pp.1513-1526, 2009
  • The scalable video coding (SVC) scheme can be used effectively to reduce redundancy and improve coding efficiency, but it requires very high computational complexity. To accelerate the successful standardization and commercialization of the Advanced Terrestrial-DMB service, it is necessary to overcome this problem. To this end, in this paper, we propose an efficient aggregated encoder control algorithm that performs better than the conventional control scheme. Computer simulation results show that the proposed scheme performs up to about 0.3 dB better than the conventional scheme. Additionally, based on this control scheme, we propose a fast mode decision method that constrains the redundant coding modes based on the statistical properties of the quantization parameter in the spatially scalable encoder. Computer simulations show that the proposed control schemes reduce the heavy computational burden by up to 12% compared to the conventional scheme, while keeping the objective visual quality very high.
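The fast mode decision idea described above, skipping coding modes that are statistically unlikely at the current quantization parameter, can be sketched as follows. The mode names, QP bands and probability table are hypothetical stand-ins, not the paper's measured statistics:

```python
# Hypothetical mode-usage statistics: probability of each coding mode being
# chosen within a given quantization-parameter (QP) band.
MODE_STATS = {
    "low_qp":  {"intra4": 0.45, "intra16": 0.05, "inter": 0.50},
    "high_qp": {"intra4": 0.05, "intra16": 0.40, "inter": 0.55},
}

def candidate_modes(qp, threshold=0.10):
    """Keep only the modes that are statistically likely at this QP,
    pruning the rest before any rate-distortion evaluation."""
    band = "low_qp" if qp < 30 else "high_qp"
    return [m for m, p in MODE_STATS[band].items() if p >= threshold]

def fast_mode_decision(qp, rd_cost):
    """Evaluate the rate-distortion cost only over the constrained set."""
    return min(candidate_modes(qp), key=rd_cost)
```

The complexity saving comes from `candidate_modes` shrinking the set of modes whose rate-distortion cost must actually be computed, which is where an encoder spends most of its time.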

The Intelligent Clinical Laboratory as a Tool to Increase Cancer Care Management Productivity

  • Mohammadzadeh, Niloofar;Safdari, Reza
    • Asian Pacific Journal of Cancer Prevention, v.15 no.6, pp.2935-2937, 2014
  • Studies of the causes of cancer, early detection, prevention and treatment need accurate, comprehensive, and timely cancer data. The clinical laboratory provides important cancer information needed by physicians, which influences clinical decisions regarding treatment, diagnosis and patient monitoring. Poor communication between health care providers and clinical laboratory personnel can lead to medical errors and wrong decisions in providing cancer care. Because of the key impact of laboratory information on cancer diagnosis and treatment, the quality of the tests, lab reports, and appropriate lab management are very important. A laboratory information management system (LIMS) can play an important role in diagnosis, provide fast and effective access to cancer data, decrease redundancy and costs, and facilitate the integration and collection of data from different types of instruments and systems. In spite of its significant advantages, LIMS is limited by factors such as problems in adapting to new instruments that may change existing work processes. Applying intelligent software alongside existing information systems not only removes these restrictions but also offers important benefits, including adding non-laboratory-generated information to reports, facilitating decision making, and improving the quality and productivity of cancer care services. Laboratory systems must be flexible to change and capable of developing and benefiting from intelligent devices. Intelligent laboratory information management systems need to benefit from informatics tools and the latest technologies, such as open source software. The aim of this commentary is to survey the application, opportunities and necessity of the intelligent clinical laboratory as a tool to increase cancer care management productivity.

A study on AHP application of selection method for the best treatment technology of public sewage treatment works (공공하수처리시설 공법 선정을 위한 계층화분석법 적용방안 고찰)

  • Jeong, Dong-Hwan;Cho, Yangseok;Ahn, Kyunghee;Choi, In-Cheol;Chung, Hyen-Mi;Lee, Jaekwan
    • Journal of Korean Society of Water and Wastewater, v.30 no.4, pp.427-440, 2016
  • Various kinds of processes are used in Public Sewage Treatment Works (PSTWs) in order to meet water quality criteria and the TMDL in the watershed. The performance of the existing processes at PSTWs depends on influent characteristics, effluent quality targets, the amount of sludge produced, power cost and other factors. At present, the Selection Guideline for the Available Treatment Process of PSTWs is used for process selection in Korea, but the guideline has some problems regarding redundancy of the assessment factors and complexity of the assessment procedure. In this study, we performed a test application of AHP for process selection at PSTWs, whose purpose is to simplify the assessment factors (pollutant removal amount, sludge generation, electricity consumption, stability of operation, convenience of maintenance, ease of applying the existing process, installation cost, and operating cost) grouped under environmental, technical and economic criteria. According to the study, the PSTW selection procedure guideline can be improved by applying the AHP method.
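The AHP machinery behind such a study is standard: a pairwise comparison matrix over the criteria yields priority weights, and a consistency ratio checks whether the judgments are coherent. A minimal sketch using the common row-geometric-mean approximation (the example matrix is hypothetical, not the study's data):

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights via the row geometric mean."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w):
    """Saaty consistency ratio; below 0.1 is conventionally acceptable."""
    n = len(M)
    # lambda_max estimated by averaging (M w)_i / w_i over the rows.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random-index values
    return ci / RI[n]

# Hypothetical pairwise comparisons among environmental, technical and
# economic criteria on Saaty's 1-9 scale.
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w = ahp_weights(M)
```

For this matrix the environmental criterion dominates (weight about 0.63) and the consistency ratio is well below 0.1, so the judgments would pass Saaty's consistency check.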

Decision Analysis System for Job Guidance using Rough Set (러프집합을 통한 취업의사결정 분석시스템)

  • Lee, Heui-Tae;Park, In-Kyoo
    • Journal of Digital Convergence, v.11 no.10, pp.387-394, 2013
  • Data mining is the process of discovering hidden, non-trivial patterns in large numbers of data records so that they can be used effectively for analysis and forecasting. Because hundreds of variables give rise to a high level of redundancy and dimensionality with high time complexity, they are more likely to exhibit spurious relationships, and even the weakest relationships will be highly significant by any statistical test. Cluster analysis is therefore a main task of data mining: grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. In this paper, a system implementation is presented that introduces a new definition based on information-theoretic entropy and analyses the analogous behaviors of the objects at hand, so as to address the measurement of uncertainty in the classification of categorical data. The source data were taken from a survey on job guidance administered to high school students in Pyeongtaek. We show how variable-precision, information-entropy-based rough sets can be used to group the students in each section. It is shown that the proposed method yields a more exact classification than the conventional method when there are more than 10 attributes, and that it is more effective for student job guidance.
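The entropy measures underlying such an approach are easy to state: the entropy of the decision attribute quantifies overall uncertainty, and the conditional entropy given a condition attribute measures how much uncertainty that attribute's partition removes. A minimal sketch (illustrative toy data, not the survey's):

```python
from collections import Counter
import math

def entropy(values):
    """Shannon entropy (bits) of a categorical value sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def conditional_entropy(attr, decision):
    """H(decision | attr): uncertainty remaining after partitioning the
    objects by the condition attribute's values."""
    n = len(attr)
    groups = {}
    for a, d in zip(attr, decision):
        groups.setdefault(a, []).append(d)
    return sum(len(g) / n * entropy(g) for g in groups.values())
```

An attribute whose partition fully determines the decision has conditional entropy 0, while an irrelevant attribute leaves the decision entropy unchanged; ranking attributes by this gap is one way to select the significant ones.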

Building an Ontology-Based Diagnosis Process of Crohn's Disease Using the Differentiation Rule (감별 규칙을 이용한 온톨로지 기반 크론병 진단 프로세스 정의)

  • Yoo, Dong Yeon;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering, v.7 no.11, pp.443-450, 2018
  • Crohn's disease, which has recently been increasing in Korea, may appear throughout the gastrointestinal tract and cause various symptoms. It is especially difficult to diagnose because several of its symptoms resemble those of other ulcerative colonic diseases. Thus, some studies are underway to distinguish two or more similar diseases. However, previous studies have not described a procedural diagnosis process, which may lead to over-examination. Therefore, we propose a diagnosis process for Crohn's disease based on an analysis of redundancy, sequential linkage and decision points in its diagnosis, so that ulcerative colonic diseases with symptoms similar to Crohn's disease can be identified. Finally, by defining the proposed process-oriented associations as an ontology, we can distinguish the colon diseases whose symptoms resemble Crohn's disease and help diagnose it effectively. Applying the proposed ontology to 5 cases showed that more accurate diagnosis was possible, and in one case a diagnosis could be made with fewer tests.

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems, v.14 no.6, pp.1445-1456, 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide an unmanned ground vehicle (UGV) with driving-awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering out redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and colored particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the applied computer graphics and image processing algorithms in parallel.
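A common form of the redundancy removal mentioned in the pre-processing step is voxel-grid filtering: multiple LiDAR returns falling into the same spatial cell are collapsed into one representative point. A minimal CPU sketch of that idea (the paper's exact redundancy removal principle and its GPU implementation may differ):

```python
import math

def voxel_filter(points, voxel=0.5):
    """Collapse 3-D points that fall into the same voxel into their centroid,
    discarding redundant returns before registration."""
    cells = {}
    for p in points:
        key = tuple(int(math.floor(c / voxel)) for c in p)
        cells.setdefault(key, []).append(p)
    # One centroid per occupied voxel.
    return [tuple(sum(c) / len(ps) for c in zip(*ps)) for ps in cells.values()]
```

Two nearby returns merge into one centroid while distant points survive untouched, so the downstream segmentation and reconstruction stages see a much smaller, deduplicated cloud. On a GPU the per-point voxel-key computation is embarrassingly parallel, which is why this step benefits from the hybrid design.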

Towards Group-based Adaptive Streaming for MPEG Immersive Video (MPEG Immersive Video를 위한 그룹 기반 적응적 스트리밍)

  • Jong-Beom Jeong;Soonbin Lee;Jaeyeol Choi;Gwangsoon Lee;Sangwoon Kwak;Won-Sik Cheong;Bongho Lee;Eun-Seok Ryu
    • Journal of Broadcast Engineering, v.28 no.2, pp.194-212, 2023
  • The MPEG immersive video (MIV) coding standard achieves high compression efficiency by removing inter-view redundancy and merging the residuals of immersive video, which consists of multiple texture (color) and geometry (depth) pairs. Grouping views that represent similar spaces enables quality improvement and selective streaming, but this has not been actively discussed recently. This paper introduces an implementation of group-based encoding in the recent version of the MIV reference software, provides experimental results on the optimal views and videos per group, and proposes a method for deciding the optimal number of videos for global immersive video representation using the portion of residual videos.
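One simple way to read the final decision criterion is as a coverage rule: keep adding the videos carrying the largest shares of residual pixels until a target portion of the scene is covered. This is only an interpretive sketch with made-up numbers, not the paper's actual decision method:

```python
# Illustrative sketch: choose the smallest number of encoded videos whose
# cumulative share of residual pixels reaches a coverage target.

def optimal_video_count(residual_portions, coverage=0.95):
    """residual_portions: per-video share of residual pixels (sums to 1)."""
    total, count = 0.0, 0
    for p in sorted(residual_portions, reverse=True):
        total += p
        count += 1
        if total >= coverage:
            break
    return count
```

With residual shares of 0.5, 0.3, 0.15 and 0.05 and a 90% target, three videos suffice, so the fourth can be dropped from the global representation.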