1. Introduction
The information security landscape has undergone a tremendous shift over the last decade. Attacks whose effect depends mainly on brute-force capability and botnet scale have been pushed into the background. Instead of selecting targets blindly, adversaries tend to adopt complicated, invasive actions to achieve their goals with information-driven precision [1]. These changes have given rise to emerging threats and sophisticated attackers such as Advanced Persistent Threats (APT) and Determined Human Adversaries (DHA). Dealing with such sophisticated environments and determined adversaries requires a precise environmental model, an interactive adversary pattern and sound risk analysis for effective prevention.
Manual security analysis is error-prone and tedious, and gradually becomes infeasible for large and complicated networks. Paradigms such as attack graphs [2,3] and attack trees [4,5] have therefore been widely adopted by researchers to build security models and determine attack scenarios that could lead to damage. With the aid of such methods, security analysts can obtain a concise representation of all the paths an attacker may follow to compromise a security goal by leveraging dependencies among known vulnerabilities [6]. However, while attack graphs can reveal threats, they do not directly provide security hardening solutions. In practice, it is almost infeasible to remove all identified threats, since the system administrator always has to work within a fixed set of budget constraints. Moreover, under no circumstances should security mitigation measures affect the normal operation of the network infrastructure. Therefore, the crucial question in defending against such nontrivial intrusions is: which vulnerabilities should be removed, at the least cost, to mitigate security risk to acceptable levels without disrupting normal services? [7]
To make this problem tractable, considerable effort has been spent in this context. Research has sought a trade-off between the cost of securing a chosen subset of vulnerabilities and the residual damage caused to the network if certain weak points are left unpatched [7,8].
These works motivate us to study the optimal security hardening problem. In particular, we first introduce a series of risk metrics and augment the attack graph into a risk flow attack graph. This extension not only preserves the ability of the traditional attack graph to represent network states and vulnerabilities, but also depicts the adversary’s status transition sequence. By encoding the attack pattern and the defending strategy into binary sequences, we employ a multi-objective genetic algorithm to derive hardening solutions. The performance of a solution is measured by a pair of risk and cost functions whose values are derived from the risk flow attack graph metrics. This implementation enables us to revisit network risk by following the risk path of originating, transferring, redistributing and converging. We observe that this approach achieves an optimal set of security hardening strategies that takes full account of residual risk and enhancement cost.
The rest of this article is organized as follows. Section 2 gives an overview of some related work. Section 3 describes a risk flow attack graph model to illustrate our method. Section 4 presents our approach of using multi-objective genetic algorithm for optimal security hardening in detail. The experimental results are presented in Section 5. Finally, Section 6 summarizes this paper and discusses future work.
2. Related Work
The network security hardening problem has been studied extensively and in many different ways. Among these explorations, researchers have applied different variants of attack graphs. In [7,9], an exploit dependency graph was used to assign initial network conditions, represent a given set of critical resources as a logical proposition over these initial conditions, and compute actual sets of hardening measures that guarantee the safety of the given critical resources. Their approaches worked from the new perspective of initial conditions rather than independent exploits. Refs. [10,11] focused on achieving cost-effective security controls by using logical attack graphs to represent network observations. They identified the vulnerabilities existing in the network and explored their causal relationships. Similarly, [12-14] concentrated on accurately measuring risk for enterprise networks. They took the perspective of security defenders and sought to select the most effective countermeasures against multi-step network penetration, such as zero-day exploits and client-side attacks.
Ref. [15], on the other hand, took not only the defender’s cost but also the attacker’s strategy into account. Their approach modeled the attacker-defender interaction as an arms race and explored how security controls can be placed in a network to yield the maximum return on investment. They also developed a multi-objective formulation of the vulnerability-patching problem, taking advantage of an attack tree model and using an evolutionary algorithm to search for solutions.
Because the security hardening issue is influenced by many real-world elements, such as residual damage, network reliability and enhancement payoff, it is natural to model it as a multi-objective problem. Ref. [16] provided such a formulation and emphasized that fixing specific vulnerabilities may introduce additional potential damage to the network; a genetic algorithm was then adopted to choose the minimal-cost security profile providing the maximal vulnerability coverage. Ref. [17] demonstrated a model of quantitative risk analysis and deployed a genetic algorithm to search for the best combination of countermeasures while considering multiple risk factors. Apart from genetic algorithms, other avenues have been explored. Frigault et al. [18] introduced security metrics into attack graphs to measure network security risk using dynamic Bayesian networks, and Xie et al. [19] extended their work by capturing uncertainties in attacker actions, intrusion alerts, etc. Zhang et al. [20] adopted a Hidden Markov Model (HMM) instead; they constructed a quantitative model to specify cost factors and designed heuristic algorithms for automated inference.
In summary, researchers have explored various ways of modeling vulnerabilities and analysing security, yet some problems remain unsettled. For example, some approaches define risk functions over static network metrics, which are usually based on empirical statistics. Furthermore, most previous works aim at a single optimal hardening solution, which limits the applicability of such methods: when facing diversified security goals, they have to be modified to meet each hardening target.
Our work differs fundamentally from previous works in that it adopts the risk flow attack graph to represent and simplify network observations rather than using complicated attack trees or attack graphs. It also develops a different way of calculating network risk by following the risk state transitions of two attacker prototypes in the attack graph. This calculation dynamically relates network risk to the implementation of security precautions. Then, by relating the objective functions of a multi-objective genetic algorithm to the risk function, our approach arrives at a Pareto optimal set of hardening strategies. This optimal set helps security analysts filter out inferior solutions and reduces the number of alternatives by orders of magnitude. Moreover, the optimal set accommodates different security goals and hardening constraints, because defenders can flexibly choose specific solutions from it to address practical security problems. Compared with previous works that also adopt genetic algorithms, such as [8], [17] and [20], our method has three major differences. First, our multi-objective model is constructed on the basis of the simplified risk flow attack graph rather than complex attack trees or attack graphs. Second, we introduce a fitness-based crowding distance sorting scheme to facilitate the global convergence of the genetic algorithm. Third, we employ an elitist strategy, which helps maintain the diversity of individuals and guarantees the efficiency of the algorithm.
3. Model and Background
As noted above, network environments are becoming more and more sophisticated, and attackers tend to adopt multiple actions to achieve their goals. Amid these prominent changes, what remains unaltered is that vulnerabilities are still one of the most favored means for attackers to exploit when penetrating a network. In this section, we describe how to model and quantify network vulnerabilities as well as attack patterns using the risk flow attack graph (RFAG). For simplicity, we focus only on the application rather than the generation of the graph. We also introduce some basic definitions and the approach to measuring security risk in the risk flow attack graph. These elaborations are prerequisites for Section 4, where we show how to find an efficient hardening method.
We consider the hypothetical network shown in Fig. 1. The setup consists of three servers and an internal host. The Database Server (DS) and the File Server (FS) are located behind the internal firewall, as is the internal user on Host 1. Each dashed box on a server icon lists the services an external user can use to communicate with that server; other unauthorized accesses are blocked by the external firewall policy. In addition, we assume that the adversary on Host 0 intends to compromise the DS.
Fig. 1. Hypothetical network model
Formally, a risk flow attack graph can be described as a tuple G = (V, E, τ, μ, f), where:
1) V: V = Vs∪Vg∪Vm is the set of nodes. Vs and Vg denote the initial capabilities and the ultimate goal of an attacker, respectively, and Vm is the set of intermediate nodes representing the status of individual network assets. Each node takes a value of true or false, indicating whether the corresponding asset has been compromised by an attacker.
2) E: E ⊂ (Vs∪Vm)×(Vm∪Vg) is the set of edges in the graph. Each edge maps to a vulnerability that can be exploited. A binary value in {0,1} is assigned to indicate whether the corresponding vulnerability has been used by an attacker to penetrate the network: ‘1’ stands for a successful exploitation and ‘0’ otherwise.
3) τ: τ ⊆ V×V. An ordered pair (Vpre, Vpost)∈τ if there exists an edge ε such that (Vpre∈pre(ε))∧(Vpost∈post(ε)).
4) μ: μ: E→Vuls is the mapping from an edge to its corresponding vulnerability. The metrics of the vulnerability help determine the risk on that edge and of the network.
5) f: f is the risk function defined on exploit edges; the calculation of f depends on the specific application scenario. Our definition of the risk function is given later in this section.
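To make the tuple concrete, the following minimal Python sketch shows one possible in-memory representation of a risk flow attack graph; the class name, field names and the placeholder risk function are our own illustrative choices, not part of the formal model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set, Tuple

@dataclass
class RFAG:
    """Minimal risk flow attack graph container (illustrative sketch only)."""
    v_s: Set[str]                      # Vs: initial attacker capabilities
    v_g: Set[str]                      # Vg: ultimate attack goals
    v_m: Set[str]                      # Vm: intermediate asset-status nodes
    edges: Dict[Tuple[str, str], str]  # E with mapping mu: (pre, post) -> vulnerability id
    edge_state: Dict[Tuple[str, str], int] = field(default_factory=dict)  # 0/1 exploit state
    risk: Callable[[Tuple[str, str]], float] = lambda e: 0.0  # placeholder risk function f

    @property
    def nodes(self) -> Set[str]:
        """V = Vs ∪ Vm ∪ Vg."""
        return self.v_s | self.v_m | self.v_g

    def tau(self) -> Set[Tuple[str, str]]:
        """Ordered precondition/postcondition pairs induced by the edges."""
        return set(self.edges)
```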
For better illustration, we show the risk flow attack graph for the hypothetical network model in Fig. 2, which is similar to that in [20]. Some modeling choices have been made, such as defining two typical attacker prototypes, emphasizing exploit edges and constructing the risk function.
Fig. 2. Risk flow attack graph for the hypothetical network
The corresponding nodes and edges are defined in Table 1 to detail the sample scenario.
Table 1. Definition of nodes and edges
As depicted in Fig. 2, exploits appear as edges and network conditions as nodes. As an example of an attack path, AP1=[e13,e34,e46,e68|v1,v3,v4,v6,v8] consists of four exploits and five conditions, including the initial condition v1 and the ultimate goal v8. Following this attack path, the attacker (Host 0) first establishes a trust relationship and gains root privilege on the WS by exploiting an ssh vulnerability on it, then gains user privilege on Host 1 via a remote ssh attack. After that, a local buffer overflow vulnerability allows the attacker to obtain root privilege on Host 1. Finally, the goal of root privilege on the DS is achieved by compromising a remote ftp connection vulnerability. In total, there are three feasible attack paths, which can be generated using existing algorithms [2]:
Generally, security risk remains concealed within these feasible attack paths. However, when an exploit occurs, it brings the latent risk to the surface, causing a series of safety problems. In the RFAG model, we represent attackers’ behavior by the binary values of the exploit edges. Under normal circumstances, the states of the exploit edges are set to ‘0’ and the security risk is implicit. When the network is under attack, adversaries choose certain attack paths according to their own preferences. These attacks activate the relevant exploit edges and set their states to ‘1’, which means that the implicit risk becomes explicit and does actual harm to the network. Moreover, this risk always originates, transfers, redistributes and converges along the attack path.
Although the RFAG gives an intuitive view of attack paths, an optimal solution to harden the network is still not apparent from the attack graph itself. To address this problem, we give several requisite definitions in this section to help find an efficient hardening method.
3.1 Attack Path
Generally, there is at least one feasible attack path in an attack graph, leading from the adversary’s initial status to the ultimate attack goal. Formally, an attack path APk is an ordered set of condition nodes and exploit edges in which consecutive conditions are linked by the exploit edge between them.
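One plausible way to formalize this, consistent with the example path AP1 in Fig. 2 and offered only as a sketch, is:

```latex
AP_k = \big[\, e_{i_1 i_2},\, e_{i_2 i_3},\, \ldots,\, e_{i_n i_{n+1}} \;\big|\; v_{i_1},\, v_{i_2},\, \ldots,\, v_{i_{n+1}} \,\big],
\qquad v_{i_1} \in V_s,\; v_{i_{n+1}} \in V_g,\; (v_{i_j}, v_{i_{j+1}}) \in \tau .
```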
3.2 Attacker Prototype
As depicted in Fig. 2, there are two attacker prototypes in the risk flow attack graph: the divergence prototype and the convergence prototype.
Table 2. Logic truth table of the divergence prototype
As can be inferred from the logic truth table, the logical relationship between a node vb and its parent node va in a divergence prototype is vb = va∧eab. Accordingly, the potential risk PR of vb can be calculated as:
In (1), we take the Common Vulnerability Scoring System (CVSS) base score to represent PR(eab). The detailed calculation is omitted here and can be found in the CVSS specification [21].
Table 3. Logic truth table of the convergence prototype
Similar to the divergence prototype, the logic truth values in Table 3 describe the relationship vc = (va∧eac)∨(vb∧ebc) in the convergence prototype. The potential risk PR of vc can be calculated by:
where v* is the parent node of vc connected by e*c.
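To illustrate how risk propagates through the two prototypes, the short Python sketch below applies assumed aggregation rules in place of (1) and (2): the divergence child inherits the product of the parent risk and the normalized CVSS base score of the active edge, and the convergence node keeps the strongest contribution among its active parents. The CVSS scores used in the example are hypothetical.

```python
# Illustrative sketch only: the aggregation rules below stand in for equations (1)
# and (2); PR(e) is taken as the CVSS base score rescaled to [0, 1].

def pr_divergence(pr_parent: float, edge_state: int, cvss_base: float) -> float:
    """Divergence prototype: risk of child vb reached from va over edge eab."""
    pr_edge = cvss_base / 10.0            # normalise the CVSS base score (0..10)
    return edge_state * pr_parent * pr_edge

def pr_convergence(parents: list) -> float:
    """Convergence prototype: vc is reachable through any active parent edge.
    Each entry is (pr_parent, edge_state, cvss_base); the strongest contribution
    is kept, which is an assumed aggregation rather than the exact rule of (2)."""
    return max((pr_divergence(p, s, b) for (p, s, b) in parents), default=0.0)

# Example along path AP1 of Fig. 2 with hypothetical CVSS base scores:
pr_v1 = 1.0                               # the attacker's initial foothold
pr_v3 = pr_divergence(pr_v1, 1, 8.5)      # e13 exploited, assumed base score 8.5
pr_v4 = pr_divergence(pr_v3, 1, 7.2)      # e34 exploited, assumed base score 7.2
print(round(pr_v3, 3), round(pr_v4, 3))   # 0.85 0.612
```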
3.3 Hardening Strategy
For a given set of h exploit edges, a hardening strategy HST=(ST1,ST2,…,STh) is a Boolean vector indicating whether strategy STi is implemented on exploit edge ei. Specifically, STi=1 if the hardening measure for ei is chosen, and STi=0 otherwise.
Specifically, countermeasures that are frequently adopted by defenders can be divided into the following types according to the OSVDB (Open Source Vulnerability Database) [22] classification:
The implementation of security countermeasures incurs different security control costs, including installation cost, operation cost, system downtime, incompatibility cost, etc. In practical applications, these data can be obtained from statistics. For simplicity, in this work we omit the acquisition of the costs and use a decimal Ci∈[0,1] to represent the cost of STi. The overall cost C(HST) of hardening strategy HST can then be formulated as:
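A natural form consistent with this description, stated here as an assumption rather than the exact expression, is:

```latex
C(HST) \;=\; \sum_{i=1}^{h} ST_i \cdot C_i , \qquad ST_i \in \{0,1\},\; C_i \in [0,1].
```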
3.4 Risk Function
The value of the risk function f not only measures the security status of the network, but also indicates the validity of enhancement measures. Suppose there are l attack paths in a given risk flow attack graph G, with attack path set AP={APi|i≤l,i∈Z+} and hardening strategy HST=(STe1,STe2,…,STeh); the risk polynomial of an attack path can then be formulated as:
We take the accumulated risk of network assets to define the risk of an attack path. This risk value is an accumulative function of the hardening strategy HST and the independent node risks. In this formulation, the risk value PR(vj) of an individual node vj is calculated recursively by (1) and (2) following the exploit sequence of the path.
Based on common threat behavior, it is reasonable to assume that higher-risk paths are more likely to be selected. Under this assumption, we measure the probability of attack path selection as:
Here, HST0 stands for the hardening strategy {000…00}, since adversaries always assume that the conditions are available to exploit. Our definition of pAPi satisfies the basic requirements of non-negativity, normalization and countable additivity. The properties of pAPi are listed as follows:
In practice, adversaries tend to choose m of the l attack paths at a time to penetrate the network. Hence, the risk value of the whole network can be calculated as:
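In outline, and only as a sketch of one formulation consistent with the descriptions above, the path risk accumulates the node risks along the path, the selection probability normalizes the unhardened path risks, and the network risk averages the risks of the m chosen paths:

```latex
PR(AP_i, HST) \;=\; \sum_{v_j \in AP_i} PR(v_j \mid HST), \qquad
p_{AP_i} \;=\; \frac{PR(AP_i, HST_0)}{\sum_{k=1}^{l} PR(AP_k, HST_0)}, \qquad
PR(HST) \;=\; \sum_{i=1}^{m} p_{AP_i}\, PR(AP_i, HST).
```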
4. Using GA for Optimal Hardening
Our approach to seeking efficient hardening strategies is to formulate the task as a multi-objective genetic algorithm problem:
Given a risk flow attack graph G, find an optimal vector HST that minimizes both the overall security control cost C(HST) and the network risk PR(HST).
The solver starts with an initial population of chromosomes representing possible combinations of hardening strategies HST. In each generation, every strategy in the population is evaluated by fitness selection based on the hardening cost C(HST) and the network risk PR(HST). The selection is determined by a fitness-based crowding distance metric to preserve the diversity of individuals. Individuals undergo several rounds of selection, crossover and mutation until a set of Pareto optima is created. In particular, we adopt an elitist preservation strategy that keeps the elite individuals of every generation and copies them directly into the next one. This strategy protects the elite individuals from being broken up by the crossover and mutation operators. The elitist preservation strategy also maintains global convergence, as proved by Rudolph [23]. The procedure of the proposed approach is shown in Fig. 3.
Fig. 3. Block diagram of our approach
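The overall loop can be sketched in Python as follows; the operator arguments (evaluate, select, crossover, mutate, non_dominated) are placeholders for the components detailed in Sections 4.1-4.6, and the sketch is illustrative rather than a reference implementation of our solver.

```python
import random
from typing import Callable, List, Tuple

Chromosome = List[int]

def optimize_hardening(
    pop_size: int,
    n_edges: int,
    generations: int,
    evaluate: Callable[[Chromosome], Tuple[float, float]],   # -> (C(HST), PR(HST))
    select: Callable[[List[Chromosome], List[Tuple[float, float]]], Chromosome],
    crossover: Callable[[Chromosome, Chromosome], Chromosome],
    mutate: Callable[[Chromosome], Chromosome],
    non_dominated: Callable[[List[Chromosome], List[Tuple[float, float]]], List[Chromosome]],
) -> List[Chromosome]:
    """Skeleton of the multi-objective GA with elitist preservation (sketch)."""
    population = [[random.randint(0, 1) for _ in range(n_edges)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [evaluate(ind) for ind in population]
        elites = non_dominated(population, scores)          # elite individuals survive untouched
        offspring = []
        while len(offspring) < pop_size - len(elites):
            p1 = select(population, scores)
            p2 = select(population, scores)
            offspring.append(mutate(crossover(p1, p2)))
        population = [list(e) for e in elites] + offspring   # elitist preservation
    scores = [evaluate(ind) for ind in population]
    return non_dominated(population, scores)                 # approximate Pareto set
```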
4.1 Chromosome Coding
The algorithm starts by generating an initial population of chromosomes. We adopt a binary encoding format and define the implementation of security countermeasures on exploit edges as the genes of the chromosomes. Fig. 4 shows a sample chromosome for the attack graph in Section 3.
Fig. 4. Sample chromosome of nine exploit edges
This sample chromosome is encoded as HST={110100101} for a risk flow attack graph with nine exploit edges. The genes encoded as ‘1’ represent edges that are hardened by defenders; the ‘0’ genes stand for unhandled edges.
4.2 Initial Population
We populate the initial population population[0] with the hardening strategy set HST={HST1,HST2,…,HSTl}, where sizeof(population[0])=card(HST)=l. Here l is an input parameter of the genetic algorithm. Each chromosome in population[0] is encoded as a binary string by the above chromosome coding scheme, where HSTi=(ST1,ST2,…,STh), 1≤i≤l. The values of the binary strings are initialized randomly without loss of generality.
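For the nine-edge sample graph, such a randomly initialized population can be produced with a few lines of Python (variable names are our own):

```python
import random

H = 9           # number of exploit edges in the sample graph (chromosome length)
POP_SIZE = 100  # population size, the input parameter of the genetic algorithm

# population[0]: randomly initialised hardening strategies HST_i = (ST_1, ..., ST_h)
population_0 = [[random.randint(0, 1) for _ in range(H)] for _ in range(POP_SIZE)]
print(''.join(map(str, population_0[0])))   # e.g. '110100101'
```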
4.3 Objective Function
As mentioned before, defenders always face the challenge of reducing network risk as much as possible within a fixed budget. Thus, the two objective functions are the security control cost C(HST) and the network risk PR(HST):
4.4 Fitness Selection
To guarantee the diversity of individuals and the uniformity of non-inferior solutions, we perform crowding distance selection with a 2-tournament strategy. The selection procedure can be divided into three steps:
It is worth noting that the non-inferior individuals in each population are preserved as elite individuals, whose selection probability is ‘1’.
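A minimal Python sketch of the crowded 2-tournament comparison is given below, assuming that a lower Pareto rank wins and that ties are broken by a larger crowding distance; the rank and distance values are supplied by the non-dominated sorting step.

```python
import random
from typing import List, Tuple

def crowding_distance(front: List[Tuple[float, float]]) -> List[float]:
    """Crowding distance inside one Pareto tier: for each individual, the sum of
    the normalised side lengths of the cuboid spanned by its two neighbours."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for m in range(2):                                      # two objectives: cost, risk
        order = sorted(range(n), key=lambda i: front[i][m])
        span = (front[order[-1]][m] - front[order[0]][m]) or 1.0
        dist[order[0]] = dist[order[-1]] = float('inf')     # keep boundary individuals
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / span
    return dist

def tournament_2(pop: list, rank: List[int], dist: List[float]):
    """2-tournament: draw two individuals, prefer the lower Pareto rank and break
    ties with the larger crowding distance."""
    a, b = random.sample(range(len(pop)), 2)
    return pop[a] if (rank[a], -dist[a]) <= (rank[b], -dist[b]) else pop[b]
```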
4.5 Crossover
Taking L as the length of the chromosome, we perform a two-point crossover by generating two random integers m and n within the interval [1, L]. Suppose two parent chromosomes are selected for a two-point crossover. The function selects:
As described above, three intervals are selected from the two parents and concatenated to form the child chromosome. The two-point crossover process is illustrated in Fig. 5:
Fig. 5. Two-point crossover process
For example, the parent chromosomes in Fig. 5 are {110100101} and {101101011}. If the crossover points {m, n} are set to {3, 7}, the child chromosome {110101001} is obtained.
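The Fig. 5 example can be reproduced with a short Python function (a sketch; positions m and n are 1-based, as in the description above):

```python
def two_point_crossover(parent1: str, parent2: str, m: int, n: int) -> str:
    """Two-point crossover: genes 1..m and n+1..L come from parent1,
    genes m+1..n come from parent2 (1-based positions)."""
    return parent1[:m] + parent2[m:n] + parent1[n:]

# Reproduces the Fig. 5 example with crossover points {3, 7}:
child = two_point_crossover('110100101', '101101011', 3, 7)
print(child)   # '110101001'
```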
4.6 Mutation
We perform a two-step uniform mutation process. First, a fraction of the vector entries of an individual is selected for mutation, where each entry has a probability r of being mutated. The default rate is set to r=0.01 and can be adjusted according to the performance of the algorithm. In the second step, the gene at each selected entry is replaced by the opposite value.
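A sketch of this operator in Python, with the assumed default rate r = 0.01:

```python
import random

def uniform_mutation(chromosome: list, rate: float = 0.01) -> list:
    """Uniform mutation: each gene is flipped to its opposite value with probability rate."""
    return [1 - gene if random.random() < rate else gene for gene in chromosome]

mutated = uniform_mutation([1, 1, 0, 1, 0, 0, 1, 0, 1], rate=0.01)
```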
5. Experiments and Analysis
Taking the example network given in Section 3 as a scenario, we present numerical results to evaluate the feasibility and effectiveness of the proposed method. Our implementation mainly uses the genetic algorithm toolbox gatbx [24], developed by the University of Sheffield, running under Matlab R2011b. The experiments are performed on a PC with a 2.3 GHz Intel(R) Core(TM) i5-2410M CPU and 4 GB RAM running the Windows 7 Ultimate operating system. The experiments include: A) the evolutionary process of hardening cost and network risk; B) the average distance between individuals (hardening strategies) during evolution; C) the rank histogram of individuals in each Pareto tier; D) the Pareto frontier of optimal hardening strategies for cost and risk; and E) the scalability of the multi-objective genetic algorithm.
The simulation parameters are listed in Table 4.
Table 4. Simulation parameters
The sample risk flow attack graph is an 8-node graph with 9 valid exploit edges. Our simulation uses an 8×8 weighted matrix R=[rij]8×8 to represent the sample attack graph, where rij holds the risk weight of the exploit edge from vi to vj if such an edge exists, and rij=0 otherwise.
For convenience of simulation, the nonzero values of rij are generated randomly within the interval (0, 1). Upon initialization, our simulation encodes the hardening strategy HST into a 9-bit string with a population size of 100. The other essential parameters of this multi-objective GA approach can be found in Table 4.
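A sketch of how such a matrix might be initialized is shown below; for brevity only the four edges of path AP1 are filled in, and the random weights merely stand in for the CVSS-derived edge risks:

```python
import random

N = 8                                      # nodes v1..v8 of the sample graph
# Exploit edges of the sample graph; only the four edges of path AP1 are listed
# here for illustration, whereas the full graph in Fig. 2 has nine such edges.
edges = [(1, 3), (3, 4), (4, 6), (6, 8)]

R = [[0.0] * N for _ in range(N)]          # weighted matrix R = [r_ij]
for (i, j) in edges:
    R[i - 1][j - 1] = random.uniform(0.0, 1.0)   # nonzero r_ij drawn from (0, 1)
```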
5.1 Value Trend of Objective Functions
After 100 generations of selection, crossover and mutation, we obtain the value trends of the two objective functions shown in Fig. 6.
Fig. 6. Value trend of objective functions
Fig. 6 depicts the trend of the two objective functions, PR(HST) and C(HST), over the generations. The solid line with marker ‘+’ represents the risk value PR(HST), and the dotted line marked by ‘*’ stands for the hardening cost C(HST). In the 100-generation evolutionary process, these two objective functions are optimized by the genetic process described in Section 4. Notably, every time the hardening cost reaches a peak, the corresponding risk value falls to a low point, and vice versa. Obviously, in a real case, the more effort defenders spend, the less risk remains in the network. This trend shows that determining the optimal hardening strategy is not a simple maximization or minimization problem. Therefore, we adopt a Pareto optimal set approach to work out the optimal solutions.
5.2 Average Distance between Individuals
When determining defense strategies, security analysts always need to prioritize the alternatives and apply the most efficient ones. Because possible hardening strategy combinations can be highly similar, our approach enforces a diversity-preserving mechanism based on a crowding distance metric. The crowding distance of an individual is the sum of the average side lengths of the cuboid spanned by its neighboring individuals in objective space. The value of the average distance metric over the generations is shown in Fig. 7.
Fig. 7. Average distance between individuals
As depicted in Fig. 7, the average distance between individuals varies within the range of 0.3 to 1.9 and slowly converges from around 2 to 1.15 over 100 generations. At the beginning of the simulation, the average distance fluctuates around 2; at this point, there are only a few non-inferior solutions, i.e., optimal hardening strategies, in the solution space. With the enforcement of crowding distance selection, individuals with lower rank and larger crowding distance are given more preference, thereby forcing the search into less dense areas of the solution space. After 100 generations of evolution, hardening strategies with better performance have been obtained. Meanwhile, the average distance between individuals converges to 1.15 at the stopping criterion of 100 generations.
5.3 Distribution of Optimal Solutions
After generations of selection, crossover and mutation, the hardening strategies in the solution space are divided into several ranks according to their performance. These ranks are known as Pareto tiers and give an intuitive view of the distribution of individuals. The fraction of individuals in each Pareto tier is shown in Fig. 8 as a rank histogram.
Fig. 8. Rank histogram of Pareto tiers
As depicted in Fig. 8, the individuals are partitioned into 11 ranks according to their performance. For example, in Fig. 8 there are 10 individuals in Rank 5, which are dominated by the ranks {i|i<5}, meaning that for arbitrary hardening strategies HSTx and HSTy in Rank 5 and Rank i (i<5), respectively, the following relation holds:
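In the usual Pareto sense, this dominance relation can be sketched as:

```latex
C(HST_y) \le C(HST_x) \;\wedge\; PR(HST_y) \le PR(HST_x),
```

with at least one of the two inequalities strict.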
The individuals in Rank 1 are the best and are also known as the non-inferior solutions. In our experimental population, there are 12 individuals in Rank 1, and these 12 non-inferior hardening strategies form the Pareto frontier of our multi-objective optimization. Fig. 9 plots the objective function values of all non-inferior individuals.
Fig. 9. Distribution of optimal hardening strategies (Pareto front)
As shown in Fig. 9, the 12 optimal strategies are denoted by the marker ‘☆’. These individuals belong to Rank 1, which is not dominated by any other rank. The hardening strategies on the Pareto front provide security analysts with an optimal decision set to choose from, depending on the actual optimization goals. For example, the five annotations in Fig. 9 correspond to the following optimization goals, from left to right:
α*PR(HST)+β*C(HST), where α=0.9 and β=0.1;
The corresponding bit-string values of the hardening strategies and the numeric results for risk and cost are given in Table 5.
Table 5. Pareto non-inferior solutions
5.4 Scalability
As stated in Section 3, we use the risk flow attack graph in Fig. 2 to illustrate our multi-objective genetic algorithm. The graph model is relatively simple but practical, because, considering the intruder’s attack pattern, an adversary tends to limit the attack process to as few steps as possible to avoid exposure. Moreover, we conducted a series of experiments to verify the scalability of our method. The results are shown in Fig. 10.
Fig. 10. Time complexity of our genetic algorithm over scale
As depicted in Fig. 10, we measured the execution times for computing optimal hardening strategies in 6 risk flow attack graphs with similar structures, ranging from 50 to 300 nodes. The solid line in Fig. 10 shows the increasing trend of the running time of our multi-objective genetic algorithm. The running time exhibits an acceptable, modest increase with scale, which verifies the feasibility and scalability of our method.
6. Conclusion
The risk flow attack graph based approach presented in this paper models vulnerabilities and quantifies network risk through a multi-objective GA solver. With this strategy, network security risk is calculated by an iterative process according to the two attacker prototypes defined in the attack graph. The metrics of network risk and hardening cost, two elements that security analysts cannot ignore, are taken as the objective functions to be optimized.
Compared with existing security hardening methods, this work has the following differences:
As is well known, risk always lies concealed in a network. Once an exploit occurs, the latent risk is brought to the surface, causing defenders to take preventive actions. These measures in turn prompt adversaries to improve their attack methods. Thus, in future work, the dynamic relationship between security hardening strategies and attack behaviors will be researched on the basis of this work. More network factors that could affect security risk, such as the concealment of attackers and risk thresholds, will be studied to improve and refine our approach. Furthermore, we are interested in analyzing large-scale attack scenarios to test the performance and scalability of the proposed approach.