Solvers

The three basic verification methods are "reachability", "optimization", and "search". These are further divided into the categories listed below. Note that all of the optimization methods use the JuMP.jl library.

Reachability-Based Methods

These methods perform exact or approximate reachability analysis to determine the output set corresponding to a given input set. In addition, MaxSens, which computes lower and upper bounds for each layer, is called by other solver types through get_bounds.

ExactReach

NeuralVerification.ExactReach (Type)
ExactReach

ExactReach performs exact reachability analysis to compute the output reachable set for a network.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: HPolytope
  3. Output: AbstractPolytope

Return

ReachabilityResult

Method

Exact reachability analysis.
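
A minimal usage sketch; the network path is illustrative, and ExactReach is assumed to take no constructor arguments:

using NeuralVerification
nnet = read_nnet("../../examples/networks/small_nnet.nnet")
# encode the input interval -0.5 ≤ x ≤ 0.5 as an HPolytope {x : Ax ≤ b}
A = reshape([1.0, -1.0], 2, 1)
input  = HPolytope(A, [0.5, 0.5])
output = HPolytope(ones(1, 1), [102.5])
solver = ExactReach()
result = solve(solver, Problem(nnet, input, output))  # ReachabilityResult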

Property

Sound and complete.

Reference

W. Xiang, H.-D. Tran, and T. T. Johnson, "Reachable Set Computation and Safety Verification for Neural Networks with ReLU Activations," ArXiv Preprint ArXiv:1712.08163, 2017.

source

Ai2

NeuralVerification.Ai2 (Type)
Ai2{T}

Ai2 performs over-approximated reachability analysis to compute the over-approximated output reachable set for a network. T can be Hyperrectangle, Zonotope, or HPolytope, and determines the amount of over-approximation (and hence also the performance tradeoff). The original implementation (from [1]) uses Zonotopes, so we consider this the "benchmark" case. The HPolytope case is more precise but slower; the opposite is true of the Hyperrectangle case.

Note that initializing Ai2() defaults to Ai2{Zonotope}. The following aliases also exist for convenience:

const Ai2h = Ai2{HPolytope}
const Ai2z = Ai2{Zonotope}
const Box = Ai2{Hyperrectangle}
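
For instance (a minimal sketch; the zero-argument constructors are assumed from the aliases above):

# choose the abstract domain via the type parameter
solver_benchmark = Ai2()    # defaults to Ai2{Zonotope}
solver_precise   = Ai2h()   # HPolytope domain: tighter but slower
solver_coarse    = Box()    # Hyperrectangle domain: fastest, loosest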

Problem requirement

  1. Network: any depth, ReLU activation (more activations to be supported in the future)
  2. Input: AbstractPolytope
  3. Output: AbstractPolytope

Return

ReachabilityResult

Method

Reachability analysis using split and join.

Property

Sound but not complete.

Reference

[1] T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev, "Ai2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation," in 2018 IEEE Symposium on Security and Privacy (SP), 2018.

Note

Efficient over-approximation of intersections and unions involving zonotopes relies on Theorem 3.1 of

[2] Singh, G., Gehr, T., Mirman, M., Püschel, M., & Vechev, M. (2018). Fast and effective robustness certification. In Advances in Neural Information Processing Systems (pp. 10802-10813).

source

MaxSens

NeuralVerification.MaxSens (Type)
MaxSens(resolution::Float64, tight::Bool)

MaxSens performs over-approximated reachability analysis to compute the over-approximated output reachable set for a network.

Problem requirement

  1. Network: any depth, any activation that is monotone
  2. Input: Hyperrectangle or HPolytope
  3. Output: AbstractPolytope

Return

ReachabilityResult

Method

First partition the input space into small grid cells according to resolution. Then use interval arithmetic to compute the reachable set for each cell. Two versions of interval arithmetic are implemented, selected by the indicator tight. Default resolution is 1.0. Default tight = false.
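
A usage sketch following the positional signature above; the resolution of 0.3 and the network path are illustrative:

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
output = HPolytope(ones(1, 1), [102.5])
# a finer resolution gives a tighter over-approximation but more cells
solver = MaxSens(0.3, false)
result = solve(solver, Problem(nnet, input, output))  # ReachabilityResult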

Property

Sound but not complete.

Reference

W. Xiang, H.-D. Tran, and T. T. Johnson, "Output Reachable Set Estimation and Verification for Multi-Layer Neural Networks," ArXiv Preprint ArXiv:1708.03322, 2017.

source

ReluVal

NeuralVerification.ReluVal (Type)
ReluVal(max_iter::Int64, tree_search::Symbol)

ReluVal combines symbolic reachability analysis with iterative interval refinement to minimize over-approximation of the reachable set.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: Hyperrectangle
  3. Output: LazySet

Return

CounterExampleResult

Method

Symbolic reachability analysis and iterative interval refinement (search).

  • max_iter default 10.
  • tree_search default :DFS (depth-first search).
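
A usage sketch following the positional signature above (network path illustrative):

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
output = HPolytope(ones(1, 1), [102.5])
solver = ReluVal(10, :DFS)  # 10 refinement iterations, depth-first search
result = solve(solver, Problem(nnet, input, output))  # CounterExampleResult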

Property

Sound but not complete.

Reference

S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana, "Formal Security Analysis of Neural Networks Using Symbolic Intervals," CoRR, vol. abs/1804.10829, 2018. arXiv: 1804.10829.

https://github.com/tcwangshiqi-columbia/ReluVal

source

Neurify

Missing docstring for Neurify.

FastLin

NeuralVerification.FastLin (Type)
FastLin(maxIter::Int64, ϵ0::Float64, accuracy::Float64)

FastLin combines reachability analysis with binary search to find maximum allowable disturbance.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hypercube
  3. Output: AbstractPolytope

Return

AdversarialResult

Method

Reachability analysis by network approximation and binary search.

  • maxIter is the maximum number of search iterations, default 10;
  • ϵ0 is the initial search radius, default 100.0;
  • accuracy is the stopping criterion, default 0.1.
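
A usage sketch following the positional signature above; the 1-d Hyperrectangle serves as the hypercube input, and the network path is illustrative:

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])   # hypercube input
output = HPolytope(ones(1, 1), [102.5])
solver = FastLin(10, 100.0, 0.1)        # maxIter, ϵ0, accuracy
result = solve(solver, Problem(nnet, input, output))  # AdversarialResult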

Property

Sound but not complete.

Reference

T.-W. Weng, H. Zhang, H. Chen, Z. Song, C.-J. Hsieh, D. Boning, I. S. Dhillon, and L. Daniel, "Towards Fast Computation of Certified Robustness for ReLU Networks," ArXiv Preprint ArXiv:1804.09699, 2018.

source

FastLip

NeuralVerification.FastLip (Type)
FastLip(maxIter::Int64, ϵ0::Float64, accuracy::Float64)

FastLip adds Lipschitz estimation on top of FastLin.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hypercube
  3. Output: halfspace

Return

AdversarialResult

Method

Lipschitz estimation + FastLin. All arguments are for FastLin.

  • maxIter is the maximum number of search iterations, default 10;
  • ϵ0 is the initial search radius, default 100.0;
  • accuracy is the stopping criterion, default 0.1.

Property

Sound but not complete.

Reference

T.-W. Weng, H. Zhang, H. Chen, Z. Song, C.-J. Hsieh, D. Boning, I. S. Dhillon, and L. Daniel, "Towards Fast Computation of Certified Robustness for ReLU Networks," ArXiv Preprint ArXiv:1804.09699, 2018.

source

DLV

NeuralVerification.DLV (Type)
DLV(ϵ::Float64)

DLV searches layer by layer for counterexamples in hidden layers.

Problem requirement

  1. Network: any depth, any activation (currently only ReLU is supported)
  2. Input: Hyperrectangle
  3. Output: AbstractPolytope

Return

CounterExampleResult

Method

The following operations are performed layer by layer. For layer i:

  1. determine a reachable set from the reachable set in layer i-1
  2. determine a search tree in the reachable set by refining the search tree in layer i-1
  3. verify
    • True -> continue to layer i+1
    • False -> counterexample

The argument ϵ is the resolution of the initial search tree. Default 1.0.
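
A usage sketch following the signature above (network path illustrative):

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
output = HPolytope(ones(1, 1), [102.5])
solver = DLV(1.0)  # ϵ: resolution of the initial search tree
result = solve(solver, Problem(nnet, input, output))  # CounterExampleResult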

Property

Sound but not complete.

Reference

X. Huang, M. Kwiatkowska, S. Wang, and M. Wu, "Safety Verification of Deep Neural Networks," in International Conference on Computer Aided Verification, 2017.

https://github.com/VeriDeep/DLV

source

Optimization-Based Methods

Example

using NeuralVerification # hide
nnet = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [.5])
output = HPolytope(ones(1,1), [102.5])

problem = Problem(nnet, input, output)
# set the JuMP solver with `optimizer` keyword or use default:
solver = MIPVerify(optimizer = GLPKSolverMIP())

solve(solver, problem)

NSVerify

NeuralVerification.NSVerify (Type)
NSVerify(optimizer, m::Float64)

NSVerify finds counterexamples using mixed integer linear programming.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hyperrectangle or hpolytope
  3. Output: PolytopeComplement

Return

CounterExampleResult

Method

MILP encoding (using the big-M constant m). No presolve. Default optimizer is GLPKSolverMIP(). Default m is 1000.0 (it should be large enough to avoid approximation error).
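
A usage sketch; the PolytopeComplement wrapper and the keyword construction mirror the example above but are assumptions about the current API:

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
# the property holds if every output avoids the polytope {y : y ≤ 102.5}
output = PolytopeComplement(HPolytope(ones(1, 1), [102.5]))
solver = NSVerify(optimizer = GLPKSolverMIP(), m = 1000.0)
result = solve(solver, Problem(nnet, input, output))  # CounterExampleResult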

Property

Sound and complete.

Reference

A. Lomuscio and L. Maganti, "An Approach to Reachability Analysis for Feed-Forward Relu Neural Networks," ArXiv Preprint ArXiv:1706.07351, 2017.

source

MIPVerify

NeuralVerification.MIPVerify (Type)
MIPVerify(optimizer)

MIPVerify computes maximum allowable disturbance using mixed integer linear programming.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hyperrectangle
  3. Output: PolytopeComplement

Return

AdversarialResult

Method

MILP encoding. Uses presolve to compute tight node-wise bounds first. Default optimizer is GLPKSolverMIP().

Property

Sound and complete.

Reference

V. Tjeng, K. Xiao, and R. Tedrake, "Evaluating Robustness of Neural Networks with Mixed Integer Programming," ArXiv Preprint ArXiv:1711.07356, 2017.

https://github.com/vtjeng/MIPVerify.jl

source

ILP

NeuralVerification.ILP (Type)
ILP(optimizer, max_iter)

ILP iteratively solves a linearized primal optimization to compute maximum allowable disturbance. It iteratively adds linear constraints to the problem.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hyperrectangle
  3. Output: PolytopeComplement

Return

AdversarialResult

Method

Iteratively solve a linear encoding of the problem. Only the linear piece of the network that has the same activation pattern as the reference input is considered. Default optimizer is GLPKSolverMIP(). Both iterative and non-iterative methods are provided to solve the LP problem. Default iterative is true.

Property

Sound but not complete.

Reference

O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi, "Measuring Neural Net Robustness with Constraints," in Advances in Neural Information Processing Systems, 2016.

source

Duality

NeuralVerification.Duality (Type)
Duality(optimizer)

Duality uses Lagrangian relaxation to compute over-approximated bounds for a network.

Problem requirement

  1. Network: any depth, any activation function that is monotone
  2. Input: hyperrectangle
  3. Output: halfspace

Return

BasicResult

Method

Lagrangian relaxation. Default optimizer is GLPKSolverMIP().

Property

Sound but not complete.

Reference

K. Dvijotham, R. Stanforth, S. Gowal, T. Mann, and P. Kohli, "A Dual Approach to Scalable Verification of Deep Networks," ArXiv Preprint ArXiv:1803.06567, 2018.

source

ConvDual

NeuralVerification.ConvDual (Type)
ConvDual

ConvDual uses convex relaxation to compute over-approximated bounds for a network.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hypercube
  3. Output: halfspace

Return

BasicResult

Method

Convex relaxation with duality.
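
A usage sketch; HalfSpace comes from LazySets, and the network path is illustrative:

using NeuralVerification, LazySets
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])   # hypercube input
output = HalfSpace([1.0], 102.5)        # the halfspace y ≤ 102.5
solver = ConvDual()
result = solve(solver, Problem(nnet, input, output))  # BasicResult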

Property

Sound but not complete.

Reference

E. Wong and J. Z. Kolter, "Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope," ArXiv Preprint ArXiv:1711.00851, 2017.

https://github.com/locuslab/convex_adversarial

source

Certify

NeuralVerification.Certify (Type)
Certify(optimizer)

Certify uses semidefinite programming to compute over-approximated certificates for a neural network with only one hidden layer.

Problem requirement

  1. Network: one hidden layer, any activation that is differentiable almost everywhere and whose derivative is bounded between 0 and 1
  2. Input: hypercube
  3. Output: halfspace

Return

BasicResult

Method

Semidefinite programming. Default optimizer is SCSSolver().

Property

Sound but not complete.

Reference

A. Raghunathan, J. Steinhardt, and P. Liang, "Certified Defenses against Adversarial Examples," ArXiv Preprint ArXiv:1801.09344, 2018.

source

Sherlock

NeuralVerification.Sherlock (Type)
Sherlock(optimizer, ϵ::Float64)

Sherlock combines local and global search to estimate the range of the output node.

Problem requirement

  1. Network: any depth, ReLU activation, single output
  2. Input: hpolytope or hyperrectangle
  3. Output: hyperrectangle (1d interval)

Return

CounterExampleResult or ReachabilityResult

Method

Local search: solve a linear program to find local optima on a line segment of the piecewise-linear network. Global search: solve a feasibility problem using an MILP encoding (by default calling NSVerify).

  • optimizer default GLPKSolverMIP()
  • ϵ is the margin for global search, default 0.1.
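
A usage sketch following the positional signature above; the 1-d output interval and the network path are illustrative:

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
# does the single output stay within the interval [-1, 110]?
output = Hyperrectangle(low = [-1.0], high = [110.0])
solver = Sherlock(GLPKSolverMIP(), 0.1)  # optimizer, margin ϵ
result = solve(solver, Problem(nnet, input, output))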

Property

Sound but not complete.

Reference

S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari, "Output Range Analysis for Deep Neural Networks," ArXiv Preprint ArXiv:1709.09130, 2017.

https://github.com/souradeep-111/sherlock

source

BaB

NeuralVerification.BaB (Type)
BaB(optimizer, ϵ::Float64)

BaB uses branch and bound to estimate the range of the output node.

Problem requirement

  1. Network: any depth, ReLU activation, single output
  2. Input: hyperrectangle
  3. Output: hyperrectangle (1d interval)

Return

CounterExampleResult or ReachabilityResult

Method

Branch and bound. For branching, it uses iterative interval refinement. For bounding, it computes concrete bounds by sampling and approximate bounds by optimization.

  • optimizer default GLPKSolverMIP()
  • ϵ is the desired accuracy for termination, default 0.1.

Property

Sound and complete.

Reference

R. Bunel, I. Turkaslan, P. H. Torr, P. Kohli, and M. P. Kumar, "A Unified View of Piecewise Linear Neural Network Verification," ArXiv Preprint ArXiv:1711.00455, 2017.

source

Planet

NeuralVerification.Planet (Type)
Planet(optimizer, eager::Bool)

Planet integrates a SAT solver (PicoSAT.jl) to find an activation pattern that maps a feasible input to an infeasible output.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hyperrectangle or bounded hpolytope
  3. Output: PolytopeComplement

Return

BasicResult

Method

Binary search of activations (0/1) and pruning by optimization. Our implementation is non-eager.

  • optimizer default GLPKSolverMIP();
  • eager default false;
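
A usage sketch following the positional signature above; the PolytopeComplement wrapper and the network path are assumptions for illustration:

using NeuralVerification
nnet   = read_nnet("../../examples/networks/small_nnet.nnet")
input  = Hyperrectangle([0.0], [0.5])
output = PolytopeComplement(HPolytope(ones(1, 1), [102.5]))
solver = Planet(GLPKSolverMIP(), false)  # optimizer, eager = false
result = solve(solver, Problem(nnet, input, output))  # BasicResult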

Property

Sound and complete.

Reference

R. Ehlers, "Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks," in International Symposium on Automated Technology for Verification and Analysis, 2017.

https://github.com/progirep/planet

source

Reluplex

NeuralVerification.Reluplex (Type)
Reluplex(optimizer, eager::Bool)

Reluplex uses binary tree search to find an activation pattern that maps a feasible input to an infeasible output.

Problem requirement

  1. Network: any depth, ReLU activation
  2. Input: hyperrectangle
  3. Output: PolytopeComplement

Return

CounterExampleResult

Method

Binary search of activations (0/1) and pruning by optimization.

Property

Sound and complete.

Reference

G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks," in International Conference on Computer Aided Verification, 2017.

source