Problem Definitions
Problem
NeuralVerification.Problem — Type

```julia
Problem{P, Q}(network::Network, input::P, output::Q)
```

Problem definition for neural verification.
The verification problem asks: for every point in the input set, does the corresponding output of the network belong to the output set?
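As a conceptual illustration of the property being checked (not the library's API — the solvers decide this exactly over the whole input set, whereas this sketch merely samples grid points), consider a tiny hand-built ReLU network, a box input set, and an interval output set:

```julia
# Conceptual sketch only. The network f, the box [-1,1]^2, and the
# interval [0, 4] are all made up for illustration.
relu(x) = max.(x, 0.0)

W1 = [1.0 -1.0; 0.5 2.0]; b1 = [0.0, -1.0]
W2 = [1.0 1.0];           b2 = [0.0]
f(x) = (W2 * relu(W1 * x .+ b1) .+ b2)[1]

# Sample the input box and check that every output lands in [0, 4].
samples = [[x1, x2] for x1 in -1:0.25:1, x2 in -1:0.25:1]
holds = all(0.0 <= f(x) <= 4.0 for x in samples)
println(holds)  # true for this particular network and these sets
```

A sampling check like this can only falsify the property (a violated sample is a genuine counterexample); proving it for all input points is exactly what the solvers below exist for.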
Input/Output Sets
Different solvers require problems formulated with particular input and output sets. The table below lists all of the solvers with their required input/output sets.
- HR = Hyperrectangle
- HS = HalfSpace
- HP = HPolytope
- PC = PolytopeComplement
| Solver | Input set | Output set |
|---|---|---|
| ExactReach | HP | HP (bounded[1]) |
| AI2 | HP | HP (bounded[1]) |
| MaxSens | HR | HP (bounded[1]) |
| NSVerify | HR | PC[2] |
| MIPVerify | HR | PC[2] |
| ILP | HR | PC[2] |
| Duality | HR (uniform) | HS |
| ConvDual | HR (uniform) | HS |
| Certify | HR | HS |
| FastLin | HR | HS |
| FastLip | HR | HS |
| ReluVal | HR | HR |
| Neurify | HP | HP |
| DLV | HR | HR[3] |
| Sherlock | HR | HR[3] |
| BaB | HR | HR[3] |
| Planet | HR | PC[2] |
| Reluplex | HP | PC[2] |
[1]: This restriction is not due to a theoretical limitation, but rather to our implementation, and will eventually be relaxed.
[2]: See PolytopeComplement for a justification of this output set restriction.
[3]: The output set must be one-dimensional, i.e. the network can only have a single output node.
Note that solvers which require Hyperrectangles also work on HPolytopes, by overapproximating the input set. Likewise, solvers that require HPolytopes accept a Hyperrectangle input by converting it to H-representation. In general, any set that can be turned into the required set type is converted, wrapped, or approximated appropriately.
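To illustrate the Hyperrectangle-to-H-representation direction, here is a hand-rolled sketch (not the library's conversion code; `box_to_hrep` is an illustrative name). A box with center `c` and radius `r` is the polytope bounded by one pair of half-spaces per dimension:

```julia
# A hyperrectangle with center c and radius r is the polytope
#   { x :  x[i] <= c[i] + r[i]   and   -x[i] <= -(c[i] - r[i]) }.
function box_to_hrep(c::Vector{Float64}, r::Vector{Float64})
    n = length(c)
    I = [i == j ? 1.0 : 0.0 for i in 1:n, j in 1:n]  # identity matrix
    A = vcat(I, -I)                                  # one row per face
    b = vcat(c .+ r, -(c .- r))
    return A, b
end

A, b = box_to_hrep([0.0, 0.0], [1.0, 2.0])  # the box [-1,1] x [-2,2]
println(all(A * [0.5, -1.5] .<= b))  # true: inside the box
println(all(A * [0.5,  2.5] .<= b))  # false: outside the box
```

The opposite direction (HPolytope to Hyperrectangle) is an overapproximation rather than a conversion, since a general polytope is not axis-aligned.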
PolytopeComplements
Some optimization-based solvers work on the principle of a complementary output constraint. Essentially, they test whether a point is not in a set, by checking whether it is in the complement of the set (or vice versa). To represent the kinds of sets we are interested in for these solvers, we define the PolytopeComplement, which represents the complement of a convex set. Note that in general, the complement of a convex set is neither convex nor closed.
Although it is possible to represent the complement of a HalfSpace as another HalfSpace, we require that it be specified as a PolytopeComplement to disambiguate the boundary.
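A minimal sketch of the complement membership test, assuming the polytope is given in H-representation {x : Ax ≤ b} (the names `A`, `b`, and `in_complement` are illustrative, not the library's API):

```julia
# H-representation of a polytope: { x : A*x <= b }.
A = [1.0 0.0; -1.0 0.0; 0.0 1.0; 0.0 -1.0]  # the unit box [-1,1]^2
b = [1.0, 1.0, 1.0, 1.0]

# x lies in the complement iff it violates at least one constraint.
in_complement(x) = any(A * x .> b)

println(in_complement([0.0, 0.0]))  # false: strictly inside the box
println(in_complement([2.0, 0.0]))  # true: outside the box
println(in_complement([1.0, 0.0]))  # false: the boundary belongs to the polytope
```

Note how the strict inequality assigns the boundary to the polytope rather than its complement; this is the kind of boundary disambiguation alluded to above.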
Network
NeuralVerification.Layer — Type

```julia
Layer{F, N}
```

Consists of weights and bias for linear mapping, and activation for nonlinear mapping.

Fields

```julia
weights::Matrix{N}
bias::Vector{N}
activation::F
```
See also: Network
NeuralVerification.Network — Type

NeuralVerification.n_nodes — Method

```julia
n_nodes(L::Layer)
```

Returns the number of neurons in a layer.
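As a sketch of what this computes, using a stand-in struct with the fields documented above (`ToyLayer` and `toy_n_nodes` are illustrative names, not the library's own types):

```julia
# Stand-in mirroring the documented Layer{F, N} fields; illustrative only.
struct ToyLayer{F, N}
    weights::Matrix{N}
    bias::Vector{N}
    activation::F
end

# The number of neurons is the output dimension of the affine map,
# i.e. the number of rows of `weights`, which equals length(bias).
toy_n_nodes(L::ToyLayer) = length(L.bias)

relu(x) = max.(x, 0.0)
L = ToyLayer([1.0 2.0 3.0; 4.0 5.0 6.0], [0.0, 0.0], relu)
println(toy_n_nodes(L))  # 2: a layer mapping 3 inputs to 2 neurons
```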
Activation Functions
Note that of the activation functions listed below, only ReLU is fully supported throughout the library.
NeuralVerification.GeneralAct — Type

```julia
GeneralAct <: ActivationFunction
```

Wrapper type for a general activation function.
Usage
```julia
act = GeneralAct(tanh)

act(0) == tanh(0)       # true
act(10.0) == tanh(10.0) # true
```

```julia
act = GeneralAct(x -> tanh.(x))

julia> act(-2:2)
5-element Array{Float64,1}:
 -0.9640275800758169
 -0.7615941559557649
  0.0
  0.7615941559557649
  0.9640275800758169
```

NeuralVerification.Id — Type

```julia
Id <: ActivationFunction
```

Identity operator.

```julia
(Id())(x) -> x
```

NeuralVerification.Max — Type

```julia
Max <: ActivationFunction
```

```julia
(Max())(x) -> max(maximum(x), 0)
```

NeuralVerification.PiecewiseLinear — Type

```julia
PiecewiseLinear <: ActivationFunction
```

Activation function that uses linear interpolation between supplied knots. An extrapolation condition can be set for values outside the set of knots. Default is Line().
```julia
PiecewiseLinear(knots_x, knots_y, [extrapolation = Line()])
```

Usage
```julia
kx = [0.0, 1.2, 1.7, 3.1]
ky = [0.0, 0.5, 1.0, 1.5]
act = PiecewiseLinear(kx, ky)

act(first(kx)) == first(ky) == 0.0
act(last(kx)) == last(ky) == 1.5

act(1.0)  # 0.4166666666666667
act(-102) # -42.5
```

```julia
act = PiecewiseLinear(kx, ky, Flat())

act(-102) # 0.0
act(Inf)  # 1.5
```

Extrapolations
- Flat()
- Line()
- constant (supply a number as the argument)
- Throw() (throws bounds error)
PiecewiseLinear uses Interpolations.jl.
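The interpolation itself is straightforward. A hand-rolled Base-Julia sketch (not the library's implementation, which delegates to Interpolations.jl) reproduces the values from the usage example above under Line() extrapolation:

```julia
# Linear interpolation between knots, extrapolating along the end
# segments (i.e. Line()-style); illustrative sketch only.
function pwl(kx, ky, x)
    # Index of the segment containing x, clamped so that values outside
    # the knots reuse the first or last segment's slope.
    i = clamp(searchsortedlast(kx, x), 1, length(kx) - 1)
    t = (x - kx[i]) / (kx[i+1] - kx[i])
    return ky[i] + t * (ky[i+1] - ky[i])
end

kx = [0.0, 1.2, 1.7, 3.1]
ky = [0.0, 0.5, 1.0, 1.5]
println(pwl(kx, ky, 1.0))   # 0.41666... on the first segment
println(pwl(kx, ky, -102))  # -42.5, extrapolating the first segment's slope
```

Flat() extrapolation would instead clamp `x` into `[first(kx), last(kx)]` before interpolating, giving 0.0 and 1.5 at the two extremes as shown above.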
NeuralVerification.ReLU — Type

```julia
ReLU <: ActivationFunction
```

```julia
(ReLU())(x) -> max.(x, 0)
```

NeuralVerification.Sigmoid — Type

```julia
Sigmoid <: ActivationFunction
```

```julia
(Sigmoid())(x) -> 1 ./ (1 .+ exp.(-x))
```

NeuralVerification.Tanh — Type

```julia
Tanh <: ActivationFunction
```

```julia
(Tanh())(x) -> tanh.(x)
```

Results
NeuralVerification.Result — Type

```julia
Result
```

Supertype of all result types.
See also: BasicResult, CounterExampleResult, AdversarialResult, ReachabilityResult
NeuralVerification.BasicResult — Type

```julia
BasicResult(status::Symbol)
```

Result type that captures whether the input-output constraint is satisfied. Possible status values:

- :holds (io constraint is always satisfied)
- :violated (io constraint is violated)
- :unknown (could not be determined)

NeuralVerification.CounterExampleResult — Type

```julia
CounterExampleResult(status, counter_example)
```

Like BasicResult, but also returns a counter_example if one is found (if status = :violated). The counter_example is a point in the input set whose image under the network lies outside the output set.
NeuralVerification.AdversarialResult — Type

```julia
AdversarialResult(status, max_disturbance)
```

Like BasicResult, but also returns the maximum allowable disturbance in the input (if status = :violated).
NeuralVerification.ReachabilityResult — Type

```julia
ReachabilityResult(status, reachable)
```

Like BasicResult, but also returns the reachable set of the output given the input constraint (if status = :violated).