Helper Functions
NeuralVerification.read_nnet — Function

read_nnet(fname::String; last_layer_activation = Id())

Read in a neural net from a .nnet file and return a Network struct. The .nnet format is borrowed from NNet and assumes all hidden layers have ReLU activation. The keyword argument last_layer_activation sets the activation of the last layer and defaults to Id() (i.e. a linear output layer).
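A hypothetical usage sketch (the file path here is illustrative only):

```julia
using NeuralVerification

# Load a network stored in the .nnet format (hypothetical path).
nnet = read_nnet("models/small_relu.nnet")

# The same file, but with a ReLU output layer instead of the default Id():
nnet_relu_out = read_nnet("models/small_relu.nnet", last_layer_activation = ReLU())
```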
NeuralVerification.compute_output — Function

compute_output(nnet::Network, input::Vector{Float64})

Propagate a given vector through the network and compute the output.
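A minimal sketch with a hand-built toy network; the Layer(weights, bias, activation) and Network(layers) constructors are assumed from the package source:

```julia
using NeuralVerification

# Toy 2-2-1 network: one ReLU hidden layer, linear (Id) output layer.
layer1 = Layer([1.0 -1.0; 0.5 2.0], [0.0, 0.0], ReLU())
layer2 = Layer([1.0 1.0], [0.1], Id())
nnet   = Network([layer1, layer2])

compute_output(nnet, [1.0, 2.0])
# hidden pre-activations are [-1.0, 4.5] -> ReLU -> [0.0, 4.5] -> output [4.6]
```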
NeuralVerification.get_activation — Function

get_activation(L, x::Vector)

Finds the activation pattern of a vector x subject to the activation function given by the layer L. Returns a Vector{Bool} where true denotes that the node is "active". For ReLU, this corresponds to x[i] >= 0.

get_activation(nnet::Network, x::Vector)

Given a network, find the activation pattern of all neurons at a given point x. Returns a Vector{Vector{Bool}}, where each Vector{Bool} is the activation pattern of a particular layer.
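For example, reusing the toy nnet from the compute_output sketch above:

```julia
pattern = get_activation(nnet, [1.0, 2.0])
# pattern[1] == [false, true]: the hidden pre-activations were [-1.0, 4.5],
# so only the second hidden node is active at this point.
```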
get_activation(nnet::Network, input::Hyperrectangle)

Given a network, find the activation pattern of all neurons for a given input set. Assumes ReLU activations. Returns a Vector{Vector{Int64}}, where each entry is one of:
- 1: activated
- 0: undetermined
- -1: not activated
get_activation(nnet::Network, bounds::Vector{Hyperrectangle})

Given a network, find the activation pattern of all neurons given node-wise bounds. Assumes ReLU activations and pre-activation bounds, where the bounds on the input are given by the first hyperrectangle, the bounds on the first hidden layer by the second hyperrectangle, and so on. Returns a Vector{Vector{Int64}}, where each entry is one of:
- 1: activated
- 0: undetermined
- -1: not activated
get_activation(L::Layer{ReLU}, bounds::Hyperrectangle)

Given a layer, find the activation pattern of all neurons in the layer given node-wise bounds. Assumes ReLU activations and that bounds holds the pre-activation bounds for each node in the layer. Returns a Vector{Int64}, where each entry is one of:
- 1: activated
- 0: undetermined
- -1: not activated
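A hedged sketch using a Hyperrectangle input set (Hyperrectangle comes from LazySets); the all-ReLU network below is illustrative:

```julia
using LazySets  # for Hyperrectangle, if not already in scope

# All-ReLU variant of the toy network, since these methods assume ReLU.
relu_net = Network([Layer([1.0 -1.0; 0.5 2.0], [0.0, 0.0], ReLU()),
                    Layer([1.0 1.0], [0.1], ReLU())])

# A small box around the point [1.0, 2.0] used earlier.
input = Hyperrectangle(low = [0.9, 1.9], high = [1.1, 2.1])

states = get_activation(relu_net, input)
# states[1] should be [-1, 1]: the first hidden node is fixed off over the
# whole box, the second fixed on; 0 would mark an undetermined node.
```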
NeuralVerification.get_gradient — Function

get_gradient(nnet::Network, x::Vector)

Given a network, find the gradient of the network output with respect to the input, evaluated at the point x.
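A sketch reusing the toy nnet from the compute_output example (values are for illustration):

```julia
G = get_gradient(nnet, [1.0, 2.0])
# At this point only the second hidden node is active, so the Jacobian
# collapses to W2 * diag([0, 1]) * W1, i.e. [0.5 2.0].
```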
NeuralVerification.act_gradient — Function

act_gradient(act, z_hat::Vector{N}) where N

Compute the gradient of an activation function at the point z_hat. Currently only supports ReLU and Id.
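For instance (assuming, per the package source, that the gradient is reported element-wise as a Boolean 0/1 mask):

```julia
act_gradient(ReLU(), [-1.0, 4.5])  # [false, true]: ReLU passes gradient only where z_hat >= 0
act_gradient(Id(), [-1.0, 4.5])    # [true, true]: identity has unit gradient everywhere
```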
NeuralVerification.act_gradient_bounds — Function

act_gradient_bounds(nnet::Network, input::AbstractPolytope)

Compute bounds on the gradients of all activation functions given an input set. Currently only supports ReLU.

Return:
LΛ, UΛ::NTuple{2, Vector{BitVector}}: lower and upper bounds on the activation gradients
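A sketch, reusing relu_net and input from the get_activation example above:

```julia
LΛ, UΛ = act_gradient_bounds(relu_net, input)
# Wherever LΛ[i][j] == UΛ[i][j], the gradient of node j in layer i is fixed
# over the entire input set; where they differ, it can be either 0 or 1.
```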
NeuralVerification.interval_map — Function

interval_map(W::Matrix, l::AbstractVecOrMat, u::AbstractVecOrMat)

Simple linear mapping on intervals, where [W]₊ and [W]₋ denote the positive and negative parts of W:

L, U := ([W]₊*l + [W]₋*u), ([W]₊*u + [W]₋*l)

Outputs:
(lbound, ubound) (after the mapping)
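To make the notation concrete, here is a stand-alone version of the same mapping, assuming [W]₊ = max.(W, 0) and [W]₋ = min.(W, 0):

```julia
W = [1.0 -2.0; 3.0 0.5]
l = [-1.0, 0.0]   # lower bounds on the input interval
u = [ 1.0, 2.0]   # upper bounds on the input interval

Wpos, Wneg = max.(W, 0), min.(W, 0)
lbound = Wpos * l + Wneg * u   # [-5.0, -3.0]
ubound = Wpos * u + Wneg * l   # [ 1.0,  4.0]
```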
NeuralVerification.get_bounds — Function

get_bounds(problem::Problem)
get_bounds(nnet::Network, input::Hyperrectangle, [true])

Computes node-wise bounds given an input set. The optional last argument determines whether the bounds are pre- or post-activation.

Return:
Vector{Hyperrectangle}: bounds for all nodes. bounds[1] is the input set.
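A brief sketch, again with relu_net and input from the examples above:

```julia
bounds = get_bounds(relu_net, input)
# bounds[1] is the input set itself; each following Hyperrectangle bounds
# the nodes of one layer.
low(bounds[2]), high(bounds[2])  # node-wise lower/upper bounds of layer 1
```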
NeuralVerification.split_interval — Function

split_interval(dom, i)

Split a set into two at the given index.

Inputs:
dom::Hyperrectangle: the set to be split
i: the index to split at

Return:
(left, right)::Tuple{Hyperrectangle, Hyperrectangle}: the two sets after the split
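For example, assuming (as in the package source) that the split happens at the midpoint of the chosen dimension:

```julia
using LazySets

box = Hyperrectangle(low = [0.0, 0.0], high = [2.0, 4.0])
left, right = split_interval(box, 2)
# left covers [0, 2] × [0, 2] and right covers [0, 2] × [2, 4].
```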