Helper Functions

NeuralVerification.read_nnet - Function
read_nnet(fname::String; last_layer_activation = Id())

Read a neural network from a .nnet file and return a Network struct. The .nnet format is borrowed from NNet and assumes all hidden layers have ReLU activations. The keyword argument last_layer_activation sets the activation of the last layer; it defaults to Id() (i.e. a linear output layer).

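For example (a usage sketch; the file path here is hypothetical):

using NeuralVerification

# Load a network; the .nnet format implies ReLU hidden layers.
nnet = read_nnet("path/to/network.nnet")

# Override the default linear output layer with a ReLU output layer:
nnet_relu = read_nnet("path/to/network.nnet"; last_layer_activation = ReLU())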

NeuralVerification.get_activation - Function
get_activation(L, x::Vector)

Finds the activation pattern of a vector x subject to the activation function given by the layer L. Returns a Vector{Bool} where true denotes that a node is "active". For ReLU, a node is active when x[i] >= 0.

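A minimal sketch of the ReLU case, assuming x holds the layer's pre-activation values (relu_pattern is a hypothetical name):

# true exactly where a node is active under the x[i] >= 0 convention
relu_pattern(x::Vector) = x .>= 0

relu_pattern([-1.0, 0.0, 2.5])   # Bool[0, 1, 1]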
get_activation(nnet::Network, x::Vector)

Given a network, find the activation pattern of all neurons at a given point x. Returns a Vector{Vector{Bool}}, where each Vector{Bool} is the activation pattern of one layer.

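A sketch of how such a pattern can be computed by a forward pass, assuming the Network and Layer fields used throughout the package (layers, weights, bias) and ReLU activations everywhere (the package treats a final Id layer separately):

function activation_patterns(nnet, x::Vector)
    patterns = Vector{Vector{Bool}}()
    z = x
    for L in nnet.layers
        ẑ = L.weights * z + L.bias   # pre-activation values
        push!(patterns, ẑ .>= 0)     # record this layer's pattern
        z = max.(ẑ, 0)               # ReLU forward pass
    end
    return patterns
end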
get_activation(nnet::Network, input::Hyperrectangle)

Given a network, find the activation pattern of all neurons for a given input set. Assumes ReLU activations. Returns a Vector{Vector{Int64}}, where each neuron's entry is one of:

  • 1: activated
  • 0: undetermined
  • -1: not activated
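
A usage sketch, with the nnet loaded above and a box input set built with LazySets:

using LazySets

input = Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0])
pattern = get_activation(nnet, input)
# e.g. pattern[2][3] == 0 means node 3 of layer 2 is undetermined on this set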
get_activation(nnet::Network, bounds::Vector{Hyperrectangle})

Given a network, find the activation pattern of all neurons given node-wise bounds. Assumes ReLU activations and pre-activation bounds, where the first hyperrectangle bounds the input, the second bounds the first hidden layer, and so on. Returns a Vector{Vector{Int64}}, where each neuron's entry is one of:

  • 1: activated
  • 0: undetermined
  • -1: not activated
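
Such bounds are typically produced by get_bounds (documented below). A usage sketch, assuming that passing false as the optional flag requests pre-activation bounds (check the source for the exact convention):

bounds = get_bounds(nnet, input, false)   # assumed: pre-activation bounds
pattern = get_activation(nnet, bounds)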
get_activation(L::Layer{ReLU}, bounds::Hyperrectangle)

Given a layer, find the activation pattern of all neurons in the layer given node-wise bounds. Assumes ReLU, and that bounds holds the pre-activation bounds for each ReLU in the layer. Returns a Vector{Int64}, where each neuron's entry is one of:

  • 1: activated
  • 0: undetermined
  • -1: not activated
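
The classification itself reduces to comparing each node's pre-activation bounds against zero. A minimal sketch (relu_status is a hypothetical name):

using LazySets: Hyperrectangle, low, high

function relu_status(bounds::Hyperrectangle)
    l, u = low(bounds), high(bounds)
    # 1: always active, -1: never active, 0: undetermined on these bounds
    return [li >= 0 ? 1 : (ui < 0 ? -1 : 0) for (li, ui) in zip(l, u)]
end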
NeuralVerification.act_gradient - Function
act_gradient(act, z_hat::Vector{N}) where N

Compute the gradient of an activation function act at the point z_hat. Currently supports only ReLU and Id.

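A sketch of the two supported cases (the names are hypothetical, and the value exactly at z_hat = 0 is a convention; this mirrors the x[i] >= 0 rule used above):

relu_grad(ẑ::Vector) = ẑ .>= 0          # ReLU: 0/1 indicator of the active phase
id_grad(ẑ::Vector)   = ones(length(ẑ))  # Id: the gradient is identically 1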
NeuralVerification.act_gradient_bounds - Function
act_gradient_bounds(nnet::Network, input::AbstractPolytope)

Compute the bounds on the gradient of all activation functions given an input set. Currently supports only ReLU. Return:

  • LΛ, UΛ::NTuple{2, Vector{BitVector}}: lower and upper bounds on the activation gradients
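
A usage sketch: since a ReLU gradient is either 0 or 1, entries where the lower and upper bounds agree identify neurons whose phase is fixed over the whole input set:

LΛ, UΛ = act_gradient_bounds(nnet, input)
fixed = LΛ[1] .== UΛ[1]   # neurons of layer 1 that are stably active or inactive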
NeuralVerification.interval_map - Function
interval_map(W::Matrix, l::AbstractVecOrMat, u::AbstractVecOrMat)

Simple linear mapping on intervals: L, U := ([W]₊l + [W]₋u, [W]₊u + [W]₋l), where [W]₊ and [W]₋ denote the positive and negative parts of W, respectively.

Outputs:

  • (lbound, ubound) (after the mapping)
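
A sketch of this computation, splitting W into its positive and negative parts (interval_map_sketch is a hypothetical name):

function interval_map_sketch(W::Matrix, l::AbstractVector, u::AbstractVector)
    Wpos = max.(W, 0)              # [W]₊: keeps positive entries, zeros the rest
    Wneg = min.(W, 0)              # [W]₋: keeps negative entries, zeros the rest
    lbound = Wpos * l + Wneg * u   # least value of W*x over x ∈ [l, u]
    ubound = Wpos * u + Wneg * l   # greatest value of W*x over x ∈ [l, u]
    return lbound, ubound
end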
NeuralVerification.get_bounds - Function
get_bounds(problem::Problem)
get_bounds(nnet::Network, input::Hyperrectangle, [true])

Computes node-wise bounds given an input set. The optional last argument determines whether the returned bounds are pre- or post-activation.

Return:

  • Vector{Hyperrectangle}: bounds for all nodes. bounds[1] is the input set.
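
A usage sketch with the nnet and input from above:

bounds = get_bounds(nnet, input)
bounds[1]          # the input set itself
length(bounds)     # expected: one Hyperrectangle per layer, plus the input set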

NeuralVerification.split_interval - Function
split_interval(dom, i)

Split a set into two at the given index.

Inputs:

  • dom::Hyperrectangle: the set to be split
  • i: the index to split at

Return:

  • (left, right)::Tuple{Hyperrectangle, Hyperrectangle}: two sets after split
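
A sketch of a natural implementation, bisecting at the midpoint of dimension i (split_at_mid is a hypothetical name; the package's choice of split point may differ):

using LazySets: Hyperrectangle, low, high

function split_at_mid(dom::Hyperrectangle, i::Int)
    l, u = low(dom), high(dom)
    m = (l[i] + u[i]) / 2             # midpoint along dimension i
    u_left = copy(u); u_left[i] = m   # left half keeps the lower portion
    l_right = copy(l); l_right[i] = m # right half keeps the upper portion
    return (Hyperrectangle(low = l, high = u_left),
            Hyperrectangle(low = l_right, high = u))
end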