ActivationFunction.cs

Summary
  Enumerations
    ActivationFunction - The activation functions used for the neurons during training.

Enumerations

ActivationFunction

FANN C# ActivationFunction enumerator

public enum ActivationFunction

The activation functions used for the neurons during training.  The activation function can either be set for a group of neurons by FANNCSharp.Float::NeuralNet::ActivationFunctionHidden and FANNCSharp.Float::NeuralNet::ActivationFunctionOutput, or for a single neuron by FANNCSharp.Float::NeuralNet::SetActivationFunction.

The steepness of an activation function is defined in the same way by FANNCSharp.Float::NeuralNet::ActivationSteepnessHidden, FANNCSharp.Float::NeuralNet::ActivationSteepnessOutput and FANNCSharp.Float::NeuralNet::SetActivationSteepness.
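
For illustration, here is a minimal sketch that applies both styles to a small network. The ActivationFunctionHidden, ActivationFunctionOutput, ActivationSteepnessHidden, ActivationSteepnessOutput, SetActivationFunction and SetActivationSteepness members are the ones summarized at the end of this page; the NeuralNet constructor overload and the NetworkType.LAYER value are assumptions about the rest of the FANNCSharp API, not documented here.

using FANNCSharp;
using FANNCSharp.Float;

class ActivationSetupSketch
{
    static void Main()
    {
        // Assumed constructor overload: a layer network with 3 layers
        // (2 inputs, one hidden layer of 3 neurons, 1 output).
        using (NeuralNet net = new NeuralNet(NetworkType.LAYER, 3, 2, 3, 1))
        {
            // Group-wise: one function for all hidden layers, one for the output layer.
            net.ActivationFunctionHidden = ActivationFunction.FANN_SIGMOID_SYMMETRIC;
            net.ActivationFunctionOutput = ActivationFunction.FANN_LINEAR;

            // Per-neuron: neuron 0 in layer 1, counting the input layer as layer 0.
            net.SetActivationFunction(ActivationFunction.FANN_ELLIOT, 1, 0);

            // Steepness is set the same way.
            net.ActivationSteepnessHidden = 0.5f;
            net.ActivationSteepnessOutput = 1.0f;
            net.SetActivationSteepness(0.25f, 1, 0);
        }
    }
}

Group-wise assignment covers most cases; the per-neuron overloads are only needed when individual neurons in the same layer should use different functions or steepnesses.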

The functions are described by their formulas, where

  • x is the input to the activation function,
  • y is the output,
  • s is the steepness and
  • d is the derivative.

A plain-C# sketch that evaluates a few of these formulas follows the list.
FANN_LINEAR - Linear activation function.
  • span: -inf < y < inf
  • y = x*s, d = 1*s
  • Can NOT be used in fixed point.
FANN_THRESHOLD - Threshold activation function.
  • x < 0 -> y = 0, x >= 0 -> y = 1
  • Can NOT be used during training.
FANN_THRESHOLD_SYMMETRIC - Symmetric threshold activation function.
  • x < 0 -> y = -1, x >= 0 -> y = 1
  • Can NOT be used during training.
FANN_SIGMOID - Sigmoid activation function.
  • One of the most used activation functions.
  • span: 0 < y < 1
  • y = 1/(1 + exp(-2*s*x))
  • d = 2*s*y*(1 - y)
FANN_SIGMOID_STEPWISE - Stepwise linear approximation to sigmoid.
  • Faster than sigmoid but a bit less precise.
FANN_SIGMOID_SYMMETRIC - Symmetric sigmoid activation function, aka tanh.
  • One of the most used activation functions.
  • span: -1 < y < 1
  • y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1
  • d = s*(1-(y*y))
FANN_SIGMOID_SYMMETRIC_STEPWISE - Stepwise linear approximation to symmetric sigmoid.
  • Faster than symmetric sigmoid but a bit less precise.
FANN_GAUSSIAN - Gaussian activation function.
  • 0 when x = -inf, 1 when x = 0 and 0 when x = inf
  • span: 0 < y < 1
  • y = exp(-x*s*x*s)
  • d = -2*x*s*y*s
FANN_GAUSSIAN_SYMMETRIC - Symmetric gaussian activation function.
  • -1 when x = -inf, 1 when x = 0 and -1 when x = inf
  • span: -1 < y < 1
  • y = exp(-x*s*x*s)*2-1
  • d = -2*x*s*(y+1)*s
FANN_ELLIOT - Fast (sigmoid-like) activation function defined by David Elliott.
  • span: 0 < y < 1
  • y = ((x*s) / 2) / (1 + |x*s|) + 0.5
  • d = s*1/(2*(1+|x*s|)*(1+|x*s|))
FANN_ELLIOT_SYMMETRIC - Fast (symmetric-sigmoid-like) activation function defined by David Elliott.
  • span: -1 < y < 1
  • y = (x*s) / (1 + |x*s|)
  • d = s*1/((1+|x*s|)*(1+|x*s|))
FANN_LINEAR_PIECE - Bounded linear activation function.
  • span: 0 <= y <= 1
  • y = x*s, d = 1*s
FANN_LINEAR_PIECE_SYMMETRIC - Bounded linear activation function.
  • span: -1 <= y <= 1
  • y = x*s, d = 1*s
FANN_SIN_SYMMETRIC - Periodic sine activation function.
  • span: -1 <= y <= 1
  • y = sin(x*s)
  • d = s*cos(x*s)
FANN_COS_SYMMETRIC - Periodic cosine activation function.
  • span: -1 <= y <= 1
  • y = cos(x*s)
  • d = s*-sin(x*s)
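
To make the roles of x, s and y concrete, the following standalone sketch evaluates three of the formulas above in plain C#, with no dependency on the library; the method names and sample inputs are illustrative only.

using System;

class ActivationFormulaSketch
{
    // FANN_SIGMOID: y = 1/(1 + exp(-2*s*x))
    static double Sigmoid(double x, double s) => 1.0 / (1.0 + Math.Exp(-2.0 * s * x));

    // FANN_SIGMOID_SYMMETRIC: y = tanh(s*x)
    static double SigmoidSymmetric(double x, double s) => Math.Tanh(s * x);

    // FANN_ELLIOT: y = ((x*s) / 2) / (1 + |x*s|) + 0.5
    static double Elliott(double x, double s) => ((x * s) / 2.0) / (1.0 + Math.Abs(x * s)) + 0.5;

    static void Main()
    {
        const double s = 0.5; // example steepness value
        foreach (double x in new[] { -4.0, 0.0, 4.0 })
        {
            Console.WriteLine($"x = {x,4}: sigmoid = {Sigmoid(x, s):F4}, " +
                              $"tanh = {SigmoidSymmetric(x, s):F4}, elliott = {Elliott(x, s):F4}");
        }
    }
}

At x = 0 the symmetric variant returns 0 while the other two return 0.5, matching the spans listed above.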

See also

FANNCSharp.Float::NeuralNet::ActivationFunctionHidden, FANNCSharp.Float::NeuralNet::ActivationFunctionOutput

Referenced NeuralNet members

public ActivationFunction ActivationFunctionHidden { set }
Set the activation function for all of the hidden layers.

public ActivationFunction ActivationFunctionOutput { set }
Set the activation function for the output layer.

public void SetActivationFunction(ActivationFunction function, int layer, int neuron)
Set the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.

public float ActivationSteepnessHidden { set }
Set the steepness of the activation functions in all of the hidden layers.

public float ActivationSteepnessOutput { set }
Set the steepness of the activation function in the output layer.

public void SetActivationSteepness(float steepness, int layer, int neuron)
Set the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.