SiameseNet-tensorflow

TensorFlow implementation of Koch et al. (2015)

Siamese Neural Network Architecture

Architecture

The model consists of a sequence of convolutional layers, each of which uses a single channel with filters of varying size and a fixed stride of 1.

Activation

The network applies a ReLU activation function to the output feature maps, optionally followed by max-pooling with a filter size and stride of 2.
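The convolutional blocks described above can be sketched with `tf.keras` as follows. The filter counts and sizes (64@10x10, 128@7x7, 128@4x4, 256@4x4) and the 105x105 single-channel input follow Koch et al. (2015); the exact values in this repository's code may differ, so treat this as an illustrative twin network rather than the implementation itself.

```python
import tensorflow as tf

def build_twin(input_shape=(105, 105, 1)):
    """One twin of the Siamese network: Conv -> ReLU -> MaxPool blocks.

    Layer sizes follow Koch et al. (2015) and are illustrative here.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        # Each conv layer uses a fixed stride of 1 and ReLU activation.
        tf.keras.layers.Conv2D(64, (10, 10), strides=1, activation="relu"),
        # Max-pooling with filter size and stride of 2.
        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(128, (7, 7), strides=1, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(128, (4, 4), strides=1, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(256, (4, 4), strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
        # Feature embedding compared between the two twins.
        tf.keras.layers.Dense(4096, activation="sigmoid"),
    ])
```

With a 105x105 input this produces a 4096-dimensional embedding per image; the two twins share weights, and their embeddings are compared (e.g. via an L1 distance) before the final sigmoid output.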

Loss Function

Binary cross-entropy is used as the loss function: the network's sigmoid output is treated as the probability that the two input images belong to the same class.
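A minimal sketch of this loss on a batch of pair labels (1 = same class, 0 = different class), using the standard `tf.keras` binary cross-entropy; the specific label and score values below are made up for illustration:

```python
import tensorflow as tf

# Ground-truth pair labels: 1.0 = same class, 0.0 = different class.
y_true = tf.constant([[1.0], [0.0], [1.0]])
# Hypothetical similarity scores from the network's sigmoid output.
y_pred = tf.constant([[0.9], [0.2], [0.7]])

bce = tf.keras.losses.BinaryCrossentropy()
loss = bce(y_true, y_pred)  # scalar mean loss over the batch
```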

Optimizer

The Adam optimizer is used to minimize the loss.
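A sketch of how Adam can be wired up in TensorFlow; the learning rate here is a hypothetical value, not necessarily the one this repository uses (Koch et al. originally tuned per-layer SGD rates). The toy quadratic objective just shows that one Adam step moves the variable toward the minimum:

```python
import tensorflow as tf

# Hypothetical learning rate for illustration only.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

# Toy objective: loss = w^2, minimized at w = 0.
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = tf.square(w)
grads = tape.gradient(loss, [w])
# One optimizer step: w decreases toward 0.
optimizer.apply_gradients(zip(grads, [w]))
```

In training, `loss` would be the binary cross-entropy over a batch of image pairs and `[w]` the model's trainable variables.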

Weight Initialization

All weights in the convolutional layers are initialized from a zero-mean normal distribution with a standard deviation of $10^{-2}$. Biases are also initialized from a normal distribution, but with mean 0.5 and standard deviation $10^{-2}$.
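This scheme maps directly onto `tf.keras.initializers.RandomNormal`. The layer shape below is illustrative; the point is the initializer arguments, which mirror the distributions stated above:

```python
import tensorflow as tf

# Weights ~ N(mean=0, stddev=1e-2); biases ~ N(mean=0.5, stddev=1e-2).
weight_init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=1e-2)
bias_init = tf.keras.initializers.RandomNormal(mean=0.5, stddev=1e-2)

# Example conv layer using these initializers (filter shape is illustrative).
conv = tf.keras.layers.Conv2D(
    64, (10, 10),
    kernel_initializer=weight_init,
    bias_initializer=bias_init,
)
```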

References

Koch, G., Zemel, R., & Salakhutdinov, R. (2015). Siamese Neural Networks for One-shot Image Recognition. ICML Deep Learning Workshop.