Feedforward networks are the simplest form of neural network. They consist of layers of neurons in which all of the neuron outputs from one layer are connected to all of the neuron inputs in the following layer. They can be used to map any static pattern into some other pattern. When used as classifiers, the output layer of neurons uses a so-called softmax squashing function. Like the normal squashing function, it compresses its inputs into a reduced positive range, but it additionally scales them so that they sum to one. This allows each output to be interpreted as the probability that the current input belongs to the class associated with that particular neuron.
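To make the softmax behaviour concrete, here is a minimal sketch in Python using NumPy. The function name and the max-subtraction trick for numerical stability are my own choices, not taken from the text above.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Map a vector of raw neuron outputs to positive values that sum to one."""
    shifted = logits - np.max(logits)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# Example: three output neurons; the results can be read as class probabilities.
print(softmax(np.array([2.0, 1.0, 0.1])))  # -> roughly [0.66, 0.24, 0.10]
```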
The graphic shows a simple two-layer feedforward neural network that I use to recognise barcodes. This is a very simple example, and indeed you might be wondering why I use a neural network at all, since I could just store the barcodes in a table and find the best match by table lookup. In fact, the barcodes that I have to recognise are sometimes extracted from photos of paper copies, which are often crumpled and faded. Using a neural network allows me to make a good guess at what the codes are even when they are quite badly distorted.
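For illustration, a two-layer network like the one in the graphic can be sketched as below. The layer sizes, the tanh squashing function on the hidden layer, and the random weights are assumptions for the sake of a runnable example; the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_HIDDEN, N_CLASSES = 64, 32, 10   # assumed sizes, for illustration only
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_INPUTS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_CLASSES, N_HIDDEN))
b2 = np.zeros(N_CLASSES)

def forward(x: np.ndarray) -> np.ndarray:
    """Fully connected hidden layer, then a softmax output layer."""
    hidden = np.tanh(W1 @ x + b1)             # squashing function on the hidden layer
    logits = W2 @ hidden + b2
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)                # softmax: outputs sum to one

# A scanned (possibly distorted) barcode, flattened into a feature vector,
# is mapped to a probability for each known code.
probs = forward(rng.random(N_INPUTS))
print(probs.argmax(), probs.max())
```

Because the output is a probability over all known codes rather than a single exact match, a crumpled or faded barcode still produces a best guess with an associated confidence.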