I transform one pattern into another using neural networks. Each network consists of layers of artificial neurons, and each neuron has a simple, uniform structure. As shown in the animation above, each neuron has three elements: a set of input weights, a summing node and a squashing function. When a vector of numbers representing a pattern is applied to the input of a neuron, each element of the vector is multiplied by its corresponding weight and the products are summed. This so-called activation is then compressed by a squashing function to bring the output into a manageable range. The squashing functions shown here force the output to lie in the range 0 to 1.
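As a minimal sketch of this structure, the following assumes the squashing function is the logistic sigmoid (one common choice that maps into the range 0 to 1); the particular input and weight values are purely illustrative:

```python
import math

def neuron(inputs, weights):
    # Weighted sum of inputs: the "activation"
    activation = sum(x * w for x, w in zip(inputs, weights))
    # Logistic squashing function compresses the activation into (0, 1)
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative pattern vector and weights (not from the text)
output = neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3])
print(output)  # always lies between 0 and 1
```

However large the activation grows, the sigmoid keeps the output bounded, which is what makes the range manageable for the layers that follow.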
A key characteristic of neural networks is that the weights can be trained automatically to produce the desired behaviour. This behaviour is defined by training data consisting of example inputs, each paired with a label defining the ideal output, called the target. Training then consists of repeatedly applying training examples to the network, calculating the difference between the actual output and the target, and then propagating this error backwards through the network. Whenever a weight is encountered in this backward error propagation, it is adjusted by a very small amount so as to reduce the error. After repeated cycles through the training data, the weights converge towards values which minimise the error, and hence maximise performance. At this point, the network is ready for use.
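The training cycle described above can be sketched for a single sigmoid neuron. This is an assumed setup, not the author's specific network: the task (learning logical OR), the squared-error measure, the learning rate and the number of training cycles are all illustrative choices.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Illustrative training data: input vectors with their targets (logical OR)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

weights = [0.1, -0.1]
bias = 0.0
lr = 0.5  # size of the "very small" adjustment applied to each weight

for epoch in range(2000):  # repeated cycles through the training data
    for inputs, target in data:
        out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        error = out - target  # difference between actual output and target
        # Propagate the error back through the squashing function (chain rule)
        grad = error * out * (1.0 - out)
        # Adjust each weight slightly in the direction that reduces the error
        weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
        bias -= lr * grad

for inputs, target in data:
    out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
    print(inputs, "->", round(out, 2), "(target", target, ")")
```

After enough cycles, the outputs sit close to their targets: the weights have drifted, one small step at a time, to values that make the error small.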
Each layer in a neural network can have many neurons, and there can be many layers. However, no matter how complex the network is, it can always be trained by the same error backpropagation algorithm.
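To illustrate that the same algorithm scales to deeper networks, here is a hedged sketch of a network with one hidden layer trained on XOR, a task a single neuron cannot learn. The layer sizes, learning rate and random initialisation are all assumptions for the example:

```python
import math, random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

random.seed(0)
n_in, n_hid = 2, 3  # illustrative sizes, not from the text
# Hidden layer and one output neuron; each row includes a bias weight
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]  # XOR
lr = 0.5

def forward(x):
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x + [1.0], w))) for w in w_hid]
    y = sigmoid(sum(hi * wi for hi, wi in zip(h + [1.0], w_out)))
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Output-layer error, passed back through the squashing function
        d_out = (y - t) * y * (1.0 - y)
        # Each hidden neuron receives its share of the propagated error
        d_hid = [d_out * w_out[j] * h[j] * (1.0 - h[j]) for j in range(n_hid)]
        # Every weight encountered on the backward pass is nudged slightly
        for j in range(n_hid):
            w_out[j] -= lr * d_out * h[j]
        w_out[n_hid] -= lr * d_out
        for j in range(n_hid):
            for i in range(n_in):
                w_hid[j][i] -= lr * d_hid[j] * x[i]
            w_hid[j][n_in] -= lr * d_hid[j]

after = total_error()
print(f"squared error: {before:.3f} -> {after:.3f}")
```

Note that the backward pass for the hidden layer reuses exactly the same recipe as the output layer: take the error arriving from above, pass it back through the squashing function, and adjust each weight a little. That uniformity is why one algorithm suffices regardless of depth.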