Training runs in a for loop that counts epochs until training successfully converges; if convergence fails, the loop stops when the maximum number of epochs is reached. The training procedure specified in the problem definition is used, e.g. incremental or batch mode training.
In incremental learning, the network error is computed and the weights are updated after the presentation of each input pattern. In batch learning, by contrast, the network state is updated only after the errors have been collected for all input patterns, i.e. after one full epoch of presenting the inputs to the network.
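The difference between the two modes can be sketched with a simple linear neuron trained by gradient descent on the mean squared error; the function names and learning-rate handling here are illustrative assumptions, not part of the original text.

```python
import numpy as np

def incremental_epoch(w, X, y, lr):
    """One epoch of incremental (online) training:
    the weights are updated immediately after each pattern."""
    for x_i, y_i in zip(X, y):
        err = y_i - x_i @ w       # error for this single pattern
        w = w + lr * err * x_i    # immediate weight update
    return w

def batch_epoch(w, X, y, lr):
    """One epoch of batch training: errors are collected for all
    patterns first, then a single weight update is applied."""
    errs = y - X @ w                     # errors for every pattern
    w = w + lr * (X.T @ errs) / len(X)   # one accumulated update
    return w
```

Either function performs exactly one epoch; the outer training loop decides how many epochs to run.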
The loop therefore ends either when the network has learned the problem at hand or when it has failed to do so. For the former case, a convergence check is performed at the end of each epoch.
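A minimal sketch of the epoch loop with both ending conditions might look as follows, here using batch updates on a linear neuron; the tolerance parameter `tol` and the returned tuple are assumptions for illustration.

```python
import numpy as np

def train(X, y, lr=0.1, tol=1e-4, max_epochs=1000):
    """Epoch loop: stop early once the error falls below `tol`
    (convergence), otherwise give up at `max_epochs`."""
    w = np.zeros(X.shape[1])
    for epoch in range(1, max_epochs + 1):
        errs = y - X @ w                    # batch mode: collect all errors
        w = w + lr * (X.T @ errs) / len(X)  # one update per epoch
        mse = np.mean((y - X @ w) ** 2)     # convergence check, end of epoch
        if mse < tol:
            return w, epoch, True           # converged
    return w, max_epochs, False             # failed to converge
```

The boolean in the return value distinguishes the two exit paths described above: successful convergence versus hitting the epoch limit.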