# Strongly improved stability and faster convergence of temporal sequence learning by utilising input correlations only

## Bernd Porr & Florentin Wörgötter

## Neural Computation, MIT Press

PDF

Currently, all important low-level, unsupervised network learning
algorithms follow the paradigm of Hebb, where input and output
activity are correlated to change the connection strength of a
synapse. As a consequence, however, classical Hebbian learning always
carries a potentially destabilising autocorrelation term, because
every input is reflected, in weighted form, in the neuron's output.
This self-correlation can lead to positive feedback, where increasing
weights increase the output and vice versa, which may result in
divergence. This can be avoided by strategies such as weight
normalisation or weight saturation, which, however, can cause problems
of their own. Consequently, in most cases high learning rates cannot
be used for Hebbian learning, leading to relatively slow convergence.
Here we introduce a novel correlation-based learning rule which is
related to our ISO-learning rule, but replaces the derivative of the
*output* in the learning rule with the derivative of the reflex
*input*. Hence the new rule utilises input correlations only,
effectively implementing strict heterosynaptic learning. This looks
like a minor modification, but it leads to dramatically improved
properties. Eliminating the output from the learning rule removes the
unwanted, destabilising autocorrelation term, allowing us to use high
learning rates. As a consequence, we can show mathematically that the
theoretical optimum of *one-shot learning* can be reached under ideal
conditions with the new rule. This result is then tested against four
different experimental setups, and we show that in all of them very
few learning experiences (sometimes only one) are needed to achieve
the learning goal. As a result, the new learning rule is up to 100
times faster and in general more stable than ISO-learning.

## ICO C++ class

This is a C++ class that implements ICO learning. Please
have a look at the README file in the tar-ball.

icolearning-0.5.tar.gz

## BibTex

@Article{Porr2006ICO,
  author  = {Porr, Bernd and {W\"{o}rg\"{o}tter}, Florentin},
  title   = {Strongly improved stability and faster convergence of
             temporal sequence learning by utilising input correlations only},
  journal = {Neural Computation},
  year    = {2006},
  volume  = {18},
  number  = {6},
  pages   = {1380--1412}
}