Abstract
Temporally varying classification by a dynamic classifier network is introduced. The dynamic classifier network consists of several independent nonlinear classifiers in parallel. The subclassifiers adapt to the measurements with a variety of adaptation rates, and the output of the classifier network can be calculated as a weighted sum of the subclassifier outputs. Two methods to optimize the weighting are given; however, even a simple weighting function gives reasonable results. The network might be considered a temporal associative memory.
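The dissertation's two weight-optimization methods are not reproduced in the abstract. The following is only a minimal sketch of the idea described above, assuming logistic subclassifiers trained online with different learning rates and a fixed, simple mixing weight; all class and variable names are hypothetical.

```python
import numpy as np

class DynamicClassifierNetwork:
    """Parallel nonlinear (logistic) subclassifiers that adapt to a data
    stream at different rates; the network output is a weighted sum of
    the subclassifier outputs.  (Illustrative sketch only, not the
    thesis's exact model.)"""

    def __init__(self, n_features, adaptation_rates=(0.001, 0.01, 0.1)):
        self.rates = adaptation_rates
        # one weight vector (plus bias) per subclassifier
        self.w = [np.zeros(n_features + 1) for _ in adaptation_rates]
        # simple fixed mixing weights; the thesis also gives two
        # methods to optimize this weighting
        self.mix = np.full(len(adaptation_rates), 1.0 / len(adaptation_rates))

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def _sub_outputs(self, x):
        xb = np.append(x, 1.0)                     # append bias term
        return np.array([self._sigmoid(w @ xb) for w in self.w])

    def predict(self, x):
        # weighted sum of the subclassifier outputs
        return float(self.mix @ self._sub_outputs(x))

    def update(self, x, y):
        # one online gradient step per subclassifier, each with its own rate
        xb = np.append(x, 1.0)
        for i, eta in enumerate(self.rates):
            p = self._sigmoid(self.w[i] @ xb)
            self.w[i] += eta * (y - p) * xb        # logistic-loss gradient step

# usage: feed a (slowly drifting) stream of labelled measurements
rng = np.random.default_rng(0)
net = DynamicClassifierNetwork(n_features=2)
for t in range(1000):
    x = rng.normal(size=2)
    y = float(x[0] + 0.1 * np.sin(t / 50.0) * x[1] > 0)   # drifting decision rule
    net.update(x, y)
print("p(class 1 | x=[1, 0]) =", round(net.predict(np.array([1.0, 0.0])), 3))
```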
Because of the nonlinearities and the ensuing chaos, the behavior of the network can be very complicated. Algorithms to calculate the fractal and correlation dimensions are also given; with these dimensions one can estimate how complicated the behavior of a system is and how many parameters are needed to describe it.
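The abstract does not spell out the dimension algorithms themselves. As an illustration of what a correlation dimension measures, the sketch below uses a standard Grassberger-Procaccia style estimate (not necessarily the thesis's algorithm), applied to the Hénon map as an assumed example.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Estimate the correlation dimension from the correlation sum C(r):
    the slope of log C(r) versus log r.  `points` has shape (N, d).
    (Illustrative sketch, not the thesis's algorithm.)"""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # all pairwise distances (O(N^2) memory; fine for a small sketch)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pair_dist = dist[np.triu_indices(n, k=1)]
    # correlation sum: fraction of point pairs closer than r
    c = np.array([(pair_dist < r).mean() for r in radii])
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

# example: points on the Hénon attractor (dimension roughly 1.2)
a, b = 1.4, 0.3
x, y = 0.1, 0.1
orbit = []
for i in range(2500):
    x, y = 1 - a * x * x + y, b * x
    if i > 500:                       # discard the transient
        orbit.append((x, y))
radii = np.logspace(-2.5, -0.5, 12)
print("correlation dimension =", round(correlation_dimension(orbit, radii), 2))
```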
An extension of the geodesic distance transform, called the distance transform in curved space, is also presented. This transform can, for example, be used to model dynamic decision manifolds.
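The distance transform in curved space is the thesis's own construction, and its algorithm is not given in the abstract. As a loosely related sketch only, the following computes a geodesic-style distance map on a grid whose local metric (cost per step) varies from cell to cell, using Dijkstra's algorithm; the function name and the cost model are assumptions, not the thesis's method.

```python
import heapq
import numpy as np

def curved_space_distance_transform(cost, seeds):
    """Distance transform on a grid whose local metric varies from cell
    to cell: a step into a cell is weighted by that cell's `cost`.
    With cost == 1 everywhere this reduces to an ordinary distance
    transform.  Computed with Dijkstra's algorithm over 8-connected
    neighbours.  (Sketch only; not the thesis's construction.)"""
    cost = np.asarray(cost, dtype=float)
    dist = np.full(cost.shape, np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale heap entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + np.hypot(dr, dc) * cost[nr, nc]   # metric-weighted step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

# usage: a high-cost band 'stretches' space and bends the distance field
cost = np.ones((60, 60))
cost[25:35, :] = 5.0
dmap = curved_space_distance_transform(cost, seeds=[(5, 5)])
print("distance to opposite corner =", round(dmap[59, 59], 1))
```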
Some new properties of fractals are also presented. These properties can be utilized efficiently when defining the Lyapunov exponents and the basins of attraction for maps. The methods presented have several application areas.
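The thesis's fractal-based definitions of the Lyapunov exponents are not reproduced in the abstract. The sketch below shows only the standard numerical estimate for a one-dimensional map, the orbit average of log |f'(x)|, using the logistic map as an assumed example.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.1, n_transient=1000, n_iter=10000):
    """Numerical Lyapunov exponent of the logistic map x -> r*x*(1-x):
    the orbit average of log|f'(x)| = log|r*(1 - 2x)|.
    (Standard textbook estimate, shown here only for illustration.)"""
    x = x0
    for _ in range(n_transient):          # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

# r = 3.2: stable periodic orbit, negative exponent; r = 4.0: chaos, exponent near ln 2
print("lambda(r=3.2) =", round(lyapunov_logistic(3.2), 3))
print("lambda(r=4.0) =", round(lyapunov_logistic(4.0), 3))
```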
| Original language | English |
| --- | --- |
| Qualification | Doctor Degree |
| Awarding Institution | |
| Award date | 20 Dec 1991 |
| Place of Publication | Espoo |
| Publisher | |
| Print ISBNs | 951-38-4061-1 |
| Publication status | Published - 1991 |
| MoE publication type | G5 Doctoral dissertation (article) |
Keywords
- neural networks
- classification
- nonlinear network analysis