A Comparison of Different Neural Methods for Solving Iterative Roots


Contact
lkindermann [ at ] awi-bremerhaven.de

Abstract

Finding iterative roots is the inverse problem of iteration. Iteration itself plays a major role in numerous theories and applications. So far it is hardly realized how many problems can be related to its counterpart. This may be due to the difficulty of the mathematics involved: there are no standard methods available for computing these fractional iterations. Previously we have shown how neural networks can be utilized to compute iterative roots by adding a weight-coupling mechanism to backpropagation learning. Here we show that an easier implementation of this functionality can be achieved by a simple weight-copy function. Introducing second-order methods such as quasi-Newton learning, on the other hand, can significantly reduce training times and improve the reliability of the method. It also overcomes some limitations on the complexity of the problems the method can be applied to.
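
The core idea of the abstract — stacking the same network twice so that the composition matches a target function — can be illustrated with a minimal sketch. This is not the paper's implementation: instead of a full multilayer perceptron with a weight-copy mechanism, it uses the simplest possible "network", an affine map g(x) = u·x + v, trained by plain gradient descent so that g(g(x)) ≈ f(x) = 4x + 3, whose exact iterative root is g(x) = 2x + 1. The names u, v, lr are ours, not the paper's.

```python
import numpy as np

# Sketch: find an iterative root g of f, i.e. g(g(x)) = f(x), by
# applying the SAME parameters twice (the weight-sharing/weight-copy
# idea from the abstract) and training on the composed output.
# Target: f(x) = 4x + 3, exact half-iterate g(x) = 2x + 1.

def train_iterative_root(a=4.0, b=3.0, lr=0.01, steps=5000):
    x = np.linspace(-1.0, 1.0, 50)
    f = a * x + b
    u, v = 1.0, 0.0                       # initial "weights" of g
    for _ in range(steps):
        # forward pass: the same affine map applied twice
        y = u * (u * x + v) + v
        r = y - f                         # residual of g(g(x)) vs f(x)
        # analytic gradients of the mean squared error
        du = np.mean(2 * r * (2 * u * x + v))
        dv = np.mean(2 * r * (u + 1))
        u -= lr * du
        v -= lr * dv
    return u, v

u, v = train_iterative_root()
print(u, v)  # should approach u = 2, v = 1
```

In the paper's setting, g is a neural network and the gradient of the composed loss couples the two (shared) copies of its weights; the weight-copy function mentioned in the abstract keeps both halves of the stacked network identical during training.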



Item Type
Conference (Conference paper)
Peer review
Not peer-reviewed
Publication Status
Published
Event Details
Proceedings of the Seventh International Conference on Neural Information Processing (ICONIP'2000), Taejon.
Eprint ID
10390
Cite as
Kindermann, L., Lewandowski, A. and Protzel, P. (2000): A Comparison of Different Neural Methods for Solving Iterative Roots, Proceedings of the Seventh International Conference on Neural Information Processing (ICONIP'2000), Taejon.


