Abstract
Local minima in the error surfaces of feed-forward neural networks are significant because they may entrap gradient-based training algorithms. Recent results have identified conditions under which local minima do not occur. The present paper considers three distinct definitions of local minimum, concluding that a new definition, called regional minimum, corresponds most closely to intuition. Using this definition, we analyse weight configurations in which a hidden node is ignored or redundant and show that these are not local minima. The practical implications of this result for gradient-based learning are discussed.
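To make the "ignored hidden node" case concrete, the following sketch (not from the paper; the toy data and network shape are assumptions for illustration) uses a one-hidden-node network y = v·tanh(w·x). Setting the outgoing weight v = 0 makes the hidden node ignored: the output no longer depends on w. A numerical check shows the loss gradient with respect to v is nonzero there, so the configuration is not a stationary point and gradient descent moves away from it, consistent with the paper's claim that such configurations are not local minima.

```python
import math

# Toy data (illustrative assumption, not taken from the paper)
xs = [1.0, 2.0]   # inputs
ts = [1.0, 1.0]   # targets

def loss(w, v):
    """Sum-of-squares error for the one-hidden-node net y = v * tanh(w * x)."""
    return sum((v * math.tanh(w * x) - t) ** 2 for x, t in zip(xs, ts))

def grad_v(w, v, eps=1e-6):
    """Central finite-difference estimate of dL/dv."""
    return (loss(w, v + eps) - loss(w, v - eps)) / (2 * eps)

# At v = 0 the hidden node is "ignored" (output is constant in w),
# yet the gradient w.r.t. v is nonzero, so this is not a local minimum.
g = grad_v(w=1.0, v=0.0)
print(g)
```

Analytically, dL/dv at v = 0 equals −2·(t₁·tanh(w·x₁) + t₂·tanh(w·x₂)) ≈ −3.45 for this toy data, so a gradient step immediately reactivates the node.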
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Seventh Australian Conference on Neural Networks : ACNN'96, Canberra, 10-12 April 1996 |
| Editors | P Bartlett, A Burkitt, R Williamson |
| Place of Publication | Canberra |
| Publisher | ANU |
| Pages | 173-178 |
| Number of pages | 6 |
| ISBN (Print) | 0731524292 |
| Publication status | Published - 1996 |
| Event | Australian Conference on Neural Networks (7th : 1996), Canberra, Australia, 10–12 Apr 1996 |