Representational trajectories in connectionist learning

Andy Clark*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to 'miss' the deep organizing features of the domain. Various solutions are examined, including task specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND 'informed' undersampling in early learning.
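The incremental strategy discussed above — restricting early training to a statistically undersampled portion of the domain, then widening exposure — can be illustrated with a toy staging schedule. This is a minimal sketch under assumed details: the function name, the use of string length as a complexity proxy, and the toy data are all illustrative, not from the paper.

```python
# Illustrative sketch of incremental learning as undersampling:
# early stages see only a restricted (simpler) slice of the domain,
# later stages see progressively more of it.

def incremental_schedule(examples, complexity, thresholds):
    """Yield growing training sets: stage k contains every example
    whose complexity is at most the k-th threshold."""
    for t in sorted(thresholds):
        yield [x for x in examples if complexity(x) <= t]

# Toy domain: strings, with length as a crude complexity measure.
data = ["ab", "abab", "ababab", "a", "abababab"]
stages = list(incremental_schedule(data, len, [2, 4, 8]))

# Stage 0 undersamples the domain (only the shortest strings);
# whether this undersampling coincides with a genuine task
# decomposition is, as the abstract notes, a matter of luck.
```

The point of the sketch is structural: nothing in the schedule itself guarantees that the statistically simple cases isolate the domain's deep organizing features, which is exactly the limitation the abstract identifies.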

Original language: English
Pages (from-to): 317-332
Number of pages: 16
Journal: Minds and Machines
Volume: 4
Issue number: 3
Publication status: Published - Aug 1994
Externally published: Yes

Keywords

  • catastrophic forgetting
  • connectionism
  • development
  • learning
  • recurrent networks
  • unlearning

