Kevin Gurney has on-line a preliminary draft of his book, An Introduction to Neural Networks, at
http://www.shef.ac.uk/psychology/gurney/notes/index.html
The book is now in print and is one of the better general-purpose introductory textbooks on NNs. Here is the table of contents from the on-line version:

 1. Computers and Symbols versus Nets and Neurons
 2. TLUs and vectors - simple learning rules
 3. The delta rule
 4. Multilayer nets and backpropagation
 5. Associative memories - the Hopfield net
 6. Hopfield nets (contd.)
 7. Kohonen nets
 8. Alternative node types
 9. Cubic nodes (contd.) and Reward Penalty training
10. Drawing things together - some perspectives

Another on-line book by Ben Kröse and Patrick van der Smagt, also called An Introduction to Neural Networks, can be found at
ftp://ftp.wins.uva.nl/pub/computer-systems/aut-sys/reports/neuro-intro/neuro-intro.ps.gz
or
http://www.robotic.dlr.de/Smagt/books/neuro-intro.ps.gz
or
http://www.supelec-rennes.fr/acth/net/neuro-intro.ps.gz
Here is the table of contents:

 1. Introduction
 2. Fundamentals
 3. Perceptron and Adaline
 4. Back-Propagation
 5. Recurrent Networks
 6. Self-Organising Networks
 7. Reinforcement Learning
 8. Robot Control
 9. Vision
10. General Purpose Hardware
11. Dedicated Neuro-Hardware

PDP++ is a neural-network simulation system written in C++. It was developed as an advanced successor to the original PDP software that accompanied McClelland and Rumelhart's "Explorations in Parallel Distributed Processing Handbook" (1987), and it is the software featured in Randall C. O'Reilly and Yuko Munakata's "Computational Explorations in Cognitive Neuroscience" (2000). It is intended to be usable by novices while giving researchers the flexibility and power needed for cognitive neuroscience studies. Supported algorithms include:

 - feedforward and recurrent error backpropagation, including continuous,
   real-time variants such as Almeida-Pineda;
 - constraint satisfaction algorithms, including Boltzmann machines,
   Hopfield networks, and mean-field networks;
 - self-organizing learning, including self-organizing maps (SOM) and
   Hebbian learning;
 - mixtures-of-experts models;
 - the Leabra algorithm, which combines error-driven and Hebbian learning
   with k-Winners-Take-All inhibitory competition.
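
The FAQ only names these algorithms, so a rough illustration may help readers who have not met k-Winners-Take-All competition or mixed Hebbian/error-driven learning before. The Python sketch below is not PDP++ code: the function names, the logistic squashing, and the hebb_frac mixing parameter are assumptions chosen for illustration, and the real Leabra algorithm uses considerably more elaborate equations.

import numpy as np

def kwta(net_input, k):
    """Keep the k most strongly driven units active, silence the rest."""
    act = np.zeros_like(net_input)
    winners = np.argsort(net_input)[-k:]          # indices of the k largest inputs
    act[winners] = 1.0 / (1.0 + np.exp(-net_input[winners]))  # squashed activation
    return act

def update_weights(w, pre, post, target, lr=0.1, hebb_frac=0.01):
    """Mix a Hebbian term (pre/post co-activity) with an error-driven term."""
    hebbian = np.outer(post, pre)                  # simple Hebbian correlation
    error_driven = np.outer(target - post, pre)    # delta-rule-style correction
    return w + lr * (hebb_frac * hebbian + (1.0 - hebb_frac) * error_driven)

# Tiny usage example: 5 inputs, 4 hidden units, 2 of which are allowed to win.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 5))
pre = rng.random(5)
post = kwta(w @ pre, k=2)
target = np.array([0.0, 1.0, 0.0, 1.0])
w = update_weights(w, pre, post, target)

The point of the sketch is only the structure: inhibitory competition limits how many units can be active at once, and the weight change blends a correlational (Hebbian) term with a supervised correction toward a target pattern.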