Copyright 1997, 1998, 1999, 2000, 2001, 2002 by Warren S. Sarle, Cary, NC, USA. Answers provided by other authors as cited below are copyrighted by those authors, who by submitting the answers for the FAQ give permission for the answer to be reproduced as part of the FAQ in any of the ways specified in part 1 of the FAQ.

This is part 2 (of 7) of a monthly posting to the Usenet newsgroup comp.ai.neural-nets. See part 1 of this posting for full information on what it is all about.

========== Questions ==========
********************************

Part 1: Introduction

Part 2: Learning

   What are combination, activation, error, and objective functions?
      Combination functions
      Activation functions
      Error functions
      Objective functions
   What are batch, incremental, on-line, off-line, deterministic,
      stochastic, adaptive, instantaneous, pattern, epoch,
      constructive, and sequential learning?
      Batch vs. Incremental Learning (also Instantaneous, Pattern, and Epoch)
      On-line vs. Off-line Learning
      Deterministic, Stochastic, and Adaptive Learning
      Constructive Learning (Growing networks)
      Sequential Learning, Catastrophic Interference, and the
         Stability-Plasticity Dilemma
   What is backprop?
   What learning rate should be used for backprop?
   What are conjugate gradients, Levenberg-Marquardt, etc.?
   How does ill-conditioning affect NN training?
   How should categories be encoded?
   Why not code binary inputs as 0 and 1?
   Why use a bias/threshold?
   Why use activation functions?
   How to avoid overflow in the logistic function?
   What is a softmax activation function?
   What is the curse of dimensionality?
   How do MLPs compare with RBFs?
      Hybrid training and the curse of dimensionality
      Additive inputs
      Redundant inputs
      Irrelevant inputs
   What are OLS and subset/stepwise regression?
   Should I normalize/standardize/rescale the data?
      Should I standardize the input variables?
      Should I standardize the target variables?
      Should I standardize the variables for unsupervised learning?
      Should I standardize the input cases?
   Should I nonlinearly transform the data?
   How to measure importance of inputs?
   What is ART?
   What is PNN?
   What is GRNN?
   What does unsupervised learning learn?
   Help! My NN won't learn! What should I do?

Part 3: Generalization
Part 4: Books, data, etc.
Part 5: Free software
Part 6: Commercial software
Part 7: Hardware and miscellaneous
Send corrections/additions to the FAQ Maintainer: saswss@unx.sas.com (Warren Sarle)
PDP++ is a neural-network simulation system written in C++, developed as an advanced successor to the original PDP software from McClelland and Rumelhart's "Explorations in Parallel Distributed Processing" handbook (1987). It is designed to be usable by novices while providing the flexibility and power needed for research in cognitive neuroscience, and it is featured in Randall C. O'Reilly and Yuko Munakata's "Computational Explorations in Cognitive Neuroscience" (2000). PDP++ supports a wide range of algorithms: feedforward and recurrent error backpropagation, including continuous, real-time variants such as Almeida-Pineda; constraint-satisfaction algorithms such as Boltzmann machines, Hopfield networks, and mean-field networks; self-organizing learning, including self-organizing maps (SOMs) and Hebbian learning; mixtures-of-experts models; and the Leabra algorithm, which combines error-driven and Hebbian learning with k-Winners-Take-All (kWTA) inhibitory competition.
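To give a rough feel for the style of computation Leabra combines, here is a minimal sketch in Python/NumPy. This is not PDP++ code: the function names, the binary activations, and the plain outer-product Hebbian rule are simplifications invented for illustration, and the error-driven component is omitted entirely.

    import numpy as np

    def kwta(net_input, k):
        """Toy k-Winners-Take-All: only the k units with the largest
        net input become active; all other units are silenced."""
        act = np.zeros_like(net_input)
        winners = np.argsort(net_input)[-k:]  # indices of the k strongest units
        act[winners] = 1.0
        return act

    def hebbian_update(weights, pre, post, lrate=0.01):
        """Simple Hebbian rule: strengthen weights between co-active units."""
        return weights + lrate * np.outer(post, pre)

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.1, size=(5, 10))  # 10 inputs -> 5 hidden units
    x = rng.random(10)                            # one input pattern
    net = weights @ x                             # net input to each hidden unit
    act = kwta(net, k=2)                          # inhibitory competition: 2 winners
    weights = hebbian_update(weights, x, act)     # only winning rows change

In Leabra itself the kWTA competition is enforced through inhibition rather than a hard cutoff, and the Hebbian term is mixed with an error-driven term, but the sketch shows why the combination is attractive: competition keeps the representation sparse, while Hebbian learning ties the winning units to the inputs that drove them.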