comp.ai.neural-nets FAQ, Part 1 of 7: Introduction

Copyright 1997, 1998, 1999, 2000, 2001, 2002 by Warren S. Sarle, Cary, NC,
USA. 

  ---------------------------------------------------------------
    Additions, corrections, or improvements are always welcome.
    If you are willing to contribute any information,
    please email me; if it is relevant, I will incorporate it.

    The monthly posting departs around the 28th of every month.
  ---------------------------------------------------------------

This is the first of seven parts of a monthly posting to the Usenet
newsgroup comp.ai.neural-nets (as well as comp.answers and news.answers,
where it should be findable at any time). Its purpose is to provide basic
information for individuals who are new to the field of neural networks or
who are just beginning to read this group. It will help to avoid lengthy
discussion of questions that often arise for beginners. 

   SO, PLEASE, SEARCH THIS POSTING FIRST IF YOU HAVE A QUESTION
                           and
   DON'T POST ANSWERS TO FAQs: POINT THE ASKER TO THIS POSTING

The latest version of the FAQ is available as a hypertext document, readable
by any WWW (World Wide Web) browser such as Netscape, under the URL: 
ftp://ftp.sas.com/pub/neural/FAQ.html.

If you are reading the version of the FAQ posted in comp.ai.neural-nets, be
sure to view it with a monospace font such as Courier. If you view it with a
proportional font, tables and formulas will be mangled. Some newsreaders or
WWW news services garble plain text. If you have trouble viewing plain text,
try the HTML version described above. 

All seven parts of the FAQ can be downloaded from either of the following
URLs:

   ftp://ftp.sas.com/pub/neural/FAQ.html.zip
   ftp://ftp.sas.com/pub/neural/FAQ.txt.zip
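
If you prefer to script the download, here is a minimal Python sketch
using the standard ftplib module, assuming ftp.sas.com still answers
anonymous FTP (these old hosts may no longer exist):

   from ftplib import FTP

   ftp = FTP("ftp.sas.com")        # connect to the FAQ archive host
   ftp.login()                     # log in as the anonymous user
   ftp.cwd("/pub/neural")          # directory containing the FAQ files
   with open("FAQ.txt.zip", "wb") as f:
       ftp.retrbinary("RETR FAQ.txt.zip", f.write)
   ftp.quit()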

These postings are archived in the periodic posting archive on host
rtfm.mit.edu (and on some other hosts as well). Look in the anonymous ftp
directory "/pub/usenet/news.answers/ai-faq/neural-nets" under the file names
"part1", "part2", ... "part7". If you do not have anonymous ftp access, you
can access the archives by mail server as well. Send an E-mail message to
mail-server@rtfm.mit.edu with "help" and "index" in the body on separate
lines for more information. 
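
For example, the request message might look like this, with the two
commands on separate lines in the message body:

   To: mail-server@rtfm.mit.edu

   help
   index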

For those of you who read this FAQ anywhere other than in Usenet: To read
comp.ai.neural-nets (or post articles to it) you need Usenet News access.
Try the commands 'xrn', 'rn', 'nn', or 'trn' on your Unix machine, 'news'
on your VMS machine, or ask a local guru. WWW browsers are often set up for
Usenet access, too--try the URL news:comp.ai.neural-nets. 
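
If your site provides an NNTP server, a minimal Python sketch along the
following lines can list recent articles; "news.example.com" is a
placeholder for your local news host, and the standard nntplib module
was removed in Python 3.13, so this needs Python 3.12 or earlier:

   from nntplib import NNTP

   srv = NNTP("news.example.com")   # placeholder: your local news host
   resp, count, first, last, name = srv.group("comp.ai.neural-nets")
   resp, overviews = srv.over((last - 9, last))   # the ten newest articles
   for number, over in overviews:
       print(number, over["subject"])             # show each subject line
   srv.quit()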

The FAQ posting departs to comp.ai.neural-nets around the 28th of every
month. It is also sent to the groups comp.answers and news.answers, where it
should be available at any time (ask your news manager). The FAQ posting,
like any other posting, may take a few days to find its way over Usenet to
your site. Such delays are especially common outside of North America. 

All changes to the FAQ from the previous month are shown in another monthly
posting having the subject `changes to "comp.ai.neural-nets FAQ" -- monthly
posting', which immediately follows the FAQ posting. The `changes' post
contains the full text of all changes and can also be found at
ftp://ftp.sas.com/pub/neural/changes.txt . There is also a weekly post with
the subject "comp.ai.neural-nets FAQ: weekly reminder" that briefly
describes any major changes to the FAQ. 

This FAQ is not meant to discuss any topic exhaustively. It is neither a
tutorial nor a textbook, but should be viewed as a supplement to the many
excellent books and online resources described in Part 4: Books, data, etc.

Disclaimer: 

   This posting is provided 'as is'. No warranty whatsoever is expressed
   or implied; in particular, there is no warranty that the information
   contained herein is correct or useful in any way, although both are
   intended. 

To find the answer to question "x", search for the string "Subject: x".

========== Questions ========== 
********************************

Part 1: Introduction

   What is this newsgroup for? How shall it be used?
   Where is comp.ai.neural-nets archived?
   What if my question is not answered in the FAQ?
   May I copy this FAQ?
   What is a neural network (NN)?
   Where can I find a simple introduction to NNs?
   Are there any online books about NNs?
   What can you do with an NN and what not?
   Who is concerned with NNs?
   How many kinds of NNs exist?
   How many kinds of Kohonen networks exist? (And what is k-means?)
      VQ: Vector Quantization and k-means
      SOM: Self-Organizing Map
      LVQ: Learning Vector Quantization
      Other Kohonen networks and references
   How are layers counted?
   What are cases and variables?
   What are the population, sample, training set, design set, validation
   set, and test set?
   How are NNs related to statistical methods?

Part 2: Learning

   What are combination, activation, error, and objective functions?
   What are batch, incremental, on-line, off-line, deterministic,
   stochastic, adaptive, instantaneous, pattern, epoch, constructive, and
   sequential learning?
   What is backprop?
   What learning rate should be used for backprop?
   What are conjugate gradients, Levenberg-Marquardt, etc.?
   How does ill-conditioning affect NN training?
   How should categories be encoded?
   Why not code binary inputs as 0 and 1?
   Why use a bias/threshold?
   Why use activation functions?
   How to avoid overflow in the logistic function?
   What is a softmax activation function?
   What is the curse of dimensionality?
   How do MLPs compare with RBFs?
   What are OLS and subset/stepwise regression?
   Should I normalize/standardize/rescale the data?
   Should I nonlinearly transform the data?
   How to measure importance of inputs?
   What is ART?
   What is PNN?
   What is GRNN?
   What does unsupervised learning learn?
   Help! My NN won't learn! What should I do?

Part 3: Generalization

   How is generalization possible?
   How does noise affect generalization?
   What is overfitting and how can I avoid it?
   What is jitter? (Training with noise)
   What is early stopping?
   What is weight decay?
   What is Bayesian learning?
   How to combine networks?
   How many hidden layers should I use?
   How many hidden units should I use?
   How can generalization error be estimated?
   What are cross-validation and bootstrapping?
   How to compute prediction and confidence intervals (error bars)?

Part 4: Books, data, etc.

   Books and articles about Neural Networks?
   Journals and magazines about Neural Networks?
   Conferences and Workshops on Neural Networks?
   Neural Network Associations?
   Mailing lists, BBS, CD-ROM?
   How to benchmark learning methods?
   Databases for experimentation with NNs?

Part 5: Free software

   Source code on the web?
   Freeware and shareware packages for NN simulation?

Part 6: Commercial software

   Commercial software packages for NN simulation?

Part 7: Hardware and miscellaneous

   Neural Network hardware?
   What are some applications of NNs?
      General
      Agriculture
      Chemistry
      Face recognition
      Finance and economics
      Games, sports, gambling
      Industry
      Materials science
      Medicine
      Music
      Robotics
      Weather forecasting
      Weird
   What to do with missing/incomplete data?
   How to forecast time series (temporal sequences)?
   How to learn an inverse of a function?
   How to get invariant recognition of images under translation, rotation,
   etc.?
   How to recognize handwritten characters?
   What about pulsed or spiking NNs?
   What about Genetic Algorithms and Evolutionary Computation?
   What about Fuzzy Logic?
   Unanswered FAQs
   Other NN links?

Send corrections/additions to the FAQ Maintainer:
saswss@unx.sas.com (Warren Sarle)