
comp.ai.neural-nets FAQ, Part 1 of 7: Introduction
Section - Who is concerned with NNs?



Top Document: comp.ai.neural-nets FAQ, Part 1 of 7: Introduction
Previous Document: What can you do with an NN and what not?
Next Document: How many kinds of NNs exist?

Neural networks are of interest to many very different kinds of people: 

 o Computer scientists want to find out about the properties of non-symbolic
   information processing with neural nets and about learning systems in
   general. 
 o Statisticians use neural nets as flexible, nonlinear regression and
   classification models (see the sketch following this list). 
 o Engineers of many kinds exploit the capabilities of neural networks in
   many areas, such as signal processing and automatic control. 
 o Cognitive scientists view neural networks as a possible framework for
   describing models of thinking and consciousness (high-level brain
   function). 
 o Neurophysiologists use neural networks to describe and explore
   medium-level brain function (e.g. memory, the sensory system, motor
   control). 
 o Physicists use neural networks to model phenomena in statistical
   mechanics and for many other tasks. 
 o Biologists use neural networks to interpret nucleotide sequences. 
 o Philosophers and some other people may also be interested in neural
   networks for various reasons. 
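
As a rough illustration of the statisticians' point of view (this sketch is
not part of the original FAQ), the following Python/NumPy code fits a small
one-hidden-layer network as a flexible nonlinear regression model; the toy
data, layer size, learning rate, and iteration count are arbitrary choices
made for the example:

   # Minimal sketch: a one-hidden-layer net as a nonlinear regression model.
   import numpy as np

   rng = np.random.default_rng(0)

   # Toy data: noisy samples of a nonlinear function.
   X = np.linspace(-3, 3, 200).reshape(-1, 1)
   y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

   # One hidden layer of tanh units, linear output.
   n_hidden = 20
   W1 = 0.5 * rng.standard_normal((1, n_hidden)); b1 = np.zeros(n_hidden)
   W2 = 0.5 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)

   lr, n = 0.05, X.shape[0]
   for _ in range(5000):
       h = np.tanh(X @ W1 + b1)          # hidden activations
       y_hat = h @ W2 + b2               # network output
       err = y_hat - y                   # residuals

       # Gradients of the (half) mean squared error, by backpropagation.
       dW2 = h.T @ err / n;  db2 = err.mean(axis=0)
       dh = (err @ W2.T) * (1 - h**2)    # back through the tanh units
       dW1 = X.T @ dh / n;   db1 = dh.mean(axis=0)

       # Plain gradient-descent update.
       W2 -= lr * dW2;  b2 -= lr * db2
       W1 -= lr * dW1;  b1 -= lr * db1

   print("final training MSE:", float((err**2).mean()))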

For worldwide lists of groups doing research on NNs, see the Foundation for
Neural Networks (SNN) page at
http://www.mbfys.kun.nl/snn/pointers/groups.html and Neural Networks
Research on the IEEE Neural Network Council's homepage at
http://www.ieee.org/nnc.



Send corrections/additions to the FAQ Maintainer:
saswss@unx.sas.com (Warren Sarle)





Last Update March 27 2014 @ 02:11 PM