comp.ai.neural-nets FAQ, Part 1 of 7: Introduction
Section - What are cases and variables?


A vector of values presented at one time to all the input units of a neural
network is called a "case", "example", "pattern", "sample", etc. The term
"case" will be used in this FAQ because it is widely recognized,
unambiguous, and requires less typing than the other terms. A case may
include not only input values, but also target values and possibly other
information. 
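
For example, a single case for a supervised network might be represented
as in the following minimal Python sketch; the field names and values are
invented purely for illustration and are not prescribed by this FAQ:

    # One case: the input values presented at one time to all the
    # input units, plus the target value used during training.
    case = {"inputs": [0.2, 1.5, -0.7],   # one value per input unit
            "target": 1.0}                # desired output value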

A vector of values presented at different times to a single input unit is
often called an "input variable" or "feature". To a statistician, it is a
"predictor", "regressor", "covariate", "independent variable", "explanatory
variable", etc. A vector of target values associated with a given output
unit of the network during training will be called a "target variable" in
this FAQ. To a statistician, it is usually a "response" or "dependent
variable". 

A "data set" is a matrix containing one or (usually) more cases. In this
FAQ, it will be assumed that cases are rows of the matrix, while variables
are columns. 

Note that the often-used term "input vector" is ambiguous; it can mean
either an input case or an input variable. 
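
As a concrete illustration of these conventions, here is a minimal sketch
in Python with NumPy (the choice of NumPy, and all names and values below,
are assumptions made for illustration; the FAQ does not prescribe any
particular software):

    import numpy as np

    # A small data set: 4 cases (rows) by 3 variables (columns).
    # The first two columns are input variables; the last column
    # is the target variable.  All values are made up.
    data = np.array([[0.2, 1.5, 0.0],    # case 1
                     [0.7, 0.3, 1.0],    # case 2
                     [0.1, 2.2, 0.0],    # case 3
                     [0.9, 1.1, 1.0]])   # case 4

    inputs = data[:, :2]   # input variables (columns)
    target = data[:, 2]    # target variable (last column)

    # The two things "input vector" might mean:
    one_case     = inputs[2, :]  # a row: values presented at one
                                 # time to all input units
    one_variable = inputs[:, 0]  # a column: values presented at
                                 # different times to one input unit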



Send corrections/additions to the FAQ Maintainer:
saswss@unx.sas.com (Warren Sarle)




