comp.ai.neural-nets FAQ, Part 2 of 7: Learning
Section - What is GRNN?


GRNN or "General Regression Neural Network" is Donald Specht's term for
Nadaraya-Watson kernel regression, also reinvented in the NN literature by
Schi\oler and Hartmann. (Kernels are also called "Parzen windows".) You can
think of it as a normalized RBF network in which there is a hidden unit
centered at every training case. These RBF units are called "kernels" and
are usually probability density functions such as the Gaussian. The
hidden-to-output weights are just the target values, so the output is simply
a weighted average of the target values of training cases close to the given
input case. The only weights that need to be learned are the widths of the
RBF units. These widths (often a single width is used) are called "smoothing
parameters" or "bandwidths" and are usually chosen by cross-validation or by
more esoteric methods that are not well-known in the neural net literature;
gradient descent is not used. 
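
For concreteness, here is a minimal sketch of this prediction rule in
Python/NumPy. It is not Specht's implementation; the function name
grnn_predict, the single Gaussian bandwidth sigma, and the toy data are
illustrative assumptions:

   import numpy as np

   def grnn_predict(X_train, y_train, X_query, sigma):
       # One Gaussian kernel ("RBF unit") is centered at every training case.
       X_train = np.atleast_2d(np.asarray(X_train, dtype=float))
       X_query = np.atleast_2d(np.asarray(X_query, dtype=float))
       y_train = np.asarray(y_train, dtype=float)
       # Squared Euclidean distance from each query point to each training case.
       d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
       w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel weights
       # Output: kernel-weighted average of the training targets
       # (Nadaraya-Watson); the denominator normalizes the weights.
       return (w @ y_train) / w.sum(axis=1)

   # Toy usage: smooth a noisy sine curve.
   rng = np.random.default_rng(0)
   X = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))
   y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
   grid = np.linspace(0.0, 2.0 * np.pi, 5).reshape(-1, 1)
   print(grnn_predict(X, y, grid, sigma=0.3))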

GRNN is a universal approximator for smooth functions, so it should be able
to solve any smooth function-approximation problem given enough data. The
main drawback of GRNN is that, like kernel methods in general, it suffers
badly from the curse of dimensionality. GRNN cannot ignore irrelevant inputs
without major modifications to the basic algorithm. So GRNN is not likely to
be the top choice if you have more than 5 or 6 nonredundant inputs. 
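
The choice of smoothing parameter "by cross-validation" mentioned above can
be made concrete with a leave-one-out search. This continues the sketch
given earlier; the helper loo_sse and the candidate bandwidths are
assumptions, not part of the FAQ:

   def loo_sse(X, y, sigma):
       # Leave-one-out sum of squared errors for one candidate bandwidth.
       X = np.atleast_2d(np.asarray(X, dtype=float))
       y = np.asarray(y, dtype=float)
       sse = 0.0
       for i in range(len(y)):
           keep = np.arange(len(y)) != i          # drop case i from the kernels
           pred = grnn_predict(X[keep], y[keep], X[i:i + 1], sigma)[0]
           sse += (pred - y[i]) ** 2
       return sse

   # Pick the bandwidth with the smallest leave-one-out error.
   candidates = [0.05, 0.1, 0.2, 0.5, 1.0]
   best_sigma = min(candidates, key=lambda s: loo_sse(X, y, s))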

References: 

   Caudill, M. (1993), "GRNN and Bear It," AI Expert, Vol. 8, No. 5 (May),
   28-33.

   Haerdle, W. (1990), Applied Nonparametric Regression, Cambridge Univ.
   Press.

   Masters, T. (1995), Advanced Algorithms for Neural Networks: A C++
   Sourcebook, NY: John Wiley and Sons, ISBN 0-471-10588-0.

   Nadaraya, E.A. (1964), "On estimating regression," Theory Probab.
   Applic., 10, 186-90.

   Schiøler, H., and Hartmann, U. (1992), "Mapping Neural Network Derived
   from the Parzen Window Estimator," Neural Networks, 5, 903-909.

   Specht, D.F. (1968), "A practical technique for estimating general
   regression surfaces," Lockheed report LMSC 6-79-68-6, Defense Technical
   Information Center AD-672505.

   Specht, D.F. (1991), "A Generalized Regression Neural Network," IEEE
   Transactions on Neural Networks, 2, Nov. 1991, 568-576.

   Wand, M.P., and Jones, M.C. (1995), Kernel Smoothing, London: Chapman &
   Hall.

   Watson, G.S. (1964), "Smooth regression analysis," Sankhyā, Series A,
   26, 359-72.

Send corrections/additions to the FAQ Maintainer:
saswss@unx.sas.com (Warren Sarle)