comp.ai.neural-nets FAQ, Part 2 of 7: Learning
Section - How to avoid overflow in the logistic function?

The formula for the logistic activation function is often written as: 

   netoutput = 1 / (1+exp(-netinput));

But if you program it in this simple form, the exponential can overflow:
when netinput is a large negative number, exp(-netinput) exceeds the
largest representable floating-point value. To avoid overflow, you can
do this: 

   if (netinput < -45) netoutput = 0;       /* true output < 3e-20     */
   else if (netinput > 45) netoutput = 1;   /* true output > 1 - 3e-20 */
   else netoutput = 1 / (1+exp(-netinput));

The constant 45 will work for double precision on all machines that I know
of, but there may be some bizarre machines where it will require some
adjustment. The cutoff at 45 loses nothing: the logistic output there is
already within about 3e-20 of 0 or 1, far below double-precision accuracy,
and exp(45), the largest exponential the guarded formula can then produce,
is only about 3.5e19, nowhere near the overflow threshold of IEEE doubles.
Other activation functions can be handled similarly, as in the sketch
below. 
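
For example, here is a minimal, self-contained sketch in C (the function
names stable_logistic, stable_logistic2, and stable_tanh are illustrative,
not part of the original FAQ). The second logistic version shows a common
rearrangement that avoids the cutoff constant altogether by making sure
exp() is never called with a positive argument: 

   #include <math.h>

   /* Logistic with explicit cutoffs, as above. */
   double stable_logistic(double netinput)
   {
       if (netinput < -45.0) return 0.0;
       if (netinput >  45.0) return 1.0;
       return 1.0 / (1.0 + exp(-netinput));
   }

   /* Common alternative: split on the sign of netinput so that exp()
      is called only with non-positive arguments, which can underflow
      to 0 but can never overflow. */
   double stable_logistic2(double netinput)
   {
       if (netinput >= 0.0)
           return 1.0 / (1.0 + exp(-netinput));
       else {
           double e = exp(netinput);   /* netinput < 0, so 0 <= e < 1 */
           return e / (1.0 + e);
       }
   }

   /* tanh handled similarly: at |netinput| = 22, tanh is within
      about 2*exp(-44), roughly 1.6e-19, of +1 or -1. */
   double stable_tanh(double netinput)
   {
       double ep, em;
       if (netinput < -22.0) return -1.0;
       if (netinput >  22.0) return  1.0;
       ep = exp(netinput);
       em = exp(-netinput);
       return (ep - em) / (ep + em);
   }

The rearranged form in stable_logistic2 needs no machine-dependent
constant: exp() only ever sees arguments at or below zero, where the worst
that can happen is a harmless underflow to zero. 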


Send corrections/additions to the FAQ Maintainer:
saswss@unx.sas.com (Warren Sarle)