I'm trying to create a function in a network with trainable parameters. In my function I have an exponential that for large tensor values goes to infinity. What would the best way to avoid this be?
The function is as follows:
step1 = Pss-(k*Pvv)
step2 = step1*s
step3 = torch.exp(step2)
step4 = torch.log10(1+step3)
step5 = step4/s
#or equivalently
# train_curve = torch.log(1+torch.exp((Pss-k*Pvv)*s))/s
If it makes it easier to understand, the basic function is log10(1 + e^((x - const)*10))/10 (i.e. with s = 10 here). The exponential inside the log gets too big and overflows to inf.
I think I might have to normalize my tensor x, and this would mean normalizing the constants and the rest of the function also. Would someone have any thoughts on the best way to go about this?
Thanks so much.
One solution is to just use a mathematically equivalent but numerically stable computation. Notice that log(1 + exp(x)) is approximately equal to x when x is large enough. Intuitively, exp(50) is about 5.18e+21, so adding 1 to it has no effect in the 32-bit floating-point arithmetic PyTorch uses by default. Checking with an arbitrary-precision calculator confirms that the error of this approximation at x = 50 is far smaller than 32-bit floating point can even resolve (about 7 significant decimal digits).
Using this information we can implement a simple piecewise function in PyTorch, using log1p(exp(x)) for values less than 50 and x for values of 50 or more. Note that this function is also autograd-compatible:
def log1pexp(x):
    # more stable version of log(1 + exp(x))
    return torch.where(x < 50, torch.log1p(torch.exp(x)), x)
This gets us most of the way to a solution, since you actually want to evaluate torch.log10(1 + torch.exp((Pss - k*Pvv)*s)) / s.
Now we can use our new log1pexp function to compute this expression without worrying about infinities:
(log1pexp((Pss - k*Pvv)*s) / math.log(10)) / s
and mind the conversion from natural log to log base-10 by dividing by log(10).
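Putting it together, here is a minimal self-contained sketch; the tensor values for Pss, Pvv, k and s are made up for illustration, since in your case they come from the network:
import math
import torch

def log1pexp(x):
    # numerically stable log(1 + exp(x)); for large x the result is just x
    return torch.where(x < 50, torch.log1p(torch.exp(x)), x)

# dummy stand-ins for the real tensors/constants
Pss = torch.tensor([0.1, 5.0, 500.0])
Pvv = torch.tensor([0.5, 0.5, 0.5])
k, s = 2.0, 10.0

naive  = torch.log10(1 + torch.exp((Pss - k*Pvv)*s)) / s
stable = (log1pexp((Pss - k*Pvv)*s) / math.log(10)) / s
print(naive)   # last entry overflows to inf
print(stable)  # finite everywhere; matches naive wherever naive is finite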
I am trying to implement Gensim's most_similar function by hand but calculate the similarity between the query word and just one other word (avoiding the time to calculate it for the query word with all other words). So far I use
cossim = (np.dot(a, b)
          / np.linalg.norm(a)
          / np.linalg.norm(b))
and this matches the similarity result between a and b. I find this works almost exactly, but some precision is lost. For example:
from gensim.models.word2vec import Word2Vec
import gensim.downloader as api
model_gigaword = api.load("glove-wiki-gigaword-300")
a = 'france'
b = 'chirac'
cossim1 = model_gigaword.most_similar(a)
import numpy as np
cossim2 = (np.dot(model_gigaword[a], model_gigaword[b])
/ np.linalg.norm(model_gigaword[a])
/ np.linalg.norm(model_gigaword[b]))
print(cossim1)
print(cossim2)
Output:
[('french', 0.7344760894775391), ('paris', 0.6580672264099121), ('belgium', 0.620672345161438), ('spain', 0.573593258857727), ('italy', 0.5643460154533386), ('germany', 0.5567398071289062), ('prohertrib', 0.5564222931861877), ('britain', 0.5553334355354309), ('chirac', 0.5362644195556641), ('switzerland', 0.5320892333984375)]
0.53626436
So the most_similar function gives 0.53626441955... (rounds to 0.53626442) and the calculation with numpy gives 0.53626436. Similarly, you can see differences between the values for 'paris' and 'italy' (in similarity compared to 'france'). These differences suggest that the calculation is not being done to full precision (but it is in Gensim). How can I fix it and get the output for a single similarity to higher precision, exactly as it comes from most_similar?
TL/DR - I want to use function('france', 'chirac') and get 0.5362644195556641, not 0.53626436.
Any idea what's going on?
UPDATE: I should clarify, I want to know and replicate how most_similar does the computation, but for only one (a,b) pair. That's my priority, rather than finding out how to improve the precision of my cossim calculation above. I just assumed the two were equivalent.
To increase accuracy you can try the following:
a = np.array(model_gigaword[a]).astype('float128')
b = np.array(model_gigaword[b]).astype('float128')
cossim = (np.dot(a, b)
          / np.linalg.norm(a)
          / np.linalg.norm(b))
The vectors are likely stored as lower-precision floats, and hence some precision is lost in the calculations.
However, the results I got are somewhat different to what model_gigaword.most_similar offers for you:
model_gigaword.similarity: 0.5362644
float64: 0.5362644263010196
float128: 0.53626442630101950744
You may want to check what you get on your machine and with your version of Python and gensim.
Because floating-point numbers (like the np.float32-typed values in these vector models) are represented using an imprecise binary approximation, none of the numbers you're working with, or displaying, are the exact decimal numbers you think they are.
The number you're seeing as 0.53626436 isn't exactly that - but some binary floating-point number very close to that number. Similarly, the number you're seeing as 0.5362644195556641 isn't exactly that – but some other binary floating-point number, very close to that.
Further, these tiny imprecisions can mean that mathematical expressions that should under ideal circumstances give identical results to each other, no matter the order-of-evaluation, instead give slightly different results for different orders-of-evaluation. For example, we know that mathematically, a * (b + c) is always equal to a*b + a*c. However, if a, b, and c are floating-point numbers with limited precision, the results of doing the addition then the multiplication, versus doing two multiplications then one addition, might vary - because the interim values would have been approximated slightly differently.
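A tiny illustration of this order-of-evaluation effect in plain Python:
# mathematically identical, but evaluated in a different order
print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
print(0.1 + (0.2 + 0.3))   # 0.6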
But: for nearly all domains in which these numbers are used, this tiny amount of noise shouldn't make any difference. The right policy is to ignore it, and write code that's robust to this small 'jitter' in extremely-low-significance digits - especially when printing or comparing results.
So really you should only be printing/comparing these numbers to a level of significance where they reliably agree, say, 4 digits after the decimal:
0.53626436
0.5362644195556641
(In fact, your output already makes it look like you may have changed the default level of display-precision in numpy or python, because it wouldn't be typical for the results of most_similar() to display with those 16 digits after the decimal.)
If you really, really wanted, as an exploration, to match the most_similar() results exactly, you could look at its source code. Then, perform the exact same steps, in the exact same order, using the exact same library routines, on your inputs.
(Here's the source for most_similar() in the current gensim-4.0.0beta prerelease: https://github.com/RaRe-Technologies/gensim/blob/4.0.0beta/gensim/models/keyedvectors.py#L690)
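As a rough sketch of that idea (not the exact gensim code path, and the helper name below is made up): most_similar works with unit-length float32 vectors, so normalising both vectors to unit length in float32 before taking the dot product should land much closer to its numbers, though the last bits may still differ because most_similar computes a whole matrix-vector product at once:
import numpy as np

def pair_similarity(kv, w1, w2):
    # kv: a gensim KeyedVectors instance; w1, w2: words in its vocabulary
    v1 = np.asarray(kv[w1], dtype=np.float32)
    v2 = np.asarray(kv[w2], dtype=np.float32)
    v1 = (v1 / np.linalg.norm(v1)).astype(np.float32)  # unit length, kept in float32
    v2 = (v2 / np.linalg.norm(v2)).astype(np.float32)
    return float(np.dot(v1, v2))

# e.g. pair_similarity(model_gigaword, 'france', 'chirac')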
But: insisting on such exact correspondence is usually unwise, & creates more-fragile code, given the inherent imprecision in floating-point math.
See also: another answer covering some similar issues, which also points out a way to change the default displayed precision.
I am developing a machine learning based algorithm on python. The main thing, that I need to calculate to solve this problem is probabilities. This way I have the following code:
class_ans = class_probability[current_class] * lambdas[current_class]
for word in appears_words:
    if word in message:
        class_ans *= words_probability[(word, current_class)]
    else:
        class_ans *= (1 - words_probability[(word, current_class)])
ans.append(class_ans)
ans[current_class] /= summ
It works, but when the dataset is too big or the lambda values are too small, I run out of float precision.
I've tried to find another way of calculating the answer, multiplying and dividing different variables by some constants to keep them from overflowing. Despite this, nothing helped.
So I would like to ask: is there any way to increase float precision in Python?
Thanks!
You cannot increase the precision of the built-in float. When doing serious scientific computation where precision is key (and speed is not), consider the following two options:
Instead of using float, switch your datatype to decimal.Decimal and set your desired precision.
For a more battle-hardened thorough implementation, switch to gmpy2.mpfr as your data type.
However, if your entire computation (or at least the problematic part) involves the multiplication of factors, you can often bypass the need for the above by working in log-space as Konrad Rudolph suggests in the comments:
a * b * c * d * ... = exp(log(a) + log(b) + log(c) + log(d) + ...)
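For instance, a sketch of how the probability loop from the question could be rewritten in log-space (reusing the question's variable names as parameters; this mirrors the shape of the original loop rather than any particular library):
import math

def log_class_answer(class_probability, lambdas, words_probability,
                     appears_words, message, current_class):
    # accumulate log-probabilities instead of multiplying raw probabilities,
    # so very small products no longer vanish to 0.0
    log_ans = math.log(class_probability[current_class]) + math.log(lambdas[current_class])
    for word in appears_words:
        p = words_probability[(word, current_class)]
        log_ans += math.log(p) if word in message else math.log(1.0 - p)
    return log_ans
Classes can then be compared by their log-probabilities directly; if a normalised probability is really needed, the final division by summ becomes a subtraction of a log-sum-exp (e.g. scipy.special.logsumexp) rather than an ordinary division.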
I calculate the first derivative using the following code:
import numpy as np

h = 1e-5  # global step size; the problem appears once this is around 1e-13 or smaller

def f(x):
    return np.exp(x)

def dfdx(x):
    Df = (f(x+h) - f(x-h)) / (2*h)
    return Df
For example, for x == 10 this works fine. But when I set h to around 10E-14 or below, Df starts to get values that are really far away from the expected value f(10), and the relative error between the expected value and Df becomes huge.
Why is that? What is happening here?
The evaluation of f(x) has, at best, a rounding error of |f(x)|*mu where mu is the machine constant of the floating point type. The total error of the central difference formula is thus approximately
2*|f(x)|*mu/(2*h) + |f'''(x)|/6 * h^2
In the present case, the exponential function is equal to all of its derivatives, so that the error is proportional to
mu/h + h^2/6
which has a minimum at h = (3*mu)^(1/3), which for the double format with mu=1e-16 is around h=1e-5.
The precision is increased if, instead of 2*h, the actual difference (x+h)-(x-h) between the evaluation points is used in the denominator. Both effects show up clearly in a log-log plot of the distance to the exact derivative.
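A short numerical check of this analysis (using f(x) = exp(x) and x = 10 as in the question) shows the error shrinking and then growing again as h decreases, with the smallest error around h = 1e-5 to 1e-6:
import numpy as np

def f(x):
    return np.exp(x)

x = 10.0
exact = np.exp(x)  # f'(x) = f(x) for the exponential

for h in [1e-2, 1e-4, 1e-5, 1e-6, 1e-8, 1e-10, 1e-13]:
    approx = (f(x + h) - f(x - h)) / (2 * h)
    print("h=%.0e  relative error=%.2e" % (h, abs(approx - exact) / exact))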
You are probably encountering some numerical instability: for x = 10 and h ≈ 1E-13, the argument of np.exp is very close to 10 whether h is added or subtracted, so small approximation errors in the value of np.exp are scaled up significantly by the division by the very small 2*h.
In addition to the answer by #LutzL, I will add some information from a great book, Numerical Recipes, 3rd Edition: The Art of Scientific Computing, chapter 5.7 on Numerical Derivatives, especially about the choice of the optimal h value for a given x:
Always choose h so that h and x differ by an exactly representable number. Funny stuff like 1/3 should be avoided, except when x is equal to something along the lines of 14.3333333.
Round-off error is approximately epsilon * |f(x) / h|, where epsilon is the floating-point accuracy; Python represents floating-point numbers with double precision, so it is about 1e-16. It may differ for more complicated functions (where precision errors accumulate further), though that is not your case.
Choice of the optimal h: without getting into details, it would be sqrt(epsilon) * x for the simple forward case, except when x is near zero (you will find more information in the book); in that case you may want to use larger x values, and a complementary answer already covers this. For the symmetric form f(x+h) - f(x-h), as in your example, it amounts to epsilon**(1/3) * x, so approximately 5e-6 times x, which can be a little awkward to choose for small values like yours. That is quite close (if one can say so, bearing in mind floating-point arithmetic...) to the practical result posted by #LutzL.
You may also use derivative formulas other than the symmetric one you are using. If the function is costly to evaluate and you have already calculated f(x), you may want to use forward or backward evaluation. If your function is cheap to evaluate, you may want to evaluate it multiple times using higher-order methods to make the precision error smaller (see the five-point stencil on Wikipedia, as mentioned in the comment to your question); a sketch follows below.
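For reference, a sketch of the higher-order idea using the textbook five-point stencil (this is the standard formula, not code from the question):
import numpy as np

def dfdx_five_point(f, x, h):
    # five-point stencil: truncation error O(h**4) instead of O(h**2)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# e.g. dfdx_five_point(np.exp, 10.0, 1e-3) is very close to np.exp(10.0)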
This Python tutorial explains the reason behind the limited precision. In summary, decimals are ultimately represented in binary and the precision is about 17 significant digits. So, you are right that it gets fuzzy beyond 10E-14.
I need to estimate the size of a population, by finding the value of n which maximises scipy.misc.comb(n, a)/n**b where a and b are constants. n, a and b are all integers.
Obviously, I could just have a loop in range(SOME_HUGE_NUMBER), calculate the value for each n and break out of the loop once I reach an inflexion in the curve. But I wondered if there was an elegant way of doing this with (say) numpy/scipy, or is there some other elegant way of doing this just in pure Python (e.g. like an integer equivalent of Newton's method?)
As long as your number n is reasonably small (smaller than approx. 1500), my guess for the fastest way to do this is to actually try all possible values. You can do this quickly by using numpy:
import numpy as np
import scipy.misc as misc
nMax = 1000
a = 77
b = 100
n = np.arange(1, nMax+1, dtype=np.float64)
val = misc.comb(n, a)/n**b
print("Maximized for n={:d}".format(int(n[val.argmax()]+0.5)))
# Maximized for n=181
This is not especially elegant, but rather fast for that range of n. The problem is that for n > 1484 the numerator already gets too large to be stored in a float, so this method fails as you run into overflows. And this is not only a problem of numpy.ndarray not working with Python integers. Even with them, you would not be able to compute:
misc.comb(10000, 1000, exact=True)/10000**1001
as you want a float result from the division of two numbers larger than the maximum float Python can hold (max_exp = 1024 on my system; see sys.float_info). You could not use your range approach in that case either. If you really want to do something like that, you will have to take more care numerically, e.g. by working with logarithms as sketched below.
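One way to take that extra care, sketched here under the assumption that only the location of the maximum matters, is to maximise the logarithm of the expression instead, using scipy.special.gammaln for the log-factorials; log(comb(n, a)) - b*log(n) has the same maximiser but never overflows:
import numpy as np
from scipy.special import gammaln

a, b = 77, 100
nMax = 20000  # far beyond where the direct float computation overflows

n = np.arange(a, nMax + 1, dtype=np.float64)  # start at n = a so comb(n, a) > 0
log_val = gammaln(n + 1) - gammaln(a + 1) - gammaln(n - a + 1) - b * np.log(n)
print("Maximized for n={:d}".format(int(n[log_val.argmax()])))
# should again report n=181 for these a and b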
You essentially have a nicely smooth function of n that you want to maximise. n is required to be integral but we can consider the function instead to be a function of the reals. In this case, the maximising integral value of n must be close to (next to) the maximising real value.
We could convert comb to a real function by using the gamma function and use numerical optimisation techniques to find the maximum. Another approach is to replace the factorials with Stirling's approximation. This gives a moderately complicated but tractable algebraic expression. This expression is not hard to differentiate and set to zero to find the extrema.
I did this and obtained
n * (b + (n-a) * log((n-a)/n) ) = a * b - a/2
This is not straightforward to solve algebraically but easy enough numerically (e.g. using Newton's method, as you suggest).
I may have made a mistake in the algebra, but I typed the a = 77, b = 100 example into Wolfram Alpha and got 180.58 so the approach seems to work.
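To check this numerically, here is a quick sketch using scipy.optimize.brentq instead of hand-rolled Newton iterations; the bracket [a+1, 10*a] is just a guess that happens to contain the root for these values:
import numpy as np
from scipy.optimize import brentq

a, b = 77, 100

def g(n):
    # the extremum condition above: n*(b + (n-a)*log((n-a)/n)) - (a*b - a/2)
    return n * (b + (n - a) * np.log((n - a) / n)) - (a * b - a / 2)

root = brentq(g, a + 1, 10 * a)
print(root)  # roughly 180.6, so the integer maximiser should be 180 or 181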
I would like to compute 1/(1+exp(x)) for (possibly large) x. This is a well behaved function between 0 and 1. I could just do
import numpy as np
1.0/(1.0+np.exp(x))
but in this naive implementation np.exp(x) will likely just return 0 or infinity for large x, depending on the sign. Are there functions available in python that will help me out here?
I am considering implementing a series expansion and series acceleration, but I am wondering if this problem has already been solved.
You can use scipy.special.expit(-x). It will avoid the overflow warnings generated by 1.0/(1.0 + exp(x)).
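For example (values chosen arbitrarily):
import numpy as np
from scipy.special import expit

x = np.array([-1000.0, 0.0, 1000.0])
print(expit(-x))                 # 1.0, 0.5, 0.0 with no warnings
print(1.0 / (1.0 + np.exp(x)))   # same values, but np.exp overflows and emits a RuntimeWarning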
Fundamentally you are limited by floating point precision. For example, if you are using 64 bit floats:
fmax_64 = np.finfo(np.float64).max # the largest representable 64 bit float
print(np.log(fmax_64))
# 709.782712893
If x is larger than about 709 then you simply won't be able to represent np.exp(x) (or 1. / (1 + np.exp(x))) using a 64 bit float.
You could use an extended precision float (i.e. np.longdouble):
fmax_long = np.finfo(np.longdouble).max
print(np.log(fmax_long))
# 11356.5234063
The precision of np.longdouble may vary depending on your platform - on x86 it is usually 80 bit, which would allow you to work with x values up to about 11356:
func = lambda x: 1. / (1. + np.exp(np.longdouble(x)))
print(func(11356))
# 1.41861159972e-4932
Beyond that you would need to rethink how you're computing your expansion, or else use something like mpmath which supports arbitrary precision arithmetic. However this usually comes at the cost of much worse runtime performance compared with numpy, since vectorization is no longer possible.
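A minimal mpmath sketch, in case you need x values far beyond the longdouble range (the 50-digit precision is an arbitrary choice):
import mpmath

mpmath.mp.dps = 50              # 50 significant decimal digits
x = mpmath.mpf(20000)           # far outside the float64/longdouble range
print(1 / (1 + mpmath.exp(x)))  # a tiny mpf around 1e-8686, no overflow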