Faster float to int conversion in Python

Here's a piece of code that takes the most time in my program, according to timeit statistics. It's a dirty function to convert floats in the [-1.0, 1.0] interval into unsigned integers in [0, 2**32]. How can I accelerate floatToInt?
piece = []
rng = range(32)
for i in rng:
    piece.append(1.0 / 2**i)

def floatToInt(x):
    n = x + 1.0
    res = 0
    for i in rng:
        if n >= piece[i]:
            res += 2**(31 - i)
            n -= piece[i]
    return res

Did you try the obvious one?
def floatToInt(x):
    return int((x + 1.0) * (2**31))
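For reference, a quick way to compare the two versions with timeit (a sketch only: it assumes the loop-based floatToInt above is still defined in the same session, and timings will vary by machine):

import timeit

# Hypothetical benchmark: compare the loop version (floatToInt above) with the one-liner
simple = lambda x: int((x + 1.0) * (2**31))

print("loop    :", timeit.timeit(lambda: floatToInt(0.123456), number=100_000))
print("one-line:", timeit.timeit(lambda: simple(0.123456), number=100_000))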

Related

minimal absolute value of the difference between A[i] and B[i] (array A is strictly increasing, array B is strictly decreasing)

Given two sequences A and B of the same length: one is strictly increasing, the other is strictly decreasing.
It is required to find an index i such that the absolute value of the difference between A[i] and B[i] is minimal. If there are several such indices, the answer is the smallest of them. The input sequences are standard Python arrays, and it is guaranteed that they are of the same length. Efficiency requirement: asymptotic complexity no worse than a power of the logarithm of the length of the input sequences.
I have implemented the index lookup using the golden-section method, but I am bothered by the use of floating-point arithmetic. Is it possible to improve this algorithm so as to avoid it, or can you come up with a more concise solution?
import random
import math

def peak(A, B):
    def f(x):
        return abs(A[x] - B[x])
    phi_inv = 1 / ((math.sqrt(5) + 1) / 2)
    def cal_x1(left, right):
        return right - round((right - left) * phi_inv)
    def cal_x2(left, right):
        return left + round((right - left) * phi_inv)
    left, right = 0, len(A) - 1
    x1, x2 = cal_x1(left, right), cal_x2(left, right)
    while x1 < x2:
        if f(x1) > f(x2):
            left = x1
            x1 = x2
            x2 = cal_x1(x1, right)
        else:
            right = x2
            x2 = x1
            x1 = cal_x2(left, x2)
    if x1 > 1 and f(x1 - 2) <= f(x1 - 1): return x1 - 2
    if x1 + 2 < len(A) and f(x1 + 2) < f(x1 + 1): return x1 + 2
    if x1 > 0 and f(x1 - 1) <= f(x1): return x1 - 1
    if x1 + 1 < len(A) and f(x1 + 1) < f(x1): return x1 + 1
    return x1

# value check
def make_arr(inv):
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x

x = make_arr(0)
y = make_arr(1)
needle = 1000000
c = 0
for i in range(1000):
    if abs(x[i] - y[i]) < needle:
        c = i
        needle = abs(x[i] - y[i])
print(c)
print(peak(x, y))
Approach
The poster asks about alternative, simpler solutions to the posted code.
The problem is a variant of LeetCode Problem 852, where the goal is to find the peak index in a mountain array. We convert the minimum to a peak by taking the negative of the absolute difference. Our approach is to modify a Python solution to the LeetCode problem.
Code
def binary_search(x, y):
    ''' Mod of https://walkccc.me/LeetCode/problems/0852/ to use a function '''
    def f(m):
        ' Absolute value of difference at index m of two arrays '
        return -abs(x[m] - y[m])  # make negative so we are looking for a peak
    # find the peak using binary search
    l = 0
    r = len(x) - 1
    while l < r:
        m = (l + r) // 2
        if f(m) < f(m + 1):  # check if increasing
            l = m + 1
        else:
            r = m            # was decreasing
    return l
Test
def linear_search(A, B):
    ' Linear search method '
    values = [abs(ai - bi) for ai, bi in zip(A, B)]
    return values.index(min(values))  # linear search

def make_arr(inv):
    random.seed(10)  # added so we can repeat with the same data
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x

# Create data
x = make_arr(0)
y = make_arr(1)

# Run search methods
print(f'Linear Search Solution {linear_search(x, y)}')
print(f'Golden Section Search Solution {peak(x, y)}')  # posted code
print(f'Binary Search Solution {binary_search(x, y)}')
Output
Linear Search Solution 499
Golden Section Search Solution 499
Binary Search Solution 499

numpy precision with large numbers

I want to factorize a large number using Fermat's factorization method. This is how I implemented it:
import numpy as np

def fac(n):
    x = np.ceil(np.sqrt(n))
    y = x*x - n
    while not np.sqrt(y).is_integer():
        x += 1
        y = x*x - n
    return (x + np.sqrt(y), x - np.sqrt(y))
Using this method I want to factor N into its components. Note that N=p*q, where p and q are prime.
I chose the following values to compute N:
p = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533846141.0
q = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533845933.0
and defined N
N = p*q
Now I factor N:
r = fac(N)
However, the factorization does not seem to be correct:
int(r[0])*int(r[1]) == N
It does work for smaller ints:
fac(65537)
Out[1]: (65537.0, 1.0)
I'm quite sure the reason is numerical precision at some point.
I tried calculating N in numpy using object types:
N = np.dot(np.array(p).astype(object), np.array(q).astype(object))
but it doesn't help: numpy still requires a float for its sqrt function.
I also tried using the math library instead of numpy; its sqrt function does not seem to require a float, but I ultimately ran into precision issues as well.
Python ints are multiple-precision numbers, but numpy is a wrapper around low-level C libraries meant to speed up operations. The downside is that it cannot handle those multiple-precision numbers; worse, if you try to use np.sqrt on them, they will be converted to floating-point numbers (C double, or numpy float64), which have a precision of only about 15 decimal digits.
But as the Python int type is already a multiple-precision type, you can use math.sqrt to get an approximate value of the true square root, and then use Newton's method to find a closer value:
import math

def isqrt(n):
    x = int(math.sqrt(n))
    old = None
    while True:
        d = (n - x * x) // (2 * x)
        if d == 0: break
        if d == 1:  # infinite loop prevention
            if old is None:
                old = 1
            else: break
        x += d
    return x
Using it, your fac function could become:
def fac(n):
    x = isqrt(n)
    if x*x < n: x += 1
    y = x * x - n
    while True:
        z = isqrt(y)
        if z*z == y: break
        x += 1
        y = x*x - n
    return x+z, x-z
Demo:
p = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533846141
q = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533845933
N = p*q
print(fac(N) == (p,q))
This prints True, as expected.
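As a side note (assuming Python 3.8+ is available, which the question does not state): the standard library's math.isqrt already returns an exact integer square root, so the hand-rolled Newton iteration is not needed there. A minimal sketch of the same Fermat factorization on top of it:

import math

def fac_isqrt(n):
    # Fermat factorization using the exact integer square root from the standard library
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y = x * x - n
        z = math.isqrt(y)
        if z * z == y:            # y is a perfect square: done
            return x + z, x - z
        x += 1

With the p and q above, print(fac_isqrt(N) == (p, q)) should also print True.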

vectorizing a double for loop

This is a performance question. I am trying to optimize the following double for loop. Here is an MWE:
import numpy as np
from timeit import default_timer as tm

# L1 and L2 will range from 0 to 3 typically, sometimes up to 5
# all of the following are dummy values but match correct `type`
L1, L2, x1, x2, fac = 2, 3, 2.0, 4.5, 2.3
saved_values = np.random.uniform(high=75.0, size=[max(L1, L2) + 1, max(L1, L2) + 1])
facts = np.random.uniform(high=65.0, size=[L1 + L2 + 1])

val = 0
start = tm()
for i in range(L1 + 1):
    sf = saved_values[L1][i] * x1 ** (L1 - i)
    for j in range(L2 + 1):
        m = i + j
        if m % 2 == 0:
            num = sf * facts[m] / (2 * fac) ** (m / 2)
            val += saved_values[L2][j] * x1 ** (L1 - j) * num
end = tm()
time = end - start
print("Long way: time taken was {} and value is {}".format(time, val))
My idea for a solution is to remove the if m % 2 == 0: test and compute all i and j combinations as a matrix, which I should be able to vectorize, and then use something like np.where() to add up only the elements that satisfy m % 2 == 0, where m = i + j.
Even if this is not faster than the explicit for loops, I want it vectorized, because in reality I will be sending arrays to the function containing the double loop; being able to do that part in a vectorized way should give me the speed gains I am after, even if vectorizing this double loop alone does not.
I am stuck spinning my wheels right now on how to broadcast while accounting for the sf factor as well as the m factor in the inner loop.
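No answer is posted here, but a minimal sketch of the broadcasting-plus-mask idea described above could look like the following (double_sum_vec is a hypothetical name; it expects the same saved_values, facts, L1, L2, x1 and fac as the MWE and should be checked against the loop before use):

import numpy as np

def double_sum_vec(saved_values, facts, L1, L2, x1, fac):
    # Build every i + j combination at once, then mask the odd ones out
    i = np.arange(L1 + 1)
    j = np.arange(L2 + 1)
    m = i[:, None] + j[None, :]                          # shape (L1+1, L2+1)
    sf = saved_values[L1, :L1 + 1] * x1 ** (L1 - i)      # outer-loop factor
    inner = saved_values[L2, :L2 + 1] * x1 ** (L1 - j)   # inner-loop factor, mirrors x1 ** (L1 - j)
    terms = sf[:, None] * inner[None, :] * facts[m] / (2 * fac) ** (m / 2)
    return terms[m % 2 == 0].sum()                       # keep only terms with even i + j

Called as double_sum_vec(saved_values, facts, L1, L2, x1, fac) with the arrays from the MWE, this should reproduce val.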

How to find the sum of this series using loops

x - x^2/fact(2) + x^3/fact(3) ... -x^6/fact(6)
I tried various ways, even nested for loops, but I can't seem to figure out the code. Any help?
You could try this; order defines how many terms are taken into account:
def taylor(x, order=3):
    x_n = x
    fact = 1
    sign = 1
    res = 0
    for n in range(2, order + 2):
        res += sign * x_n / fact
        x_n *= x
        fact *= n
        sign = -sign
    return res
For comparison (the series converges to this function):

from math import exp

def real_function(x):
    return 1 - exp(-x)
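A quick sanity check one could run with the two functions above (the printed digits are approximate and will shift slightly with the number of terms):

x = 0.5
print(taylor(x, order=6))   # 6-term partial sum, roughly 0.39347
print(real_function(x))     # 1 - exp(-0.5), also roughly 0.39347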

python integral calculation without using for loop

I want to implement a Python program which can calculate this integral.
I know how to do it using for loops, and it will look something like this:
import numpy as np

def numint(f, alpha, beta, N, b, c):
    s = np.size(b)
    x = np.linspace(alpha, beta, N)
    h = x[1] - x[0]
    result = 0
    result1 = 0
    for j in range(1, N + 1):
        for i in range(1, s + 1):
            result1 += b[i] * f(x[j - 1] + h * c[i])
        result += h * result1
        result1 = 0
    return result
Without a loop, I think it should be something like this:
def numint(f, alpha, beta, N, b, c):
    s = np.size(b)
    x = np.linspace(alpha, beta, N)
    h = np.ones(N, dtype=int) * (x[1] - x[0])
    result = 0
    result = np.sum(h[1:N+1] * np.sum(b * (f(x[0:N] + h[0] * c))))
    return result
But the second part of the result = np.sum ... line is wrong and I don't know how to fix it. Any suggestions?
EDIT:
def numint(f, alpha, beta, N, b, c):
    s = np.size(b)
    x = np.linspace(alpha, beta, N)
    h = np.ones(N, dtype=int) * (x[1] - x[0])
    functionResult = f(x + h * c)
    dif = np.diff(functionResult)
    result = 0
    result = np.sum(h[1:N+1] * np.sum(b * dif.sum()))
    return result
I was given the tip to vectorize them, but I don't know how to apply that here.
What you are looking for is the np.diff function.
Say you are looking for the integral from 0 to 10 for f(x) = x^2 with F(x) = (x^3)/3:
x = np.linspace(0, 10, 1000)
F = lambda x: x**3 / 3
b = F(x)
dif = np.diff(b)
total = dif.sum()
print(total)
333.33333333333326
The actual result is 333.3...
You can replace F with any function you want, but the key is np.diff, which decreases the size of your array by one. Then just sum the differences along your interval and you have your result.
You can improve the precision of the result by simply increasing the number of steps (1000 in my example).
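Separately from the np.diff approach, the double loop in the original question can also be vectorized directly with broadcasting. A sketch, assuming b and c are 0-indexed weight/node arrays of the same length and that f accepts numpy arrays (the index conventions should be checked against the intended quadrature rule):

import numpy as np

def numint_vec(f, alpha, beta, N, b, c):
    # Broadcasting sketch of the loop version: sum over j and i of h * b[i] * f(x[j] + h * c[i])
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.linspace(alpha, beta, N)
    h = x[1] - x[0]
    samples = f(x[:, None] + h * c[None, :])   # shape (N, len(c)): every x[j] offset by every h*c[i]
    return h * np.sum(samples * b[None, :])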
