Calculating Integrals: One-liner solution? - python

I was working on a program to find the integral of a function, where the user specifies the number of rectangles, the start, and the stop.
NOTE: I am using left endpoints of the rectangles.
I have the function working perfectly (at least, it seems to be). However, I wanted to see if I could write a one-liner for it, but I'm not sure how because I'm using eval(). Here is my original code:
def integral(function, n=1000, start=0, stop=100):
    """Returns integral of function from start to stop with 'n' rectangles"""
    increment, rectangles, x = float((stop - start)) / n, [], start
    while x <= stop:
        num = eval(function)
        rectangles.append(num)
        if x >= stop: break
        x += increment
    return increment * sum(rectangles)
This works fine:
>>> integral('x**2')
333833.4999999991
The actual answer is 1000000/3, so my function gives a pretty nice estimate (for only 1000 rectangles).
My attempt at a one-liner:
def integral2(function, n=1000, start=0, stop=100): rectangles = [(float(x) / n) for x in range(start*n, (stop*n)+1)]; return (float((stop-start))/n) * sum([eval(function) for x in rectangles])
However, this isn't truly a one-liner, since I'm using a semicolon. It's also a bit slower (it takes a few seconds longer, which is pretty significant) and gives the wrong answer:
>>> integral2('x**2')
33333833.334999967
So, is it possible to write a one-liner for this function? I wasn't sure how to use eval() and float(x)/n in the same list comprehension; float(x)/n achieves a virtual 'step' in the range function.
Thanks!

def integral2(function, n=1000, start=0, stop=100): return (float(1)/n) * sum([eval(function) for x in [(float(x) / n) for x in range(start*n, (stop*n)+1)]])
Note that there is a big difference between integral and integral2: integral2 makes (stop*n)+1-(start*n) rectangles, while integral only makes n rectangles.
In [64]: integral('x**2')
Out[64]: 333833.4999999991
In [68]: integral2('x**2')
Out[68]: 333338.33334999956
In [69]: %timeit integral2('x**2')
1 loops, best of 3: 704 ms per loop
In [70]: %timeit integral('x**2')
100 loops, best of 3: 7.32 ms per loop
Perhaps a more comparable translation of integral would be:
def integral3(function, n=1000, start=0, stop=100): return (float(stop-start)/n) * sum([eval(function) for x in [start+(i*float(stop-start)/n) for i in range(n)]])
In [77]: %timeit integral3('x**2')
100 loops, best of 3: 7.1 ms per loop
Of course, it goes without saying that there is no purpose in making this a one-liner other than (perverse?) amusement :)

You don't need to use eval if you receive function as a Python callable itself.
You could also make use of the numpy.arange function to generate a list of float values.
Case 1: function is a Python callable
import numpy as np

def integrate(f, n, start, end):
    return sum([f(i)*(abs(start-end)/float(n)) for i in np.arange(start, end, abs(start-end)/float(n))])
Case 2: function is not a Python callable
def integrate(f, n, start, end):
    return sum([eval(f)*(abs(start-end)/float(n)) for x in np.arange(start, end, abs(start-end)/float(n))])
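For a quick sanity check (a hedged sketch, assuming numpy is installed), a lightly condensed copy of the Case 1 function lands close to the exact value 1000000/3:
import numpy as np

def integrate(f, n, start, end):
    step = abs(start - end) / float(n)          # width of each rectangle
    return sum([f(i) * step for i in np.arange(start, end, step)])

print(integrate(lambda x: x**2, 1000, 0, 100))  # ~332833.5, versus 1000000/3 ~= 333333.3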

How about some cheating using numpy's linspace?
import numpy as np
integrate = lambda F, n0, nf: sum([F(x)*(abs(nf-n0)/(abs(nf-n0)*300)) for x in np.linspace(n0, nf, abs(nf-n0)*300)])
It takes a function F, a lower bound n0 and an upper bound nf. Here's how it works for x**2 between 0 and 5:
In [31]: integrate(lambda x: x**2, 0, 5)
Out[31]: 41.68056482099179
It's pretty close.
EDIT: here's my linspace one-liner.
linspace = lambda lo, hi, step: [lo + (hi-lo)/step*i + (hi-lo)/(step*step-1)*i for i in range(step)]
There's a way to tickle that into a full one-liner integrator. I leave that to you, my friend.
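For what it's worth, here is one hedged sketch of such a one-liner (a plain left Riemann sum with a fixed step count, no numpy and no semicolons; the name integrate_1l and the default of 1000 steps are my own choices):
integrate_1l = lambda F, n0, nf, steps=1000: sum(F(n0 + (nf - n0) * i / float(steps)) * (nf - n0) / float(steps) for i in range(steps))

print(integrate_1l(lambda x: x**2, 0, 5))  # ~41.6, close to the exact 125/3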

Related

Vectorized iterative fixpoint search

I have a function that will always converge to a fixpoint, e.g. f(x) = (x-a)/2 + a. I have a function that will find this fixpoint through repeated invocation of the function:
def find_fix_point(f,x):
    while f(x)>0.1:
        x = f(x)
    return x
This works fine; now I want to do this for a vectorized version:
def find_fix_point(f,x):
    while (f(x)>0.1).any():
        x = f(x)
    return x
However, this is quite inefficient if most of the instances only need about 10 iterations and one needs 1000. What is a fast method to remove the x values that have already been found?
The code can use numpy or scipy.
One way to solve this would be to use recursion:
def find_fix_point_recursive(f, x):
    ind = x > 0.1
    if ind.any():
        x[ind] = find_fix_point_recursive(f, f(x[ind]))
    return x
With this implementation, we only call f on the points which need to be updated.
Note that by using recursion we avoid having to do the check x > 0.1 all the time, with each call working on smaller and smaller arrays.
%timeit x = np.zeros(10000); x[0] = 10000; find_fix_point(f, x)
1000 loops, best of 3: 1.04 ms per loop
%timeit x = np.zeros(10000); x[0] = 10000; find_fix_point_recursive(f, x)
10000 loops, best of 3: 141 µs per loop
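The map f used in these timings is not shown in the thread; a stand-in consistent with the question's example (taking a = 0), which reproduces the behaviour, could be:
import numpy as np

def f(x):
    # the question's example f(x) = (x - a)/2 + a with a = 0: halves x on every call
    return x / 2.0

x = np.zeros(10000); x[0] = 10000
print(find_fix_point_recursive(f, x)[0])  # reuses the definition above; ~0.076, below the 0.1 threshold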
First, for generality, I change the criterion to fit the fixed-point definition: we stop when |x - f(x)| <= epsilon.
You can mix boolean indexing and integer indexing to keep only the active points each time. Here is a way to do that:
import numpy as np

def find_fix_point(f, x, epsilon):
    ind = np.mgrid[:len(x)]  # initial indices
    while ind.size > 0:
        xind = x[ind]   # integer indexing
        yind = f(xind)
        x[ind] = yind
        ind = ind[abs(yind - xind) > epsilon]  # boolean indexing
An example with a lot of fixed points:
from matplotlib.pyplot import plot,show
x0=np.linspace(0,1,1000)
x = x0.copy()
def f(x): return x*np.sin(1/x)
find_fix_point(f,x,1e-5)
plot(x0,x,'.');show()
The general method is to use boolean indexing to compute only the entries that have not yet reached equilibrium.
I adapted the algorithm given by Jonas Adler to avoid hitting the maximum recursion depth:
def find_fix_point_vector(f, x):
    x = x.copy()
    x_fix = np.empty(x.shape)
    unfixed = np.full(x.shape, True, dtype=bool)
    while unfixed.any():
        x_new = f(x)            # iteration
        x_fix[unfixed] = x_new  # copy the values
        cond = np.abs(x_new - x) > 1
        unfixed[unfixed] = cond # find out which ones are still unfixed
        x = x_new[cond]         # update the x values that still need to be computed
    return x_fix
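A small usage sketch (hypothetical map f; reuses find_fix_point_vector as defined above):
import numpy as np

def f(x):
    return x / 2.0  # hypothetical convergent map with fixed point 0

x0 = np.linspace(0, 100000, 10000)
result = find_fix_point_vector(f, x0)
print(np.abs(f(result) - result).max())  # every entry is now within the tolerance of 1 used above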
Edit:
Here I review the three solutions proposed. I will call the functions according to their proposers: find_fix_Jonas, find_fix_Jurg, find_fix_BM. I changed the fixpoint condition in all functions according to B.M. (see the updated function of Jonas at the end).
Speed:
%timeit find_fix_BM(f, np.linspace(0,100000,10000),1)
100 loops, best of 3: 2.31 ms per loop
%timeit find_fix_Jonas(f, np.linspace(0,100000,10000))
1000 loops, best of 3: 1.52 ms per loop
%timeit find_fix_Jurg(f, np.linspace(0,100000,10000))
1000 loops, best of 3: 1.28 ms per loop
In terms of readability, I think Jonas's version is the easiest one to understand, so it should be chosen when speed does not matter very much.
Jonas's version, however, might raise a RuntimeError when the number of iterations until the fixpoint is reached is large (>1000). The other two solutions do not have this drawback.
The version of B.M., however, might be easier to understand than the version proposed by me.
Version of Jonas used:
def find_fix_Jonas(f, x):
    fx = f(x)
    ind = np.abs(fx-x)>1
    if ind.any():
        fx[ind] = find_fix_Jonas(f, fx[ind])
    return fx
...remove x values that have already been found?
Create a new array using boolean indexing with your condition.
>>> a = np.array([3,1,6,3,9])
>>> a != 3
array([False, True, True, False, True], dtype=bool)
>>> b = a[a != 3]
>>> b
array([1, 6, 9])

Efficiently check if an element occurs at least n times in a list

How to best write a Python function (check_list) to efficiently test if an element (x) occurs at least n times in a list (l)?
My first thought was:
def check_list(l, x, n):
    return l.count(x) >= n
But this doesn't short-circuit once x has been found n times, and it always scans the whole list.
A simple approach that does short-circuit would be:
def check_list(l, x, n):
    count = 0
    for item in l:
        if item == x:
            count += 1
            if count == n:
                return True
    return False
I also have a more compact short-circuiting solution with a generator:
def check_list(l, x, n):
    gen = (1 for item in l if item == x)
    return all(next(gen, 0) for i in range(n))
Are there other good solutions? What is the best efficient approach?
Thank you
Instead of incurring the extra overhead of setting up a range object and using all, which has to test the truthiness of each item, you could use itertools.islice to advance the generator n steps ahead, and then return the next item in the slice if the slice exists, or a default False if not:
from itertools import islice
def check_list(lst, x, n):
    gen = (True for i in lst if i==x)
    return next(islice(gen, n-1, None), False)
Note that like list.count, itertools.islice also runs at C speed. And this has the extra advantage of handling iterables that are not lists.
Some timing:
In [1]: from itertools import islice
In [2]: from random import randrange
In [3]: lst = [randrange(1,10) for i in range(100000)]
In [5]: %%timeit # using list.index
....: check_list(lst, 5, 1000)
....:
1000 loops, best of 3: 736 µs per loop
In [7]: %%timeit # islice
....: check_list(lst, 5, 1000)
....:
1000 loops, best of 3: 662 µs per loop
In [9]: %%timeit # using list.index
....: check_list(lst, 5, 10000)
....:
100 loops, best of 3: 7.6 ms per loop
In [11]: %%timeit # islice
....: check_list(lst, 5, 10000)
....:
100 loops, best of 3: 6.7 ms per loop
You could use the second argument of index to find the subsequent indices of occurrences:
def check_list(l, x, n):
    i = 0
    try:
        for _ in range(n):
            i = l.index(x, i)+1
        return True
    except ValueError:
        return False
print( check_list([1,3,2,3,4,0,8,3,7,3,1,1,0], 3, 4) )
About index arguments
The official documentation does not mention the method's second or third argument in its Python Tutorial, section 5, but you can find them in the more comprehensive Python Standard Library reference, section 4.6:
s.index(x[, i[, j]]) index of the first occurrence of x in s (at or after index i and before index j) (8)
(8) index raises ValueError when x is not found in s. When supported, the additional arguments to the index method allow efficient searching of subsections of the sequence. Passing the extra arguments is roughly equivalent to using s[i:j].index(x), only without copying any data and with the returned index being relative to the start of the sequence rather than the start of the slice.
Performance Comparison
In comparing this list.index method with the islice(gen) method, the most important factor is the distance between the occurrences to be found. Once that distance is on average 13 or more, the list.index has a better performance. For lower distances, the fastest method also depends on the number of occurrences to find. The more occurrences to find, the sooner the islice(gen) method outperforms list.index in terms of average distance: this gain fades out when the number of occurrences becomes really large.
The following graph draws the (approximate) border line, at which both methods perform equally well (the X-axis is logarithmic):
Ultimately short circuiting is the way to go if you expect a significant number of cases will lead to early termination. Let's explore the possibilities:
Take the case of the list.index method versus the list.count method (these were the two fastest according to my testing, although ymmv)
For list.index, if the list contains n or more occurrences of x, the method is called n times. While inside the list.index method, execution is very fast, allowing for much faster iteration than the custom generator. If the occurrences of x are far enough apart, a large speedup will be seen from the lower-level execution of index. If instances of x are close together (a shorter list / more common x's), much more of the time will be spent executing the slower Python code that mediates the rest of the function (looping over n and incrementing i).
The benefit of list.count is that it does all of the heavy lifting outside of slow Python execution. It is a much easier function to analyse, as it is simply a case of O(n) time complexity. By spending almost none of the time in the Python interpreter, it is almost guaranteed to be faster for short lists.
Summary of selection criteria (a rough timing sketch follows the list):
shorter lists favor list.count
lists of any length that don't have a high probability to short circuit favor list.count
lists that are long and likely to short circuit favor list.index
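A rough timing harness for checking these claims on your own data (a hedged sketch; the list size, element range, and threshold are arbitrary, and check_count/check_index are re-defined here only so the snippet stands alone):
import timeit
from random import randrange

def check_count(l, x, n):
    return l.count(x) >= n

def check_index(l, x, n):
    i = 0
    try:
        for _ in range(n):
            i = l.index(x, i) + 1
        return True
    except ValueError:
        return False

lst = [randrange(1, 10) for _ in range(100000)]  # x == 5 occurs roughly every 9th element
for fn in (check_count, check_index):
    print(fn.__name__, timeit.timeit(lambda: fn(lst, 5, 1000), number=100))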
I would recommend using Counter from the collections module.
from collections import Counter
import numpy as np
%%time
[k for k,v in Counter(np.random.randint(0,10000,10000000)).items() if v>1100]
#Output:
Wall time: 2.83 s
[1848, 1996, 2461, 4481, 4522, 5844, 7362, 7892, 9671, 9705]
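Mapped onto the original check_list signature, the same idea might look like this (a hedged sketch; note that Counter always walks the whole list, so it never short-circuits):
from collections import Counter

def check_list_counter(l, x, n):
    # builds the full tally in C code, then compares the count for x
    return Counter(l)[x] >= n

print(check_list_counter([1,3,2,3,4,0,8,3,7,3,1,1,0], 3, 4))  # True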
This shows another way of doing it.
Sort the list.
Find the index of the first occurrence of the item.
Increase the index by one less than the number of times the item must occur. (n - 1)
Find if the element at that index is the same as the item you want to find.
def check_list(l, x, n):
    _l = sorted(l)
    try:
        index_1 = _l.index(x)
        return _l[index_1 + n - 1] == x
    except (ValueError, IndexError):  # ValueError covers the case where x is not in the list at all
        return False
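A quick check of this version, reusing the example list from the list.index answer above:
print(check_list([1,3,2,3,4,0,8,3,7,3,1,1,0], 3, 4))  # True: 3 occurs exactly four times
print(check_list([1,3,2,3,4,0,8,3,7,3,1,1,0], 5, 1))  # False: 5 never occurs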
# assumes l (the list), k (the element) and n (the threshold) are already defined
c=0
for i in l:
    if i==k:
        c+=1
if c>=n:
    print("true")
else:
    print("false")
Another possibility might be:
def check_list(l, x, n):
    return sum([1 for i in l if i == x]) >= n

Efficient ways to iterate over several numpy arrays and process current and previous elements?

I've read a lot about different techniques for iterating over numpy arrays recently and it seems that consensus is not to iterate at all (for instance, see a comment here). There are several similar questions on SO, but my case is a bit different as I have to combine "iterating" (or not iterating) and accessing previous values.
Let's say there are N (N is small, usually 4, might be up to 7) 1-D numpy arrays of float128 in a list X, all arrays are of the same size. To give you a little insight, these are data from PDE integration, each array stands for one function, and I would like to apply a Poincare section. Unfortunately, the algorithm should be both memory- and time-efficient since these arrays are sometimes ~1Gb each, and there are only 4Gb of RAM on board (I've just learnt about memmap'ing of numpy arrays and now consider using them instead of regular ones).
One of these arrays is used for "filtering" the others, so I start with secaxis = X.pop(idx). Now I have to locate pairs of indices where (secaxis[i-1] > 0 and secaxis[i] < 0) or (secaxis[i-1] < 0 and secaxis[i] > 0) and then apply simple algebraic transformations to remaining arrays, X (and save results). Worth mentioning, data shouldn't be wasted during this operation.
There are multiple ways of doing this, but none of them seem efficient (or elegant) enough to me. One is a C-like approach, where you just iterate in a for-loop:
import array  # better than lists
res = [ array.array('d') for _ in X ]
for i in xrange(1, secaxis.size):
    if condition:  # see above
        co = -secaxis[i-1]/secaxis[i]
        for j in xrange(N):
            res[j].append( (X[j][i-1] + co*X[j][i])/(1+co) )
This is clearly very inefficient and, besides, not Pythonic.
Another way is to use numpy.nditer, but I haven't figured out yet how one accesses the previous value, though it allows iterating over several arrays at once:
# without secaxis = X.pop(idx)
it = numpy.nditer(X)
for vec in it:
    # vec[idx] is the current value, but how do you get the previous (or next) one?
    pass
A third possibility is to first find the sought indices with efficient numpy slicing, and then use them for bulk multiplication/addition. I prefer this one for now:
res = []
inds, = numpy.where((secaxis[:-1] < 0) * (secaxis[1:] > 0) +
                    (secaxis[:-1] > 0) * (secaxis[1:] < 0))
coefs = -secaxis[inds] / secaxis[inds+1]  # array of coefficients
for f in X:  # loop is done only N-1 times, that is, 3 to 6
    res.append( (f[inds] + coefs*f[inds+1]) / (1+coefs) )
But this seemingly takes 7 + 2*(N - 1) passes. Moreover, I'm not sure about the secaxis[inds] kind of addressing (it is not slicing, and generally it has to find all elements by index, just like in the first method, doesn't it?).
Finally, I've also tried using itertools and it resulted in monstrous and obscure structures, which might stem from the fact that I'm not very familiar with functional programming:
def filt(x):
    return (x[0] < 0 and x[1] > 0) or (x[0] > 0 and x[1] < 0)

import array
from itertools import izip, tee, ifilter

res = [ array.array('d') for _ in X ]
iters = [iter(x) for x in X]    # N-1 iterators in a list
prev, curr = tee(izip(*iters))  # 2 similar iterators, each of which
                                # consists of N-1 iterators
next(curr, None)                # one of them is now for the current value
seciter = tee(iter(secaxis))
next(seciter[1], None)
for x in ifilter(filt, izip(seciter[0], seciter[1], prev, curr)):
    co = - x[0]/x[1]
    for r, p, c in zip(res, x[2], x[3]):
        r.append( (p+co*c) / (1+co) )
Not only does this look very ugly, it also takes an awful lot of time to complete.
So, I have the following questions:
Of all these methods, is the third one indeed the best? If so, what can be done to improve it?
Are there any other, better ones yet?
Out of sheer curiosity, is there a way to solve the problem using nditer?
Finally, will I be better off using memmap versions of numpy arrays, or will it probably slow things down a lot? Maybe I should only load secaxis array into RAM, keep others on disk and use third method?
(bonus question) List of equal in length 1-D numpy arrays comes from loading N .npy files whose sizes aren't known beforehand (but N is). Would it be more efficient to read one array, then allocate memory for one 2-D numpy array (slight memory overhead here) and read remaining into that 2-D array?
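Regarding the bonus question, a minimal sketch of that preallocation idea (hypothetical file names; assumes all arrays share one length and dtype):
import numpy as np

files = ['f0.npy', 'f1.npy', 'f2.npy', 'f3.npy']  # hypothetical file names
first = np.load(files[0])
data = np.empty((len(files), first.size), dtype=first.dtype)
data[0] = first
for k, fname in enumerate(files[1:], start=1):
    data[k] = np.load(fname)  # each row is filled in place, avoiding a second copy of the whole set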
The numpy.where() version is fast enough; you can speed it up a little with method3(). If the > condition can be changed to >=, you can also use method4().
import numpy as np
a = np.random.randn(100000)
def method1(a):
    idx = []
    for i in range(1, len(a)):
        if (a[i-1] > 0 and a[i] < 0) or (a[i-1] < 0 and a[i] > 0):
            idx.append(i)
    return idx

def method2(a):
    inds, = np.where((a[:-1] < 0) * (a[1:] > 0) +
                     (a[:-1] > 0) * (a[1:] < 0))
    return inds + 1

def method3(a):
    m = a < 0
    p = a > 0
    return np.where((m[:-1] & p[1:]) | (p[:-1] & m[1:]))[0] + 1

def method4(a):
    return np.where(np.diff(a >= 0))[0] + 1

assert np.allclose(method1(a), method2(a))
assert np.allclose(method2(a), method3(a))
assert np.allclose(method3(a), method4(a))
%timeit method1(a)
%timeit method2(a)
%timeit method3(a)
%timeit method4(a)
the %timeit result:
1 loop, best of 3: 294 ms per loop
1000 loops, best of 3: 1.52 ms per loop
1000 loops, best of 3: 1.38 ms per loop
1000 loops, best of 3: 1.39 ms per loop
I'll need to read your post in more detail, but will start with some general observations (from previous iteration questions).
There isn't an efficient way of iterating over arrays in Python, though some things slow it down more than others. I like to distinguish between the iteration mechanism (nditer, for x in A:) and the action (alist.append(...), x[i+1] += 1). The big time consumer is usually the action, done many times, not the iteration mechanism itself.
Letting numpy do the iteration in compiled code is the fastest.
xdiff = x[1:] - x[:-1]
is much faster than
xdiff = np.zeros(x.shape[0]-1)
for i in range(x.shape[0]-1):
    xdiff[i] = x[i+1] - x[i]
The np.nditer isn't any faster.
nditer is recommended as a general iteration tool in compiled code. But its main value lies in handling broadcasting and coordinating the iteration over several arrays (input/output). And you need to use buffering and C-like code to get the best speed from nditer (I'll look up a recent SO question).
https://stackoverflow.com/a/39058906/901925
Don't use nditer without studying the relevant iteration tutorial page (the one that ends with a cython example).
=========================
Just judging from experience, this approach will be fastest. Yes, it's going to iterate over secaxis a number of times, but those passes are all done in compiled code and will be much faster than any iteration in Python. And the for f in X: loop runs just a few times.
res = []
inds, = numpy.where((secaxis[:-1] < 0) * (secaxis[1:] > 0) +
                    (secaxis[:-1] > 0) * (secaxis[1:] < 0))
coefs = -secaxis[inds] / secaxis[inds+1]  # array of coefficients
for f in X:
    res.append( (f[inds] + coefs*f[inds+1]) / (1+coefs) )
#HYRY has explored alternatives for making the where step faster. But as you can see, the differences aren't that big. Other possible tweaks:
inds1 = inds+1
coefs = -secaxis[inds] / secaxis[inds1]
coefs1 = coefs+1
for f in X:
    res.append(( f[inds] + coefs*f[inds1]) / coefs1)
If X was an array, res could be an array as well.
res = (X[:,inds] + coefs*X[:,inds1])/coefs1
But for small N I suspect the list res is just as good. Don't need to make the arrays any bigger than necessary. The tweaks are minor, just trying to avoid recalculating things.
=================
This use of np.where is just np.nonzero. It actually makes two passes over the array: one with np.count_nonzero to determine how many values it will return and to create the return structure (a list of arrays of now-known length), and a second pass to fill in those indices. So multiple passes are fine as long as they keep the action simple.

Python - Intersection of 2D Numpy Arrays

I'm desperately searching for an efficient way to check if two 2D numpy Arrays intersect.
So what I have is two arrays, each containing an arbitrary number of vectors, like:
import numpy as np
A=np.array([[2,3,4],[5,6,7],[8,9,10]])
B=np.array([[5,6,7],[1,3,4]])
C=np.array([[1,2,3],[6,6,7],[10,8,9]])
All I need is True if at least one vector of one array also occurs in the other array, otherwise False. So it should give results like this:
f(A,B) -> True
f(A,C) -> False
I'm kind of new to Python, and at first I wrote my program with Python lists, which works but of course is very inefficient. The program takes days to finish, so I am working on a numpy.array solution now, but these arrays really are not so easy to handle.
Here's some context about my program and the Python list solution:
What I'm doing is something like a self-avoiding random walk in 3 dimensions: http://en.wikipedia.org/wiki/Self-avoiding_walk. But instead of doing a random walk and hoping that it will reach a desirable length (e.g. I want chains built up of 1000 beads) without reaching a dead end, I do the following:
I create a "flat" chain with the desired length N:
X=[]
for i in range(0,N+1):
    X.append((i,0,0))
Now I fold this flat chain:
randomly choose one of the elements ("pivotelement")
randomly choose one direction (either all elements to the left or to the right of the pivotelement)
randomly choose one out of 9 possible rotations in space (3 axes * 3 possible rotations: 90°, 180°, 270°)
rotate all the elements of the chosen direction with the chosen rotation
check if the new elements of the chosen direction intersect with the other direction
no intersection -> accept the new configuration, else -> keep the old chain.
Steps 1-6 have to be done a large number of times (e.g. for a chain of length 1000, ~5000 times), so these steps have to be done efficiently. My list-based solution for this is the following:
def PivotFold(chain):
    randPiv=random.randint(1,N) # chooses a random pivotelement, N is the chain length
    Pivot=chain[randPiv]        # get that pivotelement
    C=[]                        # C is going to be a shifted copy of the chain
    intersect=False
    for j in range(0,N+1): # shift the whole chain so the pivotelement sits at the origin, so I can use simple rotations around the origin
        C.append((chain[j][0]-Pivot[0],chain[j][1]-Pivot[1],chain[j][2]-Pivot[2]))
    rotRand=random.randint(1,18) # rotRand is used to choose a direction and a rotation (2 possible directions * 9 rotations = 18 possibilities)
    # rotations around the Z-axis
    if rotRand==1:
        for j in range(randPiv,N+1):
            C[j]=(-C[j][1],C[j][0],C[j][2])
            if C[0:randPiv].__contains__(C[j])==True:
                intersect=True
                break
    elif rotRand==2:
        for j in range(randPiv,N+1):
            C[j]=(C[j][1],-C[j][0],C[j][2])
            if C[0:randPiv].__contains__(C[j])==True:
                intersect=True
                break
    # ...etc
    if intersect==False: # return C if there was no intersection in C
        Shizz=C
    else:
        Shizz=chain
    return Shizz
The function PivotFold(chain) will be used on the initially flat chain X a large number of times. It's written pretty naively, so maybe you have some tips to improve it ^^ I thought that numpy arrays would be good because I can efficiently shift and rotate entire chains without looping over all the elements ...
This should do it:
In [11]:
def f(arrA, arrB):
    return not set(map(tuple, arrA)).isdisjoint(map(tuple, arrB))
In [12]:
f(A, B)
Out[12]:
True
In [13]:
f(A, C)
Out[13]:
False
In [14]:
f(B, C)
Out[14]:
False
To find intersection? OK, set sounds like a logical choice.
But numpy.array or list are not hashable? OK, convert them to tuple.
That is the idea.
A numpy way of doing it involves rather unreadable broadcasting:
In [34]:
(A[...,np.newaxis]==B[...,np.newaxis].T).all(1)
Out[34]:
array([[False, False],
[ True, False],
[False, False]], dtype=bool)
In [36]:
(A[...,np.newaxis]==B[...,np.newaxis].T).all(1).any()
Out[36]:
True
Some timeit result:
In [38]:
#Dan's method
%timeit set_comp(A,B)
10000 loops, best of 3: 34.1 µs per loop
In [39]:
#Avoiding lambda will speed things up
%timeit f(A,B)
10000 loops, best of 3: 23.8 µs per loop
In [40]:
#numpy way probably will be slow, unless the size of the array is very big (my guess)
%timeit (A[...,np.newaxis]==B[...,np.newaxis].T).all(1).any()
10000 loops, best of 3: 49.8 µs per loop
Also, the numpy method will be RAM-hungry, as the A[...,np.newaxis]==B[...,np.newaxis].T step creates a 3D array.
Using the same idea outlined here, you could do the following:
def make_1d_view(a):
    a = np.ascontiguousarray(a)
    dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(dt).ravel()

def f(a, b):
    return len(np.intersect1d(make_1d_view(a), make_1d_view(b))) != 0
>>> f(A, B)
True
>>> f(A, C)
False
This doesn't work for floating point types (it will not consider +0.0 and -0.0 the same value), and np.intersect1d uses sorting, so it has linearithmic, not linear, performance. You may be able to squeeze out some performance by replicating the source of np.intersect1d in your code and, instead of checking the length of the returned array, calling np.any on the boolean indexing array.
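A hedged sketch of that last suggestion (it reuses make_1d_view from above and assumes neither input contains duplicate rows of its own, since an adjacent equal pair could otherwise come from a single array):
import numpy as np

def rows_intersect(a, b):
    # sort the combined void views and ask whether any adjacent pair is equal,
    # instead of materialising the full intersection as np.intersect1d does
    aux = np.concatenate((make_1d_view(a), make_1d_view(b)))
    aux.sort()
    return bool(np.any(aux[1:] == aux[:-1]))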
You can also get the job done with some np.tile and np.swapaxes business!
def intersect2d(X, Y):
    """
    Function to find intersection of two 2D arrays.
    Returns index of rows in X that are common to Y.
    """
    X = np.tile(X[:,:,None], (1, 1, Y.shape[0]))
    Y = np.swapaxes(Y[:,:,None], 0, 2)
    Y = np.tile(Y, (X.shape[0], 1, 1))
    eq = np.all(np.equal(X, Y), axis=1)
    eq = np.any(eq, axis=1)
    return np.nonzero(eq)[0]
To answer the question more specifically, you'd only need to check if the returned array is empty.
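For instance, reusing the A, B and C arrays from the question, the boolean answer falls out directly:
print(len(intersect2d(A, B)) > 0)  # True: row [5, 6, 7] of A also appears in B
print(len(intersect2d(A, C)) > 0)  # False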
This should be much faster as it is not O(n^2) like the for-loop solution, but it isn't fully numpythonic. Not sure how better to leverage numpy here.
def set_comp(a, b):
    sets_a = set(map(lambda x: frozenset(tuple(x)), a))
    sets_b = set(map(lambda x: frozenset(tuple(x)), b))
    return not sets_a.isdisjoint(sets_b)
I think you want True if the two arrays have a sub-array in common! You can use this:
def f(A, B):
    for i in A:
        for j in B:
            if (i == j).all():  # rows are numpy arrays, so compare element-wise
                return True
    return False
This problem can be solved efficiently using the numpy_indexed package (disclaimer: I am its author):
import numpy_indexed as npi
len(npi.intersection(A, B)) > 0

Is it faster to make a variable for the length of a string?

I am implementing a reverse(s) function in Python 2.7 and I wrote code like this:
# iterative version 1
def reverse(s):
    r = ""
    for c in range(len(s)-1, -1, -1):
        r += s[c]
    return r
print reverse("Be sure to drink your Ovaltine")
But I worried that for each iteration it re-computes the length of the string, even though the length never changes.
So I made another version:
# iterative version 2
def reverse(s):
    r = ""
    l = len(s)-1
    for c in range(l, -1, -1):
        r += s[c]
    return r
print reverse("Be sure to drink your Ovaltine")
This version remembers the length of the string and doesn't ask for it every iteration. Is this faster for longer strings (say, one of length 1024) than the first version, or does it have no effect at all?
In [12]: %timeit reverse("Be sure to drink your Ovaltine")
100000 loops, best of 3: 2.53 µs per loop
In [13]: %timeit reverse1("Be sure to drink your Ovaltine")
100000 loops, best of 3: 2.55 µs per loop
reverse is your first method, reverse1 is the second.
As you can see from the timings, there is very little difference in performance.
You can use IPython to time your code with the syntax above: just define your functions, then use %timeit followed by the function call with whatever parameters you like.
In the line
for c in range(len(s)-1, -1, -1):
len(s) is evaluated only once, and the result (minus one) is passed as an argument to range. Therefore the two versions are almost identical; if anything, the latter may be (very) slightly slower, as it creates a new name to hold the result of the subtraction.
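One way to convince yourself of this (a hedged sketch, looking at CPython bytecode with the dis module):
import dis

def reverse(s):
    r = ""
    for c in range(len(s) - 1, -1, -1):
        r += s[c]
    return r

dis.dis(reverse)
# The calls to len and range appear once, before the FOR_ITER loop;
# only the indexing and concatenation bytecodes run on every iteration.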
