Recently I came across some code that looks like this:
a = np.divide(np.subtract(b, c), np.add(d, e))
where a, b, c, d, e are all np.ndarray. This line looks harder to understand than
a = (b-c)/(d+e)
Is there any advantage of using np.add(), np.divide(), etc. compared to +, /, etc. ?
Thanks so much.
The numpy.add docs say that
The + operator can be used as a shorthand for np.add on ndarrays.
Is there any advantage of using np.add(), np.divide(), etc. compared to +, /, etc. ?
np.add is a first-class citizen (a regular function object), so you might, for example, do
def myfunction(arr1, arr2, action):
    return action(arr1, arr2)
and use it like
import numpy as np
a = np.array([1,2,3])
b = np.array([4,5,6])
total = myfunction(a,b,np.add)
rather than doing something like
def myfunction(arr1, arr2, action):
    if action == "+":
        return arr1 + arr2
    elif action == "/":
        return arr1 / arr2
    ...
Please also note that if the arguments are not NumPy arrays, you might get a different result than with plain +, e.g.
import numpy as np
a = [1,2,3]
b = [4,5,6]
print(a+b) # [1, 2, 3, 4, 5, 6]
print(np.add(a,b)) # [5 7 9]
np.mgrid accepts a tuple of slices, like np.mgrid[1:3, 4:8] or np.mgrid[np.s_[1:3, 4:8]].
But is there a way to mix both slices and arrays of indexes in a tuple argument to mgrid? E.g.:
extended_mgrid(np.s_[1:3, 4:8] + (np.array([1,2,3]), np.array([7,8])))
should give same results as
np.mgrid[1:3, 4:8, 1:4, 7:9]
But in general an array of indexes inside a tuple may not be representable as a slice.
Solving this task is needed to be able to create an N-D tuple of indexes from a mix of slicing and indexing using np.mgrid, as in my answer to another question.
Task solved with the help of @hpaulj using np.meshgrid.
Try it online!
import numpy as np
def extended_mgrid(i):
    # Normalize the input to a sequence of 1-D index arrays: slices are
    # expanded with np.arange, index arrays are passed through as-is.
    seq = {slice: (i,), np.ndarray: (i,), tuple: i}[type(i)]
    res = np.meshgrid(*[
        np.arange(e.start or 0, e.stop, e.step or 1) if type(e) is slice else e
        for e in seq
    ], indexing='ij')
    # For tuple input, stack into one array (like np.mgrid);
    # otherwise return the single grid.
    return np.stack(res, 0) if type(i) is tuple else res[0]
# Tests
a = np.mgrid[1:3]
b = extended_mgrid(np.s_[1:3])
assert np.array_equal(a, b), (a, b)
a = np.mgrid[(np.s_[1:3],)]
b = extended_mgrid((np.s_[1:3],))
assert np.array_equal(a, b), (a, b)
a = np.array([[[1,1],[2,2]],[[3,4],[3,4]]])
b = extended_mgrid((np.array([1,2]), np.array([3,4])))
assert np.array_equal(a, b), (a, b)
a = np.mgrid[1:3, 4:8, 1:4, 7:9]
b = extended_mgrid(np.s_[1:3, 4:8] + (np.array([1,2,3]), np.array([7,8])))
assert np.array_equal(a, b), (a, b)
I have a question based on how to "call" a specific cell in an array while looping over another array.
Assume there is an array a:
a = [[a1 a2 a3],[b1 b2 b3]]
and an array b:
b = [[c1 c2] , [d1 d2]]
Now, I want to recalculate the values in array b, using the information from array a. In detail, each value of array b has to be recalculated by multiplication with the integral of the Gauss function between the borders given in array a. But for the sake of simplicity, let's forget about the integral and assume a simple calculation is necessary, of the form:
c1 = c1 * (a2-a1) ; c2 = c2 * (a3 - a2) and so on,
with indices it might look like:
b[i,j] = b[i,j] * (a[i, j+1] - a[i,j])
Can anybody tell me how to solve this problem?
Thank you very much and best regards,
Marc
You can use the zip function within a nested list comprehension:
>>> [[k*(v[1]-v[0]) for k,v in zip(v,zip(s,s[1:]))] for s,v in zip(a,b)]
zip(s, s[1:]) will give you the desired pairs of elements, for example:
>>> s =[4, 5, 6]
>>> zip(s,s[1:])
[(4, 5), (5, 6)]
Demo :
>>> b =[[7, 8], [6, 0]]
>>> a = [[1,5,3],[4 ,0 ,6]]
>>> [[k*(v[1]-v[0]) for k,v in zip(v,zip(s,s[1:]))] for s,v in zip(a,b)]
[[28, -16], [-24, 0]]
You can also do this really cleanly with NumPy:
import numpy as np
a, b = np.array(a), np.array(b)
np.diff(a) * b
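For a quick sanity check, here is the same idea run on the sample values from the zip demo above (np.diff takes differences along the last axis):
import numpy as np

a = np.array([[1, 5, 3], [4, 0, 6]])
b = np.array([[7, 8], [6, 0]])

# np.diff(a) == a[:, 1:] - a[:, :-1], i.e. the (a2-a1, a3-a2, ...) pairs
print(np.diff(a) * b)  # [[ 28 -16]
                       #  [-24   0]]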
First I would split your a table into a table of lower bounds and one of upper bounds, to work with aligned tables and improve readability:
lowerBounds = a[...,:-1]
upperBounds = a[...,1:]
Define the Gauss function you provided:
def f(x, gs_wdth=1., mean=0.):
    # Gaussian of width gs_wdth centred at mean
    return 1. / (numpy.sqrt(2 * numpy.pi) * gs_wdth) * numpy.exp(-(x - mean)**2 / (2 * gs_wdth**2))
Then, use an nditer (see Iterating Over Arrays) to efficiently iterate over the arrays:
it = numpy.nditer([b, lowerBounds, upperBounds],
                  op_flags=[['readwrite'], ['readonly'], ['readonly']])
for _b, _lb, _ub in it:
    multiplier = scipy.integrate.quad(f, _lb, _ub)[0]
    _b[...] *= multiplier
print b
This does the job required in your post and should be computationally efficient. Note that b is modified in place: the original values are lost, but there is no memory overhead during the calculation.
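For reference, a minimal self-contained run of the above, with made-up bounds and values chosen purely for illustration:
import numpy
import scipy.integrate

def f(x, gs_wdth=1., mean=0.):
    return 1. / (numpy.sqrt(2 * numpy.pi) * gs_wdth) * numpy.exp(-(x - mean)**2 / (2 * gs_wdth**2))

a = numpy.array([[0.0, 1.0, 2.0], [0.5, 1.5, 2.5]])  # integration borders
b = numpy.array([[2.0, 3.0], [4.0, 5.0]])            # values to rescale

lowerBounds, upperBounds = a[..., :-1], a[..., 1:]
it = numpy.nditer([b, lowerBounds, upperBounds],
                  op_flags=[['readwrite'], ['readonly'], ['readonly']])
for _b, _lb, _ub in it:
    _b[...] *= scipy.integrate.quad(f, _lb, _ub)[0]
print(b)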
What is the most elegant and concise way (without creating my own class with operator overloading) to perform tuple arithmetic in Python 2.7?
Let's say I have two tuples:
a = (10, 10)
b = (4, 4)
My intended result is
c = a - b = (6, 6)
I currently use:
c = (a[0] - b[0], a[1] - b[1])
I also tried:
c = tuple([(i - j) for i in a for j in b])
but the result was (6, 6, 6, 6). I believe the above acts as nested for loops, resulting in 4 iterations and 4 values in the result.
If you're looking for fast, you can use numpy:
>>> import numpy
>>> numpy.subtract((10, 10), (4, 4))
array([6, 6])
and if you want to keep it in a tuple:
>>> tuple(numpy.subtract((10, 10), (4, 4)))
(6, 6)
One option would be,
>>> from operator import sub
>>> c = tuple(map(sub, a, b))
>>> c
(6, 6)
And itertools.imap can serve as a replacement for map.
Of course you can also use other functions from operator to add, mul, div, etc.
But I would seriously consider moving to another data structure, since I don't think this type of problem is a good fit for tuples.
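A short sketch of the same map pattern with a couple of other operator functions (the values are just for illustration):
from operator import add, mul

a = (10, 10)
b = (4, 4)

print(tuple(map(add, a, b)))  # (14, 14)
print(tuple(map(mul, a, b)))  # (40, 40)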
Use zip and a generator expression:
c = tuple(x-y for x, y in zip(a, b))
Demo:
>>> a = (10, 10)
>>> b = (4, 4)
>>> c = tuple(x-y for x, y in zip(a, b))
>>> c
(6, 6)
Use itertools.izip for a memory-efficient solution.
Help on zip:
>>> print zip.__doc__
zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)]
Return a list of tuples, where each tuple contains the i-th element
from each of the argument sequences. The returned list is truncated
in length to the length of the shortest argument sequence.
FYI, execution times on my laptop for 100,000 iterations:
np.subtract(a, b)                                : 0.18578505516052246
tuple(x - y for x, y in zip(a, b))               : 0.09348797798156738
tuple(map(lambda x, y: x - y, a, b))             : 0.07900381088256836
from operator import sub; tuple(map(sub, a, b))  : 0.044342041015625
operator looks the most elegant to me.
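For reference, a sketch of how such timings might be collected with timeit (the labels are made up here and absolute numbers will vary by machine):
import timeit

setup = "import numpy as np; from operator import sub; a = (10, 10); b = (4, 4)"
snippets = {
    "np.subtract": "np.subtract(a, b)",
    "genexpr + zip": "tuple(x - y for x, y in zip(a, b))",
    "map + lambda": "tuple(map(lambda x, y: x - y, a, b))",
    "map + operator.sub": "tuple(map(sub, a, b))",
}
for label, stmt in snippets.items():
    print(label, timeit.timeit(stmt, setup=setup, number=100000))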
This can also be done just as nicely without an import at all, although lambda is often undesirable:
tuple(map(lambda x, y: x - y, a, b))
If you are looking to get the distance between two points on, say, a 2D coordinate plane, you should use the absolute value of the subtraction of the pairs.
tuple(map(lambda x, y: abs(x - y), a, b))
Since your question only has examples of 2-number tuples, for such coordinate-like tuples you may be fine with the simple built-in complex class:
>>> a=complex(7,5)
>>> b=complex(1,2)
>>> a-b
(6+3j)
>>> c=a-b
>>> c
(6+3j)
>>> c.real
6.0
>>> c.imag
3.0
As an addition to Kohei Kawasaki's answer, for speed, the original solution was actually the fastest (for a length-two tuple at least).
>>> timeit.timeit('tuple(map(add, a, b))',number=1000000,setup='from operator import add; a=(10,11); b=(1,2)')
0.6502681339999867
>>> timeit.timeit('(a[0] - b[0], a[1] - b[1])',number=1000000,setup='from operator import add; a=(10,11); b=(1,2)')
0.19015854899998885
>>>
My element-wise tuple arithmetic helper.
Supported operations: +, -, /, *, d
operation = 'd' calculates the distance between two points on a 2D coordinate plane.
def tuplengine(tuple1, tuple2, operation):
    """
    quick and dirty, element-wise, tuple arithmetic helper,
    created on Sun May 28 07:06:16 2017
    ...
    tuple1, tuple2: [named]tuples, both same length
    operation: '+', '-', '/', '*', 'd'
    operation 'd' returns distance between two points on a 2D coordinate plane
        (absolute value of the subtraction of pairs)
    """
    assert len(tuple1) == len(tuple2), \
        "tuple sizes don't match, tuple1: {}, tuple2: {}".format(len(tuple1), len(tuple2))
    assert isinstance(tuple1, tuple) or tuple in type(tuple1).__bases__, "tuple1: not a [named]tuple"
    assert isinstance(tuple2, tuple) or tuple in type(tuple2).__bases__, "tuple2: not a [named]tuple"
    assert operation in list("+-/*d"), "operation has to be one of ['+','-','/','*','d']"
    # element-wise arithmetic via a generator expression; 'd' is handled separately
    return eval("tuple( a{}b for a, b in zip( tuple1, tuple2 ))".format(operation)) \
        if not operation == "d" \
        else eval("tuple( abs(a-b) for a, b in zip( tuple1, tuple2 ))")
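A quick usage sketch (values chosen only for illustration):
a = (10, 10)
b = (4, 4)

print(tuplengine(a, b, "-"))  # (6, 6)
print(tuplengine(a, b, "+"))  # (14, 14)
print(tuplengine(a, b, "d"))  # (6, 6)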
I often do vector addition of Python lists.
Example: I have two lists like these:
a = [0.0, 1.0, 2.0]
b = [3.0, 4.0, 5.0]
I now want to add b to a to get the result a = [3.0, 5.0, 7.0].
Usually I end up doing it like this:
a[0] += b[0]
a[1] += b[1]
a[2] += b[2]
Is there some efficient, standard way to do this with less typing?
UPDATE: It can be assumed that the lists are of length 3 and contain floats.
If you need efficient vector arithmetic, try Numpy.
>>> import numpy
>>> a=numpy.array([0,1,2])
>>> b=numpy.array([3,4,5])
>>> a+b
array([3, 5, 7])
>>>
Or (thanks, Andrew Jaffe),
>>> a += b
>>> a
array([3, 5, 7])
>>>
I don't think you will find a faster solution than the 3 sums proposed in the question. The advantages of numpy are visible with larger vectors, and also if you need other operators. numpy is especially useful with matrices, which are tricky to handle with python lists.
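A rough way to check this with timeit (vector sizes are arbitrary and the numbers will vary by machine):
import timeit

# three explicit sums on length-3 lists
print(timeit.timeit("a[0] += b[0]; a[1] += b[1]; a[2] += b[2]",
                    setup="a = [0.0, 1.0, 2.0]; b = [3.0, 4.0, 5.0]",
                    number=100000))
# numpy in-place add on length-3 arrays
print(timeit.timeit("a += b",
                    setup="import numpy; a = numpy.array([0.0, 1.0, 2.0]); b = numpy.array([3.0, 4.0, 5.0])",
                    number=100000))
# the same comparison on length-10000 vectors
print(timeit.timeit("for i, bi in enumerate(b): a[i] += bi",
                    setup="a = [0.0] * 10000; b = [1.0] * 10000",
                    number=1000))
print(timeit.timeit("a += b",
                    setup="import numpy; a = numpy.zeros(10000); b = numpy.ones(10000)",
                    number=1000))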
Still, yet another way to do it :D
In [1]: a = [1,2,3]
In [2]: b = [2,3,4]
In [3]: map(sum, zip(a,b))
Out[3]: [3, 5, 7]
Edit: you can also use izip from itertools, a generator version of zip:
In [5]: from itertools import izip
In [6]: map(sum, izip(a,b))
Out[6]: [3, 5, 7]
While Numeric is excellent, and list-comprehension solutions are OK if you actually want to create a new list, I'm surprised nobody suggested the "one obvious way to do it" -- a simple for loop! Best:
for i, bi in enumerate(b): a[i] += bi
Also OK, kinda sorta:
for i in xrange(len(a)): a[i] += b[i]
If you think Numpy is overkill, this should be really fast, because this code runs in pure C (map() and __add__() are both directly implemented in C):
a = [1.0,2.0,3.0]
b = [4.0,5.0,6.0]
c = map(float.__add__, a, b)
Or alternatively, if you don't know the types in the list:
import operator
c = map(operator.add, a, b)
How about this:
a = [x+y for x,y in zip(a,b)]
Or, if you're willing to use an external library (and fixed-length arrays), use numpy, which has "+=" and related operations for in-place operations.
import numpy as np
a = np.array([0, 1, 2])
b = np.array([3, 4, 5])
a += b
[a[x] + b[x] for x in range(0,len(a))]
For the general case of having a list of lists you could do something like this:
In [2]: import numpy as np
In [3]: a = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3],[4, 5, 6]])
In [4]: [sum(a[:,i]) for i in xrange(len(a[0]))]
Out[4]: [10, 11, 12]
If you're after concise, try...
vectors = [[0.0, 1.0, 2.0],[3.0, 4.0, 5.0]]
[sum(col) for col in zip(*vectors)]
Though I can't speak for the performance of this.
list(map(lambda x:x[0]+x[1], zip(a,b)))
a = map(lambda x, y: x + y, a, b)
You could create a function that gets the size of the array, loops through it, and creates a result array which it returns.
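A minimal sketch of that approach (the function name add_lists is just illustrative):
def add_lists(a, b):
    # Build a result list of the same length, adding element by element.
    result = [0.0] * len(a)
    for i in range(len(a)):
        result[i] = a[i] + b[i]
    return result

print(add_lists([0.0, 1.0, 2.0], [3.0, 4.0, 5.0]))  # [3.0, 5.0, 7.0]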
An improvement (less memory consumption) on the list comprehension:
import itertools
a = [x+y for x,y in itertools.izip(a,b)]
Actually, if you are not sure that the result will be fully consumed, I would even go with a generator expression:
(x+y for x,y in itertools.izip(a,b))