Suppose that I have an np.einsum that performs some calculation, and I then pump its output directly into yet another np.einsum to do some other thing. Can I, in general, compose those two einsums into a single einsum?
My specific use case is that I am doing a transpose, a matrix multiplication, and then another matrix multiplication to compute b a^T a:
import numpy as np
from numpy import array

a = array([[1, 2],
           [3, 4]])
b = array([[1, 2],
           [3, 4],
           [5, 6]])

matrix_multiply_by_transpose = 'ij,kj->ik'
matrix_multiply = 'ij,jk->ik'

test_answer = np.einsum(matrix_multiply,
                        np.einsum(matrix_multiply_by_transpose, b, a),
                        a)

assert np.array_equal(test_answer,
                      np.einsum(an_answer_to_this_question, b, a, a))

# or, the ultimate most awesomest answer ever, if such a thing even exists
assert np.array_equal(test_answer,
                      np.einsum(the_bestest_answer(matrix_multiply_by_transpose, matrix_multiply),
                                b, a, a))
In a single einsum call, it would be:
np.einsum('ij,kj,kl->il',b,a,a)
The intuition involved would be:
Start from the innermost einsum call: 'ij,kj->ik'.
Moving out, the second one is: 'ij,jk->ik'. Its first argument is the output of step #1, so relabel that argument's subscripts to match the first call's output, introducing new letters for new iterators: 'ik,kl->il'. Note that 'kl' now labels the second argument of this second einsum call, which is a.
Thus, combining, we have: 'ij,kj,kl->il', with the inputs in the same sequence, i.e. b, a for the innermost einsum call and then a coming in as the third input.
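As a sanity check, the chained calls and the combined subscripts can be compared directly (a small sketch using the same a and b as above):

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[1, 2],
              [3, 4],
              [5, 6]])

# two chained einsum calls: (b @ a.T) @ a
chained = np.einsum('ij,jk->ik', np.einsum('ij,kj->ik', b, a), a)
# single combined einsum over the same inputs
combined = np.einsum('ij,kj,kl->il', b, a, a)

assert np.array_equal(chained, combined)
print(combined)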
I need to evaluate the derivative of functions (f') given by the user at many points. The points are in a list (or numpy.array, pandas.Series, ...). I obtain the expected value when f' depends on a sympy variable, but not when f' is a constant:
import sympy as sp
f1 = sp.sympify('1')
f2 = sp.sympify('t')
lamb1 = sp.lambdify('t',f1)
lamb2 = sp.lambdify('t',f2)
print(lamb1([1,2,3]))
print(lamb2([1,2,3]))
I obtain:
1
[1, 2, 3]
The second is alright, but I expected that the first would be a list of ones.
These functions are in a matrix and are the end result of sympy operations, such as taking derivatives. The exact form of f1 and f2 varies per problem.
lamb1 is a function that returns the constant 1: def lamb1(x): return 1.
lamb2 is a function that returns its argument: def lamb2(x): return x.
So the output is exactly what should be expected.
Here is an approach that might work. I changed the test function for f2 to t*t as that was more annoying in my tests (dealing with Pow(t,2)).
import sympy as sp
import numpy as np

f1 = sp.sympify('1')
f2 = sp.sympify('t*t')

def np_lambdify(varname, func):
    lamb = sp.lambdify(varname, func, modules=['numpy'])
    if func.is_constant():
        # constant expressions compile to a bare scalar, so fill an array
        # shaped like the input instead
        return lambda t: np.full_like(t, lamb(t))
    else:
        return lambda t: lamb(np.array(t))

lamb1 = np_lambdify('t', f1)
lamb2 = np_lambdify('t', f2)

print(lamb1(1))
print(lamb1([1, 2, 3]))
print(lamb2(2))
print(lamb2([1, 2, 3]))
Outputs:
1
[1 1 1]
4
[1 4 9]
With isympy/ipython introspection:
In [28]: lamb2??
Signature: lamb2(t)
Docstring:
Created with lambdify. Signature:
func(arg_0)
Expression:
t
Source code:
def _lambdifygenerated(t):
    return (t)
and for the first:
In [29]: lamb1??
Signature: lamb1(t)
Docstring:
Created with lambdify. Signature:
func(arg_0)
Expression:
1
Source code:
def _lambdifygenerated(t):
    return (1)
So one returns the input argument; the other returns just the constant, regardless of the input. lambdify does a rather simple lexical translation from sympy to numpy Python.
edit
Putting your functions in a sp.Matrix:
In [55]: lamb3 = lambdify('t',Matrix([f1,f2]))
In [56]: lamb3??
...
def _lambdifygenerated(t):
    return (array([[1], [t]]))
...
In [57]: lamb3(np.arange(3))
Out[57]:
array([[1],
       [array([0, 1, 2])]], dtype=object)
So this returns a numpy array; but because of the mix of shapes the result is object dtype, not 2d.
We can see this with a direct array generation:
In [53]: np.array([[1],[1,2,3]])
Out[53]: array([list([1]), list([1, 2, 3])], dtype=object)
In [54]: np.array([np.ones(3,int),[1,2,3]])
Out[54]:
array([[1, 1, 1],
       [1, 2, 3]])
Neither sympy nor the np.array attempts to 'broadcast' that constant. There are numpy constructs that will do that, such as multiplication and addition, but this simple sympy function and lambdify don't.
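A quick illustration of that point with plain NumPy (nothing sympy-specific; values chosen just for illustration):

import numpy as np

t = np.array([1, 2, 3])

# multiplication and addition broadcast the scalar across the array
print(1 * np.ones_like(t))   # [1 1 1]
print(t * 0 + 1)             # [1 1 1]

# but a bare constant, as the lambdified '1' returns, simply stays a scalar
print(np.asarray(1).shape)   # ()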
edit
frompyfunc is a way of passing an array (or arrays) to a function that only works with scalar inputs. While lamb2 works with an array input, you aren't happy with the lamb1 case, or presumably lamb3.
In [60]: np.frompyfunc(lamb1,1,1)([1,2,3])
Out[60]: array([1, 1, 1], dtype=object)
In [61]: np.frompyfunc(lamb2,1,1)([1,2,3])
Out[61]: array([1, 2, 3], dtype=object)
This [61] is slower than simply lamb2([1,2,3]) since it effectively iterates.
In [62]: np.frompyfunc(lamb3,1,1)([1,2,3])
Out[62]:
array([array([[1],
              [1]]),
       array([[1],
              [2]]),
       array([[1],
              [3]])], dtype=object)
In this Matrix case the result is an array of arrays. But since shapes match they can be combined into one array (in various ways):
In [66]: np.concatenate(_62, axis=1)
Out[66]:
array([[1, 1, 1],
       [1, 2, 3]])
Usually it isn't actually a problem for lambdify to return a constant, because NumPy's broadcasting semantics will automatically treat a constant as an array of that constant of the appropriate shape.
If it is a problem, you can use a wrapper like
import numpy

def broadcast(fun):
    return lambda *x: numpy.broadcast_arrays(fun(*x), *x)[0]
(this is taken from https://github.com/sympy/sympy/issues/5642, which has more discussion on this issue).
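For example, wrapping the constant function from the question with this broadcast helper (a small sketch, re-creating lamb1 here so it is self-contained):

import numpy as np
import sympy as sp

lamb1 = sp.lambdify('t', sp.sympify('1'))
lamb1_b = broadcast(lamb1)

print(lamb1([1, 2, 3]))               # 1
print(lamb1_b(np.array([1, 2, 3])))   # [1 1 1]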
Note that using broadcast is better than full_like as in JohanC's answer, because broadcasted constant arrays do not actually take up more memory, whereas full_like will copy the constant in memory to make the array.
I often use the trick t * 0 + 1 to create a vector of zeros the same length as my input and then add 1 to each of its elements. It works with NumPy; check if it works with Sympy!
I never use lambdify, so I can't be too critical of how it is working. But it appears that you will need to fool it by giving it an expression that doesn't simplify to a scalar but which, when evaluated with numbers, reduces to the desired value:
>>> import numpy as np
>>> from sympy import lambdify
>>> lambdify('t', '(1+t)*t-t**2-t+42', 'numpy')(np.array([1, 2, 3]))
array([42, 42, 42])
I'm trying to access a numpy array A using another array B providing the indices at each position:
A = np.array([[1,2],[3,4]])
B = np.array([[[0,0],[0,0]],[[0,1],[0,1]]])
Desired output:
C = array([[1,1],[3,3]])
I haven't gotten it to work using np.take() or advanced indexing.
I could do it iteratively but my arrays are on the order of 10**7 so I was hoping for a faster way.
I probably should have insisted on seeing the iterative solution first, but here's the array one:
In [45]: A[B[:,:,1], B[:,:,0]]
Out[45]:
array([[1, 1],
       [3, 3]])
I first tried A[B[:,:,0], B[:,:,1]], the natural order for the inner dimension; seeing your iterative code would have saved me that trial.
The key with advanced indexing is that you have to create or define separate arrays (broadcastable) for each dimension. We can think of that index as a tuple:
idx = (B[:,:,0], B[:,:,1])
A[idx]
Adding on to @hpaulj's answer, an alternative way is:
idx = tuple(B[:,:,[1,0]].transpose(2,0,1))
A[idx]
# array([[1, 1], [3, 3]])
Let's say I have a 2-d tensor:
x = torch.Tensor([[1, 2], [3, 4]])
Is there an efficient way to apply one function to the first 'row' [1, 2] and apply a second different function to the second row [3, 4]? (Doesn't have to be a row, could be across any dimension)
At the moment, I use the following code: Say I have my two functions, f and g, for example,
def f(z):
    return 2 * z

def g(z):
    return 0.5 * z
Then, to apply them to separate rows I would do:
torch.cat([f(x[0]).unsqueeze(0), g(x[1]).unsqueeze(0)], dim = 0)
which gives the desired tensor [[2, 4], [1.5, 2]].
Obviously, in this 2-d example this solution is fine, but it seems a bit clunky. Is there a better way of doing this, particularly in higher dimensions or when there is a large number of elements in the chosen dimension?
A handy tip is to slice instead of selecting to avoid the unsqueeze step. Indeed, notice how x[:1] keeps the indexed dimension compared to x[0].
This way you can perform the desired operation in a slightly shorter form:
>>> torch.vstack((f(x[:1]), g(x[1:])))
Here, torch.vstack is used so you don't have to provide dim=0 to torch.cat.
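To see that slicing detail concretely (a tiny sketch using the x from the question):

import torch

x = torch.Tensor([[1, 2], [3, 4]])
print(x[0].shape)    # torch.Size([2])    -> indexing drops the dimension
print(x[:1].shape)   # torch.Size([1, 2]) -> slicing keeps it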
Alternatively, you can use a helper function that will apply both f and g:
>>> fn = lambda a,b: (f(a), g(b))
And split the tensor inline with torch.Tensor.split:
>>> torch.vstack(fn(*x.split(1)))
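If there are more than two functions, the same idea generalizes; here is a hedged sketch (the helper name apply_per_slice is mine, not part of torch):

import torch

def apply_per_slice(x, funcs, dim=0):
    # one function per slice along `dim`; split(1, dim) keeps the sliced dimension
    slices = x.split(1, dim=dim)
    assert len(slices) == len(funcs)
    return torch.cat([fn(s) for fn, s in zip(funcs, slices)], dim=dim)

x = torch.Tensor([[1, 2], [3, 4]])
print(apply_per_slice(x, [lambda z: 2 * z, lambda z: 0.5 * z]))
# tensor([[2.0000, 4.0000],
#         [1.5000, 2.0000]])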
I ran across something that seemed to me like inconsistent behavior in Numpy slices. Specifically, please consider the following example:
import numpy as np
a = np.arange(9).reshape(3,3) # a 2d numpy array
y = np.array([1,2,2]) # vector that will be used to index the array
b = a[np.arange(len(a)),y] # a vector (what I want)
c = a[:,y] # a matrix ??
I wanted to obtain a vector such that the i-th element is a[i,y[i]]. I tried two things (b and c above) and was surprised that b and c are not the same... in fact one is a vector and the other is a matrix! I was under the impression that : was shorthand for "all elements" but apparently the meaning is somewhat more subtle.
After trial and error I somewhat understand the difference now (b == np.diag(c)), but would appreciate clarification on why they are different, what exactly using : implies, and how to understand when to use either case.
Thanks!
It's hard to understand advanced indexing (with lists or arrays) without understanding broadcasting.
In [487]: a=np.arange(9).reshape(3,3)
In [488]: idx = np.array([1,2,2])
Index with a (3,) and (3,) producing shape (3,) result:
In [489]: a[np.arange(3),idx]
Out[489]: array([1, 5, 8])
Index with (3,1) and (3,), result is (3,3)
In [490]: a[np.arange(3)[:,None],idx]
Out[490]:
array([[1, 2, 2],
       [4, 5, 5],
       [7, 8, 8]])
The slice : does basically the same thing. There are subtle differences, but here it's the same.
In [491]: a[:,idx]
Out[491]:
array([[1, 2, 2],
       [4, 5, 5],
       [7, 8, 8]])
ix_ does the same thing, converting the (3,) & (3,) to (3,1) and (1,3):
In [492]: np.ix_(np.arange(3),idx)
Out[492]:
(array([[0],
        [1],
        [2]]), array([[1, 2, 2]]))
A broadcasted sum might help visualize the two cases:
In [495]: np.arange(3)*10+idx
Out[495]: array([ 1, 12, 22])
In [496]: np.sum(np.ix_(np.arange(3)*10,idx),axis=0)
Out[496]:
array([[ 1,  2,  2],
       [11, 12, 12],
       [21, 22, 22]])
When you pass
np.arange(len(a)), y
you can view the result as all of the index pairs formed by zipping the two arrays you passed. In this case, indexing with np.arange(len(a)) and y
np.arange(len(a))
# [0, 1, 2]
y
# [1, 2, 2]
effectively takes elements: (0, 1), (1, 2), and (2, 2).
print(a[0, 1], a[1, 2], a[2, 2]) # 0th, 1st, 2nd elements from each indexer
# 1 5 8
In the second case, the bare : takes the entire slice along the first dimension, i.e. all rows along the 0th axis. You then specify with y that you want the 1st, 2nd, and 2nd element (0-indexed) of each row.
As you pointed out, it may seem a bit unintuitive that the results are different, given that the individual pieces of the index are equivalent:
a[:] == a[np.arange(len(a))]
and the y part of the index is identical in both expressions.
However, NumPy advanced indexing cares about the type of data structure you pass when indexing (tuples, integers, etc.). Things can become hairy very quickly.
The detail behind that is this: first consider all NumPy indexing to be of the general form x[obj], where obj is the evaluation of whatever you passed. How NumPy "behaves" depends on what type of object obj is:
Advanced indexing is triggered when the selection object, obj, is a
non-tuple sequence object, an ndarray (of data type integer or bool),
or a tuple with at least one sequence object or ndarray (of data type
integer or bool).
...
The definition of advanced indexing means that x[(1,2,3),] is
fundamentally different than x[(1,2,3)]. The latter is equivalent to
x[1,2,3] which will trigger basic selection while the former will
trigger advanced indexing. Be sure to understand why this occurs.
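A small sketch of that tuple-versus-sequence distinction (the array shape is chosen just for illustration):

import numpy as np

x = np.arange(60).reshape(4, 3, 5)

print(x[(1, 2, 3)])          # same as x[1, 2, 3]: basic indexing -> the scalar 26
print(x[(1, 2, 3), ].shape)  # trailing comma: (1, 2, 3) becomes an advanced index
                             # along axis 0 -> shape (3, 3, 5)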
In your first case, obj = np.arange(len(a)), y is a tuple with at least one sequence object, which fits the description quoted above. This triggers advanced indexing and forces the behavior described there.
As for the second case, [:, y]:
When there is at least one slice (:), ellipsis (...) or np.newaxis in
the index (or the array has more dimensions than there are advanced
indexes), then the behaviour can be more complicated. It is like
concatenating the indexing result for each advanced index element.
Demonstrated:
# Concatenate the indexing result for each advanced index element.
np.vstack((a[0, y], a[1, y], a[2, y]))
I'm using scipy's integrate.odeint method to solve a second-order linear differential equation. The method requires that the equation be put in the form of a system of two first-order equations in two unknowns. The call
odeint(system_matrix,initial_conditions_matrix,time_values)
outputs the solution vector at each point of time in time_values. The solution vector is actually of the form [u,u'], where u is the variable I am interested in. So I want to plot only u. I found online one way of accomplishing this is to use
u,u'=odeint(system_matrix,initial_conditions_matrix,time_values).T
but I don't understand why this works, or what the .T at the end means.
odeint(system_matrix, initial_conditions_matrix, time_values) returns an array with 2 columns.
To get the first column, first take .T (the transpose); then you can unpack, since the rows are now oriented the way you want.
By the way, u' is not a valid Python variable name. I would do:
u,_ = odeint(system_matrix,initial_conditions_matrix,time_values).T
since the second value is of no interest to you.
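Here is a minimal, self-contained sketch of that pattern; the damped-oscillator equation is chosen purely for illustration, and any second-order system written as [u, u'] works the same way:

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def system(state, t):
    u, up = state                # state vector is [u, u']
    return [up, -0.5 * up - u]   # [u', u''] for u'' + 0.5 u' + u = 0

t = np.linspace(0, 10, 101)
sol = odeint(system, [1.0, 0.0], t)   # shape (101, 2): columns are u and u'

u, _ = sol.T                 # .T makes it (2, 101), so unpacking yields the columns
plt.plot(t, u)
plt.show()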
The example I have in mind is the pendulum example from the odeint documentation:
>>> sol = odeint(pend, y0, t, args=(b, c))
The solution is an array with shape (101, 2). The first column is theta(t), and the second is omega(t). The following code plots both components.
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, sol[:, 0], 'b', label='theta(t)')
>>> plt.plot(t, sol[:, 1], 'g', label='omega(t)')
sol[:,0] selects the first column of sol
Unpacking is usually used with a function that returns a tuple, for example:
def foo():
    ...
    return [1, 2, 3], {3: 3}

x, y = foo()
But it works with any iterable, provided the number of terms matches. For example, a 2-row array can be unpacked into 2 arrays.
In [1]: x, y = np.arange(6).reshape(2,3)
In [4]: x,y
Out[4]: (array([0, 1, 2]), array([3, 4, 5]))
If I'd created a (3,2) array I would have needed x,y,z= ..., or .T.
Because we can index columns and rows directly, unpacking isn't used a lot in numpy; usually we have too many rows to unpack. But it works just as basic Python intends.
As a matter of curiosity, transpose works on a tuple
In [6]: np.transpose((x,y))
Out[6]:
array([[0, 3],
       [1, 4],
       [2, 5]])
This is actually used in np.argwhere, which turns the tuple of indices produced by np.where into an array with one column per dimension.
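A small sketch of that np.where / np.argwhere relationship (values chosen just for illustration):

import numpy as np

m = np.array([[0, 5],
              [3, 0]])

print(np.where(m))                 # (array([0, 1]), array([1, 0]))
print(np.argwhere(m))              # [[0 1]
                                   #  [1 0]]
print(np.transpose(np.where(m)))   # same as argwhere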