Unpacking a list in python using .T? - python

I'm using scipy's method integrate.odeint to solve a second order LDE. The method requires that the equation be put in the form of a system of two first-order equations in two unknowns. The method
odeint(system_matrix,initial_conditions_matrix,time_values)
outputs the solution vector at each point of time in time_values. The solution vector is actually of the form [u,u'], where u is the variable I am interested in. So I want to plot only u. I found online that one way of accomplishing this is to use
u,u'=odeint(system_matrix,initial_conditions_matrix,time_values).T
but I don't understand why this works or what the .T at the end means.

odeint(system_matrix,initial_conditions_matrix,time_values) returns an array with 2 columns (one row per time value).
To get the first column, first apply .T (transpose); then you can unpack, since the rows are now oriented the way you want.
BTW, u' is not a valid Python variable name. I would do:
u,_ = odeint(system_matrix,initial_conditions_matrix,time_values).T
since the second value is of no interest to you.
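To see why this works, here is a minimal sketch with a toy (n, 2) array standing in for odeint's output (the values are made up for illustration):
import numpy as np
# Toy stand-in for odeint's output: one row per time value, columns [u, u'].
sol = np.array([[1.0, 0.0],
                [0.9, -0.1],
                [0.8, -0.2]])
u, du = sol.T  # sol.T has shape (2, n); unpacking yields its two rows
print(u)   # [1.  0.9 0.8]
print(du)  # [ 0.  -0.1 -0.2]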

The example I have in mind is:
>>> sol = odeint(pend, y0, t, args=(b, c))
The solution is an array with shape (101, 2). The first column is theta(t), and the second is omega(t). The following code plots both components.
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, sol[:, 0], 'b', label='theta(t)')
>>> plt.plot(t, sol[:, 1], 'g', label='omega(t)')
sol[:,0] selects the first column of sol
Unpacking is usually used with a function that returns a tuple, for example:
def foo():
    ...
    return [1, 2, 3], {3: 3}

x, y = foo()
should end up with x being a list, y a dictionary.
But it works with any iterable, provided the number of terms matches. For example a 2 row array can be unpacked into 2 arrays.
In [1]: x, y = np.arange(6).reshape(2,3)
In [4]: x,y
Out[4]: (array([0, 1, 2]), array([3, 4, 5]))
If I'd created a (3,2) array I would have needed x,y,z= ..., or .T.
Because we can index columns and rows, unpacking isn't used a lot in numpy; usually we have too many rows to unpack. But it works just as it does in basic Python.
As a matter of curiosity, transpose works on a tuple
In [6]: np.transpose((x,y))
Out[6]:
array([[0, 3],
       [1, 4],
       [2, 5]])
This is actually used in np.argwhere, which turns the tuple of indices produced by np.where into an array with as many columns as the input has dimensions.
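A quick sketch of that connection (the array values are chosen just for illustration):
>>> a = np.array([[0, 1], [2, 0]])
>>> np.where(a)
(array([0, 1]), array([1, 0]))
>>> np.transpose(np.where(a))
array([[0, 1],
       [1, 0]])
>>> np.argwhere(a)
array([[0, 1],
       [1, 0]])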

Related

Problem with Sympy lambdify with functions of 'x' or constants

I need to evaluate the derivative of functions (f') given by the user at many points. The points are in a list (or numpy.array, pandas.Series...). I obtain the expected value when f' depends on a sympy variable, but not when f' is a constant:
import sympy as sp
f1 = sp.sympify('1')
f2 = sp.sympify('t')
lamb1 = sp.lambdify('t',f1)
lamb2 = sp.lambdify('t',f2)
print(lamb1([1,2,3]))
print(lamb2([1,2,3]))
I obtain:
1
[1, 2, 3]
The second is alright, but I expected that the first would be a list of ones.
These functions are in a matrix and the end result of sympy operations, such as taking derivatives. The exact form of f1 and f2 varies per problem.
lamb1 is a function that returns the constant 1: def lamb1(x): return 1.
lamb2 is a function that returns its argument: def lamb2(x): return x.
So the output is exactly what should be expected.
Here is an approach that might work. I changed the test function for f2 to t*t as that was more annoying in my tests (dealing with Pow(t,2)).
import sympy as sp
import numpy as np
f1 = sp.sympify('1')
f2 = sp.sympify('t*t')
def np_lambdify(varname, func):
    lamb = sp.lambdify(varname, func, modules=['numpy'])
    if func.is_constant():
        return lambda t: np.full_like(t, lamb(t))
    else:
        return lambda t: lamb(np.array(t))
lamb1 = np_lambdify('t', f1)
lamb2 = np_lambdify('t', f2)
print(lamb1(1))
print(lamb1([1, 2, 3]))
print(lamb2(2))
print(lamb2([1, 2, 3]))
Outputs:
1
[1 1 1]
4
[1 4 9]
With isympy/ipython introspection:
In [28]: lamb2??
Signature: lamb2(t)
Docstring:
Created with lambdify. Signature:
func(arg_0)
Expression:
t
Source code:
def _lambdifygenerated(t):
    return (t)
and for the first:
In [29]: lamb1??
Signature: lamb1(t)
Docstring:
Created with lambdify. Signature:
func(arg_0)
Expression:
1
Source code:
def _lambdifygenerated(t):
    return (1)
So one returns the input argument; the other returns just the constant, regardless of the input. lambdify does a rather simple lexical translation from sympy to numpy Python.
edit
Putting your functions in a sp.Matrix:
In [55]: lamb3 = lambdify('t',Matrix([f1,f2]))
In [56]: lamb3??
...
def _lambdifygenerated(t):
    return (array([[1], [t]]))
...
In [57]: lamb3(np.arange(3))
Out[57]:
array([[1],
       [array([0, 1, 2])]], dtype=object)
So this returns a numpy array; but because of the mix of shapes the result is object dtype, not 2d.
We can see this with a direct array generation:
In [53]: np.array([[1],[1,2,3]])
Out[53]: array([list([1]), list([1, 2, 3])], dtype=object)
In [54]: np.array([np.ones(3,int),[1,2,3]])
Out[54]:
array([[1, 1, 1],
       [1, 2, 3]])
Neither sympy nor the np.array attempts to 'broadcast' that constant. There are numpy constructs that will do that, such as multiplication and addition, but this simple sympy function and lambdify don't.
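For example, plain arithmetic and np.broadcast_to will expand a scalar (a small illustration):
>>> import numpy as np
>>> t = np.array([1, 2, 3])
>>> 0 * t + 1
array([1, 1, 1])
>>> np.broadcast_to(1, t.shape)
array([1, 1, 1])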
edit
frompyfunc is a way of passing an array (or arrays) to a function that only works with scalar inputs. While lamb2 works with an array input, you aren't happy with the lamb1 case, or presumably lamb3.
In [60]: np.frompyfunc(lamb1,1,1)([1,2,3])
Out[60]: array([1, 1, 1], dtype=object)
In [61]: np.frompyfunc(lamb2,1,1)([1,2,3])
Out[61]: array([1, 2, 3], dtype=object)
This [61] is slower than simply lamb2([1,2,3]) since it effectively iterates.
In [62]: np.frompyfunc(lamb3,1,1)([1,2,3])
Out[62]:
array([array([[1],
              [1]]),
       array([[1],
              [2]]),
       array([[1],
              [3]])], dtype=object)
In this Matrix case the result is an array of arrays. But since shapes match they can be combined into one array (in various ways):
In [66]: np.concatenate(_62, axis=1)
Out[66]:
array([[1, 1, 1],
       [1, 2, 3]])
Usually it isn't actually a problem for lambdify to return a constant, because NumPy's broadcasting semantics will automatically treat a constant as an array of that constant of the appropriate shape.
If it is a problem, you can use a wrapper like
def broadcast(fun):
    return lambda *x: numpy.broadcast_arrays(fun(*x), *x)[0]
(this is taken from https://github.com/sympy/sympy/issues/5642, which has more discussion on this issue).
Note that using broadcast is better than full_like as in JohanC's answer, because broadcasted constant arrays do not actually take up more memory, whereas full_like will copy the constant in memory to make the array.
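For illustration, a possible use of that wrapper (the names here are just for the sketch):
>>> import numpy
>>> import sympy as sp
>>> lamb1 = broadcast(sp.lambdify('t', sp.sympify('1')))
>>> lamb1(numpy.array([1, 2, 3]))
array([1, 1, 1])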
I often use the trick t * 0 + 1: multiplying by zero gives a vector of zeros the same shape as the input, and adding 1 turns it into ones. It works with NumPy; check whether it works with Sympy!
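In plain NumPy the trick works as expected; note, though, that SymPy's automatic simplification collapses t*0 + 1 back to 1 during sympify, which is why the next answer has to fool the simplifier:
>>> import numpy as np
>>> t = np.array([1, 2, 3])
>>> t * 0 + 1
array([1, 1, 1])
>>> import sympy as sp
>>> sp.sympify('t*0+1')
1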
I never use lambdify so I can't be too critical of how it is working. But it appears that you will need to fool it by giving it an expression that doesn't simplify to a scalar but which, when evaluated with numbers, reduces to the desired value:
>>> import numpy as np
>>> from sympy import lambdify
>>> lambdify('t','(1+t)*t-t**2-t+42','numpy')(np.array([1,2,3]))
array([42, 42, 42])

PyTorch - Efficient way to apply different functions to different 'row/column' of a tensor

Let's say I have a 2-d tensor:
x = torch.Tensor([[1, 2], [3, 4]])
Is there an efficient way to apply one function to the first 'row' [1, 2] and apply a second different function to the second row [3, 4]? (Doesn't have to be a row, could be across any dimension)
At the moment, I use the following code: Say I have my two functions, f and g, for example,
def f(z):
    return 2 * z

def g(z):
    return 0.5 * z
Then, to apply them to separate rows I would do:
torch.cat([f(x[0]).unsqueeze(0), g(x[1]).unsqueeze(0)], dim = 0)
which gives the desired tensor [[2, 4], [1.5, 2]].
Obviously, in this 2-d example this solution is fine, but it seems a bit clunky. Is there a better way of doing this, particularly in higher dimensions or when there are a large number of elements in the chosen dimension?
A handy tip is to slice instead of selecting to avoid the unsqueeze step. Indeed, notice how x[:1] keeps the indexed dimension compared to x[0].
This way you can perform the desired operation in a slightly shorter form:
>>> torch.vstack((f(x[:1]), g(x[1:])))
Using torch.vstack saves you from passing dim=0 to torch.cat.
Alternatively, you can use a helper function that will apply both f and g:
>>> fn = lambda a,b: (f(a), g(b))
And split the tensor inline with torch.Tensor.split:
>>> torch.vstack(fn(*x.split(1)))
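If there are many rows with a matching list of functions, one possible generalization of the same pattern (a sketch; the functions here are made up):
import torch

x = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])
fns = [lambda z: 2 * z, lambda z: 0.5 * z, lambda z: z ** 2]

# x.split(1) yields one (1, 2) slice per row, so no unsqueeze is needed.
out = torch.vstack([fn(row) for fn, row in zip(fns, x.split(1))])
print(out)  # tensor([[ 2.,  4.], [1.5,  2.], [25., 36.]])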

Python: Hierarchical Slicing

Is there a more pythonic/numpythonic way to do some sort of nested/hierarchical slicing, i.e. a prettier version of this:
_sum = 0
for i in np.arange(n):
    _sum += someFunc(A[i,:])
Basically I would like to map someFunc (which takes arrays of any shape and returns a number) over the rows and then sum the results.
I have been thinking about np.sum(someFunc(A[:,:])), but according to my understanding this will just map someFunc over the whole array.
If I understood correctly, you could use a list comprehension like this:
sum([someFunc(A[i,:]) for i in np.arange(n)])
Define a function to count 1's in an array:
def foo(x):
    return (x == 1).sum()
and a 2d array:
In [431]: X=np.array([[1,0,2],[3,1,1],[0,2,3]])
I can apply it iteratively to rows
In [432]: [foo(i) for i in X] # iterate on 1st dimension
Out[432]: [1, 2, 0]
In [433]: [foo(X[i,:]) for i in range(3)]
Out[433]: [1, 2, 0]
and get the total count with sum (here the Python sum)
In [434]: sum([foo(X[i,:]) for i in range(3)])
Out[434]: 3
As written, foo gives the same total when applied to the whole array:
In [435]: foo(X)
Out[435]: 3
and for row counts, use the np.sum axis control:
In [440]: np.sum(X==1, axis=1)
Out[440]: array([1, 2, 0])
apply_along_axis can do the same sort of row iteration:
In [438]: np.apply_along_axis(foo,1,X)
Out[438]: array([1, 2, 0])
but for this it is overkill. It's more useful with 3d or larger arrays where it is awkward to iterate over all dimensions except the nth one. It's never faster than doing your own iteration.
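For example, with a 3d array it applies foo to every 1d slice along the chosen axis and assembles the results into the remaining shape:
>>> X3 = np.arange(24).reshape(2, 3, 4)
>>> np.apply_along_axis(foo, 2, X3)   # foo applied to each length-4 slice
array([[1, 0, 0],
       [0, 0, 0]])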
It's clearly best if you can write the function to work on the whole array. But if you must iterate on rows, there aren't any magical solutions. vectorize and frompyfunc wrap functions that work with scalar values, not 1d arrays. Some row problems are solved by casting the rows as larger dtype objects (e.g. unique rows).
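As a sketch of that last idea, here is the usual void-dtype view trick for finding unique rows (assuming a 2d integer array; ascontiguousarray guarantees the view is valid):
>>> X = np.array([[1, 0, 2], [3, 1, 1], [1, 0, 2]])
>>> rows = np.ascontiguousarray(X).view(
...     np.dtype((np.void, X.dtype.itemsize * X.shape[1])))
>>> _, idx = np.unique(rows, return_index=True)
>>> X[idx]
array([[1, 0, 2],
       [3, 1, 1]])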

Index of multidimensional array

I have a problem using multi-dimensional vectors as indices for multi-dimensional vectors. Say I have C.ndim == idx.shape[0], then I want C[idx] to give me a single element. Allow me to explain with a simple example:
from numpy import arange, array

A = arange(0, 10)
B = 10 + A
C = array([A.T, B.T])
C = C.T
idx = array([3, 1])
Now, C[3] gives me row 3, and C[1] gives me row 1. C[idx] then will give me a vstack of both rows. However, I need to get C[3,1]. How would I achieve that given arrays C, idx?
/edit:
An answer suggested tuple(idx). This works perfectly for a single idx. But:
Let's take it to the next level: say INDICES is an array where I have stacked arrays like idx vertically. tuple(INDICES) will give me one long tuple, so C[tuple(INDICES)] won't work. Is there a clean way of doing this or will I need to iterate over the rows?
If you convert idx to a tuple, it'll be interpreted as basic and not advanced indexing:
>>> C[3,1]
13
>>> C[tuple(idx)]
13
For the vector case:
>>> idx
array([[3, 1],
       [7, 0]])
>>> C[3,1], C[7,0]
(13, 7)
>>> C[tuple(idx.T)]
array([13, 7])
>>> C[idx[:,0], idx[:,1]]
array([13, 7])

Acquiring the Minimum array out of Multiple Arrays by order in Python

Say that I have 4 numpy arrays
[1,2,3]
[2,3,1]
[3,2,1]
[1,3,2]
In this case, I've determined [1,2,3] is the "minimum array" for my purposes, as it is one of two arrays with the lowest value at index 0, and of those two arrays it has the lowest value at index 1. If there were more arrays with similar values, I would need to compare the next index values, and so on.
How can I extract the array [1,2,3] in that same order from the pile?
How can I extend that to x arrays of size n?
Thanks
Using the plain Python .sort() or sorted() on a list of lists (not numpy arrays) automatically does this, e.g.
a = [[1,2,3],[2,3,1],[3,2,1],[1,3,2]]
a.sort()
gives
[[1,2,3],[1,3,2],[2,3,1],[3,2,1]]
numpy's sort only sorts elements along a single axis rather than comparing whole rows, so the best way is to convert to a Python list first. Assuming you have an array of arrays you want to pick the minimum of, you could get the minimum as
sorted(a.tolist())[0]
As someone pointed out, you could also do min(a.tolist()), which uses the same type of comparisons as sort, and would be faster for large arrays (linear vs n log n asymptotic run time).
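For example:
>>> import numpy as np
>>> a = np.array([[1,2,3],[2,3,1],[3,2,1],[1,3,2]])
>>> min(a.tolist())
[1, 2, 3]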
Here's an idea using numpy:
import numpy
a = numpy.array([[1,2,3],[2,3,1],[3,2,1],[1,3,2]])
col = 0
while a.shape[0] > 1:
    # keep only the rows whose value in this column equals the column minimum
    a = a[a[:, col] == a[:, col].min()]
    col += 1
print(a)
This checks column by column until only one row is left.
numpy's lexsort is close to what you want. It sorts on the last key first, but that's easy to get around:
>>> a = np.array([[1,2,3],[2,3,1],[3,2,1],[1,3,2]])
>>> order = np.lexsort(a[:, ::-1].T)
>>> order
array([0, 3, 1, 2])
>>> a[order]
array([[1, 2, 3],
       [1, 3, 2],
       [2, 3, 1],
       [3, 2, 1]])
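And if you only want the minimum row rather than the full ordering, index with the first entry of order:
>>> a[order[0]]
array([1, 2, 3])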
