Tensorflow multiple scalar multiplication - python

I have a 3D tensor of shape [batch_size, x, y] and a vector of shape [batch_size].
I want to scalar-multiply the i-th [x, y] matrix by the i-th entry of the given vector.
Is there a built-in function in TensorFlow for this, or do I have to use tf.while_loop?

You can do this with broadcasting. You need to reshape the vector first.
a = tf.constant([[[1,1],[2,2]],[[3,3],[4,4]]])
b = tf.constant([2,3])
c = tf.reshape(b, [-1,1,1])
d = a * c
>>> sess.run(d)
array([[[ 2,  2],
        [ 4,  4]],

       [[ 9,  9],
        [12, 12]]], dtype=int32)
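As an aside (my addition, not part of the original answer): in TensorFlow 2.x the same per-batch scaling can also be written with tf.einsum, which makes the intent explicit:
import tensorflow as tf

a = tf.random.uniform([4, 5, 6])    # [batch, x, y]
b = tf.random.uniform([4])          # [batch]
d = tf.einsum('bxy,b->bxy', a, b)   # scale each [x, y] slice by its batch entry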

I don't know if there is a built-in function for this, but you don't need a while loop either. You can do it with basic array manipulation, e.g.:
a=tf.random_uniform([3,5,8])
b=tf.random_uniform([3])
c=tf.expand_dims(tf.expand_dims(b, -1),1)
c=tf.tile(c,[1,5,8])
d=tf.multiply(a,c)
sess=tf.Session()
sess.run([a,b,c,d])
It should work.
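Note (my addition, not part of the original answer): the tf.tile step isn't strictly necessary, because tf.multiply broadcasts just like NumPy does. A minimal sketch in the same TF 1.x style:
import tensorflow as tf

a = tf.random_uniform([3, 5, 8])
b = tf.random_uniform([3])
c = tf.expand_dims(tf.expand_dims(b, -1), -1)   # shape [3, 1, 1]
d = tf.multiply(a, c)                           # broadcasts over the [5, 8] dimensions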

Related

What is the most efficient way to handle conversion from full to symmetric second order tensors using numpy?

I am processing symmetric second order tensors (of stress) using numpy. In order to transform the tensors I have to generate a fully populated tensor, do the transformation and then recover the symmetric tensor in the rotated frame.
My input is a 2D numpy array of symmetric tensors (nx6). The code below works, but I'm pretty sure there must be a more efficient and/or elegant way to manipulate the arrays but I can't seem to figure it out.
If anyone can suggest an improvement I'd be very grateful. The sample input is just 2 symmetric tensors, but in use this could be millions of tensors, hence the concern with efficiency.
Thanks,
Doug
# Sample symmetric input (S11, S22, S33, S12, S23, S13)
sym_tens_in=np.array([[0,9], [1,10], [2,11], [3,12], [4,13], [5,14]])
# Expand to full tensor
tens_full=np.array([[sym_tens_in[0], sym_tens_in[3], sym_tens_in[4]],
                    [sym_tens_in[3], sym_tens_in[1], sym_tens_in[5]],
                    [sym_tens_in[4], sym_tens_in[5], sym_tens_in[2]]])
# Transpose and reshape to n x 3 x 3
tens_full=np.transpose(tens_full, axes=(2, 0, 1))
# This is where the work on the full tensor will go....
# Reshape for extraction of the symmetric tensor
tens_full=np.reshape(tens_full, (2,9))
# Create an array for the test output symmetric tensor
sym_tens_out=np.empty((2,6), dtype=np.int32)
# Extract the symmetric components
sym_tens_out[:,0]=tens_full[:,0]
sym_tens_out[:,1]=tens_full[:,4]
sym_tens_out[:,2]=tens_full[:,8]
sym_tens_out[:,3]=tens_full[:,2]
sym_tens_out[:,4]=tens_full[:,3]
sym_tens_out[:,5]=tens_full[:,5]
# Transpose....
sym_tens_out=np.transpose(sym_tens_out)
This won't be any faster, but it's more compact:
In [166]: idx=np.array([0,3,4,3,1,5,4,5,2]).reshape(3,3)
In [167]: sym_tens_in[idx].transpose(2,0,1)
Out[167]:
array([[[ 0,  3,  4],
        [ 3,  1,  5],
        [ 4,  5,  2]],

       [[ 9, 12, 13],
        [12, 10, 14],
        [13, 14, 11]]])
The transpose could be done first:
sym_tens_in.T[:,idx]
Similarly the reverse mapping can be done with:
In [168]: idx1 = [0,4,8,1,2,5]
In [171]: tens_full.reshape(2,-1)[:,idx1]
Out[171]:
array([[ 0,  1,  2,  3,  4,  5],
       [ 9, 10, 11, 12, 13, 14]])
with the optional transpose.
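Putting the forward and reverse mappings together as a self-contained sketch (my consolidation of the snippets above):
import numpy as np

# 6 symmetric components (rows) for 2 tensors (columns), as in the question
sym_tens_in = np.array([[0, 9], [1, 10], [2, 11], [3, 12], [4, 13], [5, 14]])

idx = np.array([0, 3, 4, 3, 1, 5, 4, 5, 2]).reshape(3, 3)
tens_full = sym_tens_in[idx].transpose(2, 0, 1)   # shape (2, 3, 3)

idx1 = [0, 4, 8, 1, 2, 5]
sym_back = tens_full.reshape(2, -1)[:, idx1]      # shape (2, 6), recovers the 6 components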
OK - Based on the answers provided here I found a really cool solution. Now, I have to say that in my original question I omitted the actual reason I was trying to get the full tensor into nx3x3 form. Basically, I'm implementing a function to rotate 2nd order stress tensors, which requires computing σ′ = R·σ·Rᵀ.
I was planning to use numpy.matmul for the matrix multiplication, but to transform multiple stress tensors matmul requires the 3x3 tensors to be in the last two indices of the nx3x3 matrix - hence the effort to get the data into nx3x3 form from the original 3x3xn form....
However, after I let go of numpy.matmul as my target solution and embraced numpy.einsum instead....... everything became much easier....
# Sample symmetric input (S11, S22, S33, S12, S23, S13)
sym_tens_in=np.array([[0,9], [1,10], [2,11], [3,12], [4,13], [5,14]])
idx=np.array([0,3,5,3,1,4,5,4,2]).reshape(3,3)
full=sym_tens_in[idx]
full_transformed=np.einsum('ij, jkn, lk->nil', rot_mat, full, rot_mat)
Thanks for the inspiration!!!!
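For comparison, a matmul-based sketch (my addition, not part of the original post): once the data is in nx3x3 form, np.matmul broadcasting gives the same transformation. rot_mat below is just a stand-in; the real 3x3 rotation matrix would come from elsewhere.
import numpy as np

rot_mat = np.eye(3)   # placeholder for the actual 3x3 rotation matrix
sym_tens_in = np.array([[0, 9], [1, 10], [2, 11], [3, 12], [4, 13], [5, 14]])
idx = np.array([0, 3, 5, 3, 1, 4, 5, 4, 2]).reshape(3, 3)

full_n33 = sym_tens_in[idx].transpose(2, 0, 1)    # shape (n, 3, 3)
transformed = rot_mat @ full_n33 @ rot_mat.T      # σ′ = R·σ·Rᵀ for every tensor at once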

Pytorch get "reduced" tensor by indices

I have a tensor a = torch.arange(6).reshape(2,3), and another tensor b=(torch.rand(a.size())> 0.5).int().nonzero().
I want to create a new tensor that contains only values from a of the indices that are indicated by b.
For example:
a = torch.arange(6).reshape(2,3)                  # tensor([[0, 1, 2],
                                                  #         [3, 4, 5]])
b = (torch.rand(a.size()) > 0.5).int().nonzero()  # tensor([[0, 1],
                                                  #         [0, 2],
                                                  #         [1, 0],
                                                  #         [1, 1]])
The desired output is:
tensor([1,2,3,4])
I know that I can iterate over the values of b and access those values in a as indices, but I wanted to know if there is a better PyTorch way to do this (using tensor operations only).
** The shape of the output tensor doesn't really matter, I just need to have a tensor with only the values indicated by b.
If I understand you correctly, you can do:
a[b[:,0], b[:,1]]
This will produce a 1D tensor with the values at the indices specified by b. Note that since b is built from torch.rand, the selected indices (and hence the output) will vary from run to run.
If you don't know the number of dimensions in advance, you'll need to use map() to generate the desired slices:
a[tuple(map(lambda x: b[:,x], range(a.dim())))]
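A self-contained sketch of the indexing (my example, with a fixed b so the output is reproducible):
import torch

a = torch.arange(6).reshape(2, 3)                  # tensor([[0, 1, 2], [3, 4, 5]])
b = torch.tensor([[0, 1], [0, 2], [1, 0], [1, 1]])

out = a[b[:, 0], b[:, 1]]                          # tensor([1, 2, 3, 4])

# dimension-agnostic variant
out2 = a[tuple(map(lambda x: b[:, x], range(a.dim())))]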

operation on numpy array in Python

I have one array and I want to convert it into a certain shape, which I do not know how to do.
I tried, but it does not give me the proper result.
Here is the array (it is a numpy array):
a=[[ [1,2,13],
     [12,2,32],
     [61,2,6],
     [1,23,3],
     [1,21,3],
     [91,2,38] ]]
Expected outputs:
1. [[ [1,2],
      [12,2],
      [61,2],
      [1,23],
      [1,21],
      [91,2] ]]

2. [ [1,2],
     [12,2],
     [61,2],
     [1,23],
     [1,21],
     [91,2] ]
So the question can be boiled down to
"Given a 3D numpy array with the shape (1, 6, 3) how can we make a copy but a shape of (1, 6, 2) by removing the last indexed value from the innermost nested array?"
Array Indexing
The below example achieves this by slicing the original array (a) to return the desired structure.
import numpy as np
a = np.array([[[1,2,13],[12,2,32],[61,2,6],[1,23,3],[1,21,3],[91,2,38]]])
o = a[:,:,:2]
List Comprehension
The below makes use of a list comprehension applied to filter a down in the manner described above.
import numpy as np
a = np.array([[[1,2,13],[12,2,32],[61,2,6],[1,23,3],[1,21,3],[91,2,38]]])
o = np.array([[j[:2] for i in a for j in i]])
In each of the above examples o will refer to the following array (the first output you are asking for).
array([[[ 1,  2],
        [12,  2],
        [61,  2],
        [ 1, 23],
        [ 1, 21],
        [91,  2]]])
Given o as defined by one of the above examples, your second sought output is accessible via o[0].
This will do it:
import numpy as np

a=[[ [1,2,13],
     [12,2,32],
     [61,2,6],
     [1,23,3],
     [1,21,3],
     [91,2,38] ]]

outputs=list()
for i in a[0]:
    outputs.append([i[0],i[1]])

print(np.array([outputs]))
""" OUTPUTS
[[[ 1  2]
  [12  2]
  [61  2]
  [ 1 23]
  [ 1 21]
  [91  2]]]
"""
Instead of deriving output2 = output1[0] you could use the squeeze method. It removes all the single-dimensional entries from your array:
output1 = a[:,:,:2]
output2 = output1.squeeze()
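A quick shape check (my illustration of the above):
import numpy as np

a = np.array([[[1, 2, 13], [12, 2, 32], [61, 2, 6],
               [1, 23, 3], [1, 21, 3], [91, 2, 38]]])

output1 = a[:, :, :2]
output2 = output1.squeeze()

print(output1.shape)   # (1, 6, 2)
print(output2.shape)   # (6, 2)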

Numpy: get 1D array as 2D array without reshape

I need to hstack multiple arrays with the same number of rows (although the number of rows varies between uses) but different numbers of columns. However, some of the arrays only have one column, e.g.
array = np.array([1,2,3,4,5])
which gives
#array.shape = (5,)
but I'd like to have the shape recognized as a 2D array, e.g.
#array.shape = (5,1)
So that hstack can actually combine them.
My current solution is:
array = np.atleast_2d([1,2,3,4,5]).T
#array.shape = (5,1)
So I was wondering, is there a better way to do this? Would
array = np.array([1,2,3,4,5]).reshape(len([1,2,3,4,5]), 1)
be better?
Note that my use of [1,2,3,4,5] is just a toy list to make the example concrete. In practice it will be a much larger list passed into a function as an argument. Thanks!
Check the code of hstack and vstack. One, or both of those, passes the arguments through np.atleast_1d or np.atleast_2d. That is a perfectly acceptable way of reshaping an array.
Some other ways:
arr = np.array([1,2,3,4,5]).reshape(-1,1) # saves the use of len()
arr = np.array([1,2,3,4,5])[:,None] # adds a new dim at end
np.array([1,2,3],ndmin=2).T # used by column_stack
hstack and vstack transform their inputs with:
arrs = [atleast_1d(_m) for _m in tup]
[atleast_2d(_m) for _m in tup]
test data:
a1=np.arange(2)
a2=np.arange(10).reshape(2,5)
a3=np.arange(8).reshape(2,4)
np.hstack([a1.reshape(-1,1),a2,a3])
np.hstack([a1[:,None],a2,a3])
np.column_stack([a1,a2,a3])
result:
array([[0, 0, 1, 2, 3, 4, 0, 1, 2, 3],
       [1, 5, 6, 7, 8, 9, 4, 5, 6, 7]])
If you don't know ahead of time which arrays are 1d, then column_stack is easiest to use. The others require a little function that tests for dimensionality before applying the reshaping.
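A possible sketch of such a helper (my addition; the name as_column is made up):
import numpy as np

def as_column(arr):
    # Return arr as a 2D block: (n,) becomes (n, 1), higher-dimensional arrays pass through
    arr = np.asarray(arr)
    return arr.reshape(-1, 1) if arr.ndim == 1 else arr

a1 = np.arange(2)
a2 = np.arange(10).reshape(2, 5)
np.hstack([as_column(a1), as_column(a2)])   # same result as np.column_stack([a1, a2])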
Numpy: use reshape or newaxis to add dimensions
If I understand your intent correctly, you wish to convert an array of shape (N,) to an array of shape (N,1) so that you can apply np.hstack:
In [147]: np.hstack([np.atleast_2d([1,2,3,4,5]).T, np.atleast_2d([1,2,3,4,5]).T])
Out[147]:
array([[1, 1],
       [2, 2],
       [3, 3],
       [4, 4],
       [5, 5]])
In that case, you could avoid reshaping the arrays and use np.column_stack instead:
In [151]: np.column_stack([[1,2,3,4,5], [1,2,3,4,5]])
Out[151]:
array([[1, 1],
       [2, 2],
       [3, 3],
       [4, 4],
       [5, 5]])
I followed Ludo's work and just changed the size of v from 5 to 10000. I ran the code on my PC and the results show that atleast_2d seems to be the more efficient method in the larger-scale case.
import numpy as np
import timeit
v = np.arange(10000)
print('atleast2d:',timeit.timeit(lambda:np.atleast_2d(v).T))
print('reshape:',timeit.timeit(lambda:np.array(v).reshape(-1,1))) # saves the use of len()
print('v[:,None]:', timeit.timeit(lambda:np.array(v)[:,None])) # adds a new dim at end
print('np.array(v,ndmin=2).T:', timeit.timeit(lambda:np.array(v,ndmin=2).T)) # used by column_stack
The result is:
atleast2d: 1.3809496470021259
reshape: 27.099974197000847
v[:,None]: 28.58291715100131
np.array(v,ndmin=2).T: 30.141663907001202
My suggestion is to use [:, None] when dealing with a short vector and np.atleast_2d when your vector gets longer. (A likely reason for the gap here: v is already an ndarray, so the np.array(v) call in the other three timings copies all 10000 elements, while np.atleast_2d(v).T only creates a view.)
Just to add info to hpaulj's answer: I was curious about how fast the four methods described were. The winner is the method that adds a new axis at the end of the 1D array.
Here is what I ran:
import numpy as np
import timeit
v = [1,2,3,4,5]
print('atleast2d:',timeit.timeit(lambda:np.atleast_2d(v).T))
print('reshape:',timeit.timeit(lambda:np.array(v).reshape(-1,1))) # saves the use of len()
print('v[:,None]:', timeit.timeit(lambda:np.array(v)[:,None])) # adds a new dim at end
print('np.array(v,ndmin=2).T:', timeit.timeit(lambda:np.array(v,ndmin=2).T)) # used by column_stack
And the results:
atleast2d: 4.455070924214851
reshape: 2.0535152913971615
v[:,None]: 1.8387219828073285
np.array(v,ndmin=2).T: 3.1735243063353664

Create matrix with 2 arrays in numpy

I want to find a command in numpy that multiplies a column vector by a row vector to give a matrix:
[1, 1, 1, 1]^T * [2, 3] = [[2, 3], [2, 3], [2, 3], [2, 3]]
First, let's define your 1-D numpy arrays:
In [5]: one = np.array([ 1,1,1,1 ]); two = np.array([ 2,3 ])
Now, let's multiply them:
In [6]: one[:, np.newaxis] * two[np.newaxis, :]
Out[6]:
array([[2, 3],
       [2, 3],
       [2, 3],
       [2, 3]])
This used numpy's newaxis to add the appropriate axes to get a 4x2 output matrix.
The problem you are encountering is that both of your vectors are neither column nor row vectors - they're just vectors. If you look at len(vec.shape) it's 1.
What you can do is use numpy.reshape to turn your column vector into shape (m, 1) and your row vector into shape (1, n).
import numpy as np
colu = np.reshape(u, (u.shape[0], 1))
rowv = np.reshape(v, (1, v.shape[0]))
Now when you multiply colu and rowv you'll get a matrix with shape (m, n).
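For example (my illustration of the above, with u and v filled in; np.outer(u, v) would give the same matrix in one call):
import numpy as np

u = np.array([1, 1, 1, 1])
v = np.array([2, 3])

colu = np.reshape(u, (u.shape[0], 1))   # shape (4, 1)
rowv = np.reshape(v, (1, v.shape[0]))   # shape (1, 2)

print(colu * rowv)
# [[2 3]
#  [2 3]
#  [2 3]
#  [2 3]]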
If you need a matrix - use matrices. This way you can use your expression nearly verbatim:
np.matrix([1,1,1,1]).T * np.matrix([2,3])
You might want to use numpy.kron(a, b); it takes the Kronecker product of two arrays. You can see the b vector as a block: the function places this block, multiplied by the corresponding coefficient of the a vector, at the position of that coefficient. You can also use it for matrices.
For your example it would look like:
import numpy as np
vecA = np.array([[1],[1],[1],[1]])
vecB = np.array([2,3])
Out = np.kron(vecA,vecB)
this returns
>>> Out
array([[2, 3],
       [2, 3],
       [2, 3],
       [2, 3]])
Hope this helps you.
