I would like to center my set of rows with several means and get several sets of centered rows.
My data has shape (4, 3), i.e. four 3D vectors:
data = tf.get_variable("myvar1", shape=[4, 3], dtype=tf.float64)
I have two centers (two 3D vectors):
mu = tf.get_variable("mu", initializer=tf.constant(np.arange(2*3).reshape(2, 3), dtype=tf.float64))
I would like to center the data once for each mu. In numpy I would write a loop:
data = np.arange(4 * 3).reshape(4, 3)
mu = np.arange(2*3).reshape(2, 3)
centered_data = np.empty((2, 4, 3))
for i_data in range(len(data)):
    for i_mu in range(len(mu)):
        centered = data[i_data] - mu[i_mu]
        centered_data[i_mu, i_data, :] = centered
How can I do the same in tensorflow?
A bulk (vectorized) method for numpy would also be appreciated!
Apparently I can insert a singleton dimension to trigger broadcasting:
data = tf.get_variable("myvar1", shape=[4, 3], dtype=tf.float64)
mu = tf.get_variable("mu", initializer=tf.constant(np.arange(2*3).reshape(2, 3), dtype=tf.float64))
centered_data = data - tf.expand_dims(mu, axis=1)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    centered_data_value, data_value, mu_value = sess.run([centered_data, data, mu], {data: np.arange(4 * 3).reshape(4, 3)})
    print("data: ", data_value)
    print("mu: ", mu_value)
    print("centered_data: ", centered_data_value)
The same in numpy:
mu = np.reshape(mu, (2, 1, 3))
centered_data = data - mu
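As a quick sanity check (a sketch reusing the loop version from above), the broadcast result matches the loop result:
import numpy as np

data = np.arange(4 * 3).reshape(4, 3)
mu = np.arange(2 * 3).reshape(2, 3)

# loop version
looped = np.empty((2, 4, 3))
for i_data in range(len(data)):
    for i_mu in range(len(mu)):
        looped[i_mu, i_data, :] = data[i_data] - mu[i_mu]

# broadcast version: (2, 1, 3) - (4, 3) -> (2, 4, 3)
broadcast = data - np.reshape(mu, (2, 1, 3))

assert np.allclose(looped, broadcast)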
You only need to use - or tf.subtract; it will do the element-wise operation with broadcasting (once mu has been given the extra singleton dimension as above):
centered_data = tf.subtract(data, tf.expand_dims(mu, axis=1))
Related
I have a 3D array of shape (4, 3, 3) that I would like to iteratively multiply with a 1D array (the t variable) and sum, so that I end up with an array A that is the summation of the four (3, 3) arrays.
I'm unsure how I should be assigning indexes, or how and whether I should be using np.ndenumerate.
Thanks
import numpy as np
import math
#Enter material constants for calculation of stiffness matrix
E1 = 20
E2 = 1.2
G12 = 0.8
v12=0.25
v21=(v12/E1)*E2
theta = np.array([30,-30,-30,30])
deg = math.pi * theta / 180
k = len(theta) #number of layers
t = np.array([0.005,0.005,0.005,0.005])
# Calculation of Q values (placeholder values)
Q11 = 1
Q12 = 2
Q16 = 6  # placeholder, used by the Qbar16 term below
Q21 = 3
Q22 = 4
Q66 = 5
Qbar = np.zeros((len(theta),3,3),order='F')
# CALCULATING THE VALUES OF THE QBAR MATRIX
for i, x in np.ndenumerate(deg):
    m = np.cos(x)  # cos of rotated lamina angle
    n = np.sin(x)  # sin of rotated lamina angle
    Qbar11 = Q11*3
    Qbar12 = Q22*4
    Qbar16 = Q16*4
    Qbar21 = Qbar12
    Qbar22 = Q22*1
    Qbar26 = Q66*2
    Qbar66 = Q12*3
    Qbar[i] = np.array([[Qbar11, Qbar12, Qbar16], [Qbar21, Qbar22, Qbar26], [Qbar16, Qbar26, Qbar66]], order='F')
print(Qbar)
A = np.zeros((3,3))
for i in np.nditer(t):
    A[i] = Qbar[i]*t[i]
    A = sum(A[i])
If I understand correctly, you want to multiply Qbar and t over the first axis, and then sum the result over the first axis (which results in an array of shape (3, 3)).
I created random arrays to make the code minimal:
import numpy as np
Qbar = np.random.randint(2, size=(4, 3, 3))
t = np.arange(4)
A = (Qbar * t[:, None, None]).sum(axis=0)
t[:, None, None] will create two new dimensions, so that the shape becomes (4, 1, 1), which can be multiplied with Qbar element-wise. Then we just have to sum over the first axis.
NB: A = np.tensordot(t, Qbar, axes=([0],[0])) also works and can be faster for larger dimensions, but for the dimensions you provided I prefer the first solution.
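For reference, a quick check (a sketch using the same random arrays as above) that the broadcasting and tensordot formulations agree:
A1 = (Qbar * t[:, None, None]).sum(axis=0)
A2 = np.tensordot(t, Qbar, axes=([0], [0]))
assert np.allclose(A1, A2)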
I need to divide a 2D matrix into a set of 2D patches with a certain stride, then multiply every patch by its center element and sum the elements of each patch.
It feels not unlike a convolution where a separate kernel is used for every element of the matrix.
[The original question included images illustrating the windowing, how each element of the result matrix is calculated, and what the result should look like.]
Here's a solution I came up with:
import numpy as np

window_shape = (2, 2)
stride = 1
# Matrix
m = np.arange(1, 17).reshape((4, 4))
# Pad it once per axis to make sure the number of views
# equals the number of elements
m_padded = np.pad(m, (0, 1))
# This function divides the array into `windows`, from:
# https://stackoverflow.com/questions/45960192/using-numpy-as-strided-function-to-create-patches-tiles-rolling-or-sliding-w#45960193
w = window_nd(m_padded, window_shape, stride)
ww, wh, *_ = w.shape
w = w.reshape((ww * wh, 4)) # the first two dimensions multiplied give the number of rows
# Tile each center element for element-wise multiplication
m_tiled = np.tile(m.ravel(), (4, 1)).transpose()
result = (w * m_tiled).sum(axis = 1).reshape(m.shape)
In my view it's not very efficient, as a few arrays are allocated in the intermediate steps.
What is a better or more efficient way to accomplish this?
Try scipy.signal.convolve
import numpy as np
from scipy.signal import convolve
window_shape = (2, 2)
stride = 1
# Matrix
m = np.arange(1, 17).reshape((4, 4))
# Pad it once per axis to make sure the number of views
# equals the number of elements
m_padded = np.pad(m, (0, 1))
output = convolve(m_padded, np.ones(window_shape), 'valid') * m
print(output)
Output:
array([[ 14., 36., 66., 48.],
[150., 204., 266., 160.],
[414., 500., 594., 336.],
[351., 406., 465., 256.]])
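Another option worth sketching (assuming NumPy >= 1.20) is numpy.lib.stride_tricks.sliding_window_view, which replaces the external window_nd helper with views and gives the same result as the convolve approach:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

m = np.arange(1, 17).reshape((4, 4))
m_padded = np.pad(m, (0, 1))

# one 2x2 window per element of m, shape (4, 4, 2, 2)
windows = sliding_window_view(m_padded, (2, 2))
# multiply each window by its center element and sum the window elements
result = (windows * m[:, :, None, None]).sum(axis=(-2, -1))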
Suppose p has shape (4, 3, 2). I want to iterate 12 times, each time over an array of shape (2,):
q = np.empty_like(p)
op_axes = [list(range(len(p.shape) - 1)) + [-1]] * 2
it = np.nditer([p, q],
               op_axes=op_axes,
               op_flags=[['readonly'], ['writeonly', 'allocate']])
with it:
    for this_p, this_q in it:
        print(this_p.shape)  # I want this to have shape (2,)
        this_q[...] = some_function_of(this_p)
What am I doing wrong?
Best I can do:
q = np.empty_like(p)
for i in np.ndindex(p.shape[:-1]):
    this_p = p[i]
    ...
    q[i] = solution.x
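For illustration, a minimal runnable sketch of this ndindex pattern with a made-up stand-in for some_function_of (here it just reverses each length-2 vector; the actual function comes from your problem):
import numpy as np

p = np.arange(4 * 3 * 2).reshape(4, 3, 2)
q = np.empty_like(p)
for i in np.ndindex(p.shape[:-1]):
    this_p = p[i]        # shape (2,)
    q[i] = this_p[::-1]  # stand-in for some_function_of(this_p)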
import tensorflow as tf
with tf.Session() as sess:
    with tf.variable_scope('masssdsms'):
        a = tf.get_variable('a', [1000, 24, 128], dtype=tf.float32, initializer=tf.random_normal_initializer(stddev=0.1))
        b = tf.get_variable('b', [1000, 15, 128], dtype=tf.float32, initializer=tf.random_normal_initializer(stddev=0.1))
I want to get a new tensor named c from a and b.
1000 is the batch size, and c's shape should be (1000, 20, 10, 1). For every instance ai from a and bi from b, both are two-dimensional tensors.
The new instance ci is computed from ai and bi and has 20 * 10 = 200 elements, where every element is the dot product of a row of ai and a row of bi over the 128 dimensions. So there are 200 dot-product results in total. ci is more like a 2-D image.
How can I implement this operation?
Modified:
When I use this in practice, the dot product should be replaced with some other function, such as a Gaussian distance or a cosine distance, which is the contact_func in the graph below.
So I need a generic method to do this.
Here is what I designed, but I am not sure whether it is an efficient way to do it:
with tf.Session() as sess:
    with tf.variable_scope('masssdsms'):
        a = tf.get_variable('a', [1000, 24, 128], dtype=tf.float32, initializer=tf.random_normal_initializer(stddev=0.1))
        b = tf.get_variable('b', [1000, 15, 128], dtype=tf.float32, initializer=tf.random_normal_initializer(stddev=0.1))

        i = 999  # for i in range(1000):
        ai = tf.slice(a, [i, 0, 0], [1, -1, -1])  # (1, 24, 128)
        bi = tf.slice(b, [i, 0, 0], [1, -1, -1])  # (1, 15, 128)
        ci = contact_func(ai, bi)  # (1, 24, 15)
You can achieve that with clever application of broadcasting. Try this:
a = tf.ones([1000, 20, 128])
b = tf.ones([1000, 10, 128])
a = tf.expand_dims(a, axis=1) # [1000, 1, 20, 128]
b = tf.expand_dims(b, axis=2) # [1000, 10, 1, 128]
products = a * b # [1000, 10, 20, 128]
reduced = tf.reduce_sum(products, axis=-1) # [1000, 10, 20]
products contains all pairwise multiplications of the items in a and b, and reduced aggregates the sum over the last axis.
Doing a matmul of a with b transposed on its last two dimensions gives the desired result:
c = tf.matmul(a, tf.transpose(b, [0, 2, 1])) # [1000, 20, 10]
# to get (1000, 20, 10, 1) you do
tf.expand_dims(c, 3)
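Equivalently (a sketch assuming the same shapes), the batch of pairwise dot products can also be written with tf.einsum:
c = tf.einsum('bik,bjk->bij', a, b)  # [1000, 20, 10]
c = tf.expand_dims(c, 3)             # [1000, 20, 10, 1]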
EDIT:
For the contact_func operation, you may need to do the broadcasting manually using the tile operator. Here is the code for the Gaussian distance:
# use tile to repeat the rows
d = tf.reshape(tf.tile(a, [1, 1, b.shape[1]]), (-1, a.shape[1]*b.shape[1], a.shape[2]))
# [1000, 360, 128]
# repeat the columns
e = tf.tile(b, [1, a.shape[1], 1])
# [1000, 360, 128]
# exp(-d_i_j), where d_i_j is the squared Euclidean distance between row i of a and row j of b
c = tf.reshape(tf.exp(-tf.reduce_sum(tf.square(d - e), 2)), (-1, a.shape[1], b.shape[1]))
# [1000, 24, 15]
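A similar sketch for a cosine-distance contact_func (an assumed variant, not from the original answer): normalize the rows and reuse the matmul trick:
# cosine similarity between every row of a and every row of b
a_n = tf.nn.l2_normalize(a, dim=2)                  # [1000, 24, 128]
b_n = tf.nn.l2_normalize(b, dim=2)                  # [1000, 15, 128]
cos = tf.matmul(a_n, tf.transpose(b_n, [0, 2, 1]))  # [1000, 24, 15]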
I'm a newbie to tensorflow and I'm trying to get the index of the maximum value in a Tensor. Here is the code:
def select(input_layer):
    shape = input_layer.get_shape().as_list()
    rel = tf.nn.relu(input_layer)
    print(rel)
    redu = tf.reduce_sum(rel, 3)
    print(redu)
    location2 = tf.argmax(redu, 1)
    print(location2)
    return location2

sess = tf.InteractiveSession()
I = tf.random_uniform([32, 3, 3, 5], minval=-541, maxval=23, dtype=tf.float32)
matI, matO = sess.run([I, select(I)])
print(matI, matO)
Here is the output:
Tensor("Relu:0", shape=(32, 3, 3, 5), dtype=float32)
Tensor("Sum:0", shape=(32, 3, 3), dtype=float32)
Tensor("ArgMax:0", shape=(32, 3), dtype=int64)
...
Because of dimension=1 in the argmax function, the shape of Tensor("ArgMax:0") is (32, 3). Is there any way to get an argmax output tensor of size (32,) without doing a reshape before applying the argmax?
You probably don't want an output of size (32,), because when you argmax along several dimensions you usually want the coordinates of the max for all the reduced dimensions. In your case, you would want an output of size (32, 2).
You can do a two-dimensional argmax like this:
import numpy as np
import tensorflow as tf
x = np.zeros((10,9,8))
# pick a random position for each batch image that we set to 1
pos = np.stack([np.random.randint(9,size=10), np.random.randint(8,size=10)])
posext = np.concatenate([np.expand_dims([i for i in range(10)], axis=0), pos])
x[tuple(posext)] = 1
a = tf.argmax(tf.reshape(x, [10, -1]), axis=1)
pos2 = tf.stack([a // 8, tf.mod(a, 8)]) # recovered positions, one per batch image
sess = tf.InteractiveSession()
# check that the recovered positions are as expected
assert (pos == pos2.eval()).all(), "it did not work"
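Equivalently (a small sketch on the evaluated flat indices), the two coordinates can be recovered with np.unravel_index instead of the manual // and mod:
a_val = a.eval()                                  # flat indices, shape (10,)
pos3 = np.stack(np.unravel_index(a_val, (9, 8)))  # shape (2, 10)
assert (pos == pos3).all(), "it did not work"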