How to create a matrix that includes ranges of numbers? - python

I want to create matrices that contain ranges of integers, like
[[range(1, 3), 0, 0],
[range(5, 7), range(0, 2), 0],
[0, range(1, 3), 0]]
The answer I expect is
[[[1, 0, 0], [5, 0, 0], [0, 1, 0]],
 [[2, 0, 0], [5, 0, 0], [0, 1, 0]],
 [[1, 0, 0], [6, 0, 0], [0, 1, 0]],
 [[2, 0, 0], [6, 0, 0], [0, 1, 0]],
 [[1, 0, 0], [5, 0, 0], [0, 2, 0]],
 [[2, 0, 0], [5, 0, 0], [0, 2, 0]],
 [[1, 0, 0], [6, 0, 0], [0, 2, 0]],
 [[2, 0, 0], [6, 0, 0], [0, 2, 0]]]
rather than range objects inside the matrix.
I can achieve my goal using loops:
def foo(p1_range, p2_range, p3_range):
    for p1 in range(p1_range):
        for p2 in range(p2_range):
            for p3 in range(p3_range):
                yield [[p1, 0, 0], [p2, 0, 0], [0, p3, 0]]
Is there any pythonic way to solve this?
Note: I normally need 7 parameters in my main code... It looks awful this way.

You can use itertools.product to reduce indentation levels and eliminate the nested loops:
from itertools import product

def foo(p1_range, p2_range, p3_range):
    for p1, p2, p3 in product(p1_range, p2_range, p3_range):
        yield [[p1, 0, 0], [p2, 0, 0], [0, p3, 0]]
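For illustration, a minimal usage sketch with the ranges from the question (note that product varies the last argument fastest, so the ordering differs from the hand-written listing above; the same pattern extends unchanged to 7 parameters):
for m in foo(range(1, 3), range(5, 7), range(1, 3)):
    print(m)
# [[1, 0, 0], [5, 0, 0], [0, 1, 0]]
# [[1, 0, 0], [5, 0, 0], [0, 2, 0]]
# ...
# [[2, 0, 0], [6, 0, 0], [0, 2, 0]]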

Related

how to create or expand an identity-like matrix in python

I'm trying to create a matrix like this:
[[A 0 0],
[0 B 0],
[0 0 C]]
in which A, B, C could each be either a submatrix or a constant.
Suppose I get one of the submatrices first:
[[1 2],
[3 4]]
and then the next:
[[5 0 0],
[0 6 0],
[0 0 7]]
How can I concatenate them into the format below?
[[1 2 0 0 0],
[3 4 0 0 0],
[0 0 5 0 0],
[0 0 0 6 0],
[0 0 0 0 7]]
You can simply use scipy.linalg.block_diag, as follows:
from scipy.linalg import block_diag
A = [[1, 2], [3, 4]]
B = [[5, 0, 0],
     [0, 6, 0],
     [0, 0, 7]]
block_diag(A, B)
Output:
array([[1, 2, 0, 0, 0],
       [3, 4, 0, 0, 0],
       [0, 0, 5, 0, 0],
       [0, 0, 0, 6, 0],
       [0, 0, 0, 0, 7]])
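If you would rather avoid the SciPy dependency, a plain-NumPy sketch of the same idea (assuming both inputs are 2-D) is to allocate a zero matrix of the combined shape and copy each block onto the diagonal:
import numpy as np

def block_diag_2(A, B):
    A, B = np.asarray(A), np.asarray(B)
    # Zero matrix large enough to hold both blocks on the diagonal.
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]), dtype=A.dtype)
    out[:A.shape[0], :A.shape[1]] = A   # top-left block
    out[A.shape[0]:, A.shape[1]:] = B   # bottom-right block
    return out

block_diag_2(A, B)  # same 5x5 result as block_diag(A, B)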

How to delete DF rows based on multiple column conditions?

Here's an example of DF:
EC1 EC2 CDC L1 L2 L3 L4 L5 L6 VNF
0 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [1, 0]
1 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1]
2 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [-1, 0]
3 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, -1]
4 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [1, 0]
5 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [0, 1]
6 [1, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [-1, 0]
How can I delete the rows where df['VNF'] is [-1, 0] or [0, -1] and df['EC1'], df['EC2'] and df['CDC'] all have a value of 0 in the same index position as the -1 in df['VNF']?
The expected result would be:
EC1 EC2 CDC L1 L2 L3 L4 L5 L6 VNF
0 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [1, 0]
1 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1]
2 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [1, 0]
3 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [0, 1]
4 [1, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [-1, 0]
Here's the constructor for the DataFrame:
data = {'EC1': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [1, 0]],
        'EC2': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
        'CDC': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 1], [0, 1], [0, 1]],
        'L1': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
        'L2': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
        'L3': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
        'L4': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
        'L5': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 1], [0, 1], [0, 1]],
        'L6': [[0, 0], [0, 0], [0, 0], [0, 0], [0, 1], [0, 1], [0, 1]],
        'VNF': [[1, 0], [0, 1], [-1, 0], [0, -1], [1, 0], [0, 1], [-1, 0]]}
df = pd.DataFrame(data)
You can explode every column of df, identify the exploded rows that satisfy the first condition (the sum of the "VNF" values for that row must be -1) and the second condition, and filter out the elements that satisfy both to create temp. Then, since each original cell must have two elements, count how many elements each index still has with transform('count'), keep only the indices that still have two, and finally groupby the index and aggregate back to lists:
exploded = df.explode(df.columns.tolist())
first_cond = exploded.groupby(level=0)['VNF'].transform('sum').eq(-1)
second_cond = exploded['VNF'].eq(-1) & exploded['EC1'].eq(0) & exploded['EC2'].eq(0) & exploded['CDC'].eq(0)
temp = exploded[~(first_cond & second_cond)]
counts = temp.groupby(level=0)['VNF'].transform('count')
out = temp[counts.gt(1)].groupby(level=0).agg(list).reset_index(drop=True)
Output:
EC1 EC2 CDC L1 L2 L3 L4 L5 L6 \
0 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0]
1 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0]
2 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1]
3 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1]
4 [1, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1]
VNF
0 [1, 0]
1 [0, 1]
2 [1, 0]
3 [0, 1]
4 [-1, 0]
A list comprehension to find which indexes to drop might make the conditions easier to see:
columns = df.EC1, df.EC2, df.CDC, df.VNF
inds_to_drop = [iloc
                for iloc, (ec1, ec2, cdc, vnf) in enumerate(zip(*columns))
                if vnf == [-1, 0] or vnf == [0, -1]
                if all(val[idx] == 0
                       for idx in (vnf.index(-1),) for val in (ec1, ec2, cdc))]
new_df = df.drop(df.index[inds_to_drop])
to get
>>> new_df
EC1 EC2 CDC L1 L2 L3 L4 L5 L6 VNF
0 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [1, 0]
1 [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1]
4 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [1, 0]
5 [0, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [0, 1]
6 [1, 0] [0, 0] [0, 1] [0, 0] [0, 0] [0, 0] [0, 0] [0, 1] [0, 1] [-1, 0]
The list comprehension starts with the outer for loop over the column values and their integer indexes:
for iloc, (ec1, ec2, cdc, vnf) in enumerate(zip(*columns))
Then the first condition to drop kicks in:
df['VNF'] = [-1, 0] or [0, -1]
if vnf == [-1, 0] or vnf == [0, -1]
And the second condition:
df['EC1'], df['EC2'] and df['CDC'] all have a value of 0 in the same index position as the -1 in df['VNF']
if all(val[idx] == 0 for idx in (vnf.index(-1),) for val in (ec1, ec2, cdc))
Here, we check whether the values of all 3 columns satisfy the criterion. A trick here is the one-turn loop for idx in (vnf.index(-1),), which evaluates the index of -1 only once (compare with val[vnf.index(-1)] for val in (ec1, ec2, cdc), which recomputes it for each column and is less efficient).
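In isolation, the trick looks like this (a tiny sketch with made-up row values):
ec1, ec2, cdc, vnf = [0, 0], [0, 0], [0, 1], [0, -1]
row_matches = all(val[idx] == 0
                  for idx in (vnf.index(-1),)   # binds idx = 1 exactly once
                  for val in (ec1, ec2, cdc))
print(row_matches)  # False: cdc[1] == 1, so this row would not be dropped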
Then the list is comprehended with the integer index locations of rows to drop:
>>> inds_to_drop
[2, 3]
If you have a RangeIndex, i.e. a 0..N-1 kind of index, you can directly say new_df = df.drop(inds_to_drop). But with a custom index (e.g. ["a", "d", "e", "f"]), we look up the real index labels with df.index[inds_to_drop] and then drop those (they would be "e", "f" in that example); this covers all cases.
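As a small illustration of that last point, a sketch with a hypothetical string index on the same 7-row frame:
df2 = df.copy()
df2.index = list("abcdefg")          # pretend the index is not a RangeIndex
df2.drop(df2.index[inds_to_drop])    # positions [2, 3] map to labels "c" and "d"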

Unexpected behaviour in list value change

I defined this function:
def newMap(dim, n):
    tc = [0 for i in range(n)]
    return [[tc for _ in range(dim)] for _ in range(dim)]
Which creates a list of lists of zeroes. For example
m = newMap(2,2)
print(m)
returns
[[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
I want to change one of the zeroes to obtain [[[1, 0], [0, 0]], [[0, 0], [0, 0]]] and tried doing so by
m[0][0][0] = 1
which unexpectedly returns [[[1, 0], [1, 0]], [[1, 0], [1, 0]]] instead of [[[1, 0], [0, 0]], [[0, 0], [0, 0]]].
However, if I define a = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]] and then do
a[0][0][0] = 1
print(a)
it returns [[[1, 0], [0, 0]], [[0, 0], [0, 0]]], which is what I want.
Why does this happen? Shouldn't the two definitions be equivalent? How can I prevent it from happening in the first case?
Use tc.copy(); this fixes it, because each cell then gets its own list instead of every cell referencing the same tc object. I tried it and it works:
def newMap(dim, n):
    tc = [0 for i in range(n)]
    return [[tc.copy() for _ in range(dim)] for _ in range(dim)]

a = newMap(2, 2)
a
# [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
a[0][0][0] = 1
a
# [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]
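To see the aliasing directly, here is a small sketch (the original version is renamed newMap_shared purely for illustration):
def newMap_shared(dim, n):   # original version: every cell is the same tc object
    tc = [0 for i in range(n)]
    return [[tc for _ in range(dim)] for _ in range(dim)]

def newMap_copied(dim, n):   # fixed version: each cell gets its own copy
    tc = [0 for i in range(n)]
    return [[tc.copy() for _ in range(dim)] for _ in range(dim)]

m = newMap_shared(2, 2)
print(m[0][0] is m[1][1])    # True: all four cells alias one list, so one write shows up everywhere
m2 = newMap_copied(2, 2)
print(m2[0][0] is m2[1][1])  # False: independent lists, so m2[0][0][0] = 1 changes only one cell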

Updating a slice in rank3 tensorflow tensor along the third axis (Z) given a location (X,Y)

I am trying to re-implement the function below (written in NumPy) using TensorFlow 1.9.0.
def lateral_inhibition2(conv_spikes, SpikesPerNeuronAllowed):
    vbn = np.where(SpikesPerNeuronAllowed == 0)
    conv_spikes[vbn[0], vbn[1], :] = 0
    return conv_spikes
conv_spikes is a binary tensor of rank 3 and SpikesPerNeuronAllowed is a tensor of rank 2. conv_spikes indicates whether the neuron at a given location has spiked: a 1 means the neuron at that location has spiked, and a 0 means it hasn't. SpikesPerNeuronAllowed indicates whether all the neurons at an X-Y location along the Z axis are allowed to spike: a 1 means the neurons at the corresponding X-Y location in conv_spikes, along the Z axis, are allowed to spike, and a 0 means they are not.
conv_spikes2 = (np.random.rand(5,5,3)>=0.5).astype(np.int16)
temp2 = np.random.choice([0, 1], size=(25,), p=[3./4, 1./4])
SpikesPerNeuronAllowed2 = temp2.reshape(5,5)
print(conv_spikes2[:,:,0])
print
print(conv_spikes2[:,:,1])
print
print(conv_spikes2[:,:,2])
print
print(SpikesPerNeuronAllowed2)
produces the following output
##First slice of conv_spikes across Z-axis
[[0 0 1 1 1]
[1 0 0 1 1]
[1 0 1 1 0]
[0 1 0 1 1]
[0 1 0 0 0]]
##Second slice of conv_spikes across Z-axis
[[0 0 1 0 0]
[0 0 1 0 1]
[0 0 1 1 1]
[0 0 0 1 0]
[1 1 1 1 1]]
##Third slice of conv_spikes across Z-axis
[[0 1 1 0 0]
[0 0 1 0 0]
[0 1 1 0 0]
[0 0 0 1 0]
[1 0 1 1 1]]
##SpikesPerNeuronAllowed2
[[0 0 0 0 1]
[0 0 0 0 0]
[0 0 0 0 0]
[1 1 0 0 0]
[0 0 0 1 0]]
Now, when the function is called
conv_spikes2 = lateral_inhibition2(conv_spikes2,SpikesPerNeuronAllowed2)
print(conv_spikes2[:,:,0])
print
print(conv_spikes2[:,:,1])
print
print(conv_spikes2[:,:,2])
produces the following output
##First slice of conv_spikes across Z-axis
[[0 0 0 0 1]
[0 0 0 0 0]
[0 0 0 0 0]
[0 1 0 0 0]
[0 0 0 0 0]]
##Second slice of conv_spikes across Z-axis
[[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 1 0]]
##Third slice of conv_spikes across Z-axis
[[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 1 0]]
I tried to repeat the same in TensorFlow as below:
conv_spikes_tf = tf.Variable((np.random.rand(5,5,3)>=0.5).astype(np.int16))
a_placeholder = tf.placeholder(tf.float32,shape=(5,5))
b_placeholder = tf.placeholder(tf.float32)
inter2 = tf.where(tf.equal(a_placeholder,b_placeholder))
output = sess.run(inter2, feed_dict={a_placeholder: SpikesPerNeuronAllowed2, b_placeholder: 0})
print(output)
produces the following output:
[[0 0]
[0 1]
[0 2]
[0 3]
[1 0]
[1 1]
[1 2]
[1 3]
[1 4]
[2 0]
[2 1]
[2 2]
[2 3]
[2 4]
[3 2]
[3 3]
[3 4]
[4 0]
[4 1]
[4 2]
[4 4]]
Trying to update conv_spikes_tf with the code below results in an error. I tried going through the manual for scatter_nd_update, but I don't think I understood it very well.
update = tf.scatter_nd_update(conv_spikes_tf, output, np.zeros(output.shape[0]))
sess.run(update)
ValueError: The inner 1 dimensions of input.shape=[5,5,3] must match the inner 1 dimensions of updates.shape=[21,2]: Dimension 0 in both shapes must be equal, but are 3 and 2. Shapes are [3] and [2]. for 'ScatterNdUpdate_8' (op: 'ScatterNdUpdate') with input shapes: [5,5,3], [21,2], [21,2].
I don't understand the error message, specifically what "inner 1 dimensions" means, and how can I achieve the above numpy functionality with TensorFlow?
The last dim of updates in tf.scatter_nd_update should be 3, which is equal to the last dim of ref: with indices of shape [21, 2] and ref of shape [5, 5, 3], updates must have shape [21, 3] (that is, indices.shape[:-1] + ref.shape[indices.shape[-1]:]).
update = tf.scatter_nd_update(conv_spikes_tf, output, np.zeros((output.shape[0], 3), dtype=np.int16))  # dtype should match conv_spikes_tf
If I understand correctly, you want to apply SpikesPerNeuronAllowed2 (the mask) to conv_spikes. An easier way is to reshape conv_spikes to (3, 5, 5) and multiply by SpikesPerNeuronAllowed2.
I use a constant example to show the result. You can change it to tf.Variable as well.
conv = (np.random.rand(3,5,5)>=0.5).astype(np.int32)
tmp = np.random.choice([0, 1], size=(25,), p=[3./4, 1./4])
mask = tmp.reshape(5,5)
# array([[[1, 1, 0, 0, 0],
# [0, 1, 0, 0, 1],
# [0, 1, 0, 0, 1],
# [1, 0, 0, 0, 1],
# [1, 0, 0, 1, 0]],
# [[1, 0, 0, 0, 1],
# [1, 0, 1, 1, 1],
# [0, 0, 1, 0, 1],
# [0, 0, 0, 1, 1],
# [0, 0, 0, 1, 1]],
# [[0, 0, 0, 1, 0],
# [0, 1, 1, 0, 1],
# [0, 1, 1, 0, 1],
# [1, 1, 1, 1, 0],
# [1, 1, 1, 0, 1]]], dtype=int32)
# array([[0, 0, 0, 1, 1],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 1, 0, 1, 0],
# [0, 0, 1, 0, 1]])
tf_conv = tf.constant(conv, dtype=tf.int32)
tf_mask = tf.constant(mask, dtype=tf.int32)
res = tf_conv * tf_mask
sess = tf.InteractiveSession()
sess.run(res)
# array([[[0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0]],
# [[0, 0, 0, 0, 1],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 1]],
# [[0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 1, 0, 1, 0],
# [0, 0, 1, 0, 1]]], dtype=int32)
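Note that the snippet above works in (channel, height, width) order, i.e. a (3, 5, 5) array, rather than the (5, 5, 3) layout from the question. If you want to keep the original layout, one option (a sketch under that assumption, not tested against the original code) is to broadcast the 2-D mask along the last axis instead of reshaping:
tf_conv_hwc = tf.constant(conv_spikes2, dtype=tf.int32)            # shape (5, 5, 3)
tf_mask_hw = tf.constant(SpikesPerNeuronAllowed2, dtype=tf.int32)  # shape (5, 5)
res_hwc = tf_conv_hwc * tf.expand_dims(tf_mask_hw, axis=-1)        # mask broadcast over the Z axis
sess.run(res_hwc)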

How to get a count in a 2d list in python?

I want to count values in a list, but I don't know how to do it for a 2d list.
the_string = raw_input()
COLS, ROWS = the_string.split()
two_string = raw_input()
co, ro = two_string.split()
Mylist = [[0 for cols in range(int(COLS))] for rows in range(int(ROWS))]
print(Mylist)
for cols in range(int(co)):
    for rows in range(int(ROWS)):
        Mylist[rows][cols] = 1
for rows in range(int(ro)):
    for cols in range(int(COLS)):
        Mylist[rows][cols] = 1
print(Mylist)
d = sum(row.count('1') for row in Mylist)
print(d)
I think this is the line that produces the wrong output:
d = sum(row.count('1') for row in Mylist)
It is wrong, but I can't see how to change it.
This is the result (the first two lines, 3 4 and 2 1, are just the input):
3 4
2 1
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
[[1, 1, 1], [1, 1, 0], [1, 1, 0], [1, 1, 0]]
0
Why '0'?
Your issue is that you are counting occurrences of the string "1" with .count(), not the numerical value 1. Instead, try:
d = sum(row.count(1) for row in Mylist)
Substituting this into your program and running:
3 4
2 1
Yields:
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
[[1, 1, 1], [1, 1, 0], [1, 1, 0], [1, 1, 0]]
9
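For reference, an equivalent way to count (a small sketch, same result) is to iterate over every cell directly instead of counting per row:
d = sum(1 for row in Mylist for val in row if val == 1)
print(d)  # 9 for the input above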
