create a list of lists with a checkerboard pattern - python

I would like to change the values of this list by alternating the 0 and 1 values in a checkerboard pattern.
table =
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
I tried:
for i in range(len(table)):
    for j in range(0, len(table[i]), 2):  # I set a step of 2 in the range function
        table[i][j] = 0
but the count starts over for each inner list, so the result is:
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0
0 1 0 1 0
My question is: how can I change the loop to form a checkerboard pattern?
I expect the result to be like:
0 1 0 1 0
1 0 1 0 1
0 1 0 1 0
1 0 1 0 1
0 1 0 1 0

for i in range(len(table)):
    for j in range(len(table[i])):
        if (i + j) % 2 == 0:
            table[i][j] = 0
output:
[[0, 1, 0, 1, 0],
[1, 0, 1, 0, 1],
[0, 1, 0, 1, 0],
[1, 0, 1, 0, 1],
[0, 1, 0, 1, 0]]
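Once the `(i + j) % 2` trick is clear, the whole table can also be built in one step with a nested list comprehension (a compact variant of the loop above that doesn't rely on a pre-filled table):

```python
rows, cols = 5, 5
# each cell is 0 when row + column is even, 1 when it is odd
table = [[(i + j) % 2 for j in range(cols)] for i in range(rows)]
for row in table:
    print(*row)
```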

There doesn't appear to be any reliance on the original values in the list. Therefore it might be better to implement something that creates a list in the required format like this:
def checkboard(rows, columns):
    result = []
    for r in range(rows):
        e = r % 2  # restart each row on the right parity (keeps even column counts correct)
        c = []
        for _ in range(columns):
            c.append(e)
            e ^= 1
        result.append(c)
    return result

print(checkboard(5, 5))
print(checkboard(2, 3))
print(checkboard(4, 4))
Output:
[[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]
[[0, 1, 0], [1, 0, 1]]
[[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]


Measuring average cosine similarity between the groups

I have the following data frame:
Group Vector
1 [1 1 0 1 0 0]
1 [1 0 0 1 0 0]
1 [1 0 0 1 1 1]
1 [0 0 0 1 0 1]
2 [0 0 0 1 0 1]
2 [0 0 0 1 0 1]
2 [0 1 1 1 0 1]
2 [1 1 0 0 0 1]
How could I calculate the average cosine similarity within the groups? This is the expected outcome (note: I made the numbers up for the calculation):
Group Vector Average_Similarity
1 [1 1 0 1 0 0] 0.34
1 [1 0 0 1 0 0] 0.34
1 [1 0 0 1 1 1] 0.34
1 [0 0 0 1 0 1] 0.34
2 [0 0 0 1 0 1] 0.48
2 [0 0 0 1 0 1] 0.48
2 [0 1 1 1 0 1] 0.48
2 [1 1 0 0 0 1] 0.48
Suppose we read the data from your example like:
import pandas as pd
from ast import literal_eval

df = pd.read_clipboard(sep="|", converters={"Vector": literal_eval})
df
Group Vector
0 1 [1, 1, 0, 1, 0, 0]
1 1 [1, 0, 0, 1, 0, 0]
2 1 [1, 0, 0, 1, 1, 1]
3 1 [0, 0, 0, 1, 0, 1]
4 2 [0, 0, 0, 1, 0, 1]
5 2 [0, 0, 0, 1, 0, 1]
6 2 [0, 1, 1, 1, 0, 1]
7 2 [1, 1, 0, 0, 0, 1]
Then try:
from scipy.spatial.distance import pdist

df["Average_Similarity"] = df.groupby("Group")["Vector"].transform(
    lambda group: pdist(group.to_list(), metric="cosine").mean()
)
df
Group Vector Average_Similarity
0 1 [1, 1, 0, 1, 0, 0] 0.380615
1 1 [1, 0, 0, 1, 0, 0] 0.380615
2 1 [1, 0, 0, 1, 1, 1] 0.380615
3 1 [0, 0, 0, 1, 0, 1] 0.380615
4 2 [0, 0, 0, 1, 0, 1] 0.365323
5 2 [0, 0, 0, 1, 0, 1] 0.365323
6 2 [0, 1, 1, 1, 0, 1] 0.365323
7 2 [1, 1, 0, 0, 0, 1] 0.365323
You can do a groupby apply to get the full pairwise similarity matrix per group:
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

df.groupby('Group').apply(lambda x: cosine_similarity(np.array(x['Vector'].to_list())))
Group
1    [[1.0000000000000002, 0.816496580927726, 0.577...
2    [[0.9999999999999998, 0.9999999999999998, 0.70...
Reconstruct your DataFrame so each value in the vector is placed into its own cell. Then we self-merge within each group and use the index to de-duplicate comparisons (i.e. we compare row 1 to row 3 but not also row 3 to row 1).
Then we calculate the cosine similarity for all rows and average within group.
df = pd.concat([df['Group'], pd.DataFrame(df['Vector'].tolist())], axis=1).reset_index()
m = (df.merge(df, on='Group')
       .query('index_x > index_y')
       .drop(columns=['index_x', 'index_y'])
       .set_index('Group'))

X = m.filter(like='_x')
X.columns = X.columns.str.strip('_x')
Y = m.filter(like='_y')
Y.columns = Y.columns.str.strip('_y')

# cosine distance = 1 - cosine similarity
m['cos'] = 1 - (X * Y).sum(1).div(np.sqrt((X**2).sum(1)) * np.sqrt((Y**2).sum(1)), axis=0)
m.groupby(level=0)['cos'].mean()
Group
1 0.380615
2 0.365323
Name: cos, dtype: float64
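One caveat: `pdist` with `metric="cosine"` returns the cosine *distance* (1 − similarity), so the numbers above are average distances. If you want the average cosine similarity itself, a plain-NumPy sketch is to build the pairwise similarity matrix and average its upper triangle (distinct pairs only):

```python
import numpy as np

def avg_cosine_similarity(vectors):
    """Mean cosine similarity over all distinct pairs of row vectors."""
    X = np.asarray(vectors, dtype=float)
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalise each row
    sim = unit @ unit.T                                  # pairwise similarity matrix
    iu = np.triu_indices(len(X), k=1)                    # indices above the diagonal
    return sim[iu].mean()

group1 = [[1, 1, 0, 1, 0, 0],
          [1, 0, 0, 1, 0, 0],
          [1, 0, 0, 1, 1, 1],
          [0, 0, 0, 1, 0, 1]]
print(avg_cosine_similarity(group1))  # ≈ 0.619385, i.e. 1 - 0.380615
```

For group 1 above this gives 1 − 0.380615 ≈ 0.619385, consistent with the `pdist` result.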

How to keep track of row index of the rows I randomly select from a matrix?

I am trying to perform tournament selection in a GA, whereby I need to select two rows randomly. Is there a way of keeping track of the index values of the two random rows I select from the matrix self.population and storing those in variables?
At the moment it just outputs the two random rows but I need to keep track of which rows were selected.
Below is what I have so far although ideally I would like to store both rows I select from my matrix in separate variables.
self.population = [[0 1 1 1 0 0 1 1 0 1]
                   [1 0 1 1 0 0 0 1 1 1]
                   [0 0 0 0 0 1 1 0 0 0]
                   [1 1 0 0 1 1 1 0 1 1]
                   [0 1 0 1 1 1 1 1 1 0]
                   [0 0 0 0 1 0 1 1 1 0]]
def tournament_select(self):
    b = np.random.randint(0, self.population[0], 2)
    return self.population[b]
Is this what you're looking for?
from random import sample
import numpy as np

population = np.array([[0, 1, 1, 1, 0, 0, 1, 1, 0, 1],
                       [1, 0, 1, 1, 0, 0, 0, 1, 1, 1],
                       [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                       [1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
                       [0, 1, 0, 1, 1, 1, 1, 1, 1, 0],
                       [0, 0, 0, 0, 1, 0, 1, 1, 1, 0]])

def tournament_select():
    # sample() picks 2 distinct indices, so the same row is never chosen twice
    row_indices = sample(range(len(population)), k=2)
    return row_indices, population[row_indices]

row_indices, candidates = tournament_select()
print(row_indices)
print(candidates)
Output:
[2, 3]
[[0 0 0 0 0 1 1 0 0 0]
[1 1 0 0 1 1 1 0 1 1]]
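If you'd rather stay entirely inside NumPy, `np.random.choice` with `replace=False` returns the two distinct row indices directly. A sketch along the same lines as the answer above:

```python
import numpy as np

population = np.array([[0, 1, 1, 1, 0, 0, 1, 1, 0, 1],
                       [1, 0, 1, 1, 0, 0, 0, 1, 1, 1],
                       [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                       [1, 1, 0, 0, 1, 1, 1, 0, 1, 1]])

def tournament_select(pop):
    # two distinct row indices, then fancy-index the matrix with them
    idx = np.random.choice(len(pop), size=2, replace=False)
    return idx, pop[idx]

idx, candidates = tournament_select(population)
print(idx)
print(candidates)
```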

Cloning a column in 3d numpy array

Let's say I have a 3D array representing tic-tac-toe games (and their respective historical states):
[
[[0,0,0,1,1,0,0,0,1]], #<<--game 1
[[1,0,0,1,0,0,1,0,1]], #<<--game 2
[[1,0,0,1,0,0,1,0,1]] #<<--game 3
]
I would like to prepend a clone of these states, but then keep the historical records growing out to the right, where they act as an unadulterated historical record.
So the next iteration would look like this:
[
[[0,0,0,1,1,0,0,0,1], [0,0,0,1,1,0,0,0,1]], #<<--game 1
[[1,0,0,1,0,0,1,0,1], [1,0,0,1,0,0,1,0,1]], #<<--game 2
[[1,0,0,1,0,0,1,0,1], [1,0,0,1,0,0,1,0,1]] #<<--game 3
]
I will then edit these new columns. At a later time, I will copy it again.
So, I always want to copy this leftmost column (pass by value), but I don't know how to perform this operation.
You can use concatenate:
import numpy as np

# initial array
a = np.array([
    [[0,0,0,1,1,0,0,0,1], [0,1,0,1,1,0,0,0,1]],  #<<--game 1
    [[1,0,0,1,0,0,1,0,1], [1,1,0,1,0,0,1,0,1]],  #<<--game 2
    [[1,0,0,1,0,0,1,0,1], [1,1,0,1,0,0,1,0,1]]   #<<--game 3
])

# subset of this array (column 0)
b = a[:, 0, :]

# reshape to add the middle dimension back
b = b.reshape([-1, 1, 9])
print(a.shape, b.shape)  # ((3, 2, 9), (3, 1, 9))

# concatenate:
c = np.concatenate((a, b), axis=1)
print(c)
print (c)
array([[[0, 0, 0, 1, 1, 0, 0, 0, 1],
        [0, 1, 0, 1, 1, 0, 0, 0, 1],
        [0, 0, 0, 1, 1, 0, 0, 0, 1]],  # leftmost column copied

       [[1, 0, 0, 1, 0, 0, 1, 0, 1],
        [1, 1, 0, 1, 0, 0, 1, 0, 1],
        [1, 0, 0, 1, 0, 0, 1, 0, 1]],  # leftmost column copied

       [[1, 0, 0, 1, 0, 0, 1, 0, 1],
        [1, 1, 0, 1, 0, 0, 1, 0, 1],
        [1, 0, 0, 1, 0, 0, 1, 0, 1]]])  # leftmost column copied
You can do this using hstack and slicing:
import numpy as np

start = np.asarray([[[0,0,0,1,1,0,0,0,1]],
                    [[1,0,0,1,0,0,1,0,1]],
                    [[1,0,0,1,0,0,1,0,1]]])
print(start)
print("duplicating...")
finish = np.hstack((start, start[:, :1, :]))
print(finish)
print("modifying...")
finish[0, 1, 2] = 2
print(finish)
[[[0 0 0 1 1 0 0 0 1]]
 [[1 0 0 1 0 0 1 0 1]]
 [[1 0 0 1 0 0 1 0 1]]]
duplicating...
[[[0 0 0 1 1 0 0 0 1]
  [0 0 0 1 1 0 0 0 1]]

 [[1 0 0 1 0 0 1 0 1]
  [1 0 0 1 0 0 1 0 1]]

 [[1 0 0 1 0 0 1 0 1]
  [1 0 0 1 0 0 1 0 1]]]
modifying...
[[[0 0 0 1 1 0 0 0 1]
  [0 0 2 1 1 0 0 0 1]]

 [[1 0 0 1 0 0 1 0 1]
  [1 0 0 1 0 0 1 0 1]]

 [[1 0 0 1 0 0 1 0 1]
  [1 0 0 1 0 0 1 0 1]]]
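Both answers place the copy at the end of axis 1. If you literally want to *prepend* the clone, so the editable copy sits at index 0 and the history stays to its right, the same concatenate idea works with the arguments swapped. A sketch:

```python
import numpy as np

a = np.array([[[0, 0, 0, 1, 1, 0, 0, 0, 1]],
              [[1, 0, 0, 1, 0, 0, 1, 0, 1]],
              [[1, 0, 0, 1, 0, 0, 1, 0, 1]]])

# clone the current leftmost state and put the copy in front
b = np.concatenate((a[:, :1, :], a), axis=1)
print(b.shape)  # (3, 2, 9)

# concatenate copies the data, so editing the new front column
# leaves the historical record untouched
b[0, 0, 2] = 2
print(b[0, 1])  # [0 0 0 1 1 0 0 0 1]
```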

How to generate random 1s and 0s in a matrix array?

I have a matrix and it's currently populated with just 1's. How do I make it so it populates with random 1's and 0's?
matrix5x5 = [[1 for row in range(5)] for col in range(5)]
for row in matrix5x5:
    for item in row:
        print(item, end=" ")
    print()
print("")
Output:
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
I want something like:
1 0 0 1 0
0 1 1 1 1
1 0 1 0 1
1 1 0 0 1
0 1 1 1 1
I found something regarding using random.randint(0,1) but I don't know how to change my current code to include the above.
Modifying your code, using the random package (and not the numpy equivalent):
import random

matrix5x5 = [[random.randint(0, 1) for _ in range(5)] for _ in range(5)]
for row in matrix5x5:
    for item in row:
        print(item, end=" ")
    print()
print("")
0 1 0 0 1
0 1 0 1 0
0 0 1 1 0
0 0 0 1 0
1 0 0 1 1
But honestly, numpy makes it a lot faster and easier!
If you don't mind using numpy:
>>> import numpy as np
>>> np.random.randint(2, size=(5, 5))
array([[1, 0, 1, 0, 1],
[1, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[1, 0, 0, 0, 1],
[0, 1, 0, 0, 1]])
Numpy arrays support most list operations that involve indexing and iteration, and if you really care, you can turn it back into a list:
>>> np.random.randint(2, size=(5, 5)).tolist()
[[1, 0, 0, 0, 0], [0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [1, 0, 1, 1, 1], [1, 0, 1, 0, 0]]
And, if for some strange reason, you are 100% adamant on using vanilla Python, just use the random module and a list comprehension:
>>> import random
>>> [[random.randint(0,1) for j in range (5)] for i in range (5)]
[[0, 1, 0, 1, 1], [0, 1, 1, 1, 0], [0, 0, 1, 0, 1], [0, 0, 0, 0, 1], [1, 1, 1, 1, 1]]
You probably want to use numpy. Do the following:
import numpy as np
my_matrix = np.random.randint(2,size=(5,5))
This will create a random 5 by 5 matrix with 0s and 1s.
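A side note that may matter for testing: if you need the same random matrix on every run, the newer NumPy Generator API lets you seed it explicitly (the seed value 42 here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)     # fixed seed -> reproducible output
matrix = rng.integers(0, 2, size=(5, 5))
print(matrix)
```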

In OpenCV how do I get a list of segmented regions

I'm working on a project where I want to evaluate certain parameters on regions of a segmented image. So I have the following code:
import cv2

col = cv2.imread("in.jpg", 1)
col = cv2.resize(col, (width, height), interpolation=cv2.INTER_CUBIC)
res = cv2.pyrMeanShiftFiltering(col, 20, 45, 3)
and would now like to somehow get a list of masks per region in res.
So for example if res was now something like this
1 1 0 2 1
1 0 0 2 1
0 0 2 2 1
I would like to get an output such as
1 1 0 0 0
1 0 0 0 0
0 0 0 0 0
,
0 0 1 0 0
0 1 1 0 0
1 1 0 0 0
,
0 0 0 1 0
0 0 0 1 0
0 0 1 1 0
,
0 0 0 0 1
0 0 0 0 1
0 0 0 0 1
So that is a mask for each group of the same values that are connected. Maybe this could somehow involve the floodfill function? I can see that looping over every pixel, flood-filling, and checking whether that set of pixels was already visited might work, but that seems like a very expensive way, so is there something faster?
Oh and here is an example image of res after the code has run
Here's one approach with cv2.connectedComponents -
import cv2
import numpy as np

def list_seg_regs(a):  # a is an array
    out = []
    for i in np.unique(a):
        ret, l = cv2.connectedComponents((a == i).astype(np.uint8))
        for j in range(1, ret):
            out.append((l == j).astype(int))  # skip .astype(int) to keep bool masks
    return out
Sample run -
In [53]: a = np.array([
...: [1, 1, 0, 2, 1],
...: [1, 0, 0, 2, 1],
...: [0, 0, 2, 2, 1]])
In [54]: out = list_seg_regs(a)
In [55]: out[0]
Out[55]:
array([[0, 0, 1, 0, 0],
[0, 1, 1, 0, 0],
[1, 1, 0, 0, 0]])
In [56]: out[1]
Out[56]:
array([[1, 1, 0, 0, 0],
[1, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
In [57]: out[2]
Out[57]:
array([[0, 0, 0, 0, 1],
[0, 0, 0, 0, 1],
[0, 0, 0, 0, 1]])
In [58]: out[3]
Out[58]:
array([[0, 0, 0, 1, 0],
[0, 0, 0, 1, 0],
[0, 0, 1, 1, 0]])
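If OpenCV isn't available, the same per-value connected-components split can be sketched with a plain BFS flood fill (4-connectivity, pure Python plus NumPy; much slower than `cv2.connectedComponents` on real images, but dependency-free):

```python
from collections import deque
import numpy as np

def list_seg_regs_py(a):
    a = np.asarray(a)
    seen = np.zeros(a.shape, dtype=bool)
    masks = []
    for r in range(a.shape[0]):
        for c in range(a.shape[1]):
            if seen[r, c]:
                continue
            # BFS over the 4-connected region sharing a[r, c]'s value
            val = a[r, c]
            mask = np.zeros(a.shape, dtype=int)
            q = deque([(r, c)])
            seen[r, c] = True
            while q:
                y, x = q.popleft()
                mask[y, x] = 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < a.shape[0] and 0 <= nx < a.shape[1]
                            and not seen[ny, nx] and a[ny, nx] == val):
                        seen[ny, nx] = True
                        q.append((ny, nx))
            masks.append(mask)
    return masks

a = [[1, 1, 0, 2, 1],
     [1, 0, 0, 2, 1],
     [0, 0, 2, 2, 1]]
masks = list_seg_regs_py(a)
print(len(masks))  # 4 connected regions
```

The masks come out in scan order here, which may differ from the order the cv2-based answer produces.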
