Python - 2D Array: Finding Coordinates

I have a python based battleship game that allows players to pass in a .csv containing a 10x10 grid of 0s and 1s, where 1s correspond to locations of ships. For example,
In [1]: board = np.genfromtxt(filename, delimiter=',')
In [2]: board
Out[2]: array([
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 0., 0., 0.],
[1., 0., 0., 0., 0., 1., 1., 1., 1., 1.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
(5) unit ship located at board[8,5:10]
(4) unit ship located at board[7,3:7]
(3) unit ship located at board[3:6,2]
(3) unit ship located at board[3:6,6]
(2) unit ship located at board[8:10,0]
The game restricts players to 5 ships with sizes of (5, 4, 3, 3, 2). Given this information, how can I come up with a method for determining the coordinates of each ship?
I'm currently applying a for loop that finds indexes where 1s are present, see below.
for ri, ci in zip(range(board.shape[0]), range(board.shape[1])):
    # get row indexes of ships
    rlocs = np.where(board[ri, :] == 1)[0]
    # get col indexes of ships
    clocs = np.where(board[:, ci] == 1)[0]
    # skip empty row and col
    if (len(rlocs) == 0) and (len(clocs) == 0): continue
    # count consecutive runs <int>
    rcons = is_consecutive(rlocs)
    ccons = is_consecutive(clocs)
    # if more than one run is found then assume more than 1 ship in row/col
    if (rcons > 1) or (ccons > 1):
        # .... ?
At this point I'm unsure what the next step would be... any help or advice is welcomed!
FYI: is_consecutive returns an int of the number of values in a list that are in a consecutive order. For example, [0,1,2,9,10] would return 2 (i.e., 0-2 and 9-10).
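The helper itself isn't shown; a minimal sketch matching that description (counting the number of consecutive runs in a sorted list of indexes) might look like this:

import numpy as np

def is_consecutive(indexes):
    """Return the number of consecutive runs in a sorted list of indexes."""
    indexes = np.asarray(indexes)
    if indexes.size == 0:
        return 0
    # a new run starts wherever the gap to the previous index is larger than 1
    return int(np.count_nonzero(np.diff(indexes) > 1) + 1)

print(is_consecutive([0, 1, 2, 9, 10]))   # 2  (runs 0-2 and 9-10)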
The output that I'm looking for is a dictionary that looks similar to this:
{'ship_05_01': [(x0,y0), (x1,y1)], 'ship_03_01': [(x0,y0), (x1,y1)], 'ship_03_02': [(x0,y0), (x1,y1)], ...}
where ship_xx_nn --> xx = number of spaces; nn = index
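One approach I could imagine (a sketch only, assuming each ship lies entirely in one row or one column and that a vertical ship never sits directly beside a horizontal run in the same row) is to collect horizontal runs of 1s first, blank them out, and then collect vertical runs from what remains:

import numpy as np

def _runs(line):
    """Yield (start, stop) index pairs for each consecutive run of 1s in a 1-D array."""
    idx = np.flatnonzero(line == 1)
    if idx.size == 0:
        return
    # split wherever the gap between neighbouring indexes is larger than 1
    for run in np.split(idx, np.where(np.diff(idx) > 1)[0] + 1):
        yield int(run[0]), int(run[-1])

def find_ships(board):
    board = board.copy()
    ships, counts = {}, {}

    def record(size, start, end):
        counts[size] = counts.get(size, 0) + 1
        ships[f"ship_{size:02d}_{counts[size]:02d}"] = [start, end]

    # pass 1: horizontal ships are runs of length >= 2 within a row
    for r in range(board.shape[0]):
        for c0, c1 in _runs(board[r, :]):
            if c1 - c0 + 1 >= 2:
                record(c1 - c0 + 1, (r, c0), (r, c1))
                board[r, c0:c1 + 1] = 0   # blank it out so the vertical pass ignores it
    # pass 2: every remaining 1 belongs to a vertical ship
    for c in range(board.shape[1]):
        for r0, r1 in _runs(board[:, c]):
            record(r1 - r0 + 1, (r0, c), (r1, c))
    return ships

On the board above this should yield keys such as ship_05_01, ship_04_01, ship_03_01, ship_03_02 and ship_02_01, each mapped to the coordinates of its two ends.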

Related

Trying to compare different sized one-hot-encoded lists

I have run an autoencoder model, and returned a dictionary with each output and its label, using FashionMNIST. My goal is to print 10 images only for the dress and coat classes (class labels 3 and 4). I have one-hot-encoded the labels such that the dress class appears as [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]. My dictionary output is:
print(pa)  # the dictionary is called pa
{'output': array([[1.5346111e-04, 2.3307074e-04, 2.8705355e-04, ..., 1.9890528e-04,
1.8257453e-04, 2.0764180e-04],
[1.9767908e-03, 1.5839143e-03, 1.7811939e-03, ..., 1.7838757e-03,
1.4038634e-03, 2.3405524e-03],
[5.8998094e-06, 6.9388111e-06, 5.8752844e-06, ..., 5.1715115e-06,
4.4670110e-06, 1.2018012e-05],
...,
[2.1034568e-05, 3.0344427e-05, 7.0048365e-05, ..., 9.4724113e-05,
8.9003828e-05, 4.1828611e-05],
[2.7930623e-06, 3.0393956e-06, 4.5835086e-06, ..., 3.8765144e-04,
3.6324131e-05, 5.6411723e-06],
[1.2453397e-04, 1.1948447e-04, 2.0121646e-04, ..., 1.0773790e-03,
2.9582143e-04, 1.7229551e-04]], dtype=float32),
'label': array([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 1., 0., 0.],
...,
[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], dtype=float32)}
I am trying to run a for loop, where if the pa['label'] is equal to a certain one-hot-encoded array, I plot the corresponding pa['output'].
for i in range(len(pa['label'])):
    if pa['label'][i] == np.array([0.,0.,0.,1.,0.,0.,0.,0.,0.]):
        print(pa['label'][i])
        # plt.imshow(pa['output'][i].reshape(28,28))
        # plt.show()
However, I get a warning(?):
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
I have also tried making an array of the one-hot-encoded arrays I want to plot and comparing my dictionary labels to this array (the arrays have different sizes):
clothing_array = np.array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
                           [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])
for i in range(len(pa['label'])):
    if (pa['label'][i] == clothing_array[i]).any():
        plt.imshow(pa['output'][i].reshape(28,28))
        plt.show()
However, it plots a picture of a t-shirt, a bag, and then I get the error
IndexError: index 2 is out of bounds for axis 0 with size 2
which I understand, since clothing_array only has two indices. But obviously this code is wrong, since I want to print ONLY dress and coat. I don't know why it's printing these images and I don't know how to fix it. Any help or clarifying questions are more than welcome.
Here are the first ten arrays of my dictionary labels:
array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
I will post an example here. Here we have two arrays: x is the label array and y the clothing class you want. z then gives you the rows that are identical (their indexes). Finally, by using matching_indexes you can collect the ones you want from output and plot them.
x = np.array([[1., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0.]])
y = np.array([[1.,0.,0.,0.,0.,0.,0.]])
z= np.multiply(x,y)
matching_indexes = np.where(z.any(axis=1))[0]
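Applied to the dictionary from the question, the same idea might look roughly like this (a sketch, assuming pa is defined as above and that dress and coat correspond to indices 3 and 4):

import numpy as np
import matplotlib.pyplot as plt

# one target row per class we care about: dress (index 3) and coat (index 4)
targets = np.zeros((2, pa['label'].shape[1]))
targets[0, 3] = 1.
targets[1, 4] = 1.

# a one-hot label matches a target exactly when their dot product is 1
z = pa['label'] @ targets.T                 # shape (n_samples, 2)
matching_indexes = np.where(z.any(axis=1))[0]

for i in matching_indexes[:10]:             # plot the first ten matches only
    plt.imshow(pa['output'][i].reshape(28, 28))
    plt.show()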

Contour detection of a numpy array

I am new to computer vision and I am currently working in Python on a numpy array of 0s and 1s, as follows:
I am trying to find the contour of the shape formed by the cells that are equal to 1; this is what the result should look like:
I would like to be able to get the position of each element highlighted in green, following a certain order (counter-clockwise, for example).
I tried to use the findContours function of OpenCV in Python by following some examples I found on the web, but I couldn't make it work:
# Import
import numpy as np
import cv2
# Find contours
tableau_poche = np.array([[1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [1., 1., 1., 1., 1., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                          [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
tableau_poche = np.int8(tableau_poche)
contours, hierarchy = cv2.findContours(tableau_poche, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
I get the following error message:
error: OpenCV(4.0.1) C:\ci\opencv-suite_1573470242804\work\modules\imgproc\src\thresh.cpp:1492: error: (-210:Unsupported format or combination of formats) in function 'cv::threshold'
Actually, I don't know if I am supposed to use this OpenCV function (maybe the "matplotlib.pyplot.contour()" function can solve my problem too ...) or if it's possible to use it on the numpy array I have. In the near future, I might also be interested in the convexityDefects function of OpenCV on my numpy array.
You have a type issue: this OpenCV function works only with unsigned 8-bit integers (uint8), while your array uses signed integers (int8).
Simply replace:
tableau_poche = np.int8(tableau_poche)
by
tableau_poche = tableau_poche.astype(np.uint8)
and findContours() will work.
As pointed out in the comments, you can get what you want (or very close) by changing the parameters of findContours(). Using cv2.CHAIN_APPROX_NONE instead of cv2.CHAIN_APPROX_SIMPLE gives you all the points of the contour. However, it connects pixels vertically, horizontally and diagonally at a distance of 1, so it is not 100% what you had in your question, as it "cuts" the corners.
See the documentation here for more info on the ContourApproximationModes options.
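Putting both changes together, a minimal sketch reusing the tableau_poche array from the question could be:

import numpy as np
import cv2

mask = tableau_poche.astype(np.uint8)   # findContours needs uint8, not int8

# CHAIN_APPROX_NONE keeps every boundary point instead of only the corner points
contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    points = cnt[:, 0, :]   # each contour has shape (N, 1, 2); drop the middle axis
    print(points)           # (x, y) positions traced around the shape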

Most efficient way to create filled rectangles in NumPy

Given a list of coordinates that represent rectangles in a grid (e.g. the upper-left and lower-right coordinate of each), what would be the most efficient way to fill a binary NumPy array with ones in place of those rectangles?
The simple way would be to do a for loop such as
arr = np.zeros((w, h))
for x1, y1, x2, y2 in coordinates:
    arr[x1:x2, y1:y2] = True
where coordinates is something like [(x_11, y_11, x_22, y_22), ..., (x_n1, y_n1, x_n2, y_n2)]
However, I want to avoid the explicit loop, since vectorized operations are one of the main advantages of NumPy. I have tried logical_and, but it seems to work only for a single rectangle or condition. How could I do it in a more "numpy" way?
The resulting image would be something like this for 2 rectangles:
Let's say (1, 1) is the upper-left coordinate of the rectangle, and (5, 4) the lower-right (exclusive).
Then
arr = np.zeros((10, 10))
arr[1:5, 1:4] = 1
returns
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
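If the loop over rectangles itself should be avoided, one possible broadcasting-based sketch (not what the answer above does; it assumes coordinates holds (x1, y1, x2, y2) tuples with exclusive end indices) is:

import numpy as np

w, h = 10, 10
coordinates = np.array([(1, 1, 5, 4), (6, 2, 9, 8)])   # hypothetical example rectangles

rows = np.arange(w)[None, :, None]            # shape (1, w, 1)
cols = np.arange(h)[None, None, :]            # shape (1, 1, h)
x1, y1, x2, y2 = (coordinates[:, i][:, None, None] for i in range(4))  # each (n, 1, 1)

# a cell (r, c) is inside a rectangle when x1 <= r < x2 and y1 <= c < y2
inside = (rows >= x1) & (rows < x2) & (cols >= y1) & (cols < y2)       # (n, w, h)
arr = inside.any(axis=0).astype(float)        # collapse over rectangles -> (w, h)

This trades the Python loop for an (n, w, h) boolean intermediate, so it only pays off while that array fits comfortably in memory.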

Superdiagonal non-square matrix in numpy?

Using numpy, I want to create a superdiagonal matrix that is only almost square: it has extra zeros to the right or left of the square part. The code snippet below gives me the desired result, but it is a little tricky to read, and this kind of matrix seems common enough that there should be an idiomatic way to construct it.
What is the simplest way to construct 'padded eyes' as below, in numpy?
import numpy as np
size = 5
pad_width = 3
left_padded_eye = np.block([np.zeros((size,pad_width)),np.eye(size)])
right_padded_eye = np.block([np.eye(size),np.zeros((size,pad_width))])
np.eye can do that directly
>>> np.eye(size, size+pad_width, pad_width)
array([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 1.]])
>>> np.eye(size, size+pad_width)
array([[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0.]])
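For reference, np.eye's second argument sets the number of columns, and the third argument k shifts the ones k positions to the right of the main diagonal, which is what produces the left padding in the first call.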

Vectorize Sequences explanation

Studying Deep Learning with Python, I can't comprehend the following simple piece of code, which encodes integer sequences into a binary matrix.
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set specific indices of results[i] to 1s
    return results
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
x_train = vectorize_sequences(train_data)
And the output of x_train is something like
x_train[0]
array([ 0., 1., 1., ..., 0., 0., 0.])
Can someone shed some light on why there are 0.'s in the x_train array, when only 1.'s are assigned on each iteration i?
I mean, shouldn't they all be 1's?
The script transforms your dataset into a binary vector space model. Let's dissect things one by one.
First, if we examine the train_data content, we see that each review is represented as a sequence of word ids. Each word id corresponds to one specific word:
print(train_data[0]) # print the first review
[1, 14, 22, 16, 43, 530, 973, ..., 5345, 19, 178, 32]
Now, this would be very difficult to feed to the network: the lengths of the reviews vary, and fractional values between integer word ids have no meaning (e.g., what would an output of 43.5 mean?).
So what we can do is create a single, very long vector, the size of the entire dictionary (10000 in your example). We then associate each element/index of this vector with one word/word_id, so the word represented by word id 14 will now be represented by the 14th element of this vector.
Each element will either be 0 (the word is not present in the review) or 1 (the word is present in the review). We can even treat this as a probability, so values between 0 and 1 have meaning too. Furthermore, every review is now represented by this very long (sparse) vector, which has a constant length for every review.
So on a smaller scale if:
word word_id
I -> 0
you -> 1
he -> 2
be -> 3
eat -> 4
happy -> 5
sad -> 6
banana -> 7
a -> 8
the sentences would then be processed in the following way:
I be happy -> [0,3,5] -> [1,0,0,1,0,1,0,0,0]
I eat a banana. -> [0,4,8,7] -> [1,0,0,0,1,0,0,1,1]
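As a quick check, running the function from the question on those two toy sentences (with a hypothetical 9-word vocabulary, so dimension=9) reproduces exactly those vectors:

import numpy as np

def vectorize_sequences(sequences, dimension=9):   # 9-word toy vocabulary
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

sentences = [[0, 3, 5],        # "I be happy"
             [0, 4, 8, 7]]     # "I eat a banana"
print(vectorize_sequences(sentences))
# [[1. 0. 0. 1. 0. 1. 0. 0. 0.]
#  [1. 0. 0. 0. 1. 0. 0. 1. 1.]]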
Now, I highlighted the word sparse. That means there will be a lot more zeros than ones. We can take advantage of that: instead of checking for every word whether it is contained in a review or not, we check the substantially smaller list of only those words that DO appear in the review.
Therefore, we can make things easy for ourselves and create the reviews × vocabulary matrix of zeros right away with np.zeros((len(sequences), dimension)), and then just go through the words in each review and flip the indicator to 1.0 at the position corresponding to that word:
result[review_id][word_id] = 1.0
So instead of doing 25000 x 10000 = 250,000,000 operations, we only do one per word: 5,967,841 operations. That's just ~2.4% of the original amount.
The for loop here is not processing the whole matrix. As you can see, it enumerates the elements of the sequence, so it loops over only one dimension.
Let's take a simple example :
t = np.array([1,2,3,4,5,6,7,8,9])
r = np.zeros((len(t), 10))
Output
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
then we modify elements in the same way as in your code:
for i, s in enumerate(t):
    r[i,s] = 1.
array([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
You can see that the for loop modified only len(t) elements, those with index [i, s] (in this case (0, 1), (1, 2), (2, 3), and so on).
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results
