Convert meshgrid points into adjacency matrix in Python

I am converting mesh-grid points (a 2D maze) into an adjacency matrix, which I will use later to find the shortest path between given coordinates. Please have a look at my code:
import numpy as np
import numpy.matlib  # for matrices

def cutblocks(xv, yv, allBlocks):
    x_new, y_new = np.copy(xv), np.copy(yv)
    ori_x, ori_y = xv[0][0], yv[0][0]
    for block in allBlocks:
        block = sorted(block)
        block_xmin = np.min((block[0][0], block[2][0]))
        block_xmax = np.max((block[0][0], block[2][0]))
        block_ymin = np.min((block[0][1], block[1][1]))
        block_ymax = np.max((block[0][1], block[1][1]))
        rx_min, rx_max = int((block_xmin-ori_x)/stepSize)+1, int((block_xmax-ori_x)/stepSize)+1
        ry_min, ry_max = int((block_ymin-ori_y)/stepSize)+1, int((block_ymax-ori_y)/stepSize)+1
        for i in range(rx_min, rx_max):
            for j in range(ry_min, ry_max):
                x_new[j][i] = np.nan
        for i in range(ry_min, ry_max):
            for j in range(rx_min, rx_max):
                y_new[i][j] = np.nan
    return x_new, y_new
stepSize = 0.2
##Blocks that should be disabled
allBlocks = [[(139.6, 93.6), (143.6, 93.6), (143.6, 97.6), (139.6, 97.6)],
[(154.2, 93.4), (158.2, 93.4), (158.2, 97.4), (154.2, 97.4)],
[(139.2, 77.8), (143.2, 77.8), (143.2, 81.8), (139.2, 81.8)],
[(154.2, 77.8), (158.2, 77.8), (158.2, 81.8), (154.2, 81.8)],
[(139.79999999999998, 86.4),
(142.6, 86.4),
(142.6, 88.0),
(139.79999999999998, 88.0)],
[(154.79999999999998, 87.2),
(157.6, 87.2),
(157.6, 88.8),
(154.79999999999998, 88.8)]]
x = np.arange(136.0, 161.0, stepSize)
y = np.arange(75.0, 101.0, stepSize)
xv, yv = np.meshgrid(x, y)
xv, yv = cutblocks(xv,yv,allBlocks)
MazeSize = xv.shape[0]*xv.shape[1]
adj = np.matlib.zeros((MazeSize,MazeSize)) #initialize AdjacencyMatrix
#make 1 whenever there is a connection between neighboring coordinates
mazeR, mazeC = 0,0
for row in range(xv.shape[0]):
    for col in range(xv.shape[1]):
        if xv[row][col]>0 and col+1<xv.shape[1] and round(np.abs(xv[row][col] - xv[row][col+1]),2) == stepSize:
            adj[mazeR,mazeC+1] = 1
            break
        mazeC = mazeC+1
    mazeR = mazeR+1
This code generates a mesh-grid in which some of the points are disabled because they are walls in the maze. The cost for every step (between connected vertices) is 1. My questions are:
1) The adjacency matrix would be N×N with N = x·y (· meaning multiplication). Is that correct?
2) What would be an efficient way of finding the neighbors and setting the corresponding entries of the adjacency matrix to 1? (I tried it above, but it doesn't work correctly.)
3) Should I use a graph library for this kind of problem? My final goal is to find the shortest path between two given coordinates (vertices).
Thanks
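For reference, here is a minimal sketch of the graph-based approach asked about in 3), under the assumption that a boolean open_mask (a name introduced here, not in the question) marks the cells of xv that were not set to NaN. It builds a sparse N×N adjacency matrix with N = rows·cols and runs SciPy's shortest_path from one source vertex:
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

def grid_adjacency(open_mask):
    # 4-connected adjacency matrix over the free cells of a grid
    nrows, ncols = open_mask.shape
    n = nrows * ncols                      # N = rows * cols (question 1)
    adj = lil_matrix((n, n))
    for r in range(nrows):
        for c in range(ncols):
            if not open_mask[r, c]:
                continue
            i = r * ncols + c              # flatten (row, col) -> vertex index
            if c + 1 < ncols and open_mask[r, c + 1]:    # right neighbour
                adj[i, i + 1] = adj[i + 1, i] = 1
            if r + 1 < nrows and open_mask[r + 1, c]:    # down neighbour
                adj[i, i + ncols] = adj[i + ncols, i] = 1
    return adj.tocsr()

open_mask = ~np.isnan(xv)                  # xv as produced by cutblocks above
adj = grid_adjacency(open_mask)
start = 0 * open_mask.shape[1] + 0         # vertex index of cell (0, 0), as an example
dist, pred = shortest_path(adj, directed=False, indices=start,
                           return_predecessors=True)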

Related

Drawing 2D multiple vectors from an ordered list in Python

tl;dr: How do I draw connected 2D vectors from a list of coordinates?
First-time questioner, so bear with me if I'm not following proper etiquette :D
I study mechanical engineering and I have been trying to create a visual aid for the addition of rotating and oscillating forces within an engine. Every piston initially has its own vector, consisting of a length and an angle.
vectors = [[Force, Angle], [Force, Angle], [Force, Angle], ...]
These are converted to x, y components in Excel (I will include this in Python later).
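For illustration, a minimal sketch of that conversion in Python, assuming the angles are in degrees (the numbers are placeholders, not the actual engine data):
import numpy as np

polar = [[1296.16, 90.0], [1296.16, 218.6]]   # hypothetical [Force, Angle] pairs
components = [[f * np.cos(np.radians(a)), f * np.sin(np.radians(a))]
              for f, a in polar]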
Individual vector components:
vectors = [[0.00, 1296.16],
[-1013.38, -808.14],
[421.22, -96.14],
[-374.92, 778.53],
[-374.92, -778.53],
[0.00, 0.00],
[337.79, -269.38]]
These vectors are then added one by one for a step-by-step resultant:
x_coords: [0.0, -1013.38, -592.16, -967.08, -1342.0, -1342.0, -1004.21]
y_coords: [1296.16, 488.02, 391.88, 1170.41, 391.88, 391.88, 122.5]
[Image: what I want]
[Image: what I have]
EXCEL CONVERSION
[Image: converting the data in Excel]
A drawing in tkinter/matplotlib is appreciated, but I will accept any help :)
CODE
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
n=['A','B','C','D','E','F','G'] # These will be replaced with Engine Firing Order
vectors = [[0.00, 1296.16],
[-1013.38, -808.14],
[421.22, -96.14],
[-374.92, 778.53],
[-374.92, -778.53],
[0.00, 0.00],
[337.79, -269.38]]
vectors = pd.DataFrame(np.array(vectors),columns=('x1', 'y1'))
x_coords = []
y_coords = []
sum_x = 0
sum_y = 0
for x in vectors['x1']:
    sum_x += x
    x_coords.append(round(sum_x, 2))
for y in vectors['y1']:
    sum_y += y
    y_coords.append(round(sum_y, 2))
coords = []
fig, ax = plt.subplots()
ax.scatter(x_coords, y_coords)
for i, txt in enumerate(n):
    ax.annotate(txt, (x_coords[i], y_coords[i]))
plt.show()
PS. Any suggestions/comments/tips about the code are appreciated :)
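One possible way to draw the chain of vectors tip-to-tail is matplotlib's quiver, reusing the vectors DataFrame and the labels n defined above (a sketch under those assumptions, not the only way to do it):
import numpy as np
import matplotlib.pyplot as plt

vecs = vectors[['x1', 'y1']].to_numpy()       # individual components
tips = np.cumsum(vecs, axis=0)                # step-by-step resultants
tails = np.vstack([[0.0, 0.0], tips[:-1]])    # each vector starts where the previous one ended

fig, ax = plt.subplots()
ax.quiver(tails[:, 0], tails[:, 1], vecs[:, 0], vecs[:, 1],
          angles='xy', scale_units='xy', scale=1)
for label, (xt, yt) in zip(n, tips):
    ax.annotate(label, (xt, yt))
ax.set_aspect('equal')
plt.show()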

How to append the first element of a matrix onto a list over a loop?

I have two loops that run over different x and y coordinates, and for each (x, y) pair a linear system is solved for force 1 and force 2 using the matrix method, i.e. finding the inverse of A in Ax = C. Each iteration gives an answer as a matrix whose first element is force 1 and whose second element is force 2 at those specific coordinates. Here's my code:
import numpy as np
from scipy import linalg
def Force():
    Force1 = np.zeros((160,90))
    Force2 = np.zeros((160,90))
    for x in np.arange(0,16.1,0.1):
        for y in np.arange(1,9.1,0.1):
            l1 = np.hypot(x,y)
            l2 = np.hypot(15-x,y)
            A = np.array([[(x/l1),((x-15)/l2)],[(y/l1),(y/l2)]])
            c = np.array([[0],[70*9.81]])
            F = linalg.solve(A,c)
            Force1[x,y] = F[0]
            Force2[x,y] = F[1]
            print("Force 1 = {} \nForce 2 = {}\n".format(F[0], F[1]))
So at each point (x, y) a matrix [[Force 1], [Force 2]] is solved. Now I would like to collect all the Force1 values into an array indexed by (x, y), and similarly for Force2, so that I can do
plt.imshow(Force1)
plt.imshow(Force2)
to plot two heatmaps. How would I go about doing that?
This solves your issue - you were trying to assign to indices in Force1 and Force2 of type float. I've changed the for loops to use enumerate instead, and tweaked the assignment so it assigns F[0][0] and F[1][0].
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt  # needed for the plots below

def Force():
    Force1 = np.zeros((160,90))
    Force2 = np.zeros((160,90))
    for i, x in enumerate(np.arange(0,16,0.1)):
        for j, y in enumerate(np.arange(1,9,0.1)):
            l1 = np.hypot(x,y)
            l2 = np.hypot(15-x,y)
            A = np.array([[(x/l1),((x-15)/l2)],[(y/l1),(y/l2)]])
            c = np.array([[0],[70*9.81]])
            F = linalg.solve(A,c)
            Force1[i, j] = F[0][0]
            Force2[i, j] = F[1][0]
            # print("Force 1 = {} \nForce 2 = {}\n".format(F[0], F[1]))
    plt.imshow(Force1)
    plt.show()
    plt.imshow(Force2)
    plt.show()

Force()
The generated plots are:
[heatmap of Force1]
and
[heatmap of Force2]
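As a small follow-up (not part of the original answer): if you want the heatmap axes to show the physical x and y ranges rather than array indices, the plt.imshow calls inside Force() could instead use a transpose and an extent. This assumes the ranges used in the loops above (x from 0 to 16, y from 1 to 9; only the first 80 columns are filled by arange(1, 9, 0.1)):
plt.imshow(Force1[:, :80].T, origin='lower', extent=[0, 16, 1, 9], aspect='auto')
plt.colorbar()
plt.show()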

How to vectorize code with numpy.bincount, using apply_along_axis

I'm trying to vectorize some code with numpy so I can run it using multiprocessing, but I can't understand how numpy.apply_along_axis works. This is an example of the code, vectorized using map:
import numpy
from scipy import sparse
import multiprocessing
from matplotlib import pyplot
# first I build a matrix of some x positions vs. time data in a sparse format
matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)
x = numpy.nonzero(matrix)[0]
times = numpy.nonzero(matrix)[1]
weights = numpy.random.rand(x.size)
#then i define an array of y positions
nStepsY = 5
y = numpy.arange(1,nStepsY+1)
#now i build an image using x-y-times coordinates and x-times weights
def mapIt(ithStep):
    ncolumns = 80
    image = numpy.zeros(ncolumns)
    yTimed = y[ithStep]*times
    positions = (numpy.round(x-yTimed)+50).astype(int)
    values = numpy.bincount(positions,weights)
    values = values[numpy.nonzero(values)]
    positions = numpy.unique(positions)
    image[positions] = values
    return image
image = list(map(mapIt, range(nStepsY)))
image = numpy.array(image)
a = pyplot.imshow(image, aspect = 10)
Here is the output plot: [image]
I tried to use numpy.apply_along_axis, but this function only lets me iterate along the rows of image, while I need to iterate over the ithStep index too. E.g.:
#now i build an image using x-y-times coordinates and x-times weights
nrows = nStepsY
ncolumns = 80
matrix = numpy.zeros(nrows*ncolumns).reshape(nrows,ncolumns)
def applyIt(image):
    image = numpy.zeros(ncolumns)
    yTimed = y[ithStep]*times
    positions = (numpy.round(x-yTimed)+50).astype(int)
    values = numpy.bincount(positions,weights)
    values = values[numpy.nonzero(values)]
    positions = numpy.unique(positions)
    image[positions] = values
    return image
imageApplied = numpy.apply_along_axis(applyIt,1,matrix)
a = pyplot.imshow(imageApplied, aspect = 10)
It obviously returns only the first row nrows times, since nothing iterates over ithStep:
And here is the (wrong) plot: [image]
Is there a way to iterate over an index, or to use an index, while numpy.apply_along_axis iterates?
Here is the code with only matrix operations: it's quite a bit faster than map or apply_along_axis but uses a lot of memory.
(In this function I use a trick with scipy.sparse, which works more intuitively than numpy arrays when you try to sum numbers onto the same element.)
def fullmatrix(nRows, nColumns):
    y = numpy.arange(1,nStepsY+1)
    image = numpy.zeros((nRows, nColumns))
    yTimed = numpy.outer(y,times)
    x3d = numpy.outer(numpy.ones(nStepsY),x)
    weights3d = numpy.outer(numpy.ones(nStepsY),weights)
    y3d = numpy.outer(y,numpy.ones(x.size))
    positions = (numpy.round(x3d-yTimed)+50).astype(int)
    matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions)))).todense()
    return matrix
image = fullmatrix(nStepsY, 80)
a = pyplot.imshow(image, aspect = 10)
This way is simpler and very fast! Thank you so much.
nStepsY = 5
nRows = nStepsY
nColumns = 80
y = numpy.arange(1,nStepsY+1)
image = numpy.zeros((nRows, nColumns))
fakeRow = numpy.zeros(x.size)  # one zero row-index per weighted point (x, times and weights have the same length)

def itermatrix(ithStep):
    yTimed = y[ithStep]*times
    positions = (numpy.round(x-yTimed)+50).astype(int)
    matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense()
    matrix = numpy.ravel(matrix)
    missColumns = (nColumns-matrix.size)
    zeros = numpy.zeros(missColumns)
    matrix = numpy.concatenate((matrix, zeros))
    return matrix

for i in numpy.arange(nStepsY):
    image[i] = itermatrix(i)

#or, without initialization of image:
imageMapped = list(map(itermatrix, range(nStepsY)))
imageMapped = numpy.array(imageMapped)
It feels like attempting to use map or apply_along_axis is obscuring the essentially iterative nature of the problem.
I rewrote your code as an explicit loop on y:
nStepsY = 5
y = numpy.arange(1,nStepsY+1)
image = numpy.zeros((nStepsY, 80))
for i, yi in enumerate(y):
    yTimed = yi*times
    positions = (numpy.round(x-yTimed)+50).astype(int)
    values = numpy.bincount(positions,weights)
    values = values[numpy.nonzero(values)]
    positions = numpy.unique(positions)
    image[i, positions] = values
a = pyplot.imshow(image, aspect = 10)
pyplot.show()
Looking at the code, I think I could calculate positions for all y values making a (y.shape[0],times.shape[0]) array. But the rest, the bincount and unique still have to work row by row.
apply_along_axis, when working with a 2d array and axis=1, essentially does:
res = np.zeros_like(arr)
for i in range(arr.shape[0]):
    res[i,:] = func1d(arr[i,:])
If the input array has more dimensions it constructs a more elaborate indexing object [i,j,k,:]. And it can handle cases where func1d returns a different size array than the input. But in any case it is just a generalized iteration tool.
Moving the initial positions creation outside the loop:
yTimed = y[:,None]*times
positions = (numpy.round(x-yTimed)+50).astype(int)
image = numpy.zeros((positions.shape[0], 80))
for i, pos in enumerate(positions):
    values = numpy.bincount(pos,weights)
    values = values[numpy.nonzero(values)]
    pos = numpy.unique(pos)
    image[i, pos] = values
Now I can cast this as an apply_along_axis problem, with an applyIt that takes a positions vector (with all the yTimed information) rather than blank image vector.
def applyIt(pos, size, weights):
    acolumn = numpy.zeros(size)
    values = numpy.bincount(pos,weights)
    values = values[numpy.nonzero(values)]
    pos = numpy.unique(pos)
    acolumn[pos] = values
    return acolumn
image = numpy.apply_along_axis(applyIt, 1, positions, 80, weights)
Timing wise I expect it's a bit slower than my explicit iteration. It has to do more setup work, including a test call applyIt(positions[0,:],...) to determine the size of its return array (i.e. image has a different shape than positions).
For comparison, a sparse-matrix variant of the full-matrix construction, which can be used in a single call or iteratively:
def csrmatrix(y, times, x, weights):
    yTimed = numpy.outer(y,times)
    n = y.shape[0]
    x3d = numpy.outer(numpy.ones(n),x)
    weights3d = numpy.outer(numpy.ones(n),weights)
    y3d = numpy.outer(y,numpy.ones(x.size))
    positions = (numpy.round(x3d-yTimed)+50).astype(int)
    #print(y.shape, weights3d.shape, y3d.shape, positions.shape)
    matrix = sparse.csr_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions))))
    #print(repr(matrix))
    return matrix
# one call
image = csrmatrix(y, times, x, weights)
# iterative call
alist = []
for yi in numpy.arange(1,nStepsY+1):
    alist.append(csrmatrix(numpy.array([yi]), times, x, weights))

def mystack(alist):
    # concatenate without offset
    row, col, data = [],[],[]
    for A in alist:
        A = A.tocoo()
        row.extend(A.row)
        col.extend(A.col)
        data.extend(A.data)
    print(len(row),len(col),len(data))
    return sparse.csr_matrix((data, (row, col)))
vimage = mystack(alist)

Getting elements in array1 that are not in array2

Main Problem
What is a better/more pythonic way of retrieving the elements of one array that are not found in another array? This is what I have:
idata = [np.column_stack(data[k]) for k in range(len(data)) if data[k] not in final]
idata = np.vstack(idata)
My interest is in performance. My data is an (X, Y, Z) array of size (7000 x 3) and my gdata is an (X, Y) array of size (11000 x 2).
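For the "rows of data that do not appear in final" step on its own, a broadcast comparison is one straightforward option (a sketch; it assumes the rows of final are exact copies of rows of data, which holds here because final is stacked from data):
import numpy as np

# mask[i] is True when data[i] matches some row of final exactly
mask = (data[:, None, :] == final[None, :, :]).all(-1).any(1)
idata = data[~mask]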
Preamble
I am working on an octant search to find the n-number(e.g. 8) of points (+) closest to my circular point (o) in each octant. This would mean that my points (+) are reduced to only 64 (8 per octant). Then for each gdata I would save the elements that are not found in data.
import tkinter as tk
from tkinter import filedialog
import pandas as pd
import numpy as np
from scipy.spatial.distance import cdist
from collections import defaultdict
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
data = pd.read_excel(file_path)
data = np.array(data, dtype=np.float)
nrow, cols = data.shape
file_path1 = filedialog.askopenfilename()
gdata = pd.read_excel(file_path1)
gdata = np.array(gdata, dtype=np.float)
gnrow, gcols = gdata.shape
npoint_per_octant = 8  # number of points to keep per octant (referenced below)
delta = gdata - data[:,:2]
angles = np.arctan2(delta[:,1], delta[:,0])
bins = np.linspace(-np.pi, np.pi, 9)
bins[-1] = np.inf # handle edge case
octantsort = []
for j in range(gnrow):
    delta = gdata[j, ::] - data[:, :2]
    angles = np.arctan2(delta[:, 1], delta[:, 0])
    octantsort = []
    for i in range(8):
        data_i = data[(bins[i] <= angles) & (angles < bins[i+1])]
        if data_i.size > 0:
            dist_order = np.argsort(cdist(data_i[:, :2], gdata[j, ::][np.newaxis]), axis=0)
            if dist_order.size < npoint_per_octant+1:
                [octantsort.append(data_i[dist_order[:npoint_per_octant][j]]) for j in range(dist_order.size)]
            else:
                [octantsort.append(data_i[dist_order[:npoint_per_octant][j]]) for j in range(npoint_per_octant)]
        final = np.vstack(octantsort)
    idata = [np.column_stack(data[k]) for k in range(len(data)) if data[k] not in final]
    idata = np.vstack(idata)
Is there an efficient and pythonic way of doing this to increase the performance of the last two lines of the code?
If I understand your code correctly, then I see the following potential savings:
dedent the final = ... line
don't use arctan, it's expensive; since you only want octants, compare the coordinates to zero and to each other
don't do a full argsort, use argpartition instead
make your octantsort an "octantargsort", i.e. store the indices into data, not the data points themselves; this would save you the search in the last but one line and allow you to use np.delete for removing
don't use append inside a list comprehension. This will produce a list of Nones that is immediately discarded. You can use list.extend outside the comprehension instead
besides, these list comprehensions look like a convoluted way of converting data_i[dist_order[:npoint_per_octant]] into a list; why not simply cast, or even keep it as an array, since you want to vstack in the end?
Here is some sample code illustrating these ideas:
import numpy as np
def discard_nearest_in_each_octant(eater, eaten, n_eaten_p_eater):
    # build octants
    # start with quadrants ...
    top, left = (eaten < eater).T
    quadrants = [np.where(v&h)[0] for v in (top, ~top) for h in (left, ~left)]
    dcoord2 = (eaten - eater)**2
    dc2quadrant = [dcoord2[q] for q in quadrants]
    # ... and split them
    oct4158 = [q[:, 0] < q[:, 1] for q in dc2quadrant]
    # main loop
    dc2octants = [[q[o], q[~o]] for q, o in zip(dc2quadrant, oct4158)]
    reloap = [[
        np.argpartition(o.sum(-1), n_eaten_p_eater)[:n_eaten_p_eater]
        if o.shape[0] > n_eaten_p_eater else None
        for o in opair] for opair in dc2octants]
    # translate indices
    octantargpartition = [q[so] if oap is None else q[np.where(so)[0][oap]]
                          for q, o, oaps in zip(quadrants, oct4158, reloap)
                          for so, oap in zip([o, ~o], oaps)]
    octantargpartition = np.concatenate(octantargpartition)
    return np.delete(eaten, octantargpartition, axis=0)
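A hypothetical call, just to show the expected shapes (random 2-D points standing in for the real data; the names below are illustrative only):
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((7000, 2)) * 100          # stand-in for the (X, Y) part of data
grid_point = np.array([50.0, 50.0])           # one point of gdata
kept = discard_nearest_in_each_octant(grid_point, points, 8)
print(points.shape, kept.shape)               # at most 8 points per octant are removed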

How can I create a tridimensional map in python?

I need to create a three-dimensional map. I would like to create an x axis with values between 0 and 255 with step 0.5, and the same for the y axis.
Then I would assign a value to each coordinate (for example at the point (10.5, 10)).
A matrix is not the solution because I can't choose the values on the x and y axes.
Can you help me?
EDIT: I will try to explain the question better. This is a piece of my code:
img = cv2.imread('Lena256.bmp',0)
M = cv2.getRotationMatrix2D((cols/2,rows/2),angle,1)
img_rotate = cv2.warpAffine(img,M,(cols,rows))
Then I locate some point in img_rotate, for example p = (10, 10). I want to map "p" to the corresponding point in the original image. To do that I have written this code:
T = np.zeros((rows,cols))
T[10][10] = 1
M_INV = cv2.getRotationMatrix2D((cols/2,rows/2),-angle,1)
T = cv2.warpAffine(T,M_INV,(cols,rows))
This works. But if I locate a point with non-integer coordinates in img_rotate, for example (10.5, 10), I would have to create a matrix T with double the dimensions, in which I could assign the values 0, 0.5, 1, etc. in order to identify the point (10.5, 10), and then apply the inverse rotation.
I hope this is clear enough.
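As a possibly simpler alternative (not from the original question): since cv2.getRotationMatrix2D returns a plain 2x3 affine matrix, the inverse rotation can be applied directly to the floating-point coordinates instead of warping a marker image. A sketch, assuming rows, cols and angle as defined above:
import numpy as np
import cv2

M_INV = cv2.getRotationMatrix2D((cols / 2, rows / 2), -angle, 1)  # 2x3 affine matrix
p_rot = np.array([10.5, 10.0, 1.0])   # point in img_rotate, homogeneous coordinates
p_orig = M_INV @ p_rot                # corresponding (x, y) in the original image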
You could use a dictionary:
d = {
    (10.5, 10): 23,
    (0, 0): 42,
    (-100, 999): 15,
    # etc
}
Then you can access the value at some coordinates (x,y) by doing d[x,y]. (or d.get((x,y), default_value_goes_here) if you're not sure whether that coordinate exists in the collection yet)
You can use np.meshgrid for that:
import numpy as np
x_ = np.arange(0., 255.5, 0.5)   # 0 to 255 with step 0.5, as in the question
y_ = np.arange(0., 255.5, 0.5)
# You can make what you want in z, e.g.:
z_ = np.linspace(3., 4., 30)
x, y, z = np.meshgrid(x_, y_, z_, indexing='ij')
