I want to apply a transformation matrix to a set of points. The set of points is:
points = np.array([[0, 20], [0, 575], [0, 460]])
And I want to use the 3x3 matrix I calculated with cv2.getPerspectiveTransform():
matrix = np.array([
[ -4. , -3. , 1920. ],
[ -2.25 , -1.6875 , 1080. ],
[ -0.0020833, -0.0015625, 1. ]])
Then I pass the points and the matrix to the following function:
def poly_points_transform(poly_points, matrix):
    poly_points_transformed = np.empty_like(poly_points)
    for i in range(len(poly_points)):
        point = np.array([[poly_points[i]]])
        transformed_point = cv2.perspectiveTransform(point, matrix)
        np.append(poly_points_transformed, transformed_point)
    return poly_points_transformed
Now it doesn't throw an error, but it just copies the src array to poly_points_transformed. It might be something really rudimentary; if so, I am sorry, but could someone give me a hint on what is wrong? Thanks in advance.
We may solve it with one line of code:
transformed_point = cv2.perspectiveTransform(np.array([points], np.float64), matrix)[0]
As Micka commented, cv2.perspectiveTransform takes a list of points (and returns a list of points as output).
np.array([points]) is used because cv2.perspectiveTransform expects a 3D array.
For details see trouble getting cv.transform to work.
np.float64 is used in case the dtype of points is int32 (the method accepts float64 and float32 types).
[0] is used for removing the redundant dimension (convert from 3D to 2D).
To fix the loop, replace np.append(poly_points_transformed, transformed_point) with:
poly_points_transformed[i] = transformed_point[0]
Since the array is preallocated with poly_points_transformed = np.empty_like(poly_points), we can't use np.append(): it returns a new array rather than modifying its argument in place, so the appended result was silently discarded.
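A quick demonstration of that np.append behaviour (a minimal sketch):

import numpy as np

a = np.zeros(3)
result = np.append(a, 7.0)  # np.append returns a NEW array...
print(a)                    # [0. 0. 0.]    -- 'a' itself is unchanged
print(result)               # [0. 0. 0. 7.] -- the appended copy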
Code sample:
import cv2
import numpy as np
points = np.array([[0.0 ,20.0], [0.0, 575.0], [0.0, 460.0]])
matrix = np.array([
[ -4. , -3. , 1920. ],
[ -2.25 , -1.6875 , 1080. ],
[ -0.0020833, -0.0015625, 1. ]])
# transformed_point = cv2.perspectiveTransform(np.array([points], np.float64), matrix)[0]
def poly_points_transform(poly_points, matrix):
    poly_points_transformed = np.empty_like(poly_points)
    for i in range(len(poly_points)):
        point = np.array([[poly_points[i]]])
        transformed_point = cv2.perspectiveTransform(point, matrix)
        poly_points_transformed[i] = transformed_point[0]  # was: np.append(poly_points_transformed, transformed_point)
    return poly_points_transformed
poly_points_transformed = poly_points_transform(points, matrix)
The result is:
poly_points_transformed =
array([[1920., 1080.],
[1920., 1080.],
[1920., 1080.]])
Why are we getting the value [1920.0, 1080.0] for all the transformed points?
Let's transform the middle point mathematically.
Multiply the matrix by the point (with 1 as the third element):

[ -4.       , -3.       , 1920. ]   [  0]
[ -2.25     , -1.6875   , 1080. ] * [575] =
[ -0.0020833, -0.0015625,    1. ]   [  1]

p = matrix @ np.array([[0.0], [575.0], [1.0]]) =

[1.950000e+02]
[1.096875e+02]
[1.015625e-01]

Now divide the coordinates by the last element (converting homogeneous coordinates to Euclidean coordinates):

[1.950000e+02/1.015625e-01]              [1920]
[1.096875e+02/1.015625e-01] = p / p[2] = [1080]
[1.015625e-01/1.015625e-01]              [   1]

The equivalent Euclidean point is [1920, 1080].
The transformation matrix may be wrong, because it transforms all the input points (with x coordinate equal to 0) to the same output point...
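For completeness, the same check can be run for all three points at once with plain NumPy (a small sketch using the data from the question):

import numpy as np

matrix = np.array([
    [-4.       , -3.       , 1920.],
    [-2.25     , -1.6875   , 1080.],
    [-0.0020833, -0.0015625,    1.]])
points = np.array([[0.0, 20.0], [0.0, 575.0], [0.0, 460.0]])

# Append a homogeneous coordinate of 1, transform, then divide by w.
homog = np.column_stack([points, np.ones(len(points))])  # shape (3, 3)
p = homog @ matrix.T                                     # transformed homogeneous points
euclidean = p[:, :2] / p[:, 2:]                          # divide x and y by w
print(euclidean)  # every row comes out (approximately) [1920., 1080.]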
Related
I'm trying to get the L and U matrices from the following Gaussian-elimination code I wrote:
matrix = np.array([[2, 1, 4, 1], [3, 4, -1, -1], [1, -4, 1, 5], [2, -2, 1, 3]], dtype=float)
vector = np.array([-4, 3, 9, 7], float)
length = len(vector)
L_matrix = np.zeros((4, 4), float)
U_matrix = np.zeros((4, 4), float)
for m in range(length):
    L_matrix[:, m] = matrix[:, m]
    div = matrix[m, m]
    matrix[m, :] /= div
    U_matrix[m, :] = matrix[m, :]
    vector[m] /= div
I'm getting the right U-matrix, but I'm getting this L-matrix
[[ 2. 0.5 2. 0.5]
[ 3. 2.5 -2.8 -1. ]
[ 1. -4.5 -13.6 -0. ]
[ 2. -3. -11.4 -1. ]]
i.e. I'm getting the whole matrix instead of a lower triangular matrix with zeros at the top! What am I doing wrong here?
The issue here is that the provided code does not perform the elimination. Try this:
for m in range(length):
    div = matrix[m, m]
    L_matrix[:, m] = matrix[:, m] / div
    U_matrix[m, :] = matrix[m, :]
    matrix -= np.outer(L_matrix[:, m], U_matrix[m, :])
See this article for more details. For actually solving your linear system, the issue is that LU decomposition is not exactly the same as standard Gaussian elimination: use forward and back substitution on the triangular factors to efficiently solve for vector, as sketched below.
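A minimal sketch of that solve step (assuming the L_matrix and U_matrix produced by the loop above, where L_matrix carries the unit diagonal; lu_solve is a hypothetical helper name):

import numpy as np

def lu_solve(L, U, b):
    # Solve (L @ U) x = b with forward then back substitution.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                # forward: L y = b (L has a unit diagonal)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):    # backward: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Sanity check against the original system:
# x = lu_solve(L_matrix, U_matrix, vector)
# np.allclose(L_matrix @ U_matrix @ x, vector)  # should be True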
I'm trying to scale the following NumPy array based on its minimum and maximum values.
array = [[17405.051 17442.4 17199.6 17245.65 ]
[17094.949 17291.75 17091.15 17222.75 ]
[17289. 17294.9 17076.551 17153. ]
[17181.85 17235.1 17003.9 17222. ]]
The formula used is:
m = (x - xmin) / (xmax - xmin)
where m is an individually scaled item, x is an individual item, xmax is the largest value in the array, and xmin is the smallest.
My question is how do I print the scaled array?
P.S. - I can't use MinMaxScaler as I need to scale a given number (outside the array) by plugging it in the mentioned formula with xmin & xmax of the given array.
I tried scaling the individual items by iterating over the array but I'm unable to put together the scaled array.
I'm new to NumPy, any suggestions would be welcome.
Thank you.
Use the methods ndarray.min(), ndarray.max(), or ndarray.ptp() (which gives the range of the values in the array):
>>> ar = np.array([[17405.051, 17442.4, 17199.6, 17245.65 ],
... [17094.949, 17291.75, 17091.15, 17222.75 ],
... [17289., 17294.9, 17076.551, 17153. ],
... [17181.85, 17235.1, 17003.9, 17222. ]])
>>> min_val = ar.min()
>>> range_val = ar.ptp()
>>> (ar - min_val) / range_val
array([[0.91482554, 1. , 0.44629418, 0.55131129],
[0.2076374 , 0.65644242, 0.19897377, 0.4990878 ],
[0.65017104, 0.663626 , 0.16568073, 0.34002281],
[0.40581528, 0.527252 , 0. , 0.49737742]])
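Regarding the P.S.: with min_val and range_val in hand, a number from outside the array can be scaled with the same formula (a small sketch; x_outside is a hypothetical value):

x_outside = 17300.0                    # hypothetical value, not in the array
m = (x_outside - min_val) / range_val  # same formula, same xmin and range
print(m)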
I think you should learn more about the basic operations of NumPy.
import numpy as np
array_list = [[17405.051, 17442.4, 17199.6, 17245.65 ],
[17094.949, 17291.75, 17091.15, 17222.75 ],
[17289., 17294.9, 17076.551, 17153., ],
[17181.85, 17235.1, 17003.9, 17222. ]]
# Convert list into numpy array
array = np.array(array_list)
# Create empty list
scaled_array_list=[]
for x in array:
    m = (x - np.min(array))/(np.max(array)-np.min(array))
    scaled_array_list.append(m)
# Convert list into numpy array
scaled_array = np.array(scaled_array_list)
scaled_array
My version iterates over the array, as you said.
You can also put everything in a function and use it in the future:
def scaler(array_to_scale):
    # Create empty list
    scaled_array_list = []
    for x in array_to_scale:
        m = (x - np.min(array_to_scale))/(np.max(array_to_scale) - np.min(array_to_scale))
        scaled_array_list.append(m)
    # Convert list into numpy array
    scaled_array = np.array(scaled_array_list)
    return scaled_array
# Here it is our input
array_list = [[17405.051, 17442.4, 17199.6, 17245.65 ],
[17094.949, 17291.75, 17091.15, 17222.75 ],
[17289., 17294.9, 17076.551, 17153., ],
[17181.85, 17235.1, 17003.9, 17222. ]]
# Convert list into numpy array
array = np.array(array_list)
scaler(array)
Output:
array([[0.91482554, 1. , 0.44629418, 0.55131129],
[0.2076374 , 0.65644242, 0.19897377, 0.4990878 ],
[0.65017104, 0.663626 , 0.16568073, 0.34002281],
[0.40581528, 0.527252 , 0. , 0.49737742]])
My aim is to interpolate some data. To do that, I have to create a meshgrid.
For this step, I have an array "coord" with my 2D coordinates (first column: element number, second: X, third: Y).
I build a meshgrid with np.meshgrid, as you can see below.
But my results seem strange, so I would like to know if I have made a mistake. Must I reorganize my data before the meshgrid step?
import numpy as np
coord = np.array([[ 1. , -1.38888667, -1.94444333],
[ 2. , -1.94444333, -1.38888667],
[ 3. , 0.27777667, -1.94444333],
[ 4. , -0.27777667, -1.38888667],
[ 5. , 1.94444333, -1.94444333],
[ 6. , 1.38888667, -1.38888667],
[ 7. , -1.38888667, -0.27777667],
[ 8. , -1.94444333, 0.27777667],
[ 9. , 0.27777667, -0.27777667],
[ 10. , -0.27777667, 0.27777667],
[ 11. , 1.94444333, -0.27777667],
[ 12. , 1.38888667, 0.27777667],
[ 13. , -1.38888667, 1.38888667],
[ 14. , -1.94444333, 1.94444333],
[ 15. , 0.27777667, 1.38888667],
[ 16. , -0.27777667, 1.94444333],
[ 17. , 1.94444333, 1.38888667],
[ 18. , 1.38888667, 1.94444333]])
[Y,X]=np.meshgrid(coord[:,2],coord[:,1])
If I plot Y, I get this:
import matplotlib.pyplot as plt
plt.imshow(Y); plt.colorbar(); plt.show()
---- EDIT LATER -----
I'm wondering (for example) whether the coordinates passed to meshgrid have to be strictly increasing, and whether there is a better way when my coordinates are not organized (see the sketch below for comparison).
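(As a point of comparison, np.meshgrid is normally fed 1D axes of unique, sorted values rather than raw scattered coordinate columns; a minimal sketch, reusing the coord array above:)

import numpy as np

x_axis = np.unique(coord[:, 1])    # sorted unique X values
y_axis = np.unique(coord[:, 2])    # sorted unique Y values
X, Y = np.meshgrid(x_axis, y_axis)
# X and Y now describe a regular grid, one entry per (x, y) combination.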
For the interpolation, I would like to use:
def interpolate(values, tri, uv, d=2):
    simplex = tri.find_simplex(uv)
    vertices = np.take(tri.simplices, simplex, axis=0)
    temp = np.take(tri.transform, simplex, axis=0)
    delta = uv - temp[:, d]
    bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta)
    return np.einsum('nj,nj->n', np.take(values, vertices),
                     np.hstack((bary, 1.0 - bary.sum(axis=1, keepdims=True))))
which was used in an earlier Stack Overflow answer, Speedup scipy griddata for multiple interpolations between two irregular grids, to limit the calculation time.
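(For context, the tri argument of interpolate is expected to be a scipy.spatial.Delaunay triangulation; a minimal sketch of how it might be called, with hypothetical sample values at the scattered points:)

import numpy as np
from scipy.spatial import Delaunay

points = coord[:, 1:]                     # the (X, Y) columns from above
values = np.sin(points[:, 0]) * np.cos(points[:, 1])  # hypothetical values
tri = Delaunay(points)                    # triangulate the scattered points once
uv = np.array([[0.0, 0.0], [1.0, 1.0]])   # query locations inside the hull
print(interpolate(values, tri, uv))       # barycentric interpolation at uv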
Currently I'm trying to solve the generalized eigenvalue problem in NumPy for two symmetric matrices, and I've been running into massive trouble: I'm expecting all eigenvalues to be positive, but eigh returns several very large numbers that are not all positive, while eig returns the correct, expected values (but is, of course, very, very slow).
In this case, note that K is symmetric as expected from its construction (here is the code in question):
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(diag(w*peq[idx_s:idx_e])).dot(dmat)
# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)
# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(peq*w_d, 0)
# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
    return
For testing, dmat is defined as
array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
And w, w_d, and peq are essentially arbitrary positive-valued arrays. w_d and w are of the same order (~1e-1), and peq ranges over roughly 1e-10 to 1e1.
Some of the output I'm getting is
Literally why
[ -6.25540943e+07 -4.82660391e+07 -2.62629052e+07 ..., 1.07960873e+10
1.07967334e+10 4.26007915e+10]
[ -5.25462340e-12+0.j 4.62614812e-01+0.j 1.23357898e+00+0.j ...,
2.17613917e+06+0.j 1.07967334e+10+0.j 4.26007915e+10+0.j]
EDIT:
Here's a self-contained version of the code for easier debugging
import numpy as np
from math import *
from scipy import sparse, linalg
# Variable declarations and such (pre-computed)
Ne, Np = 256, 8
N = Ne*Np - Ne + 1
domain_size = 4/Ne
x = np.array([-0.015625 , -0.01362094, -0.00924532, -0.0032703 , 0.0032703 ,
0.00924532, 0.01362094, 0.015625 ])
w = np.array([ 0.00055804, 0.00329225, 0.00533004, 0.00644467, 0.00644467,
0.00533004, 0.00329225, 0.00055804])
dmat = np.array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
# More declarations
x_d = np.zeros(N)
w_d = np.zeros(N)
dmat_d = np.zeros((N, N))
for i in range(Ne):
    x_d[i*(Np-1):i*(Np-1)+Np] = x+i*domain_size
    w_d[i*(Np-1):i*(Np-1)+Np] += w
    dmat_d[i*(Np-1):i*(Np-1)+Np, i*(Np-1):i*(Np-1)+Np] += dmat
peq = (np.cos((x_d-2)*pi/4))**2
# Normalization
peq = peq/np.sum(w_d*peq)
p0 = np.maximum(peq, 1e-10)
p0 /= np.sum(p0*w_d)
# Make efficient matrix that can be built
K = sparse.lil_matrix((N, N))
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(np.diag(w*p0[idx_s:idx_e])).dot(dmat)
# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)
# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(p0*w_d, 0)
# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not np.allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
EDIT2: This is really odd. Running all of the NumPy/SciPy tests on my machine, I receive no errors. But even running the simple test (with large enough matrices) as
import numpy as np
from scipy import linalg
M = np.random.random((1000,1000))
M += M.T
np.allclose(sorted(linalg.eigh(M)[0]), sorted(linalg.eig(M)[0]))
fails on my machine. Though running the same test with a 50x50 matrix does work, even after rebuilding the SciPy/NumPy stack and passing all unit tests.
EDIT3: Actually, this seems to fail everywhere, after testing it on a cluster computer. I'm not sure why.
The above fails due to the in-place behaviour of += combined with the fact that .T is a view rather than a copy: while M += M.T runs, it can read elements of the transpose that have already been overwritten, so M does not end up exactly symmetric, and eigh (which assumes symmetry and reads only one triangle) diverges from eig.
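A minimal sketch of the fix for the small test case: evaluate the right-hand side before M is modified (older NumPy versions did not guard against this overlap, so forcing a copy is the safe pattern):

import numpy as np
from scipy import linalg

M = np.random.random((1000, 1000))
M = M + M.T             # out-of-place add: genuinely symmetric result
# or: M += M.T.copy()   # copy the transpose before the in-place add

print(np.allclose(M, M.T))  # True: symmetry holds
print(np.allclose(sorted(linalg.eigh(M)[0]),
                  sorted(linalg.eig(M)[0].real)))  # now agrees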
I have an (n, 2) NumPy array which contains the coordinates of n points. Now I want to sort them based on the proximity of each element to a specific point (x, y) and pick the closest one. How can I achieve this?
Right now I have:
def find_nearest(array, value):
    xlist = (np.abs(array[:, 0]-value[:, 0]))
    ylist = (np.abs(array[:, 1]-value[:, 1]))
    newList = np.vstack((xlist, ylist))
    # SORT NEW LIST and return the 0 element
In my solution I need to sort newList based on proximity to (0, 0), and I don't know how. Any solution for this, or any other approach?
My array of points looks like:
array([[ 0.1648, 0.227 ],
[ 0.2116, 0.2472],
[ 0.78 , 0.546 ],
[ 0.9752, 1. ],
[ 0.384 , 0.4862],
[ 0.4428, 0.2204],
[ 0.4448, 0.4146],
[ 0.1046, 0.2658],
[ 0.5668, 0.7792],
[ 0.1664, 0.0746],
[ 0.5636, 0.6372],
[ 0.7822, 0.5536],
[ 0.7718, 0.8276],
[ 0.9916, 1. ],
[ 0. , 0. ],
[ 0.8206, 0.817 ],
[ 0.4858, 0.4652],
[ 0. , 0. ],
[ 0.1574, 0.3114],
[ 0. , 0.0022],
[ 0.874 , 0.714 ],
[ 0.148 , 0.6624],
[ 0.0656, 0.5912],
[ 0.1148, 0.607 ],
[ 0.069 , 0.6296]])
Sorting to find the nearest point is not a good idea. If you want the closest point, then just find the closest point; sorting for that is overkill.
def closest_point(arr, x, y):
    dist = (arr[:, 0] - x)**2 + (arr[:, 1] - y)**2
    return arr[dist.argmin()]
Moreover, if you need to repeat the search many times with a fixed or quasi-fixed set of points, there are specific data structures (such as k-d trees) that can speed up this kind of query a lot (the search time becomes sub-linear).
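For instance, a minimal sketch with SciPy's cKDTree, one such data structure:

import numpy as np
from scipy.spatial import cKDTree

points = np.random.random((1000, 2))  # hypothetical fixed set of points
tree = cKDTree(points)                # build the tree once
dist, idx = tree.query([0.5, 0.5])    # each nearest-neighbour query is fast
print(points[idx], dist)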
If you just want the Cartesian distance you can do something like the following:
def find_nearest(arr, value):
    newList = arr - value
    sort = np.sum(np.power(newList, 2), axis=1)
    return arr[sort.argmin()]  # index the original array to return the point itself
I am assuming newList has a shape of (n, 2). As a note, I changed the input variable array to arr to avoid issues if NumPy is imported like: from numpy import *.
If you have scipy the following works:
import scipy.spatial.distance as ds
import numpy as np
pointOfInterest = np.array([[0, 0]])
Then:
arr[ds.cdist(pointOfInterest, arr)[0].argsort()[0]]
arr is your array above.
How about just using the key parameter in sorted?
sorted(p, key=lambda q: (q[0]-m)**2 + (q[1]-n)**2)
Here p is of the form array([[1, 2], [3, 4], ...]) and (m, n) is the reference point; the first element of the sorted result is the closest point.
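A quick runnable example of this approach (with a hypothetical reference point):

import numpy as np

p = np.array([[1, 2], [3, 4], [0, 1]])
m, n = 0, 0  # hypothetical reference point
nearest = sorted(p, key=lambda q: (q[0]-m)**2 + (q[1]-n)**2)[0]
print(nearest)  # [0 1] -- the row of p closest to (m, n)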