Is it possible to calculate the distance between points which I created with cv2.projectPoints?
I have two ArUco markers, and from both markers I have created (with cv2.projectPoints) points which sit at a specific distance from their marker. Now I want to know how far these points are from each other.
I know you can't give specific code without an MVP, and that isn't necessary; I only need an idea of how to calculate this. It would be awesome if someone knows a cv2 function or a way to calculate these.
Thank you very much <3
Edit:
I generated the four matrices and inverted the ones that needed inverting. Code and result are below.
import numpy as np

# T_point1_marker1 = np.linalg.inv(T_marker1_point1)
# T_marker1_cam = np.linalg.inv(T_cam_marker1)
T_point1_marker1 = np.array([
[ 1., 0., 0., -0.1 ],
[ 0., 1., 0., -0.05],
[ 0., 0., 1., 0. ],
[ 0., 0., 0., 1. ],
])
T_marker1_cam = np.array([
[ 1., 0., 0., 0.10809129],
[ 0., 1., 0., 0.03833054],
[ 0., 0., 1., -0.35931477],
[ 0., 0., 0., 1. ],
])
T_cam_marker2 = np.array([
[ 1., 0., 0., 0.09360527],
[ 0., 1., 0., -0.01229168],
[ 0., 0., 1., 0.36470099],
[ 0., 0., 0., 1. ],
])
T_marker2_point2 = np.array([
[ 1., 0., 0., 0.005],
[ 0., 1., 0., 0.1 ],
[ 0., 0., 1., 0. ],
[ 0., 0., 0., 1. ],
])
The thing I don't understand is this part:
T_point1_point2 = T_point1_marker1 @ T_marker1_cam @ T_cam_marker2 @ T_marker2_point2
How do I bring these four matrices together so that I get T_point1_point2?
Thanks again :)
Since your graphic contains measurements of physical distance, rather than pixels, I'll assume you're asking about 3D, i.e. you want a 3D distance between those points...
You just need to define the poses of those points, relative to their markers. That is T_marker1_point1 and T_marker2_point2. Make those be pure translation, probably with Z=0 if these points are in each respective marker's plane. Literally make a 4x4 identity matrix, then stick your nominal (constructed) dimensions into the last column.
Then you need the marker poses relative to the camera, T_cam_marker1 and T_cam_marker2.
Finally you calculate
T_point1_point2 = T_point1_marker1 @ T_marker1_cam @ T_cam_marker2 @ T_marker2_point2
# where
# T_marker1_cam = np.linalg.inv(T_cam_marker1)
# and so on
The translation part of that pose matrix gives you the vector between those points; its norm is the distance. You can ignore the rotation component. That'd only give you the rotation between those markers, because your points were defined as poses with the same orientation as their respective markers. Yes, orientation is silly for points, but eh...
All of that is 4x4 matrices. Compose each one from tvec, put into the last column, and rvec, turned into a 3x3 rotation matrix using cv.Rodrigues. Decompose a 4x4 matrix into rvec and tvec accordingly (Rodrigues goes both ways).
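To make the composition concrete, here is a minimal sketch using the (already inverted) matrices from the edit above; @ is numpy's matrix-multiplication operator:
import numpy as np

# the four 4x4 poses from the edit: identity rotation, pure translation
T_point1_marker1 = np.eye(4); T_point1_marker1[:3, 3] = [-0.1, -0.05, 0.0]
T_marker1_cam    = np.eye(4); T_marker1_cam[:3, 3]    = [0.10809129, 0.03833054, -0.35931477]
T_cam_marker2    = np.eye(4); T_cam_marker2[:3, 3]    = [0.09360527, -0.01229168, 0.36470099]
T_marker2_point2 = np.eye(4); T_marker2_point2[:3, 3] = [0.005, 0.1, 0.0]

# chain the transforms: point1 -> marker1 -> camera -> marker2 -> point2
T_point1_point2 = T_point1_marker1 @ T_marker1_cam @ T_cam_marker2 @ T_marker2_point2

# the translation column is the vector from point1 to point2; its norm is the distance
distance = np.linalg.norm(T_point1_point2[:3, 3])
print(distance)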
I need to change values in an ndarray: if a value is > 0.5 it becomes 1, otherwise 0. Example:
array([ 0.11865984, 0.49666225, 0.134077 , 0.09502114, 0.70667432,
0.1648032 , 0.6356094 , 0.36054945, 0.68379407, 0.09297716])
You can try something like
import numpy as np
arr = np.array([ 0.11865984, 0.49666225, 0.134077 , 0.09502114, 0.70667432,
0.1648032 , 0.6356094 , 0.36054945, 0.68379407, 0.09297716])
bool_condition_array = arr > 0.5                 # boolean mask: True where > 0.5
binary_array = bool_condition_array.astype(int)  # True/False -> 1/0
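If you prefer a one-liner, np.where does the same thresholding in a single step:
binary_array = np.where(arr > 0.5, 1, 0)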
np.around evenly rounds to the given number of decimals. If you have an np.array you can use the np.around function (note that it rounds exactly 0.5 to the nearest even value, so 0.5 itself goes to 0):
import numpy as np
arr = np.array([ 0.11865984, 0.49666225, 0.134077 , 0.09502114, 0.70667432,
0.1648032 , 0.6356094 , 0.36054945, 0.68379407, 0.09297716])
result = np.around(arr)
print(result)
# array([0., 0., 0., 0., 1., 0., 1., 0., 1., 0.])
I have a list of vectors (each vector contains only 0s and 1s):
In [3]: allLabelPredict
Out[3]: array([[ 0., 0., 0., ..., 0., 0., 1.],
[ 0., 0., 0., ..., 0., 0., 1.],
[ 0., 0., 0., ..., 0., 0., 1.],
...,
[ 0., 0., 0., ..., 0., 0., 0.],
[ 0., 0., 0., ..., 0., 0., 1.],
[ 0., 0., 0., ..., 0., 0., 1.]])
In [4]: allLabelPredict.shape
Out[4]: (5000, 190)
As you can see, I have 190 different vectors; each vector is the output of one classifier. Now I want to select some of these outputs based on the proximity of each vector to my original label:
In [7]: myLabel
Out[7]: array([ 0., 0., 0., ..., 1., 1., 1.])
In [8]: myLabel.shape
Out[8]: (5000,)
For this purpose I've defined two different criteria for each vector: "Zero Hamming Distance" and "One Hamming Distance".
"One Hamming Distance": the Hamming distance between the sub-array of myLabel whose values equal 1 and the corresponding sub-array of each vector (I create each vector's sub-array by selecting its values at the indices where myLabel is 1).
"Zero Hamming Distance": the Hamming distance between the sub-array of myLabel whose values equal 0 and the corresponding sub-array of each vector (selected at the indices where myLabel is 0).
To make it clearer, I will give you a small example:
MyLabel [1,1,1,1,0,0,0,0]
V1 [1,1,0,1,0,0,1,1]
sub-array1 [1,1,0,1]
sub-array0 [0,0,1,1]
"zero Hamming Distance": hamming(sub-array0, MyLabel[4:])
"one Hamming Distance": hamming(sub-array1, MyLabel[:4])
Now I want to select some vectors from allLabelPredict based on "One Hamming Distance" and "Zero Hamming Distance". I want to select those vectors which have the minimum "One Hamming Distance" and "Zero Hamming Distance" (by minimum I mean both criteria for this vector are the lowest among all vectors).
If the above is not possible, how can I instead sort first by "One Hamming Distance" and then, within ties, minimize "Zero Hamming Distance"?
OK, so first I'd split up the entire allLabelPredict into two subarrays based on the values in myLabel:
import numpy as np
allLabelPredict = np.random.randint(0, 2, (5000, 190))
myLabel = np.random.randint(0, 2, 5000)
sub0 = allLabelPredict[myLabel==0]   # rows where the true label is 0
sub1 = allLabelPredict[myLabel==1]   # rows where the true label is 1
ham0 = np.abs(sub0 - 0).mean(0)      # per-classifier "zero Hamming distance"
ham1 = np.abs(sub1 - 1).mean(0)      # per-classifier "one Hamming distance"
hamtot = np.abs(allLabelPredict - myLabel[:, None]).mean(0)  # if they're not split
This is the same as scipy.spatial.distance.hamming, but that can only be applied to one vector at a time:
>>> np.allclose(scipy.spatial.distance.hamming(allLabelPredict[:,0], myLabel),
... np.abs(allLabelPredict[:,0] - myLabel).mean(0))
True
Now, the indices in either ham array will be the indices in the second axis of the allLabelPredict array. If you want to sort your vectors by hamming distance:
sortby0 = allLabelPredict[:, ham0.argsort()]
sortby1 = allLabelPredict[:, ham1.argsort()]
Or if you want the lowest zero (or one) hamming, you would look at
best0 = allLabelPredict[:, ham0.argmin()]
best1 = allLabelPredict[:, ham1.argmin()]
Or if you want the lowest one hamming with zero hamming near 0.1, you could say something like
hamscore = (ham0 - 0.1)**2 + ham1**2
best = allLabelPredict[:, hamscore.argmin()]
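And if you want the literal "sort by one hamming first, break ties by zero hamming" behaviour you asked for, np.lexsort does that directly (note that the last key passed is the primary sort key):
order = np.lexsort((ham0, ham1))             # primary key ham1, ties broken by ham0
sortedByOneThenZero = allLabelPredict[:, order]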
The crux of the answer is this: use sorted(allLabelPredict, key=<criteria>).
It lets you sort the list based on the criteria you define as a function and pass to the key argument.
To do this, first let's convert your 190 vectors into pairs of (0-H Dist, 1-H Dist). Then you'll have something like this:
(0.10, 0.15)
(0.12, 0.09)
(0.25, 0.03)
(0.14, 0.16)
(0.14, 0.11)
...
Next, we need to clarify what you meant by "both criteria for this vector be the lowest amongst others". In the above case, should we choose (0.25, 0.03)? Or is it (0.10, 0.15)? How about (0.14, 0.11)? Fortunately you already said that in this case we need to prioritize 1-H Dist first, so we will choose (0.25, 0.03); is this correct? From your comments on @askewchan's answer it seems that you want the sort criteria to be flexible.
If that's so, then your first criterion that "both criteria for this vector be the lowest amongst others" is actually part of your second criterion, which is "sort based on One Hamming Distance, then by Zero Hamming Distance", since after the sorting the vector with the lowest distance on both scores will be at the top anyway.
Hence we just need to sort based on 1-H Dist and then by 0-H Dist when the 1-H Dist score is the same. This sort criterion can be changed flexibly, as long as you already have the pair of scores.
Here is a sample code:
import numpy as np
from scipy.spatial.distance import hamming

def sort_criteria(pair_of_scores):
    score0, score1 = pair_of_scores
    return (score1, score0)  # Sort by 1-H, then by 0-H
    # The following would sort by Euclidean distance instead:
    # return score0**2 + score1**2
    # The following would put the vectors with score0 == 0.5 first, sorted by
    # score1, with the rest after them:
    # return (0, score1) if np.abs(score0 - 0.5) < 1e-7 else (1, score1, score0)

def main():
    allLabelPredict = np.asarray(np.random.randint(0, 2, (5, 10)), dtype=np.float64)
    myLabel = np.asarray(np.random.randint(0, 2, 10), dtype=np.float64)
    print(allLabelPredict)
    print(myLabel)
    allSub0 = allLabelPredict[:, myLabel == 0]
    allSub1 = allLabelPredict[:, myLabel == 1]
    all_scores = [(hamming(sub0, np.zeros_like(sub0)), hamming(sub1, np.ones_like(sub1)))
                  for sub0, sub1 in zip(allSub0, allSub1)]
    print(all_scores)  # The (0-H, 1-H) score pairs
    all_scores = sorted(all_scores, key=sort_criteria)  # The sorting
    # all_scores = np.array([pair for pair in all_scores if pair[0] == 0.5])  # For filtering
    print(all_scores)

if __name__ == '__main__':
    main()
Result:
[[ 1. 0. 0. 0. 0. 1. 1. 0. 1. 1.]
[ 1. 0. 0. 0. 1. 0. 1. 0. 0. 1.]
[ 0. 1. 1. 0. 1. 1. 1. 1. 1. 0.]
[ 0. 0. 1. 1. 1. 1. 1. 0. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]]
[ 1. 1. 1. 1. 1. 0. 1. 1. 0. 1.]
[(1.0, 0.625), (0.0, 0.5), (1.0, 0.375), (1.0, 0.375), (0.5, 0.375)]
[(0.5, 0.375), (1.0, 0.375), (1.0, 0.375), (0.0, 0.5), (1.0, 0.625)]
You just need to change the sort_criteria function to change your criteria.
If you sort first by one criterion, then the other, the first entry in that sort will be the only one that could simultaneously minimize both criteria.
You can do that operation with numpy using argsort. This requires you to build a structured numpy array with named fields as keys. I will assume that you have arrays called zeroHamming and oneHamming.
# make a structured array of the distances with named fields
# these must be input as pairs (tuples), not as columns
hammingDistances = np.array([(one, zero) for one, zero in zip(oneHamming, zeroHamming)],
                            dtype=[("one", "float"), ("zero", "float")])
# to see how the field names work, try:
print(hammingDistances['zero'])
# sort by oneHamming, then by zeroHamming
sortedIndsOneFirst = np.argsort(hammingDistances, order=['one', 'zero'])
# sort by zeroHamming, then by oneHamming
sortedIndsZeroFirst = np.argsort(hammingDistances, order=['zero', 'one'])
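To pull the single best vector out of those sorted indices, reusing allLabelPredict from the question, you could then do something like:
bestOneFirst = allLabelPredict[:, sortedIndsOneFirst[0]]  # column with the smallest (one, zero) pair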
It's easier to work with as1 = allLabelPredict.T, because then as1[0] will be your first vector, as1[1] your second, etc. Then your Hamming distance function is simply:
def ham(a1, b1): return sum(map(abs, a1-b1))
So, if you want the vectors that match your criterion, you can use a list comprehension:
vects = numpy.array( [ a for a in as1 if ham(a, myLabel) < 2 ] )
where myLabel is the vector you want to compare against.
I need to calculate n points (3D) with equal spacing along a defined line (3D).
I know the starting and end points of the line. First, I used
for k in range(nbin):
step = k/float(nbin-1)
bin_point.append(beam_entry+(step*(beamlet_intersection-beam_entry)))
Then I found that using append for large arrays takes more time, so I changed the code to this:
bin_point = [start_point+((k/float(nbin-1))*(end_point-start_point)) for k in range(nbin)]
I got a suggestion that using newaxis would further improve the time.
The modified code looks like this:
step = arange(nbin) / float(nbin-1)
bin_point = start_point + ( step[:,newaxis,newaxis]*((end_point - start_point))[newaxis,:,:] )
But I could not understand the newaxis function. I also have a doubt whether the same code will work if the structure or shape of start_point and end_point is changed. Similarly, how can I use newaxis to modify the following code?
for j in range(32): # for all los
line_dist[j] = sqrt([sum(l) for l in (end_point[j]-start_point[j])**2])
Sorry for being so clunky; to be more clear, the structure of start_point and end_point is
array([ [[1,1,1],[],[],[]....[]],
[[],[],[],[]....[]],
[[],[],[],[]....[]]......,
[[],[],[],[]....[]] ])
Explanation of the newaxis version in the question: these are not matrix multiplies; ndarray multiplication is element-by-element multiplication with broadcasting. step[:,newaxis,newaxis] is num_steps x 1 x 1 and point[newaxis,:,:] is 1 x num_points x num_dimensions. Broadcasting ndarrays with shapes (num_steps x 1 x 1) and (1 x num_points x num_dimensions) together works because the broadcasting rule is that every dimension must be either 1 or the same; a dimension of 1 just means "repeat the array along that dimension as many times as the corresponding dimension of the other array". This produces an ndarray with shape (num_steps x num_points x num_dimensions) in a very efficient way; the i, j, k element is the k-th coordinate of the i-th step along the j-th line (given by the j-th pair of start and end points).
Walkthrough:
>>> start_points = numpy.array([[1, 0, 0], [0, 1, 0]])
>>> end_points = numpy.array([[10, 0, 0], [0, 10, 0]])
>>> steps = numpy.arange(10)/9.0
>>> start_points.shape
(2, 3)
>>> steps.shape
(10,)
>>> steps[:,numpy.newaxis,numpy.newaxis].shape
(10, 1, 1)
>>> (steps[:,numpy.newaxis,numpy.newaxis] * start_points).shape
(10, 2, 3)
>>> (steps[:,numpy.newaxis,numpy.newaxis] * (end_points - start_points)) + start_points
array([[[ 1., 0., 0.],
[ 0., 1., 0.]],
[[ 2., 0., 0.],
[ 0., 2., 0.]],
[[ 3., 0., 0.],
[ 0., 3., 0.]],
[[ 4., 0., 0.],
[ 0., 4., 0.]],
[[ 5., 0., 0.],
[ 0., 5., 0.]],
[[ 6., 0., 0.],
[ 0., 6., 0.]],
[[ 7., 0., 0.],
[ 0., 7., 0.]],
[[ 8., 0., 0.],
[ 0., 8., 0.]],
[[ 9., 0., 0.],
[ 0., 9., 0.]],
[[ 10., 0., 0.],
[ 0., 10., 0.]]])
As you can see, this produces the correct answer :) In this case broadcasting (10,1,1) and (2,3) results in (10,2,3). What you had is broadcasting (10,1,1) and (1,2,3) which is exactly the same and also produces (10,2,3).
The code for the distance part of the question does not need newaxis: the inputs are num_points x num_dimensions, the output is num_points, so one dimension has to be removed. That is actually the axis you sum along. This should work:
line_dist = numpy.sqrt( numpy.sum( (end_point - start_point) ** 2, axis=1 ) )
Here numpy.sum(..., axis=1) means sum along that axis only, rather than over all elements: an ndarray with shape num_points x num_dimensions summed along axis=1 produces a result with shape num_points, which is correct.
I haven't fully worked through everything you wrote, but some things I can already tell you; maybe they help.
newaxis is a marker rather than a function (in fact, it is plain None). It is used to add an (unused) dimension to a multi-dimensional value. With it you can make a 3D value out of a 2D value (or even more). Each dimension already present in the input value must be represented by a colon : in the index (assuming you want to use all values, otherwise it gets complicated beyond our use case); the dimensions to be added are denoted by newaxis.
Example:
input is a one-dimensional vector (1D): 1,2,3
output shall be a matrix (2D).
There are two ways to accomplish this: the vector could fill the rows with one value each, or the vector could fill just the first and only row of the matrix. The first is created by vector[:,newaxis], the second by vector[newaxis,:]. Results of this:
>>> array([ 7,8,9 ])[:,newaxis]
array([[7],
[8],
[9]])
>>> array([ 7,8,9 ])[newaxis,:]
array([[7, 8, 9]])
(Dimensions of multi-dimensional values are represented by nesting of arrays of course.)
If you have more dimensions in the input, use the colon more than once (otherwise the deeper nested dimensions are simply ignored, i.e. the arrays are treated as simple values). I won't paste a representation of this here as it won't clarify things due to the optical complexity when 3D and 4D values are written on a 2D display using nested brackets. I hope it gets clear anyway.
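If it helps, here is a shapes-only sketch of a 2D input, which avoids that visual complexity:
>>> from numpy import array, newaxis
>>> m = array([[1, 2], [3, 4]])
>>> m[:, newaxis, :].shape   # new middle axis
(2, 1, 2)
>>> m[newaxis, :, :].shape   # new leading axis
(1, 2, 2)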
The newaxis reshapes the array in such a way that when you multiply, numpy uses broadcasting. Here is a good tutorial on broadcasting.
step[:, newaxis, newaxis] is the same as step.reshape((step.shape[0], 1, 1)) (if step is 1d). Either method of reshaping should be very fast, because reshaping arrays in numpy is very cheap: it just makes a view of the array. And you should only be doing it once anyway.
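A quick sketch to verify both the equivalence and the no-copy claim (np.shares_memory reports whether two arrays overlap in memory):
import numpy as np
step = np.arange(5) / 4.0
a = step[:, np.newaxis, np.newaxis]
b = step.reshape((step.shape[0], 1, 1))
print(a.shape == b.shape)         # True: both are (5, 1, 1)
print(np.shares_memory(a, step))  # True: newaxis made a view, not a copy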
I have data which I need to center and scale so that it is centered around the origin. Then the data needs to be rotated so that the direction of maximum variance is on the x-axis. The mean of the data and the covariance are then calculated. I need the first element of the covariance matrix to be 1. I think this is done by adjusting the scaling factor, but I can't figure out what the scaling factor should be.
To center the data I subtract the mean, and to rotate I use SVD, but the scaling is still my problem.
signature = numpy.loadtxt(name, comments = '%', usecols = (0,cols-1))
signature = numpy.transpose(signature)
#SVD to get D so that data can be scaled by 1/(highest singular value in D)
U, D, Vt = numpy.linalg.svd( signature , full_matrices=0)
cs = utils.centerscale(signature, scale=False)
signature = cs[0]
#plt.scatter(cs[0][0],cs[0][1],color='r')
#SVD so that data can be rotated so that direction of most variance is on x-axis
U, D, Vt = numpy.linalg.svd( signature , full_matrices=0)
cs = utils.centerscale(signature, center=False, scalefactor=D[0])
U, D, Vt = numpy.linalg.svd( cs[0] , full_matrices=0)
D = numpy.diag(D)
norm = numpy.dot(D,Vt)
The following are examples of results of the mean and cov of norm (the test cases use res).
**********************************************************************
Failed example:
print numpy.mean(res, axis=1)
Expected:
[ 7.52074907e-18 -6.59917722e-18]
Got:
[ -1.22008884e-17 2.41126563e-17]
**********************************************************************
Failed example:
print numpy.cov(res, bias=1)
Expected:
[[ 1.00000000e+00 9.02112676e-18]
[ 9.02112676e-18 1.40592827e-01]]
Got:
[[ 4.16666667e-03 -1.57698124e-19]
[ -1.57698124e-19 5.85803446e-04]]
**********************************************************************
1 items had failures:
2 of 4 in __main__.processfile
***Test Failed*** 2 failures.
All values are irrelevant except for the first element of the covariance matrix, which needs to be one.
I have tried looking everywhere and can't find an answer. Any help would be appreciated.
I don't know what utils.centerscale is or does, but if you want to scale a matrix by a constant factor so that the upper left term of its covariance matrix is 1, you can simply divide the matrix by the square root of the unscaled covariance term:
>>> import numpy
>>> numpy.random.seed(17)
>>> m = numpy.random.rand(5,4)
>>> m
array([[ 0.294665 , 0.53058676, 0.19152079, 0.06790036],
[ 0.78698546, 0.65633352, 0.6375209 , 0.57560289],
[ 0.03906292, 0.3578136 , 0.94568319, 0.06004468],
[ 0.8640421 , 0.87729053, 0.05119367, 0.65241862],
[ 0.55175137, 0.59751325, 0.48352862, 0.28298816]])
>>> c = numpy.cov(m,bias=1)
>>> c
array([[ 0.0288779 , 0.00524455, 0.00155373, 0.02779861, 0.01798404],
[ 0.00524455, 0.00592484, -0.00711072, 0.01006019, 0.00631144],
[ 0.00155373, -0.00711072, 0.13391344, -0.10551922, 0.00945934],
[ 0.02779861, 0.01006019, -0.10551922, 0.11250984, 0.00982862],
[ 0.01798404, 0.00631144, 0.00945934, 0.00982862, 0.01444482]])
>>> numpy.cov(m/c[0][0]**0.5, bias=1)
array([[ 1. , 0.18161135, 0.05380354, 0.96262562, 0.62276138],
[ 0.18161135, 0.20516847, -0.24623392, 0.3483699 , 0.21855613],
[ 0.05380354, -0.24623392, 4.63722877, -3.65397781, 0.32756326],
[ 0.96262562, 0.3483699 , -3.65397781, 3.89605297, 0.34035085],
[ 0.62276138, 0.21855613, 0.32756326, 0.34035085, 0.5002033 ]])
But this has the same effect as simply dividing the covariance matrix by the upper left member:
>>> (numpy.cov(m,bias=1)/numpy.cov(m,bias=1)[0][0])/(numpy.cov(m/c[0][0]**0.5, bias=1))
array([[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.]])
Depending on what you're doing, you might also be interested in numpy.corrcoef, which gives the correlation coefficient matrix instead.
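For example, on the same m as above, corrcoef normalizes every variable, not just the first:
>>> r = numpy.corrcoef(m)
>>> numpy.allclose(numpy.diag(r), 1.0)
True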