CV2 Distance with projectPoints - python

Is it possible to calculate the distance between points which I created with cv2.projectPoints?
I have two ArUco markers, and from both markers I have created points (with cv2.projectPoints) which are at a specific distance from the marker. Now I want to know: how far are these points from each other?
I know you can't give specific code without an MVP, and that isn't necessary; I only need an idea of how this can be calculated. It would be awesome if someone knows a cv2 function or a way to calculate this.
Thank you very much <3
Edit:
I generated the four matrices (starting from identity matrices) and inverted those that needed it. Code and result are below.
#T_point1_marker1 = np.linalg.inv(T_marker1_point1)
#T_marker1_cam = np.linalg.inv(T_cam_marker1)
T_point1_marker1 = np.array([
    [1., 0., 0., -0.1 ],
    [0., 1., 0., -0.05],
    [0., 0., 1.,  0.  ],
    [0., 0., 0.,  1.  ],
])
T_marker1_cam = np.array([
    [1., 0., 0.,  0.10809129],
    [0., 1., 0.,  0.03833054],
    [0., 0., 1., -0.35931477],
    [0., 0., 0.,  1.        ],
])
T_cam_marker2 = np.array([
    [1., 0., 0.,  0.09360527],
    [0., 1., 0., -0.01229168],
    [0., 0., 1.,  0.36470099],
    [0., 0., 0.,  1.        ],
])
T_marker2_point2 = np.array([
    [1., 0., 0., 0.005],
    [0., 1., 0., 0.1  ],
    [0., 0., 1., 0.   ],
    [0., 0., 0., 1.   ],
])
#Process finished with exit code 1
The thing I don't understand is this part:
T_point1_point2 = T_point1_marker1 @ T_marker1_cam @ T_cam_marker2 @ T_marker2_point2
How do I bring these four matrices together so I get T_point1_point2?
Thanks again :)

Since your graphic contains measurements of physical distance, rather than pixels, I'll assume you're asking about 3D, i.e. you want a 3D distance between those points...
You just need to define the poses of those points, relative to their markers. That is T_marker1_point1 and T_marker2_point2. Make those be pure translation, probably with Z=0 if these points are in each respective marker's plane. Literally make a 4x4 identity matrix, then stick your nominal (constructed) dimensions into the last column.
Then you need the marker poses relative to the camera, T_cam_marker1 and T_cam_marker2.
Finally you calculate
T_point1_point2 = T_point1_marker1 @ T_marker1_cam @ T_cam_marker2 @ T_marker2_point2
# where
# T_marker1_cam = np.linalg.inv(T_cam_marker1)
# and so on
The translation part of that pose matrix gives you the distance between those points. You can ignore the rotation component; it would only give you the rotation between those markers, because your points were defined as poses with the same orientation as their respective markers. Yes, orientation is silly for points, but eh...
All of that uses 4x4 matrices. Compose one from tvec (put it in the last column) and rvec (turned into a 3x3 rotation matrix using cv.Rodrigues). Decompose a 4x4 matrix into rvec and tvec accordingly (Rodrigues goes both ways).
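A minimal sketch of the whole pipeline, assuming rvec1/tvec1 and rvec2/tvec2 are the pose outputs for the two markers (e.g. from cv2.aruco.estimatePoseSingleMarkers) and reusing the point offsets from the question:
import numpy as np
import cv2

def pose_to_mat(rvec, tvec):
    # Build a 4x4 pose matrix: rotation via cv2.Rodrigues, translation in the last column.
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

T_cam_marker1 = pose_to_mat(rvec1, tvec1)
T_cam_marker2 = pose_to_mat(rvec2, tvec2)

# Point poses relative to their markers: pure translation, Z=0.
T_marker1_point1 = np.eye(4); T_marker1_point1[:3, 3] = [0.1, 0.05, 0.0]
T_marker2_point2 = np.eye(4); T_marker2_point2[:3, 3] = [0.005, 0.1, 0.0]

T_point1_point2 = (np.linalg.inv(T_marker1_point1)
                   @ np.linalg.inv(T_cam_marker1)
                   @ T_cam_marker2
                   @ T_marker2_point2)

distance = np.linalg.norm(T_point1_point2[:3, 3])  # same units as the tvecs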

Related

How to append ND numpy arrays to (N+1)D numpy array through loop?

For example, I need 30x30 numpy arrays created from images to be fed to a neural net. If I have a directory of images to predict, I should be able to loop through the directory, get the image data, and create an (n, 30, 30)-shaped np array.
This is my current method; I intend to reshape each row before feeding it to the model:
import os
import numpy as np
from PIL import Image

def get_image_vectors(path):
    img_list = os.listdir(path)
    print(img_list)
    X = np.empty((900,))
    for img_file in img_list:
        img = Image.open(os.path.join(path, img_file))
        img_grey = img.convert("L")
        resized = img_grey.resize((30, 30))
        flattened = np.array(resized.getdata())
        # print(flattened.shape)
        X = np.vstack((X, flattened))
        print(img_file, '=>', X.shape)
    return X[1:, :]
Instead of appending to an existing array, it will probably be better to use a list initially, appending to it and converting it to an array at the end, thus saving many redundant modifications of np arrays.
Here's a toy example:
import numpy as np

def get_image_vectors():
    X = []  # Create empty list
    for i in range(10):
        flattened = np.zeros(900)
        X.append(flattened)  # Append some np array to it
    return np.array(X)  # Create array from the list
With result:
array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]])
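Applied to the original function, a sketch of the same approach (assuming, as in the question, a directory of images readable by PIL):
import os
import numpy as np
from PIL import Image

def get_image_vectors(path):
    X = []  # collect rows in a plain list
    for img_file in os.listdir(path):
        img_grey = Image.open(os.path.join(path, img_file)).convert("L")
        resized = img_grey.resize((30, 30))
        X.append(np.asarray(resized.getdata()))
    return np.array(X)  # shape (n, 900); reshape to (n, 30, 30) before feeding the net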

scikit learn LDA giving unexpected results

I am attempting to classify some data with the scikit-learn LDA classifier. I'm not entirely sure what to "expect" from it, but what I am getting is weird. It seems like a good opportunity to learn about either a shortcoming of the technique or a way in which I am applying it wrong. I understand that no line could completely separate this data, but it seems that there are much "better" lines than the one it is finding. I'm just using the default options. Any thoughts on how to do this better?
I'm using LDA because it is linear in the size of my dataset, although I think a linear SVM has a similar complexity. Perhaps it would be better for such data? I will update when I have tested other possibilities.
The picture: (light blue is what my LDA classifier predicts will be dark blue)
The code:
import numpy as np
from numpy import array
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
import itertools
X = array([[ 0.23125754, 0.79170351],
[ 0.78021491, -0.24999486],
[ 0.00856446, 0.41452734],
[ 0.66381753, -0.09872504],
[-0.03178685, 0.04876317],
[ 0.65574645, -0.68214948],
[ 0.14290684, 0.38256002],
[ 0.05156987, 0.11094875],
[ 0.06843403, 0.19110019],
[ 0.24070898, -0.07403764],
[ 0.03184353, 0.4411446 ],
[ 0.58708124, -0.38838008],
[-0.00700369, 0.07540799],
[-0.01907816, 0.07641038],
[ 0.30778608, 0.30317186],
[ 0.55774143, -0.38017325],
[-0.00957214, -0.03303287],
[ 0.8410637 , 0.158594 ],
[-0.00294113, -0.00380608],
[ 0.26577841, 0.07833684],
[-0.32249375, 0.49290502],
[ 0.11313078, 0.35697211],
[ 0.41153679, -0.4471876 ],
[-0.00313315, 0.30065913],
[ 0.14344143, -0.19127107],
[ 0.04857767, 0.01339191],
[ 0.5865007 , 0.71209886],
[ 0.08157439, 0.40909955],
[ 0.72495202, 0.29583866],
[-0.09391461, 0.17976605],
[ 0.06149141, 0.79323099],
[ 0.52208024, -0.2877661 ],
[ 0.01992141, -0.00435266],
[ 0.68492617, -0.46981335],
[-0.00641231, 0.29699622],
[ 0.2369677 , 0.140319 ],
[ 0.6602586 , 0.11200433],
[ 0.25311836, -0.03085372],
[-0.0895014 , 0.45147252],
[-0.18485667, 0.43744524],
[ 0.94636701, 0.16534406],
[ 0.01887734, -0.07702135],
[ 0.91586801, 0.17693792],
[-0.18834833, 0.31944796],
[ 0.20468328, 0.07099982],
[-0.15506378, 0.94527383],
[-0.14560083, 0.72027034],
[-0.31037647, 0.81962815],
[ 0.01719756, -0.01802322],
[-0.08495304, 0.28148978],
[ 0.01487427, 0.07632112],
[ 0.65414479, 0.17391618],
[ 0.00626276, 0.01200355],
[ 0.43328095, -0.34016614],
[ 0.05728525, -0.05233956],
[ 0.61218382, 0.20922571],
[-0.69803697, 2.16018536],
[ 1.38616732, -1.86041621],
[-1.21724616, 2.72682759],
[-1.26584365, 1.80585403],
[ 1.67900048, -2.36561699],
[ 1.35537903, -1.60023078],
[-0.77289615, 2.67040114],
[ 1.62928969, -1.20851808],
[-0.95174264, 2.51515935],
[-1.61953649, 2.34420531],
[ 1.38580104, -1.9908369 ],
[ 1.53224512, -1.96537012]])
y = array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1.])
classifier = LDA()
classifier.fit(X,y)
xx = np.array(list(itertools.product(np.linspace(-4,4,300), np.linspace(-4,4,300))))
yy = classifier.predict(xx)
b_colors = ['salmon' if yyy==0 else 'deepskyblue' for yyy in yy]
p_colors = ['r' if yyy==0 else 'b' for yyy in y]
plt.scatter(xx[:,0],xx[:,1],s=1,marker='o',edgecolor=b_colors,c=b_colors)
plt.scatter(X[:,0], X[:,1], marker='o', s=5, c=p_colors, edgecolor=p_colors)
plt.show()
UPDATE: Changing from sklearn.discriminant_analysis.LinearDiscriminantAnalysis to sklearn.svm.LinearSVC, also using the default options, gives the following picture:
I think using the zero-one loss instead of the hinge loss would help, but sklearn.svm.LinearSVC doesn't seem to allow custom loss functions.
UPDATE: The loss function of sklearn.svm.LinearSVC approaches the zero-one loss as the parameter C goes to infinity. Setting C = 1000 gives me what I was originally hoping for. I'm not posting this as an answer, because the original question was about LDA.
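For reference, a sketch of that change, using the same X and y as above:
from sklearn.svm import LinearSVC

# Large C weights the hinge loss heavily, approaching the zero-one loss.
classifier = LinearSVC(C=1000)
classifier.fit(X, y)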
LDA models each class as a Gaussian, so the model for each class is determined by the class's estimated mean vector and covariance matrix.
Judging by eye alone, your blue and red classes have approximately the same mean and the same covariance, which means the two Gaussians will 'sit' on top of each other and the discrimination will be poor. It also means that the separator (the blue-pink border) will be noisy, that is, it will change a lot between random samples of your data.
By the way, your data is clearly not linearly separable, so every linear model will have a hard time discriminating it.
If you must use a linear model, try using LDA with 3 classes, such that the top-left blue blob is classified as '0', the bottom-right blue blob as '1', and the red as '2'. This way you will get a much better linear model. You can do it by preprocessing the blue class with a clustering algorithm with K=2 clusters, as in the sketch below.
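A sketch of that suggestion, assuming the X and y from the question: KMeans splits the blue class into two clusters, then a 3-class LDA is fit and its predictions are collapsed back to two classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Split the blue class (y == 0) into two clusters; the red class (y == 1) becomes class 2.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[y == 0])
y3 = np.where(y == 1, 2, 0).astype(float)
y3[y == 0] = km.labels_

classifier = LDA()
classifier.fit(X, y3)
# Collapse back: clusters 0 and 1 are both "blue" (0), class 2 is "red" (1).
pred = (classifier.predict(X) == 2).astype(float)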

Scikit-learn cross val score: too many indices for array

I have the following code
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.cross_validation import cross_val_score
#split the dataset for train and test
combnum['is_train'] = np.random.uniform(0, 1, len(combnum)) <= .75
train, test = combnum[combnum['is_train']==True], combnum[combnum['is_train']==False]
et = ExtraTreesClassifier(n_estimators=200, max_depth=None, min_samples_split=10, random_state=0)
labels = train[list(label_columns)].values
tlabels = test[list(label_columns)].values
features = train[list(columns)].values
tfeatures = test[list(columns)].values
et_score = cross_val_score(et, features, labels, n_jobs=-1)
print("{0} -> ET: {1})".format(label_columns, et_score))
Checking the shape of the arrays:
features.shape
Out[19]:(43069, 34)
And
labels.shape
Out[20]:(43069, 1)
and I'm getting:
IndexError: too many indices for array
and this relevant part of the traceback:
---> 22 et_score = cross_val_score(et, features, labels, n_jobs=-1)
I'm creating the data from Pandas dataframes. I searched here and saw some references to possible errors with this method, but I can't figure out how to correct it.
What the data arrays look like:
features
Out[21]:
array([[ 0., 1., 1., ..., 0., 0., 1.],
[ 0., 1., 1., ..., 0., 0., 1.],
[ 1., 1., 1., ..., 0., 0., 1.],
...,
[ 0., 0., 1., ..., 0., 0., 1.],
[ 0., 0., 1., ..., 0., 0., 1.],
[ 0., 0., 1., ..., 0., 0., 1.]])
labels
Out[22]:
array([[1],
[1],
[1],
...,
[1],
[1],
[1]])
When we do cross-validation in scikit-learn, the process requires an (R,)-shaped label array instead of (R,1). Although they are the same thing to some extent, their indexing mechanisms are different. So in your case, just add:
c, r = labels.shape
labels = labels.reshape(c,)
before passing it to the cross-validation function.
It seems to be fixable if you specify the target labels as a single data column from Pandas. If the target has multiple columns, I get a similar error. For example, try:
labels = train['Y']
Adding .ravel() to the Y/Labels variable passed into the formula helped solve this problem within KNN as well.
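A sketch of that fix with the variables from the question (this assumes label_columns holds a single column):
# .ravel() flattens the (n, 1) column to the (n,) shape cross_val_score expects
labels = train[list(label_columns)].values.ravel()
et_score = cross_val_score(et, features, labels, n_jobs=-1)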
Try the target
y = df['Survived']
instead. I used
y = df[['Survived']]
which made the target y a DataFrame; it seems a Series would be OK.
You might need to play with the dimensions a bit, e.g.
et_score = cross_val_score(et, features, labels, n_jobs=-1)[:,n]
or
et_score = cross_val_score(et, features, labels, n_jobs=-1)[n,:]
n being the dimension.

Convolution & Deconvolution using Scipy

I am trying to compute a deconvolution using Python. I have a signal, let's say f(t), which is convolved with a window function, say g(t). Is there some direct way to compute the deconvolution so I can get back the original signal?
For instance f(t) = exp(-t**2/3); Gaussian function
and g(t) = Trapezoidal function
Thanks in advance for your kind suggestion.
Is this an analytical or a numerical problem?
If it's numerical, use scipy.signal.deconvolve: http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.deconvolve.html
From the docs:
>>> from scipy import signal
>>> sig = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1,])
>>> filter = np.array([1,1,0])
>>> res = signal.convolve(sig, filter)
>>> signal.deconvolve(res, filter)
(array([ 0., 0., 0., 0., 0., 1., 1., 1., 1.]),
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]))
Otherwise, if you want an analytic solution, you might be using the wrong tool.
Additionally, just a tip for future googling: when you're talking about convolution, the action is usually/often "convolved", not "convoluted"; see https://english.stackexchange.com/questions/64046/convolve-vs-convolute
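Applied to the question's setup, a minimal numerical sketch; note that signal.deconvolve performs polynomial division, so the window's first sample must be nonzero:
import numpy as np
from scipy import signal

t = np.linspace(-5, 5, 101)
f = np.exp(-t**2 / 3)               # the original Gaussian signal
g = np.array([0.5, 1.0, 1.0, 0.5])  # a small trapezoidal window (g[0] != 0)
res = signal.convolve(f, g)         # the convolved measurement
recovered, remainder = signal.deconvolve(res, g)
# recovered approximates f up to floating-point noise; remainder should be ~0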

List as element of list of lists or multidimensional lists as a grid

I am trying to create a lat/lon grid that contains an array of found indices where two conditions are met for a lat/lon combination. This approach might be too complicated, but using a meshgrid or numpy broadcasting also failed. If there is a better approach, feel free to share your knowledge. :-)
Round lat/lon values to gridsize resolution of 1° but retain full length of array:
x = np.around(lon, decimals=0)
y = np.around(lat, decimals=0)
The arrays consist of longitude/latitude values from -180° to 180° and -82° to 82°; duplicate pairs are possible.
Check for each combination of lat/lon how many measurements are available for 1°/1° grid point:
a = np.arange(-180, 181)
b = np.arange(-82, 83)
totalgrid = [[0 for i in range(len(b))] for j in range(len(a))]
for d1 in range(len(a)):
    for d2 in range(len(b)):
        totalgrid[d1][d2] = np.where((x == a[d1]) & (y == b[d2]))[0]
This method fails and returns only a list of lists with empty arrays. I can't figure out why it's not working properly.
Replacing the last line by:
totalgrid[d1][d2]=np.where((x==a[0])&(y==b[0]))[0]
returns all found indices from lon/lat that are present at -180°/-82°. Unfortunately it takes a while. Am I missing a for loop somewhere?!
The Problem in more detail:
@askewchan
Unfortunately this one does not solve my original problem.
As expected the result represents the groundtrack quite well.
But besides the fact that I need the total number of points for each grid point, I also need each single index of lat/lon combinations in the lat/lon array for further computations.
Let's assume I have an array
lat(100000L,), lon(100000L,) and a third one array(100000L,)
which corresponds to the measurement at each point. I need every index of all 1°/1° combinations in lat/lon, to check that index in the array(100000L,) to see if a condition is met. Now let's assume that the indices [10000, 10001, 10002, .., 10025] of lat/lon fall on the same grid point. For those indices I need to check whether array[10000, 10001, 10002, .., 10025] meets a condition, i.e. np.where(array == 0). With cts.nonzero() I only get the index in the histogram, and all information about which points contributed to the value of the histogram is lost. Hopefully you get what my initial problem was.
Not sure if I understand the goal here, but you want to count how many lat/lon pairs you have in each 1° section? This is what a histogram does:
lon = np.random.random(5000)*2*180 - 180
lat = np.random.random(5000)*2*82 - 82
a = np.arange(-180,181)
b = np.arange(-82,83)
cts, xs, ys = np.histogram2d(lon, lat, (a, b))
#(array([[ 0., 0., 1., ..., 0., 0., 0.],
# [ 0., 2., 0., ..., 0., 0., 1.],
# [ 0., 0., 0., ..., 0., 1., 0.],
# ...,
# [ 0., 1., 0., ..., 0., 0., 0.],
# [ 0., 0., 0., ..., 0., 0., 0.],
# [ 0., 0., 0., ..., 0., 0., 0.]]),
The indices where you have a nonzero count would be at:
cts.nonzero()
#(array([ 0, 0, 0, ..., 359, 359, 359]),
# array([ 2, 23, 25, ..., 126, 140, 155]))
You can plot it too:
from matplotlib import pyplot
pyplot.imshow(cts, extent=(-82, 82, -180, 180))
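The follow-up in the question asks for the individual indices in each grid cell, which the histogram discards. A minimal sketch that keeps them, assuming lon and lat as above:
import numpy as np
from collections import defaultdict

x = np.around(lon).astype(int)
y = np.around(lat).astype(int)
cells = defaultdict(list)
for i, (xi, yi) in enumerate(zip(x, y)):
    cells[(xi, yi)].append(i)  # every measurement index for that 1°/1° cell
# e.g. array[cells[(-180, -82)]] can then be tested against any condition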
