Incorrect estimate_normals with open3d? - python

I am trying to calculate the normals of a point cloud formed by three planes, each aligned with an axis.
In MATLAB the function pcnormals gives me a coherent result, but when I try to do the same with estimate_normals from open3d the result is incorrect.
The code is here:
import numpy as np
from open3d import *
# read the cloud and estimate normals from the 25 nearest neighbours
pcd = read_point_cloud(r"D:\Artificial.txt", format='xyz')
estimate_normals(pcd, search_param=KDTreeSearchParamKNN(knn=25))
# save points and normals side by side
x = np.concatenate((np.asarray(pcd.points), np.asarray(pcd.normals)), axis=1)
np.savetxt(r"D:\ArtificialN_python.txt", x, delimiter=',')
I have also tried different knn values and search_param settings, but the result is similar.
I attach images of the clouds coloured according to the third component of the normals (red = horizontal, green = inclined), as calculated with MATLAB and with Python.
MATLAB result:
Python result:
Does anybody know what this might be due to?
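For reference, here is the same pipeline written against the current, namespaced open3d API (a minimal sketch, assuming open3d >= 0.10; the orientation call is optional and only makes the sign of the normals consistent):
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud(r"D:\Artificial.txt", format='xyz')
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=25))
# normals estimated by PCA have an arbitrary sign; orient them consistently
pcd.orient_normals_consistent_tangent_plane(25)
x = np.concatenate((np.asarray(pcd.points), np.asarray(pcd.normals)), axis=1)
np.savetxt(r"D:\ArtificialN_python.txt", x, delimiter=',')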

Related

Convert RGB png and depth txt to point cloud

I have a series of RGB files in png format, as well as the corresponding depth files in txt format, which can be loaded with np.loadtxt. How can I merge these two files into a point cloud using open3d?
I followed the procedure from obtain point cloud from depth numpy array using open3d - python, but the result is not recognizable to a human.
The example is shown here:
the source png:
the pcd result:
You can get the source files from this link (Google Drive) to reproduce my result.
By the way, the depth and RGB images are not registered.
Thanks.
I had to play a bit with the settings and data, and mainly used the answer from your SO link.
import numpy as np
import open3d as o3d

color = o3d.io.read_image("a542c.png")  # not used below, kept from the original attempt
depth = np.loadtxt("a542d.txt")

# build one vertex per depth pixel: (row, column, depth)
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        vertices.append((float(x), float(y), depth[x][y]))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.asarray(vertices))
pcd.estimate_normals()
pcd = pcd.normalize_normals()
o3d.visualization.draw_geometries([pcd])
However, if you keep the code as provided, the whole scene looks very weird and unfamiliar. That is because your depth file contains values between 0 and almost 2.5 m.
I introduced a cut-off at 500 or 1000 mm and removed all 0s, as suggested in the other answer. Additionally, I flipped the x-axis (float(-x) instead of float(x)) to resemble your photo.
# ...
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        # keep only points closer than the cut-off and drop zeros
        if 0 < depth[x][y] < 500:
            vertices.append((float(-x), float(y), depth[x][y]))
For a good perspective I had to rotate the view manually. open3d probably provides methods to do this automatically (I quickly tried pcd.transform() from your SO link above; it can help you if needed).
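For example, the flip transform used in the open3d tutorials can be applied before viewing (a sketch; the 4x4 matrix mirrors the y and z axes so the cloud is not shown upside-down):
# mirror y and z before visualizing
pcd.transform([[1, 0, 0, 0],
               [0, -1, 0, 0],
               [0, 0, -1, 0],
               [0, 0, 0, 1]])
o3d.visualization.draw_geometries([pcd])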
Results
500 mm cut-off and 1000 mm cut-off:
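If you also want the png colours on the open3d cloud, something along these lines should work (a sketch only, reusing depth and pcd from the snippets above; since the depth and RGB images are not registered, the per-pixel mapping is approximate):
import numpy as np
import open3d as o3d

rgb = np.asarray(o3d.io.read_image("a542c.png"))
colors = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        if 0 < depth[x][y] < 500:
            # same filter as the vertex loop, so colours stay aligned with points
            colors.append(rgb[x, y, :3] / 255.0)
pcd.colors = o3d.utility.Vector3dVector(np.asarray(colors))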
I used laspy instead of open3d because I wanted to give some colours to your image:
import imageio
import numpy as np
import matplotlib.pyplot as plt

# first read the image for RGB values
image = imageio.imread(".../a542c.png")
# load the depth file
depth = np.loadtxt("/home/shaig93/Documents/internship_FWF/a542d.txt")
# create fake x, y coordinates with meshgrid
xv, yv = np.meshgrid(np.arange(400), np.arange(640), indexing='ij')
# save_las is a function based on laspy that was provided to me by my supervisor
save_las("fn.laz", image[:400, :, 0].flatten(), np.c_[yv.flatten(), xv.flatten(), depth.flatten()], cmap=plt.cm.magma_r)
and the result is this. As you can see, objects are visible from the front.
However, from the side they are not easy to distinguish.
This makes me think that your depth file is not that good.
Another idea would be to get rid of the 0 values in your depth file, so that you get a point cloud without the wall-like structure in front. But that still does not solve the depth issue, of course.
P.S. I know this is not a proper answer, but I hope it helps in identifying the problem.

Why does scipy.signal.correlate2d fail to work in this example?

I am trying to cross-correlate two images, and thus locate the template image on the first image, by finding the maximum correlation value.
I drew an image with some random shapes (first image) and cut out one of these shapes (template). Now, when I use scipy's correlate2d and locate the points of maximum correlation, several points appear. To my knowledge, shouldn't there be only one point where the overlap is at its maximum?
The idea behind this exercise is to take some part of an image, and then correlate that to some previous images from a database. Then I should be able to locate this part on the older images based on the maximum value of correlation.
My code looks something like this:
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np
import scipy.signal as sp

img = Image.open('test.png').convert('L')
img = np.asarray(img)
temp = Image.open('test_temp.png').convert('L')
temp = np.asarray(temp)

corr = sp.correlate2d(img, temp, boundary='symm', mode='full')
plt.imshow(corr, cmap='hot')
plt.colorbar()

# find all coordinates where the correlation is maximal
coordin = np.where(corr == np.max(corr))
listOfCoordinates = list(zip(coordin[1], coordin[0]))
for i in range(len(listOfCoordinates)):  # plot all those coordinates
    plt.plot(listOfCoordinates[i][0], listOfCoordinates[i][1], 'c*', markersize=5)
This yields the figure:
Cyan stars are points with max correlation value (255).
I expect only one point in "corr" to have the maximum correlation value, but several appear. I have tried different correlation modes, but to no avail.
This is the test image I use when correlating.
This is the template, cut from the original image.
Can anyone give some insight to what I might be doing wrong here?
You are probably overflowing the numpy type uint8.
Try using:
img = np.asarray(img,dtype=np.float32)
temp = np.asarray(temp,dtype=np.float32)
Untested.
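A minimal demonstration of the wrap-around (hypothetical values):
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([100], dtype=np.uint8)
print(a + b)  # [44], because uint8 arithmetic wraps modulo 256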
Applying
img = img - img.mean()
temp = temp - temp.mean()
before computing the 2D cross-correlation corr should give you the expected result: without the mean subtraction, plain cross-correlation peaks wherever the image itself is bright, not where the template matches best.
Cleaning up the code, for a full example:
from imageio import imread
from matplotlib import pyplot as plt
import scipy.signal as sp
import numpy as np

img = imread('https://i.stack.imgur.com/JL2LW.png', pilmode='L')
temp = imread('https://i.stack.imgur.com/UIUzJ.png', pilmode='L')

# correlate the zero-mean image with the zero-mean template
corr = sp.correlate2d(img - img.mean(),
                      temp - temp.mean(),
                      boundary='symm',
                      mode='full')

# coordinates where there is a maximum correlation
max_coords = np.where(corr == np.max(corr))
plt.plot(max_coords[1], max_coords[0], 'c*', markersize=5)
plt.imshow(corr, cmap='hot')
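If the goal is to locate the template in the original image, note that with mode='full' the peak is offset by the template size. A short sketch to recover the template's top-left corner (reusing the arrays above):
# a peak at (i, j) in the 'full' correlation corresponds to the
# template's top-left corner at (i - temp rows + 1, j - temp cols + 1)
y_peak, x_peak = max_coords[0][0], max_coords[1][0]
top_left = (y_peak - temp.shape[0] + 1, x_peak - temp.shape[1] + 1)
print(top_left)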

How to produce the following images (gabor patches)

I am trying to create four gabor patches, very similar to those below.
I don't need them to be identical to the pictures below, but similar.
Despite a bit of tinkering, I have been unable to reproduce these images...
I believe they were created in MATLAB originally. I don't have access to the original MATLAB code.
I have the following code in python (2.7.10):
import numpy as np
from scipy.misc import toimage # One can also use matplotlib*
data = gabor_fn(sigma = ???, theta = 0, Lambda = ???, psi = ???, gamma = ???)
toimage(data).show()
*graphing a numpy array with matplotlib
gabor_fn, from here, is defined below:
import numpy

def gabor_fn(sigma, theta, Lambda, psi, gamma):
    sigma_x = sigma
    sigma_y = float(sigma) / gamma
    # Bounding box: 3 standard deviations in each direction
    nstds = 3
    xmax = max(abs(nstds * sigma_x * numpy.cos(theta)), abs(nstds * sigma_y * numpy.sin(theta)))
    xmax = numpy.ceil(max(1, xmax))
    ymax = max(abs(nstds * sigma_x * numpy.sin(theta)), abs(nstds * sigma_y * numpy.cos(theta)))
    ymax = numpy.ceil(max(1, ymax))
    xmin = -xmax
    ymin = -ymax
    (y, x) = numpy.meshgrid(numpy.arange(ymin, ymax + 1), numpy.arange(xmin, xmax + 1))
    # Rotation
    x_theta = x * numpy.cos(theta) + y * numpy.sin(theta)
    y_theta = -x * numpy.sin(theta) + y * numpy.cos(theta)
    # Gaussian envelope times sinusoidal carrier
    gb = numpy.exp(-.5 * (x_theta**2 / sigma_x**2 + y_theta**2 / sigma_y**2)) * numpy.cos(2 * numpy.pi / Lambda * x_theta + psi)
    return gb
As you may be able to tell, the only difference (I believe) between the images is contrast. So gabor_fn would likely need to be altered to allow for this (unless I misunderstand one of the params)... I'm just not sure how.
UPDATE:
from math import pi
from matplotlib import pyplot as plt

data = gabor_fn(sigma=5., theta=pi/2., Lambda=12.5, psi=90, gamma=1.)
unit = 1  # from left to right, unit was set to 1, 3, 7 and 9
bound = 0.0009 / unit
fig = plt.imshow(data,
                 cmap='gray',
                 interpolation='none',
                 vmin=-bound,
                 vmax=bound)
plt.axis('off')
The problem you are having is a visualization problem (although I also think you are choosing too large parameters).
By default, matplotlib and scipy's toimage use bilinear (or trilinear) interpolation, depending on your matplotlib configuration. That is why your image looks so smooth: your pixel values are being interpolated, and you are not displaying the raw kernel you just calculated.
Try using matplotlib with no interpolation:
from matplotlib import pyplot as plt
plt.imshow(data, 'gray', interpolation='none')
plt.show()
For the following parameters:
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=25.,psi=90,gamma=1.)
You get this output:
If you reduce Lambda to 15, you get something like this:
Additionally, the sigma you choose changes the strength of the smoothing. Adding the parameters vmin=-1 and vmax=1 to imshow (similar to what @kazemakase suggested) will give you the desired contrast.
Check this guide for sensible values for (and ways to use) Gabor kernels:
http://scikit-image.org/docs/dev/auto_examples/plot_gabor.html
It seems like toimage scales the input data so that the min/max values are mapped to black/white.
I do not know what amplitudes to reasonably expect from gabor patches, but you should try something like this:
toimage(data, cmin=-1, cmax=1).show()
This tells toimage what range your data is in. You can try to play around with cmin and cmax, but make sure they are symmetric (i.e. cmin=-x, cmax=x) so that a value of 0 maps to grey.
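Note that toimage has since been removed from SciPy. On a recent SciPy, matplotlib gives the same symmetric mapping (a sketch, reusing the cmin/cmax values above as vmin/vmax):
import matplotlib.pyplot as plt

# values at -1/1 map to black/white, 0 maps to mid-grey
plt.imshow(data, cmap='gray', vmin=-1, vmax=1, interpolation='none')
plt.axis('off')
plt.show()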

Cartesian projection issue in a FITS image through PyFITS / AstroPy

I've looked and looked for a solution to this problem and am turning up nothing.
I'm generating rectangular FITS images through matplotlib and subsequently applying WCS coordinates to them using AstroPy (or PyFITS). My images are in galactic latitude and longitude, so the header keywords appropriate for my maps should be GLON-CAR and GLAT-CAR (for Cartesian projection). I've looked at other maps that use this same map projection in SAO DS9 and the coordinates work great... the grid is perfectly orthogonal as it should be. The FITS standard projections can be found here.
But when I generate my maps, the coordinates are not at all Cartesian. Here's a side-by-side comparison of my map (left) and another reference map of roughly the same region (right). Both are listed GLON-CAR and GLAT-CAR in the FITS header, but mine is screwy when looked at in SAO DS9 (note that the coordinate grid is something SAO DS9 generates based on the data in the FITS header, or at least stored somewhere in the FITS file):
This is problematic, because the coordinate-assigning algorithm will assign incorrect coordinates to each pixel if the projection is wrong.
Has anyone encountered this, or know what could be the problem?
I've tried applying other projections (just to see how they perform in SAO DS9) and they come out fine... but my Cartesian and Mercator projections do not come out with the orthogonal grid like they should.
I can't believe this would be a bug in AstroPy, but I can't find any other cause... unless my arguments in the header are incorrectly formatted, but I still don't see how that could cause the problem I'm experiencing. Or would you recommend using something else? (I've looked at matplotlib basemap but have had some trouble getting that to work on my computer).
My header code is below:
from __future__ import division
import numpy as np
from astropy.io import fits as pyfits  # or use 'import pyfits', same thing
#(lots of code in between: defining variables and simple calculations...
#probably not relevant)
header['BSCALE'] = (1.00000, 'REAL = TAPE*BSCALE + BZERO')
header['BZERO'] = (0.0)
header['BUNIT'] = ('mag ', 'UNIT OF INTENSITY')
header['BLANK'] = (-100.00, 'BLANK VALUE')
header['CRVAL1'] = (glon_center, 'REF VALUE POINT DEGR') #FIRST COORDINATE OF THE CENTER
header['CRPIX1'] = (center_x+0.5, 'REF POINT PIXEL LOCATION') ## REFERENCE X PIXEL
header['CTYPE1'] = ('GLON-CAR', 'COORD TYPE : VALUE IS DEGR')
header['CDELT1'] = (-glon_length/x_length, 'COORD VALUE INCREMENT WITH COUNT DGR') ### degrees per pixel
header['CROTA1'] = (0, 'CCW ROTATION in DGR')
header['CRVAL2'] = (glat_center, 'REF VALUE POINT DEGR') #Y COORDINATE OF THE CENTER
header['CRPIX2'] = (center_y+0.5, 'REF POINT PIXEL LOCATION') #Y REFERENCE PIXEL
header['CTYPE2'] = ('GLAT-CAR', 'COORD TYPE: VALUE IS DEGR') # WAS CAR OR TAN
header['CDELT2'] = (glat_length/y_length, 'COORD VALUE INCREMENT WITH COUNT DGR') #degrees per pixel
header['CROTA2'] = (rotation, 'CCW ROTATION IN DEGR') #NEGATIVE ROTATES CCW around origin (bottom left).
header['DATAMIN'] = (data_min, 'Minimum data value in the file')
header['DATAMAX'] = (data_max, 'Maximum data value in the file')
header['TELESCOP'] = ("Produced from 2MASS")
pyfits.update(filename, map_data, header)
Thanks for any help you can provide.
In the modern definition of the -CAR projection (from Calabretta et al.), the GLON-CAR/GLAT-CAR projection only produces a rectilinear grid if CRVAL2 is set to zero. If CRVAL2 is not zero, then the grid is curved (this has nothing to do with Astropy). You can try to fix this by adjusting CRVAL2 and CRPIX2 so that CRVAL2 is zero. Does this help?
Just to clarify what I mean, try, after your code above, and before writing out the file:
header['CRPIX2'] -= header['CRVAL2'] / header['CDELT2']
header['CRVAL2'] = 0.
Any luck?
If you look at the header for the 'reference' file you looked at, you'll see that CRVAL2 is zero there. Just to be clear, there's nothing wrong with CRVAL2 being non-zero, but the grid is then no longer rectilinear.
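One quick way to see the curvature is to build a WCS from the header and convert a horizontal row of pixels to world coordinates (a sketch, assuming the header object from the question; with a non-zero CRVAL2 the latitudes vary along the row):
from astropy.wcs import WCS
import numpy as np

w = WCS(header)
# sample a horizontal row of pixels
x_pix = np.arange(0, 100, 10)
y_pix = np.full_like(x_pix, 50)
lon, lat = w.wcs_pix2world(x_pix, y_pix, 0)
print(lat)  # not constant when the grid is curved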

Python OpenCV image interpolation inaccuracy

I am familiar with using OpenCV through the Python interface, but while using the image interpolation facilities for a somewhat non-standard problem requiring a good deal of accuracy, I noticed some unexpected inaccuracy in the results. The code below illustrates my issue. Any ideas? Am I just trying to use the interpolater outside of its design accuracy?
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Source gradient image from 0 to 255
src = np.atleast_2d(np.linspace(0, 255, 10))
# Set up to interpolate from first pixel value to last pixel value
map_x_32 = np.linspace(0, 9, 101)
map_x_32 = np.atleast_2d(map_x_32).astype('float32')
map_y_32 = map_x_32 * 0
# Interpolate using OpenCV
output = cv2.remap(src, map_x_32, map_y_32, cv2.INTER_LINEAR)
# Truth
output_truth = np.atleast_2d(np.linspace(0, 255, 101))
interp_error = output - output_truth
plt.plot(interp_error[0])
I have experienced the same inaccuracy. As far as I know, OpenCV's remap uses fixed-point arithmetic internally, which quantizes the interpolation coefficients. scipy.ndimage.map_coordinates is much more accurate, but in my case it was also 5x slower.
You could use it in this case as:
import scipy.ndimage

# Interpolate using Scipy; map_coordinates expects (row, col) order, hence y first
xy = np.vstack((map_y_32[np.newaxis, :, :], map_x_32[np.newaxis, :, :]))
output_scipy = scipy.ndimage.map_coordinates(src, xy, order=1)
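As a quick check against the ground truth from the question (reusing output, output_scipy and output_truth from above):
# maximum absolute error of each interpolator against the analytic ramp
err_cv = np.abs(output - output_truth).max()
err_scipy = np.abs(output_scipy - output_truth).max()
print(err_cv, err_scipy)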
