How to fix misalignment between image and plot in Python?

My goal is to show a binary image and then plot the boundary contours as lines overlaying the image. If I do this and export the result as a PDF, I see a progressively worsening misalignment between the image and contours as one moves across the image from bottom left. So it seems like there is a multiplicative error in the position of either the background image or the contours.
I think the issue is caused by the PDF renderer. If I output the result in PNG with a very high DPI, I can remove the problem, but I would prefer PDF for other reasons. Does anyone know if there is a setting I can change to make the PDF render correctly?
Here is an example and the resulting image. You can see that the bottom left corner has good alignment between image and contour and the top right is the worst.
import numpy as np
import numpy.matlib
import matplotlib.pyplot as plt
import cv2
# Make a test image
img = np.zeros((100,100), dtype=np.uint8)
img[20:99,1:80] = 1
img = np.matlib.repmat(img, 9, 6)
# Extract contours
cntrs, hier = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
# Make overlay
fig = plt.figure(figsize=(6,9), dpi=300)
ax = fig.add_subplot()
ax.imshow(img, interpolation='none', cmap=plt.cm.gray)
for cntr in cntrs:
    x = np.append(cntr[:, 0, 0], cntr[0, 0, 0])
    y = np.append(cntr[:, 0, 1], cntr[0, 0, 1])
    ax.plot(x, y, c='r', linewidth=0.5, alpha=0.7)
ax.axis('off')
# Save overlay
plt.savefig('test.pdf', dpi=fig.dpi)

You can use the pgf backend instead of the default pdf backend:
plt.savefig('test.pdf', dpi=fig.dpi, backend='pgf')
This gives a correct pdf that is identical to the png file.
The reason for the mismatch is the different scaling of the image: while the red line positions differ between the pdf and pgf backends by at most 2.6 µm (i.e. not visually discernible), the image sizes differ by about 0.3 mm:
pdf: 115.358 x 172.861 mm, with the bottom left corner at 20.574 / 28.575 mm
pgf: 115.057 x 172.606 mm, with the bottom left corner at 20.574 / 28.606 mm

A dirty workaround would be to blow up the figsize. I introduced a scaling factor for this purpose at the beginning of the script. I also increased the original linewidth to 0.6 because it looked a bit nicer. The resulting .pdf looks pretty good.
import numpy as np
import matplotlib.pyplot as plt
import numpy.matlib
import cv2
scale = 15
# Make a test image
img = np.zeros((100,100), dtype=np.uint8)
img[20:99,1:80] = 1
img = np.matlib.repmat(img, 9, 6)
# Extract contours
cntrs, hier = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
# Make overlay
fig = plt.figure(figsize=(6*scale,9*scale), dpi=300//scale)
ax = fig.add_subplot()
ax.imshow(img, interpolation='none', cmap=plt.cm.gray)
for cntr in cntrs:
    x = np.append(cntr[:, 0, 0], cntr[0, 0, 0])
    y = np.append(cntr[:, 0, 1], cntr[0, 0, 1])
    ax.plot(x, y, c='r', linewidth=0.6*scale, alpha=0.7)
ax.axis('off')
# Save overlay
plt.savefig('test.pdf')
Here is the upper right corner:

Related

How to plot a parallelepiped in matplotlib

I would like to draw a parallelepiped in Python using matplotlib, centred at (0,0,0), with the top face in a different color (or each face in a different parametrized color), and with these dimensions:
L = 1
l = 0.7
s = 0.4
This is the code I developed to draw a cube with the same face color.
import matplotlib.pyplot as plt
import numpy as np
# Create axis
axes = [5, 5, 5]
# Create Data
data = np.ones(axes, dtype=bool)
# Control transparency
alpha = 0.9
# Control colour
colors = np.empty(axes + [4], dtype = np.float32)
colors[:] = [1, 0, 0, alpha] # red
# Plot figure
fig2 = plt.figure()
ax = fig2.add_subplot(111, projection='3d')
ax.voxels(data, facecolors=colors)
Any suggestion on how to modify it? Considering that I would like to rotate it later with a rotation matrix/quaternion operator, it would be useful to define the coordinates of the vertices or of some key points of the parallelepiped.
Thank you all!
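One possible direction, sketched under the assumption that defining the eight vertices explicitly is acceptable (this also makes a later rotation easy, since you can multiply the vertex array by a rotation matrix before building the faces): build the vertices of a box centred at the origin from L, l, s and draw its faces with Poly3DCollection, giving the top face its own colour. This is only an illustration, not a drop-in replacement for the voxel code above.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

L, l, s = 1.0, 0.7, 0.4

# 8 vertices of a box centred at (0, 0, 0)
signs = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
verts = signs * np.array([L, l, s]) / 2.0

# faces as lists of vertex indices (each face is a quad, listed in order around the face)
faces = [[0, 1, 3, 2], [4, 5, 7, 6],   # x = -L/2, x = +L/2
         [0, 1, 5, 4], [2, 3, 7, 6],   # y = -l/2, y = +l/2
         [0, 2, 6, 4], [1, 3, 7, 5]]   # z = -s/2, z = +s/2 (top)

colors = ['red'] * 5 + ['blue']        # top face in a different colour

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(Poly3DCollection([verts[f] for f in faces],
                                     facecolors=colors, edgecolors='k', alpha=0.9))
ax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.set_zlim(-1, 1)
plt.show()
Because the vertices are explicit, a rotation would just be verts @ R.T (with R a 3x3 rotation matrix) before the faces are assembled.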

Plotting border of masked array with plt.imshow

I am currently plotting this image using plt.imshow and a masked array to draw the figure in blue. I would like to remove the inner part of the figure and keep just the border. I tried fill=False, but that is not an argument for plt.imshow.
Code:
#img comes from a raster using GDAL
img = np.dstack((band1, band2, band3))
max = img.max()
img = img / max
one = f.add_subplot(1, 2, 1)
one.set_title('Polygon ID: '+index)
plt.imshow(img, cmap=plt.cm.binary)
plt.imshow(masked_polygon, 'jet', interpolation='none', alpha=0.3)
plt.axis('off')
Result of plot:
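One possible way to keep only the border (a sketch, assuming the polygon is available as a boolean 2D array, here called poly_mask, from which masked_polygon was built; img is the RGB stack from the question): erode the mask by one pixel and keep only the pixels that disappear, which form a one-pixel outline.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

# poly_mask: boolean 2D array of the polygon (assumed available)
border = poly_mask & ~ndimage.binary_erosion(poly_mask)   # 1-pixel-wide outline
border_masked = np.ma.masked_where(~border, border)       # hide everything but the outline

plt.imshow(img, cmap=plt.cm.binary)
plt.imshow(border_masked, 'jet', interpolation='none', alpha=0.3)
plt.axis('off')
plt.show()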

Filtering image in opencv with histogram projection - Mask problem

I am making an algorithm to detect license plates, specifically the character segmentation part. For this I use a histogram projection.
My idea is to use this histogram projection to
Clean edges
Segment the characters
The problem is that when I apply the histogram projection a second time, after applying a filter, the result does not show up correctly (see the last image).
Example of cleaning the edges, starting with this binary image:
I have this algorithm that cleans the edges of a binary image:
from matplotlib import pyplot as plt
import pylab
import numpy as np
img = binary_image # input image
(rows,cols)=img.shape
h_projection = np.array([ x/rows for x in img.sum(axis=0)])
threshold = 255 - 60
print("we will use threshold {} for horizontal".format(threshold))
# select the black areas
black_areas = np.where(h_projection > threshold)
fig = plt.figure(figsize=(16,8))
fig.add_subplot(121)
for j in black_areas:
    img[:, j] = 0
    plt.plot((j, j), (0, 1), 'g-')
plt.plot(range(cols), h_projection.T)
v_projection = np.array([ x/cols for x in img.sum(axis=1)])
threshold = 255 - 60
print("we will use threshold {} for vertical".format(threshold))
black_areas = np.where(v_projection > threshold)
fig.add_subplot(122)
for j in black_areas:
    img[j, :] = 0
    plt.plot((0, 1), (j, j), 'g-')
plt.plot(v_projection, range(rows))
plt.show()
# obscurate areas on the image
plt.figure(figsize=(16,12))
plt.subplot(211)
plt.title("Image with the projection mask")
plt.imshow(img)
And the output:
It does this very well (the threshold could be more specific, but I can't tune it further because different images are loaded for a neural network).
Now if I want to apply another histogram projection for character segmentation:
from matplotlib import pyplot as plt
import pylab
import numpy as np
#input image will be the same img variable.
(rows,cols)=img.shape
h_projection = np.array([ x/rows for x in img.sum(axis=0)])
print(np.min(h_projection))
print(np.max(h_projection))
threshold = (np.max(h_projection) - np.min(h_projection)) / 4
print("we will use threshold {} for horizontal".format(threshold))
# select the black areas
black_areas = np.where(h_projection < threshold)
fig = plt.figure(figsize=(16,8))
fig.add_subplot(121)
for j in black_areas:
    img[:, j] = 1
    plt.plot((j, j), (0, 1), 'g-')
plt.plot(range(cols), h_projection.T)
v_projection = np.array([ x/cols for x in img.sum(axis=1)])
threshold = (np.max(v_projection) - np.min(v_projection)) / 4
print("we will use threshold {} for vertical".format(threshold))
black_areas = np.where(v_projection < threshold)
fig.add_subplot(122)
for j in black_areas:
    img[j, :] = 0
    plt.plot((0, 1), (j, j), 'g-')
plt.plot(v_projection, range(rows))
plt.show()
# obscurate areas on the image
plt.figure(figsize=(16,12))
plt.subplot(211)
plt.title("Image with the projection mask")
plt.imshow(img)
# erode the features
import scipy.ndimage
plt.subplot(212)
plt.title("Image after erosion (suggestion)")
eroded_img = scipy.ndimage.morphology.binary_erosion(img, structure=np.ones((1,1))).astype(img.dtype)
plt.imshow(eroded_img)
plt.show()
It will show something like this:
As you can see in the last image, when the erosion is applied, it seems that it does not take into account the black areas from the previous mask. Am I marking some areas wrong?

How do I get a white border in a figure that has been plotted in python?

I have written the following code that calculates the orientation of a blob using eigenvalues. When the orientation is determined, the function "straighten_up" straightens the blob out.
The only thing I'm missing to be fully satisfied is a 1 px white border in the second output figure, between the black area and the green area. How can I do this?
I'm using a mask image as input:
Code:
import numpy as np
import matplotlib.pyplot as plt
import cv2
img = cv2.imread('input_image.png',100)
edges = cv2.Canny(img,0,255) #searching for a border
# compute the orientation of a blob
img = edges
y, x = np.nonzero(img) # Find the index of the white pixels
x = x - np.mean(x) #The average of an array of elements
y = y - np.mean(y)
coords = np.vstack([x, y])
cov = np.cov(coords) #determine covariance matrix
evals, evecs = np.linalg.eig(cov) #eigenvectors
sort_indices = np.argsort(evals)[::-1] #Sort Eigenvalues in decreasing order
x_v1, y_v1 = evecs[:, sort_indices[0]]
x_v2, y_v2 = evecs[:, sort_indices[1]]
scale = 30
plt.plot([x_v1*-scale*2, x_v1*scale*2],  # plot to show the eigenvectors
         [y_v1*-scale*2, y_v1*scale*2], color='red')
plt.plot([x_v2*-scale, x_v2*scale],
         [y_v2*-scale, y_v2*scale], color='blue')
plt.plot(x, y, 'k.')
plt.axis('equal')
plt.gca().invert_yaxis()
plt.show()
def straighten_up(x_v1, y_v1, coords):
    theta = np.arctan((x_v1)/(y_v1))
    rotation_mat = np.matrix([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    transformed_mat = rotation_mat * coords
    x_transformed, y_transformed = transformed_mat.A
    fig, ax = plt.subplots(nrows=1, ncols=1)
    ax = fig.add_subplot(1, 1, 1)  # nrows, ncols, index
    ax.set_facecolor((1.0, 0.47, 0.42))
    plt.plot(x_transformed, y_transformed, "black")
straighten_up(x_v1,y_v1,coords)
plt.show()
with output:
Your x_transformed and y_transformed are the x and y coordinates of the rotated border. So you can draw them e.g. with plt.scatter. This draws dots (the third parameter is the size) on these x,y positions. Use zorder to make sure the scatter dots are not hidden by the previous parts of the plot.
The following code does just that:
fig, ax = plt.subplots(nrows=1, ncols=1)
ax = fig.add_subplot(1, 1, 1) # nrows, ncols, index
ax.set_facecolor('fuchsia')
plt.axis('equal')
plt.plot(x_transformed, y_transformed, c="lime")
plt.scatter(x_transformed, y_transformed, 1, c="white", zorder=3)
plt.show()
As you notice, there is another problem: the plot of the filled figure isn't similar to your input image. What is happening is that plot draws lines from (x[0], y[0]) to (x[1], y[1]) to (x[2], y[2]), and so on. As your x and y are only the border points, not ordered as a polygon, it is more complicated to get a correctly filled polygon. For an arbitrary input image you can have many borders, which can form polygons with holes and islands and which can touch the image edges.
To properly get the interior points, you might get y, x = np.nonzero(img) from the original image (instead of only the edges), then do the same shift subtracting the mean of the edges, and use the same transformation matrix.
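A minimal sketch of that idea, reusing names from the question; here mask stands for the original mask image (as read with cv2.imread, before it is overwritten with the Canny edges), and edges, x_v1, y_v1 are the variables computed above. This is only an illustration of the approach, not tested code:
import numpy as np
import matplotlib.pyplot as plt

# interior points come from the original mask, not from the Canny edges
y_in, x_in = np.nonzero(mask)

# shift by the mean of the *edge* pixels so both point sets share the same origin
y_e, x_e = np.nonzero(edges)
x_in = x_in - np.mean(x_e)
y_in = y_in - np.mean(y_e)

# apply the same rotation as in straighten_up
theta = np.arctan(x_v1 / y_v1)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
x_rot, y_rot = rot @ np.vstack([x_in, y_in])

plt.plot(x_rot, y_rot, 'k.')   # filled interior instead of only the outline
plt.axis('equal')
plt.gca().invert_yaxis()
plt.show()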

Aligning two combined plots - Matplotlib

I'm currently working on a plot in which I show two datasets combined.
I plot them with the following code:
plt.figure()
# Data 1
data = plt.cm.binary(data1)
data[..., 3] = 1.0 * (data1 > 0.0)
fig = plt.imshow(data, interpolation='nearest', cmap='binary', vmin=0, vmax=1, extent=(-4, 4, -4, 4))
# Plotting just the nonzero values of data2
x = numpy.linspace(-4, 4, 11)
y = numpy.linspace(-4, 4, 11)
data2_x = numpy.nonzero(data2)[0]
data2_y = numpy.nonzero(data2)[1]
pts = plt.scatter(x[data2_x], y[data2_y], marker='s', c=data2[data2_x, data2_y])
And this gives me this plot:
As can be seen in the image, my background and foreground squares are not aligned.
Both of them have the same dimensions (20 x 20). I would like to have a way, if it's possible, to align center with center, or corner with corner, but to have some kind of alignment.
In some grid cells it seems that I have bottom right corner alignment, in others bottom left corner alignment, and in others no alignment at all, which degrades the visualization.
Any help would be appreciated.
Thank you.
As tcaswell says, your problem may be easiest to solve by defining the extent keyword for imshow.
If you give the extent keyword, the outermost pixel edges will be at the extents. For example:
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(np.random.random((8, 10)), extent=(2, 6, -1, 1), interpolation='nearest', aspect='auto')
Now it is easy to calculate the center of each pixel. In X direction:
the inter-pixel distance in X is (6 - 2) / 10 = 0.4 units
the center of the leftmost pixel is half a pixel away from the left edge, at 2 + 0.4/2 = 2.2
Similarly, with 8 rows spanning -1 to 1 the Y spacing is 0.25, so the Y centers are at -0.875 + n * 0.25.
So, by tuning the extent you can get your pixel centers wherever you want them.
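As a quick sanity check of that arithmetic, here is a short sketch (restating only the example values above) that computes the pixel centres implied by a given extent:
import numpy as np

extent = (2, 6, -1, 1)        # left, right, bottom, top
ncols, nrows = 10, 8          # shape of the random image above

dx = (extent[1] - extent[0]) / ncols   # 0.4 units per pixel in X
dy = (extent[3] - extent[2]) / nrows   # 0.25 units per pixel in Y

x_centers = extent[0] + dx / 2 + dx * np.arange(ncols)   # 2.2, 2.6, ...
y_centers = extent[2] + dy / 2 + dy * np.arange(nrows)   # -0.875, -0.625, ...
print(x_centers)
print(y_centers)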
An example with 20x20 data:
import matplotlib.pyplot as plt
import numpy as np
# create the data to be shown with "scatter"
yvec, xvec = np.meshgrid(np.linspace(-4.75, 4.75, 20), np.linspace(-4.75, 4.75, 20))
sc_data = np.random.random((20, 20))
# create the data to be shown with "imshow" (20x20 pixels)
im_data = np.random.random((20, 20))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(im_data, extent=[-5,5,-5,5], interpolation='nearest', cmap=plt.cm.gray)
ax.scatter(xvec, yvec, 100*sc_data)
Notice that here the inter-pixel distance is the same for both scatter (if you have a look at xvec, all pixels are 0.5 units apart) and imshow (as the image is stretched from -5 to +5 and has 20 pixels, the pixels are .5 units apart).
Here is a code example where there is no alignment problem.
import matplotlib.pyplot as plt
import numpy
data1 = numpy.random.rand(10, 10)
data2 = numpy.random.rand(10, 10)
data2[data2 < 0.4] = 0.0
plt.figure()
# Plotting data1
fig = plt.imshow(data1, interpolation='nearest', cmap='binary', vmin=0.0, vmax=1.0)
# Plotting data2
data2_x = numpy.nonzero(data2)[0]
data2_y = numpy.nonzero(data2)[1]
pts = plt.scatter(data2_x, data2_y, marker='s', c=data2[data2_x, data2_y])
plt.show()
which gives a perfectly aligned combined plots:
Thus the use of additional options in your code might be the reason for the misalignment of the combined plots.
