How to map rectangle image to quadrilateral with PIL? - python

The Python PIL library allows me to map any quadrilateral in an image to a rectangle using
im.transform(size, QUAD, data)
What I need is a function that does the opposite, i.e. maps a rectangular image to a specified quadrilateral.
I figured this might be achieved with the above-mentioned function like this:
That is, I would find a quad (the red one in the image) that, passed to im.transform(size, QUAD, data), transforms the image to the quad I want. The problem is that I don't know how to find the red quad.
I would appreciate any idea on how to find the red quad, or any other way to map a rectangular image to a quad, using only PIL if possible.

So I solved the issue with a simple forward mapping rather than an inverse mapping (which is usually preferable); in my application I only ever map the rectangle to a quad that is smaller than the rectangle, so there are usually no holes in the transformed image. The code is as follows:
from PIL import Image

def reverse_quad_transform(image, quad_to_map_to, alpha):
    # forward mapping, for simplicity (image is assumed to be RGBA)
    result = Image.new("RGBA", image.size)
    result_pixels = result.load()
    width, height = result.size

    # start from a fully transparent canvas
    for y in range(height):
        for x in range(width):
            result_pixels[x, y] = (0, 0, 0, 0)

    # quad corners: the p1 -> p2 edge receives the left edge of the source image,
    # the p4 -> p3 edge receives the right edge
    p1 = (quad_to_map_to[0], quad_to_map_to[1])
    p2 = (quad_to_map_to[2], quad_to_map_to[3])
    p3 = (quad_to_map_to[4], quad_to_map_to[5])
    p4 = (quad_to_map_to[6], quad_to_map_to[7])

    p1_p2_vec = (p2[0] - p1[0], p2[1] - p1[1])
    p4_p3_vec = (p3[0] - p4[0], p3[1] - p4[1])

    for y in range(height):
        for x in range(width):
            pixel = image.getpixel((x, y))

            y_percentage = y / float(height)
            x_percentage = x / float(width)

            # interpolate vertically
            pa = (p1[0] + p1_p2_vec[0] * y_percentage, p1[1] + p1_p2_vec[1] * y_percentage)
            pb = (p4[0] + p4_p3_vec[0] * y_percentage, p4[1] + p4_p3_vec[1] * y_percentage)

            pa_to_pb_vec = (pb[0] - pa[0], pb[1] - pa[1])

            # interpolate horizontally
            p = (pa[0] + pa_to_pb_vec[0] * x_percentage, pa[1] + pa_to_pb_vec[1] * x_percentage)

            try:
                # nearest-pixel write; anything that falls outside the canvas is skipped
                result_pixels[int(p[0]), int(p[1])] = (pixel[0], pixel[1], pixel[2], min(int(alpha * 255), pixel[3]))
            except Exception:
                pass

    return result
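A minimal usage sketch (the file name and quad coordinates are made up for illustration; the corner order matches p1..p4 above, i.e. upper-left, lower-left, lower-right, upper-right of the source rectangle):

from PIL import Image

im = Image.open("photo.jpg").convert("RGBA")  # hypothetical input image
quad = (50, 40, 60, 400, 420, 380, 400, 60)   # (x1, y1, x2, y2, x3, y3, x4, y4)
warped = reverse_quad_transform(im, quad, alpha=1.0)
canvas = Image.new("RGBA", im.size, (255, 255, 255, 255))
canvas = Image.alpha_composite(canvas, warped)
canvas.save("warped.png")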

Related

How do I properly implement Paint.NET's polar inversion effect in Python?

I want to implement Paint.NET's polar inversion effect in Python.
If you don't know Paint.NET's polar inversion effect, basically, it transforms this (I created the image using Python):
To this:
After a bit of Google searching I found this:
protected override void InverseTransform(ref WarpEffectBase.TransformData data)
{
    double x = data.X;
    double y = data.Y;
    double invertDistance = DoubleUtil.Lerp(1.0, base.DefaultRadius2 / (x * x + y * y), this.amount);
    data.X = x * invertDistance;
    data.Y = y * invertDistance;
}
Source
After a bit more Google searching I found this:
float Lerp(float firstFloat, float secondFloat, float by)
{
    return firstFloat * (1 - by) + secondFloat * by;
}
Source
So putting the pieces together, this is the transformation that needs to be applied to every pixel, implemented in Python:
def lerp(x, y, by):
    return x * (1 - by) + y * by

def transform_xy(x, y, width, height):
    # shift the origin to the image centre and make the y axis point upwards
    cx = width / 2
    cy = height / 2
    return x - cx, cy - y

def base_radius_squared(width, height):
    radius = min(width, height) / 2
    return radius ** 2

def polar_inversion(x, y, radius, strength):
    # `radius` here is the squared default radius (see base_radius_squared)
    invertDistance = lerp(1, radius / (x**2 + y**2), strength)
    return x * invertDistance, y * invertDistance
Strength is a float between -4 and 4 (inclusive). Note that x and y are not the raw pixel coordinates of the image: the origin is not at the upper-left corner, and the y axis does not point downwards. Instead, x and y are measured relative to the center of transformation (which defaults to the center of the image), with the y axis pointing upwards. I just want to clarify the coordinate system here.
So how do I apply the transformation to every pixel of the image as efficiently as possible, without using a for loop to iterate over every pixel? In other words, how do I apply the transformation in a vectorized way?
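A minimal vectorised sketch of one way to do this (my own, not from the question: the helper name polar_inversion_image and the use of scipy.ndimage.map_coordinates for resampling are assumptions, and it assumes an RGB input image):

import numpy as np
from PIL import Image
from scipy.ndimage import map_coordinates

def polar_inversion_image(img, strength=1.0):
    # Inverse mapping: for every destination pixel, compute the source pixel
    # via the polar inversion formula and sample it with bilinear interpolation.
    a = np.asarray(img, dtype=float)
    h, w = a.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius2 = (min(w, h) / 2.0) ** 2

    # destination coordinates relative to the centre, y axis flipped upwards
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx - cx
    y = cy - yy
    r2 = x * x + y * y
    r2[r2 == 0] = 1e-12                                  # avoid division by zero at the centre

    invert = (1 - strength) + strength * (radius2 / r2)  # lerp(1, R^2 / r^2, strength)
    src_col = x * invert + cx
    src_row = cy - y * invert

    out = np.stack(
        [map_coordinates(a[..., c], [src_row, src_col], order=1, mode="constant")
         for c in range(a.shape[2])],
        axis=-1,
    )
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

Computing, for every destination pixel, where it comes from in the source (inverse mapping) avoids the holes that a forward per-pixel mapping would leave.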

Padding scipy affine_transform output to show non-overlapping regions of transformed images

I have source (src) image(s) I wish to align to a destination (dst) image using an Affine Transformation whilst retaining the full extent of both images during alignment (even the non-overlapping areas).
I am already able to calculate the Affine Transformation rotation and offset matrix, which I feed to scipy.ndimage.interpolate.affine_transform to recover the dst-aligned src image.
The problem is that, when the images are not fully overlapping, the resultant image is cropped to only the common footprint of the two images. What I need is the full extent of both images, placed on the same pixel coordinate system. This question is almost a duplicate of this one - and the excellent answer and repository there provide this functionality for OpenCV transformations. I unfortunately need this for scipy's implementation.
Much too late, after repeatedly hitting a brick wall trying to translate the above question's answer to scipy, I came across this issue and subsequently followed to this question. The latter question did give some insight into the wonderful world of scipy's affine transformation, but I have as yet been unable to crack my particular needs.
The transformations from src to dst can include translations and rotation. I can get translation-only working (an example is shown below) and I can get rotation-only working (largely by hacking around the code below and taking inspiration from the use of the reshape argument in scipy.ndimage.interpolation.rotate). However, I am getting thoroughly lost combining the two. I have tried to calculate what should be the correct offset (see this question's answers again), but I can't get it working in all scenarios.
Translation-only working example of padded affine transformation, which follows largely this repo, explained in this answer:
from scipy.ndimage import rotate, affine_transform
import numpy as np
import matplotlib.pyplot as plt

nblob = 50
shape = (200, 100)
buffered_shape = (300, 200)  # buffer for rotation and translation

def affine_test(angle=0, translate=(0, 0)):
    np.random.seed(42)
    # Maximum translation allowed is half the difference between shape and buffered_shape
    # Generate a buffered_shape-sized base image with random blobs
    base = np.zeros(buffered_shape, dtype=np.float32)
    random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
    i = random_locs[:nblob]
    j = random_locs[nblob:]
    for k, (_i, _j) in enumerate(zip(i, j)):
        # Use different values, just to make it easier to distinguish blobs
        base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10

    # Impose a rotation and translation on source
    src = rotate(base, angle, reshape=False, order=1, mode="constant")
    bsc = (np.array(buffered_shape) / 2).astype(int)
    sc = (np.array(shape) / 2).astype(int)
    src = src[
        bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
        bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
    ]
    # Cut-out destination from the centre of the base image
    dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]
    src_y, src_x = src.shape

    def get_matrix_offset(centre, angle, scale):
        """Follows OpenCV.getRotationMatrix2D"""
        angle = angle * np.pi / 180
        alpha = scale * np.cos(angle)
        beta = scale * np.sin(angle)
        return (
            np.array([[alpha, beta], [-beta, alpha]]),
            np.array(
                [
                    (1 - alpha) * centre[0] - beta * centre[1],
                    beta * centre[0] + (1 - alpha) * centre[1],
                ]
            ),
        )

    # Obtain the rotation matrix and offset that describes the transformation
    # between src and dst
    matrix, offset = get_matrix_offset(np.array([src_y / 2, src_x / 2]), angle, 1)
    offset = offset - translate

    # Determine the outer bounds of the new image
    lin_pts = np.array([[0, src_x, src_x, 0], [0, 0, src_y, src_y]])
    transf_lin_pts = np.dot(matrix.T, lin_pts) - offset[::-1].reshape(2, 1)

    # Find min and max bounds of the transformed image
    min_x = np.floor(np.min(transf_lin_pts[0])).astype(int)
    min_y = np.floor(np.min(transf_lin_pts[1])).astype(int)
    max_x = np.ceil(np.max(transf_lin_pts[0])).astype(int)
    max_y = np.ceil(np.max(transf_lin_pts[1])).astype(int)

    # Add translation to the transformation matrix to shift to positive values
    anchor_x, anchor_y = 0, 0
    if min_x < 0:
        anchor_x = -min_x
    if min_y < 0:
        anchor_y = -min_y
    shifted_offset = offset - np.dot(matrix, [anchor_y, anchor_x])

    # Create padded destination image
    dst_h, dst_w = dst.shape[:2]
    pad_widths = [anchor_y, max(max_y, dst_h) - dst_h, anchor_x, max(max_x, dst_w) - dst_w]
    dst_padded = np.pad(
        dst,
        ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
        "constant",
        constant_values=-1,
    )
    dst_pad_h, dst_pad_w = dst_padded.shape

    # Create the aligned and padded source image
    source_aligned = affine_transform(
        src,
        matrix.T,
        offset=shifted_offset,
        output_shape=(dst_pad_h, dst_pad_w),
        order=3,
        mode="constant",
        cval=-1,
    )

    # Plot the images
    fig, axes = plt.subplots(1, 4, figsize=(10, 5), sharex=True, sharey=True)
    axes[0].imshow(src, cmap="viridis", vmin=-1, vmax=nblob)
    axes[0].set_title("Source")
    axes[1].imshow(dst, cmap="viridis", vmin=-1, vmax=nblob)
    axes[1].set_title("Dest")
    axes[2].imshow(source_aligned, cmap="viridis", vmin=-1, vmax=nblob)
    axes[2].set_title("Source aligned to Dest padded")
    axes[3].imshow(dst_padded, cmap="viridis", vmin=-1, vmax=nblob)
    axes[3].set_title("Dest padded")
    plt.show()
e.g.:
affine_test(0, (-20, 40))
gives:
With a zoom-in showing the alignment within the padded images:
I require the full extent of the src and dst images aligned on the same pixel coordinates, with both rotations and translations.
Any help is greatly appreciated!
Complexity analysis
The problem is to determine three parameters: the rotation angle and the x and y displacements.
Suppose you have a grid of candidate values for the angle and the x and y displacements, each of size O(n), and that your images are of size O(n x n). Rotation, translation, and comparison of the images each take O(n^2), and there are O(n^3) candidate transforms to try, so you end up with complexity O(n^5) - probably that's why you are asking the question.
However, the displacement part can be computed more efficiently by finding the maximum of the cross-correlation using Fourier transforms. A 2D FFT of an n x n image costs O(n^2 log n), so the complete correlation matrix can be computed in O(n^2 log n), and finding its maximum costs O(n^2); the best alignment for a given angle is therefore found in O(n^2 log n). You still have to search over the O(n) candidate angles, giving an overall complexity of O(n^3 log n). Remember we are using Python and may have significant overhead, so this complexity only gives us an idea of how difficult the problem will be; I have handled problems like this before, so I start out confident.
Preparing some example
I will start by downloading an image, applying a rotation to it, and centering both images by padding with zeros.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
# `im` below is the downloaded example image (a PIL Image; any RGB image works)

def centralized(a, width, height):
    '''
    Image centralized to the given width and height
    by padding with zeros (black)
    '''
    assert width >= a.shape[0] and height >= a.shape[1]
    ap = np.zeros((width, height) + a.shape[2:], a.dtype)
    ccx = (width - a.shape[0]) // 2
    ccy = (height - a.shape[1]) // 2
    ap[ccx:ccx + a.shape[0], ccy:ccy + a.shape[1], ...] = a
    return ap

def image_pair(im, width, height, displacement=(0, 0), angle=0):
    '''
    This builds a pair of images as numpy arrays
    from the input image.
    Both images will be padded with zeros (black),
    roughly centralized, and will have the specified shape.
    Make sure that the width and height chosen are enough
    to fit the rotated image.
    '''
    a = np.array(im)
    a1 = centralized(a, width, height)
    a2 = centralized(ndimage.rotate(a, angle), width, height)
    a2 = np.roll(a2, displacement, axis=(0, 1))
    return a1, a2

def random_transform():
    angle = np.random.rand() * 360
    displacement = np.random.randint(-100, 100, 2)
    return displacement, angle

a1, a2 = image_pair(im, 512, 512, *random_transform())
plt.subplot(121)
plt.imshow(a1)
plt.subplot(122)
plt.imshow(a2)
The displacement search
The first thing is to compute the correlation of the image
def compute_correlation(a1, a2):
    A1 = np.fft.rfftn(a1, axes=(0, 1))
    A2 = np.fft.rfftn(a2, axes=(0, 1))
    C = np.fft.irfftn(np.sum(A1 * np.conj(A2), axis=2))
    return C
Then let's create an example without rotation and confirm that, using the index of the maximum correlation, we can find the displacement that fits one image to the other.
displacement, _ = random_transform()
a1, a2 = image_pair(im, 521, 512, displacement, angle=0)
C = compute_correlation(a1, a2)
np.unravel_index(np.argmax(C), C.shape), displacement
a3 = np.roll(a2, np.unravel_index(np.argmax(C), C.shape), axis=(0,1))
assert np.all(a3 == a1)
With rotation or interpolation this result may not be exact but it gives the displacement that will give us the closest possible alignment.
Let's put this in a function for future use
def get_aligned(a1, a2, angle):
    a1_rotated = ndimage.rotate(a1, angle, reshape=False)
    C = compute_correlation(a2, a1_rotated)
    found_displacement = np.unravel_index(np.argmax(C), C.shape)
    a1_aligned = np.roll(a1_rotated, found_displacement, axis=(0, 1))
    return a1_aligned
Searching for the angle
Now we can do this in two steps:
first we compute the correlation for each candidate angle, then we take the angle that gives the maximum correlation and find the alignment.
displacement, angle = random_transform()
a1, a2 = image_pair(im, 521, 512, displacement, angle)

C_max = []
C_argmax = []
angle_guesses = np.arange(0, 360, 5)
for angle_guess in angle_guesses:
    a1_rotated = ndimage.rotate(a1, angle_guess, reshape=False)
    C = compute_correlation(a1_rotated, a2)
    i = np.argmax(C)
    v = C.reshape(-1)[i]
    C_max.append(v)
    C_argmax.append(i)
Let's see what the correlation looks like:
plt.plot(angle_guesses, C_max);
Looking at this curve we have a clear winner, even though a sunflower has some degree of rotational symmetry.
Let's apply the transformation to the original image and see how it looks:
a1_aligned = get_aligned(a1, a2, angle_guesses[np.argmax(C_max)])
plt.subplot(121)
plt.imshow(a2)
plt.subplot(122)
plt.imshow(a1_aligned)
Great, I wouldn't have done better than this manually.
I am using a sunflower image for beauty reasons, but the procedure is the same for any type of image. I use RGB to show that the image may have one additional dimension, i.e. each pixel carries a feature vector instead of a scalar feature; if your feature is a scalar, you can reshape your data to (width, height, 1).
Working code below, in case anyone else needs to do this with scipy's affine transformations:
def affine_test(angle=0, translate=(0, 0), shape=(200, 100), buffered_shape=(300, 200), nblob=50):
    # Maximum translation allowed is half the difference between shape and buffered_shape
    np.random.seed(42)

    # Generate a buffered_shape-sized base image
    base = np.zeros(buffered_shape, dtype=np.float32)
    random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
    i = random_locs[:nblob]
    j = random_locs[nblob:]
    for k, (_i, _j) in enumerate(zip(i, j)):
        base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10

    # Impose a rotation and translation on source
    src = rotate(base, angle, reshape=False, order=1, mode="constant")
    bsc = (np.array(buffered_shape) / 2).astype(int)
    sc = (np.array(shape) / 2).astype(int)
    src = src[
        bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
        bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
    ]
    # Cut-out destination from the centre of the base image
    dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]
    src_y, src_x = src.shape

    def get_matrix_offset(centre, angle, scale):
        """Follows OpenCV.getRotationMatrix2D"""
        angle_rad = angle * np.pi / 180
        alpha = np.round(scale * np.cos(angle_rad), 8)
        beta = np.round(scale * np.sin(angle_rad), 8)
        return (
            np.array([[alpha, beta], [-beta, alpha]]),
            np.array(
                [
                    (1 - alpha) * centre[0] - beta * centre[1],
                    beta * centre[0] + (1 - alpha) * centre[1],
                ]
            ),
        )

    matrix, offset = get_matrix_offset(
        np.array([((src_y - 1) / 2) - translate[0], ((src_x - 1) / 2) - translate[1]]), angle, 1
    )
    offset += np.array(translate)

    # Build the full 3x3 transform and invert it for scipy's (output -> input) convention
    M = np.column_stack((matrix, offset))
    M = np.vstack((M, [0, 0, 1]))
    iM = np.linalg.inv(M)
    imatrix = iM[:2, :2]
    ioffset = iM[:2, 2]

    # Determine the outer bounds of the new image
    lin_pts = np.array([[0, src_y - 1, src_y - 1, 0], [0, 0, src_x - 1, src_x - 1]])
    transf_lin_pts = np.dot(matrix, lin_pts) + offset.reshape(2, 1)  # - np.array(translate).reshape(2, 1) # both?

    # Find min and max bounds of the transformed image
    min_x = np.floor(np.min(transf_lin_pts[1])).astype(int)
    min_y = np.floor(np.min(transf_lin_pts[0])).astype(int)
    max_x = np.ceil(np.max(transf_lin_pts[1])).astype(int)
    max_y = np.ceil(np.max(transf_lin_pts[0])).astype(int)

    # Add translation to the transformation matrix to shift to positive values
    anchor_x, anchor_y = 0, 0
    if min_x < 0:
        anchor_x = -min_x
    if min_y < 0:
        anchor_y = -min_y
    dot_anchor = np.dot(imatrix, [anchor_y, anchor_x])
    shifted_offset = ioffset - dot_anchor

    # Create padded destination image
    dst_y, dst_x = dst.shape[:2]
    pad_widths = [anchor_y, max(max_y, dst_y) - dst_y, anchor_x, max(max_x, dst_x) - dst_x]
    dst_padded = np.pad(
        dst,
        ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
        "constant",
        constant_values=-10,
    )
    dst_pad_y, dst_pad_x = dst_padded.shape

    # Create the aligned and padded source image
    source_aligned = affine_transform(
        src,
        imatrix,
        offset=shifted_offset,
        output_shape=(dst_pad_y, dst_pad_x),
        order=3,
        mode="constant",
        cval=-10,
    )
E.g. running:
affine_test(angle=-25, translate=(10, -40))
will show:
and zoomed in:
Apologies that the code is not nicely written as is.
Note that when running this in the wild I noticed it cannot handle any change in the scale of the images, though I am not certain this isn't something to do with how I calculate the transformation - so it is a caveat worth noting, and checking, if you are aligning images of different scales.
If you have two images that are similar (or the same) and you want to align them, you can do it using the two functions rotate and shift:
from scipy.ndimage import rotate, shift
First you need to find the angular difference between the two images, angle_to_rotate; with that, you apply a rotation to src:
angle_to_rotate = 25
rotated_src = rotate(src, angle_to_rotate , reshape=True, order=1, mode="constant")
With reshape=True you avoid losing information from your original src matrix, and the result is padded so that the image can be translated around the (0, 0) index. You can calculate this translation as (x*cos(angle), y*sin(angle)), where x and y are the dimensions of the image, but it probably won't matter.
Now you will need to translate the image so that it lines up with the destination; for that you can use the shift function:
rot_translated_src = shift(rotated_src , [distance_x, distance_y])
In this case there is no reshape (because otherwise you wouldn't get any real translation), so if the image was not previously padded some information will be lost.
But you can do some padding with
np.pad(src, number, mode='constant')
To calculate distance_x and distance_y you will need to find a point that serves as a reference between rotated_src and the destination, and then just calculate the distance along the x and y axes.
Summary
Add some padding to src and dst
Find the angular distance between them.
Rotate src with scipy.ndimage.rotate using reshape=True
Find the horizontal and vertical distance distance_x, distance_y between the rotated image and dst
Translate your 'rotated_src' with scipy.ndimage.shift
Code
from scipy.ndimage import rotate, shift
import matplotlib.pyplot as plt
import numpy as np
First we make the destination image:
# make and plot dest
dst = np.ones([40,20])
dst = np.pad(dst,10)
dst[17,[14,24]]=4
dst[27,14:25]=4
dst[26,[14,25]]=4
rotated_dst = rotate(dst, 20, order=1)
plt.imshow(dst) # plot it
plt.imshow(rotated_dst)
plt.show()
We make the Source image:
# make_src image and plot it
src = np.zeros([40,20])
src = np.pad(src,10)
src[0:20,0:20]=1
src[7,[4,14]]=4
src[17,4:15]=4
src[16,[4,15]]=4
plt.imshow(src)
plt.show()
Then we align the src to the destination:
rotated_src = rotate(src, 20, order=1) # find the angle 20, reshape true is by default
plt.imshow(rotated_src)
plt.show()
distance_y = 8 # find this distances from rotated_src and dst
distance_x = 12 # use any visual reference or even the corners
translated_src = shift(rotated_src, [distance_y,distance_x])
plt.imshow(translated_src)
plt.show()
P.S.: If you have trouble finding the angle and the distances programmatically, please leave a comment providing a bit more insight into what can be used as a reference (that could be, for example, the frame of the image or some image features/data).
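For what it's worth, here is a minimal sketch of one programmatic way to estimate distance_y and distance_x (my own addition, not part of the answer above, reusing dst, rotated_src, and shift from the code above): take the intensity-weighted centroid of each image as the reference point, assuming dst and rotated_src contain the same pattern against an empty background.

def centroid(a):
    # intensity-weighted centroid (row, column) of a 2D array
    ys, xs = np.indices(a.shape)
    total = a.sum()
    return (ys * a).sum() / total, (xs * a).sum() / total

cy_dst, cx_dst = centroid(dst)
cy_rot, cx_rot = centroid(rotated_src)
distance_y, distance_x = cy_dst - cy_rot, cx_dst - cx_rot
translated_src = shift(rotated_src, [distance_y, distance_x])

The FFT cross-correlation shown in the earlier answer is more robust when the images contain background clutter.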

How to filter specific image coordinates from an image

I am reading an image, getting objects that have a certain brightness value, and then plotting the X and Y coords to the image.
But there is a huge group of outliers, all located in a rectangular part of the image whose X and Y coords are 1110-1977 (width) and 1069-1905 (height). To deal with it, I loop through that rectangular portion of the image and remove from my pre-created x and y arrays any values that have the same coords.
However, this removes many more coords than intended, for example any whose X falls in the range 1110-1977. So the end result is a cross-pattern filtering, when I only want the rectangle in the center to be filtered. How would I do this?
Code
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt

imag = Image.open("Centaurus_A-DeNoiseAI-denoise.jpg")
imag = imag.convert('RGB')
x = []
y = []

imag2 = Image.open("Cen_A_cropped.jpg")
imag2 = imag2.convert('RGB')
r = []
g = []
b = []
width2, height2 = imag2.size
for count2 in range(width2):
    for i2 in range(height2):
        X, Y = count2, i2
        (R, G, B) = imag2.getpixel((X, Y))
        r.append(R)
        g.append(G)
        b.append(B)
average_r = sum(r) / len(r)
average_g = sum(g) / len(g)
average_b = sum(b) / len(b)
brightness_average = sqrt(0.299*(average_r**2) + 0.587*(average_g**2) + 0.114*(average_b**2))
print("Avg. brightness " + str(brightness_average))

def calculate_brightness(galaxy, ref_clus, clus_mag):
    delta_b = (galaxy / ref_clus)
    bright = delta_b**2
    mag = np.log(bright) / np.log(2.512)
    return mag + clus_mag

count = 0
X, Y = 1556, 1568
(R, G, B) = imag.getpixel((X, Y))
width, height = imag.size
brightness = sqrt(0.299*(R**2) + 0.587*(G**2) + 0.114*(B**2))
print("Magnitude: " + str((calculate_brightness(13050, 15.79, 3.7))))
reference = brightness_average / (calculate_brightness(13050, 15.79, 3.7) / 6.84)
print("Reference: " + str(reference))

for count in range(width):
    for i in range(height):
        X, Y = count, i
        (R, G, B) = imag.getpixel((X, Y))
        brightness = sqrt(0.299*(R**2) + 0.587*(G**2) + 0.114*(B**2))
        if (reference <= brightness <= reference + 3):
            x.append(X)
            y.append(Y)

#post processing----------------------------------------------------------------------------------------------------
for x2 in range(1110, 1977):
    for y2 in range(1069, 1905):
        X, Y = x2, y2
        if (X in x and Y in y):
            x.remove(X)
            y.remove(Y)
#-------------------------------------------------------------------------------------------------------------------

with imag as im:
    delta = 19
    draw = ImageDraw.Draw(im)
    for i in range(len(x)):
        draw.rectangle([x[i-delta], y[i-delta], x[i-delta], y[i-delta]], fill=(0, 255, 0))
    im.save("your_image.png")
Centaurus_A-DeNoiseAI-denoise.jpg
Cen_A_cropped.jpg
Your post-processing logic is flawed: you remove a bunch of X values in the range 1110-1977 without checking whether the corresponding Y value is also inside the box. Remove that code section and instead add the check to the first loop, where you gather your x and y coords.
for count in range(width):
    for i in range(height):
        X, Y = count, i
        if 1110 <= X < 1977 and 1069 <= Y < 1905:  # add these
            continue                               # two lines
        (R, G, B) = imag.getpixel((X, Y))
However, there is a better way of doing the exact same thing by using numpy arrays. Instead of writing explicit loops, you can vectorise a lot of your computations.
import numpy as np
from PIL import Image, ImageDraw

image = Image.open('Centaurus_A-DeNoiseAI-denoise.jpg').convert('RGB')
img1 = np.array(image)
img2 = np.array(Image.open('Cen_A_cropped.jpg').convert('RGB'))
coeffs = np.array([.299, .587, .114])

average = img2.mean(axis=(0, 1))
brightness_average = np.sqrt(np.sum(average**2 * coeffs))
reference = brightness_average / (calculate_brightness(13050, 15.79, 3.7) / 6.84)  # calculate_brightness() as defined in your code
print(f'Avg. brightness: {brightness_average}')
print(f'Reference: {reference}')

brightness = np.sqrt(np.sum(img1.astype(int)**2 * coeffs, axis=-1))
accepted_brightness = (brightness >= reference) * (brightness <= reference + 3)
pixels_used = np.ones((img1.shape[:2]), dtype=bool)
pixels_used[1069:1905, 1110:1977] = False
rows, cols = np.where(accepted_brightness * pixels_used)

with image as im:
    draw = ImageDraw.Draw(im)
    draw.point(list(zip(cols, rows)), fill=(0, 255, 0))
    image.save('out.png')
The main trick used here is in the line
rows, cols = np.where(accepted_brightness * pixels_used)
accepted_brightness is a 2D boolean array holding, for each pixel, whether its brightness is within your preferred range. pixels_used is another 2D boolean array, where every pixel is True except the pixels in the box near the centre that you want to ignore. The combination of the two gives you the pixel coordinates that have the correct brightness and are not in the rectangle in the centre.
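A toy illustration of that mask combination (made-up 4x4 values, not from the answer):

import numpy as np

brightness = np.arange(16).reshape(4, 4)           # fake "brightness" image
accepted_brightness = (brightness >= 5) & (brightness <= 7)
pixels_used = np.ones_like(accepted_brightness)    # all True
pixels_used[1:3, 1:3] = False                      # ignore the 2x2 box in the middle
rows, cols = np.where(accepted_brightness & pixels_used)
print(list(zip(rows, cols)))                       # only (1, 3): in range and outside the box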

Why isn't Matplotlib+Basemap showing islands?

I've been working with matplotlib and Basemap to show some information about New York City. Up until now, I've been following this guide, but I've hit an issue. I'm trying to show Manhattan Island within my visualization, but I can't figure out why Basemap isn't showing it as an island.
Here's the visualization that basemap is giving me:
Here's a screenshot of the bounding box I'm using:
And here's the code that is generating the image:
wl = -74.04006
sl = 40.683092
el = -73.834067
nl = 40.88378
m = Basemap(resolution='f',  # c, l, i, h, f or None
            projection='merc',
            area_thresh=50,
            lat_0=(wl + sl)/2, lon_0=(el + nl)/2,
            llcrnrlon=wl, llcrnrlat=sl, urcrnrlon=el, urcrnrlat=nl)
m.drawmapboundary(fill_color='#46bcec')
m.fillcontinents(color='#f2f2f2',lake_color='#46bcec')
m.drawcoastlines()
m.drawrivers()
I thought that it might be treating the water in between as a river, but m.drawrivers() didn't appear to fix it. Any help is obviously extremely appreciated.
Thanks in advance!
One approach to getting a better-quality base map for your plots is to build one from web map tiles at an appropriate zoom level. Here I demonstrate how to get them from the OpenStreetMap web map servers. In this case I use zoom level 10 and combine 2 map tiles into a single image array. One drawback is that the extent of the combined image is always larger than the extent we asked for. Here is the working code:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import math
import urllib2    # note: this answer uses Python 2 modules (urllib2, StringIO)
import StringIO
from PIL import Image

# === Begin block1 ===
# Credit: BerndGit, answered Feb 15 '15 at 19:47. And ...
# Source: https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames
def deg2num(lat_deg, lon_deg, zoom):
    '''Lon./lat. to tile numbers'''
    lat_rad = math.radians(lat_deg)
    n = 2.0 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.log(math.tan(lat_rad) + (1 / math.cos(lat_rad))) / math.pi) / 2.0 * n)
    return (xtile, ytile)

def num2deg(xtile, ytile, zoom):
    '''Tile numbers to lon./lat.'''
    n = 2.0 ** zoom
    lon_deg = xtile / n * 360.0 - 180.0
    lat_rad = math.atan(math.sinh(math.pi * (1 - 2 * ytile / n)))
    lat_deg = math.degrees(lat_rad)
    return (lat_deg, lon_deg)  # NW-corner of the tile.

def getImageCluster(lat_deg, lon_deg, delta_lat, delta_long, zoom):
    # access map tiles from the internet
    # no access key or password is needed
    smurl = r"http://a.tile.openstreetmap.org/{0}/{1}/{2}.png"
    # useful snippet: smurl.format(zoom, xtile, ytile) -> complete URL
    # x increases L-R; y Top-Bottom
    xmin, ymax = deg2num(lat_deg, lon_deg, zoom)  # get tile numbers (x,y)
    xmax, ymin = deg2num(lat_deg + delta_lat, lon_deg + delta_long, zoom)
    # PIL is used to build a new image from the tiles
    Cluster = Image.new('RGB', ((xmax - xmin + 1) * 256 - 1, (ymax - ymin + 1) * 256 - 1))
    for xtile in range(xmin, xmax + 1):
        for ytile in range(ymin, ymax + 1):
            try:
                imgurl = smurl.format(zoom, xtile, ytile)
                print("Opening: " + imgurl)
                imgstr = urllib2.urlopen(imgurl).read()
                # TODO: study, what these do?
                tile = Image.open(StringIO.StringIO(imgstr))
                Cluster.paste(tile, box=((xtile - xmin) * 256, (ytile - ymin) * 255))
            except:
                print("Couldn't download image")
                tile = None
    return Cluster
# === End Block1 ===

# Credit to myself
def getextents(latmin_deg, lonmin_deg, delta_lat, delta_long, zoom):
    '''Return LL and UR, each with (long,lat) of the real extent of the combined tiles.
    latmin_deg: bottom lat of extent
    lonmin_deg: left long of extent
    delta_lat: extent of lat
    delta_long: extent of long, all in degrees
    '''
    # Tile numbers (x,y): x increases L-R; y Top-Bottom
    xtile_LL, ytile_LL = deg2num(latmin_deg, lonmin_deg, zoom)  # get tile numbers as specified by (x, y)
    xtile_UR, ytile_UR = deg2num(latmin_deg + delta_lat, lonmin_deg + delta_long, zoom)
    # from tile numbers, we get NW corners
    lat_NW_LL, lon_NW_LL = num2deg(xtile_LL, ytile_LL, zoom)
    lat_NW_LLL, lon_NW_LLL = num2deg(xtile_LL, ytile_LL + 1, zoom)  # next one down
    lat_NW_UR, lon_NW_UR = num2deg(xtile_UR, ytile_UR, zoom)
    lat_NW_URR, lon_NW_URR = num2deg(xtile_UR + 1, ytile_UR, zoom)  # next one to the right
    # get extents
    minLat = lat_NW_LLL
    minLon = lon_NW_LL
    maxLat = lat_NW_UR
    maxLon = lon_NW_URR
    return (minLon, maxLon, minLat, maxLat)  # (left, right, bottom, top) in degrees

# OP's values of extents for the target area to plot
# some changes here (with a larger zoom level) may lead to a better final plot
wl = -74.04006
sl = 40.683092
el = -73.834067
nl = 40.88378

lat_deg = sl
lon_deg = wl
d_lat = nl - sl
d_long = el - wl
zoom = 10  # zoom level

# Acquire images. The combined image will be slightly larger than the extents
timg = getImageCluster(lat_deg, lon_deg, d_lat, d_long, zoom)

# This computes the real extents of the combined tile images as (left, right, bottom, top)
latmin_deg, lonmin_deg, delta_lat, delta_long = sl, wl, nl - sl, el - wl
(left, right, bottom, top) = getextents(latmin_deg, lonmin_deg, delta_lat, delta_long, zoom)  # units: degrees

# Set up Basemap with proper parameters
m = Basemap(resolution='h',  # h is nice
            projection='merc',
            area_thresh=50,
            lat_0=(bottom + top)/2, lon_0=(left + right)/2,
            llcrnrlon=left, llcrnrlat=bottom, urcrnrlon=right, urcrnrlat=top)

fig = plt.figure()
fig.set_size_inches(10, 12)
m.imshow(np.asarray(timg), extent=[left, right, bottom, top], origin='upper')
m.drawcoastlines(color='gray', linewidth=3.0)  # intentionally thick line
#m.fillcontinents(color='#f2f2f2', lake_color='#46bcec', alpha=0.6)
plt.show()
Hope it helps. The resulting plot:
Edit
Cropping the image to get the exact area to plot is not difficult. The PIL module can handle that; NumPy array slicing also works.
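A minimal sketch of that cropping step (my own addition, reusing timg, deg2num, zoom and the wl/sl/el/nl bounds from the code above, and assuming 256-pixel tiles):

def deg2num_float(lat_deg, lon_deg, zoom):
    # same formula as deg2num above, but without the int() truncation
    lat_rad = math.radians(lat_deg)
    n = 2.0 ** zoom
    xtile = (lon_deg + 180.0) / 360.0 * n
    ytile = (1.0 - math.log(math.tan(lat_rad) + 1 / math.cos(lat_rad)) / math.pi) / 2.0 * n
    return xtile, ytile

# tile indices of the combined image's top-left tile
xmin, ymax = deg2num(sl, wl, zoom)
xmax, ymin = deg2num(nl, el, zoom)
# fractional tile coordinates of the requested corners
x_left, y_lower = deg2num_float(sl, wl, zoom)    # lower-left corner
x_right, y_upper = deg2num_float(nl, el, zoom)   # upper-right corner
# pixel offsets inside the combined image (256 px per tile), then crop
box = (int((x_left - xmin) * 256), int((y_upper - ymin) * 256),
       int((x_right - xmin) * 256), int((y_lower - ymin) * 256))
cropped = timg.crop(box)

If you crop, remember to also set the Basemap corners to the requested extent (wl, sl, el, nl) instead of the (left, right, bottom, top) of the uncropped tiles.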

Stop pyplot.contour from drawing a contour along a discontinuity

I have a 2D map of a coordinate transform. The data at each point is the azimuthal angle in the original coordinate system, which goes from 0 to 360. I'm trying to use pyplot.contour to plot lines of constant angle, e.g. 45 degrees. The contour appears along the 45 degree line between the two poles, but there's an additional part to the contour that connects the two poles along the 0/360 discontinuity. This makes a very jagged, ugly line as it basically just traces the pixels that have a number close to 0 on one side and another close to 360 on the other.
Examples:
Here is an image using full colour map:
You can see the discontinuity along the blue/red curve on the left side. One side is 360 degrees, the other is 0 degrees. When plotting contours, I get:
Note that all contours connect the two poles, but even though I have NOT plotted the 0 degree contour, all the other contours follow along the 0 degree discontinuity (because pyplot thinks if it's 0 on one side and 360 on the other, there must be all other angles in between).
Code to produce this data:
import numpy as np
import matplotlib.pyplot as plt

jgal = np.array(
    [
        [-0.054875539726, -0.873437108010, -0.483834985808],
        [0.494109453312, -0.444829589425, 0.746982251810],
        [-0.867666135858, -0.198076386122, 0.455983795705],
    ]
)

def s2v3(rra, rdec, r):
    pos0 = r * np.cos(rra) * np.cos(rdec)
    pos1 = r * np.sin(rra) * np.cos(rdec)
    pos2 = r * np.sin(rdec)
    return np.array([pos0, pos1, pos2])

def v2s3(pos):
    x = pos[0]
    y = pos[1]
    z = pos[2]
    if np.isscalar(x):
        x, y, z = np.array([x]), np.array([y]), np.array([z])
    rra = np.arctan2(y, x)
    low = np.where(rra < 0.0)
    high = np.where(rra > 2.0 * np.pi)
    if len(low[0]):
        rra[low] = rra[low] + (2.0 * np.pi)
    if len(high[0]):
        rra[high] = rra[high] - (2.0 * np.pi)
    rxy = np.sqrt(x ** 2 + y ** 2)
    rdec = np.arctan2(z, rxy)
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    if x.size == 1:
        rra = rra[0]
        rdec = rdec[0]
        r = r[0]
    return rra, rdec, r

def gal2fk5(gl, gb):
    rgl = np.deg2rad(gl)
    rgb = np.deg2rad(gb)
    r = 1.0
    pos = s2v3(rgl, rgb, r)
    pos1 = np.dot(pos.transpose(), jgal).transpose()
    rra, rdec, r = v2s3(pos1)
    dra = np.rad2deg(rra)
    ddec = np.rad2deg(rdec)
    return dra, ddec

def make_coords(resolution=50):
    width = 9
    height = 6
    px = width * resolution
    py = height * resolution
    coords = np.zeros((px, py, 4))
    for ix in range(0, px):
        for iy in range(0, py):
            l = 360.0 / px * ix - 180.0
            b = 180.0 / py * iy - 90.0
            dra, ddec = gal2fk5(l, b)
            coords[ix, iy, 0] = dra
            coords[ix, iy, 1] = ddec
            coords[ix, iy, 2] = l
            coords[ix, iy, 3] = b
    return coords

coords = make_coords()
# now do one of these
# plt.imshow(coords[:,:,0], origin='lower')  # color plot
plt.contour(
    coords[:, :, 0], levels=[45, 90, 135, 180, 225, 270, 315]
)  # contour plot with jagged ugliness
plt.show()
How can I either:
stop pyplot.contour from drawing a contour along the discontinuity
make pyplot.contour recognize that the 0/360 discontinuity in angle is not a real discontinuity at all.
I can just increase the resolution of the underlying data, but before I get a nice smooth line it starts to take a very long time and a lot of memory to plot.
I will also want to plot a contour along 0 degrees, but if I can figure out how to hide the discontinuity I can just shift it to somewhere else not near a contour. Or, if I can make #2 happen, it won't be an issue.
This is definitely still a hack, but you can get nice smooth contours with a two-fold approach:
Plot contours of the absolute value of the phase (going from -180˚ to 180˚) so that there is no discontinuity.
Plot two sets of contours in a finite region so that numerical defects close to the tops and bottoms of the extrema do not creep in.
Here is the complete code to append to your example:
Z = np.exp(1j*np.pi*coords[:,:,0]/180.0)
Z *= np.exp(0.25j*np.pi/2.0) # Shift to get same contours as in your example
X = np.arange(300)
Y = np.arange(450)
N = 2
levels = 90*(0.5 + (np.arange(N) + 0.5)/N)
c1 = plt.contour(X, Y, abs(np.angle(Z)*180/np.pi), levels=levels)
c2 = plt.contour(X, Y, abs(np.angle(Z*np.exp(0.5j*np.pi))*180/np.pi), levels=levels)
One can generalize this code to get smooth contours for any "periodic" function. What is left to be done is to generate a new set of contours with the correct values so that colormaps and labels are applied correctly. However, there does not seem to be a simple way of doing this with matplotlib: the relevant QuadContourSet class does everything internally, and I do not see a simple way of constructing an appropriate contour object from the contours c1 and c2.
I was interested in the exact same problem. One solution is to NaN out the data along the branch cut so that no contour is drawn across it; see here. Another is to use the max_jump argument in matplotx's contour().
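A minimal sketch of that NaN-masking idea (my own, not the linked answer's code): blank out grid points adjacent to a jump of more than half the period before contouring, so that contour simply leaves a gap across the branch cut.

import numpy as np
import matplotlib.pyplot as plt

def mask_branch_cut(field, jump=180.0):
    # Set pixels on either side of a jump larger than `jump` to NaN so that
    # plt.contour will not draw contours across the discontinuity.
    masked = field.astype(float)
    jump_rows = np.abs(np.diff(field, axis=0)) > jump
    jump_cols = np.abs(np.diff(field, axis=1)) > jump
    masked[:-1, :][jump_rows] = np.nan
    masked[1:, :][jump_rows] = np.nan
    masked[:, :-1][jump_cols] = np.nan
    masked[:, 1:][jump_cols] = np.nan
    return masked

plt.contour(mask_branch_cut(coords[:, :, 0]), levels=[45, 90, 135, 180, 225, 270, 315])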
I molded the solution into a Python package, cplot.
import cplot
import numpy as np
def f(z):
    return np.exp(1 / z)
cplot.show(f, (-1.0, +1.0, 400), (-1.0, +1.0, 400))
