Hello and thank you for your help, my problem is this:
I am working in Python with satellite images and a shapefile. Both were originally in a geographic projection and I converted them to pixel coordinates, so when I plot the Landsat image first, everything is OK (I don't mind the position of the axes here), but when I add the shapefile to the 2D array, the areas (in this case) are upside down. I know why: the origin of an image is in a different corner than the origin of a normal plot.
I tried applying origin='lower', but that affects the final result, after both the image and the shapefile have been combined, which is not what I want. I also tried .reverse() and [::-1] on the array of Y coordinates before adding them to the image, with no good result. I hope you can help me.
This is a plot of areas in the jungle (of Peru, if you are wondering); the blue box is the area I want to show on the Landsat (satellite) image:
http://imgur.com/YlM8zhX
This is the area converted to pixels and added to the satellite image. Note how it is inverted, compared to the first image, due to the location of the origin:
http://imgur.com/VJN7QYW
Thanks for your help.
EDIT: this is roughly the code (it's really extensive, so the most important part is this):
#THIS IS THE BORDER AREA EXTRACTED FROM THE SHAPEFILE AND ALREADY CONVERTED TO PIXEL
mos_ext = mos_total[m0:(m1+1), int(minXY[0]):int(maxXY[0]+1)]
#I just dilate a little for better visualization
mask3 = cv2.getStructuringElement(cv2.MORPH_RECT, (6, 6))
mos_ext=cv2.dilate(mos_ext, mask3, iterations=1)
#This IS THE LANDSAT IMAGE
im_ls2=mask_bi3.copy()
# ADDING THE BORDERS OF THE AREA (WHAT IS SHOWN IN RED IN THE SECOND IMAGE)
coorde=np.where(mos_ext==255)
im_ls2[coorde[0], coorde[1], 0] = 255
im_ls2[coorde[0], coorde[1], 1] = 0
im_ls2[coorde[0], coorde[1], 2] = 0
fig3, axes3 = plt.subplots(1)
axes3.imshow(im_ls2, cmap='gray', interpolation='nearest')
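For reference, the usual fix for this kind of origin mismatch is to flip the rasterized overlay itself, after extraction and dilation but before its coordinates are painted into the Landsat array, rather than flipping the final plot or the raw Y-coordinate array. A minimal sketch, assuming mos_ext covers the same extent as im_ls2 (the flip direction may need inverting depending on how the pixel coordinates were computed):
import numpy as np

# flip the mask vertically within its own extent, so its rows run
# top-to-bottom like the raster's instead of bottom-to-top
mos_ext = np.flipud(mos_ext)

coorde = np.where(mos_ext == 255)
im_ls2[coorde[0], coorde[1], 0] = 255
im_ls2[coorde[0], coorde[1], 1] = 0
im_ls2[coorde[0], coorde[1], 2] = 0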
Related
I have zero experience in Python, but I need to use it as a data-processing step for my project. I'm working with drone thermal images of vineyards, and the objective is to separate the canopy pixels from the ground pixels based on elevation and temperature. Using a DEM (digital elevation model), a first distinction was made between the canopy and the ground. This results in a binary mask layer that can be put on top of the thermal image of the vineyard. That way, I now have a thermal image in Python in which most of the ground pixels are 0 and the canopy pixels have a value between 0 and 65535 representing the temperature. However, since the first distinction (using the DEM) is not precise enough, some ground pixels are also included in the canopy mask.
Now I want to make a second distinction using the temperature of the selected zones. I was able to make contours of all the canopy zones with OpenCV (so I have a complete list of all the contours representing the canopy zones, with some ground pixels included). I aim to make a histogram per contour zone displaying the density of each pixel value within that zone. Hopefully I can then delete the pixels that are too hot (i.e. ground pixels).
Does anyone know how to generate histograms for every (filled) contour of an image? The format now is a 6082x4922 ndarray with values between 0 and 65535 of datatype uint16. I use PyCharm as an IDE.
Thanks in advance!
Approach:
Iterate through each contour in the mask
Find locations of pixels present in the mask
Find their corresponding values in the given image
Compute and plot the histogram
Code:
# required imports
import cv2
import numpy as np
import matplotlib.pyplot as plt

# reading given image in grayscale
img = cv2.imread('apples.jpg', 0)
# reading the mask in grayscale
img_mask = cv2.imread('apples_mask.jpg', 0)
# binarizing the mask
mask = cv2.threshold(img_mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# find contours on the mask
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# the approach is encoded within this loop
for i, c in enumerate(contours):
    # create a blank image of the same shape
    black = np.full(shape=img.shape, fill_value=0, dtype=np.uint8)
    # draw a single contour, filled, as a mask
    single_object_mask = cv2.drawContours(black, [c], 0, 255, -1)
    # coordinates of the white pixels of the mask
    coords = np.where(single_object_mask == 255)
    # pixel intensities present within the image at these locations
    pixels = img[coords]
    # plot histogram
    plt.hist(pixels, 256, [0, 256])
    plt.savefig('apples_histogram_{}.jpg'.format(i))
    # to avoid plotting in the same plot
    plt.clf()
Result: (the following are the 3 histogram plots)
If you remove plt.clf(), all the histograms will be plotted on a single plot.
You can extend the same approach to your use case.
Original image:
I am attempting to separate red, green and blue components of an image and display the resulting images in separate subplots.
To do this, for each colour, I have created an array of zeros the same shape as the original image (using the function np.zeros), and copied one of the image colours across using slicing.
However, the output just appears to be a red square, so I don't think the code is working the way I would expect. Does anyone have any idea where I'm going wrong?
image = plt.imread('archway.jpg')
plt.imshow(image)
red_channel = image[:,:,0]
red_image = np.zeros(image.shape)
red_image[:,:,0] = red_channel
plt.imshow(red_image)
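For what it's worth, a likely cause (an assumption, since the question shows no error) is that np.zeros defaults to float64, and plt.imshow interprets float images as lying in [0, 1], so the 0-255 channel values saturate. A minimal sketch of one possible fix, keeping the image's original integer dtype:
import numpy as np
import matplotlib.pyplot as plt

image = plt.imread('archway.jpg')
# np.zeros defaults to float64; imshow clips float images to [0, 1],
# so keep the original uint8 dtype instead
red_image = np.zeros(image.shape, dtype=image.dtype)
red_image[:, :, 0] = image[:, :, 0]
plt.imshow(red_image)
plt.show()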
I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions and cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part; I want to fill it with white.
Original: Original image
Rotated: Rotated image
You can try interpolating the original image with the cropped one via Image.composite() to get rid of the black bars/borders.
from PIL import Image

img = Image.open(r"Image_Path")
orig_mode = img.mode          # remember the original mode to convert back later
img = img.convert("RGBA")
angle = 30
img = img.rotate(angle)
new_img = Image.new('RGBA', img.size, 'white')
Alpha_Image = Image.composite(img, new_img, img)
Alpha_Image = Alpha_Image.convert(orig_mode)
Alpha_Image.show()
The above code takes an image, converts it to mode RGBA (the alpha channel is required for this process), and then rotates it by 30 degrees. After that, it creates an empty Image object of mode RGBA with the same dimensions as the original, with each pixel having a default value of 255 in every channel (i.e. pure white for RGB, and full opacity for alpha). It then interpolates the original image with this empty one, using the transparency mask of the original image. This yields the desired image, where the black bars/edges are replaced by white. At the end, we convert the image back to its original mode.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me, seeing as with my tools I always get a light gray "border" around the rotated image that interferes with filling:

Add a border to the non-rotated image and use the fill color with that border. The bordering operation is lossless and filling will be exact (and easy).
Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
Calculate the size of the rotated border using trigonometry. The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse, starting with an exact border on the rotated image and calculating the border you need to add, adjusting the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels. So you use a 510-pixel rotated border, resulting in the need to add a 421.018-pixel non-rotated border. This is close enough to 421 pixels to be acceptable.
Remove the rotated border.

This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image. It has the drawback that you end up with a more massive rotation, with higher memory expenditure and computation time, especially if you use larger borders to increase precision. A rough sketch of the mechanics follows.
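A minimal sketch of the border-then-rotate steps, assuming Pillow's ImageOps.expand is available in the old version being used; the border sizes are the illustrative figures from the example above, not values computed for this image:
from PIL import Image, ImageOps

im = Image.open("mFul4.png")   # input file borrowed from the code below
pad = 421                      # border added before rotating (illustrative)
crop = 510                     # rotated border removed afterwards (illustrative)

padded = ImageOps.expand(im, border=pad, fill="white")   # lossless, exact fill
rotated = padded.rotate(105, expand=True)                # rotate the bordered image
w, h = rotated.size
result = rotated.crop((crop, crop, w - crop, h - crop))  # remove the rotated border
result.show()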
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). It works for me (you might need to tweak it a little):
im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65,200,210)
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp" )
And the output:
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()
I have code for rectangular cropping. Honestly, I'm a beginner to Python; I saw this code on a site.
I'm using the PIL library:
from PIL import Image
im = Image.open("lenna.png")
crop_rectangle = (50, 50, 200, 200)
cropped_im = im.crop(crop_rectangle)
cropped_im.show()
Please help me crop an elliptical or circular region from an image.
Thank you in advance.
Cropping an image to an elliptical or circular region will produce the same result as cropping to its bounding rectangle, if their extents are the same. I am assuming that you also want to mask the image as well as crop it?
To do this, create a blank mask PIL Image with the same extent as the original, and use PIL.ImageDraw.Draw to draw the region (a polygon, or an ellipse) onto it. The mask image should then have binary pixel values where "1" represents masked. Then simply set all values in the original image to a masked value (e.g. np.nan) where the mask pixel values equal 1 (e.g. original_image[mask == 1] = np.nan).
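A minimal sketch of this idea, using the filename and box from the question (note the inversion relative to the description above: here the ellipse interior is the part that is kept, so the pixels where the mask equals 0 are the ones masked out):
from PIL import Image, ImageDraw
import numpy as np

im = Image.open("lenna.png")
mask = Image.new("L", im.size, 0)                         # blank mask, same extent
ImageDraw.Draw(mask).ellipse((50, 50, 200, 200), fill=1)  # 1 = inside the ellipse

arr = np.array(im, dtype=float)
arr[np.array(mask) == 0] = np.nan    # mask everything outside the ellipse
ellipse_crop = arr[50:200, 50:200]   # then crop to the ellipse's bounding box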
Here is an example of the kinds of images I'll be dealing with:
(source: csverma at pages.cs.wisc.edu)
There is one bright spot on each ball. I want to locate the coordinates of the centre of the bright spot. How can I do this in Python or MATLAB? The problem I'm having right now is that more than one point on the spot has the same (or roughly the same) white colour, but what I need is to find the centre of this 'cluster' of white points.
Also, for the leftmost and rightmost images, how can I find the centre of the whole circular object?
You can simply threshold the image and find the average coordinates of what is remaining. This handles the case when there are multiple values that have the same intensity. When you threshold the image, there will obviously be more than one bright white pixel, so if you want to bring it all together, find the centroid or the average coordinates to determine the centre of all of these white bright pixels. There isn't a need to filter in this particular case. Here's something to go with in MATLAB.
I've read in that image directly, converted to grayscale and cleared off the white border that surrounds each of the images. Next, I split up the image into 5 chunks, threshold the image, find the average coordinates that remain and place a dot on where each centre would be:
im = imread('http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg');
im = rgb2gray(im);
im = imclearborder(im);

%// Split up images and place into individual cells
split_point = floor(size(im,2) / 5);
images = mat2cell(im, size(im,1), split_point*ones(5,1));

%// Show image to place dots
imshow(im);
hold on;

%// For each image...
for idx = 1 : 5
    %// Get image
    img = images{idx};

    %// Threshold
    thresh = img > 200;

    %// Find coordinates of thresholded image
    [y,x] = find(thresh);

    %// Find average
    xmean = mean(x);
    ymean = mean(y);

    %// Place dot at centre
    %// Make sure you offset by the right number of columns
    plot(xmean + (idx-1)*split_point, ymean, 'r.', 'MarkerSize', 18);
end
I get this:
If you want a Python solution, I recommend using scikit-image combined with numpy and matplotlib for plotting. Here's the above code transcribed in Python. Note that I saved the image referenced by the link manually on disk and named it balls.jpg:
import skimage.io
import skimage.segmentation
import numpy as np
import matplotlib.pyplot as plt

# Read in the image
# Note - intensities are floating point from [0,1]
im = skimage.io.imread('balls.jpg', True)

# Threshold the image first then clear the border
im_clear = skimage.segmentation.clear_border(im > (200.0/255.0))

# Determine where to split up the image
split_point = int(im.shape[1]/5)

# Show image in figure and hold to place dots in
plt.figure()
plt.imshow(np.dstack([im, im, im]))

# For each image...
for idx in range(5):
    # Extract sub image
    img = im_clear[:, idx*split_point:(idx+1)*split_point]

    # Find coordinates of thresholded image
    y, x = np.nonzero(img)

    # Find average
    xmean = x.mean()
    ymean = y.mean()

    # Plot on figure
    plt.plot(xmean + idx*split_point, ymean, 'r.', markersize=14)

# Show image and make sure axis is removed
plt.axis('off')
plt.show()
We get this figure:
Small sidenote
I could have totally skipped the above code and used regionprops (MATLAB link, scikit-image link). You could simply threshold the image, then apply regionprops to find the centroids of each cluster of white pixels, but I figured I'd show you a more manual way so you can appreciate the algorithm and understand it for yourself.
Hope this helps!
Use a 2D convolution and then find the point with the highest intensity. You can apply a convex non-linear function (such as exp) to the intensity values before applying the 2D convolution, to intensify the bright spots relative to the dimmer parts of the image. Something like conv2(exp(img), ker).
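A minimal sketch of that idea in Python (the kernel size and the use of scipy.signal.convolve2d are illustrative assumptions):
import numpy as np
from scipy.signal import convolve2d

# img is assumed to be a 2D grayscale array scaled to [0, 1]
ker = np.ones((15, 15)) / 15.0**2    # simple averaging kernel (illustrative)
response = convolve2d(np.exp(img), ker, mode='same')

# coordinates of the strongest response, i.e. the bright spot's centre
y, x = np.unravel_index(np.argmax(response), response.shape)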