How to deal with different image sizes - Python

I am working on an image to find the outer body points, but when I save the images they have different sizes, which is creating a problem.
My original image is of a person. (1.8 MB)
I create a mask of the person to detect the outer body parts from the original image and save it. (400 KB)
From the mask, I obtain the outer body points and plot them on the original image, but they are not aligned because of the difference in size between the original and the mask image.
To save images without axes and at full size so that they match the original image, I save them with the following method. After saving they look exactly the same, but due to the difference in size the points are not aligned.
fig = plt.imshow(mask)  # assumed context: fig is the AxesImage returned by imshow
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
plt.savefig('kmask.jpg', bbox_inches='tight', pad_inches=0, dpi=1500)
Result when I plot the points on the original image: (screenshot omitted)
How do I deal with such problems?

From what I can tell, you are saving the mask at a different size than the original image.
One way to solve this is to first figure out the resolution of the original image. If you don't know it, you can always check:
img = plt.imread('body_image.jpg')
print(img.shape)
# The first two numbers correspond to the height and width of the image in pixels
The problem is that matplotlib doesn't deal with image resolution the same way. Instead, it requires the figure size (in inches) and the DPI (how many pixels per inch). One way is to calculate the values you need and save the image accordingly.
image_height_in_pixels = height_in_inches * dpi
Then use these two numbers to save the mask.
f = plt.figure(figsize=(width_in_inches, height_in_inches))  # note: figsize is (width, height)
plt.axis('off')
plt.savefig('kmask.jpg', bbox_inches='tight', pad_inches=0, dpi=dpi)
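Putting that together, a minimal sketch that derives the figure size from the original image's pixel dimensions (mask_img is a hypothetical name for your mask array):
import matplotlib.pyplot as plt

img = plt.imread('body_image.jpg')  # the original image
height_px, width_px = img.shape[:2]  # pixel dimensions

dpi = 100  # arbitrary; figsize compensates
fig = plt.figure(figsize=(width_px/dpi, height_px/dpi), frameon=False)
ax = plt.Axes(fig, [0., 0., 1., 1.])  # axes that fill the whole figure
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(mask_img)  # mask_img: your mask array (hypothetical)
fig.savefig('kmask.jpg', dpi=dpi)  # saved at exactly width_px x height_px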
If this doesn't work, try saving the original image with matplotlib too. This will ensure that both of them have the same dimensions.

Related

How do I produce images of closed loops with user-specified image dimensions? [duplicate]

Say I have an image of size 3841 x 7195 pixels. I would like to save the contents of the figure to disk, resulting in an image of the exact size I specify in pixels.
No axes, no titles. Just the image. I don't personally care about DPI, as I only want to specify the size the image takes up on disk, in pixels.
I have read other threads, and they all seem to do conversions to inches and then specify the dimensions of the figure in inches and adjust dpi's in some way. I would like to avoid dealing with the potential loss of accuracy that could result from pixel-to-inches conversions.
I have tried with:
w = 7195
h = 3841
fig = plt.figure(frameon=False)
fig.set_size_inches(w,h)
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(im_np, aspect='normal')
fig.savefig(some_path, dpi=1)
with no luck (Python complains that width and height must each be below 32768).
From everything I have seen, matplotlib requires the figure size to be specified in inches and DPI, but I am only interested in the pixels the figure takes up on disk. How can I do this?
To clarify: I am looking for a way to do this with matplotlib, and not with other image-saving libraries.
Matplotlib doesn't work with pixels directly, but rather with physical sizes and DPI. If you want to display a figure with a certain pixel size, you need to know the DPI of your monitor. For example, this link will detect that for you.
If you have an image of 3841x7195 pixels, it is unlikely that your monitor will be that large, so you won't be able to show a figure of that size (matplotlib requires the figure to fit on the screen; if you ask for a size too large it will shrink to the screen size). Let's imagine you want an 800x800 pixel image just as an example. Here's how to show an 800x800 pixel image on my monitor (my_dpi=96):
plt.figure(figsize=(800/my_dpi, 800/my_dpi), dpi=my_dpi)
So you basically just divide the dimensions in pixels by your DPI to get the size in inches.
If you want to save a figure of a specific size, then it is a different matter. Screen DPIs are not so important anymore (unless you ask for a figure that won't fit on the screen). Using the same example of the 800x800 pixel figure, we can save it in different resolutions using the dpi keyword of savefig. To save it at the same resolution as the screen, just use the same dpi:
plt.savefig('my_fig.png', dpi=my_dpi)
To save it as an 8000x8000 pixel image, use a dpi 10 times larger:
plt.savefig('my_fig.png', dpi=my_dpi * 10)
Note that setting the DPI is not supported by all backends. Here, the PNG backend is used, but the pdf and ps backends will implement the size differently. Changing the DPI and sizes will also affect things like font size: a larger DPI will keep the same relative sizes of fonts and elements, but if you want smaller fonts for a larger figure you need to increase the physical size instead of the DPI.
Getting back to your example, if you want to save an image with 3841 x 7195 pixels, you could do the following:
plt.figure(figsize=(3.841, 7.195), dpi=100)
# ... your code ...
plt.savefig('myfig.png', dpi=1000)
Note that I used a figure dpi of 100 to fit on most screens, but saved with dpi=1000 to achieve the required resolution. On my system this produces a png with 3840x7190 pixels -- it seems that the saved DPI is always 0.02 pixels/inch smaller than the selected value, which has a (small) effect on large image sizes. Some more discussion of this here.
This worked for me, based on your code, generating a 93 MB png image with color noise and the desired dimensions:
import matplotlib.pyplot as plt
import numpy

w = 7195
h = 3841
im_np = numpy.random.rand(h, w)

fig = plt.figure(frameon=False)
fig.set_size_inches(w, h)
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
ax.imshow(im_np, aspect='auto')  # aspect='normal' was removed in matplotlib 2.0; 'auto' is the equivalent
fig.savefig('figure.png', dpi=1)
I am using the latest pip versions of the Python 2.7 libraries on Linux Mint 13.
Hope that helps!
The OP wants to preserve 1:1 pixel data. As an astronomer working with science images I cannot allow any interpolation of image data as it would introduce unknown and unpredictable noise or errors. For example, here is a snippet from a 480x480 image saved via pyplot.savefig():
(Image: detail of pixels which matplotlib resampled to be roughly 2x2; note the column of 1x2 pixels.)
You can see that most pixels were simply doubled (so a 1x1 pixel becomes 2x2), but some columns and rows became 1x2 or 2x1 per pixel, which means the original science data has been altered.
As hinted at by Alka, plt.imsave() will achieve what the OP is asking for. Say you have image data stored in the array im; then one can do something like
plt.imsave(fname='my_image.png', arr=im, cmap='gray_r', format='png')
where the filename has the "png" extension in this example (but you must still specify the format with format='png' anyway, as far as I can tell), the image array is arr, and we chose the inverted grayscale "gray_r" as the colormap. I usually add vmin and vmax to specify the dynamic range, but these are optional.
The end result is a png file of exactly the same pixel dimensions as the im array.
Note: the OP specified no axes, etc., which is exactly what this solution does. If one wants to add axes, ticks, etc., my preferred approach is to do that on a separate plot, saved with transparent=True (PNG or PDF), then overlay the latter on the image. This guarantees you have kept the original pixels intact.
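A minimal sketch of that overlay workflow (im stands in for the real science array; compositing the two PNGs is done afterwards, e.g. with PIL or an image editor):
import numpy as np
import matplotlib.pyplot as plt

im = np.random.rand(480, 480)  # stand-in for the real science data

# 1) save the pixel data untouched, 1:1
plt.imsave(fname='data.png', arr=im, cmap='gray_r', format='png')

# 2) draw the axes, ticks and labels on a separate, transparent figure
fig, ax = plt.subplots()
ax.set_xlim(0, im.shape[1])
ax.set_ylim(0, im.shape[0])
ax.set_xlabel('x [pixel]')
ax.set_ylabel('y [pixel]')
fig.savefig('axes_overlay.png', transparent=True)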
Based on the accepted response by tiago, here is a small generic function that exports a numpy array to an image with the same resolution as the array:
import matplotlib.pyplot as plt
import numpy as np
def export_figure_matplotlib(arr, f_name, dpi=200, resize_fact=1, plt_show=False):
    """
    Export array as figure in original resolution
    :param arr: array of image to save in original resolution
    :param f_name: name of file where to save figure
    :param dpi: dpi of your screen
    :param resize_fact: resize factor wrt shape of arr, in (0, np.infty)
    :param plt_show: show plot or not
    """
    fig = plt.figure(frameon=False)
    fig.set_size_inches(arr.shape[1]/dpi, arr.shape[0]/dpi)
    ax = plt.Axes(fig, [0., 0., 1., 1.])
    ax.set_axis_off()
    fig.add_axes(ax)
    ax.imshow(arr)
    plt.savefig(f_name, dpi=dpi * resize_fact)
    if plt_show:
        plt.show()
    else:
        plt.close()
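For example (a hypothetical random test array; the output file name is arbitrary):
arr = np.random.rand(600, 800, 3)  # 600x800 RGB test array
export_figure_matplotlib(arr, 'test_out.png', dpi=96, resize_fact=1)
# -> test_out.png is 800x600 pixels (scaled by resize_fact)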
As said in the previous reply by tiago, the screen DPI needs to be found first, which can be done here for instance: http://dpi.lv
I've added an additional argument, resize_fact, with which you can export the image at, for instance, 50% (0.5) of the original resolution.
This solution works for matplotlib versions 3.0.1, 3.0.3 and 3.2.1.
def save_inp_as_output(_img, c_name, dpi=100):
    h, w, _ = _img.shape
    # note: figsize is (width, height) in inches
    fig, axes = plt.subplots(figsize=(w/dpi, h/dpi))
    fig.subplots_adjust(top=1.0, bottom=0, right=1.0, left=0, hspace=0, wspace=0)
    axes.imshow(_img)
    axes.axis('off')
    plt.savefig(c_name, dpi=dpi, format='jpeg')
Because the subplots_adjust setting makes the axes fill the figure, you don't want to specify bbox_inches='tight', as it actually creates whitespace padding in this case. This solution also works when you have more than one subplot.
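A usage sketch with a random stand-in image:
import numpy as np

img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
save_inp_as_output(img, 'noise.jpg')  # writes a 640x480 jpeg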
I had the same issue. I used PIL's Image to load the images and converted them to a numpy array, then patched a rectangle using matplotlib. It was a jpg image, so there was no way for me to get the dpi from PIL's img.info['dpi'], so the accepted solution did not work for me. But after some tinkering I figured out a way to save the figure with the same size as the original.
I am adding the following solution here, thinking that it will help somebody with the same issue as mine.
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np

img = Image.open('my_image.jpg')  # loading the image
image = np.array(img)  # converting it to an ndarray
dpi = plt.rcParams['figure.dpi']  # get the default dpi value
fig_size = (img.size[0]/dpi, img.size[1]/dpi)  # the figure size in inches
fig, ax = plt.subplots(1, figsize=fig_size)  # applying the figure size
# do whatever you want to do with the figure, e.g. ax.imshow(image)
fig.tight_layout()  # just to be sure
fig.savefig('my_updated_image.jpg')  # saving the image
This saved the image with the same resolution as the original image.
If you are not working in a Jupyter notebook, you can get the dpi in the following manner:
figure = plt.figure()
dpi = figure.dpi
The matplotlib reference has examples of how to set the figure size in different units. For pixels:
px = 1/plt.rcParams['figure.dpi']  # pixel in inches
text_kwargs = dict(ha='center', va='center', fontsize=28)  # defined earlier in the linked example
plt.subplots(figsize=(600*px, 200*px))
plt.text(0.5, 0.5, '600px x 200px', **text_kwargs)
plt.show()
https://matplotlib.org/stable/gallery/subplots_axes_and_figures/figure_size_units.html#
plt.imsave worked for me.
You can find the documentation here: https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.imsave.html
# file_path: directory where the image will be stored, plus the file name and extension
# array: variable holding the image; I think for the original post this variable is im_np
plt.imsave(file_path, array)
Why does everyone keep using matplotlib?
If your image is a numpy array with shape (3841, 7195, 3), its data type is numpy.uint8, and RGB values range from 0 to 255, you can simply save the array as an image without using matplotlib:
from PIL import Image

im = Image.fromarray(A)  # A: your (h, w, 3) uint8 array
im.save("your_file.jpeg")
I found this code in another post.

Detect surfaces from a binary numpy array (image)

Assume that I have a binary numpy array (0 or 1 / True or False) that comes from a .jpg image (a 2D array, from a grayscale image). I just did some processing to get the edges of the image, based on color change.
Now, from every surface/body in this array I need to get its center.
Here is the original image:
Here is the processed one:
Now I need to get the centers of each surface generated by these lines (i.e. indexes that more or less point to the center of each surface generated).
In case you are interested, you can find the file (.npy) here:
https://gofile.io/d/K8U3ZK
Thanks a lot!
Found a solution that works. scipy.ndimage.label assigns a unique integer to each label/area; to validate the results I simply plot the output array:
import matplotlib.pyplot as plt
from scipy.ndimage import label

labeled_array, no_feats = label(my_binary_flower)
plt.imshow(labeled_array)
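To get the centers the question actually asks for, scipy.ndimage.center_of_mass can then be applied per label (a sketch reusing labeled_array and no_feats from above):
from scipy.ndimage import center_of_mass

# one (row, col) center per labelled surface, for labels 1..no_feats
centers = center_of_mass(my_binary_flower, labeled_array, index=range(1, no_feats + 1))
print(centers)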

Adding colour overlay to greyscale image

I want to add pre-generated heatmaps over photographs. The colours in the images aren't important, and to make the heatmap colours stand out I'm making the images greyscale. I've done this using
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
However, the greyscale image now has one fewer dimension than the heatmap (which is BGR). How can I overlay the heatmap on top of the grey image?
With the two images in place, identical in size and mode, execute the following code.
from PIL import Image

im_1 = Image.open("/constr/pics1/100_canary.png")  # mode is RGBA
im_2 = Image.open("/constr/pics1/100_cockcrow.png")
# Check mode, size and format first for compatibility; make both modes the same.
im_4 = Image.blend(im_1, im_2, 0.5)
im_4.show()
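Since the question itself uses OpenCV, here is a minimal cv2 sketch of the same idea (assuming img and heatmap are BGR arrays of the same size): convert the greyscale result back to three channels before blending.
import cv2

grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # (h, w), single channel
grey_3ch = cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR)  # back to (h, w, 3)
overlay = cv2.addWeighted(grey_3ch, 0.5, heatmap, 0.5, 0)  # 50/50 blend
cv2.imwrite('overlay.png', overlay)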

How to get border pixels of an image in python?

I have an image; using steganography I want to save data in the border pixels only.
In other words, I want to save data only in the least significant bits (LSB) of the border pixels of an image.
Is there any way to get the border pixels to store data (max 15 characters of text)?
Please help me out.
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20  # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (a.shape[1]-bw, a.shape[0]-bw), 0, -1)
output = cv2.bitwise_and(a, a, mask=mask)
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same dimensions as the input image, I use the cv2.rectangle function to draw a rectangle of zeros. The first argument is the image you want to draw on, the second argument is the start (x, y) point, and the third argument is the end (x, y) point. The fourth argument is the color, and the fifth is the thickness of the rectangle drawn ('-1' fills the rectangle). You can find the documentation for the function here.
Now that we have our mask, we can use the cv2.bitwise_and (documentation) function to perform an AND operation on the pixels. Basically, pixels that are ANDed with '1' pixels in the mask retain their pixel values, while pixels ANDed with '0' pixels in the mask are set to 0. This way you will have the output as follows:
(output image omitted)
The input image was: (image omitted)
You have the border pixels now!
Using LSB planes to store your info is not a good idea. It makes sense when you think about it: a simple lossy compression would affect most of your hidden data, and saving your image as JPEG would result in lost or severely affected info. If you still want to try LSB, look into bit-plane slicing. Through bit-plane slicing, you obtain the bit planes (from MSB to LSB) of the image. (image from researchgate.net)
I have done it in Matlab but am not quite sure about doing it in Python. In Matlab, the function bitget(image, 1) returns the LSB of the image. I found a question on bit-plane slicing using Python here. Though unanswered, you might want to look into the posted code.
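In Python, the numpy equivalent of Matlab's bitget is plain bitwise masking; a small sketch, assuming img is a uint8 array:
import numpy as np

lsb = img & 1  # LSB plane, equivalent of bitget(image, 1) in Matlab

def bit_plane(img, n):
    """Return bit plane n of a uint8 image, n = 0 (LSB) .. 7 (MSB)."""
    return (img >> n) & 1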
To access the border pixels and write data into them:
The shape of an image is accessed with t = img.shape; it returns a tuple of (rows, columns, channels). component selects the color channel (indices 0, 1, 2, which are B, G, R in OpenCV). int(r[0]) is the value being stored; r is assumed to be a list of values defined elsewhere.
import cv2

img = cv2.imread('xyz.png')
t = img.shape
print(t)
component = 2  # channel index (0, 1, 2 = B, G, R in OpenCV)
# r is assumed to be a list of integer values to store, defined elsewhere
img.itemset((0, 0, component), int(r[0]))
img.itemset((0, t[1]-1, component), int(r[1]))
img.itemset((t[0]-1, 0, component), int(r[2]))
img.itemset((t[0]-1, t[1]-1, component), int(r[3]))
print(img.item(0, 0, component))
print(img.item(0, t[1]-1, component))
print(img.item(t[0]-1, 0, component))
print(img.item(t[0]-1, t[1]-1, component))
cv2.imwrite('output.png', img)
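Tying the two answers together, here is a hedged sketch of embedding a short message into the LSBs of the border pixels (the file names, one-pixel border width, and channel choice are all assumptions):
import cv2
import numpy as np

img = cv2.imread('xyz.png')
h, w = img.shape[:2]
bw = 1  # border width in pixels (assumption)

# boolean mask selecting only the border pixels
border = np.ones((h, w), dtype=bool)
border[bw:h-bw, bw:w-bw] = False

# message -> bits (max 15 chars = 120 bits, which easily fits in the border)
msg = "hello"
msg_bits = np.unpackbits(np.frombuffer(msg.encode(), dtype=np.uint8))

channel = img[..., 0]  # embed in the blue channel's LSBs
flat = channel[border]  # 1-D copy of the border pixels
flat[:len(msg_bits)] = (flat[:len(msg_bits)] & 0xFE) | msg_bits
channel[border] = flat  # write back through the view into img
cv2.imwrite('output.png', img)  # PNG is lossless, so the LSBs survive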

Invert Y-axis on image on python before plot

Hello, and thank you for your help. My problem is this:
I am working in Python with satellite images and a shapefile. Both were originally in a geographical projection and were converted to pixels, so when I plot the Landsat image first it is OK (I don't mind the position of the axes here), but when I add the shapefile to the 2D array, the areas (in this case) are upside down. I know why: the origin of an image is in a different location than in a normal plot.
I tried applying origin='lower', but that affects the final result, after both image and shapefile have been added, and that is not what I want. I also tried .reverse() and [::-1] on the Y-axis array before adding it to the image, with no good result. I hope you can help me.
This is the plot of areas in the jungle (of Peru, if you are wondering); the blue box is the area I want to show on the Landsat (satellite) image:
(image: http://imgur.com/YlM8zhX)
This is the area converted to pixels and added to the satellite image; notice how it is inverted, due to the location of the origin, compared to the first image:
(image: http://imgur.com/VJN7QYW)
Thanks for your help.
EDIT: this is roughly the code (it is really extensive, so this is the most important part):
# THIS IS THE BORDER AREA EXTRACTED FROM THE SHAPEFILE, ALREADY CONVERTED TO PIXELS
mos_ext = mos_total[(m0):(m1+1), int(minXY[0]):int(maxXY[0]+1)]
# I just dilate a little for better visualization
mask3 = cv2.getStructuringElement(cv2.MORPH_RECT, (6, 6))
mos_ext = cv2.dilate(mos_ext, mask3, iterations=1)
# THIS IS THE LANDSAT IMAGE
im_ls2 = mask_bi3.copy()
# ADDING THE BORDERS OF THE AREA (WHAT IS SHOWN IN RED IN THE SECOND IMAGE)
coorde = np.where(mos_ext == 255)
im_ls2[coorde[0], coorde[1], 0] = 255
im_ls2[coorde[0], coorde[1], 1] = 0
im_ls2[coorde[0], coorde[1], 2] = 0

fig3, axes3 = plt.subplots(1)
axes3.imshow(im_ls2, cmap='gray', interpolation='nearest')
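One note on the attempts described above: the reversal has to be applied to the rows of the 2-D mask itself, not to a 1-D axis array. A sketch of that idea (an untested assumption, not a verified fix):
import numpy as np

# flip the extracted border mask vertically before locating its pixels
mos_ext_flipped = np.flipud(mos_ext)  # same as mos_ext[::-1, :]
coorde = np.where(mos_ext_flipped == 255)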
