I'm using a custom library to transform a 1D signal into a 2D representation. The output is displayed through plt.imshow(), called by a function inside the library. I have the result, but I don't want to save the picture locally. Is there a way to get, as a PIL image, what is being shown by plt.imshow()?
EDIT: The answer is yes. As @Davide_sd points out, ax.images[idx].get_array() can be used to retrieve the data.
You can use ax.images[idx].get_array() to retrieve the data, after which you can use it with PIL. Here, ax is the axes the image has been plotted on, and idx is the index of the image you are interested in: if you have plotted a single image, then idx=0.
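A minimal sketch of that round trip (assuming the library drew into the current figure, so plt.gca() returns the right axes; note this recovers the plotted data, not the colormapped rendering):

import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

ax = plt.gca()                       # the axes the library plotted into
data = ax.images[0].get_array()      # raw 2D array behind the imshow call

# get_array() typically returns floats (possibly a masked array), so
# rescale to 0-255 uint8 before handing it to PIL
data = np.asarray(data, dtype=float)
data = (255 * (data - data.min()) / (np.ptp(data) or 1)).astype(np.uint8)

pil_img = Image.fromarray(data)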
I'm trying to use Pillow (the PIL fork) to convert an image to greyscale and apply the pixel luminosity of the original image over it. I'm new to both Pillow and Python and am having difficulty with this. I tried using histogram data, the point function, etc., but most attempts simply produce the increased contrast I would get from calling ImageEnhance.Contrast(). Is there something simple that I'm missing, or would this also require something like NumPy?
Thank you for any information you can provide.
When plotting a single-channel image (i.e. a grayscale image) in Python, it is not displayed in grayscale.
Example: the expected output after converting a coloured image using COLOR_BGR2GRAY from OpenCV:
But the output obtained is:
Can anyone help me find out what the exact issue is?
Upon researching, I found that the issue is actually not with OpenCV but with the matplotlib package. When displaying an image, matplotlib applies a colormap, so it has to be explicitly set to gray using:
plt.imshow(image, cmap="gray")
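For context, a minimal end-to-end sketch (the filename is a placeholder):

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('input.jpg')                  # BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # single-channel result

# without cmap='gray', matplotlib applies its default colormap (viridis)
plt.imshow(gray, cmap='gray')
plt.show()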
I am trying to save a grayscale image using matplotlib's savefig(). I find that the PNG file saved by savefig() is a bit different from the image shown while the code runs: the displayed image contains more detail than the saved figure.
How can I save the plot so that all details are preserved in the output image?
My code is given below:
import cv2
import matplotlib.pyplot as plt

plt.figure(1)
img_DR = cv2.imread('image.tif', 0)    # load as grayscale
edges_DR = cv2.Canny(img_DR, 20, 40)   # Canny edge detection
plt.imshow(edges_DR, cmap='gray')
plt.savefig('DR.png')
plt.show()
The input file ('image.tif') can be found here.
The following is the output image generated while the code runs:
Below is the saved image:
Although the two images show the same picture, they are slightly different; a close look at the circular periphery of the two images reveals the difference.
Save the actual image to file, not the figure. The DPI of the figure and of the actual image created by your processing will differ. Since you're using OpenCV, use cv2.imwrite. In your case:
cv2.imwrite('DR.png', edges_DR)
Use the PNG format, as JPEG is lossy and trades quality for smaller file sizes. If accuracy is key here, use a lossless compression standard; PNG is one example.
If you are somehow opposed to using OpenCV, Matplotlib has an equivalent image writing method called imsave which has the same syntax as cv2.imwrite:
plt.imsave('DR.png', edges_DR, cmap='gray')
Note that I am forcing the colour map to grayscale for imsave, as it is not automatically inferred the way it is when OpenCV writes images to file.
Since you are using cv2 to load the image, why not also use it to save the result?
I think the command you are looking for is:
cv2.imwrite('gray.jpg', gray_image)
Using a DPI that matches the image size seems to make a difference.
The image is of size width=2240 and height=1488 (img_DR.shape). Using fig.get_size_inches() I see that the image size in inches is array([7.24, 5.34]). So an appropriate dpi is about 310 since 2240/7.24=309.4 and 1488/5.34=278.65.
Now I do plt.savefig('DR.png', dpi=310) and get:
One experiment would be to choose a high enough DPI, compute the figure's width and height in inches (for example width_inch = width_pixel / DPI), set the figure size using plt.figure(figsize=(width_inch, height_inch)), and see whether the quality of the displayed image itself increases or decreases; a sketch of this follows.
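A minimal sketch of that experiment, assuming the 2240x1488 image from above (the exact DPI value is a free choice):

import cv2
import matplotlib.pyplot as plt

img_DR = cv2.imread('image.tif', 0)
edges_DR = cv2.Canny(img_DR, 20, 40)

dpi = 310                                # roughly matches 2240 px / 7.24 in
height_px, width_px = edges_DR.shape
plt.figure(figsize=(width_px / dpi, height_px / dpi), dpi=dpi)

plt.imshow(edges_DR, cmap='gray')
plt.savefig('DR.png', dpi=dpi)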
Hope this helps.
I have CSV files that I need to feed to a deep-learning network. Currently my CSV files are of size 360*480, but the network requires inputs of size 224*224. I am using Python and Keras for the deep-learning part. So how can I resize the matrices?
I was thinking that since the aspect ratio is 3:4, I could resize them to 224:(224*4/3) = 224:299 and then crop the width of the matrix to 224. But I cannot find a suitable function to do that. Please suggest one.
I think you're looking for cv.resize() if you're using images.
If not, try numpy.ndarray.resize()
Image processing
If you want to do nontrivial alterations to the data as images (i.e. interpolating between pixel values, assuming that they represent photographs), then you might want to use proper image-processing libraries. You'd need to treat the files not as raw matrices (CSVs of numbers) but convert them to RGB images, do the transformations you desire, and convert them back to a numpy matrix.
OpenCV (https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html)
or Pillow (https://pillow.readthedocs.io/en/3.1.x/reference/Image.html) might be useful to do that.
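For instance, a minimal sketch with OpenCV (the CSV path is a placeholder), covering both a direct resize and the resize-then-crop idea from the question:

import cv2
import numpy as np

# load the CSV as a float32 matrix of shape (360, 480)
matrix = np.loadtxt('mat.csv', delimiter=',', dtype=np.float32)

# option 1: interpolate straight to the network input size;
# note that cv2.resize takes (width, height), not (rows, cols)
resized = cv2.resize(matrix, (224, 224), interpolation=cv2.INTER_AREA)

# option 2: preserve the aspect ratio (224x299), then center-crop the width
tall = cv2.resize(matrix, (299, 224), interpolation=cv2.INTER_AREA)
x0 = (tall.shape[1] - 224) // 2
cropped = tall[:, x0:x0 + 224]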
I found a short and simple way to solve this. It uses the Python Imaging Library/Pillow.
import csv
import numpy as np
from PIL import Image

# read the csv into a uint8 matrix
matrix = np.array(list(csv.reader(open('./path/mat.csv', "r"), delimiter=","))).astype("uint8")

imgObj = Image.fromarray(matrix)            # convert matrix to Image object
resized_imgObj = imgObj.resize((224, 224))  # resize Image object

imgObj.show()
resized_imgObj.show()

resized_matrix = np.asarray(resized_imgObj) # convert Image object back to matrix
The numpy module also has a resize function, but it is not as useful as the approach above.
When I tried it, the resized matrix had lost all the intricacies and aesthetic aspects of the original matrix. This is probably because numpy.ndarray.resize doesn't interpolate; missing entries are simply filled with zeros.
So, for this case, Image.resize() is more useful.
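A tiny demonstration of that zero-filling behaviour, on a toy array rather than the CSV data:

import numpy as np

a = np.array([[0, 1, 2],
              [3, 4, 5]])
a.resize((3, 3), refcheck=False)  # in-place; no interpolation happens
print(a)
# [[0 1 2]
#  [3 4 5]
#  [0 0 0]]  <- new entries are zero-filled, nothing is rescaled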
You could also convert the csv file to a list, truncate the list, and then convert the list to a numpy array and then use np.reshape.
I would like to make a Qt-based GUI program which overlays a heatmap on images from a 20 fps FHD video stream.
The target image looks like this:
(Additionally, a colorbar beside the overlayed image shall also be displayed.)
The size of the heatmap source for each image is 100x40, so interpolation to FHD (1920x1080) is needed for every frame.
(FYI, The min and max values of heatmap source are around 10 and 100000, respectively.)
First of all, I used the cv2.VideoCapture function in OpenCV to get images from the video. Then I googled some examples that combine an image and a heatmap using Matplotlib, such as:
https://github.com/durandtibo/heatmap
Heatmap on top of image
Overlay an image segmentation with numpy and matplotlib
The problem I faced is processing speed: I could not meet 20 fps at FHD resolution.
It seemed that OpenCV is more adequate than Matplotlib for real-time processing.
(I couldn't find a good way to show a heatmap and colorbar using PyQtGraph, even though it provides high speed.)
So I searched for another way using cv2.applyColorMap and cv2.resize.
It looks like cv2.applyColorMap doesn't automatically adjust the range of values the way the imshow function in Matplotlib does, so the colours of the resulting image are wrong.
Moreover, the OpenCV image needs to be adapted to a QtWidget using QtGui.QImage and QtGui.QPixmap, which introduces additional delay.
Finally, the overall processing time of the method I found cannot meet the requirement.
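For reference, here is a minimal sketch of that manual normalization step before cv2.applyColorMap (function and variable names are hypothetical, and the log scaling is only an assumption suited to the stated 10 to 100000 value range):

import cv2
import numpy as np

def overlay_heatmap(frame_bgr, heat, alpha=0.4):
    # compress the 10..100000 range with a log scale, then map to 0-255 uint8,
    # since applyColorMap does not rescale its input automatically
    h = np.log10(np.clip(heat, 1, None))
    h = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # interpolate the 100x40 source up to the frame resolution
    h = cv2.resize(h, (frame_bgr.shape[1], frame_bgr.shape[0]),
                   interpolation=cv2.INTER_LINEAR)
    colored = cv2.applyColorMap(h, cv2.COLORMAP_JET)

    # alpha-blend the heatmap over the video frame
    return cv2.addWeighted(colored, alpha, frame_bgr, 1 - alpha, 0)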
Please show me a way to solve this.
Thanks in advance.