Viewing one layer of a pyramid image in Python

I'm trying to view the second-largest layer from a pyramid image that has already been created. I am finding lots of tutorials on how to create pyramid images, but not much on how to view individual layers of a pre-made one. I was previously trying to use cv2 and matplotlib to view my image, but I was getting some distortion near one of the image edges. The distortion looks almost like white noise on a TV screen.
This is the code I was using to try and view the image:
import cv2
import matplotlib.pyplot as plt

# Note: cv2.imread reads only a single page of a multi-page TIFF, and returns BGR channel order
img = cv2.imread('test.tif')
plt.imshow(img)
plt.show()
The largest image layer is 5367 (width) by 3339 (height) pixels. Any suggestions on how to go about viewing these images would be greatly appreciated!
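In case it helps, here is a minimal sketch of reading one level with Pillow instead of cv2.imread. It assumes the pyramid levels are stored as separate pages/IFDs inside the TIFF, which is a common layout for pyramidal TIFFs; seek(1) then selects the second-largest level.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Assumption: each pyramid level is stored as its own page in the TIFF
img = Image.open('test.tif')
print(getattr(img, 'n_frames', 1))   # number of pages/levels in the file
img.seek(1)                          # 0 = largest level, 1 = second largest
layer = np.array(img)

plt.imshow(layer, cmap='gray' if layer.ndim == 2 else None)
plt.show()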

Related

signal.correlate2d / signal.convolve2d return unexpected and/or wrong results

So I'm trying to use correlation and convolution to find a sub-image (a "template") inside a larger image. In this case, I am trying to find this image:
Inside this image:
As you can see, the template is actually just the first band/line in the second image. However, I've run into an issue that I don't understand. The first problem was that both correlate2d and convolve2d returned what looked like noise. After looking through some SO posts, I found this post and this post: I was not normalizing the image and template, which is why I got noise back. After normalizing I was able to get an output that looked better:
The expected result would be a strong response (a lot of green) between 0 and 100 and little to no green along the rest of the axis. As you can tell from the image above, that is not what I got, and I am confused as to why. Convolution is no better:
I can't figure out what I'm doing wrong here. I tried correlate2d with the images from the second SO post linked above, and for those it works properly:
Why does it work for the image/template given in the second SO post and not for my image/template? Is it due to the quality of the images? Here is my code:
import cv2
from scipy import signal
from matplotlib import pyplot as plt
# Load images. The flag "0" means that the images should be loaded in grayscale.
image = cv2.imread(r"/home/user/Data/image.png", 0)
template = cv2.imread(r"/home/user/Data/template.png", 0)
# Normalize: subtract the mean so the DC offset does not dominate the result
image = image - image.mean()
template = template - template.mean()
# Apply correlation and convolution to images
correlation = signal.correlate2d(image, template)
convolution = signal.convolve2d(image, template)
# Show each result in its own figure (a second imshow on the same axes would replace the first)
plt.figure()
plt.imshow(correlation)
plt.figure()
plt.imshow(convolution)
plt.show()
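As an aside (an alternative approach, not a fix for correlate2d itself): OpenCV's matchTemplate performs normalized cross-correlation and handles the scaling internally, which usually makes the match peak much easier to see. A sketch using the same file paths as above:
import cv2
from matplotlib import pyplot as plt

image = cv2.imread(r"/home/user/Data/image.png", 0)
template = cv2.imread(r"/home/user/Data/template.png", 0)

# TM_CCOEFF_NORMED subtracts the means and normalizes, so no manual preprocessing is needed
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match score:", max_val, "at top-left corner:", max_loc)

plt.imshow(result)
plt.show()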

Python Image Scaling

I'm trying to scale a screenshot using this code:
from PIL import Image

im = Image.open(img_path)
# Image.ANTIALIAS is an alias of Image.LANCZOS (the old name was removed in Pillow 10)
im = im.resize((newWidth, newHeight), Image.ANTIALIAS)
but this results in a very low-quality image; the text in particular becomes impossible to read.
Original: (click for original image)
Scaled: (click for scaled image)
I have tried other algorithms in PIL but none of them gives the result I wanted.
I actually tried resizing my images inside Office PowerPoint and the text stays clear and readable.
PowerPoint scaled: (click for Office-scaled image)
Are there any other ways I can scale the images?
This worked for me:
import imutils

# imutils.resize expects an OpenCV/NumPy array and a target width/height in pixels,
# so convert a PIL image with np.array() first and pass the pixel width
im = imutils.resize(im, width=newWidth)
For more details, see https://www.programcreek.com/python/example/93640/imutils.resize
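Another option worth trying (an alternative filter, not what PowerPoint does internally): when shrinking screenshots containing text, OpenCV's area-averaging interpolation often keeps the text more legible than other resampling methods. A sketch using the variables from the question, writing to a hypothetical scaled.png:
import cv2

# img_path, newWidth and newHeight are the values from the question
img = cv2.imread(img_path)
scaled = cv2.resize(img, (newWidth, newHeight), interpolation=cv2.INTER_AREA)
cv2.imwrite('scaled.png', scaled)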

How to use scikit-learn reconstruct_from_patches_2d

I'm working on an imaging project that needs to read images, split them into overlapping patches, run some operation on the patches, and then recombine them into a single image. For this task, I decided to use the scikit-learn functions extract_patches_2d and reconstruct_from_patches_2d.
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.reconstruct_from_patches_2d.html
import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction
img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)
grid_size = 500
images = extraction.extract_patches_2d(img, (grid_size, grid_size), max_patches=100)
image = extraction.reconstruct_from_patches_2d(images, img.shape)
cv2.imwrite("stack_overflow_test.jpg", image)
I can tell the extraction works correctly, since each of the patches can be saved as an individual image. The reconstruction does not work.
The image:
becomes:
The result looks entirely black when viewed on a white background, but it does have some white pixels toward the top left (visible when the image is opened in a separate tab). The same problem happens in grayscale.
I have tried adding astype(np.uint8), as explained in "How to convert array to image colour channel in python?", to no avail. How is this method used properly?
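For what it's worth, two things seem to matter here: max_patches=100 makes extract_patches_2d return a random sample of patches, while reconstruct_from_patches_2d expects every overlapping patch in scan order; and the reconstruction averages overlaps and returns floats, so the result should be cast back to uint8 before saving. A minimal sketch on a small synthetic array, assuming all patches are kept:
import numpy as np
from sklearn.feature_extraction import image as extraction

# Small synthetic RGB "image" so the full set of overlapping patches stays manageable
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)

# Extract ALL overlapping patches (no max_patches), in scan order
patches = extraction.extract_patches_2d(img, (8, 8))

# ... run the per-patch operation here ...

# Reconstruct; overlapping regions are averaged, so the output is float
reconstructed = extraction.reconstruct_from_patches_2d(patches, img.shape)
reconstructed = np.clip(reconstructed, 0, 255).astype(np.uint8)
assert reconstructed.shape == img.shape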

How to save grayscale image in Python?

I am trying to save a grayscale image using matplotlib savefig(). I find that the PNG file saved by savefig() is a bit different from the figure shown while the code runs: the displayed figure contains more detail than the saved one.
How can I save the output plot in such a manner that all details are stored in the output image?
My code is given below:
import cv2
import matplotlib.pyplot as plt
plt.figure(1)
img_DR = cv2.imread('image.tif', 0)
edges_DR = cv2.Canny(img_DR,20,40)
plt.imshow(edges_DR,cmap = 'gray')
plt.savefig('DR.png')
plt.show()
The input file ('image.tif') can be found here.
Following is the output image which is generated when the code is running:
Below is the saved image:
Although the two images show the same picture, a close look at the circular periphery reveals that they are slightly different.
Save the actual image to file, not the figure. The figure's DPI will generally not match the pixel dimensions of the image your processing produced. Since you're using OpenCV, use cv2.imwrite. In your case:
cv2.imwrite('DR.png', edges_DR)
Use the PNG format as JPEG is lossy and would thus give you a reduction in quality to promote small file sizes. If accuracy is the key here, use a lossless compression standard and PNG is one example.
If you are somehow opposed to using OpenCV, Matplotlib has an equivalent image writing method called imsave which has the same syntax as cv2.imwrite:
plt.imsave('DR.png', edges_DR, cmap='gray')
Note that I am enforcing the colour map to be grayscale for imsave as it is not automatically inferred like how OpenCV writes images to file.
Since you are using cv2 to load the image, why not use it to save it as well?
I think the command you are looking for is:
cv2.imwrite('gray.jpg', gray_image)
Using a DPI that matches the image size seems to make a difference.
The image is of size width=2240 and height=1488 (img_DR.shape). Using fig.get_size_inches() I see that the image size in inches is array([7.24, 5.34]). So an appropriate dpi is about 310 since 2240/7.24=309.4 and 1488/5.34=278.65.
Now I do plt.savefig('DR.png', dpi=310) and get:
One experiment would be to choose a high enough DPI, compute the figure's width and height in inches (for example width_inch = width_pixel / DPI), set the figure size with plt.figure(figsize=(width_inch, height_inch)), and see whether the displayed image itself increases or decreases in quality. A sketch of this is shown below.
Hope this helps.
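A sketch of that experiment, sizing the figure so its pixel dimensions match the image (edges_DR is the Canny output from the question's code):
import matplotlib.pyplot as plt

dpi = 310
height_px, width_px = edges_DR.shape              # 1488 x 2240 for this image
fig = plt.figure(figsize=(width_px / dpi, height_px / dpi), dpi=dpi)
plt.imshow(edges_DR, cmap='gray')
plt.axis('off')
plt.savefig('DR.png', dpi=dpi, bbox_inches='tight', pad_inches=0)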

Overlay an image and show the lighter pixel at each pixel location

I have two black-and-white images that I would like to merge, with the final image showing the lighter/white pixel at each pixel location across both images. I tried the following code but it did not work.
import numpy as np
import PIL
from PIL import Image
import matplotlib.pyplot as plt

background=Image.open('ABC.jpg').convert("RGBA")
overlay=Image.open('DEF.jpg').convert("RGBA")
background_width=1936
background_height=1863
background_width,background_height = background.size
overlay_resize= overlay.resize((background_width,background_height),Image.ANTIALIAS)
background.paste(overlay_resize, None, overlay_resize)
overlay=background.save("overlay.jpg")
fn=np.maximum(background,overlay)
fn1=PIL.Image.fromarray(fn)
plt.imshow(fn1)
plt.show()
The error message I get is "cannot handle this data type". Any help or advice anyone could give would be great.
I think you are over-complicating things. You just need to read in both images and make them greyscale numpy arrays, then choose the lighter of the two pixels at each location.
So starting with these two images:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open two input images and convert to greyscale numpy arrays
bg=np.array(Image.open('a.png').convert('L'))
fg=np.array(Image.open('b.png').convert('L'))
# Choose lighter pixel at each location
result=np.maximum(bg,fg)
# Save
Image.fromarray(result).save('result.png')
You will get this:
Keywords: numpy, Python, image, image processing, compose, blend, blend mode, lighten, lighter, Photoshop, equivalent, darken, overlay.
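If the two inputs are different sizes, as in the question, resize one to match the other before taking the maximum; a sketch using the question's file names:
import numpy as np
from PIL import Image

bg = Image.open('ABC.jpg').convert('L')
fg = Image.open('DEF.jpg').convert('L').resize(bg.size, Image.LANCZOS)

# Lighter pixel wins at every location
result = np.maximum(np.array(bg), np.array(fg))
Image.fromarray(result).save('overlay.png')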
