Hi, please help me, I can't solve this. I have this image:
When I try to convert it to .ico using
from PIL import Image
png = Image.open("converter.png")
png.save("icon.ico", format='ICO', quality=100)
and then use the icon for a desktop shortcut, the result is this:
It's stretched at the top and bottom. How can I solve this and keep the image's aspect ratio? Thanks.
It doesn't work even with
from PIL import Image
png = Image.open("converter.png")
png.resize((64, 64), resample=0)  # note: resize() returns a new image, so this line has no effect on png
png.save("icon.ico", format='ICO', quality=100)
You need a square image, but you don't want to stretch or compress the original. You can create a new square image and place the original in the center.
from PIL import Image

png = Image.open("converter.png")
size = png.size
side = max(size)

# Create a square, transparent canvas and paste the original image centred on it
ico = Image.new(mode="RGBA", size=(side, side), color=(0, 0, 0, 0))
ico.paste(png, ((side - size[0]) // 2, (side - size[1]) // 2))
ico.save("icon.ico", format='ICO')
You need to use the Image.resize(size, resample=0) function to make the .ico file square (equal width and height).
In your case it would look something like this:
png = png.resize((64, 64), resample=0)
Note that resize() returns a new image rather than modifying png in place. You can fiddle with the resample argument to make it look better.
resample – An optional resampling filter. This can be one of PIL.Image.NEAREST (use nearest neighbour), PIL.Image.BILINEAR (linear interpolation), PIL.Image.BICUBIC (cubic spline interpolation), or PIL.Image.LANCZOS (a high-quality downsampling filter). If omitted, or if the image has mode “1” or “P”, it is set to PIL.Image.NEAREST.
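As an aside (my addition, not from the original answers): if you want to shrink the image while keeping its aspect ratio, Image.thumbnail resizes in place and never distorts:

from PIL import Image

png = Image.open("converter.png")
# thumbnail() shrinks in place, preserving aspect ratio within the given bounding box
png.thumbnail((64, 64), Image.LANCZOS)
png.save("icon.ico", format='ICO')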
Related
I have pictures that I want to resize, as they are currently quite big. The pictures are going to Power BI, and Power BI has a maximum limitation of around a 32k base64 string. I created a function to resize the image, but the image has become blurry and less visible after resizing. The length of the base64 image of one picture was around 150,000, which came down to around 7,000.
import base64
import io
from PIL import Image

# ... img2 is the PIL image produced earlier ...

# Converting into base64
outputBuffer = io.BytesIO()
img2.save(outputBuffer, format='JPEG')
bgBase64Data = outputBuffer.getvalue()

# Creating a new column for highlighted picture as base64
#image_base64_highlighted = base64.b64encode(bgBase64Data).decode() ## http://stackoverflow.com/q/16748083/2603230
#print(img2)

resize_factor = 30000 / len(base64.b64encode(bgBase64Data))
im = Image.open(io.BytesIO(bgBase64Data))
out = im.resize([int(resize_factor * s) for s in im.size])

output_byte_io = io.BytesIO()
out.save(output_byte_io, 'JPEG')
final = output_byte_io.getvalue()
image_base64_highlighted = base64.b64encode(final).decode()
I think it is shrinking the image too much. Is there any way I can improve the visibility of the image? I want to be able to at least see the text in the image. I cannot post the images due to PII. Any ideas?
Encoding with base64 adds around 33% to your image size, so you should aim for a JPEG size of 24kB to ensure it stays under 32kB once encoded.
You can reduce an image to a target size of 24kB using my answer here.
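For instance, here is a minimal sketch with Pillow (my addition, not the linked answer; the filename and the 24kB budget are assumptions) that walks the JPEG quality down until the encoded bytes fit:

import io
from PIL import Image

def compress_to_size(img, max_bytes=24_000):
    # Lower the JPEG quality step by step until the encoded image fits
    buf = io.BytesIO()
    for quality in range(95, 9, -5):
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=quality)
        if buf.tell() <= max_bytes:
            break
    return buf.getvalue()  # best effort at the lowest quality tried

jpeg_bytes = compress_to_size(Image.open('input.png').convert('RGB'))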
You can also use wand to reduce the quality of a JPEG till it reaches a certain size:
from wand.image import Image

# Create a canvas filled with random noise so it is hard to compress
with Image(width=640, height=480, pseudo='xc:') as canvas:
    canvas.noise('random')
    # This is the critical line that enforces a max size for your JPEG
    canvas.options['jpeg:extent'] = '72kb'
    jpeg = canvas.make_blob('jpeg')

print(f'JPEG size: {len(jpeg)}')
You can do the same thing in the command-line by shelling out to ImageMagick:
magick INPUT.JPG -define jpeg:extent=24kb OUTPUT.JPG
I think you can do that with pygame itself, but I'd recommend trying opencv-python for this. You should use cv2.resize() (see the sketch after this list). Its parameters are:
src : input image array (single-channel, 8-bit, or floating-point)
dsize : size of the output image, as (width, height)
dst : output array (similar in dimensions and type to the input image)
fx : scale factor along the horizontal axis
fy : scale factor along the vertical axis
interpolation : one of OpenCV's interpolation flags, e.g. cv2.INTER_LINEAR or cv2.INTER_AREA
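A minimal sketch (my example; the filenames are assumptions):

import cv2

img = cv2.imread('input.png')
# dsize is (width, height); INTER_AREA is usually the best choice when shrinking
resized = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)
cv2.imwrite('output.png', resized)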
I am trying to visualize a TIFF image in an IPython notebook using the following code:

import os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

orig_img_path = os.path.join("path/to/tiff/0.tiff")
img_orig = Image.open(orig_img_path, "r")
plt.imshow(img_orig)
plt.show()
The above snippet just shows me the following black image -
I know that the image pixel values are non-zero since I can open the original images on my MacBook and they look like the following -
I also double-checked that the pixel values are non-zero by casting the PIL image to an np array with the following code and printing the array out -
img_arr = np.array(img_orig)
print(img_arr)
which gives me the following output -
I think I know what the issue is: matplotlib's pyplot treats this like a PNG image, and as we can see, the pixel values are too small to be perceptible to the human eye. I could simply multiply the image by 255, which gives the following image as expected, although scaling by 255 is not entirely right, since the pixel values should be normalized so that the minimum corresponds to 0 and the maximum to 255 (assuming matplotlib thinks it is showing a PNG image) -
I would like to know if there is a better way of doing this than fixing it manually every time: is there any native way of displaying a TIFF in a Jupyter/IPython notebook that preserves all the good properties of TIFF images, such as floating-point values?
Thanks and please let me know if anything is unclear.
Edit 1: Link to the original file is here - https://drive.google.com/file/d/1O1-QM6aeU5-QZhT36vOMjDND2vkZNgqB/view?usp=sharing
-- Megh
If you share your original image in .tiff format, the solution might be more precise.
You are reading the image pixels as floats in the range 0-1, and then parsing them as uint8 in the range 0-255, which turns almost all pixel values into 0 or 1: black or almost black.
You can try the following approach to read your image (presumably black and white) and display it:
import cv2

# IMREAD_UNCHANGED keeps the file's original bit depth and channel layout
gray = cv2.imread("path/to/tiff/0.tiff", cv2.IMREAD_UNCHANGED)
cv2.namedWindow("MyImage", cv2.WINDOW_NORMAL)
cv2.imshow("MyImage", gray)
cv2.waitKey(0)
What is the range of values that you expect in that image? Did you do any preprocessing on it?
The image you are trying to display in matplotlib contains negative values. Visualization works well for any kind of uint8 data (grayscale and RGB) and for float data in the range 0-1. Your display issue can be addressed by subtracting the minimum value of the image and then dividing by the maximum (effectively normalizing your data to the range 0-1).
[...]
img = np.array(img_orig, dtype=float)
img -= np.min(img)  # shift so the minimum becomes 0
img /= np.max(img)  # scale so the maximum becomes 1
plt.imshow(img)
plt.show()
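As a side note (my addition): for single-channel data, matplotlib will normalize float values itself when you give it a colormap, so the explicit rescale can be skipped:

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img_arr = np.array(Image.open("path/to/tiff/0.tiff"))
# For a 2D float array, imshow maps [min, max] onto the colormap automatically
plt.imshow(img_arr, cmap='gray')
plt.colorbar()
plt.show()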
I wish to expand an image so I can write something in the black expanded space under the original image, but it doesn't work.
I can't expand a black space and add it to the image, nor can I write at a specific place.
I'm new to the Pillow library; can anyone help?
You could do something like this:
read the image
create a new image (black by default) with the desired size
get the data of the input image and put it onto the new one
from PIL import Image

HEIGHT_OF_THE_BLACK_AREA = 100

with Image.open('image.jpg') as im:
    # Same width, taller by the black area; the extra rows stay black by default
    new_im = Image.new(im.mode, size=(im.size[0], im.size[1] + HEIGHT_OF_THE_BLACK_AREA))
    new_im.putdata(im.getdata())
    new_im.save('out.jpg')
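To then write into the black strip (my addition, since the question also asks about text; the position and caption are hypothetical), Pillow's ImageDraw can draw at any coordinates:

from PIL import Image, ImageDraw

with Image.open('out.jpg') as im:
    draw = ImageDraw.Draw(im)
    # Draw white text a little way into the expanded black strip (hypothetical position)
    draw.text((10, im.size[1] - 90), "your caption here", fill="white")
    im.save('out_with_text.jpg')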
I am new to Python and not really sure how to attack this problem.
What I am trying to do is take a black-and-white image and change the value of the edge (x pixels thick) from 255 to some other greyscale value.
I need to do this for a set of PNG images inside a folder. All images will be geometric (mostly combinations of straight lines), no crazy curves or patterns. Using Python 3.
Please check the images.
A typical file will look like this:
https://drive.google.com/open?id=13ls1pikNsO7ZbsHatC6cOr4O6Fj0MPOZ
I think this is what you want. The comments should explain pretty well what is going on:
#!/usr/bin/env python3
import numpy as np
from PIL import Image, ImageFilter
from skimage.morphology import dilation, square
# Open input image and ensure it is greyscale
image = Image.open('XYbase.png').convert('L')
# Find the edges
edges = image.filter(ImageFilter.FIND_EDGES)
# Convert edges to Numpy array and dilate (fatten) with our square structuring element
selem = square(6)
fatedges = dilation(np.array(edges), selem)
# Make Numpy version of our original image and set all fatedges to brightness 128
imnp = np.array(image)
imnp[np.nonzero(fatedges)] = 128
# Convert Numpy image back to PIL image and save
Image.fromarray(imnp).save('result.png')
So, if I start with this image:
The (intermediate) edges look like this:
And I get this as the result:
If you want the outlines fatter/thinner, increase/decrease the 6 in:
selem = square(6)
If you want the outlines lighter/darker, increase/decrease the 128 in:
imnp[np.nonzero(fatedges)] = 128
I can interpret your question in a much simpler way, so I thought I'd answer that simpler question too. Maybe you already have a grey-ish edge around your shapes (like the Google drive files you shared) and just want to change all pixels that are neither black nor white into a different colour - and the fact that they are edges is irrelevant. That is much easier:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open input image and ensure it is greyscale
image = Image.open('XYBase.png').convert('L')
# Make Numpy version
imnp = np.array(image)
# Set all pixels that are neither black nor white to 220
imnp[(imnp>0) & (imnp<255)] = 220
# Convert Numpy image back to PIL image and save
Image.fromarray(imnp).save('result.png')
Any ideas how to use Python with the PIL module to "shrink select all"? I know this can be achieved with GIMP. I'm trying to package my app as small as possible; a GIMP install is not an option for the end user.
Say you have 2 images: one is 400x500, the other 200x100. Both are white with a 100x100 text block somewhere within each image's boundaries. What I'm trying to do is automatically strip the whitespace around that text and load that 100x100 text-block image into a variable for further text extraction.
It's obviously not this simple, so just running the text extraction on the whole image won't work! I just wanted to ask about the basic process; there is not much available on Google about this topic. If solved, perhaps it could help someone else as well...
Thanks for reading!
If you put the image into a NumPy array, it's simple to find the edges, which you can then use PIL to crop. Here I'm assuming that the whitespace is the colour (255, 255, 255); you can adjust to your needs:
from PIL import Image
import numpy as np

im = Image.open("test.png")
pix = np.asarray(im)

pix = pix[:, :, 0:3]            # Drop the alpha channel
idx = np.where(pix - 255)[0:2]  # Coordinates of non-white pixels (ignore the colour axis)

# Bounding box as (left, upper, right, lower); list() is needed on Python 3
box = list(map(min, idx))[::-1] + list(map(max, idx))[::-1]

region = im.crop(box)
region_pix = np.asarray(region)
To show what the results look like, I've left the axis labels on so you can see the size of the box region:
from pylab import *
subplot(121)
imshow(pix)
subplot(122)
imshow(region_pix)
show()
The general algorithm would be to find the colour of the top-left pixel, and then do a spiral scan inwards until you find a pixel not of that colour. That will define one edge of your bounding box. Keep scanning until you have found all four edges.
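A minimal sketch in the same spirit (my addition; it scans straight in from each side rather than spiralling, and assumes the background is the top-left pixel's colour and that the image contains some content):

from PIL import Image

im = Image.open("test.png").convert("RGB")
bg = im.getpixel((0, 0))  # assumed background colour
w, h = im.size

def first_content_row(rows):
    # First row, in the given scan order, containing a non-background pixel
    for y in rows:
        if any(im.getpixel((x, y)) != bg for x in range(w)):
            return y

def first_content_col(cols):
    for x in cols:
        if any(im.getpixel((x, y)) != bg for y in range(h)):
            return x

top = first_content_row(range(h))
bottom = first_content_row(reversed(range(h)))
left = first_content_col(range(w))
right = first_content_col(reversed(range(w)))

region = im.crop((left, top, right + 1, bottom + 1))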
http://blog.damiles.com/2008/11/basic-ocr-in-opencv/
might be of some help. You can use the simple bounding-box method described in that tutorial, or @Tyler Eaves' spiral suggestion, which works equally well.