I have pictures that I want to resize, as they are currently quite big. The pictures are going into Power BI, and Power BI has a maximum limit of around 32k characters for a base64 string. I created a function to resize the image, but the image becomes blurry and much less legible after resizing. The base64 string of one picture was around 150,000 characters long, which came down to around 7,000.
import io
import base64
from PIL import Image

# Converting into base64
outputBuffer = io.BytesIO()
img2.save(outputBuffer, format='JPEG')
bgBase64Data = outputBuffer.getvalue()
# Creating a new column for highlighted picture as base64
#image_base64_highlighted = base64.b64encode(bgBase64Data).decode() ## http://stackoverflow.com/q/16748083/2603230
#print(img2)
resize_factor = 30000/len(base64.b64encode(bgBase64Data))
im = Image.open(io.BytesIO(bgBase64Data))
out = im.resize([int(resize_factor * s) for s in im.size])
output_byte_io = io.BytesIO()
out.save(output_byte_io, 'JPEG')
final = output_byte_io.getvalue()
image_base64_highlighted = base64.b64encode(final).decode()
I think it is shrinking the image too much. Is there any way I can improve the visibility of the image? I want to be able to at least read the text in the image. I cannot post the images due to PII. Any ideas?
Base64 encodes every 3 bytes as 4 characters, so it adds around 33% to your image size. You should therefore aim for a JPEG of about 24kB to ensure it stays under 32kB once encoded.
You can reduce an image to a target size of 24kB using my answer here.
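That answer isn't reproduced here, but the basic idea is to keep lowering the JPEG quality until the encoded file fits. A minimal sketch of that idea with Pillow (the quality range, step size, and file names below are my own placeholders, not taken from the linked answer):
import io
import base64
from PIL import Image

def jpeg_under_limit(img, max_bytes=24000):
    # Re-encode at decreasing quality until the JPEG fits in max_bytes
    buf = io.BytesIO()
    for quality in range(95, 10, -5):
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=quality)
        if buf.tell() <= max_bytes:
            break
    return buf.getvalue()  # best effort at the lowest quality tried

jpeg_bytes = jpeg_under_limit(Image.open('picture.jpg'))
b64 = base64.b64encode(jpeg_bytes).decode()
print(len(b64))  # should now stay under the ~32k Power BI limit
This keeps the pixel dimensions intact, so text stays far more legible than when you shrink the image geometrically.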
You can also use wand to reduce the quality of a JPEG until it reaches a certain size:
from wand.image import Image

# Create a canvas with noise to make it incompressible
with Image(width=640, height=480, pseudo='xc:') as canvas:
    canvas.noise('random')
    # This is the critical line that enforces a max size for your JPEG
    canvas.options['jpeg:extent'] = '24kb'
    jpeg = canvas.make_blob('jpeg')

print(f'JPEG size: {len(jpeg)}')
You can do the same thing in the command-line by shelling out to ImageMagick:
magick INPUT.JPG -define jpeg:extent=24kb OUTPUT.JPG
I think you can do that with pygame itself, but it's recommended to try OpenCV (opencv-python) for this. You should use cv2.resize(). Its parameters are:
source: input image array (single-channel, 8-bit, or floating-point)
dsize: size of the output array
dest: output array (similar in dimensions and type to the input image array)
fx: scale factor along the horizontal axis
fy: scale factor along the vertical axis
interpolation: one of OpenCV's interpolation methods; a sketch follows this list
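For instance, a small sketch (the file names and target width are placeholders of mine) that downsizes with cv2.INTER_AREA, which generally keeps text more legible than other filters when shrinking:
import cv2

img = cv2.imread('input.jpg')   # placeholder path
target_width = 640              # pick a width that keeps the text readable
scale = target_width / img.shape[1]
resized = cv2.resize(img, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_AREA)  # INTER_AREA suits downscaling
cv2.imwrite('resized.jpg', resized)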
Hi guys, please help me, I can't solve this. I have this image,
and when I try to convert it to .ico using
from PIL import Image, ImageTk
png = Image.open("converter.png")
png.save("icon.ico", format='ICO', quality=100)
and use the icon for a shortcut on the desktop, the result is this:
It's stretched at the top and the bottom. How do I solve this and keep the same aspect ratio of the image? Thanks.
It doesn't work even with:
from PIL import Image, ImageTk
png = Image.open("converter.png")
png = png.resize((64, 64), resample=0)
png.save("icon.ico", format='ICO', quality=100)
You need to resize the image to a square, but you don't want to stretch or compress the image.
You can create a new square image and place your original image in the center.
from PIL import Image

png = Image.open("converter.png")
size = png.size
side = max(size)  # the longer edge becomes the square's side
# Transparent square canvas, with the original pasted in the centre
ico = Image.new(mode="RGBA", size=(side, side), color=(0, 0, 0, 0))
ico.paste(png, ((side - size[0]) // 2, (side - size[1]) // 2))
ico.save("icon.ico", format='ICO')
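As a side note, Pillow's ICO writer also accepts a sizes argument if you want to control which icon resolutions end up in the file; something like this (the size list here is just an example) should work:
ico.save("icon.ico", format='ICO', sizes=[(16, 16), (32, 32), (64, 64)])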
You need to use the Image.resize(size, resample=0) function to make the .ico file square (equal width and height). Note that resize returns a new image rather than modifying the original in place.
In your case it would look something like this:
png = png.resize((64, 64), resample=0)
You can fiddle with the resample argument to make it look better.
resample – An optional resampling filter. This can be one of PIL.Image.NEAREST (use nearest neighbour), PIL.Image.BILINEAR (linear interpolation), PIL.Image.BICUBIC (cubic spline interpolation), or PIL.Image.LANCZOS (a high-quality downsampling filter). If omitted, or if the image has mode “1” or “P”, it is set to PIL.Image.NEAREST.
I am trying to create an image made up of coloured squares. I only need each square to be one pixel large, as each is just a single block of colour. However, when I use this code, the image generated is extremely blurry. Is there any way to make the borders sharp?
from PIL import Image, ImageColor

def fancycolnw2(seq, m):
    data = numbwall(seq, m)  # numbwall is defined elsewhere in my code
    #print(data)
    for i in range(len(data)):
        for j in range(len(data[i])):
            if data[i][j] == ' ':
                data[i][j] = -1
    im = Image.new('RGBA', (len(data[0]), len(data)))  # one pixel per square
    #print(data)
    for i in range(len(data)-1):
        for j in range(len(data[i])-1):
            #print(i,j)
            if data[i][j] == -1:
                im.putpixel((j, i), ImageColor.getcolor('black', 'RGBA'))
            if data[i][j] == 0:
                #print('howdy')
                im.putpixel((j, i), ImageColor.getcolor('red', 'RGBA'))
            if data[i][j] == 1:
                im.putpixel((j, i), ImageColor.getcolor('blue', 'RGBA'))
            if data[i][j] == 2:
                im.putpixel((j, i), ImageColor.getcolor('grey', 'RGBA'))
    im.show()
    im.save('simplePixel.png')  # or any image format
The result I get looks like this:
[image]
It is the correct image, I just wish the boundaries between pixels were sharp. Any help would be greatly appreciated!
The image is perfectly sharp, but rather small. I suspect that you are "zooming in" to view it clearer, and that whatever program you are zooming with is filtering the image, because with most images this looks better. You need to find a viewing program that uses "nearest neighbour" resampling when zooming in, or generate a larger image to start with, for example by setting a 4-by-4 pixel block rather than individual pixels.
(Also, the code says "# or any image format". Don’t use JPEG for this, as the lossy compression will likely wreck your image.)
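If generating the larger image up front is easier, here is a small sketch of that second option (the 4x factor is arbitrary): upscale with nearest-neighbour resampling so every logical pixel becomes a sharp 4-by-4 block:
from PIL import Image

im = Image.open('simplePixel.png')
scale = 4  # each original pixel becomes a 4x4 block
big = im.resize((im.width * scale, im.height * scale), resample=Image.NEAREST)
big.save('simplePixelBig.png')  # keep PNG: lossless, so the borders stay sharp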
I am trying to create a pipeline in which I first render an image using the Blender Python API (I am using Blender 2.90) and then perform some image processing in Python. I want to fetch the image directly from Blender without first writing the rendered image to disk and then loading it again. I ran the following code within the Blender GUI to do so:
import bpy
import numpy as np
import PIL.Image as Image
from skimage.util import img_as_ubyte
resolution_x = 512
resolution_y = 512
# render settings
scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "path/to/good_image.png"
# create Viewer Layer in Compositor
scene.use_nodes = True
tree = scene.node_tree
nodes = tree.nodes
links = tree.links
for node in nodes:
    nodes.remove(node)
render_layer_node = nodes.new('CompositorNodeRLayers')
viewer_node = nodes.new('CompositorNodeViewer')
links.new(viewer_node.inputs[0], render_layer_node.outputs[0])
# render scene and get pixels from Viewer Node
bpy.ops.render.render(write_still=True)
pixels = bpy.data.images['Viewer Node'].pixels
# do some processing and save
img = np.flip(img_as_ubyte(np.array(pixels[:]).reshape((resolution_y, resolution_x, 4))), axis=0)
Image.fromarray(img).save("path/to/bad_image.png")
Problem: The image I get from the Viewer Node is much darker (bad image) than the image saved in the conventional way (good image). Does anyone have an idea why this happens and how to fix it? Does Blender maybe treat pixel values differently than I expect?
Some additional information:
Before conversion to uint8, the values of the alpha channel within the dark image are 1.0 (as they actually should be). Background values in the dark image are not 0.0 or negative (as one might guess from appearance), but 0.05...
What I tried:
I thought that the pixels might be scaled within the range -1 to 1, so I rescaled them to the range 0 to 1 before converting to uint8... That did not lead to the correct image either :(
It's because the image that you get from the Viewer Node is the one "straight from compositing", before colour management takes place. You can have a look at the documentation here: this image is still in linear space.
Your good_image.png, on the other hand, is obtained after transformation into the "Display Space" (see the diagram in the doc). Hence it was transformed into a log space, maybe gamma-corrected, etc.
Finally, you can get an image that is close to (though slightly different from) the good image by calling bpy.data.images['Viewer Node'].save_render(filepath) instead, but there is no built-in way to directly extract the colour-managed version without rendering to a file first. You can probably do it yourself by adding PyOpenColorIO to your script and applying the colour management from that module.
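If you want to avoid the file round-trip entirely, one rough option is to apply the sRGB transfer function to the linear pixels yourself. This is only an approximation (it assumes the default Standard/sRGB view transform, not Filmic), and it reuses the pixels, resolution_x/resolution_y, and img_as_ubyte names from the question's script:
import numpy as np

def linear_to_srgb(x):
    # Piecewise sRGB OETF: maps linear values to display-referred values
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(np.clip(x, 0.0, None), 1.0 / 2.4) - 0.055)

rgba = np.array(pixels[:]).reshape((resolution_y, resolution_x, 4))
rgba[..., :3] = linear_to_srgb(rgba[..., :3])
img = np.flip(img_as_ubyte(np.clip(rgba, 0.0, 1.0)), axis=0)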
After searching for a few hours, I ended up at this link. A little background information follows.
I'm capturing live frames of a running embedded device via a hardware debugger. The captured frames are stored as raw binary files, without headers or format information. After looking at the above link and understanding, albeit perfunctorily, NumPy and Matplotlib, I was able to convert the raw binary data to an image successfully. I mention this because I'm not sure whether a link to the raw binary file would help anyone.
I use this code:
import matplotlib.pyplot as plt
import numpy as np

iFile = "FramebufferL0_0.bin"  # Layer-A
shape = (430, 430)             # height and width of the image
dtype = np.dtype('<u2')        # unsigned 16-bit, little-endian
oFile = "FramebufferL0_0.png"

with open(iFile, 'rb') as fid:
    data = np.fromfile(fid, dtype)
image = data.reshape(shape)

plt.imshow(image, cmap="gray")
plt.savefig(oFile)
plt.show()
Now, the image I'm showing is black and white because the color map is grayscale (right?). The actual captured frame is NOT black and white; the image I see on my embedded device is colorful.
My question is: how can I calculate the actual color of each pixel from the raw binary file? Is there a way I can get the actual color map of the image from the raw binary? I looked into this example, and I'm sure that if I can calculate the R, G, and B channels (and alpha too), I'll be able to recreate the exact image. Example code would be of much help.
An RGBA image has 4 channels, one for each color and one for the alpha value. The binary file seems to have a single channel, as you don't report an error when performing the data.reshape(shape) operation (the shape for the corresponding RGBA image would be (430, 430, 4)).
I see two potential reasons:
The image actually does have colour information, but when you are grabbing the data you are only grabbing one of the four channels.
The image is actually a gray-scale image, but the embedded device shows a pseudocolour image, creating the illusion of colour information. Without knowing which colourmap is being used, it is hard to help you, other than to point you towards matplotlib.pyplot.colormaps(), which lists all the colour maps already available in matplotlib.
Could you
a) explain the exact source / type of imaging modality, and
b) show a photo of the output of the embedded device?
PS: Also, at least in my hands, the pasted binary file seems to have a size of 122629 bytes, which is incongruent with an image shape of (430, 430): 430 × 430 × 2 bytes would be 369,800.
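One more thought: many embedded framebuffers store 16-bit colour as RGB565, which would at least match your '<u2' dtype. That is only a guess about your device, but if it holds, the channels can be unpacked like this (keeping the (430, 430) shape from the question):
import numpy as np
from PIL import Image

data = np.fromfile("FramebufferL0_0.bin", dtype='<u2').reshape((430, 430))

# RGB565: 5 bits red, 6 bits green, 5 bits blue, packed high to low
r = (((data >> 11) & 0x1F) << 3).astype(np.uint8)
g = (((data >> 5) & 0x3F) << 2).astype(np.uint8)
b = ((data & 0x1F) << 3).astype(np.uint8)

Image.fromarray(np.dstack([r, g, b]), 'RGB').save("frame_rgb.png")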
I am trying to get the Python 2.7 PIL library to work with JPEG images that are only available as a stream coming from an HDD image and are not complete.
I have set the option:
ImageFile.LOAD_TRUNCATED_IMAGES = True
and I load the stream as far as it is available (or better said: as far as I am 100% sure that the data still belongs to the image, not to some other file type). I have tested different things, and as far as I can tell (for JPEGs), PIL only accepts it as a valid JPEG image if it finds the 0xFFDA (Start of Scan) marker. This is a short example of how I load the data:
from PIL import Image, ImageFile
from StringIO import StringIO

ImageFile.LOAD_TRUNCATED_IMAGES = True

with open("/path/to/image.raw", 'rb') as fp:
    fp.seek("""jump to position in image where JPEG starts""")
    data = fp.read("""number of bytes I know that those belong to that jpeg""")

img = Image.open(StringIO(data))  # This would throw an exception if the data
                                  # does not contain the 0xFFDA marker
pixel = img.load()  # Would throw an exception if LOAD_TRUNCATED_IMAGES = False

width, height = img.size  # note: img.size is (width, height)
for i in range(width):
    for j in range(height):
        print pixel[i, j]
On the very last line I expected (or at least hoped) to see the pixel data that was read. But it returns (0, 0, 0) for every pixel.
The Question: Is what I am trying here not possible with PIL?
Some weeks ago I tried the same with an image file I truncated myself, simply by cutting data from it with an editor. It worked for the pixel data that was available. As soon as it reached a pixel that I had cut off, the program threw an exception (I will try this again later today to make sure that I am not remembering it wrong).
If somebody is wondering why I am doing this: I need to make sure that the image/picture inside that HDD image is stored in consecutive blocks/clusters and is not fragmented. To verify this I wanted to use pixel matching.
EDIT:
I have tried it again and this is what I have seen.
I opened a truncated image in GIMP and it showed me a few pixel lines in the upper part, but PIL was not able to give me even the RGB values of those pixels. It always returned (0, 0, 0).
I made the truncated file slightly bigger, such that only the lower 4/5 of the image was missing, and that was enough for PIL to show me the RGB values that were available. Everything else was (0, 0, 0).
I am still not 100% sure whether PIL can show me the RGB values when only a few lines of pixel data are available.
I would try it with an uncompressed format like TGA. Since JPEG is a compressed format, it may not make sense to extract pixels from an incomplete image: JPEG stores the parameters for equations that describe the image, not pixel values. When you query a JPEG for a pixel value, it evaluates those equations at that point and returns the result.
I had the same problem with Pillow==9.2.0.
Downgrading to Pillow==8.3.2 made it work.
I don't really know about streaming, but I think you simply cannot access the RGB values the way you do.
Try:
rgb_im = img.convert('RGB')
r, g, b = rgb_im.getpixel((i, j))