I am viewing images with Streamlit without converting to RGB; below is the line of code I am using:
st.image("image.jpg")
The problem is that I need to read some numbers inside the image, but the image is blurred, so I cannot make out the numbers.
Increasing the size of the image does not help.
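One thing worth trying before reaching for Streamlit options (a sketch, not a guaranteed fix): upscale the image with a high-quality resampling filter in Pillow and hand the enlarged copy to st.image. The synthetic image below stands in for image.jpg.

```python
from PIL import Image

# Stand-in for image.jpg from the question.
img = Image.new('RGB', (200, 100), 'white')

# Enlarge 3x with LANCZOS resampling, which keeps edges much crisper
# than plain nearest-neighbour enlargement.
big = img.resize((img.width * 3, img.height * 3), Image.LANCZOS)
big.save('image_big.jpg')
# st.image('image_big.jpg') would then display the upscaled copy.
```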
I am reading in a list of PNG images from a directory and I want to maintain the image sizes.
However, when I go to save them, each image is automatically resized to fit the entire page and rotated to landscape rather than kept in portrait.
Example of what I am doing below:
from PIL import Image
import glob

x = []
# gather the image files in my directory (pattern is illustrative)
for i in glob.glob('*.png'):
    im = Image.open(i)
    im = im.convert('RGB')  # PDF will not save as RGBA
    x.append(im)

# save the first image and append the rest as extra pages
x[0].save('my_doc.pdf', save_all=True, append_images=x[1:])
Yet the images, which are all the same dimensions and sized to fit neatly onto a single page in portrait orientation, are being rotated to landscape and stretched to the limits of the page, making them blurry as a result.
I want one image per page, using the native dimensions of the images, and in portrait.
Is there a way to prevent PIL from doing this?
Thanks in advance!
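For reference, the multi-page save pattern can be sketched self-contained (synthetic images stand in for the directory listing): the first image's save call carries the rest via append_images, and by default each page takes that image's own pixel dimensions.

```python
import os
from PIL import Image

# Synthetic stand-ins for the PNGs read from the directory.
pages = [Image.new('RGB', (300, 400), c) for c in ('white', 'red', 'blue')]

# The first image becomes page one; the rest are appended in order,
# one image per page, each at its own dimensions.
pages[0].save('my_doc.pdf', save_all=True, append_images=pages[1:])
```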
I have never worked with TIF images before. I have a specific problem where I need to work on a satellite raster in .TIF format, which I then need to upload to Google Earth Engine.
But when I convert the image into a NumPy array to apply some image-processing techniques, it loses its georeferencing. Is there any way of preserving that in the output image?
import numpy
import pylab
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow very large rasters
ImagenTotal = numpy.asarray(Image.open('Well/newnew3.tif'))
pylab.imshow(ImagenTotal[10:-10, 10:-10, :])
pylab.savefig('foo.tif')
I use cv2.imread to read a PNG file in Python. When I then use cv2.imwrite to save the image immediately, I find that the colours have changed slightly. I am trying to perform character recognition on this image, and the OCR performs far worse on the saved image than on the original.
The first image is the original, and the second is the saved one with OpenCV.
We can see that the green has changed slightly, and while this does not seem important, it affects the OCR; I therefore imagine other changes are happening to the PNG as well. Does anyone know why this might be and how I can resolve it?
The code is as follows:
img = cv2.imread('file.png')
cv2.imwrite('out.png', img)
When I run file.png through Tesseract for character recognition I get great results, but when I run out.png far fewer words are recognised correctly.
When you have a .png image file, you ought to read it as a .png file.
I downloaded your image and did some analysis myself.
First, I read the image as you did:
img = cv2.imread('file.png')
img.shape returns (446, 864, 3), i.e. an image with 3 channels.
Next I read the same image using cv2.IMREAD_UNCHANGED:
img = cv2.imread('file.png', cv2.IMREAD_UNCHANGED)
img.shape returns (446, 864, 4), i.e. an image with 4 channels.
PNG files have an additional transparency (alpha) channel. So the next time you come across a .png file, read it using the cv2.IMREAD_UNCHANGED flag.
UPDATE:
Listing the various flags for reading an image:
for var in dir(cv2):
    if var.startswith('IMREAD'):
        print(var)
returns:
IMREAD_ANYCOLOR
IMREAD_ANYDEPTH
IMREAD_COLOR
IMREAD_GRAYSCALE
IMREAD_LOAD_GDAL
IMREAD_UNCHANGED
The current goal of what I am writing is to download an image from S3, open it with Pillow, and show it. I am able to download the image fine; it can be viewed properly from my photos. However, when I open it with PIL, all of the pixel values are black. It does, however, retain the shape of the image, which tells me the image is at least being read. The code is shown below:
s3.Bucket(bucket).download_file(key, key) # downloaded perfectly fine
img = Image.open(key)
img.show()  # shows all black, but with the image's dimensions
I know I can read from bytes, but that gives me a 1-D array of bytes instead of the dimensions needed for an image.
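For what it's worth, reading from bytes does not have to produce a flat 1-D array: Image.open accepts any file-like object, so raw bytes (e.g. from an S3 body read) can be wrapped in io.BytesIO and decoded with full dimensions. A minimal round-trip sketch, using an in-memory image as a stand-in for the download:

```python
import io
from PIL import Image

# Encode an image to raw PNG bytes (stand-in for the S3 download).
src = Image.new('RGB', (4, 3), (255, 0, 0))
buf = io.BytesIO()
src.save(buf, format='PNG')
raw_bytes = buf.getvalue()

# Image.open takes a file-like object, so the bytes decode straight
# into a proper 2-D image with its original dimensions.
img = Image.open(io.BytesIO(raw_bytes))
print(img.size)  # (4, 3)
```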
I'm using the Python Wand module (version 0.4.3) to convert an image stored in a PDF to PNG. The final PNG quality is great when I save it at the original width and height, but when I save it at a smaller size the PNG comes out blurry and the quality is not that great.
The difference between the two images is shown here. The top image is converted at the original size (10800x7200 px); the second is scaled to 1250x833 px.
Is there any way I can improve the second image? I played with different filter and blur settings but could not get the image quality I want. Any help is greatly appreciated.
Code I used to convert the PDF to PNG at its original size:
from wand.color import Color
from wand.image import Image

def pdf_to_png(pdf_name, res):
    with Image(filename=pdf_name, resolution=res) as img:
        with Image(width=img.width, height=img.height,
                   background=Color("white")) as bg:
            bg.composite(img, 0, 0)
            bg.save(filename="Drawing_improved_wand.png")

pdf_to_png('Drawing_1.pdf', 300)
Code for the resized PNG:
with Image(filename="Drawing_1.pdf", resolution=(300, 300)) as img:
    with Image(width=1250, height=833, background=Color("white")) as bg:
        img.resize(1250, 833, filter='undefined', blur=1)
        img.format = 'png'
        bg.composite(img, 0, 0)
        bg.save(filename='Drawing_improved_wand1250x833.png')
This is likely due to a limitation in how ImageMagick rasterizes PDF text and vectors, not because of anything you're doing wrong. The large PNG likely has the same problems as the small one, but since its resolution is almost an order of magnitude higher, the effects become imperceptible.
If the exported large PNG looks good, I would use it for further processing (like scaling down) rather than the PDF.
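One way to act on this (a sketch, under the assumption that the full-size export already exists; a synthetic image stands in for Drawing_improved_wand.png) is to downscale the rendered PNG in Pillow with LANCZOS resampling rather than resizing inside the PDF pipeline:

```python
from PIL import Image

# Stand-in for the full-resolution export (10800x7200 in the question).
big = Image.new('RGB', (1080, 720), 'white')

# LANCZOS resampling gives a noticeably cleaner downscale than the
# default filter, especially for fine line work and text.
small = big.resize((125, 83), Image.LANCZOS)
small.save('Drawing_small.png')
print(small.size)  # (125, 83)
```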
Have you tried setting blur < 1?
For example:
img.resize(1250, 833, filter='undefined', blur=0.1)