A clean way to read a multi-band TIFF file from a URL? - python

I have a web service from which I want to load a multi-band image in-memory inside a Python script (ultimately I'll be converting the image into a numpy array). As far as I know, packages such as PIL and imageio don't support this.
What is the preferred way of doing this? I want to avoid saving and reading images to disk.
If I save the file to disk and then load it as a multi-band TIFF with the tifffile package, things work fine (see the code below); but, as I said, I want to avoid reading from and writing to disk.
import requests
import tifffile as tiff

TMP = 'tmp.tiff'

def save_img(url, outfilename):
    resp = requests.get(url)
    with open(outfilename, 'wb') as f:
        f.write(resp.content)

def read_img(url):
    save_img(url, TMP)
    return tiff.imread(TMP)

The following snippet does the trick. (Note that one should do some additional error checking on the response object.)
import requests
import tifffile as tiff
import io

def read_image_from_url(url):
    resp = requests.get(url)
    resp.raise_for_status()  # check that the request succeeded
    return tiff.imread(io.BytesIO(resp.content))
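As a quick usage sketch (the URL below is a placeholder): tifffile.imread returns a numpy array directly, so the final conversion mentioned in the question comes for free.

arr = read_image_from_url('https://example.com/multiband.tiff')  # hypothetical URL
print(arr.shape, arr.dtype)  # e.g. (height, width, bands), depending on the file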

I'm not sure about multi-band images -- if Pillow (née PIL) supports them, fine -- but this is the basic method to load images from URLs in-memory using Requests and Pillow:
import requests
from PIL import Image
from io import BytesIO
resp = requests.get('https://i.imgur.com/ZPXIw.jpg')
resp.raise_for_status()
sio = BytesIO(resp.content) # Create an in-memory stream of the content
img = Image.open(sio) # And load it
print(img)
outputs
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=605x532>
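If the end goal, as in the original question, is a numpy array, the loaded Pillow image converts directly; a minimal sketch:

import numpy as np

arr = np.asarray(img)  # shape (height, width, channels) for an RGB image
print(arr.shape)       # e.g. (532, 605, 3) for the image above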

Related

Not able to save Image by downloading in python

I need to download a PNG file from a website and save it in a local directory.
The code, which I am executing in a Jupyter notebook, is as below:
import requests
import pytesseract
from PIL import Image
from pathlib import Path

k = requests.get('https://somewebsite.com/somefile.png', stream=True)
Img = Image.open(k)  # <----
Img.save("/new.png")

If I execute it, I always get the error "response object has no attribute seek".
On the other hand, if I change the marked line to Img = Image.open(k.raw), it works fine.
I need to understand why this is so.
You can save image data from a link using open() and write() functions:
import requests

URL = "https://images.unsplash.com/photo-1574169207511-e21a21c8075a?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=880&q=80"
name = "IMG.jpg"  # the name of the image once saved

Picture_request = requests.get(URL)
if Picture_request.status_code == 200:
    with open(name, 'wb') as f:
        f.write(Picture_request.content)
Per the Pillow docs:
:param fp: A filename (string), pathlib.Path object or a file object.
The file object must implement file.read,
file.seek, and file.tell methods,
and be opened in binary mode.
response itself is just the response object and implements none of these; response.raw, on the other hand, does implement read, seek, and tell.
However, you should use response.content to get the raw bytes of the image. If you want to open it, then use io.BytesIO (quick explanation here).
import requests
from PIL import Image
from io import BytesIO
URL = "whatever"
name = "image.jpg"
response = requests.get(URL)
mybytes = BytesIO()
mybytes.write(response.content) # write the bytes into `mybytes`
mybytes.seek(0) # set pointer back to the beginning
img = Image.open(mybytes) # now pillow reads from this io and gets all the bytes we want
# do things to img
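Since BytesIO can also be initialized from the bytes directly, the write/seek pair above collapses into a single line with the same effect:

img = Image.open(BytesIO(response.content))  # a fresh BytesIO starts at position 0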

python generate QR code png without file output

I am using pypng and pyqrcode for QR code image generation in a Django app.
import os
from pyqrcode import create
import png  # pypng
import base64

def embed_QR(url_input, name):
    embedded_qr = create(url_input)
    embedded_qr.png(name, scale=7)

def getQrWithURL(url):
    name = 'url.png'
    embed_QR(url, name)
    with open(name, "rb") as image_file:
        image_data = base64.b64encode(image_file.read()).decode('utf-8')
    return image_data
When I call getQrWithURL with a URL, it produces a file url.png in my directory. Is there a way to get only the image data, without producing a file output?
Thanks for your help.
Use a BytesIO as a writable stream:
import io

# Make a writeable stream
buffer = io.BytesIO()

# Create QR and write to buffer
embedded_qr = create(url_input)
embedded_qr.png(buffer, scale=7)

# Extract buffer contents - this is what you would get by reading a PNG disk file, but without creating it
PNG = buffer.getvalue()
Your variable PNG now contains this:
b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\xe7\x00\x00\x00\xe7\x01\x00\x00\x00\x00\xcd\x8f|\x8d\x00\x00\x01CIDATx\x9c\xed\x971\x92\x830\x0cE\xc5PPr\x04\x8e\x92\xa3\xc1\xd1|\x14\x8e\x90\x92\x82\xb1V_2\t\xde\xb0;i?c\x15q\xecG\xf3\x91\xfc%D\xff\x89,\x8d6\xda\xe8\x97\xf4)\x88AW\xfb\xed\xb7Q\x93\xef\'fj\x7ft\x1f\x9e\xf2Xt\xb7\xe34\x97Cb\x9a\xa43\x8a\xe3XL\xfd-\xa8\xc8\x1c\x19\xbc\x11-\xbb\x1b\xd0\xa3&m\x11\xe8\xbd\xaeX"\x1a\xbe\x01\xa1Q\x9aW\xaeBE\x8fXe\xee\xf4\x1c\xc44\xcd\x19\n\x93dX\xfcc\xb1\xcd\xc8L\x91A\x08\xb5c\xd5\xcd\x1f\xea\xb7*\xbfl\xd4\xf4\x9ao\xb8\xdeB\xbb\xd3-c\xa4h\xbc\x9dn\xa3\xd5\xa4\t\x1dW\xdf\t5\x9dt\xb1b,N\x88\xc8}\xe5\x93t\xd4\x8aQ\xa1Pp\xbd\x06\xe43\x9f\x9d\x90\x90\xfa-\xdb\xa3\xe3b\xa2\xc0Rw+6\n\',U\x18S\xdfo\xdf\xa0\xa3\x18\xf70J\xac\xf2\xea\xc6\xf5\xdb\xa0\xa3%\xdc>"\x91R\xbd\r>z\xccHq\xcb|T\xba\x98\xfa\xa8\xe8G1J\xcfN\xfd[\xc3/[x\xbbQ\x04=\x8d\xfek\x16\xafn\xf1\xfc\x14m\x18BK\xef\x9a\xa8\xa9\xd7\xa4\'\xed\xfd\x902\xd3\xf0\r\x8c\x12z\xb4\xe1\x0fW\xa1\xa2\x7fF\xa3\x8d6\xfa\x15\xfd\x01\xb9MCH#\xc3\xa2\x96\x00\x00\x00\x00IEND\xaeB`\x82'
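Applied to the original getQrWithURL, a sketch of the whole function with no file output (assuming the same pyqrcode and base64 imports as in the question):

import io
import base64
from pyqrcode import create

def getQrWithURL(url):
    buffer = io.BytesIO()
    create(url).png(buffer, scale=7)  # pyqrcode accepts a writable stream in place of a filename
    return base64.b64encode(buffer.getvalue()).decode('utf-8')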

Converting Image to Bytearray with Python

I want to convert an image file to a bytearray. I extracted the image from a PDF file with the minecart lib, but I can't find a way to convert it to a bytearray. This is my code:
import minecart
from PIL import Image
import io

pdffile = open('sample6.pdf', 'rb')
doc = minecart.Document(pdffile)
for page in doc.iter_pages():
    print(page)
    img = page.images[0].as_pil()
    print(img)        # <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1641x2320 at 0x7FBDF02E6A00>
    print(type(img))  # <class 'PIL.JpegImagePlugin.JpegImageFile'>
I have tried to use bytearray(img), but it does not work.
Do you have a solution for this (one that does not consume too much time)?
Create an io.BytesIO buffer and write to it using PIL.Image.save. Set the quality and other parameters as required.
import io
from PIL import Image

def convert_pil_image_to_byte_array(img):
    img_byte_array = io.BytesIO()
    img.save(img_byte_array, format='JPEG', subsampling=0, quality=100)
    return img_byte_array.getvalue()
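If an actual bytearray (rather than bytes) is required, as in the question, the return value wraps directly:

img_bytes = convert_pil_image_to_byte_array(img)
img_byte_arr = bytearray(img_bytes)  # mutable copy of the JPEG-encoded bytes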
References:
Why is the quality of JPEG images produced by PIL so poor?

Extracting text from scanned PDF without saving the scan as a new file image

I would like to extract text from scanned PDFs.
My "test" code is as follows:
from pdf2image import convert_from_path
from pytesseract import image_to_string
from PIL import Image
converted_scan = convert_from_path('test.pdf', 500)
for i in converted_scan:
    i.save('scan_image.png', 'png')
text = image_to_string(Image.open('scan_image.png'))
with open('scan_text_output.txt', 'w') as outfile:
    outfile.write(text.replace('\n\n', '\n'))
I would like to know if there is a way to extract the content of the image directly from the object converted_scan, without saving the scan as a new "physical" image file on the disk?
Basically, I would like to skip this part:
for i in converted_scan:
    i.save('scan_image.png', 'png')
I have a few thousand scans to extract text from. Although the generated image files are not particularly heavy individually, the overhead is not negligible and I find it a bit overkill.
EDIT
Here's a slightly different, more compact approach than Colonder's answer, based on this post. For .pdf files with many pages, it might be worth adding a progress bar to each loop using e.g. the tqdm module.
from wand.image import Image as w_img
from PIL import Image as p_img
import pyocr.builders
import regex, pyocr, io

infile = 'my_file.pdf'
tools = pyocr.get_available_tools()
tool = tools[0]
req_image = []
txt = ''

# to convert pdf to img and extract text
with w_img(filename=infile, resolution=200) as scan:
    image_png = scan.convert('png')
    for i in image_png.sequence:
        img_page = w_img(image=i)
        req_image.append(img_page.make_blob('png'))

for i in req_image:
    content = tool.image_to_string(
        p_img.open(io.BytesIO(i)),
        lang=tool.get_available_languages()[0],
        builder=pyocr.builders.TextBuilder()
    )
    txt += content

# to save the output as a .txt file
with open(infile[:-4] + '.txt', 'w') as outfile:
    full_txt = regex.sub(r'\n+', '\n', txt)
    outfile.write(full_txt)
UPDATE MAY 2021
I realized that although pdf2image simply calls a subprocess, one doesn't have to save images in order to OCR them. What you can do is simply this (you can use pytesseract as the OCR library as well):
from pdf2image import convert_from_path

# tool and lang are set up via pyocr, as in the full script below
for img in convert_from_path("some_pdf.pdf", 300):
    txt = tool.image_to_string(img,
                               lang=lang,
                               builder=pyocr.builders.TextBuilder())
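With pytesseract, mentioned above as an alternative, the same loop needs no pyocr setup at all; a minimal sketch:

from pdf2image import convert_from_path
from pytesseract import image_to_string

text = ''
for img in convert_from_path("some_pdf.pdf", 300):
    text += image_to_string(img)  # image_to_string accepts PIL images directly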
EDIT: you can also try the pdftotext library.
pdf2image is a simple wrapper around pdftoppm and pdftocairo. Internally it does nothing more than call a subprocess. This script should do what you want, but you need the wand library as well as pyocr (I think this is a matter of preference, so feel free to use any text-extraction library you want).
from PIL import Image as Pimage, ImageDraw
from wand.image import Image as Wimage
import sys
import numpy as np
from io import BytesIO
import pyocr
import pyocr.builders

def _convert_pdf2jpg(in_file_path: str, resolution: int = 300) -> Pimage:
    """
    Convert PDF file to JPG
    :param in_file_path: path of pdf file to convert
    :param resolution: resolution with which to read the PDF file
    :return: PIL Image
    """
    with Wimage(filename=in_file_path, resolution=resolution).convert("jpg") as all_pages:
        for page in all_pages.sequence:
            with Wimage(page) as single_page_image:
                # transform wand image to bytes in order to transform it into PIL image
                yield Pimage.open(BytesIO(bytearray(single_page_image.make_blob(format="jpeg"))))

tools = pyocr.get_available_tools()
if len(tools) == 0:
    print("No OCR tool found")
    sys.exit(1)
# The tools are returned in the recommended order of usage
tool = tools[0]
print("Will use tool '%s'" % (tool.get_name()))
# Ex: Will use tool 'libtesseract'
langs = tool.get_available_languages()
print("Available languages: %s" % ", ".join(langs))
lang = langs[0]
print("Will use lang '%s'" % (lang))
# Ex: Will use lang 'fra'
# Note that languages are NOT sorted in any way. Please refer
# to the system locale settings for the default language to use.

for img in _convert_pdf2jpg("some_pdf.pdf"):
    txt = tool.image_to_string(img,
                               lang=lang,
                               builder=pyocr.builders.TextBuilder())

Python image processing of picture directly from the web

I am writing Python code to take an image from the web and calculate the standard deviation, ... and do other image processing with it. I have the following code:
from scipy import ndimage
from urllib2 import urlopen
from urllib import urlretrieve
import urllib2
import Image
import ImageFilter

def imagesd(imagelist):
    for imageurl in imagelist:
        opener1 = urllib2.build_opener()
        page1 = opener1.open(imageurl)
        im = page1.read()
        #localfile = urlretrieve(
        #img = Image.fromstring("RGBA", (1,1), page1.read())
        #img = list(im.getdata())
        # page1.read()
        print img
        #standard_deviation(p
Now I keep going back and forth, because I am not sure how to take the image directly from the web, without saving it to disk, and pass it to the standard deviation function.
Any hints/help would be greatly appreciated.
Thanks.
PIL (Python Imaging Library) methods "fromstring" and "frombuffer" expect the image data in a raw, uncompressed format.
When you do page1.read() you get the binary file data. In order to have PIL understand it, you have to make this data mimic a file and pass it to the "Image.open" method, which understands the file format as it is read from the web (i.e., the .jpg, .gif, or .png data instead of raw pixel values).
Try something like this:
from cStringIO import StringIO
(...)
data = StringIO(page1.read())
img = Image.open(data)
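The answer above is written for Python 2; in Python 3 the same idea uses io.BytesIO, since the downloaded data is binary. A minimal equivalent sketch, assuming page1 is the response object from the question:

from io import BytesIO
from PIL import Image

data = BytesIO(page1.read())  # wrap the raw bytes in a file-like object
img = Image.open(data)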
