Storing images from web into variable in Python

I have a lot of URLs of images stored on the web. An example URL is as follows:
https://m.media-amazon.com/images/M/MV5BOWE4M2UwNWEtODFjOS00M2JiLTlhOGQtNTljZjI5ZTZlM2MzXkEyXkFqcGdeQXVyNjUwNzk3NDc#._V1_QL75_UX190_CR0
I want to load an image from a URL like the one above, do some operations on that image, and then return the resulting image.
So here's my code:
import os
import cv2
import numpy as np
import requests

def get_image_from_url(url, path):
    try:
        # downloading image from url
        img = requests.get(url)
        with open(path, 'wb') as f:
            f.write(img.content)
        # reading image, str(path) since path is an object of pathlib's Path class
        img = cv2.imread(str(path), cv2.IMREAD_COLOR)
        # some operations
        # deleting that downloaded image since it is of no use now
        if os.path.exists(path):
            os.remove(path)
        return resulting_image
    except Exception as e:
        return np.zeros((224, 224, 3), np.uint8)
But this process takes too much time, so instead of downloading and then deleting the image, I would like to load the image at the URL directly into a variable.
Something like this:
def store_image_from_url(url):
    image = get_image_from_url(url)  # without downloading it into my computer
    # do some operations
    return resulting_image
Is there any way to do the same?
Thank you

As in How can I read an image from an Internet URL in Python cv2, scikit image and mahotas?, it can be something like this:
import cv2
import urllib.request
import numpy as np

def get_image_from_url(url):
    # read the raw encoded bytes from the URL (Python 3: urllib.request)
    req = urllib.request.urlopen(url)
    # wrap the bytes in a NumPy array and let OpenCV decode them in memory
    arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
    img = cv2.imdecode(arr, -1)
    return img
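Since the question already uses requests, the same in-memory decode works without urllib as well. A minimal sketch under that assumption (the timeout and the blank-image fallback mirror the question's original function and are my additions, not part of the answer above):
import cv2
import numpy as np
import requests

def get_image_from_url(url):
    # fetch the encoded image bytes over HTTP
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # decode the bytes directly into an OpenCV BGR array, no temporary file needed
    arr = np.frombuffer(resp.content, dtype=np.uint8)
    img = cv2.imdecode(arr, cv2.IMREAD_COLOR)
    if img is None:
        # fall back to a blank image, matching the question's except branch
        return np.zeros((224, 224, 3), np.uint8)
    return img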

Related

Add white background and resize images in a folder

I want to add a white background to my transparent images (PNG) and resize them. The images are located in a folder, and I need to process them in bulk, not one image at a time.
I first removed the background from the images with rembg (which works well), and now I want to modify the images.
My code:
import rembg
import glob
from pathlib import Path
from rembg import remove, new_session

session = new_session()

for file in Path(r'C:\test\images').glob('*.jpg'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))
    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)
I do not know how to add the white background and resize with Python because I'm fairly new to this. Thank you in advance!
I think you want a helper function to do the work, something like:
from PIL import Image
import rembg

def process(session, image, *, size=None, bgcolor='white'):
    "session is a rembg Session, and image is a PIL Image"
    if size is not None:
        image = image.resize(size)
    else:
        size = image.size
    result = Image.new("RGB", size, bgcolor)
    out = rembg.remove(image, session=session)
    result.paste(out, mask=out)
    return result
The idea being that you pass a rembg Session and a Pillow Image in and it will remove the background and flatten that image, resizing along the way.
As a working example, you could do something like:
from io import BytesIO
import requests

session = rembg.new_session("u2netp")
res = requests.get("https://picsum.photos/600")
res.raise_for_status()
with Image.open(BytesIO(res.content)) as img:
    out = process(session, img, size=(256, 256), bgcolor='#F0E68C')
    out.save("output.png")
If you wanted to work with lots of files, your pathlib objects can be passed directly to Pillow:
from pathlib import Path

for path_in in Path(r'C:\test\images').glob('*.jpg'):
    path_out = path_in.parent / f"{path_in.stem}-out.png"
    # no point processing images that have already been done!
    if path_out.exists():
        continue
    with Image.open(path_in) as img:
        out = process(session, img, size=(256, 256), bgcolor='#F0E68C')
        out.save(path_out)
Update: it's often worth adding a check to these loops so they can be rerun without processing everything again. If you really do want images to be reprocessed, just delete the *-out.png files.

How to convert image(ndarray object) to image object so that it can be JSON serialized?

I am new to Python and the programming world. I have code which converts an image into a NumPy array, and I want to learn how to reverse it, i.e. convert the NumPy array back into an image.
I have a REST API which takes an image from a POST request, converts it to a NumPy array, does some processing, and returns some results. I am trying to modify the code so that it can take two images as input from the POST request, convert them to NumPy arrays, combine them into one image, and send that final image as a JSON response.
I have successfully modified the code so that it accepts two images as input. I will add the code for combining the two images into one later. Currently, I am just trying to send back the image I got from the POST request as it is, but I am receiving the error
"Object of type 'ndarray' is not JSON serializable".
So I thought the ndarray object must be converted back into an image so that it can be JSON serialized. How do I do that?
# import the necessary packages
from django.views.decorators.csrf import csrf_exempt
from django.http import JsonResponse
import numpy as np
import urllib.request
import json
import cv2
import os

@csrf_exempt
def detect(request):
    # initialize the data dictionary to be returned by the request
    data = {"success": False}
    # check to see if this is a post request
    if request.method == "POST":
        # check to see if an image was uploaded
        if request.FILES.get("image1", None) and request.FILES.get("image2", None) is not None:
            # grab the uploaded images
            image1 = _grab_image1(stream=request.FILES["image1"])
            image2 = _grab_image2(stream=request.FILES["image2"])
        # otherwise, assume that a URL was passed in
        else:
            # grab the URL from the request
            url = request.POST.get("url", None)
            # if the URL is None, then return an error
            if url is None:
                data["error"] = "No URL provided."
                return JsonResponse(data)
            # load the image and convert
            image1 = _grab_image1(url=url)
            image2 = _grab_image2(url=url)
        # Code for combining the two images
        data.update({"final1": image1, "final2": image2, "success": True})
    # return a JSON response
    return JsonResponse(data)

def _grab_image1(path=None, stream=None, url=None):
    # if the path is not None, then load the image from disk
    if path is not None:
        image1 = cv2.imread(path)  # loads the image
    # otherwise, the image does not reside on disk
    else:
        # if the URL is not None, then download the image
        if url is not None:
            resp = urllib.request.urlopen(url)
            data = resp.read()
        # if the stream is not None, then the image has been uploaded
        elif stream is not None:
            data = stream.read()
        # convert the image to a NumPy array and then read it into
        # OpenCV format
        image1 = np.asarray(bytearray(data), dtype="uint8")
        image1 = cv2.imdecode(image1, cv2.IMREAD_COLOR)
    # return the image
    return image1

def _grab_image2(path=None, stream=None, url=None):
    # if the path is not None, then load the image from disk
    if path is not None:
        image2 = cv2.imread(path)  # loads the image
    # otherwise, the image does not reside on disk
    else:
        # if the URL is not None, then download the image
        if url is not None:
            resp = urllib.request.urlopen(url)
            data = resp.read()
        # if the stream is not None, then the image has been uploaded
        elif stream is not None:
            data = stream.read()
        # convert the image to a NumPy array and then read it into
        # OpenCV format
        image2 = np.asarray(bytearray(data), dtype="uint8")
        image2 = cv2.imdecode(image2, cv2.IMREAD_COLOR)
    # return the image
    return image2
Convert the image (ndarray object) so that it can be JSON serialized.
I don't think it's possible to do what you are trying to do in that way... maybe you could try two things:
1. Store the image somewhere on the server and serialize its URL.
2. Encode the image, put the encoded image in the JSON, and decode it later. You could try Python's base64 library.
If choosing the second option, simply encode the ndarray like this:
coded_image = base64.b64encode(image)
And for decoding (base64.decodestring is deprecated in Python 3; use b64decode):
decoded_image = base64.b64decode(coded_image)
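Note that base64-encoding the raw ndarray bytes loses the array's shape and dtype, so the receiver cannot rebuild the image from them alone. A minimal sketch of a more self-contained round trip, assuming image1 is the BGR ndarray from the question's _grab_image1 (the key and variable names here are illustrative, not part of the answer above): compress the image to PNG bytes with cv2.imencode, base64 that, and reverse the steps on the other side.
import base64
import json

import cv2
import numpy as np

# encode: compress the ndarray to PNG bytes, then base64 so it is JSON-safe
ok, png_bytes = cv2.imencode(".png", image1)
payload = json.dumps({"final1": base64.b64encode(png_bytes.tobytes()).decode("ascii")})

# decode: reverse the steps to get the ndarray back
raw = base64.b64decode(json.loads(payload)["final1"])
restored = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)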

Read a base64 encoded image from memory using the OpenCV Python library

I'm working on an app that does some facial recognition from a webcam stream. I get base64-encoded data URIs of the canvas and want to use them to do something like this:
cv2.imshow('image',img)
The data URI looks something like this:
data:image/gif;base64,R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7
So, for clarity, I've shown below what the image looks like, so you can see that the base64 string is not broken.
<img src="data:image/gif;base64,R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7">
The official docs say that imread accepts a file path as the argument. Following this SO answer, if I do something like:
import base64

imgdata = base64.b64decode(imgstring)  # I use imgdata as this variable itself in references below
filename = 'some_image.jpg'
with open(filename, 'wb') as f:
    f.write(imgdata)
The above code snippet works and the image file gets generated properly. However, I don't think that many file I/O operations are feasible, considering I'd be doing this for every frame of the stream. I want to read the image into memory directly, creating the img object.
I have tried two solutions that seem to be working for some people.
Using PIL reference:
pilImage = Image.open(StringIO(imgdata))
npImage = np.array(pilImage)
matImage = cv.fromarray(npImage)
I get cv not defined, as I have OpenCV 3 installed, which is available to me as the cv2 module. I tried img = cv2.imdecode(npImage, 0), but it returns nothing.
Getting the bytes from the decoded string and converting them into a numpy array of sorts:
file_bytes = numpy.asarray(bytearray(imgdata), dtype=numpy.uint8)
img = cv2.imdecode(file_bytes, 0) #Here as well I get returned nothing
The documentation doesn't really mention what the imdecode function returns. However, from the errors that I encountered, I guess it is expecting a numpy array or a scalar as the first argument. How do I get a handle on that image in memory so that I can do cv2.imshow('image',img) and all kinds of cool stuff thereafter.
I hope I was able to make myself clear.
This is my solution for Python 3.7, without using PIL:
import base64

import cv2
import numpy as np

def readb64(uri):
    # strip the "data:image/...;base64," prefix and decode the payload
    encoded_data = uri.split(',')[1]
    nparr = np.frombuffer(base64.b64decode(encoded_data), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img
I hope this solution works for everyone.
This worked for me on Python 2, and doesn't require PIL/Pillow or any other dependencies (except cv2 and numpy):
Edit: for Python 3, use base64.b64decode(encoded_data) to decode instead.
import cv2
import numpy as np

def data_uri_to_cv2_img(uri):
    encoded_data = uri.split(',')[1]
    nparr = np.fromstring(encoded_data.decode('base64'), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img

data_uri = "data:image/jpeg;base64,/9j/4AAQ..."
img = data_uri_to_cv2_img(data_uri)
cv2.imshow('image', img)
cv2.waitKey(0)
You can just use both cv2 and Pillow like this (note this uses the Python 2 StringIO module):
import base64
from PIL import Image
import cv2
from StringIO import StringIO
import numpy as np

def readb64(base64_string):
    sbuf = StringIO()
    sbuf.write(base64.b64decode(base64_string))
    pimg = Image.open(sbuf)
    return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)

cvimg = readb64('R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7')
cv2.imshow('image', cvimg)
cv2.waitKey(0)
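For Python 3, where the StringIO module no longer exists, the same approach works with io.BytesIO; a minimal sketch, not part of the original answer:
import base64
from io import BytesIO

import cv2
import numpy as np
from PIL import Image

def readb64(base64_string):
    # decode the base64 payload into an in-memory binary buffer
    buf = BytesIO(base64.b64decode(base64_string))
    # let Pillow parse the image format, then convert RGB -> BGR for OpenCV
    with Image.open(buf) as pimg:
        return cv2.cvtColor(np.array(pimg.convert("RGB")), cv2.COLOR_RGB2BGR)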
I found this simple solution.
import cv2
import numpy as np
import base64

image = ""  # raw data with base64 encoding
decoded_data = base64.b64decode(image)
np_data = np.frombuffer(decoded_data, np.uint8)
img = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
cv2.imshow("test", img)
cv2.waitKey(0)
Source : https://gist.github.com/HoweChen/7cdd09b08147133d8e1fbe9b52c24768

Upload Image To Imgur After Resizing In PIL

I am writing a script which will get an image from a link. The image will then be resized using the PIL module and uploaded to Imgur using pyimgur. I don't want to save the image on disk; instead I want to manipulate the image in memory and then upload it from memory to Imgur.
The Script:
from pyimgur import Imgur
import cStringIO
import requests
from PIL import Image

LINK = "http://pngimg.com/upload/cat_PNG106.png"
CLIENT_ID = '29619ae5d125ae6'
im = Imgur(CLIENT_ID)

def _upload_image(img, title):
    uploaded_image = im.upload_image(img, title=title)
    return uploaded_image.link

def _resize_image(width, height, link):
    # Retrieve our source image from a URL
    fp = requests.get(link)
    # Load the URL data into an image
    img = cStringIO.StringIO(fp.content)
    im = Image.open(img)
    # Resize the image
    im2 = im.resize((width, height), Image.NEAREST)
    # saving the image into a cStringIO object to avoid writing to disk
    out_im2 = cStringIO.StringIO()
    im2.save(out_im2, 'png')
    return out_im2.getvalue()
When I run this script I get this error: TypeError: file() argument 1 must be encoded string without NULL bytes, not str
Does anyone have a solution in mind?
It looks like the same problem as this, and the solution is to use StringIO.
A common tip for searching such issues is to search using the generic part of the error message/string.
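If pyimgur insists on a file path (which is what the TypeError suggests), one way to stay entirely in memory is to skip pyimgur for the upload step and post the resized bytes to Imgur's v3 REST endpoint directly with requests. A rough sketch under that assumption, reusing the question's CLIENT_ID and _resize_image (the helper name and title are mine):
import base64
import requests

def _upload_image_from_memory(png_bytes, title):
    # Imgur v3 API: authenticate with the client ID and send base64 image data
    resp = requests.post(
        "https://api.imgur.com/3/image",
        headers={"Authorization": "Client-ID " + CLIENT_ID},
        data={"image": base64.b64encode(png_bytes),
              "type": "base64",
              "title": title},
    )
    resp.raise_for_status()
    return resp.json()["data"]["link"]

# resize in memory, then upload the raw PNG bytes without touching the disk
link = _upload_image_from_memory(_resize_image(400, 400, LINK), "resized cat")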

Python image processing of picture directly from the web

I am writing Python code to take an image from the web and calculate the standard deviation, ... and do other image processing with it. I have the following code:
from scipy import ndimage
from urllib2 import urlopen
from urllib import urlretrieve
import urllib2
import Image
import ImageFilter

def imagesd(imagelist):
    for imageurl in imagelist:
        opener1 = urllib2.build_opener()
        page1 = opener1.open(imageurl)
        im = page1.read()
        #localfile = urlretrieve(
        #img = Image.fromstring("RGBA", (1,1), page1.read())
        #img = list(im.getdata())
        # page1.read()
        print img
        #standard_deviation(p
Now I keep going back and forth because I am not sure how to take the image directly from the web, without saving it to disk, and pass it to the standard deviation function.
Any hints/help would be greatly appreciated.
Thanks.
The PIL (Python Imaging Library) methods "fromstring" and "frombuffer" expect the image data in a raw, uncompressed format.
When you do page1.read() you get the binary file data. To have PIL understand it, you have to make this data mimic a file and pass it to the "Image.open" method, which understands the file format as it is read from the web (i.e., the .jpg, .gif, or .png data instead of raw pixel values).
Try something like this:
from cStringIO import StringIO
(...)
data = StringIO(page1.read())
img = Image.open(data)
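For anyone on Python 3, the same idea looks roughly like this (a sketch: the URL, the requests/Pillow/NumPy dependencies, and the std computation are assumptions, not taken from the answer above):
from io import BytesIO

import numpy as np
import requests
from PIL import Image

def image_std(url):
    # fetch the encoded image and keep it in memory
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # let Pillow decode it from the in-memory buffer, then compute the std of the pixels
    with Image.open(BytesIO(resp.content)) as img:
        return np.asarray(img.convert("L"), dtype=np.float64).std()

print(image_std("https://picsum.photos/300"))  # example placeholder URL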
