I am writing a Flask application that receives two image URLs as parameters. Traditionally, I would download these images to the server's local disk and carry out my image-processing operations. I am downloading them using the following code:
urllib.request.urlretrieve(image1url, image_id + '.jpg')
After this I read the image using:
original_image = Image.open(image_id + '.jpg')
and carry out my image-processing operations, such as cropping and applying a few filters:
original_crop = original_image.crop((x, y, x + width / 3, y + height / 3))
Also, I use ImageFilter operations on this image. Now this code will be deployed on a server. If I continue this way, I will keep downloading and saving images to the server's disk. Of course, I understand that deleting the images after I am done with my image-processing operations is one option. But if I get a few hundred calls per second, I might at some point use up a lot of space. The application is multi-threaded via the call
app.run(threaded=True)
which works like a charm.
I want to know if there is a way to load an image without using the server's disk storage, thus reducing the hard-disk space requirements of my service.
If you don't want to store images in temporary files, you can wrap the URL content in an in-memory stream and pass it to Image.open:
import io
import urllib.request
from PIL import Image
# your avatar URL as example
url = ('https://www.gravatar.com/avatar/3341748e9b07c9854d50799e0e247fa3'
'?s=328&d=identicon&response=PG&f=1')
content = urllib.request.urlopen(url).read()
original_image = Image.open(io.BytesIO(content))
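Building on that, the crop and ImageFilter steps from the question can also run entirely in memory. A minimal sketch; the function name, the SHARPEN filter, and the PNG output format are my choices, not from the question:

```python
import io

from PIL import Image, ImageFilter


def process_in_memory(image_bytes, box):
    """Crop and filter an image entirely in memory; returns PNG bytes."""
    original = Image.open(io.BytesIO(image_bytes))
    cropped = original.crop(box)              # box = (left, upper, right, lower)
    filtered = cropped.filter(ImageFilter.SHARPEN)
    out = io.BytesIO()
    filtered.save(out, format='PNG')          # encode back to bytes; no disk involved
    return out.getvalue()
```

The returned bytes can be sent straight back in the Flask response, so nothing ever touches the server's disk.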
You could move them to a known remote location and fetch them back as needed. Amazon S3 and a hosted FTP service like BrickFTP are both easy options. S3 is especially cheap since you only pay for what you use -- no monthly fee. Brick is a great service if you want to make access to the images as simple as possible for other applications and users, but there is a minimum monthly fee.
Related
I am trying to create an image-thumbnail function in Python, running as a Google Cloud Platform function. The image is sent as a base64 string to the cloud function, then manipulated and made smaller with Python's Pillow package. It is then uploaded as an image, going from a Pillow Image object to a BytesIO object, and finally saved to Google Cloud Storage. This is all done successfully.
The problem here is very strange: Google Cloud Storage does not recognize the image until an access token is created manually. Otherwise, the image is left in an infinite loop, never loading, and never being able to be used.
I have reviewed this SO post, which has a very similar problem to mine (the image there shows exactly my problem: an uploaded file cannot be loaded properly), but it differs in two important ways: 1) they are manipulating the image array directly, while my code never touches it, and 2) they are working in Node.js, where the Firebase SDK is different than in Python.
The code to generate the image is as follows:
import base64
from io import BytesIO

from PIL import Image
from google.cloud import storage

def thumbnailCreator(request):
    # Setting up the resources we are going to use
    storage_client = storage.Client()
    stor_bucket = storage_client.bucket(BUCKET_LINK)

    # Retrieving the data
    sent_data = request.get_json()['data']
    name = sent_data['name']
    userID = sent_data['userID']

    # Go from the stored image bytes to a file object
    imageString = stor_bucket.blob(PATH_TO_FULL_SIZE_IMAGE).download_as_string()
    imageFile = BytesIO(imageString)
    image = Image.open(imageFile)

    # Resizing the image is the goal
    image = image.resize(THUMBNAIL_SIZE)

    # Go from a Pillow Image object back to a file object
    imageFile = BytesIO()
    image.save(imageFile, format='PNG')
    imageBytes = imageFile.getvalue()
    image64 = base64.b64encode(imageBytes)
    imageFile.seek(0)

    # Uploading the data
    other_blob = stor_bucket.blob(PATH_FOR_THUMBNAIL_IMAGE)
    other_blob.upload_from_file(imageFile, content_type='image/png')
    return {'data': {'response': 'ok', 'status': 200}}
Again, this works. I have a feeling there is something wrong with the MIME type. I am a novice when it comes to this type of programming/networking/image manipulation, so I'm always looking for a better way to do this. Anyway, thanks for any and all help.
It appears that the premise of this question - that an access token must be created manually for the image to work - is not accurate. After further testing, the error came from other parts of the code base I was working in. The above Python script does work for image manipulation. An access token for the image can be generated via code and provided client-side.
Leaving this up in case someone stumbles upon it in the future when they need to work with Pillow/PIL in the Google Cloud Platform.
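Since the answer mentions that an access token can be generated via code, here is an untested sketch of one way to do that with a signed URL, assuming the same google-cloud-storage client used above; the function name, bucket name, and blob path are placeholders:

```python
from datetime import timedelta

from google.cloud import storage


def thumbnail_url(bucket_name, blob_path):
    """Return a time-limited V4 signed URL for a stored thumbnail,
    which can be handed to the client instead of a manual token."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_path)
    return blob.generate_signed_url(
        version='v4',
        expiration=timedelta(hours=1),
        method='GET',
    )
```

Note that generate_signed_url requires credentials capable of signing (for example, a service-account key), which is an environment assumption not shown here.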
I am still relatively new to Google Earth Engine, but I have a question about getting the download URLs for each image in an ImageCollection. The question is exactly that: how do I obtain the set of download URLs for each image in an image collection, given that I have pared each image down to a reasonable size?
Note, I did look through Stack Overflow and found an existing question about downloading a single image to Google Drive, but it does not provide any information about image collections:
How to download images using Google Earth Engine's Python API
The following bit of code will generate some small image patches of Landsat 8 imagery. Each patch is only a handful of kB in size.
import ee
ee.Initialize()

col = ee.ImageCollection('LANDSAT/LC08/C01/T1').select('B8')
col = col.filterDate('2015-01-01', '2015-01-30')  # filters return a new collection

boundary_region = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019]).buffer(300)
col = col.filterBounds(boundary_region)

def clipper(image):
    return image.clip(boundary_region)

col = col.map(clipper)
Now I would like to download each image in the collection col.
Earth Engine provides a couple of ways to iterate over image collections, the map() and iterate() functions, but apparently neither of them works with the download functions.
It seems I can generate the URL for a single image using a function like the following:
boundary_geojson = ee.Geometry.Polygon(
    boundary_region.bounds().getInfo()['coordinates'][0]).toGeoJSONString()

def downloader(image):
    url = ee.data.makeDownloadUrl(
        ee.data.getDownloadId({
            'image': image.serialize(),
            'scale': 30,
            'filePerBand': 'false',
            'name': 'test',
            'region': boundary_geojson,
        }))
    return url
Note, for some reason I can't seem to get the boundary_geojson variable to work properly.
Now generating the set of URLs should be as simple as a call to
col.map(downloader)
but that does not work.
Does anyone know how to generate the list of URLs that correspond to the images in an image collection?
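One commonly suggested workaround is to iterate client-side rather than with map(): map() runs server-side and cannot call client-side download helpers, but a plain Python loop can. An untested sketch, assuming the filtered collection col and the geometry boundary_region from the question, and using the image-level getDownloadURL helper from the Python client:

```python
import ee

# map() runs server-side, so it cannot build download URLs; instead, pull
# the collection size to the client and loop over the images in Python.
count = col.size().getInfo()
image_list = col.toList(count)

urls = []
for i in range(count):
    image = ee.Image(image_list.get(i))
    urls.append(image.getDownloadURL({
        'scale': 30,
        'region': boundary_region.bounds().getInfo()['coordinates'],
    }))
```

This issues one request per image, so for large collections an export-based approach (e.g. ee.batch) may scale better; the per-image loop is simply the most direct translation of "a URL per image".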
I am writing a webcrawler that finds and saves the URLs of all the images on a website. I can get these without a problem. I need to upload these URLs, along with thumbnail versions of the images, to a server via HTTP request, which will render the image and collect feature information for use in various AI applications.
For some urls this works no problem.
http://images.asos-media.com/products/asos-waxed-parka-raincoat-with-zip-detail/7260214-1-khaki
resizes into
http://images.asos-media.com/products/asos-waxed-parka-raincoat-with-zip-detail/7260214-1-khaki?wid=200
but for actual .jpg images this method doesn't work, for example this one:
https://cdn-images.farfetch-contents.com/11/85/29/57/11852957_8811276_480.jpg
How can I resize the jpgs via url?
Resizing the image via the URL only works if the site you're hitting uses a dynamic media service or tool in its stack. That's why ASOS allows you to append a query with the dimensions for resizing; however, different DM tools will have different query parameters.
If you want to make it tolerant, you're best off downloading the image, resizing it with Python, and then uploading it.
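The resize step of that download-resize-upload approach can be sketched as follows; the function operates on raw bytes so it slots between any download and upload code, and the function name and PNG fallback are my choices:

```python
import io

from PIL import Image


def resize_to_width(image_bytes, target_width):
    """Resize JPEG/PNG bytes to a target width, preserving aspect ratio."""
    img = Image.open(io.BytesIO(image_bytes))
    ratio = target_width / img.width
    new_size = (target_width, max(1, round(img.height * ratio)))
    resized = img.resize(new_size)
    out = io.BytesIO()
    resized.save(out, format=img.format or 'PNG')  # keep the source format if known
    return out.getvalue()
```

For example, a 480-pixel-wide .jpg from the CDN above could be fetched with urllib, passed through resize_to_width(data, 200), and the resulting bytes uploaded to your server.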
I am trying to write a Python program to download images from the Glance service. However, I could not find a way to download images from the cloud using the API. The documentation can be found here:
http://docs.openstack.org/user-guide/content/sdk_manage_images.html
They explain how to upload images, but not how to download them.
The following code shows how to get an image object, but I don't know what to do with this object:
import novaclient.v1_1.client as nvclient
name = "cirros"
nova = nvclient.Client(...)
image = nova.images.find(name=name)
Is there any way to download the image file and save it to disk using this image object?
Without installing the Glance CLI, you can download an image via an HTTP call as described here:
http://docs.openstack.org/developer/glance/glanceapi.html#retrieve-raw-image-data
For the python client you can use
img = client.images.get(IMAGE_ID)
and then call
client.images.data(img) # or img.data()
to retrieve a generator through which you can access the raw data of the image.
Full example (saving an image from Glance to disk):
img = client.images.find(name='cirros-0.3.2-x86_64-uec')
file_name = "%s.img" % img.name
with open(file_name, 'wb') as image_file:  # binary mode for raw image data
    for chunk in img.data():
        image_file.write(chunk)
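The streaming write in that example can be sketched independently of Glance; a minimal helper (the name and signature are mine) that works with any iterable of byte chunks, such as the generator returned by img.data():

```python
def save_chunks(chunks, path):
    """Stream an iterable of byte chunks to disk without holding
    the whole image in memory at once."""
    # 'wb' matters: raw image data must be written in binary mode,
    # or it may be corrupted on platforms that translate newlines.
    with open(path, 'wb') as image_file:
        for chunk in chunks:
            image_file.write(chunk)
```

Because the chunks are written as they arrive, memory use stays bounded even for multi-gigabyte images.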
You can do this using the Glance CLI with the image-download command:
glance image-download [--file <FILE>] [--progress] <IMAGE>
You will have to install the Glance CLI for this.
Also, depending on the cloud provider/service you are using, this operation may be disabled for regular users. You might have to check with your provider.
I am successfully posting an image to my Google AppEngine application using the following code:
def post(self):
    image_data = self.request.get('file')
    file_name = files.blobstore.create(mime_type='image/png')

    # Open the file and write to it
    with files.open(file_name, 'a', exclusive_lock=True) as f:
        f.write(image_data)

    # Finalize the file. Do this before attempting to read it.
    files.finalize(file_name)

    # Get the file's blob key
    blob_key = files.blobstore.get_blob_key(file_name)
    self.response.out.write(images.get_serving_url(blob_key))
However, when I browse to the URL output by get_serving_url(), the image is always served at a reduced resolution. Why? I've checked and double-checked that the image being posted is the correct size (from an iPhone camera, so approx. 3200x2400). Yet the served image is always 512x384.
I'm fairly new to GAE, but I thought the code above should store the image in the Blobstore rather than the datastore, circumventing the 1 MB limit.
Does anyone have any idea what could be going on?
Cheers,
Brett
Found a solution. Or at least something that works for me.
By appending =sXX to the end of the serving URL, App Engine will serve the image at resolution XX. For instance, if the line:
self.response.out.write(images.get_serving_url( blob_key ))
returns:
http://appengine.sample.com/appengineurlkey
then calling that URL as-is results in a lower-resolution image. By instead calling the URL:
http://appengine.sample.com/appengineurlkey**=s1600**
the resulting served image will be at 1600x1200 resolution (or a similar resolution restricted by maintaining the aspect ratio).
The explanation for what you're seeing is explained in https://developers.google.com/appengine/docs/python/images/functions
In the doc for get_serving_url:
When resizing or cropping an image, you must specify the new size using an integer 0 to 1600. The maximum size is defined in IMG_SERVING_SIZES_LIMIT. The API resizes the image to the supplied value, applying the specified size to the image's longest dimension and preserving the original aspect ratio.
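That documented rule (scale the longest side to the requested size, preserve the aspect ratio) can be expressed as a small helper; the function name is hypothetical, and it reproduces the 512x384 result the asker saw for a 4:3 landscape photo at size 512:

```python
def serving_size(width, height, s):
    """Dimensions App Engine serves for a =s<N> URL suffix:
    the longest side is scaled to s, the aspect ratio preserved."""
    if width >= height:
        return s, max(1, round(height * s / width))
    return max(1, round(width * s / height)), s
```

For example, a 3200x2400 image with =s1600 comes back as 1600x1200, matching the answer above.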
An alternative approach, if you want to serve full-sized images, is to upload your images directly to the blobstore, then serve them from there. (I.e., bypass the image API completely.) See https://developers.google.com/appengine/docs/python/blobstore/overview