Opening an Image From a URL with pgmagick - python

In pgmagick, you initialize an image like this:
Image('my_image.png')
I will be operating on files stored remotely on S3 and would rather not temporarily store them on disk. Is there any way to open an image file from a URL instead? When I try to simply replace the file name with the URL, I get an error: Unable to open file.
I'd like to be able to use a URL. If anyone has any suggestions on that or how to extend pgmagick to achieve it, I'd be elated.

The easiest way (in my mind) is to use the awesome requests library. You can fetch each image from the server one at a time, then open it with Image():
import requests
from pgmagick import Image, Blob
r = requests.get('https://server.com/path/to/image1.png', auth=('user', 'pass'))
img = Image(Blob(r.content))  # Blob accepts the raw bytes directly; no StringIO wrapper needed
And that's all there is to it. Authentication is of course not required, but may be necessary depending on your S3 setup. Have fun!

Related

Open Image from requests.response.content

What I am trying to do is quite simple when dealing with a local file, but the problem comes when I try to do it with a remote URL.
Basically, I am trying to create a PIL image object from a file extracted from a URL. Of course, I could always fetch the URL and store it in a temporary file, then open it in an image object, but that seems very inefficient.
Here is what I have:
from PIL import Image
import requests
from io import BytesIO
response = requests.get(url)
img = Image.open(BytesIO(response.content))
The code above does not return the image. Does anyone know why?
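A common cause is that the HTTP request itself failed, so response.content holds an error page rather than image bytes. A sketch that surfaces such errors before handing the bytes to PIL (assuming Pillow and requests are installed; image_from_url is an illustrative name):

```python
from io import BytesIO

import requests
from PIL import Image

def image_from_url(url):
    response = requests.get(url)
    response.raise_for_status()  # surface HTTP errors instead of feeding an error page to PIL
    return Image.open(BytesIO(response.content))
```

If raise_for_status() throws, the problem is the request (URL, auth, blocking), not the image handling.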

Python webbrowser not functioning with GIS server

I am trying to write a script that downloads all the data from a server holding .rar files of imaginary cadastral parcels for student projects. What I have so far is a server query that takes a specific parcel number as input; accessing it as a URL downloads the .rar file.
url = 'http://www.pg.geof.unizg.hr/geoserver/wfs?request=getfeature&version=1.0.0&service=wfs&&propertyname=broj,naziv_ko,kc_geom&outputformat=SHAPE-ZIP&typename=gf:katastarska_cestica&filter=<Filter+xmlns="http://www.opengis.net/ogc"><And><PropertyIsEqualTo><PropertyName>broj</PropertyName><Literal>1900/1</Literal></PropertyIsEqualTo><PropertyIsEqualTo><PropertyName>naziv_ko</PropertyName><Literal>Suma Striborova Stara (9997)</Literal></PropertyIsEqualTo></And></Filter>'
This is the "url" I want to open with the webbrowser module for parcel "1900/1", but this way I get an error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
When I enter this URL manually in a browser, it downloads the file without a problem.
How can I make this work from Python?
I used webbrowser.open_new(url), which does not work.
You're using the wrong tool. webbrowser is for controlling a native web browser. If you just want to download a file, use the requests module (or urllib.request if you can't install Requests).
import requests
r = requests.get('http://www.pg.geof.unizg.hr/geoserver/wfs', params={
    'request': 'getfeature',
    ...
    'filter': '<Filter xmlns=...>'
})
print(r.content) # or write it to a file, or whatever
Note that requests handles encoding GET parameters for you; you don't need to escape the query string yourself.
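Filling in the parameters from the URL in the question gives a sketch like the following. Nothing is sent here; the request is only prepared to show the encoded URL, and you would call requests.get with the same arguments to actually download:

```python
import requests

# Parameters taken from the query string in the question; requests
# percent-encodes them, including the XML filter.
params = {
    'request': 'getfeature',
    'version': '1.0.0',
    'service': 'wfs',
    'propertyname': 'broj,naziv_ko,kc_geom',
    'outputformat': 'SHAPE-ZIP',
    'typename': 'gf:katastarska_cestica',
    'filter': ('<Filter xmlns="http://www.opengis.net/ogc"><And>'
               '<PropertyIsEqualTo><PropertyName>broj</PropertyName>'
               '<Literal>1900/1</Literal></PropertyIsEqualTo>'
               '<PropertyIsEqualTo><PropertyName>naziv_ko</PropertyName>'
               '<Literal>Suma Striborova Stara (9997)</Literal>'
               '</PropertyIsEqualTo></And></Filter>'),
}

# Build the request without sending it, to inspect the encoded URL.
prepared = requests.Request(
    'GET', 'http://www.pg.geof.unizg.hr/geoserver/wfs', params=params
).prepare()
print(prepared.url)

# To actually download the archive:
# r = requests.get('http://www.pg.geof.unizg.hr/geoserver/wfs', params=params)
# with open('cestica.zip', 'wb') as f:
#     f.write(r.content)
```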

Getting an empty jpg file, when try to Download an image

I have been trying to download an image from a website (no username or password required), but every time I get an empty file. I have tried both the conventional urllib.urlretrieve and requests approaches with the same result. One more thing: if I try to open the same image manually by pasting the URL into a browser after 15-20 minutes, the image itself no longer opens. I am assuming some sort of session handling is required in this case. Below is my code, which returns an empty image.
import os
import urllib

def savePic(url):
    uri = r"C:\Python27\Scripts\Photosurl2.jpg"
    if url != "":
        urllib.urlretrieve(url, uri)

savePic("http://www-nass.nhtsa.dot.gov/nass/cds/GetBinary.aspx?ImageView&ImageID=491410290&Desc=Lookback+from+final+rest&Title=Scene+Photos+-+image1&Version=1&Extend=jpg")
Any help is appreciated.
When you implement HTTP code in Python, do not forget to check that you can perform the same request with curl or wget. This will save you a lot of time debugging a problem that is not in your code.
Both tools also have very good verbose modes that will give you hints about what you are missing.
Also, most experienced Python developers use the requests library instead of urllib; it is considerably easier to use.
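If the site really does tie images to a session, a sketch along these lines with requests.Session, which keeps cookies between calls, may help. The function name, the browser-like User-Agent string, and the idea of visiting a page first are assumptions, not something tested against that server:

```python
import requests

def save_pic(url, path, visit_first=None):
    """Download an image, keeping cookies across requests."""
    session = requests.Session()
    session.headers['User-Agent'] = 'Mozilla/5.0'  # look like a normal browser
    if visit_first:
        session.get(visit_first)  # load a page first to pick up any session cookies
    r = session.get(url)
    r.raise_for_status()  # fail loudly instead of silently saving an error page
    with open(path, 'wb') as f:
        f.write(r.content)
```

raise_for_status() also tells you immediately whether the "empty image" was really an HTTP error response.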

Download a file from GoogleDrive exportlinks

Trying to download a file directly using Python and the Google Drive API exportlinks response.
Suppose I have an export link like this:
a) https://docs.google.com/feeds/download/documents/export/Export?id=xxxx&exportFormat=docx
To download this file, I simply paste it into the browser, and the file automatically downloads to my Downloads folder.
How do I do the same thing in Python?
EX: module.download_file_using_url(https://docs.google.com/feeds/download/documents/export/Export?id=xxxx&exportFormat=docx)
This is a repost of How do I download a file over HTTP using Python?
In Python 2, use urllib2 which comes with the standard library.
import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()
This is the most basic way to use the library, minus any error handling. You can also do more complex stuff such as changing headers. The documentation can be found here.
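On Python 3, urllib2 was folded into urllib.request, and the equivalent looks like this. A data: URL stands in for the real address so the snippet runs without a network connection; in practice you would pass the export link:

```python
import urllib.request

# Stand-in URL (decodes to b'hello'); substitute the real export link here.
url = 'data:text/plain;base64,aGVsbG8='
with urllib.request.urlopen(url) as response:
    data = response.read()
print(data)
```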

urllib.urlretrieve is failing

I'm trying to download data using commands below.
import urllib
url = 'http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/cm01MAR2002bhav.csv.zip'
urllib.urlretrieve(url, 'myzip')
What I see in the generated file myzip is:
You don't have permission to access "http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/cm01MAR2002bhav.csv.zip" on this server.<P>
Reference #18.7d427b5c.1311889977.25329891
But I'm able to download the file from the website without any problem.
What is the reason for this?
You may need to use urllib2 and set the User-Agent header to something the server recognizes; it might just be blocking anything that doesn't look like a normal browser.
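A sketch of that suggestion using urllib.request (urllib2's Python 3 name). The User-Agent string is an arbitrary browser-like value, and whether this particular server accepts it is untested:

```python
import urllib.request

url = ('http://www.nse-india.com/content/historical/EQUITIES/2002/MAR/'
       'cm01MAR2002bhav.csv.zip')

# Build the request with a browser-like User-Agent header.
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
print(req.get_header('User-agent'))

# Actually fetching and saving the file would then be:
# with urllib.request.urlopen(req) as r:
#     with open('cm01MAR2002bhav.csv.zip', 'wb') as f:
#         f.write(r.read())
```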
