Image uploading from Python or cURL to Tumblr? - python

I'm working on a project that involves uploading an image to Tumblr from Python. I've had luck using Tumblr's API (http://www.tumblr.com/docs/en/api) for regular text posts, but image uploads have been giving me trouble. The error messages their server returns have been limited to just telling me there was an "Error Uploading Photo", which has been less than helpful.
Since their API seems to be based on standard HTTP POST operations, I know there has to be a way to do this. Unfortunately, I haven't made any progress for a couple of days, and I've decided to resort to bothering you guys about it.
I have tried using cURL and Python's libraries httplib, urllib, urllib2, and a third-party library called urllib2_file (http://fabien.seisen.org/python/urllib2_file/). I'm frustrated that I haven't gotten any of them to work, but I'm willing to try other terminal apps you can come up with.
Each method works fine with simple text posts, but none of them seems to get the photo upload done properly.
Here's my syntax for doing it with urllib2_file. Since urllib2 doesn't support 'multipart/form-data' for uploading data, I'm using urllib2_file to add that functionality, but I haven't been able to get it to work. The Tumblr API says their servers accept multipart/form-data as well as the 'normal post' method for uploading files; I'd be happy if either worked.
import urllib, urllib2, urllib2_file

url = "http://www.tumblr.com/api/write"

values1 = {'email': 'EMAIL',
           'password': 'PASSWORD',
           'type': 'regular',
           'title': 'Pythons urllib2',
           'body': 'its pretty nice. Not sure how to make it upload stuff yet, though. Still getting some "error uploading photo" errors... So unhelpful.'}

values2 = {'email': 'EMAIL',
           'password': 'PASSWORD',
           'type': 'photo',
           'data': open('../data/media/pics/2009/05-14/100_1167.JPG'),
           'caption': 'Caption'}

data = urllib.urlencode(values2)
print "just before defining the request"
req = urllib2.Request(url, data)
print "just before doing the urlopen."
#response = urllib2.urlopen(req)
try:
    response = urllib2.urlopen(req)
except urllib2.URLError, e:
    print e.code
    print e.read()
    print "figure out how to handle .read() properly"
#the_page = response.read()
#print the_page
print "done"
This would be the ideal way if it worked since using dictionaries to define the fields is really easy and I could make it look much cleaner in the future.
Any advice on how to troubleshoot what could be going wrong would be appreciated; at this point I don't even know how to find out what the problem is. I wish I had the attention span for the HTTP RFC.
I've been considering sniffing the packets between my computer and the server, but reverse-engineering HTTP might be overkill.
Thanks!

'data': open('../data/media/pics/2009/05-14/100_1167.JPG'),
Looks like you're just passing in a file object; add a .read() there.
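In case it's useful, here's a rough, untested sketch of that change applied to the snippet from the question: open the file in binary mode and url-encode its raw bytes rather than the file object itself.

photo = open('../data/media/pics/2009/05-14/100_1167.JPG', 'rb')
values2 = {'email': 'EMAIL',
           'password': 'PASSWORD',
           'type': 'photo',
           'data': photo.read(),   # the file's bytes, not the file object
           'caption': 'Caption'}
photo.close()

data = urllib.urlencode(values2)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)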

Tumblr's API v2 has a Python client, PyTumblr, which you can find on GitHub.
I have used it to create a terminal-based tool for Tumblr called teblr. You can find the source code here: https://github.com/vijaykumarhackr/teblr/
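For reference, a rough sketch of a photo post with PyTumblr (the OAuth credentials and blog name below are placeholders; see the PyTumblr README for the exact setup):

import pytumblr

# Placeholder credentials: register an app with Tumblr to get real ones.
client = pytumblr.TumblrRestClient('CONSUMER_KEY',
                                   'CONSUMER_SECRET',
                                   'OAUTH_TOKEN',
                                   'OAUTH_SECRET')

# create_photo accepts a local file path via the data argument.
client.create_photo('myblog.tumblr.com',
                    state='published',
                    caption='Caption',
                    data='../data/media/pics/2009/05-14/100_1167.JPG')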

Related

How to handle a post request from a Python program

So basically I would like to know how to handle a POST request from a python program, and store it on the website server so I can make a GET request to retrieve that information. I'm hoping you can help me. Currently this is my code:
import requests

url = 'mywebsitehere.com'
source_code = "print('Hello World')"
data = {'code': source_code, 'format': 'python'}
r = requests.post(url=url, data=data)
print(r.text)
I'm trying to send some code and its format in the POST request, but I'm not sure how to handle the POST request once it reaches the website so other programs can access the data with GET requests. I know how to send POST and GET requests in Python, just not how to handle them once they reach the website/server. From my research, it seems like you have to make a PHP file or something and specify individual boxes or variables for the program to enter the information into.
I know it's a really noob question, but I'm just starting to get into more advanced stuff with Python and modules.
I'm going to learn more about general web development so that, instead of just barely understanding it, I can get a good grasp of POST requests and actually develop my website into something custom rather than copying and pasting other people's work without completely understanding it.
...also I'm not sure how to close a post as "answered" or something, but yeah.
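For illustration only, a minimal sketch of what the receiving side could look like, assuming a small Flask app (the framework, route name, and in-memory storage are all assumptions; the question doesn't say what runs on the server):

from flask import Flask, jsonify, request

app = Flask(__name__)
snippets = []  # naive in-memory store; a real site would use a database

@app.route('/code', methods=['POST'])
def receive_code():
    # request.form holds the fields sent by requests.post(url, data=...)
    snippets.append({'code': request.form.get('code'),
                     'format': request.form.get('format')})
    return 'stored', 201

@app.route('/code', methods=['GET'])
def list_code():
    # other programs can fetch the stored snippets back as JSON
    return jsonify(snippets)

if __name__ == '__main__':
    app.run()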

How to structure get 'review link' request from Vimeo API?

New to Python, and I assume others might benefit from my ignorance.
I'm simply trying to upload via the new Vimeo API and return a 'review link'.
Are there current examples of the vimeo-api in Python? I've read the documentation and can upload perfectly fine. However, when it comes to the HTTP GET I can't seem to figure it out. I'm using Python 2.7.5 and have tried the requests library. I'm ready to give up and just go back to PHP because it's documented so much better.
Any python programmers out there familiar?
EDIT: Since this was written the vimeo.py library was rebuilt. This is now as simple as taking the API URI and requesting vc.get('/videos/105113459') and looking for the review link in the response.
The original:
If you know the API URL you want to retrieve this for, you can convert it into a vimeo.py call by replacing the slashes with dots. The issue with this is that in Python, attributes (the things separated by the dots) can't be plain numbers, so they would be syntax errors.
With our original rule, if you wanted to see /videos/105113459 in the Python library, you would do vc.videos.105113459() (assuming vc = vimeo.VimeoClient(<your token and app data>)).
To resolve this you can instead use python's getattr() built-in function to retrieve this. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459.
I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround.
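A rough sketch of the approach from the edit above, using the rebuilt vimeo.py client (the access token is a placeholder, and the exact field that holds the review link in the returned JSON isn't shown here):

import vimeo

vc = vimeo.VimeoClient(token='YOUR_ACCESS_TOKEN')

# GET /videos/105113459 and look through the JSON for the review link field
response = vc.get('/videos/105113459')
video = response.json()
print(video)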

403 'Access Denied' Error when opening web page with urllib2 in Python

I'm trying to get definitions of words using Google and urllib2 by opening this url, https://www.google.com/search?q=define+<something>, and parsing the source for the definition. However, when I try to access the page I get a 403 error, supposedly to prevent data mining in this sort of fashion. I'm fairly sure it wouldn't be wise to try and bypass that, so I'm wondering if there's an alternative for accessing data from Google's servers, or a data dump I should be using.
Edit: Here is the extent of the code I'm using to access the URL:
url = "https://www.google.com/search?q=define+" + word
try:
source = ulib.urlopen(url)
except ulib.HTTPError, e:
print e.fp.read()
We would need to see your code for confirmation, but your question was probably answered here. In a nutshell, you need to define your user agent.
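Something along these lines, for example (the User-Agent value is just an example browser string):

import urllib2

word = "serendipity"  # example query term
url = "https://www.google.com/search?q=define+" + word
req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
source = urllib2.urlopen(req)
print source.read()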

Get DOM from webpage in python

Hello guys, I'm wondering how to get the DOM from a web page.
So check out this flow:
Example.com > Get DOM > Get Document from DOM > Get Cookie Values from Document
I tried this code but it's not working:
response = urllib2.urlopen('http://Example.com')
print response.info().getheader("cookie")
I also tried print response.read(),
but the output is None for print response.info().getheader("cookie").
I tried Set-Cookie and got values, but not exactly the same as in the browser. I opened the page with a web inspector (Firebug) and got different information, so I'm confused: is Set-Cookie equal to Cookie?
I don't know; please give me some suggestions.
There is something here about HTTP cookies with Python. You might actually be better off using / learning about Python's httplib / http.client, documented here, which would allow you to simulate / build an HTTP client. Or even use the more generic urllib, documented here, which handles more protocols / arbitrary resources; with it you can access the headers via urllib.urlretrieve, if there are any.
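To the Set-Cookie question above: Set-Cookie is the header a server sends to hand out a cookie, while Cookie is the header the browser sends back on later requests, which is why the values can differ from what Firebug shows. A small sketch with urllib2 plus cookielib (Python 2) that collects whatever cookies the server sets:

import cookielib
import urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open('http://example.com')

# cookies parsed from the Set-Cookie response headers
for cookie in jar:
    print cookie.name, '=', cookie.value

# the raw header itself, if the server sent one
print response.info().getheader('Set-Cookie')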

Using Python to download a document that's not explicitly referenced in a URL

I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way in general to tell if a URL has a pdf/doc etc. file that it's linking to if it's not doing so explicitly (e.g. www.domain.com/file.pdf)? Is there a way to get Python to snag that file?
Edit:
Thanks for replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
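For example, a quick sketch of the HEAD-request idea with Python 2's httplib (host and path are placeholders):

import httplib

conn = httplib.HTTPConnection('www.example.com')
conn.request('HEAD', '/some/document')
resp = conn.getresponse()
print resp.status, resp.getheader('content-type')
conn.close()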
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects that you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
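Untested, but the cookie-enabled fetch described above would look roughly like this with urllib2 and cookielib; with a CookieJar attached, the opener carries the session cookie along while following the redirects:

import cookielib
import urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

url = ('http://www.oecd.org/officialdocuments/displaydocument/'
       '?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En')
response = opener.open(url)

print response.info().gettype()   # hopefully application/pdf this time
data = response.read()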
As has been said there is no way to tell content type from URL. But if you don't mind getting the headers for every URL you can do this:
obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a pdf file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.
Also, you should use mime-types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the mimetype with the info() method of the object urllib.urlopen() returns. This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If it's well behaved it'll return the proper mime type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the url directly. You could try to only download the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header not matching the data provided in the body.
To detect the file type in Python 3.x, e.g. in a web app given a URL to a file that may have no extension or a fake extension, you can use python-magic. Install it with
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic  # python-magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)

# Ask libmagic for the MIME type of the downloaded bytes
mime_type = magic.from_buffer(response.read(), mime=True)
print(mime_type)
