Python requests - PUT ignores file upload

I am trying to upload a file through python requests using the PUT method. But on the server side, the file is never received.
The code that I am using is:
files = {'test': open(r"C:\Users\test.jar", 'rb')}
response = session.put(api_base + url.get('url').format(foo, bar),
                       headers=headers, data=data, files=files)
Does PUT ignore file uploads? Is it only valid for POST requests?
What am I doing wrong?

It seems that Python's requests module does not work for file uploads when you use the PUT method as you did. The documentation only shows how to upload files with POST requests. I'm not sure whether this is a bug or intentional.
Others also have problems using PUT requests for file transfer, see e.g.:
Cannot PUT file to Django REST API by python requests
Python requests PUT method creates a zero byte file
The OP moved to PycURL, which seems to work and looks like a good alternative.
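For reference, a minimal PycURL sketch of the same upload done as a PUT could look like this (the target URL is a made-up placeholder; libcurl switches to PUT when the UPLOAD option is set):
import os
import pycurl

path = r"C:\Users\test.jar"
url = "http://example.com/upload/test.jar"  # hypothetical target URL

c = pycurl.Curl()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.UPLOAD, 1)  # upload mode, which libcurl performs as a PUT
with open(path, "rb") as f:
    c.setopt(pycurl.READFUNCTION, f.read)
    c.setopt(pycurl.INFILESIZE, os.path.getsize(path))
    c.perform()
print(c.getinfo(pycurl.RESPONSE_CODE))
c.close()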

Related

How to receive URL File as parameter and save it to disk using FastAPI?

I would like to know if there is a way for FastAPI to receive the URL of a file as a parameter and save that file to disk. I know it is possible with the requests library using the requests.get() method, but is it possible to receive the URL and save the file directly with FastAPI?
I tried using file: UploadFile = File(...), but then it doesn't download the file when the URL is sent.
I don't believe so. I've come across this before and was unable to find a solution (I ended up using requests, as you mentioned), but seeing this I wanted to check again more thoroughly.
Searching the code of the uvicorn and fastapi repositories, I see no functions or code that reference requests or urllib (they do use urllib.parse/quote, etc.), which would be the two likely suspects for building outgoing requests. They do use httpx.AsyncClient, but only in tests. I would expect to see some use of these libraries in the main uvicorn/fastapi code if they were able to make external requests.
Seeing the above, I will actually change my own code to use httpx.AsyncClient anyway, since it is already a dependency.
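As a rough sketch of that idea (the endpoint name, filename handling, and follow_redirects flag are my assumptions, not anything FastAPI provides for this), the URL can be accepted as a plain query parameter and fetched with httpx.AsyncClient before writing it to disk:
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.post("/download")
async def download(url: str):
    # fetch the remote file with httpx, then write the bytes to disk
    async with httpx.AsyncClient(follow_redirects=True) as client:
        resp = await client.get(url)
        resp.raise_for_status()
    filename = url.rsplit("/", 1)[-1] or "downloaded.bin"  # crude filename guess
    with open(filename, "wb") as f:
        f.write(resp.content)
    return {"saved_as": filename, "size": len(resp.content)}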

Send file without MultipartPostHandler (Python)

I'm using MultipartPostHandler in file sending. My code is following:
params = {'file':open(file_name, 'rb')}
headers = {'cookie':session_id}
urllib2.install_opener(urllib2.build_opener(MultipartPostHandler.MultipartPostHandler))
response = urllib2.urlopen(urllib2.Request("www.example.com/upload", params, headers))
How could I do the same (send a file to the server) without using MultipartPostHandler? It would be good to use only built-in Python modules and urllib2. Is it possible?
MultipartPostHandler needs to be installed using easy_install, pip, or from source. I would like to write a Python script that does not require new installations.
Just add it to your original script - it is only one file. Copy and paste the code for the module.
Unfortunately, there is no direct method available to post a multipart file using urllib2. But there are ways to accomplish that by writing a custom form object using the mimetypes and mimetools modules. You could follow this recipe and adapt your form to do a multipart upload with urllib2.
(In Python 3, the data argument of urllib.request can take a file object, but that reads the whole file into memory.)
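To give a feel for what that involves, here is a rough Python 2 sketch that builds the multipart/form-data body by hand so that only built-in modules and urllib2 are needed; the field name, file name, URL, and cookie value are placeholders:
import urllib2

file_name = 'report.pdf'                      # placeholder file
session_id = 'session-cookie-value'           # placeholder for the cookie in the original code
boundary = '----------ThIs_Is_tHe_bouNdaRY'

file_data = open(file_name, 'rb').read()
body = ('--%s\r\n'
        'Content-Disposition: form-data; name="file"; filename="%s"\r\n'
        'Content-Type: application/octet-stream\r\n'
        '\r\n' % (boundary, file_name)) + file_data + '\r\n--%s--\r\n' % boundary

request = urllib2.Request("http://www.example.com/upload", data=body)
request.add_header('Content-Type', 'multipart/form-data; boundary=%s' % boundary)
request.add_header('Cookie', session_id)
response = urllib2.urlopen(request)
print response.read()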

Converting a cURL command to Python's urllib2

Hi guys! I'm currently working with an API that is called with the following cURL line:
curl -vF "files[]=@file.zip" "http://s5.example.com/123"
I'm not interested in using other libraries such as pycurl, because I really only need to convert a line or two. I really appreciate your help!
All you need is the urlopen method of the urllib2 module.
Syntax:
urllib2.urlopen(url[, data][, timeout])
data is a string specifying additional data to send to the server.
Example:
import urllib
tlds = urllib.urlopen("http://data.iana.org/TLD/tlds-alpha-by-domain.txt").readlines()
This example shows a single line of code that extracts all TLDs into a list called tlds by fetching them from iana.org.
For complex operations/options/parameters of urlopen method you can check the official documentation: http://docs.python.org/library/urllib2.html
This request posts the file contents in a file upload field.
The HTTP body of a POST request uploading a file is tricky to assemble - for ordinary text fields, one can just send a dictionary in the "data=" parameter of a urllib2.urlopen call, and the fields will be posted.
File uploads, though, need to send a complex http body, with customized headers.
I don't know if urllib2 can do this - but I found an example of how to do it using httplib and mimetypes (both part of Python's stdlib), so you can take the example code and reuse it in your project:
http://code.activestate.com/recipes/146306-http-client-to-post-using-multipartform-data/
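Assuming you copy the recipe's post_multipart(host, selector, fields, files) helper into your script (that signature is from the linked recipe, so double-check it against the code you paste), the cURL call above would translate to roughly:
file_data = open('file.zip', 'rb').read()
response_body = post_multipart(
    host='s5.example.com',
    selector='/123',
    fields=[],                                   # no ordinary text fields in this call
    files=[('files[]', 'file.zip', file_data)],  # (field name, filename, content)
)
print response_body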

Using Python to download a document that's not explicitly referenced in a URL

I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way, in general, to tell whether a URL links to a PDF/DOC etc. file when it isn't doing so explicitly (e.g. www.domain.com/file.pdf)? And is there a way to get Python to snag that file?
Edit:
Thanks for the replies, several of which suggest downloading the file to see if it's of the correct type. The only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an HTML file with an href containing that same URL.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
Alternatively you can download it and then work out whether what you got is a PDF.
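A small stdlib-only sketch of that HEAD-request idea (Python 2, with a placeholder URL) could look like this:
import httplib
import urlparse

url = "http://www.example.com/some/document"  # placeholder URL
parts = urlparse.urlparse(url)
selector = parts.path or "/"
if parts.query:
    selector += "?" + parts.query

conn = httplib.HTTPConnection(parts.netloc)
conn.request("HEAD", selector)                # headers only, no body download
resp = conn.getresponse()
print resp.status, resp.getheader("Content-Type")
conn.close()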
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
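Untested against that server, but a sketch of the cookie-enabled attempt would look something like this: build an opener with HTTPCookieProcessor so the session cookie set during the redirects is sent back on the next request.
import cookielib
import urllib2

url = ("http://www.oecd.org/officialdocuments/displaydocument/"
       "?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En")

cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
response = opener.open(url)

content_type = response.info().getheader("Content-Type")
print content_type
if content_type and 'pdf' in content_type:
    with open("document.pdf", "wb") as f:  # placeholder output filename
        f.write(response.read())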
As has been said, there is no way to tell the content type from the URL. But if you don't mind fetching the headers for every URL, you can do this:
obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a PDF file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't do better than that.
Also, you should check proper MIME types instead of my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type via the info() method of the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns in its Content-Type header. If it's well behaved, it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't see it from the URL directly. You could try to fetch only the headers of the HTTP response and look at the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header that doesn't match the data in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that may have no extension or a fake extension, you should install python-magic using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import urllib.request
import magic

url = "http://...url to the file ..."
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
# the first couple of kilobytes are enough for libmagic to identify the type
mime_type = magic.from_buffer(response.read(2048), mime=True)
print(mime_type)

How to debug a file upload?

I'm trying to upload a PDF file to a website using Hot Banana's content management system using a Python script. I've successfully logged into the site and can log out, but I can't seem to get file uploads to work.
The file upload is part of a large, complicated web form that submits the form data and the PDF file through a POST. Using Firefox along with the Firebug and Tamper Data extensions, I took a peek at what the browser was sending in the POST and where it was going. I believe I mimicked the data the browser was sending in my code, but I'm still having trouble.
I'm importing cookielib to handle cookies, poster to encode the PDF, and urllib and urllib2 to build the request and send it to the URL.
Is it possible that registering the poster openers is clobbering the cookie processor openers? Am I doing this completely wrong?
Edit: What's a good way to debug the process? At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Edit 2: Chris Lively suggested I post the error I'm getting. The response from urllib2 doesn't generate an exception, but just returns:
<script>
if (parent != window) {
parent.document.location.reload();
} else {
parent.document.location = 'login.cfm';
}
</script>
I'll keep at it.
A tool like Wireshark will give you a more complete trace, at a much lower level than the Firefox plugins.
Often this can be something as simple as not setting the Content-Type correctly, or failing to include the Content-Length.
"What's a good way to debug [a web services] process?"
At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Correct. That's about all there is.
HTTP is a very simple protocol -- you make a request (POST, in this case) and the server responds. Not much else involved and not much more you can do while debugging.
What else would you like? Seriously. What kind of debugger are you imagining might exist for this kind of stateless protocol?
You might be better off instrumenting the server to see why this is failing, rather than trying to debug this on the client side.
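On the opener question raised in the original post: one way to make sure the poster handlers and the cookie processor don't clobber each other is to put them in the same opener. A sketch (the URL and form field are made up, and this is untested against Hot Banana):
import cookielib
import urllib2

from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

# register_openers() builds and installs an opener from poster's streaming
# handlers and returns it; adding HTTPCookieProcessor to that same opener
# keeps the login cookie and the multipart upload working together.
opener = register_openers()
opener.add_handler(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))

datagen, headers = multipart_encode({'file': open('document.pdf', 'rb')})
request = urllib2.Request('http://www.example.com/upload.cfm', datagen, headers)
print urllib2.urlopen(request).read()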