Download doesn't start in web browser with flask.Response - Python

In Flask (micro web framework), we have a view as:
@app.route('/download/<id>/<resolution>/<extension>/')
def download_by_id(id, resolution=None, extension=None):
    stream = youtube.stream_url(id, resolution, extension)
    binary = requests.get(stream['url'], stream=True)
    return flask.Response(
        binary,
        headers={'Content-Disposition': 'attachment; '
                 'filename=' + stream['filename']})
In the template we have a link, Download 240p Video, and when it's clicked it should start downloading that video.
The issue is:
It works fine in some browsers where no download manager like IDM is installed. But IDM fails to download it; it just hangs at http://example.com/download/adkdsk457jds/240p/mp4/
The same goes for Firefox's own download manager: Firefox just downloads a plain .html page, not the actual video.
But videos do get downloaded successfully in Chrome when no IDM or other download manager is installed.
Please help and advise why it's not working. Do I need to change something in the code?

You haven't passed along any response metadata, including the content type; you need to copy over a little more information about the original response to communicate what type of response you are returning. Otherwise defaults are used (dictated either by the HTTP standard or by Flask).
Specifically, at the very least you want to copy across the content type, length, and the transfer encoding:
headers = {
    'Content-Disposition': 'attachment; filename=' + stream['filename']
}
for header in ('content-type', 'content-length', 'transfer-encoding'):
    if header in binary.headers:
        headers[header] = binary.headers[header]
return flask.Response(binary.raw, headers=headers)
I'm using the response.raw underlying raw file object here; passing the response itself should work too, but raw has the added advantage that any compression applied by YouTube is retained.
Some download managers may try to use an HTTP Range request to grab a download in parallel, even when the server is not advertising that it supports such requests. You should probably respond with a 406 Not Acceptable response (requesting byte ranges when they're not supported is an Accept-* violation). You'll need to log what headers the download manager sends to be sure that this is the case.
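A minimal sketch of that check, assuming the app and view from the question:

import flask

@app.route('/download/<id>/<resolution>/<extension>/')
def download_by_id(id, resolution=None, extension=None):
    # log what the download manager actually sends
    app.logger.info('Request headers: %r', dict(flask.request.headers))
    # refuse range requests, since this endpoint never advertises Accept-Ranges
    if 'Range' in flask.request.headers:
        return flask.Response('Range requests not supported', status=406)
    # ... proxy the YouTube stream as shown above ...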

Add 'Content-Type': 'application/octet-stream' to the headers.
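For example, building on the headers dict from the answer above:

headers = {
    'Content-Disposition': 'attachment; filename=' + stream['filename'],
    'Content-Type': 'application/octet-stream',
}
return flask.Response(binary.raw, headers=headers)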

Related

How to upload a binary/video file using Python http.client PUT method?

I am communicating with an API using http.client in Python 3.6.2.
In order to upload a file, it requires a three-stage process.
I have managed to talk successfully using POST methods and the server returns data as I expect.
However, the stage that requires the actual file to be uploaded is a PUT method, and I cannot figure out how to write the code so that it includes a pointer to the actual file on my storage; the file is an mp4 video file.
Here is a snippet of the code with my noob annotations :)
#define connection as HTTPS and define URL
uploadstep2 = http.client.HTTPSConnection("grabyo-prod.s3-accelerate.amazonaws.com")
#define headers
headers = {
    'accept': "application/json",
    'content-type': "application/x-www-form-urlencoded"
}
#define the structure of the request and send it.
#Here it is a PUT request to the unique URL as defined above with the correct file and headers.
uploadstep2.request("PUT", myUniqueUploadUrl, body="C:\Test.mp4", headers=headers)
#get the response from the server
uploadstep2response = uploadstep2.getresponse()
#read the data from the response and put it into a usable variable
step2responsedata = uploadstep2response.read()
The response I am getting back at this stage is an
"Error 400 Bad Request - Could not obtain the file information."
I am certain this relates to the body="C:\Test.mp4" section of the code.
Can you please advise how I can correctly reference a file within the PUT method?
Thanks in advance
uploadstep2.request("PUT", myUniqueUploadUrl, body="C:\Test.mp4", headers=headers)
will put the actual string "C:\Test.mp4" in the body of your request, not the content of the file named "C:\Test.mp4" as you expect.
You need to open the file and read its content, then pass that as the body. Streaming it would avoid that, but AFAIK http.client does not support streaming; and since your file seems to be a video, it is potentially huge and reading it all at once will use plenty of RAM for no good reason.
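A rough sketch of that read-everything approach with http.client, reusing myUniqueUploadUrl and headers from the question:

import http.client

conn = http.client.HTTPSConnection("grabyo-prod.s3-accelerate.amazonaws.com")
with open(r"C:\Test.mp4", "rb") as f:
    video_bytes = f.read()  # the whole file ends up in RAM
conn.request("PUT", myUniqueUploadUrl, body=video_bytes, headers=headers)
print(conn.getresponse().status)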
My suggestion would be to use requests, which is a way better lib for this kind of thing:
import requests

with open(r'C:\Test.mp4', 'rb') as finput:
    response = requests.put('https://grabyo-prod.s3-accelerate.amazonaws.com/youruploadpath', data=finput)
print(response.json())
I do not know if it is useful for you, but you can try to send a POST request with the requests module:
import requests

url = ""
data = {'title': 'metadata', 'timeDuration': 120}
mp3_f = open('/path/your_file.mp3', 'rb')
files = {'messageFile': mp3_f}
req = requests.post(url, files=files, json=data)
print(req.status_code)
print(req.content)
Hope it helps.

Python file upload from URL using requests library

I want to upload a file to a URL. The file I want to upload is not on my computer, but I have the URL of the file. I want to upload it using the requests library. So, I want to do something like this:
url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
r = requests.post(url, files=files)
The only difference is that the file report.xls comes from some URL and is not on my computer.
The only way to do this is to download the body of the URL so you can upload it.
The problem is that a form that takes a file is expecting the body of the file in the HTTP POST. Someone could write a form that takes a URL instead, and does the fetching on its own… but that would be a different form and request than the one that takes a file (or, maybe, the same form, with an optional file and an optional URL).
You don't have to download it and save it to a file, of course. You can just download it into memory:
urlsrc = 'http://example.com/source'
rsrc = requests.get(urlsrc)
urldst = 'http://example.com/dest'
rdst = requests.post(urldst, files={'file': rsrc.content})
Of course in some cases, you might also want to forward along the filename, or some other headers, like the Content-Type. Or, for huge files, you might want to stream from one server to the other without downloading and then uploading the whole file at once. You'll have to do any such things manually, but almost everything is easy with requests, and explained well in the docs.*
* Well, that last example isn't quite easy… you have to get the raw socket-wrappers off the requests and read and write, and make sure you don't deadlock, and so on…
There is an example in the documentation that may suit you. A file-like object can be used as a stream input for a POST request. Combine this with a stream response for your GET (passing stream=True), or one of the other options documented here.
This allows you to do a POST from another GET without buffering the entire payload locally. In the worst case, you may have to write a file-like class as "glue code", allowing you to pass your glue object to the POST that in turn reads from the GET response.
(This is similar to a documented technique using the Node.js request module.)
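A minimal sketch of that streaming idea with placeholder URLs: the GET is made with stream=True, and its raw file object becomes the POST body, which requests reads in chunks.

import requests

src = requests.get('http://example.com/source', stream=True)
src.raise_for_status()

# data= accepts a file-like object, so the payload is forwarded
# as it arrives instead of being buffered in full
dst = requests.post('http://example.com/dest', data=src.raw)
print(dst.status_code)

Note this sends the bytes as the raw request body; a multipart files= upload would still buffer the content in memory unless you bring in something like requests-toolbelt.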
import requests

img_url = "http://...."
res_src = requests.get(img_url)
payload = {}
files = [
    ('files', ('image_name.jpg', res_src.content, 'image/jpeg'))
]
headers = {"token": "******-*****-****-***-******"}
response = requests.request("POST", url, headers=headers, data=payload, files=files)
print(response.text)
The above code is working for me.

How to serve a filetype object in Python

I'm using urllib2 to download a file from another server via a REST API (the URL can't be exposed to the user; that's why it needs to be done on the backend).
It gives me the following response:
(<addinfourl at 4365818480 whose fp = <google.appengine.dist27.socket._fileobject object at 0x1043883d0>>
I'm now trying to find a way to serve this file to the end user (a download). I did quite a bit of research tonight but had no luck. I tried printing .read() and that didn't help either.
Here's some additional information:
The Platform is Google Appengine. And below is the relevant code:
In calltrunk.get_recording:
req = urllib2.Request(url, None, forward_headers)
stream = urllib2.urlopen(req)
In my main.py:
response = calltrunk.get_recording(ConversationId=cId)
print response[0].read()
Could really use a hand here!
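Not from the thread, but a minimal sketch of one way to relay that stream to the user on App Engine (a webapp2 handler; the handler name and filename are made up):

import webapp2

class RecordingHandler(webapp2.RequestHandler):
    def get(self, conversation_id):
        response = calltrunk.get_recording(ConversationId=conversation_id)
        self.response.headers['Content-Type'] = 'application/octet-stream'
        self.response.headers['Content-Disposition'] = 'attachment; filename="recording.mp3"'
        # read the urllib2 response object and write its bytes to the client
        self.response.out.write(response[0].read())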

Receive attachment with urllib - Python

I am testing my webpage software by sending requests to it from Python. I am able to send requests, receive responses, and parse the JSON. However, one option on the webpage is to download files. I send the download request and can confirm that the response headers contain what I expect (application/octet-stream and the appropriate filename), but the Content-Length is 0. If the length is 0, I assume the file was not actually sent. I am able to download files by other means, so I know my software works, but I am having trouble getting it to work with Python.
I build up the request then do:
f = urllib.request.urlopen(request)
f.body = f.read()
I expect data to be in f.body but it is empty (I see "b''")
Is there a different way to access the file contents from an attachment in python?
This is with python-requests instead of urllib, since I'm more familiar with that.
import requests

url = "http://example.com/foobar.jpg"
# make request
r = requests.get(url)
attachment_data = r.content
# save to file
with open(r"C:/pictures/foobar.jpg", 'wb') as f:
    f.write(attachment_data)
Turns out I needed to throw some data into the file in order to have something in the body. I should've noticed this much sooner.

Posting only part of a file with Python's poster.encode

Using the poster.encode module, this works when I post a whole file to Solr:
f = open(filePath, 'rb')
datagen, headers = multipart_encode({'file': f})
# use wt=json because it's more convenient to navigate
request = urllib2.Request(SOLR_BASE_URL + 'update/extract?extractOnly=true&extractFormat=text&indent=true&wt=json', datagen, headers) # assumes solrPath ends in '/'
extracted = urllib2.urlopen(request).read()
However, for some files I'd like to send only the first n bytes of the file. I thought this would work:
f = open(filePath, 'rb')
mp = MultipartParam('file', fileobj=f, filesize=f)
datagen, headers = multipart_encode({'file': mp})
# use wt=json because it's more convenient to navigate
request = urllib2.Request(SOLR_BASE_URL + 'update/extract?extractOnly=true&extractFormat=text&indent=true&wt=json', datagen, headers) # assumes solrPath ends in '/'
extracted = urllib2.urlopen(request).read()
...but I get a timed-out request (and the odd thing is that I then have to restart Apache before requests to my web2py app work again). I get an 'HTTP 400 content missing' error from urlopen() when I leave off the filesize argument. Am I just using MultipartParam incorrectly?
(The point of all this is that I'm using Solr to extract text content and metadata from files. For video and audio files, I'd like to get away with sending just the first 100-300k or so, as presumably the relevant data's all in the file headers.)
The reason you're having trouble is that MIME encoding introduces sentinels into the POST; if you don't specify the file size, you have to use chunked transfer encoding so that the web server knows when to stop reading the file. But that's the other problem: if you stop sending a MIME-encoded POST to a server mid-stream, it'll just sit there waiting for the block to finish. Chunked transfer encoding and mixed-multipart MIME encoding are both dead serious about message segment sizes.
If you only want to send 100-300k of data, then only read that much; every POST you make to the server will then terminate at the byte you want, and at the byte the web server is expecting.
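A minimal sketch of that advice, reusing filePath and SOLR_BASE_URL from the question: read the first chunk yourself and hand poster a complete in-memory "file", so the multipart message is self-consistent.

import urllib2
from StringIO import StringIO
from poster.encode import multipart_encode

n = 300 * 1024  # only the first ~300k
with open(filePath, 'rb') as f:
    head = StringIO(f.read(n))

datagen, headers = multipart_encode({'file': head})
request = urllib2.Request(SOLR_BASE_URL + 'update/extract?extractOnly=true&extractFormat=text&indent=true&wt=json',
                          datagen, headers)
extracted = urllib2.urlopen(request).read()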
