I want to implement forced downloads in the Python Pyramid framework. When a request comes in like
example.com/media/files/test.mp3
the file opens in the browser and starts playing. I want to stop that and force a download instead.
This worked for me: force the download from a view that receives the file name as a request parameter.
import os

from pyramid.response import FileResponse
from pyramid.view import view_config

@view_config(route_name='download')
def download_view(request):
    MEDIA_PATH = os.path.join(PROJECT_ROOT, 'media')
    if request.params.get('filename', ''):
        filename = request.params['filename']
        file_path = os.path.join(MEDIA_PATH, filename)
        base_file_name = os.path.basename(file_path)
        response = FileResponse(file_path, request=request, cache_max_age=86400)
        headers = response.headers
        headers['Content-Type'] = 'application/download'
        headers['Accept-Ranges'] = 'bytes'
        headers['Content-Disposition'] = 'attachment; filename=' + base_file_name
        return response
Add the route for this view in __init__.py:
config.add_route('download', '/download')
Then request /download?filename=test.mp3 and the file is downloaded instead of opened in the browser.
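If you would rather keep the original URL (example.com/media/files/test.mp3) instead of a query parameter, the same idea can be bound to a pattern route. This is only a minimal sketch; the route name, pattern, and PROJECT_ROOT/MEDIA_PATH setup are illustrative assumptions, not part of the original answer:

# __init__.py (assumed route name and pattern)
config.add_route('media_download', '/media/files/{filename}')

# views.py -- same FileResponse approach as above, but the name comes from the URL
import os

from pyramid.response import FileResponse
from pyramid.view import view_config

@view_config(route_name='media_download')
def media_download_view(request):
    MEDIA_PATH = os.path.join(PROJECT_ROOT, 'media')             # same setup as the view above
    filename = os.path.basename(request.matchdict['filename'])   # strip any path components
    file_path = os.path.join(MEDIA_PATH, filename)
    response = FileResponse(file_path, request=request, cache_max_age=86400)
    response.headers['Content-Disposition'] = 'attachment; filename=' + filename
    return response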
Alternatively, just add download="test.mp3" to the download link. So it would be like:
<a href="/media/files/test.mp3" download="test.mp3">Download Now</a>
I was working with the Python Confluence API to download attachments from a Confluence page, and I need to download only files with the .mpp extension. I tried with glob and direct parameters but it didn't work.
Here is my code:
file_name = glob.glob("*.mpp")
attachments_container = confluence.get_attachments_from_content(page_id=33110, start=0, limit=1, filename=file_name)
print(attachments_container)
attachments = attachments_container['results']
for attachment in attachments:
    fname = attachment['title']
    download_link = confluence.url + attachment['_links']['download']
    r = requests.get(download_link, auth=HTTPBasicAuth(confluence.username, confluence.password))
    if r.status_code == 200:
        if not os.path.exists('phoenix'):
            os.makedirs('phoenix')
        fname = ".\\phoenix\\" + fname
glob.glob() operates on your local folder, so you can't use it as a filter for get_attachments_from_content(). Also, don't specify a limit of 1, since that gets you just the first attachment. Specify a high limit, or whatever default will include all of them. (You may have to paginate results.)
However, you can exclude the files you don't want by checking the title of each attachment before you download it, which you have as fname = attachment['title'].
attachments_container = confluence.get_attachments_from_content(page_id=33110, limit=1000)
attachments = attachments_container['results']
for attachment in attachments:
    fname = attachment['title']
    if not fname.lower().endswith('.mpp'):
        # skip the file if it doesn't have that extension
        continue
    download_link = ...
    # rest of your code here
Also, your code looks like a copy-paste from this answer, but you've changed the actual "downloading" part of it. So if your next Stack Overflow question is going to be "how to download a file from Confluence", use that answer's code.
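For reference, the actual "downloading" part usually just writes the response bytes to disk. A minimal sketch, assuming r is the successful requests.get() result and fname is the attachment title from the loop above:

import os

if r.status_code == 200:
    # create the target folder if needed, then write the raw attachment bytes
    if not os.path.exists('phoenix'):
        os.makedirs('phoenix')
    with open(os.path.join('phoenix', fname), 'wb') as out:
        out.write(r.content)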
I'm using the following django/python code to stream a file to the browser:
wrapper = FileWrapper(file(path))
response = HttpResponse(wrapper, content_type='text/plain')
response['Content-Length'] = os.path.getsize(path)
return response
Is there a way to delete the file after the response is returned? Using a callback function or something?
I could just make a cron to delete all tmp files, but it would be neater if I could stream files and delete them as well from the same request.
You can use a NamedTemporaryFile:
import os
from wsgiref.util import FileWrapper

from django.core.files.temp import NamedTemporaryFile
from django.http import HttpResponse

def send_file(request):
    newfile = NamedTemporaryFile(suffix='.txt')
    # save your data to newfile.name
    wrapper = FileWrapper(newfile)
    response = HttpResponse(wrapper, content_type=mime_type)  # mime_type is a placeholder for your content type
    response['Content-Disposition'] = 'attachment; filename=%s' % os.path.basename(newfile.name)
    response['Content-Length'] = os.path.getsize(newfile.name)
    return response
The temporary file should be deleted once the newfile object is garbage-collected.
For future reference:
I just had a case in which I couldn't use temp files for downloads, but I still needed to delete the files afterwards. Here is how I did it (I really didn't want to rely on cron jobs, Celery or the like; it's a very small system and I wanted it to stay that way).
import os

def plug_cleaning_into_stream(stream, filename):
    try:
        closer = getattr(stream, 'close')

        # define a new function that still uses the old one
        def new_closer():
            closer()
            os.remove(filename)
            # any other cleaning you need goes here as well

        # substitute it for the old close() function
        setattr(stream, 'close', new_closer)
    except:
        raise
And then I just took the stream used for the response and plugged the cleaning into it.
import io

from django.http import HttpResponse

def send_file(request, filename):
    with io.open(filename, 'rb') as ready_file:
        plug_cleaning_into_stream(ready_file, filename)
        response = HttpResponse(ready_file.read(), content_type='application/force-download')
        # here go all the rest of the header settings
        # ...
        return response
I know this is quick and dirty, but it works. I doubt it would hold up on a server with thousands of requests a second, but that's not my case here (a few dozen per minute at most).
EDIT: Forgot to mention that I was dealing with very large files that could not fit in memory during the download. That is why I use a BufferedReader (which is what io.open() gives you underneath).
Mostly, we use periodic cron jobs for this.
Django already has one cron job to clean up lost sessions. And you're already running it, right?
See http://docs.djangoproject.com/en/dev/topics/http/sessions/#clearing-the-session-table
You want another command just like this one, in your application, that cleans up old files.
See this http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
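As a sketch only (the app name, directory, and one-hour cutoff below are assumptions for illustration), such a management command could look like this:

# yourapp/management/commands/cleanup_downloads.py (hypothetical app and command name)
import os
import time

from django.core.management.base import BaseCommand

DOWNLOAD_DIR = '/tmp/downloads'   # assumed location of the generated files
MAX_AGE_SECONDS = 60 * 60         # assumed cutoff: anything older than an hour

class Command(BaseCommand):
    help = 'Delete generated download files older than the cutoff.'

    def handle(self, *args, **options):
        now = time.time()
        for name in os.listdir(DOWNLOAD_DIR):
            path = os.path.join(DOWNLOAD_DIR, name)
            if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                os.remove(path)
                self.stdout.write('Removed %s' % path)

Run it from cron with python manage.py cleanup_downloads, just like the session-clearing command.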
Also, you may not really need to send this file from Django at all. Sometimes you can get better performance by creating the file in a directory served by Apache and redirecting to that URL, so Apache serves the file for you. It doesn't handle the cleanup any better, however.
One way would be to add a view that deletes the file and call it from the client side using an asynchronous call (XMLHttpRequest). A variant of this would involve reporting back from the client on success, so that the server can mark the file for deletion and have a periodic job clean it up.
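A minimal sketch of that idea; the URL, view name, and download directory are assumptions, not part of the original answer:

# views.py: hypothetical cleanup endpoint the client calls once the download has finished
import os

from django.http import JsonResponse

DOWNLOAD_DIR = '/tmp/downloads'  # assumed location of the generated files

def delete_download(request, filename):
    # basename() keeps the client from escaping the download directory
    path = os.path.join(DOWNLOAD_DIR, os.path.basename(filename))
    if os.path.isfile(path):
        os.remove(path)
    return JsonResponse({'deleted': os.path.basename(filename)})

On the client side, an XMLHttpRequest (or fetch) call to this endpoint after the download completes triggers the cleanup.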
This is just using the regular Python approach (very simple example):
# something generates a file at filepath
from django.http import HttpResponse
from subprocess import Popen

# open the file and read its contents into memory
with open(filepath, "rb") as fid:
    filedata = fid.read()
# remove the file
p = Popen("rm %s" % filepath, shell=True)
# make the response
response = HttpResponse(filedata, content_type="text/plain")
return response
Python 3.7, Django 2.2.5:
from tempfile import NamedTemporaryFile

from django.http import HttpResponse

with NamedTemporaryFile(suffix='.csv', mode='r+', encoding='utf8') as f:
    f.write('\uFEFF')  # BOM
    f.write('sth you want')
    # ref: https://docs.python.org/3/library/tempfile.html#examples
    f.seek(0)
    data = f.read()

response = HttpResponse(data, content_type="text/plain")
response['Content-Disposition'] = 'inline; filename=export.csv'
I am trying to download a .tar file through a link, not directly to the file. If you browse to the page, a popup appears with "Select path to download".
the url looks like this: http://document.internal.somecompany.com/Download?DocNo=2/1449-CUF10101/1
I am new to Python.
This is my code so far with this part of the project:
manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
manager.add_password(None, url, secrets["user"], secrets["password"])
#Create an authentication handler using the password manager
auth = urllib2.HTTPBasicAuthHandler(manager)
#Create an opener that will replace the default urlopen method on further calls
opener = urllib2.build_opener(auth)
urllib2.install_opener(opener)
#Here you should access the full url you wanted to open
response = urllib2.urlopen(url)
print response
Printing the response returns this: <addinfourl at 139931044873856 whose fp = <socket._fileobject object at 0x7f443c35e8d0>>
I do not know how to go further, or whether the response is anywhere near correct. I need to open the .tar and access a RAML file in it, and I do not know if I need to download the file and then open it, or just open it directly and print out the RAML file.
Any suggestions?
You need to open a file in 'wb' mode and write the content of the file you are trying to download, like the following:
response = urllib2.urlopen(url)
with open(os.path.basename(url), 'wb') as wf:
    wf.write(response.read())
You can also specify a full path instead of using just os.path.basename(url).
And have a look at the tarfile module for more info on how to deal with .tar files.
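For example, once the archive is saved locally, tarfile can list its members and pull out the RAML file. A minimal sketch; the archive and member names below are assumptions for illustration:

import tarfile

with tarfile.open('document.tar') as tar:
    print(tar.getnames())                 # list the members to find the RAML file
    member = tar.extractfile('api.raml')  # returns a file-like object, or None
    if member is not None:
        print(member.read().decode('utf-8'))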
In my project, when a user clicks a link, an AJAX request sends the information required to create a CSV. The CSV takes a long time to generate and so I want to be able to include a download link for the generated CSV in the AJAX response. Is this possible?
Most of the answers I've seen return the CSV in the following way:
return Response(
    csv,
    mimetype="text/csv",
    headers={"Content-disposition": "attachment; filename=myplot.csv"})
However, I don't think this is compatible with the AJAX response I'm sending with:
return render_json(200, {'data': params})
Ideally, I'd like to be able to send the download link in the params dict. But I'm also not sure if this is secure. How is this problem typically solved?
I think one solution may be the futures library (pip install futures on Python 2; on Python 3, concurrent.futures is in the standard library). The first endpoint can queue up the task and send the file name back, and another endpoint can be used to retrieve the file. I also included gzip because it might be a good idea if you are sending larger files. More robust solutions use Celery or RabbitMQ or something along those lines, but this is a simple solution that should accomplish what you are asking for.
from flask import Flask, jsonify, Response
from uuid import uuid4
from concurrent.futures import ThreadPoolExecutor
import time
import os
import gzip

app = Flask(__name__)

# Global variables used by the thread executor, and the thread executor itself
NUM_THREADS = 5
EXECUTOR = ThreadPoolExecutor(NUM_THREADS)
OUTPUT_DIR = os.path.dirname(os.path.abspath(__file__))

# this is your long-running processing function;
# it takes in your arguments from the /queue-task endpoint
def a_long_running_task(*args):
    time_to_wait, output_file_name = int(args[0][0]), args[0][1]
    output_string = 'sleeping for {0} seconds. File: {1}'.format(time_to_wait, output_file_name)
    print(output_string)
    time.sleep(time_to_wait)
    filename = os.path.join(OUTPUT_DIR, output_file_name)
    # here we are writing to a gzipped file to save space and decrease the size of the file sent over the network
    with gzip.open(filename, 'wb') as f:
        f.write(output_string.encode('utf-8'))
    print('finished writing {0} after {1} seconds'.format(output_file_name, time_to_wait))

# This route starts the task and then gives the caller the file name for reference
@app.route('/queue-task/<wait>')
def queue_task(wait):
    output_file_name = str(uuid4()) + '.csv'
    EXECUTOR.submit(a_long_running_task, [wait, output_file_name])
    return jsonify({'filename': output_file_name})

# this takes the file name and returns the file if it exists, otherwise notifies that it is not yet done
@app.route('/getfile/<name>')
def get_output_file(name):
    file_name = os.path.join(OUTPUT_DIR, name)
    if not os.path.isfile(file_name):
        return jsonify({"message": "still processing"})
    # read without gzip.open to keep it compressed
    with open(file_name, 'rb') as f:
        resp = Response(f.read())
    # set headers to declare the encoding and to send as an attachment
    resp.headers["Content-Encoding"] = 'gzip'
    resp.headers["Content-Disposition"] = "attachment; filename={0}".format(name)
    resp.headers["Content-type"] = "text/csv"
    return resp

if __name__ == '__main__':
    app.run()
I'm trying to serve a txt file generated with some content, and I am having some issues. I've created the temp file and written the content using NamedTemporaryFile, and set delete to false to debug; however, the downloaded file does not contain anything.
My guess is the response values are not pointing to the correct file, hence nothing is being downloaded. Here's my code:
f = NamedTemporaryFile()
f.write(p.body)
response = HttpResponse(FileWrapper(f), mimetype='application/force-download')
response['Content-Disposition'] = 'attachment; filename=test-%s.txt' % p.uuid
response['X-Sendfile'] = f.name
Have you considered just sending p.body through the response like this:
response = HttpResponse(mimetype='text/plain')
response['Content-Disposition'] = 'attachment; filename="%s.txt"' % p.uuid
response.write(p.body)
X-Sendfile requires the path to the file in response['X-Sendfile'].
So, you can do
response['X-Sendfile'] = smart_str(path_to_file)
Here, path_to_file is the full path to the file (not just the name of the file)
Check out this Django snippet.
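Put together, a minimal X-Sendfile view might look like the sketch below; it assumes the web server (e.g. Apache with mod_xsendfile) is configured to honour the header, and the file path is illustrative:

import os

from django.http import HttpResponse
from django.utils.encoding import smart_str

def send_file(request):
    path_to_file = '/srv/files/generated.txt'  # assumed full path to the generated file

    response = HttpResponse(content_type='application/force-download')
    response['Content-Disposition'] = 'attachment; filename="%s"' % os.path.basename(path_to_file)
    response['X-Sendfile'] = smart_str(path_to_file)
    # The body stays empty: the web server reads and serves the file itself when it sees X-Sendfile.
    return response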
There can be several problems with your approach:
- the file content might not have been flushed to disk yet; add f.flush() as mentioned in the comment above
- NamedTemporaryFile is deleted on closing, which might happen as soon as you exit your function, so the web server has no chance to pick it up
- the temporary file name might be outside the paths the web server is configured to serve with X-Sendfile
Maybe it would be better to use StreamingHttpResponse instead of creating temporary files and using X-Sendfile...
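A minimal sketch of that alternative, reusing the names from the question (p with .body and .uuid is assumed to be available in the view); no temporary file or X-Sendfile is needed:

from django.http import StreamingHttpResponse

def send_body(request):
    # stream the generated text in chunks instead of writing it to a temporary file
    def chunks(text, size=8192):
        for i in range(0, len(text), size):
            yield text[i:i + size]

    response = StreamingHttpResponse(chunks(p.body), content_type='text/plain')
    response['Content-Disposition'] = 'attachment; filename=test-%s.txt' % p.uuid
    return response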
import urllib2

from django.http import HttpResponse

url = "http://chart.apis.google.com/chart?cht=qr&chs=300x300&chl=s&chld=H|0"
opener = urllib2.urlopen(url)
mimetype = "application/octet-stream"
response = HttpResponse(opener.read(), content_type=mimetype)
response["Content-Disposition"] = "attachment; filename=aktel.png"
return response