How to access temporary uploaded file in web2py? - python

I am using a SQLFORM to upload a video file and want to encode the video while it is uploading. But I noticed that the uploaded file is not saved to the uploads directory until the upload is complete. Is there a temporary file, and how can I access it? Thanks.

I'm not sure how you might be able to process the file while it is uploading (i.e., process the bytes as they are received by the server), but if you can wait until the file is fully uploaded, you can access the uploaded file as a Python cgi.FieldStorage object:
def upload():
    if request.vars.myfile:
        video = encode_video(request.vars.myfile.file)
        # [do something with video]
    form = SQLFORM.factory(Field('myfile', 'upload',
                                 uploadfolder='/path/to/upload')).process()
    return dict(form=form)
Upon upload, request.vars.myfile will be a cgi.FieldStorage object, and the open file object will be in request.vars.myfile.file. Note that if the encoding takes a while, you might want to pass it off to a task queue (see the sketch below) rather than handle it in the controller.
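For illustration only (this is not from the original answer): a minimal sketch of handing the encoding off to web2py's built-in scheduler, where encode_video_task is a hypothetical wrapper around your encoding logic and db is assumed to already be defined in a model file.

# in a model file, e.g. models/scheduler.py
from gluon.scheduler import Scheduler

def encode_video_task(filename):
    # hypothetical helper; call your real encoding routine here
    encode_video(filename)

scheduler = Scheduler(db, dict(encode_video_task=encode_video_task))

# in the controller: queue the task instead of encoding inline
def upload():
    form = SQLFORM.factory(Field('myfile', 'upload',
                                 uploadfolder='/path/to/upload')).process()
    if form.accepted:
        scheduler.queue_task('encode_video_task',
                             pvars=dict(filename=form.vars.myfile))
    return dict(form=form)

A separate scheduler worker (e.g. python web2py.py -K yourapp) would then pick up and run the queued task.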

Related

Django uploading file from disk to S3 using django-storages

In my Django project I use django-storages to save files uploaded via a form to S3.
Model is defined as
class Uploads(models.Model):
    file = models.FileField(upload_to=GetUploadPath)
I'm making changes to the file that was uploaded via Form by saving to disk and then trying to pass a File object to the model.save() method.
s = 'C:\Users\XXX\File.csv'
with open(os.path.join(settings.MEDIA_ROOT, s), "rb") as f:
    file_to_s3 = File(f)
If I pass the file object using request.FILES.get('file'), the in-memory file gets uploaded properly; however, when I try to upload the modified file from disk, I get this error:
RuntimeError: Input C:\Users\XXX\File.csv of type: <class 'django.core.files.base.File'> is not supported.
I followed this post but it doesn't help; any thoughts, please?
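Purely as an illustrative sketch (not from the original post, and the filename below is hypothetical): one common way to hand a file on disk to a FileField so that django-storages pushes it to the configured S3 storage is through the field's own save() method.

import os
from django.conf import settings
from django.core.files import File

path = os.path.join(settings.MEDIA_ROOT, 'File.csv')  # hypothetical path
with open(path, 'rb') as f:
    upload = Uploads()
    # FieldFile.save() writes the content through the storage backend
    # (S3 via django-storages) and then saves the model instance
    upload.file.save(os.path.basename(path), File(f), save=True)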

Uploaded Files get overwritten in Dropbox

I am trying to upload users' files to Dropbox in Django. When I use the built-in 'open()' function, it throws the following exception:
expected str, bytes or os.PathLike object, not TemporaryUploadedFile
When I don't, the file gets uploaded successfully but is blank (write mode).
UPLOAD HANDLER:
def upload_handler(DOC, PATH):
    dbx = dropbox.Dropbox(settings.DROPBOX_APP_ACCESS_TOKEN)
    with open(DOC, 'rb') as f:
        dbx.files_upload(f.read(), PATH)
    dbx.sharing_create_shared_link_with_settings(PATH)
How do I upload files or pass a write mode to the Dropbox API without the file being overwritten?
To specify a write mode when uploading files to Dropbox, pass the desired WriteMode to the files_upload method as the mode parameter. That would look like this:
dbx.files_upload(f.read(), PATH, mode=dropbox.files.WriteMode('overwrite'))
This only controls how Dropbox commits the file (see the WriteMode docs for info); it doesn't control what data you're uploading. In your code, it is uploading whatever is returned by f.read(), so make sure that's what you expect it to be.
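As a side note (an assumption on my part, not covered in the answer above): the earlier open() error suggests DOC is already a Django uploaded-file object rather than a path on disk, in which case a sketch like the following reads its bytes directly instead of passing it to open().

import dropbox
from django.conf import settings

def upload_handler(DOC, PATH):
    # DOC is assumed to be a Django UploadedFile (e.g. TemporaryUploadedFile),
    # which is already file-like, so read from it directly
    dbx = dropbox.Dropbox(settings.DROPBOX_APP_ACCESS_TOKEN)
    DOC.seek(0)
    dbx.files_upload(DOC.read(), PATH,
                     mode=dropbox.files.WriteMode('overwrite'))
    dbx.sharing_create_shared_link_with_settings(PATH)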

Sending file as an attachment

In my attempt to send a file to the user I am using the following:
return static_file( filename, root='/home/nikos/public_html/static/files' )
But when it comes to .pdf files, it opens them in the browser instead of just sending the file, and for other files such as .docx it sends them with the filename being just 'file' rather than the original file's name.
How can I send the files properly as attachments?
As mentioned in the docs you can simply pass a download=True argument and that should be it.
e.g.
return static_file(filename, root='/static/files', download=True)
You can also suggest a different filename for the download and pass that instead of True, e.g. download="Custom "+filename
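For context, a minimal self-contained sketch of a Bottle route using this (the route path here is an assumption, not from the question):

from bottle import route, run, static_file

@route('/download/<filename>')
def download(filename):
    # download=filename forces a Content-Disposition: attachment header
    # and keeps the original filename instead of rendering the file inline
    return static_file(filename,
                       root='/home/nikos/public_html/static/files',
                       download=filename)

run(host='localhost', port=8080)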

How to download a file from Flask and render template afterwards?

In Flask I want the user to be able to download a file from an S3 bucket using Boto. Of course, if Flask downloads something from S3, it stores the file on the server. However, I want the file to be downloaded to the user's machine when they click a download button. Is that possible? My idea was the following: when the download button is clicked, Flask fetches the file from S3 and stores it on the server, then the user's machine downloads the file, and afterwards the file on the server gets deleted. Please tell me if there is a better way. If I do it like this, it works. Unfortunately, I need to render my dashboard again after the file has been downloaded, so I can't return the file.
Flask:
@app.route('/download')
@login_required
def dowloadfile():
    try:
        # Boto3 downloading the file file.csv here
        return send_file('file.csv', attachment_filename='file.csv')
    except Exception as ermsg:
        print(ermsg)
        return render_template('dashboard.html', name=current_user.username, ermsg=ermsg)
HTML:
<button class="buttonstyle" onclick="showImage();">Download</button>
The problem is that when the download button is clicked, a full-screen image is shown, which is my loading screen. This loading screen disappears when the function is done and a new HTML page is rendered. With the code above, the loading screen appears and stays forever, even when the file has already been downloaded to the user's machine. How could you fix that?
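As an illustrative sketch only (the bucket name and key below are assumptions, not from the question, and the route reuses the question's app): one way to send the S3 object to the user without keeping a copy on the server is to stream it through an in-memory buffer.

import io
import boto3
from flask import send_file

@app.route('/download')
@login_required
def downloadfile():
    buf = io.BytesIO()
    s3 = boto3.client('s3')
    # hypothetical bucket/key; nothing is written to the server's disk
    s3.download_fileobj('my-bucket', 'file.csv', buf)
    buf.seek(0)
    return send_file(buf, mimetype='text/csv',
                     as_attachment=True, attachment_filename='file.csv')

Hiding the loading screen once the download has started is a client-side concern and is not addressed by this sketch.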

Flask - Handling Form File & Upload to AWS S3 without Saving to File

I am using a Flask app to receive a multipart/form-data request with an uploaded file (a video, in this example).
I don't want to save the file in the local directory because this app will be running on a server, and saving it will slow things down.
I am trying to use the file object created by the Flask request.files[''] method, but it doesn't seem to be working.
Here is that portion of the code:
@bp.route('/video_upload', methods=['POST'])
def VideoUploadHandler():
    form = request.form
    video_file = request.files['video_data']
    if video_file:
        s3 = boto3.client('s3')
        s3.upload_file(video_file.read(), S3_BUCKET, 'video.mp4')
        return json.dumps('DynamoDB failure')
This returns an error:
TypeError: must be encoded string without NULL bytes, not str
on the line:
s3.upload_file(video_file.read(), S3_BUCKET, 'video.mp4')
I did get this to work by first saving the file and then accessing that saved file, so it's not an issue with catching the request file. This works:
video_file.save(form['video_id']+".mp4")
s3.upload_file(form['video_id']+".mp4", S3_BUCKET, form['video_id']+".mp4")
What would be the best method to handle this file data in memory and pass it to the s3.upload_file() method? I am using the boto3 methods here, and I am only finding examples with the filename used in the first parameter, so I'm not sure how to process this correctly using the file in memory. Thanks!
First you need to be able to access the raw data sent to Flask. This is not as easy as it seems, since you're reading a form. To be able to read the raw stream you can use flask.request.stream, which behaves similarly to StringIO. The trick here is that you cannot call request.form or request.files first, because accessing those attributes will load the whole stream into memory or into a temporary file.
You'll need some extra work to extract the right part of the stream (which unfortunately I cannot help you with because it depends on how your form is made, but I'll let you experiment with this).
Finally, you can use the set_contents_from_file method from the (legacy) boto library, since boto3's upload_file expects a filename rather than a file-like object (StringIO and such).
Example code:
import boto
import boto.s3.connection
from boto.s3.key import Key

@bp.route('/video_upload', methods=['POST'])
def VideoUploadHandler():
    # form = request.form  <- Don't do that
    # video_file = request.files['video_data']  <- Don't do that either
    # This is a file-like object which does not only contain your video file
    video_file_and_metadata = request.stream
    # This is what you need to implement
    video_title, video_stream = extract_title_stream(video_file_and_metadata)
    # Then, upload to the bucket with the legacy boto API
    conn = boto.connect_s3()
    bucket = conn.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT)
    k = Key(bucket)
    k.key = video_title
    k.set_contents_from_file(video_stream)
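As an aside (not part of the answer above): boto3 itself accepts file-like objects through upload_fileobj, so if letting Flask parse the form is acceptable (Werkzeug may spool large uploads to a temporary file), a sketch like this avoids writing the file out yourself. The bucket and key names are taken from the question's example.

import json
import boto3
from flask import request

@bp.route('/video_upload', methods=['POST'])
def VideoUploadHandler():
    # Werkzeug's FileStorage object is file-like, so it can be streamed
    # to S3 without an explicit save() to local disk
    video_file = request.files['video_data']
    s3 = boto3.client('s3')
    s3.upload_fileobj(video_file, S3_BUCKET, 'video.mp4')
    return json.dumps('upload complete')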
