Python: htpasswd-protect zip files and delete them later

I have a Flask web app, and after a long-running job finishes I want to send an email with a download link to the zip file that was created and stored during the run. The zip file should be htpasswd-protected. My ideas so far:
Create a zip file with the results and store it inside a folder in the Flask app root.
Question: how do I put the zip file behind htpasswd protection?
Send an email with Flask-Mail containing the link and the password.
Delete the zip file after some time.
How do I check when a file needs to be deleted? My idea was to check with every newly submitted job and delete everything older than a certain number of weeks (a rough sketch of this follows the code below).

import io
import time
import zipfile

from flask import send_file

memory_file = io.BytesIO()  # in-memory buffer the archive is written to
with zipfile.ZipFile(memory_file, 'w') as zf:
    files = result['files']
    for individualFile in files:
        data = zipfile.ZipInfo(individualFile['fileName'])
        data.date_time = time.localtime(time.time())[:6]
        data.compress_type = zipfile.ZIP_DEFLATED
        zf.writestr(data, individualFile['fileData'])
memory_file.seek(0)
return send_file(memory_file, attachment_filename='capsule.zip', as_attachment=True)
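For the deletion part, a minimal sketch of the "clean up on every new job" idea could look like this; the results folder name and the two-week retention are assumptions for illustration, not part of the original setup:

import pathlib
import time

MAX_AGE_WEEKS = 2  # assumed retention period

def delete_old_zips(folder):
    # remove zip files older than MAX_AGE_WEEKS from the given folder
    cutoff = time.time() - MAX_AGE_WEEKS * 7 * 24 * 3600
    for zip_path in pathlib.Path(folder).glob('*.zip'):
        if zip_path.stat().st_mtime < cutoff:
            zip_path.unlink()

# call on every newly submitted job, e.g.
# delete_old_zips(os.path.join(app.root_path, 'results'))  # 'results' is a placeholder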

Related

# Python: upload a file to Dropbox

How do I find the path of my Dropbox folder and upload a file there?
I want to upload a daily CSV file to my Dropbox account, but I'm getting a ValidationError (among other errors).
My code:
# finding the path
import pathlib
import dropbox
import os

# "Automation" is the name of my folder in Dropbox
pathlib.Path.home() / "Automation"
Out[37]: WindowsPath('C:/Users/pb/Automation')

dbx = dropbox.Dropbox('My-token here')
dbx.users_get_current_account()
Out[38]: FullAccount(account_id='accid', name=Name(given_name='pb', surname='manager', familiar_name='pb', display_name='pb', abbreviated_name='pb'), email='example@example.com', email_verified=True, disabled=False, locale='en', referral_link='https://www.dropbox.com/referrals/codigo', is_paired=False, account_type=AccountType('basic', None), root_info=UserRootInfo(root_namespace_id='1111111111', home_namespace_id='11111111'), profile_photo_url='https://dl-web.dropbox.com/account_photo/get/sssssssssssssssssss', country='US', team=None, team_member_id=None)

# Now trying to list the folder; I just want to upload a file there
response = dbx.files_list_folder(path='user:/pb/automation')
print(response)

for entry in dbx.files_list_folder('https://www.dropbox.com/home/automation').entries:
    print(entry.name)
ValidationError: 'user:/pb/automation' did not match pattern '(/(.|[\r\n])*)?|id:.*|(ns:[0-9]+(/.*)?)'
That error happens because the path parameter that the API is expecting needs to start with a '/'. It could be called out better in the docs.
Is the Automation folder in the root of your Dropbox directory? If so, then '/automation' should be sufficient for path. Try tinkering with the /files/list_folder endpoint in the Dropbox API explorer until you find the correct path.
Your for loop is likely to throw an error too, though. If you're just trying to loop over the results of the list_folder call, I'd suggest changing it to
for entry in response.entries:
    print(entry)
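Since the end goal is uploading a daily CSV, the upload call follows the same path rule (the destination must start with '/'); here is a minimal sketch, where the local file name and the '/Automation/daily.csv' destination are placeholders rather than anything from the original code:

from pathlib import Path

import dropbox
from dropbox.files import WriteMode

dbx = dropbox.Dropbox('My-token here')

local_csv = Path.home() / "Automation" / "daily.csv"  # assumed local file
with open(local_csv, 'rb') as f:
    # overwrite so the daily run replaces yesterday's upload
    dbx.files_upload(f.read(), '/Automation/daily.csv', mode=WriteMode('overwrite'))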

Copying .csv file located on Heroku server to local drive

I have a Python/Django web application deployed to Heroku that writes information to a .csv file.
Once the file has been written, I want to pull it down from the Heroku server to the user's local drive.
I don't need to persist the file anywhere, so I am avoiding using S3 or storing it in the database.
I have used the Heroku "ps:copy" command, which works, but surely this means the user would need the Heroku CLI installed on their machine?
Is there any other way?
I've pasted the code below that currently generates the .csv using the djqscsv library, which works with Django QuerySets:
# Generate report filename
filename = djqscsv.generate_filename(qs, append_datestamp=True)
# Generate report
try:
    with open(filename, 'ab') as csv_file:
        print(filename)
        write_csv(qs, csv_file)
    messages.success(request, 'Consultation added to report successfully!')
    messages.warning(request, 'Note: Certain needs may not appear in report, \
        this is a result of filtering process.')
So once "csv_file" has been written I would then redirect to the "csv_view" you have described above, obviously without writing any further rows?
This should do the trick. When sent to the csv_view, Django generates a CSV and has it automatically download to the client's browser.
Your provided code:
# Generate report filename
filename = djqscsv.generate_filename(qs, append_datestamp=True)
# Generate report
try:
    with open(filename, 'ab') as csv_file:
        print(filename)
        write_csv(qs, csv_file)
    messages.success(request, 'Consultation added to report successfully!')
    messages.warning(request, 'Note: Certain needs may not appear in report, \
        this is a result of filtering process.')
You need to merge this code with my code into the same view.
def csv_view(request):
    filename = djqscsv.generate_filename(qs, append_datestamp=True)
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename="{}.csv"'.format(filename)
    writer = csv.writer(response)
    writer.writerow(qs)  # use a for loop if you have multiple rows
    messages.success(request, 'Consultation added to report successfully!')
    messages.warning(request, 'Note: Certain needs may not appear in report, \
        this is a result of filtering process.')
    return response
Just to be clear, csv_view is where the CSV is generated, not merely a link to the CSV generated in another view.
This method does not save the CSV to the Dyno either. I thought that it did and just deleted it after, but I don't think it saves it to the server ever.
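If the report has more than one row, the writerow(qs) line above needs the for loop mentioned in its comment; a small sketch, where the field names passed to values_list are placeholders for whatever columns the report actually has:

writer.writerow(['field_one', 'field_two'])             # header row (placeholder names)
for row in qs.values_list('field_one', 'field_two'):    # placeholder field names
    writer.writerow(row)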
Discovered that the djqscsv library has render_to_csv_response included, which solves the problem:
from djqscsv import render_to_csv_response

# Generate file name from QuerySet
filename = djqscsv.generate_filename(qs, append_datestamp=True)
# Auto-download the csv file in the browser
return render_to_csv_response(qs, filename=filename)

How can I download a zip to a Python directory from Google Storage after obtaining the response object?

After running the following code successfully, I think I am close to getting access to the zip file in Google Cloud Storage. However, I cannot figure out what to do next: how to download it, or otherwise make the zip file available to the Python environment as a programmable object.
from gs import GSClient

client = GSClient()
object_meta = client.get("b/rcmikejupyter/o/output1.zip")
with client.get("b/rcmikejupyter/o/output1.zip", params=dict(alt="media"), stream=True) as res:
    object_bytes = res.raw.read()
Assuming this is a bytes object:
with open("pathto/yourfile.zip", "wb") as file:
    file.write(object_bytes)
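If the goal is to use the archive directly from Python rather than writing it to disk first, the same bytes can be wrapped in an in-memory buffer; a minimal sketch with the standard library, where the member-listing and single-file read are just for illustration:

import io
import zipfile

# object_bytes is the payload read from the storage response above
archive = zipfile.ZipFile(io.BytesIO(object_bytes))
print(archive.namelist())                 # list the files inside the zip
with archive.open(archive.namelist()[0]) as member:
    first_member_bytes = member.read()    # read one member into memory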

My function containing send_file() does not seem to update

I am using a Flask application to upload some PDF files, convert them to an Excel file, and send that file back to the user. I am using an instance folder to store the PDF and Excel files.
But when the user presses the "Download" button to download the generated Excel file, an old file is downloaded (from an older session).
Moreover, when I change my code, for example by renaming this Excel file, I can see the new name in the instance folder, but when I download the file with the web app it still has the old name (and the old content). I have no idea where the web app is looking for this old file...
MEDIA_FOLDER = '/media/htmlfi/'

app = Flask(__name__)
app.config.from_object(Config)
INSTANCE_FOLDER = app.instance_path
app.config['UPLOAD_FOLDER'] = INSTANCE_FOLDER + MEDIA_FOLDER

@app.route('/file/')
def send():
    folder = app.config['UPLOAD_FOLDER']
    try:
        return send_file(folder + "file.xlsx", as_attachment=True)
    finally:
        os.remove(folder + "file.xlsx")
<a href="{{ url_for('send') }}" ><button class='btn btn-default'>DOWNLOAD</button></a>
I am really new to web apps in general, thank you for your help :)
send_file takes a cache_timeout parameter, which is the number of seconds you want the download to be cached. By default it is 12 hours.
return send_file(
    file.file_path(),
    as_attachment=True,
    cache_timeout=app.config['FILE_DOWNLOAD_CACHE_TIMEOUT'],
    attachment_filename=file.file_name
)
http://flask.pocoo.org/docs/1.0/api/
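Applied to the send() view from the question, turning the cache off would look roughly like this; this is a sketch assuming Flask 1.x, where the parameter is still called cache_timeout (Flask 2.x renamed it to max_age):

@app.route('/file/')
def send():
    folder = app.config['UPLOAD_FOLDER']
    try:
        # cache_timeout=0 tells the client not to reuse a previously cached copy
        return send_file(folder + "file.xlsx",
                         as_attachment=True,
                         cache_timeout=0)
    finally:
        os.remove(folder + "file.xlsx")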

Dropbox Python API not updating file

My code uploads a txt file to my Dropbox, but the document itself has no content: the only thing inside it is the title of the file, 'test_data.txt'. The data that is in the real file is not there. The file also never updates when running the script a second time, but I suspect that is because the contents of the .txt file are never actually read. If anyone could help me with this I would appreciate it.
import dropbox
from dropbox.files import WriteMode
overwrite = WriteMode('overwrite', None)
token = 'xxxx'
dbx = dropbox.Dropbox(token)
dbx.users_get_current_account()
dbx.files_upload('test_data.txt', '/test_data.txt', mode = WriteMode('overwrite'))
files_upload should receive the content to upload. In your current code you are asking it to upload the string 'test_data.txt' as the file '/test_data.txt'.
with open('test_data.txt', 'rb') as fh:
    # keep the overwrite mode so a second run replaces the existing file
    dbx.files_upload(fh.read(), '/test_data.txt', mode=WriteMode('overwrite'))
