I am receiving an uploaded file from a POST request:
file = request.post['ufile']
I want to get the file's path. How can I get it?
You have to use the request.FILES dictionary.
Check out the official documentation for the UploadedFile object. You can use the UploadedFile.temporary_file_path() method, but beware that only files uploaded to disk expose it (that is, normally, only when the TemporaryFileUploadHandler upload handler is used).
upload = request.FILES['ufile']
path = upload.temporary_file_path()
In the normal case, though, you would want to work with the uploaded file object directly:
upload = request.FILES['ufile']
content = upload.read()  # For small files
# ... or ...
for chunk in upload.chunks():
    do_something_with_chunk(chunk)  # For bigger files
You should use request.FILES['ufile'].file.name.
You will get something like this: /var/folders/v7/1dtcydw51_s1ydkmypx1fggh0000gn/T/tmpKGp4mX.upload
Note that for file.name to be a path on disk, your uploaded file has to be bigger than 2.5 MB, the default threshold below which Django keeps uploads in memory.
If you want to change this, see File Upload Settings in the Django documentation.
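For example, a minimal sketch of the relevant settings, assuming you want every upload written to disk (the setting names are Django's documented file-upload settings; the values shown are illustrative):
# settings.py -- force all uploads to be streamed to disk so that
# temporary_file_path() is always available:
FILE_UPLOAD_HANDLERS = [
    "django.core.files.uploadhandler.TemporaryFileUploadHandler",
]
# Alternatively, lower the in-memory threshold (default is 2.5 MB):
FILE_UPLOAD_MAX_MEMORY_SIZE = 1024 * 1024  # 1 MB, illustrative value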
We cannot get the file's path from the POST request, only the filename, because the browser does not send the client's file system path (Flask has no access to the client's file system). If you need the file on disk to perform some operations on it, you can create a temp directory and save the file there; then you have a real path.
import tempfile
import shutil

dirpath = tempfile.mkdtemp()  # create a temporary directory
# save the uploaded file there and perform some operations if needed
shutil.rmtree(dirpath)  # remove the temp directory when done
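A minimal sketch of that approach, assuming Flask's request API (request.files and FileStorage.save()); the field name 'ufile' is carried over from the question, and the other names are illustrative:
import os
import shutil
import tempfile

dirpath = tempfile.mkdtemp()
upload = request.files['ufile']  # Flask FileStorage object (assumption)
saved_path = os.path.join(dirpath, upload.filename)
upload.save(saved_path)  # now saved_path is a real path on disk
# ... perform some operations on saved_path ...
shutil.rmtree(dirpath)  # remove the temp directory when done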
My code takes a list of file and folder paths, loops through them, and uploads each to Google Drive. If a given path is a directory, the code creates a .zip file before uploading. Once the upload is complete, I need the code to delete the .zip file that was created, but the deletion throws an error: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Temp\Newfolder.zip'. The file_path being given is C:\Temp\Newfolder. From what I can tell, the only process using the file is this script, but with open(...) should be closing the file when the processing is complete. Looking for suggestions on what could be done differently.
import os
import zipfile
import time

def add_list(filePathList, filePath):
    filePathList.append(filePath)
    return filePathList

def deleteFiles(filePaths):
    for path in filePaths:
        os.remove(path)

for file_path in file_paths:
    if os.path.isdir(file_path) and not os.path.splitext(file_path)[1]:
        # Compress the folder into a .zip file
        folder_name = os.path.basename(file_path)
        zip_file_path = os.path.join('C:\\Temp', folder_name + '.zip')
        with zipfile.ZipFile(zip_file_path, 'w') as zip_file:
            for root, dirs, files in os.walk(file_path):
                for filename in files:
                    file_to_zip = os.path.join(root, filename)
                    zip_file.write(file_to_zip, os.path.relpath(file_to_zip, file_path))
        # Update the file path to the zipped file
        file_path = zip_file_path
    # Create request body
    request_body = {
        'name': os.path.basename(file_path),
        'mimeType': 'application/zip' if file_path.endswith('.zip') else 'text/plain',
        'parents': [parent_id],
        'supportsAllDrives': True
    }
    # Open the file and execute the request
    with open(file_path, "rb") as f:
        media_file = MediaFileUpload(file_path, mimetype='application/zip' if file_path.endswith('.zip') else 'text/plain')
        upload_file = service.files().create(body=request_body, media_body=media_file, supportsAllDrives=True).execute()
    # Print the response
    print(upload_file)
    if file_path.endswith('.zip'):
        add_list(filePathList, file_path)

time.sleep(10)
deleteFiles(filePathList)
You're using with statements to properly close the files you open, but you're not actually using those open files; you're just passing the path to another API that is presumably opening the file for you under the hood. Check the documentation for the APIs here:
media_file = MediaFileUpload(file_path, mimetype='application/zip' if file_path.endswith('.zip') else 'text/plain')
upload_file = service.files().create(body=request_body, media_body=media_file, supportsAllDrives=True).execute()
to figure out whether MediaFileUpload objects, or the various things done with .create/.execute that use them, provide some mechanism for ensuring deterministic resource cleanup. As is, if MediaFileUpload opens the file and nothing inside .create/.execute explicitly closes it, at least one such file is guaranteed to still be open when you try to remove the files at the end, and possibly more than one if reference cycles are involved or you're on an alternate Python interpreter. That would cause exactly the problem you see on Windows. I can't say what might or might not be required, because you don't show the implementation or specify the package it comes from.
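If this is the Google API Python client (google-api-python-client), one option, sketched purely under that assumption, is to open the file yourself and wrap the handle in MediaIoBaseUpload, so the with block genuinely controls the handle's lifetime:
from googleapiclient.http import MediaIoBaseUpload

mimetype = 'application/zip' if file_path.endswith('.zip') else 'text/plain'
with open(file_path, "rb") as f:
    media_file = MediaIoBaseUpload(f, mimetype=mimetype)
    upload_file = service.files().create(
        body=request_body,
        media_body=media_file,
        supportsAllDrives=True,
    ).execute()
# The handle is closed when the with block exits, so the later os.remove()
# is no longer racing against a handle this script still holds.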
Even if you are careful not to hold open handles yourself, there are cases where other processes can lock the file, and you need to retry a few times. Virus scanners can cause this issue by attempting to scan the file just as you're deleting it; properly implemented, they would cause no problem, but they're software written by humans, and therefore some of them will misbehave. It's particularly common for freshly created files (virus scanners are innately suspicious of any new file, especially compressed files), and as it happens, you're creating fresh zip files and deleting them in short order, so you may be stuck retrying a few times.
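A minimal sketch of such a retry loop (the function name and the timing constants are illustrative assumptions, not anything from the post):
import os
import time

def remove_with_retry(path, attempts=5, delay=0.5):
    # Retry deletion to ride out transient Windows file locks,
    # e.g. a virus scanner briefly holding the freshly created file.
    for attempt in range(attempts):
        try:
            os.remove(path)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise  # still locked after all attempts; give up
            time.sleep(delay)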
I am stuck with a really weird problem.
I have a file called: "test.txt".
It is in the same directory as views.py, but I can't read it: FileNotFoundError.
But if I create read_file.py in the same directory as views.py and test.txt, it works absolutely fine.
What is wrong with views.py? Is this some sort of restriction by Django?
This code works in read_file.py but doesn't work in views.py:
fkey = open("test.txt", "rb")
key = fkey.read()
I think the problem is relative vs. absolute file paths. Using the following Python snippet, you can work out where the interpreter's current working directory is:
import os
print(os.getcwd())
You should notice it's not the same directory as views.py, since views.py is likely being invoked from another module. You may choose to change all your open calls to include the whole path to these files, or use an implementation like this:
import os

# This function can be used as a replacement for `open`; it resolves
# paths relative to this module's directory instead of the current
# working directory.
def open_relative(path, flag="r"):
    # Build the path by joining this file's directory with the given path.
    relative_path = os.path.join(os.path.dirname(__file__), path)
    return open(relative_path, flag)  # return the file handle
Then, this should work:
fkey = open_relative("test.txt", "rb")
key = fkey.read()
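An equivalent one-liner using pathlib, assuming the same layout (test.txt sitting next to views.py):
from pathlib import Path

key = (Path(__file__).resolve().parent / "test.txt").read_bytes()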
Hope this helps!
Attempting to retrieve a file from FTP and save it to an S3 bucket within a Lambda function.
I can confirm the first part of the code works, as I can see the list of files printed to the CloudWatch logs.
import ftplib
from ftplib import FTP
import zipfile
import boto3
s3 = boto3.client('s3')
S3_OUTPUT_BUCKETNAME = 'my-s3bucket'
ftp = FTP('ftp.godaddy.com')
ftp.login(user='auctions', passwd='')
ftp.retrlines('LIST')
The next part was resulting in the following error:
module initialization error: [Errno 30] Read-only file system: 'tdnam_all_listings.csv.zip'
However, I managed to overcome this by adding '/tmp' to the file location, as per the following code:
fileName = 'all_expiring_auctions.json.zip'
with open('/tmp/fileName', 'wb') as file:
    ftp.retrbinary('RETR ' + fileName, file.write)
Next, I am attempting to unzip the file from the temporary location:
with zipfile.ZipFile('/tmp/fileName', 'r') as zip_ref:
    zip_ref.extractall('')
Finally, I am attempting to save the file to a particular 'folder' in the S3 bucket, as follows:
data = open('/tmp/all_expiring_auctions.json')
s3.Bucket('brnddmn-s3').upload_fileobj('data','my-s3bucket/folder/')
The code produces no errors that I can see in the log; however, the unzipped file is not reaching the destination despite my efforts.
Any help greatly appreciated.
Firstly, you have to use the /tmp directory for working with files in Lambda; it is the only writable location. ZipFile.extractall('') will extract into your current working directory, though, assuming the zip content is a simple plain-text file with no relative path. To extract into the /tmp directory, use:
zip_ref.extractall('/tmp')
I'm not sure why there are no errors logged. data = open(...) should throw an error if no file is found. If required, you can explicitly print whether the file exists:
import os
print(os.path.exists('/tmp/all_expiring_auctions.json'))  # True/False
Finally, once you have ensured the file exists: note that s3 was created with boto3.client('s3'), and the client API has no Bucket() method (that belongs to boto3.resource('s3')), so with the client you pass the bucket name as an argument. (Not sure if your bucket name is 'brnddmn-s3' or 'my-s3bucket'.) Also, the first argument to upload_fileobj() should be a file object opened in binary mode, i.e., data rather than the string 'data', and the key should be the object's filename in S3, not a folder name.
Putting it together, the last lines should look like this:
S3_OUTPUT_BUCKETNAME = 'my-s3bucket'  # Replace with your S3 bucket name
with open('/tmp/all_expiring_auctions.json', 'rb') as data:
    s3.upload_fileobj(data, S3_OUTPUT_BUCKETNAME, 'folder/all_expiring_auctions.json')
My Flask app has a function that compresses log files in a directory into a zip file and then sends the file to the user to download. The compression works, except that when the client receives the zip file, it contains a series of folders matching the absolute path of the original files that were zipped on the server. However, the zip file that was made in the server's static folder does not.
Zipfile contents in static folder: "log1.bin, log2.bin"
Zipfile contents that was sent to user: "/home/user/server/data/log1.bin, /home/user/server/data/log2.bin"
I don't understand why using send_file seems to make this change to the zip file contents and fill the received zip file with subfolders. The actual contents of the received zip file do in fact match the contents of the sent zip file in terms of data, but the user has to click through several directories to get to the files. What am I doing wrong?
@app.route("/download")
def download():
    os.chdir(data_dir)
    if os.path.isfile("logs.zip"):
        os.remove("logs.zip")
    log_dir = os.listdir('.')
    log_zip = zipfile.ZipFile('logs.zip', 'w')
    for log in log_dir:
        log_zip.write(log)
    log_zip.close()
    return send_file("logs.zip", as_attachment=True)
Using send_from_directory(directory, "logs.zip", as_attachment=True) fixed everything. It appears this call is better for serving up static files.
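For reference, a minimal sketch of the fixed route under the same assumptions as the code above (data_dir being the directory where logs.zip is written):
from flask import send_from_directory

@app.route("/download")
def download():
    # ... build logs.zip inside data_dir exactly as before ...
    return send_from_directory(data_dir, "logs.zip", as_attachment=True)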
With py2exe you can add additional data inside the library zip file. Now I was wondering: how do you access this data? Do you need to read it out of the zip file, or can you just access it like any other file? Or perhaps there's another way to access it?
I personally never use the zip file. Instead, I pass the data files my program uses to the setup method and use the bundle_files option (as described at the bottom of this page). For instance, the program I create using this call
setup(name = "Urban Planning",
      windows = [{'script': "main.py", "dest_base": "Urban_Planning"}],
      options = opts,             # Dictionary of options
      zipfile = None,             # Don't create a library zip file
      data_files = Mydata_files)  # Add list of data files to the folder
also has a piece before it where a config file and some images for the user interface are added like this
Mydata_files = []  # List of data files to include

# Get the images from the [script root]/ui/images/ folder
for files in os.listdir(sys.path[0] + '/ui/images/'):
    f1 = sys.path[0] + '/ui/images/' + files
    if os.path.isfile(f1):  # This will skip directories
        f2 = ('ui/images', [f1])
        Mydata_files.append(f2)

# Get the config file from the [script root]/Configs folder
Mydata_files.append(('Configs', [sys.path[0] + '/Configs/defaults.cfg']))
This way I can access my config file just as I would when running the script from IDLE or the command prompt, and my UI images display correctly.
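As a sketch of what that runtime access can look like (the frozen-versus-script check relies on the sys.frozen flag that py2exe sets; the paths mirror the layout above and are otherwise illustrative):
import os
import sys

if getattr(sys, 'frozen', False):
    # Running as a py2exe executable: data_files sit next to the .exe
    base_dir = os.path.dirname(sys.executable)
else:
    # Running as a plain script
    base_dir = os.path.dirname(os.path.abspath(__file__))

config_path = os.path.join(base_dir, 'Configs', 'defaults.cfg')
with open(config_path) as cfg:
    print(cfg.read())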