I'm creating an API using Flask where a zip file should be downloaded on the client side. The zip file is converted into binary data and sent to the client, which regenerates the binary data back into a zip file. The server side is working and a zip file is downloaded, but the file is empty inside. How do I fix this?
This is the server side:
@app.route('/downloads/', methods=['GET'])
def download():
    from flask import Response
    import io
    import zipfile
    import time

    FILEPATH = "/home/Ubuntu/api/files.zip"
    fileobj = io.BytesIO()
    with zipfile.ZipFile(fileobj, 'w') as zip_file:
        zip_info = zipfile.ZipInfo(FILEPATH)
        zip_info.date_time = time.localtime(time.time())[:6]
        zip_info.compress_type = zipfile.ZIP_DEFLATED
        with open(FILEPATH, 'rb') as fd:
            zip_file.writestr(zip_info, fd.read())
    fileobj.seek(0)
    return Response(fileobj.getvalue(),
                    mimetype='application/zip',
                    headers={'Content-Disposition': 'attachment;filename=files.zip'})
And this is the client side:
bin_data=b"response.content" #Whatever binary data you have store in a variable
binary_file_path = 'files.zip' #Name for new zip file you want to regenerate
with open(binary_file_path, 'wb') as f:
f.write(bin_data)
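Note that bin_data as written holds the literal bytes b"response.content" rather than the actual response body, so the regenerated file cannot be a valid archive. A minimal client sketch, assuming the server above is reachable at http://localhost:5000:

import requests

# hypothetical URL; adjust host and port to match your deployment
resp = requests.get('http://localhost:5000/downloads/')
with open('files.zip', 'wb') as f:
    f.write(resp.content)  # the real response bytes, not a literal string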
I am trying to build a simple Streamlit app where I upload a CSV file, load it into a dataframe, display the dataframe, and then upload it to a pre-defined FTP server.
The first part works: the file is successfully uploaded and visualized. But I cannot upload it to the FTP server. This is my code:
import ftplib
import pandas as pd
import streamlit as st

ftp_server = "ftp.test.com"
ftp_username = "user"
ftp_password = "password"

input_file = st.file_uploader("Upload a CSV File", type=['csv'])

if (input_file is not None) and input_file.name.endswith(".csv"):
    df = pd.read_csv(input_file, delimiter="\t", encoding='ISO-8859-1')
    st.dataframe(df)
    session = ftplib.FTP(ftp_server, ftp_username, ftp_password)
    file = open(input_file, "rb")
    session.storbinary(input_file.name, input_file)
    input_file.close()
    session.quit()
    st.success(f"The {input_file.name} was successfully uploaded to the FTP server: {ftp_server}!")
I am getting this error:
TypeError: expected str, bytes or os.PathLike object, not UploadedFile.
I am using Streamlit v.1.1.0.
Please note that I have simplified my code and replaced the FTP credentials. In the real world, I would probably use try/except for the session connection, etc.
I guess you get the error here:
file = open(input_file, "rb")
That line is both wrong and useless (you never use the file). Remove it.
You might need to seek input_file back to the beginning after read_csv has consumed it:
input_file.seek(0)
You are missing the upload command (STOR) in the storbinary call:
session.storbinary("STOR " + input_file.name, input_file)
I have the following problem in Python:
I am looking to create a zipfile in Blob Storage consisting of files from an array of URLs, but I don't want to build the entire zipfile in memory and then upload it. Ideally, I want to stream the files into the zipfile in Blob Storage. I found this write-up for C#, https://andrewstevens.dev/posts/stream-files-to-zip-file-in-azure-blob-storage/, as well as this answer, also in C#: https://stackoverflow.com/a/54767264/10550055.
I haven't been able to find equivalent functionality in the Python Azure Blob SDK or the Python zipfile library.
Try this:
from zipfile import ZipFile
from azure.storage.blob import BlobServiceClient
import os, requests

tempPath = '<temp path>'
if not os.path.isdir(tempPath):
    os.mkdir(tempPath)

zipFileName = 'test.zip'
storageConnstr = ''
container = ''
blob = BlobServiceClient.from_connection_string(storageConnstr).get_container_client(container).get_blob_client(zipFileName)

fileURLs = {'https://cdn.pixabay.com/photo/2015/04/23/22/00/tree-736885__480.jpg',
            'http://1812.img.pp.sohu.com.cn/images/blog/2009/11/18/18/8/125b6560a6ag214.jpg',
            'http://513.img.pp.sohu.com.cn/images/blog/2009/11/18/18/27/125b6541abcg215.jpg'}

def download_url(url, save_path, chunk_size=128):
    r = requests.get(url, stream=True)
    with open(save_path, 'wb') as fd:
        for chunk in r.iter_content(chunk_size=chunk_size):
            fd.write(chunk)

zipObj = ZipFile(tempPath + zipFileName, 'w')

# download each file and write it to the zip
for url in fileURLs:
    localFilePath = tempPath + os.path.basename(url)
    download_url(url, localFilePath)
    zipObj.write(localFilePath)

zipObj.close()

# upload the zip
with open(tempPath + zipFileName, 'rb') as stream:
    blob.upload_blob(stream)
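If the goal is to avoid the local temp files, one variation is to build the archive in an io.BytesIO buffer and upload that. This is a sketch under the same storageConnstr, container, and fileURLs assumptions as above; note it still holds the whole zip in memory rather than truly streaming it to the blob:

import io

buf = io.BytesIO()
with ZipFile(buf, 'w') as zipObj:
    for url in fileURLs:
        r = requests.get(url)
        # write each downloaded body straight into the in-memory archive
        zipObj.writestr(os.path.basename(url), r.content)

buf.seek(0)
blob.upload_blob(buf, overwrite=True)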
I have this code for the server:
@app.route('/get', methods=['GET'])
def get():
    return send_file("token.jpg", attachment_filename="token.jpg", mimetype='image/jpg')
and this code for getting the response:
r = requests.get(url + '/get')
And I need to save the file from the response to the hard drive, but I can't use r.files. What do I need to do in this situation?
Assuming the GET request is valid, you can use Python's built-in function open to open a file in binary mode and write the returned content to disk. Example below.
file_content = requests.get('http://yoururl/get')
save_file = open("sample_image.png", "wb")
save_file.write(file_content.content)
save_file.close()
As you can see, to write the image to disk, we use open, and write the returned content to 'sample_image.png'. Since your server-side code seems to be returning only one file, the example above should work for you.
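A slightly safer variant of the same idea uses a context manager, so the file handle is closed even if the write fails:

with open("sample_image.png", "wb") as save_file:
    save_file.write(file_content.content)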
You can set the stream parameter and extract the filename from the HTTP headers. Then the raw data from the undecoded body can be read and saved chunk by chunk.
import os
import re
import requests

resp = requests.get('http://127.0.0.1:5000/get', stream=True)
name = re.findall('filename=(.+)', resp.headers['Content-Disposition'])[0]
dest = os.path.join(os.path.expanduser('~'), name)

with open(dest, 'wb') as fp:
    while True:
        chunk = resp.raw.read(1024)
        if not chunk:
            break
        fp.write(chunk)
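Alternatively, requests can do the chunking for you with iter_content, which (unlike resp.raw.read) also decodes any gzip or deflate content-encoding; a minimal variant of the loop above:

with open(dest, 'wb') as fp:
    for chunk in resp.iter_content(chunk_size=1024):
        fp.write(chunk)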
I'm trying to create a function that downloads a file from an FTP server in memory and returns it. In this case I am trying to download a zip file and unzip it without writing the file locally, but I am getting the following error:
ValueError: I/O operation on closed file.
Here is my current code:
from io import BytesIO
from ftplib import FTP_TLS

def download_from_ftp(fp):
    """
    Retrieves a file from an FTP server
    """
    ftp_host = 'some ftp url'
    ftp_user = 'ftp username'
    ftp_pass = 'ftp password'
    with FTP_TLS(ftp_host) as ftp:
        ftp.login(user=ftp_user, passwd=ftp_pass)
        ftp.prot_p()
        with BytesIO() as download_file:
            ftp.retrbinary('RETR ' + fp, download_file.write)
            download_file.seek(0)
            return download_file
And here is my code that tries to unzip the file:
import zipfile
from ftp import download_from_ftp

ftp_file = download_from_ftp('ftp zip file path')

with zipfile.ZipFile(ftp_file, 'r') as zip_ref:
    ...  # do some stuff with the files in the zip
By using BytesIO as a context manager, you close the buffer on exit from the with block, so download_file no longer has an open file handle by the time it is returned to the caller.
You can simply assign the instantiated BytesIO object to a variable instead. Change:
with BytesIO() as download_file:
to:
download_file = BytesIO()
and dedent the block.
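Applied to the function above, a corrected sketch (host and credentials remain placeholders):

def download_from_ftp(fp):
    """
    Retrieves a file from an FTP server into an in-memory buffer
    """
    ftp_host = 'some ftp url'
    ftp_user = 'ftp username'
    ftp_pass = 'ftp password'
    with FTP_TLS(ftp_host) as ftp:
        ftp.login(user=ftp_user, passwd=ftp_pass)
        ftp.prot_p()
        download_file = BytesIO()  # plain assignment: the buffer stays open
        ftp.retrbinary('RETR ' + fp, download_file.write)
        download_file.seek(0)
        return download_file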
Using ftplib in Python, you can download files, but it seems you are restricted to using the file name only (not the full file path). The following code successfully downloads the requested file:
import ftplib

ftp = ftplib.FTP("ladsweb.nascom.nasa.gov")
ftp.login()
ftp.cwd("/allData/5/MOD11A1/2002/001")
ftp.retrbinary('RETR MOD11A1.A2002001.h00v08.005.2007079015634.hdf',
               open("MOD11A1.A2002001.h00v08.005.2007079015634.hdf", 'wb').write)
As you can see, first a login to the site is established (ftp.login()), and then the current directory is set (ftp.cwd()). After that, you need to give just the file name to download a file that resides in the current directory.
How about downloading the file directly by using its full path/link?
import ftplib
ftp = ftplib.FTP("ladsweb.nascom.nasa.gov")
ftp.login()
a = 'allData/5/MOD11A1/2002/001/MOD11A1.A2002001.h00v08.005.2007079015634.hdf'
fhandle = open('ftp-test', 'wb')
ftp.retrbinary('RETR ' + a, fhandle.write)
fhandle.close()
This solution uses the urlopen function from the urllib.request module (in Python 2 it was urllib.urlopen). The urlopen function will let you download both ftp and http URLs. I like using it because you can connect and get all the data in one line. The last three lines extract the filename from the URL and then save the data to that filename.
from urllib.request import urlopen
url = 'ftp://ladsweb.nascom.nasa.gov/allData/5/MOD11A1/2002/001/MOD11A1.A2002001.h00v08.005.2007079015634.hdf'
data = urlopen(url).read()
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
f.write(data)