I'm performing a simple task of uploading a file using the Python requests library. I searched Stack Overflow and no one seemed to have the same problem, namely that the file is not received by the server:
import requests
url='http://nesssi.cacr.caltech.edu/cgi-bin/getmulticonedb_release2.cgi/post'
files={'files': open('file.txt','rb')}
values={'upload_file' : 'file.txt' , 'DB':'photcat' , 'OUT':'csv' , 'SHORT':'short'}
r=requests.post(url,files=files,data=values)
I'm filling the value of 'upload_file' keyword with my filename, because if I leave it blank, it says
Error - You must select a file to upload!
And now I get
File file.txt of size bytes is uploaded successfully!
Query service results: There were 0 lines.
Which comes up only if the file is empty. So I'm stuck as to how to send my file successfully. I know that the file works because if I go to this website and manually fill in the form it returns a nice list of matched objects, which is what I'm after. I'd really appreciate all hints.
Some other threads related (but not answering my problem):
Send file using POST from a Python script
http://docs.python-requests.org/en/latest/user/quickstart/#response-content
Uploading files using requests and send extra data
http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow
If upload_file is meant to be the file, use:
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
and requests will send a multi-part form POST body with the upload_file field set to the contents of the file.txt file.
The filename will be included in the mime header for the specific field:
>>> import requests
>>> open('file.txt', 'wb') # create an empty demo file
<_io.BufferedWriter name='file.txt'>
>>> files = {'upload_file': open('file.txt', 'rb')}
>>> print(requests.Request('POST', 'http://example.com', files=files).prepare().body.decode('ascii'))
--c226ce13d09842658ffbd31e0563c6bd
Content-Disposition: form-data; name="upload_file"; filename="file.txt"
--c226ce13d09842658ffbd31e0563c6bd--
Note the filename="file.txt" parameter.
You can use a tuple for the files mapping value, with between 2 and 4 elements, if you need more control. The first element is the filename, followed by the contents, and an optional content-type header value and an optional mapping of additional headers:
files = {'upload_file': ('foobar.txt', open('file.txt','rb'), 'text/x-spam')}
This sets an alternative filename and content type, leaving out the optional headers.
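If you also need the fourth element, a sketch could look like this (the X-My-Header name is purely illustrative, not something the target server is known to expect):
files = {'upload_file': ('foobar.txt', open('file.txt', 'rb'), 'text/x-spam',
                         {'X-My-Header': 'my-value'})}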
If you are meaning the whole POST body to be taken from a file (with no other fields specified), then don't use the files parameter, just post the file directly as data. You then may want to set a Content-Type header too, as none will be set otherwise. See Python requests - POST data from a file.
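As a minimal sketch of that variant (assuming the endpoint accepts a plain-text body; adjust the Content-Type to whatever your server expects):
with open('file.txt', 'rb') as f:
    r = requests.post(url, data=f, headers={'Content-Type': 'text/plain'})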
(2018) The Python requests library has simplified this process; we can use the files argument to signal that we want to upload a multipart-encoded file:
url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
r = requests.post(url, files=files)
r.text
Client Upload
If you want to upload a single file with the Python requests library, note that requests supports streaming uploads, which allow you to send large files or streams without reading them into memory:
with open('massive-body', 'rb') as f:
    requests.post('http://some.url/streamed', data=f)
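If you need finer control over the chunking, requests also accepts a generator for data, in which case the body is sent using chunked transfer encoding. A sketch, assuming a 1MB chunk size:
def read_in_chunks(path, chunk_size=1024 * 1024):
    # yield the file one chunk at a time instead of reading it whole
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

requests.post('http://some.url/streamed', data=read_in_chunks('massive-body'))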
Server Side
Then, on the server side, save the stream to a file without loading it into memory. The following is an example using Flask file uploads.
#app.route("/upload", methods=['POST'])
def upload_file():
from werkzeug.datastructures import FileStorage
FileStorage(request.stream).save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return 'OK', 200
Or use werkzeug Form Data Parsing, as mentioned in a fix for the issue of "large file uploads eating up memory", in order to avoid using memory inefficiently on large file uploads (e.g. a 22 GiB file in ~60 seconds, with memory usage constant at about 13 MiB).
#app.route("/upload", methods=['POST'])
def upload_file():
def custom_stream_factory(total_content_length, filename, content_type, content_length=None):
import tempfile
tmpfile = tempfile.NamedTemporaryFile('wb+', prefix='flaskapp', suffix='.nc')
app.logger.info("start receiving file ... filename => " + str(tmpfile.name))
return tmpfile
import werkzeug, flask
stream, form, files = werkzeug.formparser.parse_form_data(flask.request.environ, stream_factory=custom_stream_factory)
for fil in files.values():
app.logger.info(" ".join(["saved form name", fil.name, "submitted as", fil.filename, "to temporary file", fil.stream.name]))
# Do whatever with stored file at `fil.stream.name`
return 'OK', 200
You can send any file via a POST API; when calling the API you just need to pass files={'any_key': fobj}:
import requests

url = "https://request-url.com"
# Do not set a Content-Type header yourself here: when files= is used,
# requests generates the multipart/form-data header with the correct boundary.
with open(filepath, 'rb') as fobj:
    response = requests.post(url, files={'file': fobj})
print("Status Code", response.status_code)
print("JSON Response ", response.json())
@martijn-pieters' answer is correct; however, I wanted to add a bit of context about data= and also about the other side, the Flask server, for the case where you are trying to upload files and JSON.
From the request side, this works as Martijn describes:
files = {'upload_file': open('file.txt','rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
However, on the Flask side (the receiving webserver on the other side of this POST), I had to use request.form:
#app.route("/sftp-upload", methods=["POST"])
def upload_file():
if request.method == "POST":
# the mimetype here isnt application/json
# see here: https://stackoverflow.com/questions/20001229/how-to-get-posted-json-in-flask
body = request.form
print(body) # <- immutable dict
body = request.get_json() will return nothing, and body = request.get_data() will return a blob containing lots of things, like the filename etc.
Here's the bad part: on the client side, changing data={} to json={} results in this server not being able to read the KV pairs! As in, this will result in a {} body above:
r = requests.post(url, files=files, json=values)  # No!
This is bad because the server does not have control over how the user formats the request, and json= is going to be the habit of requests users.
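One workaround, shown as a sketch below (the metadata field name is just an example, not something the server mandates), is to serialise the JSON yourself and send it as an ordinary form field next to the file; the Flask side can then parse that field back into a dict:
import json

files = {'upload_file': open('file.txt', 'rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
# serialise the JSON into a normal form field so it survives the multipart POST
r = requests.post(url, files=files, data={'metadata': json.dumps(values)})

# and on the Flask side:
# metadata = json.loads(request.form['metadata'])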
Upload:
with open('file.txt', 'rb') as f:
    files = {'upload_file': f.read()}
    values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
    r = requests.post(url, files=files, data=values)
Download (Django):
with open('file.txt', 'wb') as f:
    f.write(request.FILES['upload_file'].file.read())
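For larger files on the Django side, UploadedFile also exposes a chunks() iterator; a sketch that avoids reading the entire upload into memory at once:
with open('file.txt', 'wb') as f:
    # iterate over the upload in chunks instead of one big read()
    for chunk in request.FILES['upload_file'].chunks():
        f.write(chunk)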
Regarding the answers given so far, there was always something missing that prevented them from working on my side. So let me show you what worked for me:
import os
import requests

API_ENDPOINT = "http://localhost:80"
access_token = "sdfJHKsdfjJKHKJsdfJKHJKysdfJKHsdfJKHs"  # TODO: get fresh token here

def upload_engagement_file(filepath):
    url = API_ENDPOINT + "/api/files"  # add any URL parameters if needed
    hdr = {"Authorization": "Bearer %s" % access_token}
    with open(filepath, "rb") as fobj:
        file_obj = fobj.read()
        file_basename = os.path.basename(filepath)
        file_to_upload = {"file": (str(file_basename), file_obj)}
        finfo = {"fullPath": filepath}
        upload_response = requests.post(url, headers=hdr, files=file_to_upload, data=finfo)
    # the with block closes the file; no explicit fobj.close() is needed
    # print("Status Code ", upload_response.status_code)
    # print("JSON Response ", upload_response.json())
    return upload_response
Note that requests.post(...) needs
a url parameter, containing the full URL of the API endpoint you're calling, built from the API_ENDPOINT; here we assume an http://localhost:80/api/files endpoint to POST a file to
a headers parameter, containing at least the authorization (bearer token)
a files parameter taking the name of the file plus the entire file content
a data parameter taking just the path and file name
Installation required (console):
pip install requests
What you get back from the function call is a response object containing a status code and also the full error message in JSON format. The commented print statements at the end of upload_engagement_file are showing you how you can access them.
Note: Some useful additional information about the requests library can be found here
Some may need to upload via a PUT request, and this is slightly different from posting data. It is important to understand how the server expects the data in order to form a valid request. A frequent source of confusion is sending multipart form data when it isn't accepted. This example uses basic auth and updates an image via a PUT request.
import requests

url = 'https://foobar.com/api/image-1'  # a scheme (https://) is required
basic = requests.auth.HTTPBasicAuth('someuser', 'password123')
# Setting the appropriate header is important and will vary based
# on what you upload
headers = {'Content-Type': 'image/png'}
with open('image-1.png', 'rb') as img_1:
    r = requests.put(url, auth=basic, data=img_1, headers=headers)
While the requests library makes working with HTTP requests a lot easier, some of its magic and convenience obscures just how to craft more nuanced requests.
In Ubuntu you can apply this approach: save the file at some (temporary) location, then open it and send it to the API:
import os
import requests
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

path = default_storage.save('static/tmp/' + f1.name, ContentFile(f1.read()))
path12 = os.path.join(os.getcwd(), "static/tmp/" + f1.name)
data = {}  # can be anything you want to pass along with the file
file1 = open(path12, 'rb')
header = {"Content-Disposition": "attachment; filename=" + f1.name, "Authorization": "JWT " + token}
# pass keyword arguments explicitly; requests.post(url, data, header) would bind header to json=
res = requests.post(url, data=data, files={'file': file1}, headers=header)
I have created an endpoint, as shown below:
#app.post("/report/upload")
def create_upload_files(files: UploadFile = File(...)):
try:
with open(files.filename,'wb+') as wf:
wf.write(file.file.read())
wf.close()
except Exception as e:
return {"error": e.__str__()}
It is launched with uvicorn:
../venv/bin/uvicorn test_upload:app --host=0.0.0.0 --port=5000 --reload
I am performing some tests of uploading a file of around 100 MB using Python requests, and it takes around 128 seconds:
import binascii
import sys
import time

import requests

f = open(sys.argv[1], "rb").read()
hex_convert = binascii.hexlify(f)
items = {"files": hex_convert.decode()}
start = time.time()
r = requests.post("http://192.168.0.90:5000/report/upload", files=items)
end = time.time() - start
print(end)
I tested the same upload script with an API endpoint using Flask and it takes around 0.5 seconds:
from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/uploader', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        f = request.files['file']
        f.save(f.filename)
        return 'file uploaded successfully'

if __name__ == '__main__':
    app.run(host="192.168.0.90", port=9000)
Is there anything I am doing wrong?
You can write the file(s) using synchronous writing, after defining the endpoint with def, as shown in this answer, or using asynchronous writing (utilising aiofiles), after defining the endpoint with async def; UploadFile methods are async methods, and thus you need to await them. An example is given below. For more details on def vs async def and how they may affect your API's performance (depending on the tasks performed inside the endpoints), please have a look at this answer.
Upload Single File
app.py
from fastapi import File, UploadFile
import aiofiles
#app.post("/upload")
async def upload(file: UploadFile = File(...)):
try:
contents = await file.read()
async with aiofiles.open(file.filename, 'wb') as f:
await f.write(contents)
except Exception:
return {"message": "There was an error uploading the file"}
finally:
await file.close()
return {"message": f"Successfuly uploaded {file.filename}"}
Read the File in chunks
Or, you can read and write the file in chunks asynchronously, to avoid loading the entire file into memory. If, for example, you have 8GB of RAM, you can't load a 50GB file (not to mention that the available RAM will always be less than the total amount installed, as the native OS and other applications running on your machine will use some of it). Hence, in that case, you should rather load the file into memory in chunks and process the data one chunk at a time. This method, however, may take longer to complete, depending on the chunk size you choose; below, that is 1024 * 1024 bytes (= 1MB). You can adjust the chunk size as desired.
from fastapi import File, UploadFile
import aiofiles
#app.post("/upload")
async def upload(file: UploadFile = File(...)):
try:
async with aiofiles.open(file.filename, 'wb') as f:
while contents := await file.read(1024 * 1024):
await f.write(contents)
except Exception:
return {"message": "There was an error uploading the file"}
finally:
await file.close()
return {"message": f"Successfuly uploaded {file.filename}"}
Alternatively, you could use shutil.copyfileobj(), which copies the contents of one file-like object to another file-like object (see this answer as well). By default, the data is read in chunks, with the default buffer (chunk) size being 1MB (i.e., 1024 * 1024 bytes) for Windows and 64KB for other platforms (see the source code here). You can specify the buffer size by passing the optional length parameter. Note: if a negative length value is passed, the entire contents of the file will be read (see the f.read() documentation as well, which .copyfileobj() uses under the hood). The source code of .copyfileobj() can be found here; there isn't really anything different from the previous approach in how the file contents are read and written. However, .copyfileobj() uses blocking I/O operations behind the scenes, and this would result in blocking the entire server (if used inside an async def endpoint). Thus, to avoid that, you could use Starlette's run_in_threadpool() to run all the needed functions in a separate thread (that is then awaited), ensuring that the main thread (where coroutines are run) does not get blocked. The same exact function is used by FastAPI internally when you call the async methods of the UploadFile object, i.e., .write(), .read(), .close(), etc. (see the source code here). Example:
from fastapi import File, UploadFile
from fastapi.concurrency import run_in_threadpool
import shutil
#app.post("/upload")
async def upload(file: UploadFile = File(...)):
try:
f = await run_in_threadpool(open, file.filename, 'wb')
await run_in_threadpool(shutil.copyfileobj, file.file, f)
except Exception:
return {"message": "There was an error uploading the file"}
finally:
if 'f' in locals(): await run_in_threadpool(f.close)
await file.close()
return {"message": f"Successfuly uploaded {file.filename}"}
test.py
import requests
url = 'http://127.0.0.1:8000/upload'
file = {'file': open('images/1.png', 'rb')}
resp = requests.post(url=url, files=file)
print(resp.json())
Upload Multiple Files
app.py
from typing import List

from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(files: List[UploadFile] = File(...)):
    for file in files:
        try:
            contents = await file.read()
            async with aiofiles.open(file.filename, 'wb') as f:
                await f.write(contents)
        except Exception:
            return {"message": "There was an error uploading the file(s)"}
        finally:
            await file.close()

    return {"message": f"Successfully uploaded {[file.filename for file in files]}"}
Read the Files in chunks
To read the file(s) in chunks instead, see the approaches described earlier in this answer.
test.py
import requests
url = 'http://127.0.0.1:8000/upload'
files = [('files', open('images/1.png', 'rb')), ('files', open('images/2.png', 'rb'))]
resp = requests.post(url=url, files=files)
print(resp.json())
Update
Digging into the source code, it seems that the latest versions of Starlette (which FastAPI uses underneath) use a SpooledTemporaryFile (for UploadFile data structure) with max_size attribute set to 1MB (1024 * 1024 bytes) - see here - in contrast to older versions where max_size was set to the default value, i.e., 0 bytes, such as the one here.
The above means that, in the past, data used to be fully loaded into memory no matter the size of the file (which could lead to issues when a file couldn't fit into RAM), whereas, in the latest version, data is spooled in memory until the file size exceeds max_size (i.e., 1MB), at which point the contents are written to disk; more specifically, to the OS's temporary directory. (Note: this also means that the maximum size of file you can upload is bound by the storage available to the system's temporary directory. If enough storage (for your needs) is available on your system, there's nothing to worry about; otherwise, please have a look at this answer on how to change the default temporary directory.) Thus, the process of writing the file multiple times, that is, initially loading the data into RAM, then, if the data exceeds 1MB in size, writing the file to the temporary directory, then reading the file from the temporary directory (using file.read()), and finally writing the file to a permanent directory, is what makes uploading files slow compared to using the Flask framework, as the OP noted in their question (though the difference in time is not that big, just a few seconds, depending on the size of the file).
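To illustrate the spooling behaviour described above, here is a minimal sketch using tempfile.SpooledTemporaryFile directly (the _rolled attribute is a CPython implementation detail, inspected here only for demonstration):
import tempfile

with tempfile.SpooledTemporaryFile(max_size=1024 * 1024) as f:
    f.write(b'x' * 1024)            # 1KB: still held in memory
    print(f._rolled)                # False
    f.write(b'x' * (1024 * 1024))   # exceeds max_size: rolled over to disk
    print(f._rolled)                # True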
Solution
The solution (if one needs to upload files considerably larger than 1MB, and uploading time is important to them) would be to access the request body as a stream. As per the Starlette documentation, if you access .stream(), the byte chunks are provided without storing the entire body in memory (and later to the temporary directory, if the body contains file data that exceeds 1MB). An example is given below, where the time of uploading is recorded on the client side, and ends up being about the same as when using the Flask framework with the example given in the OP's question.
app.py
from fastapi import Request
import aiofiles
@app.post('/upload')
async def upload(request: Request):
    try:
        filename = request.headers['filename']
        async with aiofiles.open(filename, 'wb') as f:
            async for chunk in request.stream():
                await f.write(chunk)
    except Exception:
        return {"message": "There was an error uploading the file"}

    return {"message": f"Successfully uploaded {filename}"}
In case your application does not require saving the file to disk, and all you need is the file to be loaded directly into memory, you can just use the below (make sure your RAM has enough space available to accommodate the accumulated data):
from fastapi import Request
@app.post('/upload')
async def upload(request: Request):
    body = b''
    try:
        filename = request.headers['filename']
        async for chunk in request.stream():
            body += chunk
    except Exception:
        return {"message": "There was an error uploading the file"}
    # print(body.decode())
    return {"message": f"Successfully uploaded {filename}"}
test.py
import requests
import time
with open("images/1.png", "rb") as f:
data = f.read()
url = 'http://127.0.0.1:8000/upload'
headers = {'filename': '1.png'}
start = time.time()
resp = requests.post(url=url, data=data, headers=headers)
end = time.time() - start
print(f'Elapsed time is {end} seconds.', '\n')
print(resp.json())
For more details and code examples (on uploading multiple Files and Form/JSON data) using the approach above, please have a look at this answer.
I'm using FastAPI and currently I return a CSV which I read from a SQL server with pandas (pd.read_sql()).
However the csv is quite big for the browser and I want to return it with a File response:
https://fastapi.tiangolo.com/advanced/custom-response/ (end of the page).
I cannot seem to do this without first writing it to a CSV file, which seems slow and will clutter the filesystem with CSVs on every request.
So my question is: is there a way to return a FileResponse from a SQL database or a pandas dataframe? And if not, is there a way to delete the generated CSV files after they have been read by the client?
Thanks for your help!
Kind regards,
Stephan
Based heavily on this: https://github.com/tiangolo/fastapi/issues/1277
Turn your dataframe into a stream
Use a streaming response
Modify headers so it's a download (optional)
import io

import pandas
from fastapi.responses import StreamingResponse

@app.get("/get_csv")
async def get_csv():
    # column values must be list-like; bare scalars would require passing an index
    df = pandas.DataFrame(dict(col1=[1], col2=[2]))
    stream = io.StringIO()
    df.to_csv(stream, index=False)
    response = StreamingResponse(iter([stream.getvalue()]),
                                 media_type="text/csv")
    response.headers["Content-Disposition"] = "attachment; filename=export.csv"
    return response
Adding to the code that was previously mentioned, I found it useful to set another response header, so that the client is able to see "Content-Disposition". This is due to the fact that, by default, only CORS-safelisted response headers can be seen by the client. "Content-Disposition" is not part of this list, so it must be added explicitly: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers.
I don't know if there is another way to specify this for the client or server in a more general way, so that it applies to all the necessary endpoints, but this is the way I applied it (for one more general option, see the middleware sketch after the code below).
#router.post("/files", response_class = StreamingResponse)
async def anonymization(file: bytes = File(...), config: str = Form(...)):
# file as str
inputFileAsStr = StringIO(str(file,'utf-8'))
# dataframe
df = pd.read_csv(inputFileAsStr)
# send to function to handle anonymization
results_df = anonymize(df, config)
# output file
outFileAsStr = StringIO()
results_df.to_csv(outFileAsStr, index = False)
response = StreamingResponse(
iter([outFileAsStr.getvalue()]),
media_type='text/csv',
headers={
'Content-Disposition': 'attachment;filename=dataset.csv',
'Access-Control-Expose-Headers': 'Content-Disposition'
}
)
# return
return response
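As for a more general way: one option (a sketch, assuming the standard FastAPI/Starlette CORSMiddleware) is to expose the header once for all endpoints via the middleware's expose_headers parameter, instead of setting it on each response:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],                     # placeholder; restrict in production
    expose_headers=["Content-Disposition"],  # applies to every CORS response
)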
I was beating my head against the wall on this one as well. My use case is slightly different, as I am storing images, PDFs, etc. as blobs in my MariaDB database. I found that the trick was to pass the blob contents to BytesIO, and the rest was simple:
from fastapi.responses import StreamingResponse
from io import BytesIO
@router.get('/attachment/{id}')
async def get_attachment(id: int):
    mdb = messages(s.MARIADB)
    attachment = mdb.getAttachment(id)
    memfile = BytesIO(attachment['content'])
    response = StreamingResponse(memfile, media_type=attachment['contentType'])
    response.headers["Content-Disposition"] = f"inline; filename={attachment['name']}"
    return response
The API I'm working on has a method to send images by POSTing to /api/pictures/ with the picture file in the request.
I want to automate some sample images with Python's requests library, but I'm not exactly sure how to do it. I have a list of URLs that point to images.
rv = requests.get('http://api.randomuser.me')
resp = rv.json()
picture_href = resp['results'][0]['user']['picture']['thumbnail']
rv = requests.get(picture_href)
resp = rv.content
rv = requests.post(prefix + '/api/pictures/', data = resp)
rv.content returns bytes. I get a 400 Bad Request from the server but no error message. I believe I'm either 'getting' the picture wrong when I do rv.content or sending it wrong with data = resp. Am I on the right track? How do I send files?
--Edit--
I changed the last line to
rv = requests.post('myapp.com' + '/api/pictures/', files = {'file': resp})
Server-side code (Flask):
file = request.files['file']
if file and allowed_file(file.filename):
    ...
else:
    abort(400, message='Picture must exist and be either png, jpg, or jpeg')
The server aborts with status code 400 and the message above. I also tried reading resp with BytesIO, but it didn't help.
The problem is that your data is not a file; it's a stream of bytes, so it does not have a "filename", and I suspect that is why your server code is failing.
Try sending a valid filename along with the correct MIME type in your request:
files = {'file': ('user.gif', resp, 'image/gif', {'Expires': '0'})}
rv = requests.post('myapp.com' + '/api/pictures/', files = files)
You can use imghdr to figure out what kind of image you are dealing with (to get the correct mime type):
import imghdr
image_type = imghdr.what(None, resp)
# You should improve this logic, by possibly creating a
# dictionary lookup
mime_type = 'image/{}'.format(image_type)
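For instance, the dictionary lookup mentioned in the comment could be sketched as follows (note that imghdr is deprecated since Python 3.11 and removed in 3.13, so newer code may need a different detection library):
import imghdr

# map imghdr's result to a MIME type; extend as needed
MIME_BY_IMGHDR = {
    'jpeg': 'image/jpeg',
    'png': 'image/png',
    'gif': 'image/gif',
    'bmp': 'image/bmp',
}

image_type = imghdr.what(None, resp)
mime_type = MIME_BY_IMGHDR.get(image_type, 'application/octet-stream')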