I am trying to send a JSON file to a URL via POST. I am using the data parameter to open the file and send it to the URL:
r = requests.post(url_info, data=open(f, 'rb'), headers=headers, verify=False)
This works right now, but I have been asked to use json instead of data to send the file.
I have seen examples where this works with JSON built as a dictionary, but I am not able to make it work from a file.
I have tried passing the file object directly:
json=open(f, 'rb')
In f I have the path to the JSON file.
I have also tried serializing the file with json.dumps(open..., but I always get an error message.
Try using the json module to load the JSON data from the file, then pass the resulting dictionary through the json parameter.
import requests
import json

# open the file and load the JSON data from it
with open(f, "r") as file:
    data = json.load(file)  # type(data) -> <class 'dict'>

# send the POST request with the loaded data; json= re-serializes the dict
r = requests.post(url_info, json=data, headers=headers, verify=False)
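If you do not actually need to modify the payload, you can also keep streaming the raw file bytes with data= and just declare the content type yourself. A minimal sketch, assuming your server only needs a Content-Type: application/json header:

import requests

# send the file bytes unchanged; with json= requests would instead
# serialize a Python object and set this header for you
with open(f, "rb") as fh:
    r = requests.post(url_info,
                      data=fh,
                      headers={"Content-Type": "application/json"},
                      verify=False)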
I need the contents of a JSON file inserted inside the {} for the payload, and I have been unable to do this successfully. Any thoughts?
I attempted to write the JSON file contents as a string, which failed, and I attempted to insert the JSON file into payload = {}, which also failed.
import requests, meraki, json, os, sys

with open('networkid.txt') as file:
    array = file.readlines()

for line in array:
    line = line.rstrip("\n")
    url = 'https://api.meraki.com/api/v0/networks/%s/alertSettings' % line
    payload = {}
    headers = {'X-Cisco-Meraki-API-Key': 'API Key', 'Content-Type': 'application/json'}
    response = requests.request('PUT', url, headers=headers, data=payload, allow_redirects=True, timeout=10)
    print(response.text)
I am writing a script to deploy parameters to Meraki networks via the API. I have the JSON information correctly formatted in its own file; what I need is to insert that JSON data in place of payload in the script. I already have a for loop, which is necessary to run through the list of network IDs contained in a .txt file. Any ideas on how to do this?
The data parameter in requests.request takes an (optional) dictionary, list of tuples, bytes, or file-like object to send in the body of the request.
You can convert your properly formatted JSON file to a Python dictionary using json.load:
with open('json_information.json') as f:
    payload = json.load(f)
Then you can pass json=payload into the call to requests.request, and requests will serialize the dictionary as the JSON request body:
with open('networkid.txt') as file:
    array = file.readlines()

for line in array:
    line = line.rstrip("\n")
    url = 'https://api.meraki.com/api/v0/networks/%s/alertSettings' % line
    headers = {'X-Cisco-Meraki-API-Key': 'API Key', 'Content-Type': 'application/json'}
    response = requests.request('PUT',
                                url,
                                headers=headers,
                                json=payload,
                                timeout=10)  # allow_redirects is True by default
    print(response.text)
I'm trying to post a JSON file to InfluxDB on my localhost. This is the code:
import json
import requests
url = 'http://localhost:8086/write?db=mydb'
files ={'file' : open('sample.json', 'rb')}
r = requests.post(url, files=files)
print(r.text)
This is what sample.json looks like:
{
    "region": "eu-west-1",
    "instanceType": "m1.small"
}
My response gives the following errors:
{"error":"unable to parse '--1bee44675e8c42d8985e750b2483e0a8\r':
missing fields\nunable to parse 'Content-Disposition: form-data;
name=\"file\"; filename=\"sample.json\"\r': invalid field
format\nunable to parse '\r': missing fields\nunable to parse '{':
missing fields\nunable to parse '\"region\" : \"eu-west-1\",': invalid
field format\nunable to parse '\"instanceType\": \"m1.small\"': invalid
field format\nunable to parse '}': missing fields"}
My JSON seems to be valid, so I am not sure what I am doing wrong.
I think the problem may be that you just open the file but never read it. Since you want to post the content of the JSON object stored in the file, and not the file itself as a multipart upload, it may be better to do this instead:
import json
import requests
url = 'http://localhost:8086/write?db=mydb'
json_data = open('sample.json', 'rb').read() # read the json data from the file
r = requests.post(url, data=json_data) # post them as data
print(r.text)
which is actually your code modified just a bit...
Note that writing data to InfluxDB as JSON was deprecated for performance reasons and has since been removed; the /write endpoint parses the body as line protocol, which is why every line of the multipart body produced a parse error above. See GitHub issue comment 107043910.
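For completeness, the sample data would therefore have to be reshaped into line protocol before posting. A minimal sketch, assuming a hypothetical measurement named instances with region as a tag and instanceType as a string field:

import requests

url = 'http://localhost:8086/write?db=mydb'
# line protocol: measurement,tag_key=tag_value field_key="field_value"
line = 'instances,region=eu-west-1 instanceType="m1.small"'
r = requests.post(url, data=line)
print(r.status_code)  # 204 means the point was written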
I've been looking around for ways to upload a large file with additional data, but there doesn't seem to be any solution. To upload a file, I've been using this code, and it's been working fine with small files:
with open("my_file.csv", "rb") as f:
files = {"documents": ("my_file.csv", f, "application/octet-stream")}
data = {"composite": "NONE"}
headers = {"Prefer": "respond-async"}
resp = session.post("my/url", headers=headers, data=data, files=files)
The problem is that the code loads the whole file up before sending, and I would run into MemoryError when uploading large files. I've looked around, and the way to stream data is to set
resp = session.post("my/url", headers=headers, data=f)
but I need to add {"composite": "NONE"} to the data. If not, the server wouldn't recognize the file.
You can use the requests-toolbelt to do this:
import requests
from requests_toolbelt.multipart import encoder

session = requests.Session()
with open('my_file.csv', 'rb') as f:
    form = encoder.MultipartEncoder({
        "documents": ("my_file.csv", f, "application/octet-stream"),
        "composite": "NONE",
    })
    headers = {"Prefer": "respond-async", "Content-Type": form.content_type}
    resp = session.post(url, headers=headers, data=form)
session.close()
This will cause requests to stream the multipart/form-data upload for you.
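As a follow-up, if you also want to track upload progress while streaming, requests-toolbelt provides a MultipartEncoderMonitor that wraps the encoder. A minimal sketch; the print_progress callback is a hypothetical example:

import requests
from requests_toolbelt.multipart import encoder

def print_progress(monitor):
    # called as the body is read; bytes_read is the running total
    print("uploaded %d of %d bytes" % (monitor.bytes_read, monitor.len))

session = requests.Session()
with open('my_file.csv', 'rb') as f:
    form = encoder.MultipartEncoder({
        "documents": ("my_file.csv", f, "application/octet-stream"),
        "composite": "NONE",
    })
    monitor = encoder.MultipartEncoderMonitor(form, print_progress)
    headers = {"Prefer": "respond-async", "Content-Type": monitor.content_type}
    resp = session.post("my/url", headers=headers, data=monitor)
session.close()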
Hey, I am trying to import data that is already formatted as JSON. I am trying to read it in Python so I can use it for an HTTP POST request. I have tried saving it as .json and as .txt and using json.dumps on both files, but I still get it in the wrong format. The code is below. I am guessing it is read in the wrong format, since the response to the POST is an error; when I use Postman for the same job, there is no error.
workingFile = 'D:\\test.json'
file = open(workingFile, 'r')
read = [file.read()]
data = json.dumps(read)
url = 'http://webaddress'
username = 'username'
password = 'password'
requestpost = requests.post(url, data, auth=(username, password))
workingFile = 'D:\\test.json'
with open(workingFile, 'r') as fh:
    data = json.load(fh)

url = 'http://webaddress'
username = 'username'
password = 'password'
requestpost = requests.post(url, json=data, auth=(username, password))
By specifying json=data, requests encodes the payload as JSON instead of form data.
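If it helps to see the equivalence, json= is roughly a shorthand for serializing yourself and setting the header. A sketch, reusing the names from the snippet above:

import json
import requests

# these two requests send the same JSON body; json= also sets the
# Content-Type: application/json header automatically
requests.post(url, json=data, auth=(username, password))
requests.post(url, data=json.dumps(data),
              headers={'Content-Type': 'application/json'},
              auth=(username, password))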
To read JSON data from a file: Parsing values from a JSON file using Python?
To read JSON data from a string: Convert string to JSON using Python
I am sending a CSV file to a server using a POST request.
I am using a file-like object with requests.post
Will there be a problem if the CSV file is quite big and I have limited memory, or does using a file-like object mean the whole file is never loaded into memory? I am not sure about that.
I know there is the stream option but it sounds like it's more for getting the response and not sending data.
headers = {
    'content-type': 'text/csv',
}
csvfile = '/path/file.csv'
with open(csvfile) as f:
    r = requests.post(url, data=f, headers=headers)
Using an open file object as the data parameter ensures that requests will stream the data for you.
If the file size can be determined (via the OS filesystem), the file object is streamed using an 8 KB buffer. If no file size can be determined, a Transfer-Encoding: chunked request is sent, with the data sent per line instead (the object is used as an iterable).
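To illustrate the iterable case, any generator of byte chunks gets the same chunked treatment. A minimal sketch with a hypothetical helper, reusing url and headers from the question:

def read_in_chunks(path, chunk_size=8192):
    # yield the file in fixed-size chunks instead of loading it all at once
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# a generator has no length, so requests sends Transfer-Encoding: chunked
r = requests.post(url, data=read_in_chunks('/path/file.csv'), headers=headers)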
If you were to use the files= parameter for a multipart POST, on the other hand, the file would be loaded into memory before sending. Use the requests-toolbelt package to stream multi-part uploads:
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

csvfile = '/path/file.csv'
with open(csvfile, 'rb') as f:  # binary mode so raw bytes are streamed
    m = MultipartEncoder(fields={'csv_field_name': ('file.csv', f, 'text/csv')})
    headers = {'Content-Type': m.content_type}
    r = requests.post(url, data=m, headers=headers)
This will not load the entire file into memory; it is split into chunks and transmitted a little at a time. You can see this in the requests-toolbelt source code.