I'm having an issue with importing data into Infoblox in bulk with Python 3.
The input comes from a CSV file with two columns (hostname and IP). This data is used in a loop to create multiple host records at once via the API. The script works fine if I use it to create a single record, even inside the loop, but as soon as I try to create multiple records it fails.
---
import requests
import csv

with open("./ib/devices.csv", "r") as f:
    csv_reader = csv.reader(f)
    for item in csv_reader:
        host = item[0]
        hostname = host + ".domain.com"
        ipv4 = item[1]

        # This works fine
        #payload = "{\"name\":\"testie.domain.com\",\"ipv4addrs\": [{\"ipv4addr\":\"192.168.1.1\"}]}"

        # This doesn't work
        payload = {"name": hostname, "ipv4addrs": [{"ipv4addr": ipv4}]}

        headers = {'content-type': "application/json"}
        response = requests.request("POST", url, auth=(username, password), data=payload, headers=headers, verify=False)
        print(response.text)
---
Does anyone know what the issue might be?
This is the error I receive:
{'name': 'testie.domain.com', 'ipv4addrs': [{'ipv4addr': '192.168.1.1'}]}
{ "Error": "AdmConProtoError: JSON Decoding: No JSON object could be decoded",
"code": "Client.Ibap.Proto.JSONDecoding",
"text": "JSON Decoding: No JSON object could be decoded"
If there is a better way to create multiple records at once, I'm always open to suggestions.
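A likely cause, for anyone hitting the same error: when a dict is passed to requests via data=, requests form-encodes it instead of serializing it to JSON, so Infoblox receives something like name=...&ipv4addrs=... and cannot decode it. A minimal sketch of the fix, assuming the same two-column CSV layout; the WAPI URL and credentials below are placeholders, not taken from the original post:

import csv
import requests

# Assumed placeholders -- adjust to your grid and WAPI version.
url = "https://infoblox.example.com/wapi/v2.10/record:host"
username = "admin"
password = "secret"

with open("./ib/devices.csv", "r") as f:
    for host, ipv4 in csv.reader(f):
        payload = {"name": host + ".domain.com", "ipv4addrs": [{"ipv4addr": ipv4}]}
        # json= serializes the dict to JSON and sets the Content-Type header;
        # data=json.dumps(payload) with an explicit header works as well.
        response = requests.post(url, auth=(username, password),
                                 json=payload, verify=False)
        print(response.status_code, response.text)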
Related
I'm trying to make my Zabbix server send a payload with trigger information to a Rundeck server, so Rundeck runs a script based on that information to restart services and solve low-level problems.
I'm facing a problem parsing the JSON before sending it to Rundeck. Here's the code of my script on the Zabbix server:
#!/usr/bin/env python3
import sys
import requests
import json
import logging

# set logging
#logging.basicConfig(filename='/tmp/script.log', level=logging.INFO)

# Webhook URL (localhost is an example)
api_url = 'http://localhost:4440/api/41/webhook/rLT583Yb0lOkObrFvm6iUz3YjLKWqhal#webhook-listener'

# setting up zabbix payload
#payload = sys.stdin.read()  # (couldn't make this work)
payload = '{"alertid": "666", "clock": "1615696071", "message": "Trigger: Server memory usage is too high\nTrigger status: PROBLEM\nTrigger severity: Warning\nTrigger URL: http://zabbix.example.com/tr_events.php?triggerid=11839&eventid=23945\n\nItem values:\n\n1. Memory usage is 85.61% (sign: greater than) (threshold: 70.00%)", "sendto": "admin#example.com", "subject": "Zabbix server: {TRIGGER.NAME}", "eventid": "12345", "hostname": "Testing Variables"}'

# Loading the JSON (already tried the methods below)
#data = dict(payload)
#data = json.loads(payload)

# Validating the payload
if 'hostname' not in data or not isinstance(data['hostname'], str):
    logging.error('Payload with invalid hostname value')
    sys.exit(1)
if 'alertid' not in data or not isinstance(data['event_id'], str):
    logging.error('Payload with invalid event_id value')
    sys.exit(1)

# Sorting payload fields
hostname = data['hostname']
event_id = data['alertid']

# setting trigger data as a data dictionary (rundeck data)
trigger_data = {
    'hostname': hostname,
    'event_id': event_id,
}

# Saving payload as txt to validate
#with open('/tmp/payload.txt', 'w') as f:
#    f.write(payload)

# Set up headers
headers = {'Content-Type': 'application/json'}

# Send the POST
response = requests.post(api_url, data=payload, headers=headers)

# Log the response
logging.info(response.text)

# Print
print(response.text)
This script should bring the trigger data through a payload into Rundeck, so Rundeck understands the payload and reads the arguments, and I can base my job on the 'hostname' variable that Zabbix sends.
I'm getting a JSON parsing error when I run it:
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 95 (char 94)
I'm losing hair over this. I already tried other ways to parse this JSON (ast and dict), but nothing seems to work.
already tried other ways to parse the JSON
already sent the data via the payload directly as text (JSON validated)
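For reference, the JSONDecodeError here most likely comes from the literal newline characters: inside the Python string, each \n becomes a real newline within a JSON string value, and raw control characters are not allowed in strict JSON. A minimal sketch of two ways around that, using a shortened payload for illustration:

import json

raw = '{"hostname": "Testing Variables", "message": "line one\nline two"}'

# Option 1: tell the decoder to accept control characters inside strings.
data = json.loads(raw, strict=False)

# Option 2: escape the newlines so the string is valid strict JSON.
data = json.loads(raw.replace('\n', '\\n'))

print(data['hostname'])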
I ended up using the webhook function of Zabbix instead of a script. Thank you guys for your help.
I am using http.client to hit a POST API and get the JSON response. My code works correctly when I print response.read(). However, for some reason the response is limited to only 20 results, while the total count of results is over 20,000. I want to get the complete response into a variable using response.read().decode(), hoping that the variable will contain the complete JSON string. The issue is that I get an empty string when I use decode(). How do I get this done, and how do I get the complete results?
import http.client
host = 'jooble.org'
key = 'API_KEY'
connection = http.client.HTTPConnection(host)
#request headers
headers = {"Content-type": "application/json"}
#json query
body = '{ "keywords": "sales", "location": "MA"}'
connection.request('POST','/api/' + key, body, headers)
response = connection.getresponse()
print(response.status, response.reason)
print(response.read())
print(response.read().decode())
Don't call response.read() twice. response is a stream, so each call to read() continues from where the previous one ended. Since the first call is reading the entire response, the second one doesn't read anything.
If you want to print the encoded and decoded response, assign response.read() to a variable, then decode that.
data = response.read()
print(data)
print(data.decode())
But this can be done more simply using the requests module.
import requests
host = 'jooble.org'
key = 'API_KEY'
body = { "keywords": "sales", "location": "MA"}
response = requests.post(f'https://{host}/api/{key}', json=body)
print(response.content)
Note that in this version body is a dictionary, not a string. The json parameter automatically converts it to JSON.
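On the 20-results limit from the original question: that usually means the API paginates its results server-side, so the full set has to be fetched page by page. A rough sketch, assuming the Jooble request body accepts a page field and returns the matches under a jobs key (both are assumptions -- check the API documentation for the exact names):

import requests

host = 'jooble.org'
key = 'API_KEY'

all_jobs = []
page = 1
while True:
    # 'page' and 'jobs' are assumed names -- verify them against the API docs.
    body = {"keywords": "sales", "location": "MA", "page": page}
    result = requests.post(f'https://{host}/api/{key}', json=body).json()
    jobs = result.get('jobs', [])
    if not jobs:
        break
    all_jobs.extend(jobs)
    page += 1

print(len(all_jobs))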
I was trying to automate the task of pushing some files to various folders in a repo. I tried using the REST API provided by Azure. When using the Pushes - Create API, the docs show the following content in the request body:
[snapshot of the request body]
This is a snapshot of the Python code that I wrote:
[the code that I am using]
In the above code, repositeries_url contains the valid API URL.
When I run the code, I get response code 400, and printing the JSON gives me:
{'$id': '1', 'innerException': None, 'message': 'The body of the request contains invalid Json.\r\nParameter name: contentStream', 'typeName': 'Microsoft.TeamFoundation.SourceControl.WebServer.InvalidArgumentValueException, Microsoft.TeamFoundation.SourceControl.WebServer', 'typeKey': 'InvalidArgumentValueException', 'errorCode': 0, 'eventId': 0}
Why is this error occurring, and how can I rectify it?
I found the following mistake in your Python code.
In your code, you defined payload["commits"] as the commits array, but you mistakenly assigned temp as its value. You need to append temp to the payload["commits"] array instead, i.e. payload["commits"].append(temp).
Also, if you want to use the json parameter of requests.post to post the data, you can pass the payload object directly, like below:
response = requests.post(url=repo_url, json=payload, headers=header)
Or you can use the data parameter of requests.post to post the JSON string. See below:
response = requests.post(url=repo_url, data=jsonPayload, headers=header)
See the full code below:
....
temp["changes"].append(aksh)
payload["commits"].append(temp)  # append temp to the commits array
jsonPayload = json.dumps(payload)

# pass payload to the json parameter directly
response = requests.post(url=repo_url, json=payload, headers=header)
# you can also use data
#response = requests.post(url=repo_url, data=jsonPayload, headers=header)
I am trying to automate a bulk request for Elasticsearch via Python.
Therefore, I am preparing the data for the request body as follows (saved in a list as separate rows):
data = [{"index": {"_id": ID}},
        {"tag": {"input": [tag], "weight": count}}]
Then I use requests to make the API call:
r = requests.put(endpoint, json=data, auth=auth)
This gives me the error:
b'{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"The bulk request must be terminated by a newline [\\n]"}],"type":"illegal_argument_exception","reason":"The bulk request must be terminated by a newline [\\n]"},"status":400}'
I know that I need to put a newline at the end of the request, and therein lies my problem:
How can I append a newline to that data structure? I tried to append '\n' to the end of my list, but that didn't work out.
Thank you guys!
The payload's content type must be ndjson and the index attribute needs to be specified as well. Here's a working snippet:
import requests
import json

endpoint = 'http://localhost:9200/_bulk'

# note the added "_index" attribute
data = [{"index": {"_index": "123", "_id": 123}},
        {"tag": {"input": ['tag'], "weight": 10}}]

# one JSON document per line, plus the required trailing newline
payload = '\n'.join([json.dumps(line) for line in data]) + '\n'

r = requests.put(endpoint,
                 # `data` instead of `json`!
                 data=payload,
                 headers={
                     # it's a requirement
                     'Content-Type': 'application/x-ndjson'
                 })
print(r.json())
P.S.: You may want to consider the bulk helper in the official py client.
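For completeness, a minimal sketch of what that could look like with the elasticsearch package and its bulk helper; the index name, id, and document are taken from the snippet above, and the local endpoint is an assumption:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

# Each action carries its metadata (_index, _id) together with the document fields.
actions = [
    {"_index": "123", "_id": 123, "tag": {"input": ["tag"], "weight": 10}},
]

# The helper builds the NDJSON body and the trailing newline for you.
bulk(es, actions)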
I need to have the JSON file contents inserted inside the {} for payload, and I have been unable to do this successfully. Any thoughts?
I attempted to write the JSON file contents as a string, which failed. I also attempted to insert the JSON file into payload = {}, which failed.
import requests, meraki, json, os, sys

with open('networkid.txt') as file:
    array = file.readlines()
    for line in array:
        line = line.rstrip("\n")
        url = 'https://api.meraki.com/api/v0/networks/%s/alertSettings' % line
        payload = {}
        headers = {'X-Cisco-Meraki-API-Key': 'API Key', 'Content-Type': 'application/json'}
        response = requests.request('PUT', url, headers=headers, data=payload, allow_redirects=True, timeout=10)
        print(response.text)
I am writing a script to deploy parameters to Meraki networks via the API. I have the JSON information formatted correctly in its own file, and what I need is to insert that JSON data into the payload in the script. I already have a for loop, which is necessary to run through the list of network IDs contained in a .txt file. Any ideas on how to do this?
The data parameter of requests.request takes an optional dictionary, list of tuples, bytes, or file-like object to send in the body of the request.
You can convert your properly formatted JSON file to a Python dictionary using json.load:
import json

with open('json_information.json') as f:
    payload = json.load(f)
Since the Content-Type header declares JSON, serialize that dictionary back to a JSON string with json.dumps (or use the json parameter instead of data) in the call to requests.request:
with open('networkid.txt') as file:
    array = file.readlines()
    for line in array:
        line = line.rstrip("\n")
        url = 'https://api.meraki.com/api/v0/networks/%s/alertSettings' % line
        headers = {'X-Cisco-Meraki-API-Key': 'API Key', 'Content-Type': 'application/json'}
        response = requests.request('PUT',
                                    url,
                                    headers=headers,
                                    # json.dumps keeps the body consistent with the JSON Content-Type;
                                    # requests.request(..., json=payload) would also work
                                    data=json.dumps(payload),
                                    timeout=10)  # allow_redirects is True by default
        print(response.text)