I am trying to load and print a JSON response in a shell script. I don't have any idea how to achieve this. Please help me with this.
Code:
#!/bin/sh
malop_q=$(curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)')
echo $malop_q
JSON Response:
{"data":[{"value":"11.945403842773683082"},{"value":"11.945403842773683082"},{"value":"11.945403842773683082"}]}
Expected output:
I need to print the values from the above JSON response:
11.945403842773683082
11.945403842773683082
11.945403842773683082
Thanks in advance.
The following Python code does the parsing, assuming that you save it as my_json.py:
import json
import sys

obj = json.load(sys.stdin)
for item in obj['data']:
    print(item['value'])
You can get the response using:
malop_q=$(curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)')
echo "$malop_q" | python my_json.py
or in one line:
curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)' | python my_json.py
With Python:
import json

with open('file.json') as json_file:
    datas = json.load(json_file)
    for d in datas["data"]:
        print(d["value"])
I'm desperately trying to turn the following curl command into Python code:
curl -X POST -F data=@C:\search.csv -F columns=NO_VOIE -F columns=TYPE_DE_VOIE -F columns=VOIE -F columns=CODE_POSTAL -F columns=COMMUNE -F result_columns=latitude -F result_columns=longitude -F result_columns=score https://api-adresse.data.gouv.fr/search/csv/
It goes without saying this curl command works perfectly, giving a list of geocoded addresses based on a CSV file C:\search.csv which stores the addresses to be geocoded with GPS tags.
search.csv =
INDEX,NO_VOIE,TYPE_DE_VOIE,VOIE,CODE_POSTAL,COMMUNE
201620160000000,77,RUE,TONY REVILLON,1750,SAINT-LAURENT-SUR-SAONE
201620160000001,,,LES BROTTEAUX,1160,VARAMBON
201620160000002,,,LES BROTTEAUX,1160,VARAMBON
201620160000003,,,LES BROTTEAUX,1160,VARAMBON
201620160000004,5246,,LES BROTTEAUX,1160,VARAMBON
My Python 3.7 code is:
import requests

url = 'https://api-adresse.data.gouv.fr/search/csv/'
params = {'data': 'c:\search.csv',
          'columns': 'VOIE'}
output = requests.post(url, json=params).json()
print(output)
{'code': 400, 'message': 'A CSV file must be provided in data field'}
S.O.S
The API doc is here: https://geo.api.gouv.fr/adresse
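For reference, curl's -F flags build a multipart/form-data request, not a JSON body, which is why the json= attempt gets the 400 error. In requests, the file part maps to files= and the repeated columns/result_columns fields can be passed as a list of tuples in data=. A rough sketch under that assumption (not tested against the API):

import requests

url = 'https://api-adresse.data.gouv.fr/search/csv/'

# -F data=@C:\search.csv -> the uploaded file; each repeated -F field -> one tuple
with open(r'C:\search.csv', 'rb') as f:
    files = {'data': f}
    fields = [
        ('columns', 'NO_VOIE'), ('columns', 'TYPE_DE_VOIE'), ('columns', 'VOIE'),
        ('columns', 'CODE_POSTAL'), ('columns', 'COMMUNE'),
        ('result_columns', 'latitude'), ('result_columns', 'longitude'),
        ('result_columns', 'score'),
    ]
    response = requests.post(url, files=files, data=fields)

# The CSV endpoint should return the geocoded rows as text (CSV), not JSON.
print(response.text)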
I have the following bash script I've been trying to convert to Python. I have the main part, but I'm having some trouble translating these two lines: cut=${auth#*<ChallengeCode>} and authStr=${cut%</ChallengeCode>*}. The first request returns XML that contains <ChallengeCode>; I need to be able to extract that code and store it for future use.
BASH CODE:
#!/bin/bash
IP=IP_ADDR
USER=USERNAME
PASS=PASSWORD
auth=$(curl -ks -H "Content-Type: text/xml" -c "cookies.txt" "https://${IP}/goform/login?cmd=login&user=admin&type=1")
cut=${auth#*<ChallengeCode>}
authStr=${cut%</ChallengeCode>*}
hash=$(echo -n ${authStr}:GDS3710lZpRsFzCbM:${PASS} | md5sum | tr -d '/n')
hash=$(echo $hash | cut -d' ' -f1 | tr -d '/n')
curl -ks -H "Content-Type: text/xml" -c "cookies.txt" "https://${IP}/goform/login?cmd=login&user=admin&authcode=${hash}&type=1"
curl -ks -H "Content-Type: image/jpeg" --cookie "cookies.txt" "https://${IP}/snapshot/view0.jpg" >> snapshot.jpg
PYTHON CODE:
import requests
import hashlib
hmd5 = hashlib.md5()
ip = "192.168.100.178"
user = "admin"
password = "Password1"
auth = requests.get('https://{0}/goform/login', headers=headers, params=params, verify=False).format(ip)
chcode = (This is where I want to put the challenge code I get back from the previous request)
hstring = "{0}:GDS3710lZpRsFzCbM:{1}".format(chcode,password).encode()
hmd5.update(hstring)
hashauth = hmd5.hexdigest()
response = requests.get('https://{0}/snapshot/view0.jpg', headers=headers, cookies=cookies, verify=False).format(ip)
Any advice on how I could better improve the code would also be appreciated.
If your request returns XML, it'd be suitable to use an XML parser. Presuming you've imported xml.etree.ElementTree, perhaps with:
import xml.etree.ElementTree as ET
You can have it parse your response:
root_el = ET.fromstring(auth.text)
And then use XPath (which might differ depending on the structure of your XML) to find your element and get the value of the text it contains:
chcode = root_el.find("./ChallengeCode").text
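Putting that together with the rest of the bash script, a rough end-to-end sketch (reusing the salt string, parameters, and sample values from the question; verify=False mirrors curl's -k, the XPath may need adjusting to the actual XML, and error handling is omitted):

import hashlib
import requests
import xml.etree.ElementTree as ET

ip = "192.168.100.178"
password = "Password1"

# A session keeps cookies between requests, like curl's -c/--cookie cookies.txt.
session = requests.Session()

auth = session.get(
    "https://{0}/goform/login".format(ip),
    params={"cmd": "login", "user": "admin", "type": "1"},
    verify=False,
)
chcode = ET.fromstring(auth.text).find("./ChallengeCode").text

# md5 of challenge:salt:password, as in the bash script.
hashauth = hashlib.md5(
    "{0}:GDS3710lZpRsFzCbM:{1}".format(chcode, password).encode()
).hexdigest()

session.get(
    "https://{0}/goform/login".format(ip),
    params={"cmd": "login", "user": "admin", "authcode": hashauth, "type": "1"},
    verify=False,
)

snapshot = session.get("https://{0}/snapshot/view0.jpg".format(ip), verify=False)
with open("snapshot.jpg", "wb") as f:
    f.write(snapshot.content)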
While one of the virtues of a real programming language is the availability of powerful libraries (e.g., for parsing XML), there is a direct analog of your Bash substring operations that is particularly simple given the limited use of wildcards:
${a#*foo} — a.partition("foo")[-1]
${a%foo*} — a.rpartition("foo")[0]
${a##*foo} — a.rpartition("foo")[-1]
${a%%foo*} — a.partition("foo")[0]
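For example, applied to the two lines from the question (the XML string here is just a made-up stand-in):

auth = "<Response><ChallengeCode>abc123</ChallengeCode></Response>"  # made-up example

# cut=${auth#*<ChallengeCode>}  ->  everything after the first <ChallengeCode>
cut = auth.partition("<ChallengeCode>")[-1]

# authStr=${cut%</ChallengeCode>*}  ->  everything before the last </ChallengeCode>
auth_str = cut.rpartition("</ChallengeCode>")[0]

print(auth_str)  # abc123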
I filtered issues without subtasks with Python:
#!/usr/bin/python
import sys
import json

sys.stdout = open('output.txt', 'wt')
datapath = sys.argv[1]
data = json.load(open(datapath))

for issue in data['issues']:
    if len(issue['fields']['subtasks']) == 0:
        print(issue['key'])
Tasks without subtasks are stored in output.txt (and this works fine):
TECH-729
TECH-124
Now I have a different issue: it seems the value in the $p variable isn't passed to curl (it is able to log in to JIRA but not to create subtasks):
while read -r p; do
echo $p
curl -D- -u user:pass -X POST --data "{\"fields\":{\"project\":{\"key\":\"TECH\"},\"parent\":{\"key\":\"$p\"},\"summary\":\"TestChargen#Nr\",\"description\":\"some description\",\"issuetype\":{\"name\":\"Sub-task\"},\"customfield_10107\":{\"id\":\"10400\"}}}" -H "Content-Type:application/jso#n" https://jira.companyu.com/rest/api/latest/issue/
done <output.txt
The echo output is as it should be:
TECH-731
TECH-729 (so curl should run twice, once for each output value)
But curl just logs in without creating subtasks. When I hardcode a value instead of $p, curl executes twice for the same project ID.
I really don't know why, but this code worked, thanks everyone:
for project in `cat output.txt`; do
echo $project
curl -D- -u user:pass -X POST --data "{\"fields\":{\"project\":{\"key\":\"TECH\"},\"parent\":{\"key\":\"$project\"},\"summary\":\"TestChargenNr\",\"description\":\"some description\",\"issuetype\":{\"name\":\"Sub-task\"},\"customfield_10107\":{\"id\":\"10400\"}}}" -H "Content-Type:application/json" https://jira.company.com/rest/api/latest/issue/
done
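Since the issue keys already come out of a Python script, another option is to create the subtasks from Python with requests instead of shelling out to curl. A sketch, not tested against your JIRA instance, with the URL, credentials, and field IDs copied from the question:

import requests

url = "https://jira.company.com/rest/api/latest/issue/"
auth = ("user", "pass")

with open("output.txt") as f:
    for line in f:
        parent_key = line.strip()
        if not parent_key:
            continue
        payload = {
            "fields": {
                "project": {"key": "TECH"},
                "parent": {"key": parent_key},
                "summary": "TestChargenNr",
                "description": "some description",
                "issuetype": {"name": "Sub-task"},
                "customfield_10107": {"id": "10400"},
            }
        }
        # json= sets the Content-Type: application/json header for us
        r = requests.post(url, json=payload, auth=auth)
        print(parent_key, r.status_code)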
I have the following command, which adds a user to the administrator group of a Gerrit instance:
curl -X POST -H "Content-Type:application/json;charset=UTF-8" -u nidhi:pswd http://host_ip:port/a/groups/Administrators/members.add -d '{"members":["user#example.com"]}'
When I run this command on my terminal, it runs perfectly and gives the expected output.
But I want to execute this command in Python, using either the subprocess or pycurl library.
Using subprocess, I wrote the following code:
def add_user_to_administrator(u_name, url):
    bashCommand = 'curl -X POST -H "Content-Type:application/json;charset=UTF-8" -u nidhi:pswd http://' + url + '/a/groups/Administrators/members.add -d ' + "'" + '{"members":["$u_n@example.com"]}' + "'"
    bashCommand = string.Template(bashCommand).substitute({'u_n': u_name})
    print bashCommand.split()
    process = subprocess.Popen(bashCommand.split())
It shows no error but no changes are seen in the administrator group.
I tried the same using pycurl:
def add_user_to_administrator2(u_name, url):
    pf = json.dumps({"members": [str(str(u_name) + "@example.com")]})
    headers = ['Content-Type:application/json;charset=UTF-8']
    pageContents = StringIO.StringIO()
    p = pycurl.Curl()
    p.setopt(pycurl.FOLLOWLOCATION, 1)
    p.setopt(pycurl.POST, 1)
    p.setopt(pycurl.HTTPHEADER, headers)
    p.setopt(pycurl.POSTFIELDS, pf)
    p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
    p.setopt(pycurl.VERBOSE, True)
    p.setopt(pycurl.DEBUGFUNCTION, test)
    p.setopt(pycurl.USERPWD, "nidhi:pswd")
    pass_url = str("http://" + url + "/a/groups/Administrators/Administrators/members.add").rstrip('\n')
    print pass_url
    p.setopt(pycurl.URL, pass_url)
    p.perform()
    p.close()
    pageContents.seek(0)
    print pageContents.readlines()
This throws an error: it cannot find the account members.
The url variable mentioned above is of the form host_ip:port.
I have tried a lot to fix these errors. I don't know where I am going wrong. Any help would be appreciated.
a) string escaping
For the subprocess/curl usage, you should be escaping your string tokens rather than manually adding extra ':
...stuff'+"'"+'more.stuff...
Escape using \ before the character, i.e. using
"curl -X POST -H \"Content-Type:application/json;charset=UTF-8\""
will keep the " around the Content-Type section.
More on escape characters here: Lexical Analysis - String Literals
...The backslash (\) character is used to escape characters that otherwise have a special meaning...
b) popen
Looking at the Popen docs, their example uses shlex.split() to split the command line into args. shlex splits the string a bit differently:
print(bashCommand.split())
['curl', '-X', 'POST', '-H', '"Content-Type:application/json;charset=UTF-8"', '-u', 'nidhi:pswd', 'http://TEST_URL/a/groups/Administrators/members.add', '-d', '\'{"members":["TEST_USER@example.com"]}\'']
print(shlex.split(bashCommand))
['curl', '-X', 'POST', '-H', 'Content-Type:application/json;charset=UTF-8', '-u', 'nidhi:pswd', 'http://TEST_URL/a/groups/Administrators/members.add', '-d', '{"members":["TEST_USER@example.com"]}']
you can see shlex removes excess quoting.
c) http response code
Try using the -I option in curl to get the HTTP response code back (and the rest of the HTTP headers):
$curl -h
...
-I, --head Show document info only
Even though you're using subprocess to start/make the request, it should still print the return value to the console (stdout).
d) putting it all together
I changed how the url and u_name are interpolated into the string.
import shlex
import subprocess

def add_user_to_administrator(u_name, url):
    bashCommand = "curl -I -X POST -H \"Content-Type:application/json;charset=UTF-8\" -u nidhi:pswd http://%(url)s/a/groups/Administrators/members.add -d '{\"members\":[\"%(u_n)s@example.com\"]}'"
    bashCommand = bashCommand % {'u_n': u_name, 'url': url}
    args = shlex.split(bashCommand)
    process = subprocess.Popen(args)

add_user_to_administrator('TEST_USER', 'TEST_URL')
If none of this helps, and you're getting no response from gerrit, I'd check gerrit logs to see what happens when it receives your request.
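As a further alternative, the same call can be made without a subprocess at all, directly with requests. A sketch only; whether plain basic auth is enough depends on how your Gerrit instance is configured (swap in requests.auth.HTTPDigestAuth if it uses digest auth):

import requests

def add_user_to_administrator(u_name, url):
    # Equivalent of the curl call: POST JSON with HTTP auth; json= sets the
    # application/json Content-Type header.
    response = requests.post(
        "http://{0}/a/groups/Administrators/members.add".format(url),
        json={"members": ["{0}@example.com".format(u_name)]},
        auth=("nidhi", "pswd"),
    )
    print(response.status_code, response.text)

add_user_to_administrator('TEST_USER', 'TEST_URL')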
You could try urllib2 (Python 2) or urllib (Python 3) to post the JSON data.
For subprocess.Popen, subprocess.Popen.communicate (https://docs.python.org/3.5/library/subprocess.html?highlight=subprocess#subprocess.Popen.communicate) may help, or just execute bashCommand in a shell to see the difference.
For pycurl, I have not used it, but please add the error info.
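To expand on the communicate() suggestion: capturing curl's stdout/stderr lets you see the HTTP response instead of discarding it. A minimal sketch, where the placeholder command stands in for the bashCommand built above:

import shlex
import subprocess

# Placeholder command; substitute the bashCommand built in the answers above.
bashCommand = "curl -sS -I https://example.com"

process = subprocess.Popen(
    shlex.split(bashCommand), stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdout, stderr = process.communicate()

# curl writes the HTTP response to stdout and progress/errors to stderr.
print(process.returncode)
print(stdout.decode())
print(stderr.decode())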
I need to implement a Python script that acts as a server handling data in the given format:
$ curl -d "longitude=-2&latitude=4" http://localhost:8080
This script will interpret the longitude and latitude data in such a way as to return where this specific data falls. Another example, with potential output, would look something like this:
$ ./state-server &
[1] 21507
$ curl -d "longitude=-2&latitude=4" http://localhost:8080/
["Kentucky"]
$
How would I access these variables within my script file?
Thanks.
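If the goal is to read those form fields inside the server script itself, one possible approach, sketched with only the standard library and assuming a plain form-encoded POST like the curl -d example sends, is:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the form-encoded body and pull out the two fields.
        length = int(self.headers.get('Content-Length', 0))
        fields = parse_qs(self.rfile.read(length).decode())
        longitude = float(fields['longitude'][0])
        latitude = float(fields['latitude'][0])

        # Placeholder lookup: replace with the real point-in-state test.
        states = ["Kentucky"]

        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(states).encode())

HTTPServer(('', 8080), Handler).serve_forever()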
You could use subprocess to get the output.
from subprocess import check_output

out = check_output(["curl", "-d", "longitude=-77.036133&latitude=40.513799", "http://localhost:8080/"])
If you want to pass the lat and long as args:
lat, lon = ....
out = check_output(["curl", "-d", "longitude={}&latitude={}".format(lon, lat), "http://localhost:8080/"])
But there are lots of ways using just python to do what you want using requests etc... If it is json that gets returned requests can parse the json into a dict where you can access the output by key.
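For instance, a sketch of that requests-only route (assuming the server replies with JSON such as ["Kentucky"]):

import requests

# curl -d sends a form-encoded POST, so data= is the closer match than params=
r = requests.post("http://localhost:8080/", data={"longitude": -2, "latitude": 4})
states = r.json()  # e.g. ["Kentucky"]
print(states[0])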
If you are passing in the lat and lon from the command line you can use sys.argv:
import sys

lat, lon = sys.argv[1:]
out = check_output(["curl", "-d", "longitude={}&latitude={}".format(lon, lat), "http://localhost:8080/"])
So you run the script and pass the args like:
$ cat test.py
import sys
lat, lon = sys.argv[1:]
print(lat, lon)
$ python test.py 1234 5678
('1234', '5678')
Obviously passing lat and lon to check_output in your own script.
If you want to actually parse the output of the curl command to get the variables:
s = """[1] 21507
$ curl -d "longitude=-2&latitude=4" http://localhost:8080/
["Kentucky"]
$"""
import re
lon = re.search("longitude=(-?\d+(\.\d+)?)", s)
lat = re.search("latitude=(-?\d+(\.\d+)?)", s)
print(lon.group(1), lat.group(1))
('-2', '4')
Don't waste your time with curl when you have Python modules like requests at your fingertips
import requests
payload = {'longitude': '-77.036133', 'latitude': '40.513799'}
r = requests.post("http://localhost:8080/", data=payload)
txt = r.text
As a general practice, you shouldn't use the system shell from scripts unless what you need is unfeasible in your native scripting language. With Python it's going to be much easier to use native libraries than to call curl for this. To install the requests library, see the requests installation doc.