Converting a Bash script to Python

I have the following bash script I've been trying to convert to Python. I have the main part working, but I'm having trouble translating these two lines: cut=${auth#*<ChallengeCode>} and authStr=${cut%</ChallengeCode>*}. The first request returns XML that contains <ChallengeCode>; I need to be able to extract that code and store it for later use.
BASH CODE:
#!/bin/bash
IP=IP_ADDR
USER=USERNAME
PASS=PASSWORD
auth=$(curl -ks -H "Content-Type: text/xml" -c "cookies.txt" "https://${IP}/goform/login?cmd=login&user=admin&type=1")
cut=${auth#*<ChallengeCode>}
authStr=${cut%</ChallengeCode>*}
hash=$(echo -n ${authStr}:GDS3710lZpRsFzCbM:${PASS} | md5sum | tr -d '\n')
hash=$(echo $hash | cut -d' ' -f1 | tr -d '\n')
curl -ks -H "Content-Type: text/xml" -c "cookies.txt" "https://${IP}/goform/login?cmd=login&user=admin&authcode=${hash}&type=1"
curl -ks -H "Content-Type: image/jpeg" --cookie "cookies.txt" "https://${IP}/snapshot/view0.jpg" >> snapshot.jpg
PYTHON CODE:
import requests
import hashlib
ip = "192.168.100.178"
user = "admin"
password = "Password1"
headers = {"Content-Type": "text/xml"}
params = {"cmd": "login", "user": user, "type": "1"}
auth = requests.get('https://{0}/goform/login'.format(ip), headers=headers, params=params, verify=False)
chcode = ...  # This is where I want to put the challenge code I get back from the previous request
hmd5 = hashlib.md5()
hstring = "{0}:GDS3710lZpRsFzCbM:{1}".format(chcode, password).encode()
hmd5.update(hstring)
hashauth = hmd5.hexdigest()
response = requests.get('https://{0}/snapshot/view0.jpg'.format(ip), headers=headers, cookies=auth.cookies, verify=False)
Any advice on how I could improve the code would also be appreciated.

If your request returns XML, it'd be suitable to use an XML parser. Presuming you've imported xml.etree.ElementTree, perhaps with:
import xml.etree.ElementTree as ET
You can have it parse your response:
root_el = ET.fromstring(auth.text)
And then use XPath (it might differ depending on the structure of your XML) to find your element and get the text it contains:
chcode = root_el.find("./ChallengeCode").text
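Putting that together with the rest of your script, a minimal sketch of the whole flow might look like this (the endpoints, parameters, and the GDS3710lZpRsFzCbM salt come straight from your Bash script; the XPath assumes ChallengeCode is a direct child of the root, so adjust it to your actual XML):
import hashlib
import xml.etree.ElementTree as ET
import requests

ip = "192.168.100.178"
password = "Password1"

session = requests.Session()  # keeps cookies between requests, like curl's -c/-b cookies.txt
params = {"cmd": "login", "user": "admin", "type": "1"}
auth = session.get("https://{0}/goform/login".format(ip), params=params, verify=False)

# Pull the challenge code out of the XML response
root_el = ET.fromstring(auth.text)
chcode = root_el.find("./ChallengeCode").text

# md5 of challenge:salt:password, as in the Bash script
hashauth = hashlib.md5("{0}:GDS3710lZpRsFzCbM:{1}".format(chcode, password).encode()).hexdigest()

params = {"cmd": "login", "user": "admin", "authcode": hashauth, "type": "1"}
session.get("https://{0}/goform/login".format(ip), params=params, verify=False)

# Authenticated; fetch the snapshot
snap = session.get("https://{0}/snapshot/view0.jpg".format(ip), verify=False)
with open("snapshot.jpg", "wb") as f:
    f.write(snap.content)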

While one of the virtues of a real programming language is the availability of powerful libraries (e.g., for parsing XML), there is a direct analog of your Bash substring operations that is particularly simple given the limited use of wildcards:
${a#*foo} — a.partition("foo")[-1]
${a%foo*} — a.rpartition("foo")[0]
${a##*foo} — a.rpartition("foo")[-1]
${a%%foo*} — a.partition("foo")[0]
(The behaviors differ only when foo does not appear in a: the parameter expansions leave the string unchanged, while the [-1] slices return an empty string.)
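Applied to the two lines from your question, a quick sketch (the sample XML string is made up for illustration):
auth = "<Response><ChallengeCode>abc123</ChallengeCode></Response>"  # assumed sample response
cut = auth.partition("<ChallengeCode>")[-1]        # cut=${auth#*<ChallengeCode>}
authStr = cut.rpartition("</ChallengeCode>")[0]    # authStr=${cut%</ChallengeCode>*}
print(authStr)  # abc123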

Related

Search txt file line by line into variable and for each value run CURL command

I filtered issues without subtasks using Python:
#!/usr/bin/python
import sys
import json
sys.stdout = open('output.txt','wt')
datapath = sys.argv[1]
data = json.load(open(datapath))
for issue in data['issues']:
    if len(issue['fields']['subtasks']) == 0:
        print(issue['key'])
In output.txt, the tasks without subtasks are stored (and it works fine):
TECH-729
TECH-124
Now I have a different issue: it seems the value in the $p variable isn't passed to curl (I'm able to log in to JIRA but not to create subtasks):
while read -r p; do
    echo $p
    curl -D- -u user:pass -X POST --data "{\"fields\":{\"project\":{\"key\":\"TECH\"},\"parent\":{\"key\":\"$p\"},\"summary\":\"TestChargenNr\",\"description\":\"some description\",\"issuetype\":{\"name\":\"Sub-task\"},\"customfield_10107\":{\"id\":\"10400\"}}}" -H "Content-Type:application/json" https://jira.company.com/rest/api/latest/issue/
done <output.txt
The echo output is as it should be:
TECH-731
TECH-729
(so curl should run twice, once for each output value).
But curl just logs in without creating subtasks; when hardcoding a key instead of $p, curl executes twice for the same project ID.
I really don't know why, but this code worked, thanks everyone:
for project in `cat output.txt`; do
    echo $project
    curl -D- -u user:pass -X POST --data "{\"fields\":{\"project\":{\"key\":\"TECH\"},\"parent\":{\"key\":\"$project\"},\"summary\":\"TestChargenNr\",\"description\":\"some description\",\"issuetype\":{\"name\":\"Sub-task\"},\"customfield_10107\":{\"id\":\"10400\"}}}" -H "Content-Type:application/json" https://jira.company.com/rest/api/latest/issue/
done
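For reference, a rough Python equivalent of the working loop using requests (the URL, credentials, and field IDs are copied from the question); the strip() also removes stray carriage returns in output.txt, which is one common reason a value read by while read misbehaves inside a quoted curl body:
import requests

with open('output.txt') as f:
    for line in f:
        key = line.strip()  # drops the trailing newline (and any \r)
        if not key:
            continue
        payload = {
            "fields": {
                "project": {"key": "TECH"},
                "parent": {"key": key},
                "summary": "TestChargenNr",
                "description": "some description",
                "issuetype": {"name": "Sub-task"},
                "customfield_10107": {"id": "10400"},
            }
        }
        r = requests.post("https://jira.company.com/rest/api/latest/issue/",
                          json=payload, auth=("user", "pass"))
        print(key, r.status_code)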

Want to run a curl command either through python's subprocess or pycurl

I have the following command which adds a user to the administrator group of a gerrit instance:
curl -X POST -H "Content-Type:application/json;charset=UTF-8" -u nidhi:pswd http://host_ip:port/a/groups/Administrators/members.add -d '{"members":["user@example.com"]}'
When I run this command on my terminal, it runs perfectly and gives the expected output.
But, I want to execute this command in python either using the subprocess or pycurl library.
Using subprocess, I wrote the following code:
import string
import subprocess

def add_user_to_administrator(u_name, url):
    bashCommand = 'curl -X POST -H "Content-Type:application/json;charset=UTF-8" -u nidhi:pswd http://' + url + '/a/groups/Administrators/members.add -d ' + "'" + '{"members":["$u_n@example.com"]}' + "'"
    bashCommand = string.Template(bashCommand).substitute({'u_n': u_name})
    print bashCommand.split()
    process = subprocess.Popen(bashCommand.split())
It shows no error but no changes are seen in the administrator group.
I tried the same using pycurl,
import json
import pycurl
import StringIO

def add_user_to_administrator2(u_name, url):
    pf = json.dumps({"members": [str(str(u_name) + "@example.com")]})
    headers = ['Content-Type:application/json;charset=UTF-8']
    pageContents = StringIO.StringIO()
    p = pycurl.Curl()
    p.setopt(pycurl.FOLLOWLOCATION, 1)
    p.setopt(pycurl.POST, 1)
    p.setopt(pycurl.HTTPHEADER, headers)
    p.setopt(pycurl.POSTFIELDS, pf)
    p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
    p.setopt(pycurl.VERBOSE, True)
    p.setopt(pycurl.DEBUGFUNCTION, test)
    p.setopt(pycurl.USERPWD, "nidhi:pswd")
    # note: "Administrators" appears twice in this path
    pass_url = str("http://" + url + "/a/groups/Administrators/Administrators/members.add").rstrip('\n')
    print pass_url
    p.setopt(pycurl.URL, pass_url)
    p.perform()
    p.close()
    pageContents.seek(0)
    print pageContents.readlines()
This throws an error: it cannot find the account members.
The url variable mentioned is of the form host_ip:port.
I have tried a lot to fix these errors but I don't know where I am going wrong. Any help would be appreciated.
a) string escaping
For the subprocess/curl usage, you should escape your string tokens rather than manually adding extra ' characters:
...stuff'+"'"+'more.stuff...
Escape using \ before the character, i.e.
"curl -X POST -H \"Content-Type:application/json;charset=UTF-8\""
will keep the " around the Content-Type section.
More on escape characters here: Lexical Analysis - String Literals
...The backslash (\) character is used to escape characters that otherwise have a special meaning...
b) popen
Looking at the Popen docs, their example uses shlex.split() to split the command line into args. shlex splits the string a bit differently:
print(bashCommand.split())
['curl', '-X', 'POST', '-H', '"Content-Type:application/json;charset=UTF-8"', '-u', 'nidhi:pswd', 'http://TEST_URL/a/groups/Administrators/members.add', '-d', '\'{"members":["TEST_USER@example.com"]}\'']
print(shlex.split(bashCommand))
['curl', '-X', 'POST', '-H', 'Content-Type:application/json;charset=UTF-8', '-u', 'nidhi:pswd', 'http://TEST_URL/a/groups/Administrators/members.add', '-d', '{"members":["TEST_USER@example.com"]}']
You can see that shlex removes the excess quoting.
c) http response code
Try using the -I option in curl to get an HTTP response code back (and the rest of the HTTP headers):
$ curl -h
...
-I, --head Show document info only
Even though you're using subprocess to start/make the request, curl should still print the response to the console (stdout).
d) putting it all together
I changed how the url and u_name are interpolated into the string.
import shlex
import subprocess

def add_user_to_administrator(u_name, url):
    bashCommand = "curl -I -X POST -H \"Content-Type:application/json;charset=UTF-8\" -u nidhi:pswd http://%(url)s/a/groups/Administrators/members.add -d '{\"members\":[\"%(u_n)s@example.com\"]}'"
    bashCommand = bashCommand % {'u_n': u_name, 'url': url}
    args = shlex.split(bashCommand)
    process = subprocess.Popen(args)

add_user_to_administrator('TEST_USER', 'TEST_URL')
If none of this helps, and you're getting no response from gerrit, I'd check gerrit logs to see what happens when it receives your request.
You should try urllib2 (Python 2) or urllib (Python 3) to post the JSON data.
For subprocess.Popen, subprocess.Popen.communicate (https://docs.python.org/3.5/library/subprocess.html?highlight=subprocess#subprocess.Popen.communicate) may help; or just run bashCommand in a shell to see the difference.
As for pycurl, I have not used it, but please add the error info.
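For example, a minimal urllib2 (Python 2) sketch of the same POST, reusing the endpoint and credentials from the question; treat it as a starting point rather than tested code:
import base64
import json
import urllib2

def add_user_to_administrator(u_name, url):
    # Same JSON body the curl command sends
    payload = json.dumps({"members": [u_name + "@example.com"]})
    req = urllib2.Request(
        "http://" + url + "/a/groups/Administrators/members.add",
        data=payload,
        headers={"Content-Type": "application/json;charset=UTF-8"},
    )
    # Basic auth header, equivalent to curl's -u nidhi:pswd
    req.add_header("Authorization", "Basic " + base64.b64encode("nidhi:pswd"))
    response = urllib2.urlopen(req)
    print response.read()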

How to parse and load JSON in a bash shell script?

I am trying to load and print a JSON response in a shell script. I don't have any idea how to achieve this. Please help me with this.
Code:
#!/bin/sh
malop_q=$(curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)')
echo $malop_q
JSON Response:
{"data":[{"value":"11.945403842773683082"},{"value":"11.945403842773683082"},{"value":"11.945403842773683082"}]}
Expected output: I need to print the values from the above JSON response:
11.945403842773683082
11.945403842773683082
11.945403842773683082
Thanks in advance.
The following Python code does the parsing, assuming you save it as my_json.py:
import json, sys

obj = json.load(sys.stdin)
for i in range(len(obj['data'])):
    print obj['data'][i]['value']
You can get the response using:
malop_q=$(curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)')
echo "$malop_q" | python my_json.py
or in one line:
curl -X GET -k -H "SEC: xxxxxxxxxxxxxxxxxxxxxx" 'https://127.0.0.1/api/reference_data/sets/malopid?fields=data(value)' | python my_json.py
With Python:
import json

with open('file.json') as json_file:
    datas = json.load(json_file)
    for d in datas["data"]:
        print(d["value"])

How to retrieve data from a curl -d command in Python

I need to implement a Python script that acts as a server handling data in the given format:
$ curl -d "longitude=-2&latitude=4" http://localhost:8080
This script will interpret the longitude and latitude data in such a way as to return where that data falls. Another example, with potential output, would look something like this:
$ ./state-server &
[1] 21507
$ curl -d "longitude=-2&latitude=4" http://localhost:8080/
["Kentucky"]
$
How would I access these variables within my script file?
Thanks.
You could use subprocess to get the output.
from subprocess import check_output
out = check_output(["curl", "-d", "longitude=-77.036133&latitude=40.513799", "http://localhost:8080/"])
If you want to pass the lat and long as args:
lat, lon = ...
out = check_output(["curl", "-d", "longitude={}&latitude={}".format(lat, lon), "http://localhost:8080/"])
But there are lots of ways to do what you want in pure Python, using requests etc. If JSON gets returned, requests can parse it for you, so you can access the output by key.
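For instance (a sketch; the response shape is taken from the question's example):
import requests

out = requests.post("http://localhost:8080/", data={"longitude": -2, "latitude": 4})
states = out.json()  # e.g. ["Kentucky"], parsed from the JSON body
print(states[0])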
If you are passing in the lat and lon from the command line you can use sys.argv:
import sys
lat, lon = sys.argv[1:]
out = check_output(["curl", "-d", "longitude={}&latitude={}".format(lat, lon), "http://localhost:8080/"])
So you run the script and pass the args like:
$ cat test.py
import sys
lat, lon = sys.argv[1:]
print(lat, lon)
$ python test.py 1234 5678
('1234', '5678')
Obviously passing lat and lon to check_output in your own script.
If you want to actually parse the output of the curl command to get the variables:
s = """[1] 21507
$ curl -d "longitude=-2&latitude=4" http://localhost:8080/
["Kentucky"]
$"""
import re
lon = re.search(r"longitude=(-?\d+(\.\d+)?)", s)
lat = re.search(r"latitude=(-?\d+(\.\d+)?)", s)
print(lon.group(1), lat.group(1))
('-2', '4')
Don't waste your time shelling out to curl when you have Python modules like requests at your fingertips. Note that curl -d issues a POST with form data, so the requests equivalent is requests.post with data:
import requests

payload = {'longitude': '-77.036133', 'latitude': '40.513799'}
r = requests.post("http://localhost:8080/", data=payload)
txt = r.text
As a general practice, you shouldn't shell out to the system from scripts unless it's unfeasible in your native scripting language. With Python it's much easier to use native libraries than to call curl for this. To install the requests library, see the requests installation doc.
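To answer the original question of accessing the variables server-side, here is a minimal sketch using Python 3's standard library; the state lookup itself is left as a placeholder, and the ["Kentucky"] response just mirrors the example output:
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs
import json

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # curl -d sends a POST with a form-encoded body like "longitude=-2&latitude=4"
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        lon = float(fields["longitude"][0])
        lat = float(fields["latitude"][0])
        # TODO: look up which state (lon, lat) falls in; placeholder response below
        body = json.dumps(["Kentucky"]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), Handler).serve_forever()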

What does this Perl XML filter look like in Python?

curl -u $1:$2 --silent "https://mail.google.com/mail/feed/atom" | perl -ne 'print "\t" if /<name>/; print "$2\n" if /<(title|name)>(.*)<\/\1>/;'
I have this shell script, which gets the Atom feed with command-line arguments for the username and password. I was wondering if this kind of thing is possible in Python, and if so, how I would go about doing it. The Atom feed is just regular XML.
Python does not lend itself to compact one-liners quite as well as Perl. This is primarily for three reasons:
With Perl, whitespace is insignificant in almost all cases. In Python, whitespace is very significant.
Perl has some helpful shortcuts for one-liners, such as perl -ne or perl -pe, that put an implicit loop around the line of code.
There is a large body of cargo-cult Perl one-liners to do useful things.
That all said, this Python is close to what you posted in Perl:
curl -u $1:$2 --silent "https://mail.google.com/mail/feed/atom" | python -c '
import sys
for s in sys.stdin:
    s = s.strip()
    if not s: print "\t",
    else: print s
'
It is a little difficult to do better because, as stated in my comment, the Perl you posted is incomplete. You have:
perl -ne 'print "\t" if //; print "$2\n" if /(.*)/;'
Which is equivalent to:
LINE:
while (<>) {
print "\t" if //; # print a tab for a blank line
print "$2\n" if /(.*)/; # nonsensical. Print second group but only
# a single match group defined...
}
Edit
While it is trivial to rewrite that Perl in Python, here is something a bit better:
#!/usr/bin/python
import sys
import xml.dom.minidom
from xml.dom.minidom import parseString

def get_XML_doc_stdin(f):
    # alternative: parse a file-like object directly
    return xml.dom.minidom.parse(f)

def get_tagged_data2(tag, index=0):
    # pull the text of the index-th element with the given tag name
    xmlData = dom.getElementsByTagName(tag)[index].firstChild.data
    return xmlData

data = sys.stdin.read()
dom = parseString(data)
ele2 = get_tagged_data2('title')
print ele2
count = int(get_tagged_data2('fullcount'))
print count, "New Messages:"
for i in range(0, count):
    nam = get_tagged_data2('name', i)
    email = get_tagged_data2('email', i)
    print " {0}: {1} <{2}>".format(i + 1, nam, email)
Now save that in a text file, run chmod +x on it, then:
curl -u $1:$2 --silent "https://mail.google.com/mail/feed/atom" |
/path/pythonfile.py
It produces this:
Gmail - Inbox for xxxxxxx@gmail.com
2 New Messages:
1: bob smith <bob@smith.com>
2: Google Alerts <googlealerts-noreply@google.com>
Edit 2
And if you don't like that, here is the Python one-line filter:
curl -u $1:$2 --silent "https://mail.google.com/mail/feed/atom" | python -c '
import sys, re
for t, m in re.findall(r"<(title|name)>(.*)</\1>", sys.stdin.read()):
    print "\t", m
'
You may use a "URL opener" from the urllib2 standard Python module with a handler for authentication. For example:
#!/usr/bin/env python
import getpass
import sys
import urllib2

def main(program, username=None, password=None, url=None):
    # Get input if any argument is missing
    username = username or raw_input('Username: ')
    password = password or getpass.getpass('Password: ')
    url = url or 'https://mail.google.com/mail/feed/atom'

    # Create password manager
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, username, password)

    # Create HTTP Authentication handler and URL opener
    authhandler = urllib2.HTTPBasicAuthHandler(password_mgr)
    opener = urllib2.build_opener(authhandler)

    # Fetch URL and print content
    response = opener.open(url)
    print response.read()

if __name__ == '__main__':
    main(*sys.argv)
If you'd like to extract information from the feed too, you should check how to parse Password-Protected Feeds with feedparser.
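As a quick sketch of that approach (assuming feedparser is installed; embedding the credentials in the URL is one way to pass basic auth to it):
import feedparser

# USERNAME and PASSWORD are placeholders
d = feedparser.parse("https://USERNAME:PASSWORD@mail.google.com/mail/feed/atom")
print(d.feed.title)
for entry in d.entries:
    print(entry.title)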
