Adding a payload when opening urls with urllib - python

I created a chatbot that connects to a server and can read messages. Now I'm at the point where I need to send messages, which requires a request payload (according to the Network tab in Developer Tools in Google Chrome). My opener consists of nothing but the following:
import urllib
import urllib2
from cookielib import CookieJar
self.cj = CookieJar()
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
To stay connected and read messages, I do the following:
def connect(self, settings, xhr):
    xhr_polling = self.get_code(xhr)
    data = self.opener.open("http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/" + xhr_polling +
                            "?name=HairyBot&key=" + settings['chatkey'] +
                            "&roomId=" + str(settings['room']) + "&t=" + timestamp())
    return data.read()
settings consists of the roomId and chatkey. The timestamp function creates a timestamp in accordance with what the server needs (which isn't necessary to know for this question). Back to the question though: how can a payload be added to the opener to send a message to the chat?

As a suggestion, I recommend using the Requests library. It makes this stuff really simple:
import requests
session = requests.session()  # For connection pooling

def connect(self, settings, xhr):
    xhr_polling = self.get_code(xhr)
    request = session.get('http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/' + xhr_polling, params={
        'name': 'HairyBot',
        'key': settings['chatkey'],
        'roomId': settings['room'],
        't': timestamp()
    })
    return request.text
If you want to send a POST request instead, just change get to post and add some data:
def connect(self, settings, xhr):
    xhr_polling = self.get_code(xhr)
    request = session.post('http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/' + xhr_polling, params={
        'name': 'HairyBot',
        'key': settings['chatkey'],
        'roomId': settings['room'],
        't': timestamp()
    }, data={
        'key': 'value'
    })
    return request.text

I'm not sure what you mean by "a payload", but presumably it's just another form variable named payload. If so, you send it the same way you do any other form variable, and you're already sending a bunch—roomId, t, etc.
One way of sending form variables is by URL-encoding them, tacking them onto the query string, and sending a GET request. That's what you're doing now. (It would be better to use proper urllib methods instead of hacking it together with string concatenation, but the end result is the same.)
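For example, a rough sketch of that query-string variant using urllib.urlencode instead of manual concatenation (reusing the names from the question; nothing new here beyond letting the library do the encoding):
import urllib

query = urllib.urlencode({
    'name': 'HairyBot',
    'key': settings['chatkey'],
    'roomId': settings['room'],
    't': timestamp(),
})
data = self.opener.open("http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/" + xhr_polling + "?" + query)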
The other way is by sending a POST body. The urllib2 documentation explains how to do this, and there are plenty of good examples online, but basically all you have to do is call urllib.urlencode() on your name-value pairs, then pass the result as the second argument (or as a keyword argument named data) to the open call.
In other words, something like this:
data = self.opener.open("http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/" + xhr_polling,
urllib.urlencode(("name", "HairyBot"),
("key", settings['chatkey']),
("roomId", str(settings['room']),
("key", settings['chatkey']),
("t", timestamp()),
("payload", payload)))
Or, if you prefer, most servers will allow you to send some parameters on the query string and others in the POST data, so you can leave your existing code alone and just make one change:
data = self.opener.open("http://chat2-1.wikia.com:80/socket.io/1/xhr-polling/" + xhr_polling + "?name=HairyBot&key=" +
settings['chatkey'] + "&roomId=" + str(settings['room']) + "&t=" + timestamp(),
urllib.urlencode(("payload", payload)))

Related

How to print text of POST request without making request

If I make the request
import requests

api_key = 'asdfklhsdfkjahsdlgkjahlkdjahfsa'
url = 'https://www.website.com'
headers = {'api-key': api_key,
           'Content-Type': 'application/json'}
request_data = {'foo': 'bar', 'egg': 'spam'}
result = requests.post(url, headers=headers, data=request_data)
The server is contacted. Suppose that instead I want to do something like
request_string = requests.foobar(url, headers=headers, data=request_data)
import os
os.system('curl ' + request_string)
So that I can look to see what the request is doing without bothering the server (possibly to the point that I could c&p it into curl), what would foobar be? Or in general, what is a way to inspect the contents of the request without making it?
Here's another post that implies you can use Request().prepare() to observe the request without actually sending it.
Furthermore, the official documentation reads "In some cases you may wish to do some extra work to the body or headers (or anything else really) before sending a request. The simple recipe for this is the following" and then it illustrates Request.prepare().
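For illustration, a rough sketch of that approach, reusing the names from the question above (url, headers, request_data):
import requests

# Build the request object without sending it.
req = requests.Request('POST', url, headers=headers, data=request_data)
prepared = req.prepare()

# Inspect exactly what would go over the wire.
print(prepared.method, prepared.url)
print(prepared.headers)
print(prepared.body)

# If it looks right, it can still be sent later:
# with requests.Session() as s:
#     response = s.send(prepared)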

(Python) Bittrex API v3 keeps returning invalid content hash

Writing a bot for a personal project, and the Bittrex api refuses to validate my content hash. I've tried everything I can think of and all the suggestions from similar questions, but nothing has worked so far. Tried hashing 'None', tried a blank string, tried the currency symbol, tried the whole uri, tried the command & balance, tried a few other things that also didn't work. Reformatted the request a few times (bytes/string/dict), still nothing.
Documentation says to hash the request body (which seems synonymous with payload in similar questions about making transactions through the api), but it's a simple get/check balance request with no payload.
Problem is, I get a 'BITTREX ERROR: INVALID CONTENT HASH' response when I run it.
Any help would be greatly appreciated, this feels like a simple problem but it's been frustrating the hell out of me. I am very new to python, but the rest of the bot went very well, which makes it extra frustrating that I can't hook it up to my account :/
import hashlib
import hmac
import json
import os
import time
import requests
import sys
# Base Variables
Base_Url = 'https://api.bittrex.com/v3'
APIkey = os.environ.get('B_Key')
secret = os.environ.get('S_B_Key')
timestamp = str(int(time.time() * 1000))
command = 'balances'
method = 'GET'
currency = 'USD'
uri = Base_Url + '/' + command + '/' + currency
payload = ''
print(payload) # Payload Check
# Hashes Payload
content = json.dumps(payload, separators=(',', ':'))
content_hash = hashlib.sha512(bytes(json.dumps(content), "utf-8")).hexdigest()
print(content_hash)
# Presign
presign = (timestamp + uri + method + str(content_hash) + '')
print(presign)
# Create Signature
message = f'{timestamp}{uri}{method}{content_hash}'
sign = hmac.new(secret.encode('utf-8'), message.encode('utf-8'),
                hashlib.sha512).hexdigest()
print(sign)
headers = {
    'Api-Key': APIkey,
    'Api-Timestamp': timestamp,
    'Api-Signature': sign,
    'Api-Content-Hash': content_hash
}
print(headers)
req = requests.get(uri, json=payload, headers=headers)
tracker_1 = "Tracker 1: Response =" + str(req)
print(tracker_1)
res = req.json()
if req.ok is False:
    print('bullshit error #1')
    print("Bittrex response: %s" % res['code'], file=sys.stderr)
I can see two main problems:
You are serialising/encoding the payload separately for the hash (with json.dumps and then bytes) and for the request (with the json=payload parameter to requests.get). You don't have any way of knowing how the requests library will format your data, and if even one byte is different you will get a different hash. It is better to convert your data to bytes first, and then use the same bytes for the hash and for the request body.
GET requests do not normally have a body (see this answer for more details), so it might be that the API is ignoring the payload you are sending. You should check the API docs to see if you really need to send a request body with GET requests. A sketch combining both points is shown below.
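As a minimal sketch of those two points, reusing the variable names and signing scheme from the question (the endpoint is simplified to /balances here; I haven't verified this against the live API):
import hashlib
import hmac
import os
import time

import requests

api_key = os.environ.get('B_Key')
secret = os.environ.get('S_B_Key')

uri = 'https://api.bittrex.com/v3/balances'
method = 'GET'
timestamp = str(int(time.time() * 1000))

# Decide on the exact body bytes once; for a GET there is no body,
# so hash the empty byte string and send no body at all.
body = b''
content_hash = hashlib.sha512(body).hexdigest()

message = f'{timestamp}{uri}{method}{content_hash}'
sign = hmac.new(secret.encode('utf-8'), message.encode('utf-8'),
                hashlib.sha512).hexdigest()

headers = {
    'Api-Key': api_key,
    'Api-Timestamp': timestamp,
    'Api-Signature': sign,
    'Api-Content-Hash': content_hash
}

response = requests.get(uri, headers=headers)
print(response.status_code, response.text)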

What is the pythonic way of building full urls for links?

I'm looking for a way to build urls in python3 without having to do string concatenation. I get that I can
import requests
url_endpoint = 'https://www.duckduckgo.com'
mydict = {'q': 'whee! Stanford!!!', 'something': 'else'}
resp = requests.get(url_endpoint, params=mydict)
print(resp.url) # THIS IS EXACTLY WHAT I WANT
or
from requests import Request, Session
s = Session()
req = Request('GET', url, params={'q': 'blah'})
print(req.url)
# I didn't get this to work, but from the docs
# it should build the url without making the call
or
url = baseurl + "?" + urllib.parse.urlencode(params)
I like that the requests library intelligently decides to drop the ? if it isn't needed, but that code actually makes a full GET request instead of just building the full text url (which I plan to dump into an html tag). I am using Django, but I didn't see anything to help with that in the core library.
Django comes with QueryDicts which basically do everything you want.
from django.http import QueryDict

def make_url(url, args=None):
    query = QueryDict(mutable=True)
    query.update(args or {})
    return '{}{}{}'.format(url, '?' if query else '', query.urlencode())
It supports multiple values per argument just like you can encounter in a url: example.com/foo?a=1&a=2&a=3.
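For example, a hypothetical usage inside a Django project (the URL and parameters are made up):
print(make_url('https://example.com/search', {'q': 'python'}))
# -> https://example.com/search?q=python
print(make_url('https://example.com/search'))
# -> https://example.com/search  (no stray '?')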

Infoblox WAPI: how to search for an IP

Our network team uses InfoBlox to store information about IP ranges (Location, Country, etc.)
There is an API available but Infoblox's documentation and examples are not very practical.
I would like to search via the API for details about an IP. To start with - I would be happy to get anything back from the server. I modified the only example I found
import requests
import json
url = "https://10.6.75.98/wapi/v1.0/"
object_type = "network"
search_string = {'network':'10.233.84.0/22'}
response = requests.get(url + object_type, verify=False,
data=json.dumps(search_string), auth=('adminname', 'adminpass'))
print "status code: ", response.status_code
print response.text
which returns an error 400
status code: 400
{ "Error": "AdmConProtoError: Invalid input: '{\"network\": \"10.233.84.0/22\"}'",
"code": "Client.Ibap.Proto",
"text": "Invalid input: '{\"network\": \"10.233.84.0/22\"}'"
}
I would appreciate any pointers from someone who managed to get this API to work with Python.
UPDATE: Following up on the solution, below is a piece of code (it works, but it is not nice or streamlined and does not perfectly check for errors, etc.) in case someone one day needs to do the same as I did.
import requests
import simplejson

ipdict = {}  # cache of IP -> location lookups (kept global in the original script)

def ip2site(myip):  # argument is an IP we want to know the localization of (in extensible_attributes)
    baseurl = "https://the_infoblox_address/wapi/v1.0/"
    # first we get the network this IP is in
    r = requests.get(baseurl + "ipv4address?ip_address=" + myip,
                     auth=('youruser', 'yourpassword'), verify=False)
    j = simplejson.loads(r.content)
    # if the IP is not in any network an error message is dumped, including among others a key 'code'
    if 'code' not in j:
        mynetwork = j[0]['network']
        # now we get the extensible attributes for that network
        r = requests.get(baseurl + "network?network=" + mynetwork + "&_return_fields=extensible_attributes",
                         auth=('youruser', 'yourpassword'), verify=False)
        j = simplejson.loads(r.content)
        location = j[0]['extensible_attributes']['Location']
        ipdict[myip] = location
        return location
    else:
        return "ERROR_IP_NOT_MAPPED_TO_SITE"
By using requests.get with data=json.dumps(search_string), aren't you sending a GET request with the raw JSON string as its payload? Essentially, doing a
GET https://10.6.75.98/wapi/v1.0/network with a body of {"network": "10.233.84.0/22"}
I've been using the WebAPI with Perl, not Python, but if that is the way your code is trying to do things, it will probably not work very well. To send JSON to the server, do a POST and add a '_method' argument with 'GET' as the value:
POST https://10.6.75.98/wapi/v1.0/network
Content: {
"_method": "GET",
"network": "10.233.84.0/22"
}
Content-Type: application/json
Or, don't send JSON to the server and send
GET https://10.6.75.98/wapi/v1.0/network?network=10.233.84.0/22
which I am guessing you will achieve by dropping the json.dumps from your code and handing search_string to requests.get directly.
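For what it's worth, a rough sketch of both variants in Python with requests, using the same host, credentials, and network as in the question (I have only used the WAPI from Perl, so treat this as untested):
import requests

base = "https://10.6.75.98/wapi/v1.0/"
auth = ('adminname', 'adminpass')

# Variant 1: POST a JSON body and tell the WAPI to treat it as a read
response = requests.post(base + "network",
                         json={'_method': 'GET', 'network': '10.233.84.0/22'},
                         auth=auth, verify=False)
print(response.status_code)
print(response.text)

# Variant 2: plain GET with the search condition as query parameters
response = requests.get(base + "network",
                        params={'network': '10.233.84.0/22'},
                        auth=auth, verify=False)
print(response.status_code)
print(response.text)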

Making a POST call instead of GET using urllib2

There's a lot of stuff out there on urllib2 and POST calls, but I'm stuck on a problem.
I'm trying to do a simple POST call to a service:
url = 'http://myserver/post_service'
data = urllib.urlencode({'name' : 'joe',
'age' : '10'})
content = urllib2.urlopen(url=url, data=data).read()
print content
I can see the server logs, and they say that I'm making GET calls even though I'm passing the data argument to urlopen.
The library is raising a 404 error (not found), which is correct for a GET call; POST calls are processed fine (I've also tried with a POST from an HTML form).
Do it in stages, and modify the object, like this:
# make a string with the request type in it:
method = "POST"
# create a handler. you can specify different handlers here (file uploads etc)
# but we go for the default
handler = urllib2.HTTPHandler()
# create an openerdirector instance
opener = urllib2.build_opener(handler)
# build a request
data = urllib.urlencode(dictionary_of_POST_fields_or_None)
request = urllib2.Request(url, data=data)
# add any other information you want
request.add_header("Content-Type", 'application/json')
# overload the get method function with a small anonymous function...
request.get_method = lambda: method
# try it; don't forget to catch the result
try:
    connection = opener.open(request)
except urllib2.HTTPError as e:
    connection = e
# check. Substitute with appropriate HTTP code.
if connection.code == 200:
    data = connection.read()
else:
    # handle the error case. connection.read() will still contain data
    # if any was returned, but it probably won't be of any use
    pass
This way allows you to extend to making PUT, DELETE, HEAD and OPTIONS requests too, simply by substituting the value of method or even wrapping it up in a function. Depending on what you're trying to do, you may also need a different HTTP handler, e.g. for multi file upload.
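A hypothetical wrapper along those lines (the function name and signature are made up, not from any library):
import urllib
import urllib2

def http_request(method, url, fields=None, headers=None):
    # Encode form fields if given; None means an empty body
    data = urllib.urlencode(fields) if fields is not None else None
    request = urllib2.Request(url, data=data, headers=headers or {})
    # Force the desired HTTP verb, as in the snippet above
    request.get_method = lambda: method
    try:
        return urllib2.urlopen(request)
    except urllib2.HTTPError as e:
        return e

# e.g. http_request("DELETE", "http://myserver/post_service/42")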
This may have been answered before: Python URLLib / URLLib2 POST.
Your server is likely performing a 302 redirect from http://myserver/post_service to http://myserver/post_service/. When the 302 redirect is performed, the request changes from POST to GET (see Issue 1401). Try changing url to http://myserver/post_service/.
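If that is indeed the cause, the only change needed is the trailing slash (sketch, same names as the question):
url = 'http://myserver/post_service/'  # trailing slash avoids the redirect
data = urllib.urlencode({'name': 'joe', 'age': '10'})
content = urllib2.urlopen(url, data).read()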
Have a read of the urllib Missing Manual. Pulled from there is the following simple example of a POST request.
url = 'http://myserver/post_service'
data = urllib.urlencode({'name' : 'joe', 'age' : '10'})
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.read()
As suggested by @Michael Kent, do consider requests; it's great.
EDIT: That said, I do not know why passing data to urlopen() does not result in a POST request; it should. I suspect your server is redirecting, or misbehaving.
The requests module may ease your pain.
url = 'http://myserver/post_service'
data = dict(name='joe', age='10')
r = requests.post(url, data=data, allow_redirects=True)
print r.content
It should be sending a POST if you provide a data parameter (like you are doing):
from the docs:
"the HTTP request will be a POST instead of a GET when the data parameter is provided"
So add some debug output to see what's happening on the client side. You can modify your code to this and try again:
import urllib
import urllib2
url = 'http://myserver/post_service'
opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
data = urllib.urlencode({'name' : 'joe',
'age' : '10'})
content = opener.open(url, data=data).read()
Try this instead:
url = 'http://myserver/post_service'
data = urllib.urlencode({'name' : 'joe',
'age' : '10'})
req = urllib2.Request(url=url,data=data)
content = urllib2.urlopen(req).read()
print content
url="https://myserver/post_service"
data["name"] = "joe"
data["age"] = "20"
data_encoded = urllib2.urlencode(data)
print urllib2.urlopen(url + "?" + data_encoded).read()
May be this can help
