Alternative to requests.post function? - python

I can't use the Python requests package (long story; let's just assume I've exhausted all possibilities for using that package).
Is there an alternative package that would provide the exact functionality of the following code?
import requests
requests.post(URL, data=DATA, auth=(USERNAME, PASSWORD), headers=HEADER)

One alternative is httplib (Python 2; it was renamed http.client in Python 3):
import httplib
import urllib
from base64 import b64encode
# your form
form_data = {'a': 1, 'b': 2}
params = urllib.urlencode(form_data)
# build authorization
user_and_pass = b64encode(b"username:password").decode("ascii")
# headers (Content-Type tells the server how to parse the form body)
headers = {'Authorization': 'Basic %s' % user_and_pass,
           'Content-Type': 'application/x-www-form-urlencoded'}
# connection
conn = httplib.HTTPConnection("example.com")
conn.request('POST', '/v3/call_api', params, headers)
# get result
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()
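On Python 3, a minimal sketch of the same request with http.client (which is what httplib became), using the same placeholder host and path as above:
import http.client
import urllib.parse
from base64 import b64encode
# your form
form_data = {'a': 1, 'b': 2}
params = urllib.parse.urlencode(form_data)
# build authorization
user_and_pass = b64encode(b"username:password").decode("ascii")
# headers
headers = {'Authorization': 'Basic %s' % user_and_pass,
           'Content-Type': 'application/x-www-form-urlencoded'}
# connection
conn = http.client.HTTPConnection("example.com")
conn.request('POST', '/v3/call_api', params, headers)
# get result
response = conn.getresponse()
print(response.status, response.reason)
data = response.read()
conn.close()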

If your only problem is importing because of the module name, you can always import by full path, or alter the module search path and reset it after importing. To avoid conflicts you can use import requests as requests2 or something like that.
See the following question for the first option, or the documentation about the search path:
How to import a module given the full path?
https://docs.python.org/2/tutorial/modules.html#the-module-search-path
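For example, a minimal sketch of the first option on Python 3 (the file path is hypothetical, and the alias avoids clashing with the real module name):
import importlib.util

# Load the package from an explicit file path (hypothetical) under an alias.
spec = importlib.util.spec_from_file_location(
    "requests2", "/opt/mylibs/requests/__init__.py")
requests2 = importlib.util.module_from_spec(spec)
spec.loader.exec_module(requests2)

requests2.post(URL, data=DATA, auth=(USERNAME, PASSWORD), headers=HEADER)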

From the urllib2.urlopen documentation:
"the HTTP request will be a POST instead of a GET when the data parameter is provided."
So something like this (I'll use an image example just because it was so darn obscure to figure out):
import urllib
import urllib2
import numpy as np
import cv2
# a blank 512x512 grayscale image to send
image_to_send = np.zeros((512, 512), np.uint8)
# encode the image as PNG and grab the raw bytes
buffer_image = cv2.imencode('.png', image_to_send)[1].tostring()
post_data = urllib.urlencode((('img_type', '.png'), ('img_data', buffer_image)))
req = urllib2.urlopen('http://www.yourdestination.com/imageupload/', data=post_data)
If you want authentication, you'll need to subclass urllib.FancyURLopener and override prompt_user_passwd.
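A minimal sketch of that approach (Python 2; the credentials are placeholders):
import urllib

class AuthOpener(urllib.FancyURLopener):
    # Called automatically on a 401; return a (user, password) tuple.
    def prompt_user_passwd(self, host, realm):
        return ('username', 'password')

opener = AuthOpener()
req = opener.open('http://www.yourdestination.com/imageupload/', post_data)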

There is an alternative to Python requests you can try:
https://github.com/juancarlospaco/faster-than-requests

Related

Artifactory AQL Query Using Python 3 'Requests'

Would anybody be able to help me identify where I am going wrong with a very basic AQL query I am trying to execute using Python 3 and the Requests library?
No matter what I do, I can't seem to get past a 400: Bad Request. It is clearly something to do with the formatting of the data I am trying to POST, but I just can't see it. I'm assuming I should pass it as a string, since, as with posting SQL queries, these have to be in plain text.
Just as an aside, I can execute this query absolutely fine using Postman etc.
import requests
from requests.auth import HTTPBasicAuth
import json
HEADERS = {'Content-type': 'text'}
DATA = 'items.find({"type":"folder","repo":{"$eq":"BOS-Release-Builds"}}).sort({"$desc":["created"]}).limit(1)'
response = requests.post('https://local-Artifactory/artifactory/api/search/aql', headers=HEADERS, auth=HTTPBasicAuth('user', 'password'), data=DATA)
print(response.status_code)
##print(response.json())

def jprint(obj):
    text = json.dumps(obj, sort_keys=True, indent=4)
    print(text)

jprint(response.json())
print(response.url)
print(response.encoding)
OK, after sitting staring at the code for another few minutes I spotted my mistake.
I shouldn't have defined the Content-Type header in the request; instead, let the Requests library deal with it for you.
So the code should look like:
import requests
from requests.auth import HTTPBasicAuth
import json
DATA = 'items.find({"type":"folder","repo":{"$eq":"BOS-Release-Builds"}}).sort({"$desc":["created"]}).limit(1)'
response = requests.post('https://uk-artifactory.flowbird.group/artifactory/api/search/aql', auth=HTTPBasicAuth('user', 'password'), data=DATA)
print(response.status_code)
##print(response.json())

def jprint(obj):
    text = json.dumps(obj, sort_keys=True, indent=4)
    print(text)

jprint(response.json())
print(response.url)
print(response.encoding)
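As an aside, another fix that should presumably work is keeping the header but using the full MIME type, since 'text' alone is not a valid Content-Type and the AQL endpoint expects plain text:
HEADERS = {'Content-Type': 'text/plain'}
response = requests.post('https://local-Artifactory/artifactory/api/search/aql', headers=HEADERS, auth=HTTPBasicAuth('user', 'password'), data=DATA)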

Python3: Custom Encrypted Headers URLLIB - KrakenAPI

Alright, so I'm a little outside of my league on this one, I think.
I'm attempting to build the custom HTTP headers that are noted here:
API-Key = API key
API-Sign = Message signature using HMAC-SHA512 of (URI path + SHA256(nonce + POST data)) and base64 decoded secret API key
from https://www.kraken.com/help/api
I'm trying to work solely out of urllib if at all possible.
Below is one of many attempts to get it encoded as required:
import os
import sys
import time
import datetime
import urllib.request
import urllib.parse
import json
import hashlib
import hmac
import base64
APIKey = 'ThisKey'
secret = 'ThisSecret'
data = {}
data['nonce'] = int(time.time()*1000)
data['asset'] = 'ZUSD'
uripath = '/0/private/TradeBalance'
postdata = urllib.parse.urlencode(data)
encoded = (str(data['nonce']) + postdata).encode()
message = uripath.encode() + hashlib.sha256(encoded).digest()
signature = hmac.new(base64.b64decode(secret), message, hashlib.sha512)
sigdigest = base64.b64encode(signature.digest())
# this is purely for checking how things are coming along
print(sigdigest.decode())
headers = {
    'API-Key': APIKey,
    'API-Sign': sigdigest.decode()
}
The above may be working just fine; where I'm struggling now is getting it to the site appropriately.
This is my most recent attempt:
myBalance = urllib.urlopen('https://api.kraken.com/0/private/TradeBalance', urllib.parse.urlencode({'asset': 'ZUSD'}).encode("utf-8"), headers)
Any help is greatly appreciated.
Thanks!
urlopen doesn't support adding headers, so you need to create a Request object and pass it to urlopen:
url = 'https://api.kraken.com/0/private/TradeBalance'
body = urllib.parse.urlencode({'asset': 'ZUSD'}).encode("utf-8")
headers = {
    'API-Key': APIKey,
    'API-Sign': sigdigest.decode()
}
request = urllib.request.Request(url, data=body, headers=headers)
response = urllib.request.urlopen(request)
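From there, the reply can be decoded as JSON (a small follow-up sketch; the error/result keys reflect Kraken's documented reply shape):
import json

reply = json.loads(response.read().decode('utf-8'))
print(reply.get('error'), reply.get('result'))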

Issue with AppDynamics REST call with Python

I tried to call the AppDynamics API using Python requests but am facing an issue.
I wrote some sample code using the Python client, as follows...
from appd.request import AppDynamicsClient
c = AppDynamicsClient('URL', 'group', 'appd#123')
for app in c.get_applications():
    print app.id, app.name
It works fine.
But if I do a simple call like the following
import requests
usr = <uid>
pwd = <pwd>
url = 'http://10.201.51.40:8090/controller/rest/applications?output=JSON'
response = requests.get(url, auth=(usr, pwd))
print 'response', response
I get the following response:
response <Response [401]>
Am I doing anything wrong here ?
A couple of things:
I think the general URL format for AppDynamics applications is (notice the '#'):
url ='http://10.201.51.40:8090/controller/#/rest/applications?output=JSON'
Also, I think the requests.get method needs an additional parameter for the 'account'. For instance, my auth format looks like:
auth = (_username + '#' + _account, _password)
I am able to get a right response code back with this config. Let me know if this works for you.
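Putting both points together, a sketch of that config with requests (the account name 'customer1' and the credentials are placeholders; the URL is the one from the question):
import requests

auth = ('user' + '#' + 'customer1', 'password')  # username#account, password
url = 'http://10.201.51.40:8090/controller/rest/applications?output=JSON'
response = requests.get(url, auth=auth)
print('response', response.status_code)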
You could also use native python code for more control:
example:
import os
import sys
import urllib2
import base64
# if you are behind a proxy; otherwise comment out the next three lines
proxy = urllib2.ProxyHandler({'https': 'proxy:port'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
username = "YOUR APPD REST API USER NAME"
password = "YOUR APPD REST API PASSWORD"
#Enter your request
request = urllib2.Request("https://yourappdendpoint/controller/rest/applications/141/events?time-range-type=BEFORE_NOW&duration-in-mins=5&event-types=ERROR,APPLICATION_ERROR,DIAGNOSTIC_SESSION&severities=ERROR")
base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
response = urllib2.urlopen(request)
html = response.read()
This will get you the response and you can parse the XML as needed.
If you prefer JSON, simply specify output=JSON in the request.

Python - Get Header information from URL

I've been searching all around for a Python 3.x code sample to get HTTP header information.
Something as simple as PHP's get_headers has no obvious equivalent in Python, or maybe I'm just not sure how best to wrap my head around it.
In essence, I would like to code something where I can see whether a URL exists or not
something in the line of
h = get_headers(url)
if(h[0] == 200)
{
print("Bingo!")
}
So far, I tried
h = http.client.HTTPResponse('http://docs.python.org/')
but I always got an error.
To get an HTTP response code in python-3.x, use the urllib.request module:
>>> import urllib.request
>>> response = urllib.request.urlopen(url)
>>> response.getcode()
200
>>> if response.getcode() == 200:
... print('Bingo')
...
Bingo
The returned HTTPResponse Object will give you access to all of the headers, as well. For example:
>>> response.getheader('Server')
'Apache/2.2.16 (Debian)'
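The same response object also has a getheaders() method that returns all headers as (name, value) pairs; a REPL sketch (the exact list depends on the server):
>>> response.getheaders()
[('Server', 'Apache/2.2.16 (Debian)'), ('Content-Type', 'text/html'), ...]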
If the call to urllib.request.urlopen() fails, an HTTPError Exception is raised. You can handle this to get the response code:
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen(url)
    if response.getcode() == 200:
        print('Bingo')
    else:
        print('The response code was not 200, but: {}'.format(
            response.getcode()))
except urllib.error.HTTPError as e:
    print('''An error occurred: {}
The response code was {}'''.format(e, e.getcode()))
For Python 2.x
urllib, urllib2 or httplib can be used here. However, note that urllib and urllib2 use httplib under the hood, so if you plan to run this check a lot (thousands of times) it is better to use httplib directly. Additional documentation and examples are here.
Example code:
import httplib
try:
    h = httplib.HTTPConnection("www.google.com")
    h.connect()
except Exception as ex:
    print "Could not connect to page."
For Python 3.x
A similar story to urllib (or urllib2) and httplib from Python 2.x applies to the urllib.request and http.client libraries in Python 3.x. Again, http.client should be quicker. For more documentation and examples, look here.
Example code:
import http.client
try:
    conn = http.client.HTTPConnection("www.google.com")
    conn.connect()
except Exception as ex:
    print("Could not connect to page.")
and if you wanted to check the status codes you would need to replace
conn.connect()
with
conn.request("GET", "/index.html") # Could also use "HEAD" instead of "GET".
res = conn.getresponse()
if res.status == 200 or res.status == 302: # Specify codes here.
print("Page Found!")
Note, in both examples, if you would like to catch the specific exception relating to when the URL doesn't exist, rather than all of them, catch the socket.gaierror exception instead (see the socket documentation).
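For instance, a minimal sketch of that narrower catch (the hostname is deliberately bogus so name resolution fails):
import socket
import http.client

try:
    conn = http.client.HTTPConnection("no-such-host.invalid")
    conn.connect()
except socket.gaierror:
    print("Host could not be resolved; the URL probably doesn't exist.")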
You can use the requests module to check it:
import requests

url = "http://www.example.com/"
res = requests.get(url)
if res.status_code == 200:
    print("bingo")
You can also inspect the headers before downloading the whole content of the page by making a HEAD request.
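A sketch with requests.head (same example URL):
import requests

res = requests.head("http://www.example.com/")
print(res.status_code)
print(res.headers.get('Content-Type'))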
You can also use the urllib2 library (Python 2):
import urllib2

if urllib2.urlopen(url).code == 200:
    print "Bingo"

In-python MediaWiki authentication from cookies

What would be the easiest way to use MediaWiki cookies in some Python CGI scripts (on the same domain, of course) for authentication (especially including MW's OpenID)?
Access from Python to the MediaWiki database is possible, too.
A very easy way to use Cookies with mediawiki is as follows:
from cookielib import CookieJar
import urllib2
import urllib
import json
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
Now, requests can be made using opener. For example:
login_data = {
    'action': 'login',
    'lgname': 'Example',
    'lgpassword': 'Foobar',
    'format': 'json'
}
data = urllib.urlencode(login_data)
request = opener.open('http://en.wikipedia.org/w/api.php', data)
content = json.load(request)
login_data['token'] = content['login']['token']
data_2 = urllib.urlencode(login_data)
request_2 = opener.open('http://en.wikipedia.org/w/api.php', data_2)
content_2 = json.load(request_2)
print content_2['login']['result']
In the example above, if the CookieJar were not created, the login wouldn't fully work and would ask for another token. That said, it's recommended to use an existing MediaWiki wrapper such as pywikipedia, mwhair, pytybot, simplemediawiki or wikitools; there are hundreds of other MediaWiki wrappers in Python.
You could connect to and modify the SQL database without HTTP and cookies using the MySQLdb module, but this is usually the wrong way to do MediaWiki maintenance. Read-only access, though, should not be a problem.
The best way to access MediaWiki with a script is to use the api.php.
The best known Python based MediaWiki-API-bot is the Pywikibot (former Pywikipediabot).
The easiest way to save cookies in Python might be to use the http.cookiejar module.
Its documentation contains some simple examples.
I extracted functional example code out of my own MediaWiki-bot:
#!/usr/bin/python3
import http.cookiejar
import urllib.request
import urllib.parse
import json
s_login_name = 'example'
s_login_password = 'secret'
s_api_url = 'http://en.wikipedia.org/w/api.php'
s_user_agent = 'StackOverflowExample/0.0.1.2012.09.26.1'
def api_request(d_post_params):
    d_post_params['format'] = 'json'
    r_post_params = urllib.parse.urlencode(d_post_params).encode('utf-8')
    o_url_request = urllib.request.Request(s_api_url, r_post_params)
    o_url_request.add_header('User-Agent', s_user_agent)
    o_http_response = o_url_opener.open(o_url_request)
    s_reply = o_http_response.read().decode('utf-8')
    d_reply = json.loads(s_reply)
    return (o_http_response.code, d_reply)
o_cookie_jar = http.cookiejar.CookieJar()
o_http_cookie_processor = urllib.request.HTTPCookieProcessor(o_cookie_jar)
o_url_opener = urllib.request.build_opener(o_http_cookie_processor)
d_post_params = {'action': 'login', 'lgname': s_login_name}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
s_login_token = d_reply['login']['token']
d_post_params = {
    'action': 'login',
    'lgname': s_login_name,
    'lgpassword': s_login_password,
    'lgtoken': s_login_token
}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
Classes, error handling and sub-functions have been removed to increase readability.
The cookies saved in o_url_opener can also be used for requests to index.php.
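For instance, reusing the authenticated opener for an index.php request (the URL here is illustrative):
o_http_response = o_url_opener.open('http://en.wikipedia.org/w/index.php?title=Special:Watchlist')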
You could also log in via index.php (faking a browser request), but that would involve parsing HTML output.
Variable name legend:
# Unicode string
s_* = 'a'
# Bytes (raw string)
r_* = b'a'
# Dictionary
d_* = {'a':1}
# Integer number
i_* = 4711
# Other objects
o_* = SomeClass()
