I'm trying to authenticate against Auth0 from a Python script using the PKCE authentication flow, and I'm getting an error saying that Auth0 can't correctly parse one of my URI parameters.
<<CAPITAL LETTERS>> represent missing constants related to the authentication exchange being tested.
import requests
import urllib.parse

def process_auth_callback(authorization_code, callback_uri):
    payload = {
        'grant_type': 'authorization_code',
        'client_id': <<AUTH CLIENT ID>>,
        'code_verifier': <<CODE VERIFIER>>,
        'code': authorization_code,
        'redirect_uri': urllib.parse.quote(callback_uri)
    }
    r = requests.post('https://<<APP ID>>.us.auth0.com/oauth/token', data=payload)
    print(r.request.body)
    print(r.text)

process_auth_callback(<<AUTHORIZATION CODE>>, 'http://localhost:1234/login')
I get the error back from Auth0's API:
{"error":"unauthorized_client","error_description":"The redirect URI is wrong. You sent null//null, and we expected http://localhost:1234"}
However, the request body prints as the following: grant_type=authorization_code&client_id=<<AUTH CLIENT ID>>&code_verifier=<<CODE VERIFIER>>&code=<<AUTHORIZATION CODE>>&redirect_uri=http%253A%2F%2Flocalhost%253A1234%2Flogin
This appears to include the correct redirect URI, so I'm not sure why the API is reporting null//null. Is this an issue with how I'm using requests? Something else?
Ah, I found my own answer not long after.
The key is the %253A in the URI encoding of the outgoing request body: %25 is itself a percent-encoded % character, which means the URI has been encoded twice. Python's requests library already URL-encodes the form parameters, so my pre-encoded urllib.parse.quote(callback_uri) was being encoded a second time during the data preprocessing prior to send. Auth0's API is unable to parse the result and reports it as null//null.
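The fix is to drop the manual quoting and pass the raw URI, letting requests encode the form body exactly once. A minimal sketch of the corrected payload (same placeholders as above):

payload = {
    'grant_type': 'authorization_code',
    'client_id': <<AUTH CLIENT ID>>,
    'code_verifier': <<CODE VERIFIER>>,
    'code': authorization_code,
    # Pass the raw URI; requests form-encodes each value once on send.
    'redirect_uri': callback_uri
}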
I am working with a third-party API, and unfortunately the support is quite poor, without much detailed documentation. They just provided a Ruby example of how to sign a request before making the call to the API, like this:
# Config your keys
access_id = "YOUR_ACCESS_KEY"
secret_key = "YOUR_SECRET_KEY"
api_endpoint = "https://api.example.com/endpoint"
# Make a request
request = Curl::Easy.new(api_endpoint)
## always set header Content-Type with application/json
request.headers["Content-Type"] = "application/json"
# Sign the request with the keys
# https://www.rubydoc.info/gems/api-auth/2.4.1/ApiAuth.sign!
signed = ApiAuth.sign!(request, access_id, secret_key, :override_http_method => "GET")
# signed
signed.perform
signed.body_str
I've tried to experiment with requests, but without success.
Is there a Python library for this, or how can I port this chunk of Ruby code to Python?
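For reference, here is a rough sketch of what I understand ApiAuth.sign! to be doing, ported to Python with requests. The canonical string format (method, Content-Type, Content-MD5, request URI, timestamp, signed with HMAC-SHA1) is an assumption based on the gem's documentation and would need to be verified against the API:

import base64
import hashlib
import hmac
from email.utils import formatdate
from urllib.parse import urlsplit

import requests

def signed_get(url, access_id, secret_key, body=b'', content_type='application/json'):
    # Assumption: ApiAuth 2.x signs the canonical string
    # "METHOD,content-type,content-MD5,request-uri,timestamp" with HMAC-SHA1.
    timestamp = formatdate(usegmt=True)  # RFC 1123 date, also sent in the Date header
    content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode()
    parts = urlsplit(url)
    request_uri = parts.path + ('?' + parts.query if parts.query else '')
    canonical = ','.join(['GET', content_type, content_md5, request_uri, timestamp])
    digest = hmac.new(secret_key.encode(), canonical.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    headers = {
        'Content-Type': content_type,
        'Content-MD5': content_md5,
        'Date': timestamp,
        'Authorization': 'APIAuth {0}:{1}'.format(access_id, signature),
    }
    return requests.get(url, headers=headers)

# Hypothetical usage with the placeholder keys from the Ruby example:
# r = signed_get('https://api.example.com/endpoint', 'YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY')
# print(r.text)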
Thank you very much
I am trying to connect to the API as explained at http://api.instatfootball.com/. It is supposed to be something like the following: get /[lang]/data/[action].[format]?login=[login]&pass=[pass]. I know the [lang], [action] and [format] I need to use, and I also have a login and password, but I don't know how to access the information inside the API.
If I write the following code:
import requests
r = requests.get('http://api.instatfootball.com/en/data/stat_params_players.json', auth=('login', 'pass'))
r.text
with the actual login and pass, I get the following output:
{"status":"error"}
This API expects the credentials as plain query parameters over an insecure HTTP connection, so be aware that this is highly lacking on the API's part. Note that auth=('login', 'pass') sends an HTTP Basic Authorization header, which is not what this API looks at; the credentials belong in the query string instead.
import requests

username = 'login'
password = 'password'
base_url = 'http://api.instatfootball.com/en/data/{endpoint}.json'

r = requests.get(base_url.format(endpoint='stat_params_players'),
                 params={'login': username, 'pass': password})
data = r.json()

print(r.status_code)
print(r.text)
You will need to make an HTTP request to the URL. This returns the requested data in the response body. Depending on the [format] parameter, you will need to decode the data from XML or JSON into a native Python object.
As rdas already commented, you can use the requests library for Python (https://requests.readthedocs.io/en/master/). You will also find some code samples there. It also does proper decoding of JSON data.
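For example, a short sketch of decoding both formats (the .xml variant is an assumption based on the [format] placeholder in the URL scheme):

import xml.etree.ElementTree as ET

import requests

username = 'login'
password = 'password'

# .json endpoints decode straight to native Python objects:
r = requests.get('http://api.instatfootball.com/en/data/stat_params_players.json',
                 params={'login': username, 'pass': password})
data = r.json()

# A hypothetical .xml variant of the same endpoint, decoded with the standard library:
r = requests.get('http://api.instatfootball.com/en/data/stat_params_players.xml',
                 params={'login': username, 'pass': password})
root = ET.fromstring(r.text)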
If you want to play around with the API a bit, you can use a tool like Postman for testing and debugging your requests. (https://www.postman.com/)
I'm having trouble converting curl code to python in order to access a token to an API.
The given code is:
curl -k -d "grant_type=client_credentials&scope=PRODUCTION" -H "Authorization :Basic <long base64 value>, Content-Type: application/x-www-form-urlencoded" https://api-km.it.umich.edu/token
I know that -H indicates a header, but I'm not sure what to do with -d. So far I have:
authorizationcode = 'username:password'
authorizationcode = base64.standard_b64encode(authorizationcode)
header = {'Authorization ': 'Basic ' + authorizationcode,
          'Content-Type': 'application/x-www-form-' + authorizationcode}
r = requests.post('https://api-km.it.umich.edu/token',
                  data = 'grant_type=client_credentials&scope=PRODUCTION',
                  headers = header)
Also, these are the instructions:
Obtain your consumer key and consumer secret from the API Directory. These are generated on the Subscriptions page after an application is successfully subscribed to an API.
Combine the consumer key and consumer secret in the format consumer-key:consumer-secret. Encode the combined string using base64; most programming languages have a method to base64-encode a string (a Python sketch follows this list). Visit the base64encode site for more information.
Execute a POST call to the token API to get an access token.
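For reference, a minimal sketch of step 2 in Python, with placeholder credentials:

import base64

# Hypothetical credentials; combine as consumer-key:consumer-secret and base64-encode.
combined = 'my-consumer-key:my-consumer-secret'
encoded = base64.standard_b64encode(combined.encode()).decode()
print('Authorization: Basic ' + encoded)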
Our data is correct; however, we are getting a 415 error from the server.
Assistance would be greatly appreciated.
A 415 error is described at http://www.checkupdown.com/status/E415.html as "Unsupported media type".
As #krock mentioned, the Content-Type is not specified as application/x-www-form-urlencoded; rather, it is being set to application/x-www-form- plus your auth code.
You are setting an incorrect Content-Type header:
'Content-Type': 'application/x-www-form-' + authorizationcode
That should be 'application/x-www-form-urlencoded'. You do not, however, have to set it at all, as requests does this for you automatically when you pass a dictionary to the data argument.
requests will also handle the Authorization header for you; pass in the username and password to the auth argument as a tuple:
import requests

auth = ('username', 'password')
params = {'grant_type': 'client_credentials', 'scope': 'PRODUCTION'}
r = requests.post('https://api-km.it.umich.edu/token', data=params, auth=auth)
where username and password are the parts before and after the colon (here, the consumer key and consumer secret). requests will produce the correct base64-encoded Basic header for you from those two strings.
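If you want to verify what was actually sent, you can inspect the prepared request on the response object; continuing from the snippet above:

print(r.status_code)
print(r.request.headers['Authorization'])  # 'Basic <base64 of username:password>', built by requests
print(r.request.headers['Content-Type'])   # 'application/x-www-form-urlencoded', set automatically for dict data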
This question has been asked here before. The accepted answer was probably obvious to both questioner and answerer, but not to me. I commented on the above question to get more details, but there was no response. I also approached the meta Q&A for help on how to bring questions back from their grave, and got no answer either.
The answer to the here above question was:
From the client's perspective, an OpenID login is very similar to any other web-based login. There isn't a defined protocol for the client; it is an ordinary web session that varies based on your OpenID provider. For this reason, I doubt that any such libraries exist. You will probably have to code it yourself.
I already know how to log onto a website with Python, using the urllib2 module. But that's not enough for me to guess how to authenticate to an OpenID provider.
I'm actually trying to get my StackOverflow inbox in json format, for which I need to be logged in.
Could someone provide a short intro or a link to a nice tutorial on how to do that?
Well I myself don't know much about OpenID but your post (and the bounty!!) got me interested.
This link describes the exact flow of the OpenID authentication sequence (at least for v1.0; the new version is 2.0). From what I could make out, the steps would be something like this:
You fetch the login page of stackoverflow, which also provides an option to log in using OpenID (as a form field).
You send your OpenID, which is actually a form of URI and NOT a username/email (if it is a Google profile, it is your profile ID).
Stackoverflow will then connect to your ID provider (in this case Google) and send you a redirect to the Google login page, plus another link to where you should redirect later (let's say a).
You can log in to the Google-provided page conventionally (using the POST method from Python).
Google provides a cryptographic token (not quite sure about this step) in return for your login request.
You send the new request to a with this token.
Stackoverflow will contact Google with this token. If authenticity is established, it will return a session ID.
Later requests to StackOverflow should carry this session ID.
No idea about logging out!!
This link tells about various responses in OpenID and what they mean, so maybe it will come in handy when you code your client.
Links from the wiki page OpenID Explained
EDIT: Using the Tamper Data add-on for Firefox, the following sequence of events can be constructed.
The user sends a request to the SO login page. On entering the OpenID in the form field, the resulting page sends a 302 redirecting to a Google page.
The redirect URL has a lot of OpenID parameters (which are for the Google server). One of them is return_to=https://stackoverflow.com/users/authenticate/?s=some_value.
The user is presented with the Google login page. On login there are a few 302s which redirect the user around in Google's realm.
Finally a 302 is received which redirects the user to the stackoverflow page specified in 'return_to' earlier.
During this entire series of operations a lot of cookies are generated, which must be stored correctly.
On accessing the SO page (which was 302'd to by Google), the SO server processes your request and in the response header sends a "Set-Cookie" field to set cookies named gauth and usr, along with another 302 to stackoverflow.com. This step completes your login.
Your client simply stores the cookie usr.
You are logged in as long as you remember to send the usr cookie with any request to SO.
You can now request your inbox; just remember to send the usr cookie with the request.
I suggest you start coding your Python client and study the responses carefully. In most cases it will be a series of 302s with minimal user intervention (except for filling in your Google username and password and approving the site page).
However, to make it easier, you could just log in to SO using your browser, copy all the cookie values and make a request using urllib2 with the cookie values set; a minimal sketch of this follows.
Of course, if you log out in the browser, you will have to log in again and change the cookie value in your Python program.
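A minimal sketch of that manual approach, assuming you have copied the value of the usr cookie out of your browser:

import urllib2

usr_value = 'PASTE-THE-usr-COOKIE-VALUE-HERE'  # hypothetical; copied from the browser
request = urllib2.Request('https://stackoverflow.com/inbox/genuwine')
request.add_header('Cookie', 'usr=' + usr_value)
response = urllib2.urlopen(request)
print response.read()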
I know this is close to archeology, digging up a post that's two years old, but I just wrote a new, enhanced version of the code from the accepted answer, so I thought it might be cool to share it here, as this question and its answers were a great help for me in implementing it.
So, here's what's different:
it uses the new requests library, which is an enhancement over urllib2;
it supports authenticating with both Google's and Stack Exchange's OpenID providers;
it is way shorter and simpler to read, though it has fewer printouts.
here's the code:
#!/usr/bin/env python

import sys
import urllib

import requests
from BeautifulSoup import BeautifulSoup
from getpass import getpass


def get_google_auth_session(username, password):
    session = requests.Session()

    google_accounts_url = 'http://accounts.google.com'
    authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
    stack_overflow_url = 'http://stackoverflow.com/users/authenticate'

    # Fetch the Google accounts page to pick up the tokens needed for login
    r = session.get(google_accounts_url)
    dsh = BeautifulSoup(r.text).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = r.headers['X-Auto-Login']
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = r.cookies['GALX']

    payload = {'continue' : follow_up,
               'followup' : follow_up,
               'dsh' : dsh,
               'GALX' : galx,
               'pstMsg' : 1,
               'dnConn' : 'https://accounts.youtube.com',
               'checkConnection' : '',
               'checkedDomains' : '',
               'timeStmp' : '',
               'secTok' : '',
               'Email' : username,
               'Passwd' : password,
               'signIn' : 'Sign in',
               'PersistentCookie' : 'yes',
               'rmShown' : 1}

    r = session.post(authentication_url, data=payload)

    if r.url != authentication_url:  # XXX a redirect away from the login URL means success
        print "Logged in"
    else:
        print "login failed"
        sys.exit(1)

    payload = {'oauth_version' : '',
               'oauth_server' : '',
               'openid_username' : '',
               'openid_identifier' : ''}
    r = session.post(stack_overflow_url, data=payload)

    return session


def get_so_auth_session(email, password):
    session = requests.Session()
    r = session.get('http://stackoverflow.com/users/login')
    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']

    payload = {'openid_identifier': 'https://openid.stackexchange.com',
               'openid_username': '',
               'oauth_version': '',
               'oauth_server': '',
               'fkey': fkey,
               }
    r = session.post('http://stackoverflow.com/users/authenticate', allow_redirects=True, data=payload)

    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']
    session_name = BeautifulSoup(r.text).findAll(attrs={'name' : 'session'})[0]['value']

    payload = {'email': email,
               'password': password,
               'fkey': fkey,
               'session': session_name}

    r = session.post('https://openid.stackexchange.com/account/login/submit', data=payload)

    # check the page for error messages
    error = BeautifulSoup(r.text).findAll(attrs={'class' : 'error'})
    if len(error) != 0:
        print "ERROR:", error[0].text
        sys.exit(1)

    return session


if __name__ == "__main__":
    prov = raw_input('Choose your openid provider [1 for StackOverflow, 2 for Google]: ')
    name = raw_input('Enter your OpenID address: ')
    pswd = getpass('Enter your password: ')

    if '1' in prov:
        so = get_so_auth_session(name, pswd)
    elif '2' in prov:
        so = get_google_auth_session(name, pswd)
    else:
        print "Error no openid provider given"
        sys.exit(1)

    r = so.get('http://stackoverflow.com/inbox/genuwine')
    print r.json()
The code is also available as a GitHub gist.
HTH
This answer sums up what others have said below, especially RedBaron, and adds the method I used to get to the StackOverflow inbox using Google Accounts.
Using the Tamper Data developer tool for Firefox and logging on to StackOverflow, one can see that OpenID works this way:
StackOverflow requests authentication from a given service (here Google), defined in the posted data;
Google Accounts takes over and checks for an already existing cookie as proof of authentication;
If no cookie is found, Google requests authentication and sets a cookie;
Once the cookie is set, StackOverflow acknowledges authentication of the user.
The above sums up the process, which in reality is more complicated, since many redirects and cookie exchanges occur.
Because reproducing the same process programmatically proved somewhat difficult (and that might just be my inexperience), especially when trying to hunt down the URLs to call with all the locale specifics and so on, I opted for logging on to Google Accounts first, getting a well-deserved cookie, and then logging in to Stackoverflow, which would use the cookie for authentication.
This is done simply using the following Python modules: urllib, urllib2, cookielib and BeautifulSoup.
Here is the (simplified) code; it's not perfect, but it does the trick. The extended version can be found on Github.
#!/usr/bin/env python

import urllib
import urllib2
import cookielib
from BeautifulSoup import BeautifulSoup
from getpass import getpass

# Define URLs
google_accounts_url = 'http://accounts.google.com'
authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
stack_overflow_url = 'https://stackoverflow.com/users/authenticate'
genuwine_url = 'https://stackoverflow.com/inbox/genuwine'

# Build opener with a cookie jar so cookies survive across requests
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))


def request_url(request):
    '''
    Requests given URL.
    '''
    try:
        response = opener.open(request)
    except:
        raise
    return response


def authenticate(username='', password=''):
    '''
    Authenticates to Google Accounts using user-provided username and password,
    then authenticates to StackOverflow.
    '''
    # Build up headers
    user_agent = 'Mozilla/5.0 (Ubuntu; X11; Linux i686; rv:8.0) Gecko/20100101 Firefox/8.0'
    headers = {'User-Agent' : user_agent}

    # Set Data to None
    data = None

    # Build up URL request with headers and data
    request = urllib2.Request(google_accounts_url, data, headers)
    response = request_url(request)

    # Build up POST data for authentication
    html = response.read()
    dsh = BeautifulSoup(html).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = response.headers.getheader('X-Auto-Login')
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = jar._cookies['accounts.google.com']['/']['GALX'].value

    values = {'continue' : follow_up,
              'followup' : follow_up,
              'dsh' : dsh,
              'GALX' : galx,
              'pstMsg' : 1,
              'dnConn' : 'https://accounts.youtube.com',
              'checkConnection' : '',
              'checkedDomains' : '',
              'timeStmp' : '',
              'secTok' : '',
              'Email' : username,
              'Passwd' : password,
              'signIn' : 'Sign in',
              'PersistentCookie' : 'yes',
              'rmShown' : 1}

    data = urllib.urlencode(values)

    # Build up URL for authentication
    request = urllib2.Request(authentication_url, data, headers)
    response = request_url(request)

    # Check if logged in (a redirect away from the login URL means success)
    if response.url != request._Request__original:
        print '\n Logged in :)\n'
    else:
        print '\n Log in failed :(\n'

    # Build OpenID Data
    values = {'oauth_version' : '',
              'oauth_server' : '',
              'openid_username' : '',
              'openid_identifier' : 'https://www.google.com/accounts/o8/id'}
    data = urllib.urlencode(values)

    # Build up URL for OpenID authentication
    request = urllib2.Request(stack_overflow_url, data, headers)
    response = request_url(request)

    # Retrieve Genuwine
    data = None
    request = urllib2.Request(genuwine_url, data, headers)
    response = request_url(request)
    print response.read()


if __name__ == '__main__':
    username = raw_input('Enter your Gmail address: ')
    password = getpass('Enter your password: ')
    authenticate(username, password)
You need to handle cookies on any "login" page; in Python you use a cookie jar from cookielib. For example:
import cookielib
import urllib2

jar = cookielib.CookieJar()
myopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
# myopener now supports cookies.
....
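A hypothetical usage example: cookies set by one response are stored in the jar and replayed on later requests made through the same opener:

# The first request stores the login page's cookies in the jar ...
response = myopener.open('http://stackoverflow.com/users/login')
# ... and later requests through the same opener send them back automatically.
response = myopener.open('http://stackoverflow.com/')
print response.read()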
I made a simple script that logs in to stackoverflow.com using Mozilla Firefox cookies. It's not entirely automated, because you need to log in manually, but it's all I managed to do.
The script works with the latest versions of Firefox (I'm using 8.0.1), but you need to get the latest SQLite DLL, because the default one that comes with Python 2.7 can't open the DB. You can get it here: http://www.sqlite.org/sqlite-dll-win32-x86-3070900.zip
import urllib2
import webbrowser
import cookielib
import os
import sqlite3
import re
from time import sleep
#login in Firefox. Must be default browser. In other cases log in manually
webbrowser.open_new('http://stackoverflow.com/users/login')
#wait for user to log in
sleep(60)
#Process profiles.ini to get path to cookies.sqlite
profile = open(os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','profiles.ini'), 'r').read()
COOKIE_DB = os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','Profiles',re.findall('Profiles/(.*)\n',profile)[0],'cookies.sqlite')
CONTENTS = "host, path, isSecure, expiry, name, value"
# extract cookies for a specific host
def get_cookies(host):
    cj = cookielib.LWPCookieJar()
    con = sqlite3.connect(COOKIE_DB)
    cur = con.cursor()
    sql = "SELECT {c} FROM moz_cookies WHERE host LIKE '%{h}%'".format(c=CONTENTS, h=host)
    cur.execute(sql)
    for item in cur.fetchall():
        # Rebuild each DB row as a cookielib.Cookie (version, name, value, port, ...)
        c = cookielib.Cookie(0, item[4], item[5],
                             None, False,
                             item[0], item[0].startswith('.'), item[0].startswith('.'),
                             item[1], False,
                             item[2],
                             item[3], item[3]=="",
                             None, None, {})
        cj.set_cookie(c)
    return cj
host = 'stackoverflow'
cj = get_cookies(host)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
response = opener.open('http://stackoverflow.com').read()
# if username in response - Auth successful
if 'Stanislav Golovanov' in response:
print 'Auth successful'