I am using python-social-auth with a Django app. During email validation, the link received in the mail works fine in the same browser from which authentication was initiated, but it shows "Missing needed parameter state" in a different browser. Has anybody fixed this issue?
The issue is discussed here: Issue #577
This is because there's no partial pipeline data in the other browser!
Christopher Keefer has worked on a monkey-patch for this that fetches the session_key from the Django session table. There's also a blog article on this; refer to Step 3 of that article.
# Monkey patching - an occasionally necessary evil.
from social import utils
from social.exceptions import InvalidEmail
from django.core import signing
from django.core.signing import BadSignature
from django.contrib.sessions.models import Session
from django.conf import settings
def partial_pipeline_data(backend, user=None, *args, **kwargs):
    """
    Monkey-patch utils.partial_pipeline_data to enable us to retrieve session data by a signature key in the request.

    This is necessary to allow users to follow a link in an email to validate their account from a different
    browser than the one they were using to sign up for the account, or after they've closed/re-opened said
    browser and potentially flushed their cookies. By adding the session key to a signed, base64-encoded signature
    on the email request, we can retrieve the necessary details from our Django session table.

    We fetch only the details needed to complete the pipeline authorization process from the session, to prevent
    nefarious use.
    """
    data = backend.strategy.request_data()
    if 'signature' in data:
        try:
            signed_details = signing.loads(data['signature'], key=settings.EMAIL_SECRET_KEY)
            session = Session.objects.get(pk=signed_details['session_key'])
        except (BadSignature, Session.DoesNotExist):
            raise InvalidEmail(backend)

        session_details = session.get_decoded()
        backend.strategy.session_set('email_validation_address', session_details['email_validation_address'])
        backend.strategy.session_set('next', session_details.get('next'))
        backend.strategy.session_set('partial_pipeline', session_details['partial_pipeline'])
        backend.strategy.session_set(backend.name + '_state', session_details.get(backend.name + '_state'))
        backend.strategy.session_set(backend.name + 'unauthorized_token_name',
                                     session_details.get(backend.name + 'unauthorized_token_name'))

    partial = backend.strategy.session_get('partial_pipeline', None)
    if partial:
        idx, backend_name, xargs, xkwargs = \
            backend.strategy.partial_from_session(partial)
        if backend_name == backend.name:
            kwargs.setdefault('pipeline_index', idx)
            if user:  # don't update user if it's None
                kwargs.setdefault('user', user)
            kwargs.setdefault('request', backend.strategy.request_data())
            xkwargs.update(kwargs)
            return xargs, xkwargs
        else:
            backend.strategy.clean_partial_pipeline()


utils.partial_pipeline_data = partial_pipeline_data
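For completeness, the sending side has to embed that signature in the validation link. Below is a minimal sketch of how the email-sending step could build such a link, assuming the same EMAIL_SECRET_KEY setting; the helper name and URL layout are illustrative only, not part of the original patch:
from django.core import signing
from django.conf import settings


def build_validation_url(strategy, code):
    # Hypothetical helper: wrap the current Django session key in a signed
    # payload so the monkey-patched partial_pipeline_data above can restore
    # the session from any browser.
    signature = signing.dumps({'session_key': strategy.session.session_key},
                              key=settings.EMAIL_SECRET_KEY)
    return '{0}?verification_code={1}&signature={2}'.format(
        strategy.build_absolute_uri('/complete/email/'), code, signature)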
This fixes the problem to a large extent, but it's still not perfect. It will fail if the session_key gets deleted or changed in the database. Django updates the session_key each time the session data changes, so if another user logs in from the same browser, the session_key changes and the original user can't verify with the email link.
Omab has mentioned in a comment on the issue:
I see the problem now, and even if I think that this could be solved with a re-write of the email validation pipeline, this affects all the pipeline functions that use the partial mechanism, so, I'm already working on a restructure of the pipeline serialization functionality that will improve this behavior. Basically the pipeline data will be dumped to a DB table and a hash code will be used to identify the processes which can be stopped and continue later, removing the dependency of the session.
Looking for an update on this.
I currently have a monolithic Python script which performs an OAuth authentication, returning an OAuth1Session, and then proceeds to perform some business logic using that OAuth1Session to gain authorization to a third-party service.
I need to split this up into two separate scripts, one which performs the OAuth authentication and will run on one machine, and the other which will run on a remote machine to perform the business logic authorized against the third-party service.
How can I serialize the OAuth1Session object so that the authenticated tokens can be handed off seamlessly from the authentication script on machine A to the processing script on machine B?
I tried the obvious:
print(json.dumps(session))
But I got this error:
TypeError: Object of type OAuth1Session is not JSON serializable
Is there a canonical solution for this simple requirement?
UPDATE
Here's the entire source code. Please note this is not my code; I downloaded it from the author, and now I'm trying to modify it to work a bit differently.
"""This Python script provides examples on using the E*TRADE API endpoints"""
from __future__ import print_function
import webbrowser
import json
import logging
import configparser
import sys
import requests
from rauth import OAuth1Service
def oauth():
    """Allows user authorization for the sample application with OAuth 1"""
    etrade = OAuth1Service(
        name="etrade",
        consumer_key=config["DEFAULT"]["CONSUMER_KEY"],
        consumer_secret=config["DEFAULT"]["CONSUMER_SECRET"],
        request_token_url="https://api.etrade.com/oauth/request_token",
        access_token_url="https://api.etrade.com/oauth/access_token",
        authorize_url="https://us.etrade.com/e/t/etws/authorize?key={}&token={}",
        base_url="https://api.etrade.com")
    base_url = config["DEFAULT"]["PROD_BASE_URL"]

    # Step 1: Get OAuth 1 request token and secret
    request_token, request_token_secret = etrade.get_request_token(
        params={"oauth_callback": "oob", "format": "json"})

    # Step 2: Go through the authentication flow. Log in to E*TRADE.
    # After you log in, the page will provide a text code to enter.
    authorize_url = etrade.authorize_url.format(etrade.consumer_key, request_token)
    webbrowser.open(authorize_url)
    text_code = input("Please accept agreement and enter text code from browser: ")

    # Step 3: Exchange the authorized request token for an authenticated OAuth 1 session
    session = etrade.get_auth_session(request_token,
                                      request_token_secret,
                                      params={"oauth_verifier": text_code})

    return session, base_url
# loading configuration file
config = configparser.ConfigParser()
config.read(sys.argv[1])
(session, base_url) = oauth()
print(base_url)
print(json.dumps(session))
#original code
#market = Market(session, base_url)
#quotes = market.quotes(sys.argv[2])
Please note the last two commented-out lines. That is the original code: immediately after the OAuth flow is performed, the code invokes some business functionality. I want to break this up into two separate scripts running as isolated processes: script 1 performs the OAuth flow and persists the session, and script 2 reads the session from a file and performs the business functionality.
Unfortunately it fails at the last line, print(json.dumps(session)).
"XY Problem" Alert
My goal is to split up the script into two so that the business logic can run in a separate machine from the authentication code. I believe that the way to do this is to serialize the session object and then parse it back in the second script. Printing out the session using json.dumps() is an intermediate step, 'Y', in my journey to solving problem 'X'. If you can think of a better way to achieve the goal, that could be a valid answer.
From the comments available in the source code here: https://github.com/litl/rauth/blob/a6d887d7737cf21ec896a8104f25c2754c694011/rauth/session.py
You only need to serialize some attributes of your object to reinstantiate it:
Line 103
def __init__(self,
             consumer_key,
             consumer_secret,
             access_token=None,
             access_token_secret=None,
             signature=None,
             service=None):
    ...
Thus I would suggest serializing the following dict on the first machine:
info_to_serialize = {
    'consumer_key': session.consumer_key,
    'consumer_secret': session.consumer_secret,
    'access_token': session.access_token,
    'access_token_secret': session.access_token_secret
}
serialized_data = json.dumps(info_to_serialize)
And on the second machine, reinstantiate your session like this:
from rauth.session import OAuth1Session
info_deserialized = json.loads(serialized_data)
session = OAuth1Session(**info_deserialized)
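Since the question splits this across two machines, the serialized dict needs to be persisted somewhere the second script can read it. Here is a small sketch using a plain JSON file; the tokens.json name and the way you transfer the file between machines are placeholders:
import json
from rauth.session import OAuth1Session

# Script 1 (machine A): persist the credentials after authenticating
with open('tokens.json', 'w') as fp:
    json.dump(info_to_serialize, fp)

# Script 2 (machine B): load them back and rebuild the session
with open('tokens.json') as fp:
    session = OAuth1Session(**json.load(fp))
Keep in mind that the file holds the consumer secret and access tokens, so it should be transferred and stored securely.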
Hope this helped
I am a little stuck writing a custom authenticator for JupyterHub, most probably because I do not understand the inner workings of the available REMOTE_USER authenticator. I am not sure if it is applicable in my case... anyhow, this is what I'd like to do:
My general idea: I have a server that authenticates a user with his or her institutional login. After logging into the institution's server/website, the user's data are encoded -- only some details to identify the user. They are then redirected to the JupyterHub domain in the following way:
https://<mydomain>/hub/login?data=<here go the encrypted data>
Now, if a request like this gets sent to my JupyterHub domain, I'd like to decrypt the submitted data and authenticate the user.
My trial:
I tried it with the following code. But it seems I am too nooby... :D
So please, pedantic comments are welcome :D
from tornado import gen
from jupyterhub.auth import Authenticator
class MyAuthenticator(Authenticator):
    login_service = "my service"
    authenticator_login_url = "authentication url"

    @gen.coroutine
    def authenticate(self, handler, data=None):
        # some verifications go here
        # if the data is verified, the username is returned
        pass
My first problem: clicking the button on the login page doesn't redirect me to my authentication URL... it seems the authenticator_login_url variable from the login template is set somewhere else.
Second problem: a request made to .../hub/login?data=... is not evaluated by the authenticator (it seems...).
So: does anybody have any hints for me on how to go about this?
As you see I followed the tutorials here:
https://universe-docs.readthedocs.io/en/latest/authenticators.html
The following code does the job; however, I am always open to improvements.
What I did was redirect an empty login attempt to the login URL and deny access. If data is presented, check its validity; if it is verified, the user can log in.
from tornado import gen, web
from jupyterhub.handlers import BaseHandler
from jupyterhub.auth import Authenticator
class MyAuthenticator(Authenticator):
    login_service = "My Service"

    @gen.coroutine
    def authenticate(self, handler, data=None):
        rawd = None

        # If we receive no data we redirect to the login page
        while rawd is None:
            try:
                rawd = handler.get_argument("data")
            except web.MissingArgumentError:
                handler.redirect("<The login URL>")
                return None

        # Do some verification and get the data here.
        # Get the data from the parameters sent to your hub from the login
        # page, say username, access_token and email. Wrap everything neatly
        # in a dictionary and return it.
        userdict = {"name": username}
        userdict["auth_state"] = auth_state = {}
        auth_state['access_token'] = verify
        auth_state['email'] = email

        # return the dictionary
        return userdict
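Regarding the first problem in the question (the login button not pointing at the external authentication URL): JupyterHub authenticators can override login_url() to control where the hub sends users to log in. A hedged sketch; the external address is a placeholder:
class MyAuthenticator(Authenticator):
    # ... authenticate() as above ...

    def login_url(self, base_url):
        # Point the hub's login flow at the external institution login
        # instead of the default /hub/login form (URL is a placeholder).
        return "https://institution.example.org/login"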
Simply add the file to the Python path so that JupyterHub is able to find it, and make the necessary configuration in your jupyterhub_config.py file.
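For instance, assuming the class above lives in a module called myauthenticator.py on the Python path (the module name is just an example), the relevant line in jupyterhub_config.py would look like this:
# jupyterhub_config.py -- point the hub at the custom authenticator class
c.JupyterHub.authenticator_class = 'myauthenticator.MyAuthenticator'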
I have a RESTful API written in pyramid/cornice. It provides an API for an Ember client.
I have followed the cornice tutorial and have a valid_token validator which I use on many views as methods of resource classes.
def valid_token(request):
    header = 'Authorization'
    token = request.headers.get(header)
    if token is None:
        request.errors.add('headers', header, "Missing token")
        request.errors.status = 401
        return

    session = DBSession.query(Session).get(token)
    if not session:
        request.errors.add('headers', header, "invalid token")
        request.errors.status = 401

    request.validated['session'] = session
Now I want to start selectively protecting resources. The Pyramid way seems to be to register authentication/authorization policies. The ACLAuthorizationPolicy seems to provide access to the nice ACL tooling in pyramid. However, it seems that pyramid needs both authentication and authorization policies to function. Since I'm authenticating with my validator this is confusing me.
Can I use ACL to control authorization whilst authenticating using my cornice valid_token validator? Do I need to register pyramid authentication or authorization policies?
I'm a bit confused, having little experience of using ACL in pyramid.
It is not an easy question :)
In short:
What you implemented in your validator is already taken care of by Pyramid with an authentication policy.
Start by setting up a SessionAuthenticationPolicy with your custom callback (see the code below).
Once this authn policy is set up, you will get those 401 responses, and your session value will be available in the request.authenticated_userid attribute. You can also store custom stuff in the request.registry object.
The only reason to keep your validator is if you want to return the invalid-token messages in the 401 response. But for that, you can define a custom 401 Pyramid view (using @forbidden_view_config); see the sketch after this list.
Once you have that, you can set up custom authorization for your views. You can find a very simple example in Cliquet's first versions here: authz code and view perm.
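Here is a minimal sketch of such a custom 401 view, assuming the module gets picked up by config.scan(); the response details are illustrative:
from pyramid.httpexceptions import HTTPForbidden, HTTPUnauthorized
from pyramid.view import forbidden_view_config


@forbidden_view_config()
def forbidden_view(request):
    # Unauthenticated requests get a 401 (e.g. missing/invalid token);
    # authenticated but unauthorized requests keep the 403.
    if request.authenticated_userid is None:
        response = HTTPUnauthorized()
        response.headers['WWW-Authenticate'] = 'Token'
        return response
    return HTTPForbidden()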
Good luck!
You may wanna do something like:
from pyramid.config import Configurator
from pyramid.authentication import SessionAuthenticationPolicy
from pyramid.authorization import ACLAuthorizationPolicy
from your_module import valid_token

authn_policy = SessionAuthenticationPolicy(debug=True, callback=valid_token)
authz_policy = ACLAuthorizationPolicy()

config = Configurator(authentication_policy=authn_policy,
                      authorization_policy=authz_policy)
And of course, the Configurator will receive other arguments like settings, locale_negotiator, ...
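One caveat: Pyramid calls the authentication policy's callback with (userid, request) and expects a list of principals back (or None if the userid is no longer valid), so the cornice validator above may need a thin adapter. A hypothetical sketch reusing the question's DBSession and Session model:
# DBSession and Session come from the question's own code.
def session_principals(userid, request):
    # Adapter with the (userid, request) signature Pyramid expects.
    session = DBSession.query(Session).get(userid)
    if session is None:
        return None
    return ['group:authenticated']

authn_policy = SessionAuthenticationPolicy(debug=True, callback=session_principals)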
Hope this will help
Authenticating requests, especially with Google's API's is so incredibly confusing!
I'd like to make authorized HTTP POST requests through python in order to query data from the datastore. I've got a service account and p12 file all ready to go. I've looked at the examples, but it seems no matter which one I try, I'm always unauthorized to make requests.
Everything works fine from the browser, so I know my permissions are all in order. So I suppose my question is, how do I authenticate, and request data securely from the Datastore API through python?
I am so lost...
You probably should not be using raw POST requests to use Datastore; instead, use the gcloud library to do the heavy lifting for you.
I would also recommend the Python getting started page, as it has some good tutorials.
Finally, I recorded a podcast where I go over the basics of using Datastore with Python, check it out!
Here is the code, and here is an example:
# Import libraries
from gcloud import datastore
import os

# The next few lines will set up your environment variables
# Replace "YOUR_PROJECT_ID_HERE" with the correct value in code.py
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key.json"
projectID = "YOUR_PROJECT_ID_HERE"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID
datastore.set_default_dataset_id(projectID)

# Let us build a message board / news website

# First, create a fake email for our fake user
email = "me@fake.com"

# Now, create a 'key' for that user using the email
user_key = datastore.Key('User', email)

# Now create an entity using that key
new_user = datastore.Entity(key=user_key)

# Add some fields to the entity
new_user["name"] = unicode("Iam Fake")
new_user["email"] = unicode(email)

# Push the entity to the Cloud Datastore
datastore.put(new_user)

# Get the user from the datastore and print it
print(datastore.get(user_key))
This code is licensed under Apache v2
I'm just now getting into authentication in my app, and all of the pyramid examples that I can find explain the straightforward parts very well, but handwave over the parts that don't make any sense to me.
Most of the examples look something like this:
login = request.params['login']
password = request.params['password']
if USERS.get(login) == password:
    headers = remember(request, login)
    return HTTPFound(location=came_from,
                     headers=headers)
And from the app's __init__.py:
session_factory = UnencryptedCookieSessionFactoryConfig(
    settings['session.secret']
)
authn_policy = SessionAuthenticationPolicy()
authz_policy = ACLAuthorizationPolicy()
Trying to track down the point at which the login actually happens, I'm assuming it's this one:
headers = remember(request, login)
It appears to me that what is going on is we're storing the username in the session cookie.
If I put this line in my app, the current user is magically logged in, but why?
How does pyramid know that I'm passing a username? It looks like I'm just passing the value of login. Further, this variable is named differently in different examples.
Even if it does know that it's a username, how does it connect it with the user ID? If I run authenticated_userid(request) afterwards, it works, but how has the system connected the username with the userid? I don't see any queries as part of the remember() documentation.
Pyramid's security system revolves around principals; your login value is that principal. It is up to your code to provide remember() with a valid principal name; if the login name filled in on the form is used as your principal, then that's great. If users log in with an email address but you use a database primary key as the principal string, then you'd have to do that mapping yourself.
What exactly remember() does depends on your authentication policy; it is up to the policy to 'know' from request to request what principal you asked it to remember.
If you are using the AuthTktAuthenticationPolicy policy, then the principal value is stored in a cryptographically signed cookie; your next response will have a Set-Cookie header added. The next time a request comes in with that cookie, provided it is still valid and the signature checks out, the policy 'knows' what principal is making that request.
When that request then tries to access a protected resource, Pyramid sees that a policy is in effect and asks that policy for the current authenticated principal.
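To make that mapping concrete, here is a hypothetical login view that remembers the database primary key as the principal rather than the raw login name; the User model, request.dbsession and check_password helper are assumptions, not from the original examples:
from pyramid.httpexceptions import HTTPFound
from pyramid.security import remember


def login_view(request):
    login = request.params['login']
    password = request.params['password']
    # Hypothetical user lookup; any user store works here.
    user = request.dbsession.query(User).filter_by(login=login).first()
    if user is not None and user.check_password(password):
        # The second argument is the principal the policy stores in the
        # cookie; authenticated_userid(request) will return this value later.
        headers = remember(request, str(user.id))
        return HTTPFound(location='/', headers=headers)
    return HTTPFound(location='/login')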