This question has been asked here before. The accepted answer was probably obvious to both questioner and answerer, but not to me. I commented on that question asking for more details, but there was no response. I also asked on the meta Q&A how to revive questions from their grave, and got no answer either.
The accepted answer to the question above was:
From the client's perspective, an OpenID login is very similar to any other web-based login. There isn't a defined protocol for the client; it is an ordinary web session that varies based on your OpenID provider. For this reason, I doubt that any such libraries exist. You will probably have to code it yourself.
I already know how to log in to a website with Python using the urllib2 module, but that's not enough for me to work out how to authenticate with OpenID.
I'm actually trying to get my StackOverflow inbox in json format, for which I need to be logged in.
Could someone provide a short intro or a link to a nice tutorial on how to do that?
Well, I myself don't know much about OpenID, but your post (and the bounty!!) got me interested.
This link describes the exact flow of the OpenID authentication sequence (at least for v1.0; the current version is 2.0). From what I could make out, the steps are something like this:
You fetch the StackOverflow login page, which also offers the option to log in using OpenID (as a form field).
You send your OpenID, which is actually a URI and NOT a username/email (if it is a Google profile, it is your profile ID).
StackOverflow then connects to your ID provider (in this case Google) and sends you a redirect to the Google login page, together with another URL you should be redirected to afterwards (let's call it A).
You log in to the Google-provided page conventionally (using a POST from Python).
Google returns a cryptographic token (not entirely sure about this step) in response to your login request.
You send a new request to A with this token.
StackOverflow contacts Google with this token. If authenticity is established, it returns a session ID.
Later requests to StackOverflow should carry this session ID.
No idea about logging out!!
This link describes the various responses in OpenID and what they mean, so it may come in handy when you code your client.
Links taken from the wiki page OpenID Explained.
EDIT: Using the Tamper Data add-on for Firefox, the following sequence of events can be reconstructed.
The user sends a request to the SO login page. On entering the OpenID in the form field, the resulting response is a 302 redirecting to a Google page.
The redirect URL carries a lot of OpenID parameters (which are for the Google server). One of them is return_to=https://stackoverflow.com/users/authenticate/?s=some_value.
The user is presented with the Google login page. On login there are a few 302s which redirect the user around within Google's realm.
Finally a 302 is received which redirects the user to the StackOverflow page specified in 'return_to' earlier.
During this entire series of operations a lot of cookies are generated, which must be stored correctly.
On accessing the SO page (which Google 302'd you to), the SO server processes your request and sends a "Set-Cookie" field in the response headers to set cookies named gauth and usr, along with another 302 to stackoverflow.com. This step completes your login.
Your client simply stores the usr cookie.
You are logged in as long as you remember to send the usr cookie with any request to SO.
You can now request your inbox; just remember to send the usr cookie with the request.
I suggest you start coding your Python client and study the responses carefully. In most cases it will be a series of 302s with minimal user intervention (except for filling in your Google username and password and approving the site).
However, to make it easier, you could just log in to SO using your browser, copy all the cookie values, and make a request using urllib2 with those cookie values set, as in the sketch below.
Of course, if you log out in the browser, you will have to log in again and change the cookie values in your Python program.
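For illustration only, here is a minimal sketch of that shortcut (the usr cookie value is a placeholder you would copy from your own browser session; this is not the full OpenID flow):
import urllib2

# Cookie value copied by hand from a browser that is already logged in to SO.
# 'usr' is the session cookie mentioned above; the value here is a placeholder.
copied_cookies = 'usr=PASTE_YOUR_USR_COOKIE_VALUE_HERE'

request = urllib2.Request('http://stackoverflow.com/inbox/genuwine')
request.add_header('Cookie', copied_cookies)

response = urllib2.urlopen(request)
print response.read()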
I know this is close to archaeology, digging up a post that's two years old, but I just wrote a new, enhanced version of the code from the accepted answer, so I thought it might be worth sharing here, as this question and its answers were a great help to me when implementing it.
So, here's what's different:
it uses the new requests library, which is an enhancement over urllib2;
it supports authenticating using Google's and StackExchange's OpenID providers;
it is way shorter and simpler to read, though it has fewer printouts.
Here's the code:
#!/usr/bin/env python

import sys
import urllib

import requests
from BeautifulSoup import BeautifulSoup
from getpass import getpass   # needed for the password prompt in __main__


def get_google_auth_session(username, password):
    session = requests.Session()

    google_accounts_url = 'http://accounts.google.com'
    authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
    stack_overflow_url = 'http://stackoverflow.com/users/authenticate'

    r = session.get(google_accounts_url)
    dsh = BeautifulSoup(r.text).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = r.headers['X-Auto-Login']
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = r.cookies['GALX']

    payload = {'continue' : follow_up,
               'followup' : follow_up,
               'dsh' : dsh,
               'GALX' : galx,
               'pstMsg' : 1,
               'dnConn' : 'https://accounts.youtube.com',
               'checkConnection' : '',
               'checkedDomains' : '',
               'timeStmp' : '',
               'secTok' : '',
               'Email' : username,
               'Passwd' : password,
               'signIn' : 'Sign in',
               'PersistentCookie' : 'yes',
               'rmShown' : 1}

    r = session.post(authentication_url, data=payload)

    if r.url != authentication_url: # XXX crude check: a successful login redirects us away
        print "Logged in"
    else:
        print "login failed"
        sys.exit(1)

    payload = {'oauth_version' : '',
               'oauth_server' : '',
               'openid_username' : '',
               'openid_identifier' : ''}
    r = session.post(stack_overflow_url, data=payload)

    return session


def get_so_auth_session(email, password):
    session = requests.Session()

    r = session.get('http://stackoverflow.com/users/login')
    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']

    payload = {'openid_identifier': 'https://openid.stackexchange.com',
               'openid_username': '',
               'oauth_version': '',
               'oauth_server': '',
               'fkey': fkey,
               }
    r = session.post('http://stackoverflow.com/users/authenticate', allow_redirects=True, data=payload)

    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']
    session_name = BeautifulSoup(r.text).findAll(attrs={'name' : 'session'})[0]['value']

    payload = {'email': email,
               'password': password,
               'fkey': fkey,
               'session': session_name}

    r = session.post('https://openid.stackexchange.com/account/login/submit', data=payload)
    # check the page for an error message to detect a failed login
    error = BeautifulSoup(r.text).findAll(attrs={'class' : 'error'})
    if len(error) != 0:
        print "ERROR:", error[0].text
        sys.exit(1)

    return session


if __name__ == "__main__":
    prov = raw_input('Choose your openid provider [1 for StackOverflow, 2 for Google]: ')
    name = raw_input('Enter your OpenID address: ')
    pswd = getpass('Enter your password: ')

    if '1' in prov:
        so = get_so_auth_session(name, pswd)
    elif '2' in prov:
        so = get_google_auth_session(name, pswd)
    else:
        print "Error: no openid provider given"
        sys.exit(1)

    r = so.get('http://stackoverflow.com/inbox/genuwine')
    print r.json()
The code is also available as a GitHub gist.
HTH
This answer sums up what others have said below, especially RedBaron, and adds a method I used to get to the StackOverflow inbox using Google Accounts.
Using the Tamper Data developer tool for Firefox and logging on to StackOverflow, one can see that OpenID works this way:
StackOverflow requests authentication from a given service (here Google), defined in the posted data;
Google Accounts takes over and checks for an already existing cookie as proof of authentication;
If no cookie is found, Google requests authentication and sets a cookie;
Once the cookie is set, StackOverflow acknowledges authentication of the user.
The above sums up the process, which in reality is more complicated, since many redirects and cookie exchanges occur.
Because reproducing the same process programmatically proved somewhat difficult (and that might just be my illiteracy), especially when hunting down the URLs to call with all the locale specifics etc., I opted to log on to Google Accounts first, get a well-deserved cookie, and then log in to StackOverflow, which would use the cookie for authentication.
This is done simply using the following Python modules: urllib, urllib2, cookielib and BeautifulSoup.
Here is the (simplified) code; it's not perfect, but it does the trick. The extended version can be found on GitHub.
#!/usr/bin/env python

import urllib
import urllib2
import cookielib
from BeautifulSoup import BeautifulSoup
from getpass import getpass

# Define URLs
google_accounts_url = 'http://accounts.google.com'
authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
stack_overflow_url = 'https://stackoverflow.com/users/authenticate'
genuwine_url = 'https://stackoverflow.com/inbox/genuwine'

# Build opener
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))


def request_url(request):
    '''
    Requests given URL.
    '''
    try:
        response = opener.open(request)
    except:
        raise
    return response


def authenticate(username='', password=''):
    '''
    Authenticates to Google Accounts using user-provided username and password,
    then authenticates to StackOverflow.
    '''
    # Build up headers
    user_agent = 'Mozilla/5.0 (Ubuntu; X11; Linux i686; rv:8.0) Gecko/20100101 Firefox/8.0'
    headers = {'User-Agent' : user_agent}

    # Set Data to None
    data = None

    # Build up URL request with headers and data
    request = urllib2.Request(google_accounts_url, data, headers)
    response = request_url(request)

    # Build up POST data for authentication
    html = response.read()
    dsh = BeautifulSoup(html).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = response.headers.getheader('X-Auto-Login')
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = jar._cookies['accounts.google.com']['/']['GALX'].value

    values = {'continue' : follow_up,
              'followup' : follow_up,
              'dsh' : dsh,
              'GALX' : galx,
              'pstMsg' : 1,
              'dnConn' : 'https://accounts.youtube.com',
              'checkConnection' : '',
              'checkedDomains' : '',
              'timeStmp' : '',
              'secTok' : '',
              'Email' : username,
              'Passwd' : password,
              'signIn' : 'Sign in',
              'PersistentCookie' : 'yes',
              'rmShown' : 1}

    data = urllib.urlencode(values)

    # Build up URL for authentication
    request = urllib2.Request(authentication_url, data, headers)
    response = request_url(request)

    # Check if logged in: a successful login redirects us away from the auth URL
    if response.url != request.get_full_url():
        print '\n Logged in :)\n'
    else:
        print '\n Log in failed :(\n'

    # Build OpenID Data
    values = {'oauth_version' : '',
              'oauth_server' : '',
              'openid_username' : '',
              'openid_identifier' : 'https://www.google.com/accounts/o8/id'}
    data = urllib.urlencode(values)

    # Build up URL for OpenID authentication
    request = urllib2.Request(stack_overflow_url, data, headers)
    response = request_url(request)

    # Retrieve Genuwine
    data = None
    request = urllib2.Request(genuwine_url, data, headers)
    response = request_url(request)
    print response.read()


if __name__ == '__main__':
    username = raw_input('Enter your Gmail address: ')
    password = getpass('Enter your password: ')
    authenticate(username, password)
You need to handle cookies on any "login" page; in Python you use a cookie jar (cookielib). For example:
jar = cookielib.CookieJar()
myopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
#myopener now supports cookies.
....
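As a small illustration of how such an opener might then be used (the URL and form field names below are placeholders, not the actual StackOverflow login request), a rough sketch:
import urllib
import urllib2
import cookielib

jar = cookielib.CookieJar()
myopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# The first GET stores any cookies the login page sets into the jar ...
myopener.open('http://example.com/login')

# ... and the same opener sends them back automatically on the POST.
data = urllib.urlencode({'username': 'me', 'password': 'secret'})
response = myopener.open('http://example.com/login', data)
print response.read()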
I made a simple script that logs in to stackoverflow.com using Mozilla Firefox cookies. It's not entirely automated, because you need to log in manually, but it's all I managed to do.
The script works with recent versions of FF (I'm using 8.0.1), but you need to get the latest SQLite DLL, because the default one that comes with Python 2.7 can't open the DB. You can get it here: http://www.sqlite.org/sqlite-dll-win32-x86-3070900.zip
import urllib2
import webbrowser
import cookielib
import os
import sqlite3
import re
from time import sleep

# log in in Firefox. Must be the default browser. In other cases log in manually.
webbrowser.open_new('http://stackoverflow.com/users/login')

# wait for the user to log in
sleep(60)

# Process profiles.ini to get the path to cookies.sqlite
profile = open(os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','profiles.ini'), 'r').read()
COOKIE_DB = os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','Profiles',re.findall('Profiles/(.*)\n',profile)[0],'cookies.sqlite')
CONTENTS = "host, path, isSecure, expiry, name, value"

# extract cookies for a specific host
def get_cookies(host):
    cj = cookielib.LWPCookieJar()
    con = sqlite3.connect(COOKIE_DB)
    cur = con.cursor()
    sql = "SELECT {c} FROM moz_cookies WHERE host LIKE '%{h}%'".format(c=CONTENTS, h=host)
    cur.execute(sql)
    for item in cur.fetchall():
        c = cookielib.Cookie(0, item[4], item[5],
                             None, False,
                             item[0], item[0].startswith('.'), item[0].startswith('.'),
                             item[1], False,
                             item[2],
                             item[3], item[3]=="",
                             None, None, {})
        cj.set_cookie(c)
    return cj

host = 'stackoverflow'
cj = get_cookies(host)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
response = opener.open('http://stackoverflow.com').read()

# if the username appears in the response, auth was successful
if 'Stanislav Golovanov' in response:
    print 'Auth successful'
Related
I can't create new repositories in Bitbucket Cloud with the code below.
I'm able to delete repositories (by changing the requests method from 'post' to 'delete'). When I use the code below I get HTTP 400, which according to the API docs means: the input document was invalid, or the caller lacks the privilege to create repositories under the targeted account.
import requests
username = 'user@mail.com'
password = 'password'
headers = {"Content-Type": 'application/json'}
auth = (username, password)
bb_base_url = f"https://api.bitbucket.org/2.0/repositories/username/reponame"
res = requests.post(bb_base_url, headers=headers, auth=auth)
print(res)
So I would like to ask for help refactoring the code so that I will be able to do two things.
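One likely cause of the 400 is that the POST has no body; the Bitbucket 2.0 API expects a JSON document describing the repository to create. A rough, untested sketch (the scm/is_private fields and the workspace/repo slug placeholders are assumptions based on the API docs):
import requests

username = 'user@mail.com'   # Bitbucket username / app-password user
password = 'password'
workspace = 'myworkspace'    # placeholder workspace slug
repo_slug = 'reponame'       # placeholder repository slug

url = f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}"

# The request body describes the repository; requests serializes it as JSON
# and sets the Content-Type header via the json= keyword.
payload = {"scm": "git", "is_private": True}

res = requests.post(url, json=payload, auth=(username, password))
print(res.status_code, res.text)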
I am trying to log in to a website in order to get some data. I have noticed that there is no form data in the 'post' method, but there is a 'request payload'. Furthermore, once I am logged in I can no longer see the login POST request. Here is a screenshot of the network POST login request: [screenshot omitted]
When I log in, the next page shown is [screenshot omitted]. I use the following code to log in:
import requests
urlData = 'https://b*********.dk/Account/Market'
urlLogin = 'https://b**********an.dk/'

with requests.Session() as c:
    urlLogin = 'https://b*************n.dk/Authorization/'
    c.get(urlLogin)
    NetSession = c.cookies['ASP.NET_SessionId']
    login_data = {
        'ASP.NET_SessionId': NetSession,
        'username': "A******",
        'Password': "q******",
        'remmemberMe': True
    }
    lol = c.post(urlLogin, data=login_data)
    print(lol.text)
Running this code, the following is output:
{"Processed":true,"Message":"The user name or password provided is incorrect.","NeedResetPassword":false}
When I input a wrong password, the Processed value is false, while with correct credentials it is true. But it doesn't log me in. Any idea why this could happen?
As you've already correctly noticed, the original credentials are not sent using form encoding (meaning &user=alice&password=secret), but are JSON encoded (so rather {"user":"alice", "password": "secret"}). You can also see this in the request's Content-Type header, which is application/json here (as opposed to application/x-www-form-urlencoded otherwise).
For your custom request to work, you probably also need to send JSON-encoded data. This is documented at length in the official documentation, so I'll just give the short version:
import json
# Build session and request body just like you already did in your question
# ...
headers = {"Content-Type": "application/json"}
lol = c.post(urlLogin, data=json.dumps(login_data), headers=headers)
print(lol.json())
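If I remember correctly, reasonably recent versions of requests can also do the JSON encoding for you via the json= keyword argument, which sets the Content-Type header automatically; something like:
# Equivalent shortcut: requests serializes the dict to JSON and sets
# Content-Type: application/json for you.
lol = c.post(urlLogin, json=login_data)
print(lol.json())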
I have written some code to get URLs from a Bing search. It gives the error mentioned above.
import urllib
import urllib2
accountKey = 'mykey'
username = accountKey
queryBingFor = "'JohnDalton'"
quoted_query = urllib.quote(queryBingFor)
rootURL = "https://api.datamarket.azure.com/Bing/Search/"
searchURL = rootURL + "Image?$format=json&Query=" + quoted_query
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, searchURL,username,accountKey)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
readURL = urllib2.urlopen(searchURL).read()
I have set username = accountKey, as someone told me it has to be the same for both. Anyway, I didn't get a username when I made the Bing webmaster account. Or is it just my email? Excuse me if I have made novice mistakes; I've just started with Python.
In the absence of any other information, it seems unlikely that what is effectively your username and password would be the same thing if this site actually needs this form of authorisation.
Are you able to make it work by doing a request in your browser like the following?
https://mykey:mykey#api.datamarket.azure.com/Bing/Search/Image?$format=json&Query=blah
If so, then at least it sounds like the credentials are right and it's the way you are using them in Python that's wrong; but more likely the above will fail with the same error, suggesting the credentials themselves are not valid.
Also see this question, which suggests there may be a problem if the site doesn't do 'standard' auth: urllib2 HTTPPasswordMgr not working - Credentials not sent error
It also suggests that you might need to pass the top-level URL of the site to the password manager rather than the specific search URL, along the lines of the sketch below.
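For example, a minimal, untested variation on the code in the question, changing only which URL is registered with the password manager:
import urllib
import urllib2

accountKey = 'mykey'
quoted_query = urllib.quote("'JohnDalton'")

rootURL = "https://api.datamarket.azure.com/Bing/Search/"
searchURL = rootURL + "Image?$format=json&Query=" + quoted_query

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# Register the credentials against the top-level URL instead of the full search URL.
password_mgr.add_password(None, rootURL, accountKey, accountKey)

opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
readURL = opener.open(searchURL).read()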
Finally, it might be worth adapting this code:
http://www.voidspace.org.uk/python/articles/authentication.shtml
for your site, to check the auth realm and scheme the site is sending you and confirm they're supported.
Hello everybody.
I'm working on a django/mod_wsgi/apache2 website that serves sensitive information using https for all requests and responses. All views are written to redirect if the user isn't authenticated. It also has several views that are meant to function like RESTful web services.
I'm now in the process of writing a script that uses urllib/urllib2 to contact several of these services in order to download a series of very large files. I'm running into problems with 403: FORBIDDEN errors when attempting to log in.
The (rough-draft) method I'm using for authentication and login is:
def login( base_address, username=None, password=None ):
    # prompt for the username (if needed), password
    if username == None:
        username = raw_input( 'Username: ' )
    if password == None:
        password = getpass.getpass( 'Password: ' )
    log.info( 'Logging in %s' % username )

    # fetch the login page in order to get the csrf token
    cookieHandler = urllib2.HTTPCookieProcessor()
    opener = urllib2.build_opener( urllib2.HTTPSHandler(), cookieHandler )
    urllib2.install_opener( opener )

    login_url = base_address + PATH_TO_LOGIN
    log.debug( "login_url: " + login_url )
    login_page = opener.open( login_url )

    # attempt to get the csrf token from the cookie jar
    csrf_cookie = None
    for cookie in cookieHandler.cookiejar:
        if cookie.name == 'csrftoken':
            csrf_cookie = cookie
            break
    if not csrf_cookie:
        raise IOError( "No csrf cookie found" )
    log.debug( "found csrf cookie: " + str( csrf_cookie ) )
    log.debug( "csrf_token = %s" % csrf_cookie.value )

    # login using the usr, pwd, and csrf token
    login_data = urllib.urlencode( dict(
        username=username, password=password,
        csrfmiddlewaretoken=csrf_cookie.value ) )
    log.debug( "login_data: %s" % login_data )

    req = urllib2.Request( login_url, login_data )
    response = urllib2.urlopen( req )
    # <--- 403: FORBIDDEN here

    log.debug( 'response url:\n' + str( response.geturl() ) + '\n' )
    log.debug( 'response info:\n' + str( response.info() ) + '\n' )

    # should redirect to the welcome page here, if back at log in - refused
    if response.geturl() == login_url:
        raise IOError( 'Authentication refused' )
    log.info( '\t%s is logged in' % username )

    # save the cookies/opener for further actions
    return opener
I'm using the HTTPCookieProcessor to store Django's authentication cookies on the script side so I can access the web services and get through my redirects.
I know that the CSRFmiddleware for Django is going to bump me out if I don't pass the csrf token along with the log in information, so I pull that first from the first page/form load's cookiejar. Like I mentioned, this works with the http/development version of the site.
Specifically, I'm getting a 403 when trying to post the credentials to the login page/form over the https connection. This method works when used on the development server which uses an http connection.
There is no Apache directory directive that prevents access to that area (that I can see). The script connects successfully to the login page without post data so I'm thinking that would leave Apache out of the problem (but I could be wrong).
The python installations I'm using are both compiled with SSL.
I've also read that urllib2 doesn't allow https connections via proxy. I'm not very experienced with proxies, so I don't know if using a script from a remote machine is actually a proxy connection and whether that would be the problem. Is this causing the access problem?
From what I can tell, the problem is in the combination of cookies and the post data, but I'm unclear as to where to take it from here.
Any help would be appreciated. Thanks
Please excuse my answering my own question, but for the record this seems to have solved it:
It turns out I needed to set the HTTP Referer header to the login page url in the request where I post the login information.
req.add_header( 'Referer', login_url )
The reason is explained on the Django CSRF documentation - specifically, step 4.
Due to our somewhat peculiar server setup, where we use HTTPS on the production side and DEBUG=False, I wasn't seeing the csrf_failure reason for the failure (in this case: 'Referer checking failed - no referer') that is normally output in the DEBUG info. I ended up printing that failure reason to the Apache error_log and STFW'd on it. That led me to code.djangoproject/.../csrf.py and the Referer header fix.
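For clarity, within the login() function from the question the change amounts to roughly this (same variables as before):
# login_url and login_data are built exactly as in login() above
req = urllib2.Request( login_url, login_data )
req.add_header( 'Referer', login_url )   # satisfies Django's CSRF referer check over HTTPS
response = urllib2.urlopen( req )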
This works on my Django setup over HTTPS, which is inspired by yours. I'm starting to think that the problem is outside this code... Is the server saying anything? I might very well look into Apache.
I'm using the following code from my local machine to my server using SSL on nginx, so Apache might be the place to look. I suppose one way to narrow it down is to try your script on my login page :) Shoot me an email!
import urllib
import urllib2
import contextlib


def login(login_url, username, password):
    """
    Login to site
    """
    cookies = urllib2.HTTPCookieProcessor()
    opener = urllib2.build_opener(cookies)
    urllib2.install_opener(opener)

    opener.open(login_url)

    try:
        token = [x.value for x in cookies.cookiejar if x.name == 'csrftoken'][0]
    except IndexError:
        return False, "no csrftoken"

    params = dict(username=username, password=password,
                  this_is_the_login_form=True,
                  csrfmiddlewaretoken=token,
                  )
    encoded_params = urllib.urlencode(params)

    with contextlib.closing(opener.open(login_url, encoded_params)) as f:
        html = f.read()
        print html
        # we're in.
I have a Google App Engine app - http://mylovelyapp.appspot.com/
It has a page - mylovelypage
For the moment, the page just does self.response.out.write('OK')
If I run the following Python at my computer:
import urllib2
f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage")
s = f.read()
print s
f.close()
it prints "OK"
The problem is, if I add login: required for this page in the app's app.yaml,
then this prints out the HTML of the Google Accounts login page instead.
I've tried "normal" authentication approaches, e.g.:
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(None,
                          uri='http://mylovelyapp.appspot.com/mylovelypage',
                          user='billy.bob@gmail.com',
                          passwd='billybobspasswd')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
But it makes no difference - I still get the login page's HTML back.
I've tried Google's ClientLogin auth API, but I can't get it to work.
import re
import httplib2

h = httplib2.Http()

auth_uri = 'https://www.google.com/accounts/ClientLogin'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("billy.bob@gmail.com", "billybobspassword")
response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers)

if response['status'] == '200':
    authtok = re.search('Auth=(\S*)', content).group(1)

    headers = {}
    headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip()
    headers['Content-Length'] = '0'

    response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage",
                                  'POST',
                                  body="",
                                  headers=headers)

    while response['status'] == "302":
        response, content = h.request(response['location'], 'POST', body="", headers=headers)

    print content
I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return me the login page's HTML. :-(
Can anyone help, please?
Could I use the GData client library to do this sort of thing? From what I've read, I think it should be able to access App Engine apps, but I haven't been any more successful at getting the authentication working for App Engine there either.
Any pointers to samples, articles, or even just keywords I should be searching for to get me started would be very much appreciated.
Thanks!
appcfg.py, the tool that uploads data to App Engine, has to do exactly this to authenticate itself with the App Engine server. The relevant functionality is abstracted into appengine_rpc.py. In a nutshell, the solution is:
Use the Google ClientLogin API to obtain an authentication token. appengine_rpc.py does this in _GetAuthToken
Send the auth token to a special URL on your App Engine app. That page then returns a cookie and a 302 redirect. Ignore the redirect and store the cookie. appcfg.py does this in _GetAuthCookie
Use the returned cookie in all future requests.
You may also want to look at _Authenticate, to see how appcfg handles the various return codes from ClientLogin, and _GetOpener, to see how appcfg creates a urllib2 OpenerDirector that doesn't follow HTTP redirects. Or you could, in fact, just use the AbstractRpcServer and HttpRpcServer classes wholesale, since they do pretty much everything you need.
Thanks to Arachnid for the answer - it worked as suggested.
Here is a simplified copy of the code, in case it is helpful to the next person to try it!
import os
import urllib
import urllib2
import cookielib
users_email_address = "billy.bob@gmail.com"
users_password = "billybobspassword"
target_authenticated_google_app_engine_uri = 'http://mylovelyapp.appspot.com/mylovelypage'
my_app_name = "yay-1.0"
# we use a cookie to authenticate with Google App Engine
# by registering a cookie handler here, this will automatically store the
# cookie returned when we use urllib2 to open http://currentcost.appspot.com/_ah/login
cookiejar = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
urllib2.install_opener(opener)
#
# get an AuthToken from Google accounts
#
auth_uri = 'https://www.google.com/accounts/ClientLogin'
authreq_data = urllib.urlencode({ "Email": users_email_address,
"Passwd": users_password,
"service": "ah",
"source": my_app_name,
"accountType": "HOSTED_OR_GOOGLE" })
auth_req = urllib2.Request(auth_uri, data=authreq_data)
auth_resp = urllib2.urlopen(auth_req)
auth_resp_body = auth_resp.read()
# auth response includes several fields - we're interested in
# the bit after Auth=
auth_resp_dict = dict(x.split("=")
for x in auth_resp_body.split("\n") if x)
authtoken = auth_resp_dict["Auth"]
#
# get a cookie
#
# the call to request a cookie will also automatically redirect us to the page
# that we want to go to
# the cookie jar will automatically provide the cookie when we reach the
# redirected location
# this is where I actually want to go to
serv_uri = target_authenticated_google_app_engine_uri
serv_args = {}
serv_args['continue'] = serv_uri
serv_args['auth'] = authtoken
full_serv_uri = "http://mylovelyapp.appspot.com/_ah/login?%s" % (urllib.urlencode(serv_args))
serv_req = urllib2.Request(full_serv_uri)
serv_resp = urllib2.urlopen(serv_req)
serv_resp_body = serv_resp.read()
# serv_resp_body should contain the contents of the
# target_authenticated_google_app_engine_uri page - as we will have been
# redirected to that page automatically
#
# to prove this, I'm just gonna print it out
print serv_resp_body
For those who can't get ClientLogin to work, try App Engine's OAuth support.
I'm not too familiar with App Engine or Google's web APIs, but for a brute-force approach you could write a script with something like mechanize (http://wwwsearch.sourceforge.net/mechanize/) to simply walk through the login process before you begin doing the real work of the client.
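If you go that route, a rough mechanize sketch might look like the following (the form index and field names are guesses; inspect the actual login form first):
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)   # the login pages may be disallowed by robots.txt

# Opening the protected page should redirect to the Google Accounts login form.
br.open("http://mylovelyapp.appspot.com/mylovelypage")

br.select_form(nr=0)                   # assume the first form on the page is the login form
br["Email"] = "billy.bob@gmail.com"    # field names are assumptions
br["Passwd"] = "billybobspassword"
br.submit()

print br.response().read()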
I'm not a Python expert or an App Engine expert, but did you try following the sample app at http://code.google.com/appengine/docs/gettingstarted/usingusers.html? I created one at http://quizengine.appspot.com, and it seemed to work fine with Google authentication and everything.
Just a suggestion, but look into the getting started guide. Take it easy if the suggestion sounds naive. :)
Thanks.