HTTPError: HTTP Error 401: basic auth failed. Bing Search - python

I have written code to get URLs from Bing search, but it gives the error mentioned above:
import urllib
import urllib2
accountKey = 'mykey'
username =accountKey
queryBingFor = "'JohnDalton'"
quoted_query = urllib.quote(queryBingFor)
rootURL = "https://api.datamarket.azure.com/Bing/Search/"
searchURL = rootURL + "Image?$format=json&Query=" + quoted_query
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, searchURL,username,accountKey)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
readURL = urllib2.urlopen(searchURL).read()
I set username = accountKey because someone told me it has to be the same for both. Anyway, I didn't get a username when I made the Bing webmaster account, or is it just my email? Excuse me if I have made novice mistakes; I've just started Python.

In the absence of any other information, it seems unlikely that what are effectively your username and password would be the same thing, if this site actually needs this form of authorisation.
Are you able to make it work by doing a request in your browser like the following?
https://mykey:mykey@api.datamarket.azure.com/Bing/Search/Image?$format=json&Query=blah
If so, then at least it sounds like the credentials are right and it's the way you are using them in Python that's wrong; but more likely the above will fail with the same error, suggesting the credentials themselves are not valid.
Also see this question, which suggests there may be a problem if the site doesn't do 'standard' auth: urllib2 HTTPPasswordMgr not working - Credentials not sent error
It also suggests that you might need to pass the top-level URL of the site to the password manager rather than the specific search URL.
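For example, a minimal sketch of that suggestion, assuming (as your code implies) that the account key is sent as both username and password:
import urllib
import urllib2

accountKey = 'mykey'
rootURL = 'https://api.datamarket.azure.com/Bing/Search/'
searchURL = rootURL + 'Image?$format=json&Query=' + urllib.quote("'JohnDalton'")

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# Register the credentials against the root URL so they match any
# path beneath it, not just the exact search URL
password_mgr.add_password(None, rootURL, accountKey, accountKey)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
print opener.open(searchURL).read()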
Finally, it might be worth adapting this code:
http://www.voidspace.org.uk/python/articles/authentication.shtml
for your site, to check the auth realm and scheme the site is sending you and confirm they're supported.
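As a quick sketch of that check (urllib2 raises HTTPError on the 401, and the scheme and realm appear in the WWW-Authenticate header):
import urllib2

try:
    urllib2.urlopen('https://api.datamarket.azure.com/Bing/Search/')
except urllib2.HTTPError, e:
    # prints e.g. 401 Basic realm="..." for standard basic auth
    print e.code, e.info().getheader('WWW-Authenticate')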

Related

Python Requests / HTTPX - Authenticate All Redirected URL - How To

I wonder if there is a way to authenticate each redirected URL when working with Python modules such as httpx or requests?
Problem Statement
I am trying to connect to an API endpoint under the company network. Due to the company's cyber security measures, the API endpoint will be randomly masked with a company proxy, which causes the 307 Redirect status code.
My current code snippet looks like the below:
import httpx

api_url = 'https://demo.vizionapi.com/carriers'
head = {'X-API-Key': 'API KEY'}

response = httpx.get(url=api_url, verify='supporting_files/cacert.pem',
                     headers=head, auth=('my username', 'my password'),
                     follow_redirects=True)
With the above code I receive a 401 authentication needed error (even though auth has been passed). This error only happens when redirection occurs due to the company proxy.
Question:
My assumption is that the authentication is only being passed to the first URL, not the redirected URL. Therefore, does anyone know how I can use the same auth parameter for all URLs (direct and redirected)?
Any suggestion will be deeply appreciated.
I don't know what requests' behaviour with regard to auth during redirects is, but the first solution that comes to mind is to follow the redirects manually: put your request in a loop that checks for 3xx response codes, and handle auth however you want, as in the sketch below.
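A rough httpx sketch of that loop, reusing the question's placeholder URL, header and credentials:
import httpx

api_url = 'https://demo.vizionapi.com/carriers'
head = {'X-API-Key': 'API KEY'}
auth = ('my username', 'my password')

with httpx.Client(verify='supporting_files/cacert.pem') as client:
    url = api_url
    for _ in range(10):  # cap the number of hops to avoid redirect loops
        response = client.get(url, headers=head, auth=auth, follow_redirects=False)
        if response.status_code not in (301, 302, 303, 307, 308):
            break
        # Resolve a possibly relative Location header against the current URL
        url = str(response.url.join(response.headers['location']))

print(response.status_code)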

403 when retrieving a WSDL via Python SUDS

I can't seem to get SUDS to download a WSDL that requires Basic auth credentials. My code is simple:
wsdl_url = 'https://example.com/ChangeRequest.do?WSDL'
self.client = Client(wsdl_url, username=username, password=password)
I've also tried:
from suds.transport.https import HttpAuthenticated
wsdl_url = 'https://example.com/ChangeRequest.do?WSDL'
credentials = dict(username=username, password=password)
t = HttpAuthenticated(**credentials)
self.client = Client(url=wsdl_url, transport=t)
In both cases, the service returns a 403 Forbidden error. I can go down into the SUDS code in http.py and add this line to the call:
u2request.add_header('Authorization','Basic xxxxxxxxxxxxxxxxxxxx')
This works. What am I doing wrong to get SUDS to pass my credentials when downloading the WSDL?
Note: I tried connecting to the WSDL directly using both Chrome's Postman plugin and SoapUI, and the service works there as well, so I know the credentials are correct.
I encountered a similar issue (suds v0.4, WSDL, 403), and found out that it was because the server I was trying to access blocks any request whose User-Agent header looks like Python-urllib* (suds uses urllib2, hence the default header). Explicitly changing the header solves the issue.
Particular to my solution: I overrode the open method of a transport class and set client options, as in the following code snippet. Note that we need to set the header explicitly for open and for subsequent requests separately. Please advise of better ways to circumvent this if you know any, and I hope this post can save someone's time in the future.
import urllib2
import suds
from suds.client import Client
from suds.transport.https import HttpAuthenticated
from suds.transport import TransportError

URL = 'https://example.com/ChangeRequest.do?WSDL'

class HttpHeaderModify(HttpAuthenticated):
    def open(self, request):
        try:
            url = request.url
            u2request = urllib2.Request(url, headers={'User-Agent': 'Mozilla'})
            self.proxy = self.options.proxy
            return self.u2open(u2request)
        except urllib2.HTTPError, e:
            raise TransportError(str(e), e.code, e.fp)

transport = HttpHeaderModify()
client = Client(URL, transport=transport, timeout=10)

# Subsequent requests' headers need to be set again here. The overridden
# transport class only handles the opening of the client.
client.set_options(headers={'User-Agent': 'Mozilla'})
P.S. Though my problem may not be the same, searching for "403 suds" pops up this SO question, so I decided to just post my solution here.
Reference post that gave me the right direction: https://bitbucket.org/jurko/suds/issues/27/client-request-for-wsdl-does-not-use-given
I had this issue before and compared my request with the headers SoapUI sends. I found that suds was failing to include the Host header:
client.set_options(headers={'Host': 'value'})
Adding it fixed the issue.

How to request pages from website that uses OpenID?

This question has been asked here before. The accepted answer was probably obvious to both questioner and answerer, but not to me. I have commented on the above question to get clarification, but there was no response. I also approached the meta Q&A for help on how to bring back questions from their grave, and got no answer either.
The answer to the above question was:
From the client's perspective, an OpenID login is very similar to any other web-based login. There isn't a defined protocol for the client; it is an ordinary web session that varies based on your OpenID provider. For this reason, I doubt that any such libraries exist. You will probably have to code it yourself.
I already know how to log onto a website with Python, using the urllib2 module, but that's not enough for me to work out how to authenticate to an OpenID provider.
I'm actually trying to get my StackOverflow inbox in json format, for which I need to be logged in.
Could someone provide a short intro or a link to a nice tutorial on how to do that?
Well, I myself don't know much about OpenID, but your post (and the bounty!!) got me interested.
This link describes the exact flow of the OpenID authentication sequence (at least for v1.0; the new version is 2.0). From what I could make out, the steps would be something like:
You fetch the login page of StackOverflow, which will also provide an option to log in using OpenID (as a form field)
You send your OpenID, which is actually a form of URI and NOT a username/email (if it is a Google profile, it is your profile ID)
StackOverflow will then connect to your ID provider (in this case Google) and send you a redirect to the Google login page, plus another link to where you should redirect later (let's say A)
You log in to the Google-provided page conventionally (using the POST method from Python)
Google provides a cryptographic token (not quite sure about this step) in return to your login request
You send the new request to A with this token
StackOverflow will contact Google with this token. If authenticity is established, it will return a session ID
Later requests to StackOverflow should carry this session ID
No idea about logging out!!
This link describes the various responses in OpenID and what they mean, so it should come in handy when you code your client.
Links from the wiki page OpenID Explained
EDIT: Using the Tamper Data add-on for Firefox, the following sequence of events can be constructed.
The user sends a request to the SO login page. On entering the OpenID in the form field, the resulting page sends a 302 redirecting to a Google page.
The redirect URL has a lot of OpenID parameters (which are for the Google server). One of them is return_to=https://stackoverflow.com/users/authenticate/?s=some_value.
The user is presented with the Google login page. On login there are a few 302s which redirect the user around in the Google realm.
Finally a 302 is received which redirects the user to StackOverflow's page specified in 'return_to' earlier.
During this entire series of operations a lot of cookies are generated which must be stored correctly.
On accessing the SO page (which was 302'd by Google), the SO server processes your request and in the response header sends a field "Set-Cookie" to set cookies named gauth and usr with a value, along with another 302 to stackoverflow.com. This step completes your login.
Your client simply stores the cookie usr.
You are logged in as long as you remember to send the cookie usr with any request to SO.
You can now request your inbox; just remember to send the usr cookie with the request.
I suggest you start coding your Python client and study the responses carefully. In most cases it will be a series of 302s with minimal user intervention (except for filling in your Google username and password and allowing the site page).
However, to make it easier, you could just log in to SO using your browser, copy all the cookie values and make a request using urllib2 with the cookie values set, as in the sketch below.
Of course, if you log out in the browser, you will have to log in again and change the cookie value in your Python program.
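A minimal sketch of that shortcut, assuming you have copied the value of the usr cookie from your browser (the value here is a placeholder):
import urllib2

request = urllib2.Request('https://stackoverflow.com/inbox/genuwine')
request.add_header('Cookie', 'usr=VALUE_COPIED_FROM_BROWSER')
print urllib2.urlopen(request).read()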
I know this is close to archeology, digging up a post that's two years old, but I just wrote a new enhanced version of the code from the validated answer, so I thought it might be cool to share it here, as this question and its answers were a great help to me in implementing it.
So, here's what's different:
it uses the new requests library that is an enhancement over urllib2;
it supports authenticating using Google's and StackExchange's OpenID providers;
it is way shorter and simpler to read, though it has fewer printouts.
Here's the code:
#!/usr/bin/env python

import sys
import urllib
from getpass import getpass

import requests
from BeautifulSoup import BeautifulSoup

def get_google_auth_session(username, password):
    session = requests.Session()
    google_accounts_url = 'http://accounts.google.com'
    authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
    stack_overflow_url = 'http://stackoverflow.com/users/authenticate'

    r = session.get(google_accounts_url)
    dsh = BeautifulSoup(r.text).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = r.headers['X-Auto-Login']
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = r.cookies['GALX']

    payload = {'continue' : follow_up,
               'followup' : follow_up,
               'dsh' : dsh,
               'GALX' : galx,
               'pstMsg' : 1,
               'dnConn' : 'https://accounts.youtube.com',
               'checkConnection' : '',
               'checkedDomains' : '',
               'timeStmp' : '',
               'secTok' : '',
               'Email' : username,
               'Passwd' : password,
               'signIn' : 'Sign in',
               'PersistentCookie' : 'yes',
               'rmShown' : 1}

    r = session.post(authentication_url, data=payload)

    if r.url != authentication_url:  # XXX
        print "Logged in"
    else:
        print "login failed"
        sys.exit(1)

    payload = {'oauth_version' : '',
               'oauth_server' : '',
               'openid_username' : '',
               'openid_identifier' : ''}
    r = session.post(stack_overflow_url, data=payload)

    return session

def get_so_auth_session(email, password):
    session = requests.Session()
    r = session.get('http://stackoverflow.com/users/login')
    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']

    payload = {'openid_identifier': 'https://openid.stackexchange.com',
               'openid_username': '',
               'oauth_version': '',
               'oauth_server': '',
               'fkey': fkey,
               }
    r = session.post('http://stackoverflow.com/users/authenticate', allow_redirects=True, data=payload)

    fkey = BeautifulSoup(r.text).findAll(attrs={'name' : 'fkey'})[0]['value']
    session_name = BeautifulSoup(r.text).findAll(attrs={'name' : 'session'})[0]['value']

    payload = {'email': email,
               'password': password,
               'fkey': fkey,
               'session': session_name}

    r = session.post('https://openid.stackexchange.com/account/login/submit', data=payload)

    # check the page for an error box, for error detection
    error = BeautifulSoup(r.text).findAll(attrs={'class' : 'error'})
    if len(error) != 0:
        print "ERROR:", error[0].text
        sys.exit(1)

    return session

if __name__ == "__main__":
    prov = raw_input('Choose your openid provider [1 for StackOverflow, 2 for Google]: ')
    name = raw_input('Enter your OpenID address: ')
    pswd = getpass('Enter your password: ')
    if '1' in prov:
        so = get_so_auth_session(name, pswd)
    elif '2' in prov:
        so = get_google_auth_session(name, pswd)
    else:
        print "Error no openid provider given"
        sys.exit(1)

    r = so.get('http://stackoverflow.com/inbox/genuwine')
    print r.json()
The code is also available as a GitHub gist.
HTH
This answer sums up what others have said below, especially RedBaron, and adds a method I used to get to the StackOverflow inbox using Google Accounts.
Using the Tamper Data developer tool for Firefox and logging on to StackOverflow, one can see that OpenID works this way:
StackOverflow requests authentication from a given service (here Google), defined in the posted data;
Google Accounts takes over and checks for an already existing cookie as proof of authentication;
If no cookie is found, Google requests authentication and sets a cookie;
Once the cookie is set, StackOverflow acknowledges authentication of the user.
The above sums up the process, which in reality is more complicated, since many redirects and cookie exchanges occur.
Because reproducing the same process programmatically proved somewhat difficult (and that might just be my illiteracy), especially trying to hunt down the URLs to call with all the locale specifics etc., I opted for logging on to Google Accounts first, getting a well-deserved cookie, and then logging on to StackOverflow, which would use the cookie for authentication.
This is done simply using the following Python modules: urllib, urllib2, cookielib and BeautifulSoup.
Here is the (simplified) code; it's not perfect, but it does the trick. The extended version can be found on GitHub.
#!/usr/bin/env python

import urllib
import urllib2
import cookielib
from BeautifulSoup import BeautifulSoup
from getpass import getpass

# Define URLs
google_accounts_url = 'http://accounts.google.com'
authentication_url = 'https://accounts.google.com/ServiceLoginAuth'
stack_overflow_url = 'https://stackoverflow.com/users/authenticate'
genuwine_url = 'https://stackoverflow.com/inbox/genuwine'

# Build opener with a cookie jar
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

def request_url(request):
    '''
    Requests given URL.
    '''
    try:
        response = opener.open(request)
    except:
        raise
    return response

def authenticate(username='', password=''):
    '''
    Authenticates to Google Accounts using user-provided username and password,
    then authenticates to StackOverflow.
    '''
    # Build up headers
    user_agent = 'Mozilla/5.0 (Ubuntu; X11; Linux i686; rv:8.0) Gecko/20100101 Firefox/8.0'
    headers = {'User-Agent' : user_agent}

    # Set data to None for a plain GET request
    data = None

    # Build up URL request with headers and data
    request = urllib2.Request(google_accounts_url, data, headers)
    response = request_url(request)

    # Build up POST data for authentication
    html = response.read()
    dsh = BeautifulSoup(html).findAll(attrs={'name' : 'dsh'})[0].get('value').encode()
    auto = response.headers.getheader('X-Auto-Login')
    follow_up = urllib.unquote(urllib.unquote(auto)).split('continue=')[-1]
    galx = jar._cookies['accounts.google.com']['/']['GALX'].value

    values = {'continue' : follow_up,
              'followup' : follow_up,
              'dsh' : dsh,
              'GALX' : galx,
              'pstMsg' : 1,
              'dnConn' : 'https://accounts.youtube.com',
              'checkConnection' : '',
              'checkedDomains' : '',
              'timeStmp' : '',
              'secTok' : '',
              'Email' : username,
              'Passwd' : password,
              'signIn' : 'Sign in',
              'PersistentCookie' : 'yes',
              'rmShown' : 1}

    data = urllib.urlencode(values)

    # Build up URL for authentication
    request = urllib2.Request(authentication_url, data, headers)
    response = request_url(request)

    # Check if logged in
    if response.url != request._Request__original:
        print '\n Logged in :)\n'
    else:
        print '\n Log in failed :(\n'

    # Build OpenID data
    values = {'oauth_version' : '',
              'oauth_server' : '',
              'openid_username' : '',
              'openid_identifier' : 'https://www.google.com/accounts/o8/id'}
    data = urllib.urlencode(values)

    # Build up URL for OpenID authentication
    request = urllib2.Request(stack_overflow_url, data, headers)
    response = request_url(request)

    # Retrieve Genuwine
    data = None
    request = urllib2.Request(genuwine_url, data, headers)
    response = request_url(request)
    print response.read()

if __name__ == '__main__':
    username = raw_input('Enter your Gmail address: ')
    password = getpass('Enter your password: ')
    authenticate(username, password)
You need to implement cookie handling on any "login" page; in Python you use a cookie jar. For example:
import urllib2
import cookielib

jar = cookielib.CookieJar()
myopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
# myopener now supports cookies
....
I made a simple script that logs in to stackoverflow.com using Mozilla Firefox cookies. It's not entirely automated, because you need to log in manually, but it's all I managed to do.
The script works with recent versions of Firefox (I'm using 8.0.1), but you need to get the latest sqlite DLL, because the default one that comes with Python 2.7 can't open the DB. You can get it here: http://www.sqlite.org/sqlite-dll-win32-x86-3070900.zip
import urllib2
import webbrowser
import cookielib
import os
import sqlite3
import re
from time import sleep

# Log in in Firefox. It must be the default browser; in other cases log in manually
webbrowser.open_new('http://stackoverflow.com/users/login')
# Wait for the user to log in
sleep(60)

# Process profiles.ini to get the path to cookies.sqlite
profile = open(os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','profiles.ini'), 'r').read()
COOKIE_DB = os.path.join(os.environ['APPDATA'],'Mozilla','Firefox','Profiles',re.findall('Profiles/(.*)\n',profile)[0],'cookies.sqlite')
CONTENTS = "host, path, isSecure, expiry, name, value"

# Extract cookies for a specific host
def get_cookies(host):
    cj = cookielib.LWPCookieJar()
    con = sqlite3.connect(COOKIE_DB)
    cur = con.cursor()
    sql = "SELECT {c} FROM moz_cookies WHERE host LIKE '%{h}%'".format(c=CONTENTS, h=host)
    cur.execute(sql)
    for item in cur.fetchall():
        c = cookielib.Cookie(0, item[4], item[5],
                             None, False,
                             item[0], item[0].startswith('.'), item[0].startswith('.'),
                             item[1], False,
                             item[2],
                             item[3], item[3]=="",
                             None, None, {})
        cj.set_cookie(c)
    return cj

host = 'stackoverflow'
cj = get_cookies(host)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
response = opener.open('http://stackoverflow.com').read()

# If the username appears in the response, authentication succeeded
if 'Stanislav Golovanov' in response:
    print 'Auth successful'

pywikipedia bot with https and http authentication

I'm having trouble getting my bot to log in to a MediaWiki install on the intranet. I believe it is due to the HTTP authentication protecting the wiki.
Facts:
The wiki root is: https://local.example.com/mywiki/
When visiting the wiki with a web browser, a popup comes up asking for enterprise credentials (I assume this is basic access authentication)
This is what I have in my user-config.py:
mylang = 'en'
family = 'mywiki'
usernames['mywiki']['en'] = u'Bot'
authenticate['local.example.com'] = ('user', 'pass')
This is what I have in mywiki_family.py:
# -*- coding: utf-8 -*-
import family, config

# The Wikimedia family that is known as mywiki
class Family(family.Family):
    def __init__(self):
        family.Family.__init__(self)
        self.name = 'mywiki'
        self.langs = { 'en' : 'local.example.com'}

    def scriptpath(self, code):
        return '/mywiki'

    def version(self, code):
        return '1.13.5'

    def isPublic(self):
        return False

    def hostname(self, code):
        return 'local.example.com'

    def protocol(self, code):
        return 'https'

    def path(self, code):
        return '/mywiki/index.php'
When I execute login.py -v -v, I get this:
urllib2.urlopen(urllib2.Request('https://local.example.com/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit', wpSkipCookieCheck=1&wpPassword=XXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=Bot, {'Content-type': 'application/x-www-form-urlencoded', 'User-agent': 'PythonWikipediaBot/1.0'})):
(Redundant traceback info here)
urllib2.HTTPError: HTTP Error 401: Unauthorized
(I'm not sure why it has 'local.example.com/w' instead of '/mywiki'.)
I thought it might be trying to authenticate to example.com instead of example.com/wiki, so I changed the authenticate line to:
authenticate['local.example.com/mywiki'] = ('user', 'pass')
But then I get an HTTP 401.2 error back from IIS:
You do not have permission to view this directory or page using the credentials that you supplied because your Web browser is sending a WWW-Authenticate header field that the Web server is not configured to accept.
Any help on how to get this working would be appreciated.
Update: After fixing my family file, it now says:
Getting information for site mywiki:en
('http error', 401, 'Unauthorized', )
WARNING: Could not open 'https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook'. Maybe the server or your connection is down. Retrying in 1 minutes...
I looked at the HTTP headers on a plain urllib2.urlopen call and it's sending WWW-Authenticate: Negotiate and WWW-Authenticate: NTLM. I'm guessing urllib2, and thus pywikipedia, don't support this?
Update: Added a tasty bounty for help in getting this to work. I can authenticate using python-ntlm. How do I integrate this into pywikipedia?
Well, the fact that login.py tries accessing '/w' instead of your path shows that there is a family configuration issue.
Your code is indented strangely: is scriptpath a member of the new Family class? As in:
class Family(family.Family):
    def __init__(self):
        family.Family.__init__(self)
        self.name = 'mywiki'
        self.langs = { 'en' : 'local.example.com'}

    def scriptpath(self, code):
        return '/mywiki'

    def version(self, code):
        return '1.13.5'

    def isPublic(self):
        return False

    def hostname(self, code):
        return 'local.example.com'

    def protocol(self, code):
        return 'https'
?
I believe that something is wrong with your family file. A good way to check is to do, in a Python console:
import wikipedia
site = wikipedia.getSite('en', 'mywiki')
print site.login_address()
As long as the relative address is wrong, showing '/w' instead of '/mywiki', the family file is still not configured correctly and the bot won't work :)
Update: how to integrate ntlm in pywikipedia?
I just had a look at the basic example here. I would integrate the code before that line in login.py:
response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))
You want to write something like the following:
from ntlm import HTTPNtlmAuthHandler
user = 'DOMAIN\User'
password = "Password"
url = self.site.protocol() + '://' + self.site.hostname()
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, user, password)
# create the NTLM authentication handler
auth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman)
# create and install the opener
opener = urllib2.build_opener(auth_NTLM)
urllib2.install_opener(opener)
response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))
I would test this and integrate it directly into the pywikipedia codebase, if only I had an available NTLM setup...
Whatever happens, please do not vanish with your solution: we at pywikipedia are interested in it :)
I am guessing the problem is that the server expects basic authentication and you are not handling that in your client. Michael Foord wrote a good article about handling basic authentication in Python; the sketch below shows the general shape.
You did not provide enough information for me to be sure about this, so if that does not work, please provide some additional information, like a network dump of your connection attempt.
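A minimal basic-auth sketch along the lines of that article, using this question's placeholder host and credentials:
import urllib2

passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, 'https://local.example.com/mywiki/', 'user', 'pass')
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(passman))
print opener.open('https://local.example.com/mywiki/index.php').read()

Note that, as your later update shows, this server wants NTLM rather than basic auth, so this alone won't be enough.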

How do you access an authenticated Google App Engine service from a (non-web) python client?

I have a Google App Engine app - http://mylovelyapp.appspot.com/
It has a page - mylovelypage
For the moment, the page just does self.response.out.write('OK')
If I run the following Python on my computer:
import urllib2
f = urllib2.urlopen("http://mylovelyapp.appspot.com/mylovelypage")
s = f.read()
print s
f.close()
it prints "OK"
The problem is, if I add login:required to this page in the app's yaml, then this prints out the HTML of the Google Accounts login page instead.
I've tried "normal" authentication approaches, e.g.:
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(None,
                          uri='http://mylovelyapp.appspot.com/mylovelypage',
                          user='billy.bob@gmail.com',
                          passwd='billybobspasswd')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
But it makes no difference - I still get the login page's HTML back.
I've tried Google's ClientLogin auth API, but I can't get it to work:
import re
import httplib2

h = httplib2.Http()
auth_uri = 'https://www.google.com/accounts/ClientLogin'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
myrequest = "Email=%s&Passwd=%s&service=ah&source=DALELANE-0.0" % ("billy.bob@gmail.com", "billybobspassword")
response, content = h.request(auth_uri, 'POST', body=myrequest, headers=headers)

if response['status'] == '200':
    authtok = re.search('Auth=(\S*)', content).group(1)
    headers = {}
    headers['Authorization'] = 'GoogleLogin auth=%s' % authtok.strip()
    headers['Content-Length'] = '0'
    response, content = h.request("http://mylovelyapp.appspot.com/mylovelypage",
                                  'POST',
                                  body="",
                                  headers=headers)
    while response['status'] == "302":
        response, content = h.request(response['location'], 'POST', body="", headers=headers)

print content
I do seem to be able to get some token correctly, but attempts to use it in the header when I call 'mylovelypage' still just return the login page's HTML. :-(
Can anyone help, please?
Could I use the GData client library to do this sort of thing? From what I've read, I think it should be able to access App Engine apps, but I haven't been any more successful at getting the authentication working for App Engine stuff there either.
Any pointers to samples, articles, or even just keywords I should be searching for to get me started would be very much appreciated.
Thanks!
appcfg.py, the tool that uploads data to App Engine, has to do exactly this to authenticate itself with the App Engine server. The relevant functionality is abstracted into appengine_rpc.py. In a nutshell, the solution is:
Use the Google ClientLogin API to obtain an authentication token. appengine_rpc.py does this in _GetAuthToken
Send the auth token to a special URL on your App Engine app. That page then returns a cookie and a 302 redirect. Ignore the redirect and store the cookie. appcfg.py does this in _GetAuthCookie
Use the returned cookie in all future requests.
You may also want to look at _Authenticate, to see how appcfg handles the various return codes from ClientLogin, and _GetOpener, to see how appcfg creates a urllib2 OpenerDirector that doesn't follow HTTP redirects. Or you could, in fact, just use the AbstractRpcServer and HttpRpcServer classes wholesale, since they do pretty much everything you need.
Thanks to Arachnid for the answer; it worked as suggested.
Here is a simplified copy of the code, in case it is helpful to the next person to try!
import os
import urllib
import urllib2
import cookielib
users_email_address = "billy.bob@gmail.com"
users_password = "billybobspassword"
target_authenticated_google_app_engine_uri = 'http://mylovelyapp.appspot.com/mylovelypage'
my_app_name = "yay-1.0"
# we use a cookie to authenticate with Google App Engine
# by registering a cookie handler here, this will automatically store the
# cookie returned when we use urllib2 to open http://mylovelyapp.appspot.com/_ah/login
cookiejar = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
urllib2.install_opener(opener)
#
# get an AuthToken from Google accounts
#
auth_uri = 'https://www.google.com/accounts/ClientLogin'
authreq_data = urllib.urlencode({ "Email": users_email_address,
                                  "Passwd": users_password,
                                  "service": "ah",
                                  "source": my_app_name,
                                  "accountType": "HOSTED_OR_GOOGLE" })
auth_req = urllib2.Request(auth_uri, data=authreq_data)
auth_resp = urllib2.urlopen(auth_req)
auth_resp_body = auth_resp.read()
# auth response includes several fields - we're interested in
# the bit after Auth=
auth_resp_dict = dict(x.split("=")
                      for x in auth_resp_body.split("\n") if x)
authtoken = auth_resp_dict["Auth"]
#
# get a cookie
#
# the call to request a cookie will also automatically redirect us to the page
# that we want to go to
# the cookie jar will automatically provide the cookie when we reach the
# redirected location
# this is where I actually want to go to
serv_uri = target_authenticated_google_app_engine_uri
serv_args = {}
serv_args['continue'] = serv_uri
serv_args['auth'] = authtoken
full_serv_uri = "http://mylovelyapp.appspot.com/_ah/login?%s" % (urllib.urlencode(serv_args))
serv_req = urllib2.Request(full_serv_uri)
serv_resp = urllib2.urlopen(serv_req)
serv_resp_body = serv_resp.read()
# serv_resp_body should contain the contents of the
# target_authenticated_google_app_engine_uri page - as we will have been
# redirected to that page automatically
#
# to prove this, I'm just gonna print it out
print serv_resp_body
For those who can't get ClientLogin to work, try App Engine's OAuth support.
I'm not too familiar with App Engine or Google's web APIs, but for a brute-force approach you could write a script with something like mechanize (http://wwwsearch.sourceforge.net/mechanize/) to simply walk through the login process before you begin doing the real work of the client, as in the rough sketch below.
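A rough mechanize sketch of that idea; the form index and field names here are assumptions and will differ for the real login page:
import mechanize

br = mechanize.Browser()
# Opening the protected page redirects us to the login form
br.open('http://mylovelyapp.appspot.com/mylovelypage')
br.select_form(nr=0)  # assume the login form is the first form on the page
br['Email'] = 'billy.bob@gmail.com'
br['Passwd'] = 'billybobspassword'
response = br.submit()
print response.read()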
I'm not a Python expert or an App Engine expert, but did you try following the sample app at http://code.google.com/appengine/docs/gettingstarted/usingusers.html? I created one at http://quizengine.appspot.com and it seemed to work fine with Google authentication and everything.
Just a suggestion, but look into the getting started guide. Take it easy if the suggestion sounds naive. :)
Thanks.
