Cannot keep a stable session open with Bloomberg blpapi (Python)

I recently encountered an issue that I have not been able to solve, despite calling the Bloomberg helpdesk and thoroughly researching the internet for similar cases.
In short, I am using the official Python blpapi from Bloomberg (https://github.com/msitt/blpapi-python) and am now experiencing a connectivity issue: I cannot keep a session open.
Here is the code I am running: https://github.com/msitt/blpapi-python/blob/master/examples/SimpleHistoryExample.py
I simply added a "while True" loop and a "time.sleep" in it so that I can keep the session open and refresh my data every 30 seconds (this is my use case).
This used to run perfectly fine for days; however, since last Friday I am getting these log messages:
22FEB2021_08:54:18.870 29336:26880 WARN blpapi_subscriptionmanager.cpp:7437 blpapi.session.subscriptionmanager.{1} Could not find a service for serviceCode: 90.
22FEB2021_08:54:23.755 29336:26880 WARN blpapi_platformcontroller.cpp:377 blpapi.session.platformcontroller.{1} Connectivity lost, no connected endpoints.
22FEB2021_08:54:31.867 29336:26880 WARN blpapi_platformcontroller.cpp:344 blpapi.session.platformcontroller.{1} Connectivity restored.
22FEB2021_08:54:32.731 29336:26880 WARN blpapi_subscriptionmanager.cpp:7437 blpapi.session.subscriptionmanager.{1} Could not find a service for serviceCode: 90.
which go on and on, along with these responses:
SessionConnectionDown = {
    server = "localhost:8194"
}
ServiceDown = {
    serviceName = "//blp/refdata"
    servicePart = {
        publishing = {
        }
    }
}
SessionConnectionUp = {
    server = "localhost:8194"
    encryptionStatus = "Clear"
    compressionStatus = "Uncompressed"
}
ServiceUp = {
    serviceName = "//blp/refdata"
    servicePart = {
        publishing = {
        }
    }
}
I can still pull data from the Bloomberg API: I see the historical data request results just fine. However:
These service/session status messages mess up my code (though I could still ignore them).
For some reason the connect/reconnect cycle also messes with my Excel BBG session in the background and prevents me from using the BBG Excel add-in at all! I now have "#N/A Connection" outputs in all of my workbooks that use Bloomberg formulas.
screenshot from excel
Has anyone ever encountered such a case? If yes, please do not hesitate to share your experience; any help is more than appreciated!
Wishing you all a great day,
Adrien

I cannot comment yet, so I will try to "answer" it. I use blpapi every day and pull data all day. I am a BBG Anywhere user and never have any session issues. If you log in from a different device it will kill your session for the Python app. Once you log back in where the Python app is running, it will connect again.
Why do you have another while loop and sleep to keep the session alive? You should create a separate session and always call it to run your requests. You should not need any "keep alive" code inside the request; just don't call session.stop(). This is what I ended up doing after much trial and error from not knowing what to do.
I run my model from Excel; I am trying to move away from any substantial code in Excel and use it only as a GUI until I can migrate to a custom GUI. I also have BDP functions in my Excel workbooks and they work fine.
import blpapi
# removed optparse because it is deprecated.
from argparse import ArgumentParser

SERVICES = {}


def parseCmdLine():
    parser = ArgumentParser(description='Retrieve reference data.')
    parser.add_argument('-a',
                        '--ip',
                        dest='host',
                        help='server name or IP (default: %(default)s)',
                        metavar='ipAddress',
                        default='localhost')
    parser.add_argument('-p',
                        dest='port',
                        type=int,
                        help='server port (default: %(default)s)',
                        metavar='tcpPort',
                        default=8194)
    args = parser.parse_args()
    return args


def start_session():
    """Standard session for synchronous refdata requests. Upon creation
    the obj is held in SERVICES['session'].

    Returns:
        obj: A session object.
    """
    args = parseCmdLine()
    # Fill SessionOptions
    sessionOptions = blpapi.SessionOptions()
    sessionOptions.setServerHost(args.host)
    sessionOptions.setServerPort(args.port)
    # Create a Session
    session = blpapi.Session(sessionOptions)
    # Start a Session
    session.start()
    SERVICES['session'] = session
    return SERVICES['session']


def get_refDataService():
    """Create a refDataService object for functions to use. Upon creation
    it is held in SERVICES['refDataService'].

    Returns:
        obj: refDataService object.
    """
    # return the session and request because requests need session.send()
    global SERVICES
    if 'session' not in SERVICES:
        start_session()
        session = SERVICES['session']
        # Check for SERVICES['refdata'] not needed because start_session()
        # is called by this function and start_session() is never called on its own.
        session.openService("//blp/refdata")
        refDataService = session.getService("//blp/refdata")
        SERVICES['refDataService'] = refDataService
    session = SERVICES['session']
    refDataService = SERVICES['refDataService']
    print('get_refDataService called. Curious when this is called.')
    return session, refDataService


# sample override request
def ytw_oride_muni(cusip_dict):
    """Requests the Price To Worst for a dict of cusips and Yield To Worst values.

    The dict must be {'cusip': YTW}. Overrides apply to each request, so this
    function is designed for different overrides for each cusip, although they
    could all be the same; that is not a restriction.

    Returns: Single-level nested dict
        {'cusip Muni': {'ID_BB_SEC_NUM_DES': 'val', 'PX_ASK': 'val1', 'YLD_CNV_ASK': 'val2'}}
    """
    session, refDataService = get_refDataService()
    fields1 = ["ID_BB_SEC_NUM_DES", "PX_ASK", "YLD_CNV_ASK"]
    try:
        values_dict = {}
        # For different overrides you must send separate requests.
        # This loops and creates separate messages.
        for cusip, value in cusip_dict.items():
            request = refDataService.createRequest("ReferenceDataRequest")
            # append security to request
            request.getElement("securities").appendValue(f"{cusip} Muni")
            # append fields to request
            request.getElement("fields").appendValue(fields1[0])
            request.getElement("fields").appendValue(fields1[1])
            request.getElement("fields").appendValue(fields1[2])
            # add overrides
            overrides = request.getElement("overrides")
            override1 = overrides.appendElement()
            override1.setElement("fieldId", "YLD_CNV_ASK")
            override1.setElement("value", f"{value}")
            session.sendRequest(request)
            # Process received events
            while True:
                # We provide timeout to give the chance to Ctrl+C handling:
                ev = session.nextEvent(500)
                # below msg.messageType == ReferenceDataResponse
                for msg in ev:
                    if msg.messageType() == "ReferenceDataResponse":
                        if msg.hasElement("responseError"):
                            print(msg)
                        if msg.hasElement("securityData"):
                            data = msg.getElement("securityData")
                            num_cusips = data.numValues()
                            for i in range(num_cusips):
                                sec = data.getValue(i).getElement("security").getValue()
                                try:
                                    des = data.getValue(i).getElement("fieldData").getElement("ID_BB_SEC_NUM_DES").getValue()
                                except Exception:
                                    des = None
                                try:
                                    ptw = data.getValue(i).getElement("fieldData").getElement("PX_ASK").getValue()
                                except Exception:
                                    ptw = None
                                try:
                                    ytw = data.getValue(i).getElement("fieldData").getElement("YLD_CNV_ASK").getValue()
                                except Exception:
                                    ytw = None
                                values = {'des': des, 'ptw': ptw, 'ytw': ytw}
                # Response completely received, so we can exit
                if ev.eventType() == blpapi.Event.RESPONSE:
                    values_dict.update({sec: values})
                    break
    finally:
        # Stop the session
        # session.stop()
        return values_dict
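For illustration, here is a minimal sketch of how a function like ytw_oride_muni can then be driven from a long-running loop; the CUSIPs and Yield To Worst overrides below are made-up placeholders, and the 30-second refresh just mirrors the question's use case. The session is created lazily on the first call, reused for every later request, and session.stop() is never called:
import time

# hypothetical CUSIPs mapped to Yield To Worst overrides, for illustration only
watchlist = {'64966QCC9': 3.25, '13063DGA0': 2.90}

while True:
    results = ytw_oride_muni(watchlist)  # reuses the session held in SERVICES
    print(results)
    time.sleep(30)  # refresh every 30 seconds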

Related

Automatically subscribe to YouTube channels via Channel_ID

I'm fairly new to Python and I'm trying to migrate subscriptions from an older YouTube account to a newer one that I'll use going forward. I pulled my subscriptions export from the old one and have around 470+ subs that I'll need to migrate over.
I found this article which absolutely works for automatically subscribing to a YouTube channel via its channel_id, but it seems like with the key-value pair I can only run the .py script once per value.
I tried all sorts of googling to see how I can include multiple values for the key (channelId), but it always only auto-subs to the last one in the dictionary.
Can someone please show me what I'm missing? I feel like there has to be a way to add multiple channelId values for that key, right?!
Here's what my code looks like:
import os

import google.oauth2.credentials
import google_auth_oauthlib.flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from google_auth_oauthlib.flow import InstalledAppFlow

# The CLIENT_SECRETS_FILE variable specifies
# the name of a file that contains
# client_id and client_secret.
CLIENT_SECRETS_FILE = "client_secret.json"

# This scope allows for full read/write access
# to the authenticated user's account and
# requires requests to use an SSL connection.
SCOPES = ['https://www.googleapis.com/auth/youtube.force-ssl']
API_SERVICE_NAME = 'youtube'
API_VERSION = 'v3'

def get_authenticated_service():
    flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
    credentials = flow.run_console()
    return build(API_SERVICE_NAME, API_VERSION, credentials=credentials)

def print_response(response):
    print(response)

# Build a resource based on a list of
# properties given as key-value pairs.
# Leave properties with empty values out
# of the inserted resource.
def build_resource(properties):
    resource = {}
    for p in properties:
        # Given a key like "snippet.title", split into
        # "snippet" and "title", where "snippet" will be
        # an object and "title" will be a property in that object.
        prop_array = p.split('.')
        ref = resource
        for pa in range(0, len(prop_array)):
            is_array = False
            key = prop_array[pa]
            # For properties that have array values, convert a name like
            # "snippet.tags[]" to snippet.tags, and set a flag to handle
            # the value as an array.
            if key[-2:] == '[]':
                key = key[0:len(key)-2:]
                is_array = True
            if pa == (len(prop_array) - 1):
                # Leave properties without values
                # out of inserted resource.
                if properties[p]:
                    if is_array:
                        ref[key] = properties[p].split(', ')
                    else:
                        ref[key] = properties[p]
            elif key not in ref:
                # For example, the property is "snippet.title",
                # but the resource does not yet have a "snippet"
                # object. Create the snippet object here.
                # Setting "ref = ref[key]" means that in the
                # next time through the "for pa in range ..." loop,
                # we will be setting a property in the
                # resource's "snippet" object.
                ref[key] = {}
                ref = ref[key]
            else:
                # For example, the property is "snippet.description",
                # and the resource already has a "snippet" object.
                ref = ref[key]
    return resource

# Remove keyword arguments that are not set
def remove_empty_kwargs(**kwargs):
    good_kwargs = {}
    if kwargs is not None:
        for key, value in kwargs.items():
            if value:
                good_kwargs[key] = value
    return good_kwargs

def subscriptions_insert(client, properties, **kwargs):
    resource = build_resource(properties)
    kwargs = remove_empty_kwargs(**kwargs)
    response = client.subscriptions().insert(
        body=resource, **kwargs).execute()
    return print_response(response)

if __name__ == '__main__':
    # When running locally, disable OAuthlib's
    # HTTPs verification. When running in production
    # * do not * leave this option enabled.
    os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
    client = get_authenticated_service()
    subscriptions_insert(client,
                         {'snippet.resourceId.kind': 'youtube#channel',
                          'snippet.resourceId.channelId': 'UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw'},
                         part='snippet')
According to the YouTube Data API v3 documentation (Subscriptions: insert endpoint and Subscriptions resource), you can only subscribe to one channel at a time. Since you have 10,000 quota units per day by default (unless you request extended quota) and Subscriptions: insert costs 50 units, for 470+ subscriptions you would need about 3 days to get through them all.
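In practice, with the script from the question, that means one subscriptions_insert call per channel, for example (a sketch reusing the question's helper and its example IDs; client is the authenticated service built in the script):
# one Subscriptions: insert call per channel; each call costs 50 quota units
channel_ids = ['UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw']

for channel_id in channel_ids:
    subscriptions_insert(client,
                         {'snippet.resourceId.kind': 'youtube#channel',
                          'snippet.resourceId.channelId': channel_id},
                         part='snippet')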
Otherwise you can proceed as follows. The first time I tried it with ~500 channels I got subscribed to ~290 of them, but now I mostly only receive the following (when removing -H 'Accept-Encoding: gzip, deflate, br' from the cURL request):
{
  "error": {
    "code": 429,
    "message": "Resource has been exhausted (e.g. check quota).",
    "errors": [
      {
        "message": "Resource has been exhausted (e.g. check quota).",
        "domain": "global",
        "reason": "rateLimitExceeded"
      }
    ],
    "status": "RESOURCE_EXHAUSTED"
  }
}
So it's an unreliable method that you can experiment with further.
Ever wondered how to do that in a single request without using any quota?
Go to a YouTube channel YOUR_CHANNEL that you want to subscribe to: https://www.youtube.com/channel/YOUR_CHANNEL_ID
Open the Network tab of your web browser with Ctrl + Shift + E (on Firefox) and filter for XHR requests.
Now click on Subscribe.
You should see a subscribe request; copy it as cURL (by right-clicking).
Change at the end
"channelIds":["YOUR_CHANNEL_ID"]
to:
"channelIds":["YOUR_CHANNEL_ID_0, YOUR_CHANNEL_ID_1, ..., YOUR_CHANNEL_ID_499"]
Where YOUR_CHANNEL_ID_0 is your YOUR_CHANNEL_ID, YOUR_CHANNEL_ID_1 is the second channel you want to subscribe to, and so forth.
Execute the modified cURL request in a terminal and that's it!
Note that this webpage contains a subscriptions count and this one contains all your subscriptions.
To get more than 249 different channels, I used:
import requests, json

channelIds = set()
pageToken = ''
API_KEY = 'AIzaSy...'
i = 0

while len(channelIds) < 250:
    url = f'https://www.googleapis.com/youtube/v3/search?q={i}&type=channel&maxResults=50&key={API_KEY}'
    if pageToken != '':
        url += f"&pageToken={pageToken}"
    content = requests.get(url).text
    data = json.loads(content)
    for item in data['items']:
        channelIds.add(item['id']['channelId'])
    print(len(channelIds))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break
    i += 1

print('["' + '","'.join(channelIds) + '"]', len(channelIds))
As @Benjamin Loison has mentioned, there is a quota limit on usage of the API. If you'd like to raise the limit, I think there is a form you can fill out to request more. However, I don't recommend doing so, since the form is mainly intended for large applications that will be used for a long time, and it involves a long process of human review of what you're trying to build (this is based on my personal experience and might not be entirely accurate).
My suggestion would be to use the script you have to print out a list of channel links, and you can click into each of them and press the subscribe button. 470-ish channels should not take you very long.
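For instance, assuming the channel IDs from the export are already collected in a Python list (the two IDs below are just the question's examples), printing one clickable link per channel is a short loop:
# channel IDs pulled from the old account's subscriptions export
channel_ids = ['UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw']

# print one clickable channel URL per line
for channel_id in channel_ids:
    print(f'https://www.youtube.com/channel/{channel_id}')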

Querying more properties in Google Search Console via python script

I am using a Python (2.7) script to download Google Search Console data via the API. I would like to get rid of the property and date arguments when launching the script:
> python script.py 'http://www.example.com' '01-01-2000' '01-02-2000'
For the dates I managed to do it by importing timedelta and commenting out the lines referring to those arguments:
argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('property_uri', type=str,
                       help=('Site or app URI to query data for (including '
                             'trailing slash).'))
# Start and end dates are commented out as timeframe is dynamically set
'''argparser.add_argument('start_date', type=str,
                       help=('Start date of the requested date range in '
                             'YYYY-MM-DD format.'))
argparser.add_argument('end_date', type=str,
                       help=('End date of the requested date range in '
                             'YYYY-MM-DD format.'))'''

now = datetime.datetime.now()
StartDate = datetime.datetime.now() - timedelta(days=14)
EndDate = datetime.datetime.now() - timedelta(days=7)

From = StartDate.strftime('%Y-%m-%d')
To = EndDate.strftime('%Y-%m-%d')

request = {
    'startDate': StartDate.strftime('%Y-%m-%d'),
    'endDate': EndDate.strftime('%Y-%m-%d'),
    'dimensions': ['query'],
Now I would like to get rid of the property argument as well, so that I can simply launch the script with the property specified in the script itself. My final goal is to get data from several properties using only one script.
I tried to repeat the same procedure used for the dates, but no luck. Needless to say, I am a total beginner at coding.
I think I can help, as I had the same problem when working from the sample script Google provides as guidance, which is where I think you got your code from?
The problem is that the script uses the sample_tools.py script in the googleapiclient library, which is meant to abstract away all the authentication bits so you can make a quick query easily. If you want to modify the code, I would recommend writing it from scratch.
These are my functions that I've pieced together from various bits of documentation; you might find them useful.
Stage 1: Authentication
def authenticate_http():
    """Authenticates with OAuth and returns an authorized Http object."""
    # create flow object
    flow = flow_from_clientsecrets('path to client_secrets.json',
                                   scope='https://www.googleapis.com/auth/webmasters.readonly',
                                   redirect_uri='urn:ietf:wg:oauth:2.0:oob')
    storage = Storage('credentials_file')
    credentials = storage.get()
    if credentials:
        # print "have auth code"
        http_auth = credentials.authorize(Http())
    else:
        print "need auth code"
        # get authorization server uri
        auth_uri = flow.step1_get_authorize_url()
        print auth_uri
        # get credentials object
        code_input = raw_input("Code: ")
        credentials = flow.step2_exchange(code_input)
        storage.put(credentials)
        # apply credential headers to all requests
        http_auth = credentials.authorize(Http())
    return http_auth
Stage 2: Build the Service Object
def build_service(api_name, version):
    # use authenticate_http to return the http object
    http_auth = authenticate_http()
    # build gsc service object
    service = build(api_name, version, http=http_auth)
    return service
Stage 3: Execute Request
def execute_request(service, property_uri, request):
    """Executes a searchAnalytics.query request.

    Args:
        service: The webmasters service to use when executing the query.
        property_uri: The site or app URI to request data for.
        request: The request to be executed.

    Returns:
        An array of response rows.
    """
    return service.searchanalytics().query(
        siteUrl=property_uri, body=request).execute()
Stage 4: Main()
def main():
    # define service object for the api service you want to use
    gsc_service = build_service('webmasters', 'v3')
    # define request
    request = {'request goes here'}
    # set your property set string you want to query
    url = 'url or property set string goes here'
    # response from executing request
    response = execute_request(gsc_service, url, request)
    print response
For multiple property sets you can just create a list of property sets, then loop over the list and pass each one into the 'url' argument of the execute_request function, as in the sketch below.
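A rough sketch of that loop (the property strings below are placeholders):
# placeholder property set strings / site URLs, for illustration only
properties = ['https://www.example.com/', 'https://www.another-example.com/']

for url in properties:
    response = execute_request(gsc_service, url, request)
    print response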
Hope this helps!

Telnet send command and then read response

This shouldn't be that complicated, but it seems that both the Ruby and Python Telnet libs have awkward APIs. Can anyone show me how to write a command to a Telnet host and then read the response into a string for some processing?
In my case "SEND" with a newline retrieves some temperature data on a device.
With Python I tried:
tn.write(b"SEND" + b"\r")
str = tn.read_eager()
which returns nothing.
In Ruby I tried:
tn.puts("SEND")
which should return something as well; the only thing I've gotten to work is:
tn.cmd("SEND") { |c| print c }
but you can't do much with c.
Am I missing something here? I was expecting something like the Socket library in Ruby, with code like:
s = TCPSocket.new 'localhost', 2000
while line = s.gets # Read lines from socket
  puts line         # and print them
end
I found out that if you don't supply a block to the cmd method, it will give you back the response (assuming the telnet session is not asking you for anything else). You can send the commands all at once (but you get all of the responses bundled together) or do multiple calls, but then you have to use nested block callbacks (I was not able to do it otherwise).
require 'net/telnet'

class Client
  # Fetch weather forecast for NYC.
  #
  # @return [String]
  def response
    fetch_all_in_one_response
    # fetch_multiple_responses
  ensure
    disconnect
  end

  private

  # Do all the commands at once and return everything in one go.
  #
  # @return [String]
  def fetch_all_in_one_response
    client.cmd("\nNYC\nX\n")
  end

  # Do multiple calls to retrieve the final forecast.
  #
  # @return [String]
  def fetch_multiple_responses
    client.cmd("\r") do
      client.cmd("NYC\r") do
        client.cmd("X\r") do |forecast|
          return forecast
        end
      end
    end
  end

  # Connect to remote server.
  #
  # @return [Net::Telnet]
  def client
    @client ||= Net::Telnet.new(
      'Host' => 'rainmaker.wunderground.com',
      'Timeout' => false,
      'Output_log' => File.open('output.log', 'w')
    )
  end

  # Close connection to the remote server.
  def disconnect
    client.close
  end
end

forecast = Client.new.response
puts forecast

How to process JSON in Flask?

I have asked a few questions about this before, but still haven't solved my problem.
I am trying to allow Salesforce to remotely send commands to a Raspberry Pi via JSON (REST API). The Raspberry Pi controls the power of some RF plugs via an RF transmitter called a TellStick. This is all set up, and I can use Python to send these commands. All I need to do now is make the Pi accept JSON, then work out how to send the commands from Salesforce.
Someone kindly forked my repo on GitHub and provided me with some code which should make it work, but unfortunately it still isn't working.
Here is the previous question: How to accept a JSON POST?
And here is the forked repo: https://github.com/bfagundez/RemotePiControl/blob/master/power.py
What do I need to do? I have sent test JSON messages in the Postman extension and in cURL but keep getting errors.
I just want to be able to send various variables, and let the script work the rest out.
I can currently post to a .py script I have with some URL variables, so /python.py?power=on&device=1&time=10&pass=whatever and it figures it out. Surely there's a simple way to send this in JSON?
Here is the power.py code:
# add flask here
from flask import Flask
app = Flask(__name__)
app.debug = True

# keep your code
import time
import cgi
from tellcore.telldus import TelldusCore

core = TelldusCore()
devices = core.devices()

# define a "power ON api endpoint"
@app.route("/API/v1.0/power-on/<deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
    payload = {}
    # get the device by id somehow
    device = devices[deviceId]
    # get some extra parameters
    # let's say how long to stay on
    params = request.get_json()
    try:
        device.turn_on()
        payload['success'] = True
        return payload
    except:
        payload['success'] = False
        # add an exception description here
        return payload

# define a "power OFF api endpoint"
@app.route("/API/v1.0/power-off/<deviceId>", methods=['POST'])
def powerOffDevice(deviceId):
    payload = {}
    # get the device by id somehow
    device = devices[deviceId]
    try:
        device.turn_off()
        payload['success'] = True
        return payload
    except:
        payload['success'] = False
        # add an exception description here
        return payload

app.run()
Your deviceId variable is a string, not an integer; it contains the digit '1', but that's not yet an integer.
You can either convert it explicitly:
device = devices[int(deviceId)]
or tell Flask you wanted an integer parameter in the route:
@app.route("/API/v1.0/power-on/<int:deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
where the int: part is a URL route converter.
Your views should return a response object, a string, or a tuple instead of a dictionary (as you do now); see About Responses. If you want to return JSON, use the flask.json.jsonify() function:
# define a "power ON api endpoint"
@app.route("/API/v1.0/power-on/<int:deviceId>", methods=['POST'])
def powerOnDevice(deviceId):
    device = devices[deviceId]
    # get some extra parameters
    # let's say how long to stay on
    params = request.get_json()
    try:
        device.turn_on()
        return jsonify(success=True)
    except SomeSpecificException as exc:
        return jsonify(success=False, exception=str(exc))
where I also altered the exception handler to handle a specific exception only; try to avoid Pokemon exception handling; do not try to catch them all!
To retrieve the JSON POST values you must use request.json:
if request.json and 'email' in request.json:
    request.json['email']
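For testing from the client side, here is a hedged sketch of posting JSON to the power-on endpoint above with the requests library (localhost:5000 is Flask's default development address, and the payload keys simply mirror the URL variables from the question; adjust to whatever power.py actually reads):
import requests

# assumes the Flask app from power.py is running on the default development server
resp = requests.post(
    'http://localhost:5000/API/v1.0/power-on/1',
    json={'time': 10, 'pass': 'whatever'},  # sent as JSON; read server-side via request.get_json()
)
print(resp.json())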

How do you use cookies and HTTP Basic Authentication in CherryPy?

I have a CherryPy web application that requires authentication. I have working HTTP Basic Authentication with a configuration that looks like this:
app_config = {
    '/' : {
        'tools.sessions.on': True,
        'tools.sessions.name': 'zknsrv',
        'tools.auth_basic.on': True,
        'tools.auth_basic.realm': 'zknsrv',
        'tools.auth_basic.checkpassword': checkpassword,
    }
}
HTTP auth works great at this point. For example, this will give me the successful login message that I defined inside AuthTest:
curl http://realuser:realpass@localhost/AuthTest/
Since sessions are on, I can save cookies and examine the one that CherryPy sets:
curl --cookie-jar cookie.jar http://realuser:realpass@localhost/AuthTest/
The cookie.jar file will end up looking like this:
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
localhost FALSE / FALSE 1348640978 zknsrv 821aaad0ba34fd51f77b2452c7ae3c182237deb3
However, I'll get an HTTP 401 Not Authorized failure if I provide this session ID without the username and password, like this:
curl --cookie 'zknsrv=821aaad0ba34fd51f77b2452c7ae3c182237deb3' http://localhost/AuthTest
What am I missing?
Thanks very much for any help.
So, the short answer is you can do this, but you have to write your own CherryPy tool (a before_handler), and you must not enable Basic Authentication in the CherryPy config (that is, you shouldn't do anything like tools.auth.on or tools.auth_basic... etc) - you have to handle HTTP Basic Authentication yourself. The reason for this is that the built-in Basic Authentication stuff is apparently pretty primitive: if you protect something by enabling Basic Authentication like I did above, it will do that authentication check before it checks the session, and your cookies will do nothing.
My solution, in prose
Fortunately, even though CherryPy doesn't have a way to do both built-in, you can still use its built-in session code. You still have to write your own code for handling the Basic Authentication part, but in total this is not so bad, and using the session code is a big win because writing a custom session manager is a good way to introduce security bugs into your webapp.
I ended up being able to take a lot of things from a page on the CherryPy wiki called Simple authentication and access restrictions helpers. That code uses CP sessions, but rather than Basic Auth it uses a special page with a login form that submits ?username=USERNAME&password=PASSWORD. What I did is basically nothing more than changing the provided check_auth function from using the special login page to using the HTTP auth headers.
In general, you need a function you can add as a CherryPy tool - specifically a before_handler. (In the original code, this function was called check_auth(), but I renamed it to protect().) This function first tries to see if the cookies contain a (valid) session ID, and if that fails, it tries to see if there is HTTP auth information in the headers.
You then need a way to require authentication for a given page; I do this with require(), plus some conditions, which are just callables that return True. In my case, those conditions are zkn_admin(), and user_is() functions; if you have more complex needs, you might want to also look at member_of(), any_of(), and all_of() from the original code.
If you do it like that, you already have a way to log in - you just submit a valid session cookie or HTTPBA credentials to any URL you protect with the @require() decorator. All you need now is a way to log out.
(The original code instead has an AuthController class which contains login() and logout(), and you can use the whole AuthController object in your HTTP document tree by just putting auth = AuthController() inside your CherryPy root class, and get to it with a URL of e.g. http://example.com/auth/login and http://example.com/auth/logout. My code doesn't use an authcontroller object, just a few functions.)
Some notes about my code
Caveat: Because I wrote my own parser for HTTP auth headers, it only parses what I told it about, which means just HTTP Basic Auth - not, for example, Digest Auth or anything else. For my application that's fine; for yours, it may not be.
It assumes a few functions defined elsewhere in my code: user_verify() and user_is_admin()
I also use a debugprint() function which only prints output when a DEBUG variable is set, and I've left these calls in for clarity.
You can call it cherrypy.tools.WHATEVER (see the last line); I called it zkauth based on the name of my app. Take care NOT to call it auth, or the name of any other built-in tool, though.
You then have to enable cherrypy.tools.WHATEVER in your CherryPy configuration.
As you can see by all the TODO: messages, this code is still in a state of flux and not 100% tested against edge cases - sorry about that! It will still give you enough of an idea to go on, though, I hope.
My solution, in code
import base64
import re

import cherrypy

SESSION_KEY = '_zkn_username'


def protect(*args, **kwargs):
    debugprint("Inside protect()...")
    authenticated = False
    conditions = cherrypy.request.config.get('auth.require', None)
    debugprint("conditions: {}".format(conditions))
    if conditions is not None:
        # A condition is just a callable that returns true or false
        try:
            # TODO: I'm not sure if this is actually checking for a valid session?
            # or if just any data here would work?
            this_session = cherrypy.session[SESSION_KEY]
            # check if there is an active session
            # sessions are turned on so we just have to know if there is
            # something inside of cherrypy.session[SESSION_KEY]:
            cherrypy.session.regenerate()
            # I can't actually tell if I need to do this myself or what
            email = cherrypy.request.login = cherrypy.session[SESSION_KEY]
            authenticated = True
            debugprint("Authenticated with session: {}, for user: {}".format(
                this_session, email))
        except KeyError:
            # If the session isn't set, it either wasn't present or wasn't valid.
            # Now check if the request includes HTTPBA?
            # FFR The auth header looks like: "AUTHORIZATION: Basic <base64shit>"
            # TODO: cherrypy has got to handle this for me, right?
            authheader = cherrypy.request.headers.get('AUTHORIZATION')
            debugprint("Authheader: {}".format(authheader))
            if authheader:
                #b64data = re.sub("Basic ", "", cherrypy.request.headers.get('AUTHORIZATION'))
                # TODO: what happens if you get an auth header that doesn't use basic auth?
                b64data = re.sub("Basic ", "", authheader)
                decodeddata = base64.b64decode(b64data.encode("ASCII"))
                # TODO: test how this handles ':' characters in username/passphrase.
                email, passphrase = decodeddata.decode().split(":", 1)
                if user_verify(email, passphrase):
                    cherrypy.session.regenerate()
                    # This line of code is discussed in doc/sessions-and-auth.markdown
                    cherrypy.session[SESSION_KEY] = cherrypy.request.login = email
                    authenticated = True
                else:
                    debugprint("Attempted to log in with HTTPBA username {} but failed.".format(
                        email))
            else:
                debugprint("Auth header was not present.")
        except:
            debugprint("Client has no valid session and did not provide HTTPBA credentials.")
            debugprint("TODO: ensure that if I have a failure inside the 'except KeyError'"
                       + " section above, it doesn't get to this section... I'd want to"
                       + " show a different error message if that happened.")

        if authenticated:
            for condition in conditions:
                if not condition():
                    debugprint("Authentication succeeded but authorization failed.")
                    raise cherrypy.HTTPError("403 Forbidden")
        else:
            raise cherrypy.HTTPError("401 Unauthorized")


cherrypy.tools.zkauth = cherrypy.Tool('before_handler', protect)


def require(*conditions):
    """A decorator that appends conditions to the auth.require config
    variable."""
    def decorate(f):
        if not hasattr(f, '_cp_config'):
            f._cp_config = dict()
        if 'auth.require' not in f._cp_config:
            f._cp_config['auth.require'] = []
        f._cp_config['auth.require'].extend(conditions)
        return f
    return decorate


#### CONDITIONS
#
# Conditions are callables that return True
# if the user fulfills the conditions they define, False otherwise
#
# They can access the current user as cherrypy.request.login

# TODO: test this function with cookies, I want to make sure that cherrypy.request.login is
# set properly so that this function can use it.
def zkn_admin():
    return lambda: user_is_admin(cherrypy.request.login)

def user_is(reqd_email):
    return lambda: reqd_email == cherrypy.request.login

#### END CONDITIONS


def logout():
    email = cherrypy.session.get(SESSION_KEY, None)
    cherrypy.session[SESSION_KEY] = cherrypy.request.login = None
    return "Logout successful"
Now all you have to do is enable both the built-in sessions tool and your own cherrypy.tools.WHATEVER in your CherryPy configuration. Again, take care not to enable cherrypy.tools.auth. My configuration ended up looking like this:
config_root = {
    '/' : {
        'tools.zkauth.on': True,
        'tools.sessions.on': True,
        'tools.sessions.name': 'zknsrv',
    }
}
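For example, a handler protected by this setup might look something like the following sketch (the class name, method, and email address are hypothetical; require() and user_is() are the helpers defined above):
class AuthTest(object):
    @cherrypy.expose
    @require(user_is('realuser@example.com'))  # hypothetical condition; any condition callable works
    def index(self):
        # cherrypy.request.login is set by the zkauth tool once auth succeeds
        return "Logged in as {}".format(cherrypy.request.login)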
