I am using django-auth-ldap to connect to an LDAP server for authentication. django-auth-ldap provides the setting AUTH_LDAP_REQUIRE_GROUP, which can be used to allow access only for users placed in a specific group. This works fine, but the option only allows checking a single group; I want to check whether a user is in either one of two groups.
In the module django_auth_ldap/backend.py I could modify the method _check_required_group of the class _LDAPUser to implement this behaviour. Modifying it directly works fine, but since changing the source would end up in maintenance hell, I am searching for a way to change this method without touching the source. Two ideas I had:
1) Monkey Patching
Change the _check_required_group method of an instance of the _LDAPUser class. The problem is that I have no idea where it is being instantiated. I am just using LDAPSearch and GroupOfNamesType, imported from django_auth_ldap.config, in the settings file, and passing the string django_auth_ldap.backend.LDAPBackend into the AUTHENTICATION_BACKENDS tuple.
2) Extending the module
Create my own module, extending the original django_auth_ldap, and use it instead of the original. I tried to create a new directory, adding an __init__.py with the line:
from django_auth_ldap import *
But using this module does not work, since it can't import custom_auth.config.
Any other suggestions, or hints on how to make one of these attempts work?
To be modular, DRY and true to the django philosophy in general, you need to create a class named LDAPBackendEx that inherits from LDAPBackend and use this class in your AUTHENTICATION_BACKENDS instead of django_auth_ldap.backend.LDAPBackend. You'd also create an LDAPUserEx class that inherits from _LDAPUser and overrides the _check_required_group method.
So, the LDAPUserEx would be something like:
class LDAPUserEx(_LDAPUser):
    def _check_required_group(self):
        pass  # Put your implementation here!
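For example, a minimal sketch of an either-of-these-groups check could look like the following. AUTH_LDAP_REQUIRE_ANY_GROUP is a made-up setting name, and note that depending on your django-auth-ldap version this hook either returns a boolean or raises AuthenticationFailed; the boolean variant is shown here. is_member_of() is the membership test on the object returned by _get_groups():

from django.conf import settings
from django_auth_ldap.backend import _LDAPUser

class LDAPUserEx(_LDAPUser):
    def _check_required_group(self):
        # Hypothetical setting: a list of group DNs, any one of which suffices.
        group_dns = getattr(settings, 'AUTH_LDAP_REQUIRE_ANY_GROUP', None)
        if group_dns is None:
            return True
        groups = self._get_groups()
        # Accept the user if they are a member of at least one of the groups.
        return any(groups.is_member_of(dn) for dn in group_dns)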
Now, concerning the implementation of LDAPBackendEx: unfortunately there is no way of defining a custom _LDAPUser class, so you'd have to find every method that uses the _LDAPUser class and override it to use LDAPUserEx. The correct way for django-auth-ldap to implement this (if it actually wanted to be modular) would be to add a user_class attribute to LDAPBackend, initialize it to _LDAPUser, and use that attribute instead of _LDAPUser directly.
Checking the code here, I found out that the methods of LDAPBackend that refer to _LDAPUser are authenticate, get_user and get_group_permissions. So the implementation of LDAPBackendEx would be something like this:
class LDAPBackendEx(LDAPBackend):
    def authenticate(self, username, password):
        ldap_user = LDAPUserEx(self, username=username)
        user = ldap_user.authenticate(password)
        return user

    def get_user(self, user_id):
        pass  # copy the definition of get_user here, changing _LDAPUser to LDAPUserEx

    def get_group_permissions(self, user):
        pass  # copy the definition of get_group_permissions here, changing _LDAPUser to LDAPUserEx
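Then point Django at the subclass in settings.py instead of the stock backend (the dotted path below is a placeholder for wherever you put LDAPBackendEx):

AUTHENTICATION_BACKENDS = (
    'myproject.ldap.LDAPBackendEx',
    'django.contrib.auth.backends.ModelBackend',
)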
I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()
settings.py
from escli.main import Escli
class Settings:
    def get(self, sections):
        raise NotImplementedError()

class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster \
            .get_settings(include_defaults=True, flat_settings=True) \
            .get(persistency) \
            .get(setting)
        return settings
settings_test.py
from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so that the real HTTP call is not made, but it keeps making it.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings directly because Python tries to import Escli as a module, which cannot work. So I'm patching the whole library.
I desperately tried to put some return_value everywhere but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
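For illustration, here is a minimal sketch of the corrected test, assuming the module layout from the question. Note that _es is a class attribute, so it is reached directly on the mocked class with no return_value hop in front of it (the setting name in the dict is arbitrary):

from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        # Patch Escli where settings.py looks it up.
        self.patcher = patch('escli.settings.Escli')
        self.MockEscli = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_get(self):
        # _es is a class attribute: no return_value before .cluster here.
        self.MockEscli._es.cluster.get_settings.return_value = {
            'transient': {'some.setting': '2'}
        }
        ret = escli.settings.ClusterSettings().get('some.setting')
        self.assertEqual(ret, '2')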
Furthermore, to add more robustness to your testing, you might want to consider speccing the mock against the Elasticsearch class you are instantiating, in order to validate that you are in fact calling methods that exist on Elasticsearch. With that in mind, you can do something like this instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section in the documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.
I am using cherrypy as a web server, and I want to check a user's logged-in status before returning the page. This works on methods in the main Application class (in site.py), but gives an error when I apply the same decorator to a method of a class one layer deeper in the page tree (in a separate file).
validate_user() is the function used as a decorator. It either passes a user to the page or sends them to a 401 restricted page. It is registered as a cherrypy.Tool, like this:
from user import validate_user
cherrypy.tools.validate_user = cherrypy.Tool('before_handler', validate_user)
I attach different sections of the site to the main site.py file's Application class by assigning instances of the sub-classes as class attributes:
from user import UserAuthentication
class Root:
    user = UserAuthentication()  # maps user/login, user/register, user/logout, etc
    admin = Admin()
    api = Api()

    @cherrypy.expose
    @cherrypy.tools.validate_user()
    def how_to(self, **kw):
        from other_stuff import how_to_page
        return how_to_page(kw)
This, however, does not work when I try to use validate_user() inside the Admin or Api or Analysis sections. These are in separate files.
import cherrypy
class Analyze:
class Analyze:
    @cherrypy.expose
    @cherrypy.tools.validate_user()  #### THIS LINE GIVES ERROR ####
    def explore(self, *args, **kw):  # #addkw(fetch=['uid'])
        import explore
        kw['uid'] = cherrypy.session.get('uid', -1)
        return explore.explorer(args, kw)
The error says that cherrypy.tools doesn't have a validate_user function or method. But other things I assign in site.py do show up in cherrypy here. Why can't I use this tool in a separate file that is part of my overall site map?
If this is relevant, the validate_user() function simply looks at the cherrypy.request.cookie, finds the 'session_token' value, and compares it to our database and passes it along if the ID matches.
Sorry, I don't know whether the Analyze() and Api() and User() pages are subclasses, nested classes, extended methods, or something else, so I can't give this a precise title. Do I need to pass the parent class into them somehow?
The issue here is that Python processes everything except function/method bodies at import time. So in site.py, when you import user (or from user import <anything>), all of the user module is processed before the Python interpreter has reached the line that defines the validate_user tool; that includes the decorator lines, which try to access the tool by value (rather than by reference) while it does not exist yet.
CherryPy has another mechanism for decorating functions with config that will enable tools on those handlers. Instead of @cherrypy.tools.validate_user, use:

@cherrypy.config(**{"tools.validate_user.on": True})
This decorator works because instead of needing to access validate_user from cherrypy.tools to install itself on the handler, it instead configures CherryPy to install that tool on the handler later, when the handler is invoked.
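Applied to the Analyze class from the question, the handler would look something like this (only the decorator changes):

import cherrypy

class Analyze:
    @cherrypy.expose
    @cherrypy.config(**{"tools.validate_user.on": True})
    def explore(self, *args, **kw):
        import explore
        kw['uid'] = cherrypy.session.get('uid', -1)
        return explore.explorer(args, kw)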
If that tool is needed for all methods on that class, you can use that config decorator on the class itself.
Alternatively, you could enable that tool for the given endpoints in the server config, as mentioned in the other question.
I thought it would be possible to create a custom Dexterity factory that calls the default factory and then adds some subcontent (in my case Archetypes-based) to the created 'parent' Dexterity content.
I have no problem creating and registering the custom factory.
However, regardless of what method I use to create the AT subcontent, the creation fails when attempted from within the custom factory.
I've tried everything from plone.api to invokeFactory to direct instantiation of the AT content class.
In most cases, the traceback shows that the underlying Plone/CMF code tries to get the portal_types tool using getToolByName and fails; similarly, when instantiating the AT class directly, manage_afterAdd tries to access reference_catalog, which fails.
Is there any way to make this work?
A different approach would be simply to register an event handler for IObjectAddedEvent and create your subcontent there using the common APIs.
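A rough sketch of that approach, assuming a marker interface for your container type (the names here are placeholders) and the usual ZCML subscriber registration:

from zope.component import adapter
from zope.lifecycleevent.interfaces import IObjectAddedEvent

from my.package.interfaces import IMyFolder  # hypothetical marker interface

@adapter(IMyFolder, IObjectAddedEvent)
def add_subcontent(folder, event):
    # By the time this fires, the folder is acquisition-wrapped, so
    # portal_types, reference_catalog, etc. are reachable.
    folder.invokeFactory('Page', 'my-page')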
After some trial and error, it turns out this is possible:
from zope.container.interfaces import INameChooser
from zope.component.hooks import getSite
from plone.dexterity.factory import DexterityFactory

class CustomDexterityFactory(DexterityFactory):
    def __call__(self, *args, **kw):
        folder = DexterityFactory.__call__(self, *args, **kw)
        # we are given no context to work with, so we need to resort to the
        # getSite hook (or zope.globalrequest.getRequest) and then wrap the
        # folder in the context of the add view
        site = getSite()
        wrapped = folder.__of__(site["PUBLISHED"].context)
        # invokeFactory fails if the container has no id
        folder.id = "tmp_folder_id"
        # standard AT content creation
        wrapped.invokeFactory("Page", "tmp_obj_id")
        page = wrapped["tmp_obj_id"]
        title = kw.get("title", "Page")  # the title came from the add form in the original code
        new_id = INameChooser(wrapped).chooseName(title, page)
        page.setId(new_id)
        page.setTitle(title)
        # empty the id, otherwise it will keep the temporary one
        folder.id = None
        return folder
While the above works, at some point the created Page gets indexed (perhaps by invokeFactory), which means there will be a bogus entry in the catalog. Code to remove the entry could be added to the factory.
Overall, it would be easier to just create an event handler, as suggested by @keul in his answer.
I'm studying the code of jinja2.ext.InternationalizationExtension provided with Jinja2.
I know that tags can be added via the tags attribute; the Jinja2 template parser will relinquish control and call user code when one of those strings is the first token in a {% %} block.
class InternationalizationExtension(Extension):
    """This extension adds gettext support to Jinja2."""
    tags = set(['trans'])
I learned from the code that an extension can add attributes to the environment by calling Environment.extend; for jinja2.ext.InternationalizationExtension this is done in the __init__ method:
def __init__(self, environment):
    Extension.__init__(self, environment)
    environment.globals['_'] = _gettext_alias
    environment.extend(
        install_gettext_translations=self._install,
        install_null_translations=self._install_null,
        install_gettext_callables=self._install_callables,
        uninstall_gettext_translations=self._uninstall,
        extract_translations=self._extract,
        newstyle_gettext=False
    )
I know that custom filters are added by registering functions into Environment.filters:
def datetimeformat(value, format='%H:%M / %d-%m-%Y'):
    return value.strftime(format)

environment.filters['datetimeformat'] = datetimeformat
The questions are:
Is it recommended that an extension adds new filters, and not only tags and attributes, to the environment? (The documentation suggests that this should be common practice.)
Where in the extension subclass should this be done? In __init__ a reference to the environment is available, so in principle the above code could be put in the __init__ method.
Is it conceptually OK to do such a thing in __init__? I personally don't like altering an object's state from within another object's constructor, but in Jinja2 it seems idiomatic enough to have made it into an official extension (I'm talking about altering Environment.globals and calling Environment.extend from InternationalizationExtension.__init__).
Edit
Here is a pattern that at least packages filters nicely into a Python module. However, this install function cannot be invoked from within a template (say, via a custom CallBlock created using an extension), because the environment should not be modified after the template has been instantiated.
def greet(value):
    return "Hello %s!" % value

def curse(value):
    return "Curse you, %s!" % value

def ohno(value):
    return "Oh, No! %s!" % value

def install(env):
    env.filters['greet'] = greet
    env.filters['curse'] = curse
    env.filters['ohno'] = ohno
Is it recommended that an extension adds new filters, and not only tags and attributes, to the environment?
Only if you need them; otherwise, why overcomplicate your code? Filters are a very common use case when writing extensions, and the author most likely provided for this because they expect it to happen.
Where in the extension subclass should this be done?
It has to be done when the extension is instantiated, so if you don't put it directly in __init__, you'll need to put it in a helper method called from __init__.
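For instance, a minimal sketch of an extension that registers the datetimeformat filter from above in its __init__:

from jinja2.ext import Extension

def datetimeformat(value, format='%H:%M / %d-%m-%Y'):
    return value.strftime(format)

class FiltersExtension(Extension):
    # No custom tags; this extension only touches the environment.
    tags = set()

    def __init__(self, environment):
        Extension.__init__(self, environment)
        environment.filters['datetimeformat'] = datetimeformat

The environment then picks it up via Environment(extensions=[FiltersExtension]).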
Is it conceptually ok to do such thing in __init__?
It's perfectly fine, so long as other users of your code can understand what it's doing. The simplest solution is usually the best.
I want to have a dict / list to which I can add values, just like models can be added to the admin register in django!
My attempt (package -> __init__.py):
# Singleton object
# __init__.py (Package: pack)
class remember:
    a = []

    def add(self, data):
        self.a.append(data)

    def get(self):
        return self.a

obj = remember()

# models1.py
import pack
pack.obj.add("data")

# models2.py
import pack
pack.obj.add("data2")
print pack.obj.get()
# We should get: ["data", "data2"]
# We get : ["data2"]
How do I achieve the desired functionality?
Some say that methods can do this if you don't need sub-classing; how would I do this with methods?
Update:
To be more clear: just like the django admin register, anyone can import it and register itself with admin, so that the register is persisted between imports.
If it's a singleton you're after, have a look at this old blog post. It contains a link to a well documented implementation (here).
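The "methods" variant you mention usually means the module-as-singleton pattern: a module body is executed only once per process, so every importer shares its state. A sketch:

# pack/__init__.py
_registry = []

def add(data):
    _registry.append(data)

def get():
    return _registry

models1.py and models2.py then just do import pack; pack.add("data"), and every importer sees the same list.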
Don't. If you think you need a global, you don't, and you should reevaluate how you are approaching the problem; 99% of the time you're doing it wrong.
If you have a really good reason to do it, perhaps thread locals will really solve the problem you're trying to solve; they allow you to set up thread-level global data. Note: this is only slightly better than a true global, should in general be avoided, and can cause you a lot of headaches.
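For reference, a sketch of thread-level storage using the standard library's threading.local (each thread sees its own value; the helper names are made up):

import threading

_local = threading.local()

def set_current(value):
    # Stored on the calling thread only.
    _local.value = value

def get_current(default=None):
    return getattr(_local, 'value', default)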
If you're looking for a cross-request "global", then you most likely want to look into storing values in memcached.