Custom Plone Dexterity factory to create subcontent

I thought it would be possible to create a custom Dexterity factory that calls the default factory and then adds some subcontent (in my case Archetypes-based) to the created 'parent' Dexterity content.
I have no problem creating and registering the custom factory.
However, no matter which method I use to create the AT subcontent, the creation fails when attempted from within the custom factory.
I've tried everything from plone.api to invokeFactory to direct instantiation of the AT content class.
In most cases, the traceback shows that the underlying Plone/CMF code tries to fetch the portal_types tool via getToolByName and fails; similarly, when instantiating the AT class directly, manage_afterAdd tries to access reference_catalog, which also fails.
Is there any way to make this work?

A different approach would be simply to register an event handler for IObjectAddedEvent and create your subcontent there using the common APIs.
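A minimal sketch of that approach (the handler name and the created type are illustrative; the subscriber would be registered in ZCML for your Dexterity type and IObjectAddedEvent):
from plone import api

def create_subcontent(obj, event):
    # runs after the parent has been added to its container, so acquisition,
    # portal_types, and the reference_catalog are all available
    if "page" not in obj:
        api.content.create(container=obj, type="Page", title=u"Page")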

After some trial and error, it turns out this is possible:
from zope.container.interfaces import INameChooser
from zope.component.hooks import getSite
from plone.dexterity.factory import DexterityFactory

class CustomDexterityFactory(DexterityFactory):

    def __call__(self, *args, **kw):
        folder = DexterityFactory.__call__(self, *args, **kw)
        # the title is assumed to arrive via the factory keywords
        title = kw.get("title", u"")
        # we are given no context to work with, so we need to resort to the
        # getSite hook (or zope.globalrequest.getRequest) and then wrap the
        # folder in the context of the add view
        site = getSite()
        wrapped = folder.__of__(site["PUBLISHED"].context)
        # invokeFactory fails if the container has no id
        folder.id = "tmp_folder_id"
        # standard AT content creation
        wrapped.invokeFactory("Page", "tmp_obj_id")
        page = wrapped["tmp_obj_id"]
        new_id = INameChooser(wrapped).chooseName(title, page)
        page.setId(new_id)
        page.setTitle(title)
        # empty the id, otherwise it will stick
        folder.id = None
        return folder
While the above works, at some point the created Page gets indexed (perhaps by invokeFactory), which means there will be a bogus entry in the catalog. Code to remove the entry could be added to the factory.
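For instance, something like this at the end of the factory might work (an untested sketch; unindexObject comes from CMF's catalog-aware base class):
# drop the premature catalog entry; the object will be indexed again
# through the normal machinery once it sits at its final location
page.unindexObject()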
Overall, it would be easier to just create an event handler, as suggested by @keul in his answer.

Related

How to let DjangoModelFactory create a model without saving it to the DB?

I'm writing unit tests in a Django project. I've got a factory to create objects using the .create() method. So in my unit tests I'm using this:
device = DeviceFactory.create()
This always creates a record in the DB though. Is there a way that I can make the factory create an object without saving it to the DB yet?
I looked over the documentation but I can't find it. Am I missing something?
Quoth this bit of the documentation, use .build() instead of .create():
# Returns a User instance that's not saved
user = UserFactory.build()
# Returns a saved User instance.
# UserFactory must subclass an ORM base class, such as DjangoModelFactory.
user = UserFactory.create()
# Returns a stub object (just a bunch of attributes)
obj = UserFactory.stub()
# You can use the Factory class as a shortcut for the default build strategy:
# Same as UserFactory.create()
user = UserFactory()
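Applied to the question's factory, a minimal sketch (the Device model and its import path are assumptions):
import factory
from myapp.models import Device  # assumed model

class DeviceFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Device

device = DeviceFactory.build()  # in memory only, no DB record yet
assert device.pk is None
device.save()  # persist later only if the test needs it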

Mocking elasticsearch-py calls

I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()

settings.py
from escli.main import Escli

class Settings:
    def get(self, sections):
        raise NotImplementedError()

class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster \
            .get_settings(include_defaults=True, flat_settings=True) \
            .get(persistency) \
            .get(setting)
        return settings
settings_test.py
from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so as not to make the real HTTP call, but it keeps making it.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings because Python tries to import Escli as a module, which cannot work; so I'm patching the whole lib.
I desperately tried to put some return_value everywhere but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
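Putting it together, the test could look like this (a sketch; the nested dict mirrors the .get(persistency).get(setting) chain in ClusterSettings.get, and because _es is a class attribute there is no return_value hop for instantiation):
from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('escli.settings.Escli')
        self.MockClass = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_get(self):
        self.MockClass._es.cluster.get_settings.return_value = {
            'transient': {'cluster.routing.allocation.node_concurrent_recoveries': '2'},
        }
        ret = escli.settings.ClusterSettings().get(
            'cluster.routing.allocation.node_concurrent_recoveries')
        self.assertEqual(ret, '2')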
Furthermore, to add more robustness to your testing, you might want to consider speccing for the Elasticsearch instance you are creating in order to validate that you are in fact calling valid methods that correlate to Elasticsearch. With that in mind, you can do something like this, instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section in the documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.

Can't call a decorator within the imported sub-class of a cherrpy application (site tree)

I am using CherryPy as a web server, and I want to check a user's logged-in status before returning the page. This works for methods in the main Application class (in site.py), but gives an error when I call the same decorated function on a method of a class one layer deeper in the page tree (in a separate file).
validate_user() is the function used as a decorator. It either passes a user to the page or sends them to a 401 restricted page, as a cherrypy.Tool, like this:
from user import validate_user
cherrypy.tools.validate_user = cherrypy.Tool('before_handler', validate_user)
I attach different sections of the site to the main site.py file's Application class by assigning instances of the sub-classes as variables accordingly:
from user import UserAuthentication

class Root:
    user = UserAuthentication()  # maps user/login, user/register, user/logout, etc.
    admin = Admin()
    api = Api()

    @cherrypy.expose
    @cherrypy.tools.validate_user()
    def how_to(self, **kw):
        from other_stuff import how_to_page
        return how_to_page(kw)
This, however, does not work when I try to use the validate_user() inside the Admin or Api or Analysis sections. These are in separate files.
import cherrypy

class Analyze:
    @cherrypy.expose
    @cherrypy.tools.validate_user()  #### THIS LINE GIVES ERROR ####
    def explore(self, *args, **kw):  # #addkw(fetch=['uid'])
        import explore
        kw['uid'] = cherrypy.session.get('uid', -1)
        return explore.explorer(args, kw)
The error says that cherrypy.tools doesn't have a validate_user function or method, yet other things I assign in site.py do appear on cherrypy here. Why can't I use this tool in a separate file that is part of my overall site map?
If this is relevant, the validate_user() function simply looks at the cherrypy.request.cookie, finds the 'session_token' value, and compares it to our database and passes it along if the ID matches.
Sorry I don't know if the Analyze() and Api() and User() pages are subclasses, or nested classes, or extended methods, or what. So I can't give this a precise title. Do I need to pass in the parent class to them somehow?
The issue here is that Python processes everything except the function/method bodies during import. So in site.py, when you import user (or from user import <anything>), that causes all of the user module to be processed before the Python interpreter has gotten to the definition of the validate_user tool, including the decorator, which is attempting to access that tool by value (rather than by a reference).
CherryPy has another mechanism for decorating functions with config that will enable tools on those handlers. Instead of @cherrypy.tools.validate_user, use:
@cherrypy.config(**{"tools.validate_user.on": True})
This decorator works because instead of needing to access validate_user from cherrypy.tools to install itself on the handler, it instead configures CherryPy to install that tool on the handler later, when the handler is invoked.
If that tool is needed for all methods on that class, you can use that config decorator on the class itself.
Alternatively, you could enable that tool for given endpoints in the server config, as mentioned in the other question.
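For example, enabling the tool for every handler on the Analyze class (a sketch of the class-level form):
import cherrypy

@cherrypy.config(**{"tools.validate_user.on": True})
class Analyze:
    @cherrypy.expose
    def explore(self, *args, **kw):
        ...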

Customizing / extending / monkey patching Django Auth Backend

I am using django-auth-ldap to connect to an LDAP server for authentication. django-auth-ldap provides the setting AUTH_LDAP_REQUIRE_GROUP, which can be used to allow access only for users placed in a specific group. This works fine, but the option only allows to check one group; I want to check if a users is placed in either one or another group.
In the module django_auth_ldap/backend.py I could modify the method _check_required_group of the class _LDAPUser to implement this behaviour. Modifying it directly works fine, but since changing the source would end up in maintenance hell, I am searching for a way to change this method without touching the source. Two ideas I had:
1) Monkey Patching
Change the _check_required_group method of an instance of the _LDAPUser class. The problem is that I have no idea where it is being instantiated. I am just using LDAPSearch and GroupOfNamesType, imported from django_auth_ldap.config, in the settings file, and passing the string django_auth_ldap.backend.LDAPBackend into the AUTHENTICATION_BACKENDS tuple.
2) Extending the module
Create an own module, extending the original django_auth_ldap and using this instead of the original. I tried to create a new directory, adding an __init__.py with the line:
from django_auth_ldap import *
But using this module does not work, since it can't import custom_auth.config.
Any other suggestions or hints how to make one of those attempts to work?
To be modular, DRY, and true to the Django philosophy in general, you need to create a class named LDAPBackendEx that inherits from LDAPBackend and use this class in your AUTHENTICATION_BACKENDS instead of django_auth_ldap.backend.LDAPBackend. Also, you'd create an LDAPUserEx that inherits from _LDAPUser and overrides the _check_required_group method.
So, the LDAPUserEx would be something like:
class LDAPUserEx(_LDAPUser):
    def _check_required_group(self):
        pass  # put your implementation here!
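For the question's either-of-two-groups requirement, the override could look something like this (a sketch; the group DNs are placeholders, and it relies on the group_dns property that _LDAPUser exposes):
ALLOWED_GROUP_DNS = {
    "cn=group-one,ou=groups,dc=example,dc=com",
    "cn=group-two,ou=groups,dc=example,dc=com",
}

class LDAPUserEx(_LDAPUser):
    def _check_required_group(self):
        # accept membership in any one of the allowed groups
        return bool(ALLOWED_GROUP_DNS & set(self.group_dns))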
Now, concerning the implementation of LDAPBackendEx: unfortunately there is no way of defining a custom _LDAPUser class, so you'd have to find every method that uses the _LDAPUser class and override it with LDAPUserEx. The correct way for django-auth-ldap to implement this (if it actually wanted to be modular) would be to add a user_class attribute to LDAPBackend, initialize it to _LDAPUser, and use that instead of _LDAPUser directly.
Checking the code here, I found that the methods of LDAPBackend that refer to _LDAPUser are authenticate, get_user and get_group_permissions. So the implementation of LDAPBackendEx would be something like this:
class LDAPBackendEx(LDAPBackend):
    def authenticate(self, username, password):
        ldap_user = LDAPUserEx(self, username=username)
        user = ldap_user.authenticate(password)
        return user

    def get_user(self, user_id):
        pass  # copy the definition of get_user here, changing _LDAPUser to LDAPUserEx

    def get_group_permissions(self, user):
        pass  # copy the definition of get_group_permissions here, changing _LDAPUser to LDAPUserEx
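Then point Django at the subclass instead of the stock backend (the dotted path is illustrative):
AUTHENTICATION_BACKENDS = (
    'myproject.backends.LDAPBackendEx',
    'django.contrib.auth.backends.ModelBackend',
)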

Initialize module python

I have a module which wraps a JSON API for querying song cover/remix data, with limits on the number of requests per hour/minute. I'd like to keep an optional cache of JSON responses without forcing users to adjust a cache/context parameter on every call. What is a good way of initializing a library/module in Python? Or would you recommend I just do the explicit thing and use a cache named parameter in every call that eventually requests JSON data?
I was thinking of doing
_cache = None

class LFUCache(object):  # the cache implementation; named so the LFU constant below doesn't shadow it
    ...

NO_CACHE, LFU = "NO_CACHE", "LFU"

def set_cache_strategy(strategy):
    global _cache  # rebind the module-level cache
    if strategy == NO_CACHE:
        _cache = None
    else:
        _cache = LFUCache()

import second_hand_songs_wrapper as s
s.set_cache_strategy(s.LFU)
l1 = s.ShsLabel.get_from_resource_id(123)
l2 = s.ShsLabel.get_from_resource_id(123, use_cache=False)
edit:
I'm probably only planning on having two strategies: one with and one without a cache.
Other possible initialization schemes off the top of my head include using environment variables, initializing _cache by hand in the user code to None/LFUCache(), and using an explicit cache parameter everywhere (possibly defaulting to having a cache).
Note: the reason I don't set the cache on an instance of a class is that I currently use a never-instantiated class (class functions + class state as a singleton) to abstract downloading the JSON data, along with some convenience methods for downloading certain URLs. I could instantiate the downloader class, but then I'd have to pass the instance explicitly to each function, or use another global variable for a convenience version of the class. The downloader class also keeps track of the number of requests (the website has a limit per minute/hour), so having multiple downloader objects would cause more trouble.
There's nothing wrong with setting a default, even if that default is None. I would note, though, that having the pseudo-constants as well as a conditional (provided that's all you use the values for) is redundant. Try:
caching_strategies = {'NO_CACHE': lambda: None,
                      'LFU': LFUCache}

_cache = caching_strategies['NO_CACHE']()

def set_cache_strategy(strategy):
    global _cache
    _cache = caching_strategies[strategy]()
If you want to provide a convenience method for the available strategies, just wrap caching_strategies.keys(). Really, though, as far as your strategies go, you should probably have all of them inherit from some base strategy class, and create a no-cache strategy class that inherits from it and stubs all the methods of your standardized caching interface.
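A minimal sketch of that base-class idea (the interface methods are illustrative; LFUCache would subclass the base as well):
class CacheStrategy(object):
    """Standardized caching interface that all strategies inherit."""
    def get(self, key):
        raise NotImplementedError

    def put(self, key, value):
        raise NotImplementedError

class NoCache(CacheStrategy):
    """Stubs every method, so callers never special-case None."""
    def get(self, key):
        return None

    def put(self, key, value):
        pass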
