What is the difference between template in ZCML and ViewPageTemplateFile?

When creating a BrowserView in Plone, I know that I may optionally configure a template with ZCML like so:
<configure
    xmlns:browser="http://namespaces.zope.org/browser"
    >
  <browser:page
      …
      class=".foo.FooView"
      template="foo.pt"
      …
      />
</configure>
Or alternatively in code:
# foo.py
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from zope.publisher.browser import BrowserPage

class FooView(BrowserPage):
    """
    My View
    """
    def __call__(self):
        return ViewPageTemplateFile('foo.pt')(self)
Is there any difference between the two approaches? They both appear to yield the same result.
Sub-question: I know there is a BrowserView class one can import, but conventionally everyone uses BrowserPage. What, if any, significant differences exist between the two classes?

Note: to be fully equivalent to the ZCML registration you should set the index class variable to specify which template you are using. That way the through-the-web (TTW) customization will work too.
# foo.py
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from zope.publisher.browser import BrowserPage

class FooView(BrowserPage):
    index = ViewPageTemplateFile('foo.pt')
One other pattern you can use with a browser view is adding an update method.
class FooView(BrowserPage):
    index = ViewPageTemplateFile('foo.pt')

    def __call__(self):
        self.update()
        return self.index()

    def update(self):
        self.portal_catalog = ...  # initialization code
But this is not the question.
So what is the difference? There is no difference: a browser view must be a callable, and the ZCML directive builds that callable as an object whose index returns the rendered page.
Creating the template on each call (as in your example) does have one consequence, though: you create a new instance of the template on every call of the browser view, which is not the case with the class variable.
One last option: you don't need a class argument in the directive at all:
<configure xmlns:browser="http://namespaces.zope.org/browser">
  <browser:page
      …
      template="foo.pt"
      …
      />
</configure>
For more information, you should read the code of the directive, which uses SimpleViewClass where src is the template name.
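To make that concrete, here is a rough sketch (an assumption based on reading the directive, not a verbatim copy of it) of what the directive effectively builds when you give it only a template; the helper name make_simple_view_class is made up for illustration:
# Hypothetical sketch: build a view class whose index is the template and
# whose __call__ renders it, roughly what SimpleViewClass does.
from Products.Five.browser import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile

def make_simple_view_class(template_path, bases=(BrowserView,)):
    index = ViewPageTemplateFile(template_path)

    def __call__(self, *args, **kw):
        return self.index(*args, **kw)

    return type('SimpleViewClass from %s' % template_path,
                bases, {'index': index, '__call__': __call__})

FooView = make_simple_view_class('foo.pt')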

In Plone, you can customize the template TTW (via portal_view_customizations) only when the template is registered explicitly (e.g. using ZCML or Grok directives).
If you define the template only in your __call__, you won't see it in portal_view_customizations.
Also, I'd guess that loading the template within a method reloads it from disk for every view instance (that is, every request).

AFAIK, there is no difference. The ZCML directive generates a ViewClass with a ViewPageTemplateFile and renders the template on a __call__. See zope.browserpage.metaconfigure.page lines 132, 151.
That is exactly what you do in your example: you explicitly instantiate the template in your __call__ method.
As to the sub-question: from my understanding, the significant differences are not apparent in the context of Zope 2/Plone. Based on the interface (zope.publisher.interfaces.browser.IBrowserPage), BrowserPage is the base class you want to inherit from, since it implements __call__ and browserDefault. In practice, though, it does not seem to matter whether you use BrowserPage or BrowserView with Plone.
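For reference, this is roughly what BrowserPage adds on top of BrowserView (a sketch from memory of zope.publisher.browser, not a verbatim copy of the source):
from zope.publisher.browser import BrowserView

class BrowserPage(BrowserView):
    def browserDefault(self, request):
        # Publish this object itself; no further traversal is needed.
        return self, ()

    def __call__(self, *args, **kw):
        # Concrete pages must override this and return the rendered page.
        raise NotImplementedError("Subclasses should implement __call__()")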

Related

Factory & Composite Design Patterns combo in Python & circular imports

I've built a couple of projects now using the composite pattern, where the object hierarchy is built from a configuration file. My problem is that I'd like to save each subclass in a separate file, to allow extensions without changing the base classes' files or having huge files. Each object, once instantiated, is then going to instantiate a different subclass, based on a type listed in its configuration. To do this I thought of including a factory function which would live in its own file, import all accessible subclasses, and contain a single function that just returns the appropriate subclass based on the name passed to it. The problem with this is that each of those subclass modules, in order to use this factory, must import it. This creates a circular import situation, since the factory module imports all subclasses, which all import it back. How can this be avoided, or is there a cleaner way to dynamically instantiate one subclass from within another?
As an example, I wrote a "Pipeline" project, useful for automating different procedures I often need to repeat. The basic parent class is called "Block"; it is inherited from to create blocks that comply with a certain interface (i.e. other projects that perform actions), and from those I inherit blocks that actually execute specific operations. A block only needs to see its successor in the pipeline, and does not care whether that is a single block or an entire, separate pipeline. To implement this I want each block to instantiate its successor based on the order defined in the config file that is passed along the chain. If I were to write a file that imports all implemented concrete blocks and returns whichever one is requested, I wouldn't be able to import it from any of the concrete blocks' modules, since they themselves are imported into the factory module in order to be available for instantiation.
You know that if you write the import statement inside a method or function, it is only executed when that function is called, i.e. after all module-level classes and functions have been defined, right? Your circular dependency can be fixed as simply as writing a "factory" method in the base class that contains a from factory import factory_function statement and calls it.
# basemodel.py -- base file
class Base:
    def factory(self, *args, **kw):
        # Deferred import: runs only when factory() is called,
        # so there is no circular import at module load time.
        from factory import factory_function
        return factory_function(*args, **kw)

# baseblock.py -- base file of the Block class hierarchy
from basemodel import Base

class Block(Base):
    ...

# blockXX.py -- the other block classes' files:
from baseblock import Block

class SpecializedBlock31(Block):
    ...

# factory.py:
from baseblock import Block
...
from blockXX import SpecializedBlock31
...
# (or some dynamic importing using __import__ and looking at the filesystem)
def factory_function(*args, **kw):
    # logic to decide which class to use
    ...
    instance = decided_class(...)
    return instance
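A hypothetical concrete block could then build its successor through the inherited factory method without importing factory.py at module level (the module, class and config key below are illustrative, not taken from the question):
# pipelineblock.py -- illustrative sketch only
from baseblock import Block

class PipelineBlock(Block):
    def __init__(self, config):
        self.config = config
        successor_cfg = config.get("successor")
        # Base.factory defers the import of factory_function until this call,
        # so importing this module does not touch factory.py at all.
        self.successor = self.factory(successor_cfg) if successor_cfg else None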

Cherrypy - reuse modules and hide exposed paths

I have developed two separate modules which ultimately yield an object that can be passed to CherryPy's quickstart function, like chp.quickstart(the_app_object, '/', some_config). The dispatcher I use in some_config is chp.dispatch.MethodDispatcher().
Now I need to develop a module which wraps both.
As I put a reference to each of the first two apps in the third one, I get to a situation like this:
import cherrypy

@cherrypy.expose
class HandlerApp1(object):
    def POST(self):
        return "output_app_1"

# If mounted, HandlerApp1 may be accessed at hostname:port/path_app_1
class App1(object):
    path_app_1 = HandlerApp1()

############################################################

@cherrypy.expose
class HandlerApp2(object):
    def POST(self):
        return "output_app_2"

# If mounted, HandlerApp2 may be accessed at hostname:port/path_app_2
class App2(object):
    path_app_2 = HandlerApp2()

############################################################

# If mounted, HandlerApp1 may be accessed at hostname:port/wrapped_1/path_app_1
# and HandlerApp2 at hostname:port/wrapped_2/path_app_2
class App3(object):
    wrapped_1 = App1()
    wrapped_2 = App2()
What I would like is to invoke both POST methods in a single call, with something like this (continuing from above):
@cherrypy.expose
class HandlerApp3(object):
    app_1 = App1()
    app_2 = App2()

    def POST(self):
        return self.app_1.POST() + self.app_2.POST()

# If mounted, HandlerApp3 may be accessed at hostname:port/path_app_3
class App3(object):
    path_app_3 = HandlerApp3()
However, having App3 as in the second part, the individual old applications may still be accessed at hostname:port/path_app_3/app_1 and hostname:port/path_app_3/app_2, which is what I would like to avoid.
As I understand it, this is because the chosen dispatcher (chp.dispatch.MethodDispatcher()) maps URL path segments to attributes of the application object below the mount point (in this case '/'), and then invokes the method of the resolved object that corresponds to the HTTP request method (GET, POST, etc.), provided the object has an exposed attribute (set by @cherrypy.expose). That means a POST to hostname:port/path_app_3/app_1 would effectively invoke <base_app_object>.path_app_3.app_1.POST().
I was wondering if it could be possible to somehow "unexpose" exposed methods down the hierarchy.
Simply renaming attributes in the third app won't do, obviously. I am not sure whether using dictionaries or another collection would help; it feels like a hack that relies on the library not traversing into nested collections the way it traverses attributes (assuming that would even be enough to disrupt the path-matching process).
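For reference, a minimal mounting sketch matching the setup described above (it assumes the classes from the snippets; the config key is the standard one for MethodDispatcher):
import cherrypy

conf = {'/': {'request.dispatch': cherrypy.dispatch.MethodDispatcher()}}
# Mount the wrapper at '/'; the dispatcher then resolves URL segments as
# attributes, e.g. a POST to /path_app_3 runs App3().path_app_3.POST().
cherrypy.quickstart(App3(), '/', conf)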

Change a django form class before it is instantiated

I'm wondering if it's possible to alter or change a form class before it's instantiated.
As a concrete example, I have a payment form which needs to be modified based on the payment system being used.
Ideally I'd rather not create different form classes and then choose among them based on the payment system; instead, the payment system object will "tell" the form class what changes it needs to make -- for example, making certain fields optional or instructing them to use different widgets.
Is this possible? The alternative is to pass an object representing the payment system into the form and then have it modify the form after instantiation, but it seems clumsy somehow to have that run in the form class rather than the view. I feel like the Django "view" is closer to a controller, and it seems like this is where something like this should happen. I also feel it would be better to modify the form_class object rather than the form instance; I'm not even sure whether, when you add fields after the fact like this, validation and form fill-in are handled correctly. Are they?
Anyway, here's some sample code of how it would work, passing the payment object into the form instantiation call:
payment_system.py:
class ExamplePaymentSystem(BasePaymentSystem):
    def modify_form(self, form):
        for fld in self.optional_fields:
            form.fields[fld].required = False
        …etc…
forms.py:
class ModifiablePaymentForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        self.payment_system = kwargs.pop("payment_system", None)
        super(ModifiablePaymentForm, self).__init__(*args, **kwargs)
        self.payment_system.modify_form(self)
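For context, a view would wire the two together roughly like this (hypothetical view code and template name, not part of the original question):
# views.py -- hypothetical usage sketch
from django.shortcuts import render

def payment_view(request):
    payment_system = ExamplePaymentSystem()
    form = ModifiablePaymentForm(request.POST or None,
                                 payment_system=payment_system)
    if request.method == "POST" and form.is_valid():
        form.save()
    return render(request, "payment.html", {"form": form})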
You should not modify global static data (classes defined at module scope are static), because if you run your code in many threads per process (which is often done), one thread may modify the form class used by the other threads.
If your payment systems are static (you do not add new ones on the fly while the server is running), I'd define one form per payment system.
If not, you can always define new form types on the fly, like this:
def get_form_type(payment_system):
    class DynamicForm(BasePaymentForm):
        ...  # add/change fields etc.
    return DynamicForm
or modify instances, like this:
class PaymentForm(BasePaymentForm):
    def __init__(self, ..., payment_system):
        self.fields['foo'].required = False  # <-- I'm writing this code from
        # memory, so you may need to edit it, but it is doable and easy to do.
How to remove a field from the form (per the OP's request).
When you subclass:
This is hard, and I think you'll need to browse through the form internals and modify them by hand after the subclass is created. This is a wild guess...
def get_form_type(payment_system):
    class DynamicForm(BasePaymentForm):
        ...  # add/change fields etc.
    del DynamicForm.base_fields['foo']
    return DynamicForm
When you modify an instance:
I'm not 100% sure, but I peeked into the Django source code (unfortunately these details are not in the docs), and I guess you should do:
class PaymentForm(BasePaymentForm):
    def __init__(self, ..., payment_system):
        del self.fields['foo']
The fields attribute is a dict (an OrderedDict, as far as I recall), and to delete a field you need to remove the whole key-value mapping.
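Either way, a view could then pick and use the form roughly like this (a sketch only; the view name, URL parameter and template are made up):
# Sketch only -- assumes get_form_type() from above
from django.shortcuts import render

def payment_view(request, payment_system):
    FormClass = get_form_type(payment_system)   # class built on the fly
    form = FormClass(request.POST or None)
    if request.method == "POST" and form.is_valid():
        ...  # process the payment
    return render(request, "payment.html", {"form": form})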

Pyramid/Pylons Framework - opinion on how I am using 'helpers' to accomplish certain tasks

In Pyramid, I have created 'helpers' functionality similar to that in Pylons.
One particular function in my helpers.py file looks like this:
from pyramid.renderers import render_to_response

def createBlog():
    ## lots of code here ##
    return render_to_response('blog.mako', {'xyz': xyz})
And then in my other applications I can import helpers and do something like the following in my templates:
${h.createBlog()}
which creates a blog on my page. But I am just wondering: is this a good way of using helpers to create "module"-style plugins that I can easily reuse anywhere in my projects, or are there any flaws to this technique that I haven't thought of yet?
Thanks!
It really depends on how much stuff you want to expose globally. Obviously anything you put into h is available throughout the application, whereas you could expose the createBlog function only in the views where you want it. One little-known tidbit is that if you use class-based views, the actual class instance is available to the template as the view global variable. For example:
from pyramid.renderers import render
from pyramid.view import view_config

class Foo(object):
    def __init__(self, request):
        self.request = request

    def createBlog(self):
        return render('blog.mako', {}, request=self.request)

    @view_config(...)
    def myview(self):
        return {}
Now in your template you can render your blog using ${view.createBlog()}.
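If you do want the Pylons-style h available everywhere, one common way to wire it up in Pyramid is a BeforeRender subscriber that injects your helpers module into every template's namespace (the myapp.helpers module name is an assumption):
# Assumed layout: myapp/helpers.py holds createBlog and friends
from pyramid.events import subscriber, BeforeRender
import myapp.helpers as helpers

@subscriber(BeforeRender)
def add_helpers(event):
    # Makes ${h.createBlog()} work in any template rendered by Pyramid.
    event['h'] = helpers
Note that the @subscriber decorator only takes effect once config.scan() runs; alternatively, register it directly with config.add_subscriber(add_helpers, BeforeRender).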

Difference between returning modified class and using type()

I guess it's more of a Python question than a Django one, but I couldn't replicate this behavior anywhere else, so I'll use the exact code that doesn't work as expected.
I was working on some dynamic forms in Django when I found this factory function snippet:
def get_employee_form(employee):
    """Return the form for a specific Board."""
    employee_fields = EmployeeFieldModel.objects.filter(employee=employee).order_by('order')

    class EmployeeForm(forms.Form):
        def __init__(self, *args, **kwargs):
            forms.Form.__init__(self, *args, **kwargs)
            self.employee = employee

        def save(self):
            "Do the save"

    for field in employee_fields:
        setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
    return type('EmployeeForm', (forms.Form, ), dict(EmployeeForm.__dict__))
[from: http://uswaretech.com/blog/2008/10/dynamic-forms-with-django/]
And there's one thing that I don't understand: why doesn't returning the modified EmployeeForm do the trick?
I mean something like this:
def get_employee_form(employee):
    # [...] same function body as before
    for field in employee_fields:
        setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
    return EmployeeForm
When I tried returning the modified class, Django ignored my additional fields, but returning type()'s result works perfectly.
Lennart's hypothesis is correct: a metaclass is indeed the culprit. No need to guess, just look at the sources: the metaclass is DeclarativeFieldsMetaclass, currently at line 53 of django/forms/forms.py, and it adds the attributes base_fields and possibly media based on what attributes the class has at creation time. At line 329 ff. you see:
class Form(BaseForm):
    "A collection of Fields, plus their associated data."
    # This is a separate class from BaseForm in order to abstract the way
    # self.fields is specified. This class (Form) is the one that does the
    # fancy metaclass stuff purely for the semantic sugar -- it allows one
    # to define a form using declarative syntax.
    # BaseForm itself has no way of designating self.fields.
    __metaclass__ = DeclarativeFieldsMetaclass
This implies there is some fragility in creating a new class with a bare type call -- the supplied black magic might or might not carry through! A more solid approach is to use the type of EmployeeForm, which will pick up whatever metaclass may be involved -- i.e.:
return type(EmployeeForm)('EmployeeForm', (forms.Form, ), EmployeeForm.__dict__)
(no need to copy that __dict__, by the way). The difference is subtle but important: rather than directly using type's 3-args form, we use the 1-arg form to pick up the type (i.e., the metaclass) of the form class, then call THAT metaclass in the 3-args form.
Blackly magicallish indeed, but then that's the downside of frameworks that make such use of "fancy metaclass stuff purely for the semantic sugar" &c: you're in clover as long as you want to do exactly what the framework supports, but stepping outside that support even a little may require countervailing wizardry (which goes some way towards explaining why I'd often rather use a lightweight, transparent setup such as Werkzeug than a framework that ladles magic upon me like Rails or Django do: my mastery of deep black magic does NOT mean I'm happy to have to USE it in plain production code... but that's another discussion ;-).
I just tried this with straight non-Django classes and it worked, so it's not a Python issue but a Django one.
And in this case (although I'm not 100% sure), it's a question of what the Form class does during class creation. I think it has a metaclass, and that this metaclass finalizes the form initialization during class creation. That means any fields you add after class creation will be ignored.
Therefore you need to create a new class, as is done with the type() call, so that the class-creation code of the metaclass runs again, now with the new fields.
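A minimal, framework-free illustration of that effect (a toy metaclass standing in for Django's DeclarativeFieldsMetaclass, not its actual code):
# Toy example only: CollectFields mimics the idea, not Django's implementation.
class CollectFields(type):
    def __new__(mcs, name, bases, attrs):
        cls = super().__new__(mcs, name, bases, attrs)
        # Collect "fields" once, at class-creation time (like base_fields).
        cls.base_fields = [k for k in attrs if k.startswith('field_')]
        return cls

class ToyForm(metaclass=CollectFields):
    field_a = 1

ToyForm.field_b = 2          # added after creation: the metaclass never sees it
print(ToyForm.base_fields)   # ['field_a']

# Re-creating the class through its metaclass picks the new attribute up:
attrs = {k: v for k, v in vars(ToyForm).items() if not k.startswith('__')}
NewForm = type(ToyForm)('NewForm', (), attrs)
print(NewForm.base_fields)   # ['field_a', 'field_b']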
It's worth noting that this code snippet is a very poor means to the desired end, and involves a common misunderstanding about Django Form objects - that a Form object should map one-to-one with an HTML form. The correct way to do something like this (which doesn't require messing with any metaclass magic) is to use multiple Form objects and an inline formset.
Or, if for some odd reason you really want to keep things in a single Form object, just manipulate self.fields in the Form's __init__ method.
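For example, the simpler per-instance approach looks roughly like this (the field names and the extra_fields keyword are made up for illustration):
# Sketch of manipulating self.fields in __init__ (hypothetical names)
from django import forms

class EmployeeForm(forms.Form):
    name = forms.CharField()

    def __init__(self, *args, **kwargs):
        extra_fields = kwargs.pop('extra_fields', None) or {}
        super().__init__(*args, **kwargs)
        # Per-instance fields: validation and rendering both use self.fields.
        for field_name, field in extra_fields.items():
            self.fields[field_name] = field

# e.g. EmployeeForm(extra_fields={'age': forms.IntegerField(required=False)})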
