I'm currently writing a Flask app. One of my views has very complex business logic so I moved that to a class declared outside the view. In the constructor of that class I create several instances of flask_wtf.form.Form objects.
My problem is that at runtime I get the following error:
*** RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
(ipdb is mine)
I assume the form objects need to be created in the view? But I want to move the work of creating them into a separate class so the view won't get too complex and unmanageable.
You can't. flask_wtf.Form requires the application context to set up CSRF.
It doesn't really make sense to instantiate a form outside of where it will be used, because you need to instantiate it with the data that is submitted to do anything useful.
Move creating the form instances to a method that you call on that class, rather than in its __init__ method.
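For example, a minimal sketch of that structure (BusinessLogic, NameForm and build_forms are made-up names; FlaskForm is the current base class name in Flask-WTF, older releases call it Form):

from flask import Flask
from flask_wtf import FlaskForm
from wtforms import StringField

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"

class NameForm(FlaskForm):
    name = StringField("name")

class BusinessLogic:
    def __init__(self):
        pass  # no form instances here -- __init__ may run outside any context

    def build_forms(self):
        # Called from inside the view, where the request context is active,
        # so CSRF setup works.
        return {"name_form": NameForm()}

logic = BusinessLogic()

@app.route("/", methods=["GET", "POST"])
def index():
    forms = logic.build_forms()
    name_form = forms["name_form"]
    if name_form.validate_on_submit():
        return "Hello, " + name_form.name.data
    return "form ready"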
Related
I have a pyramid API which has basically three layers.
View -> validates the request and response
Controller -> Does business logic and retrieves things from the DB.
Services -> Makes calls to external third party services.
The services are a class for each external API which will have things like authentication data. This should be a class attribute as it does not change per instance. However, I cannot work out how to make it a class attribute.
Instead, I extract the settings in the view via request.registry.settings and pass them to the controller, which then passes them down in the __init__() of the service. This seems unnecessary.
Obviously I could hard-code them, but that's an awful idea.
Is there a better way?
Pyramid itself does not use global variables, which is what you are asking for when you ask for settings to be available in class-level or module-level attributes. For instance-level stuff, you can just pass the settings from Pyramid into the instance either from the view or from the config.
To get around this, you can always pass data into your models at config-time for your Pyramid app. For example, in your main function just pull settings = config.get_settings() and pass some of them to where they need to be. As a general rule, you want to try to pass things around once at config-time, instead of from the view layer all the time.
Finally, a good way to do that without using class-level or module-level attributes is to register instances of your services with your app. The pyramid_services library provides one approach to this, but the idea is basically to instantiate an instance of a service for your app, add it to your Pyramid registry (config.registry.foo = ...), and when you do that you can pass in the settings. Later, in your view code, you can grab the service from there using request.registry.foo and it's already set up for you!
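For example, a minimal sketch of that pattern (WeatherService, the forecast route, and the weather.api_key setting are all made up for illustration):

from pyramid.config import Configurator
from pyramid.view import view_config

class WeatherService:
    # Hypothetical third-party client; auth data is passed in once at config time.
    def __init__(self, api_key):
        self.api_key = api_key

    def forecast(self, city):
        ...  # call the external API here

@view_config(route_name="forecast", renderer="json")
def forecast_view(request):
    service = request.registry.weather  # already configured, no settings plumbing
    return {"forecast": service.forecast(request.params.get("city", ""))}

def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.add_route("forecast", "/forecast")
    # Instantiate the service once, with settings, and hang it off the registry.
    config.registry.weather = WeatherService(api_key=settings["weather.api_key"])
    config.scan()
    return config.make_wsgi_app()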
I'm trying to unit test Django REST Framework view set permissions for two reasons: speed and simplicity. In keeping with these goals I would also like to avoid using any mocking frameworks. Basically I want to do something like this:
request = APIRequestFactory().post(…)
view = MyViewSet.as_view(actions={"post": "create"})
self.assertTrue(MyPermission().has_permission(request, view))
The problem with this approach is that view is not actually a View instance but rather a function which does something with a View instance, and it does not have certain properties which I use in has_permission, such as action. How do I construct the kind of View instance which can be passed to has_permission?
The permission is already tested at both the integration and acceptance level, but I would like to avoid creating several complex and time-consuming tests to simply check that each of the relevant actions are protected.
I've been able to work around this by monkeypatching a view set instance and manually dispatching it:
view_set = MyViewSet()
view_set.action_map = {"post": "create"}  # normally populated by as_view()
view_set.dispatch(request)
You can do something like the following:
request = APIRequestFactory().post(…)
view_obj = MyViewSet()
self.assertTrue(MyPermission().has_permission(request, view_obj))
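If your has_permission implementation reads attributes that DRF normally sets during dispatch, such as action, you may need to set them on the bare instance yourself. A sketch, assuming a create action (the URL and payload are placeholders):

from rest_framework.test import APIRequestFactory

request = APIRequestFactory().post("/widgets/", {"name": "example"})
view_obj = MyViewSet()
view_obj.action = "create"  # normally set during dispatch from the action map
self.assertTrue(MyPermission().has_permission(request, view_obj))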
This is more of a conceptual question. While learning Django class-based views, I am wondering if it is possible to make a call to a Django view as an initialization call. I mean, after the first call, subsequent calls from the templates could share the instance variables created by the first one. This would avoid passing variables back and forth between the template and the server.
No. Django views are specifically designed to prevent this. It would be a very bad idea; any instance variables set would be shared by all future users of that process, leading to potential information leakage and other thread-safety bugs.
If you want to store information between requests, use the session.
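For example, a minimal sketch of stashing a value in the session in one view and reading it back in another (the view names and session key are made up):

from django.http import JsonResponse

def start(request):
    request.session["wizard_step"] = 1  # stored per user session, not per process
    return JsonResponse({"ok": True})

def next_step(request):
    step = request.session.get("wizard_step", 0)
    return JsonResponse({"step": step})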
I'm currently building a Django app which uses a singleton object.
I want to save this object as a CBV variable because I don't want to initialize it on every 'get' call.
My question in short - can you make a CBV's get function an instance method instead of a classmethod?
And if so, can I save a variable as an instance variable?
EDIT - A better explanation to my question:
I created a class that handles a serial connection to an electronic measurement instrument.
This class must have only one instance (a singleton); if another instance is created, a memory leak will crash Python.
I want to use it with django in the following way:
Get request to a certain url address ->
The view will ask the instrument class instance for data->
Instance responds with data ->
View returns a JsonResponse with the data.
I think the best way to do it is to make the CBV's get method (the one related to the URL I'm requesting) an instance method, but it's not such a good practice.
How should I do it?
As I said, get is an instance method.
But you are confusing responsibilities here. A class should have one responsibility only. The view class has the responsibility of responding to the request and returning a response; this is quite separate from the connection to the instrument. That should be a separate class, which is instantiated at module level and used within the view class.
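A minimal sketch of that split (InstrumentConnection and read_data are made-up names; the actual serial-connection code belongs in that class):

from django.http import JsonResponse
from django.views import View

class InstrumentConnection:
    def __init__(self):
        self._port = None  # open the serial port here, exactly once

    def read_data(self):
        return {"value": 42}  # placeholder for a real read over the connection

# Created once when the module is imported; the view never creates another.
instrument = InstrumentConnection()

class InstrumentDataView(View):
    def get(self, request):
        return JsonResponse(instrument.read_data())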
Suppose, in Django 1.6, you have the following model code:
class FooManager(models.Manager):
    def get_queryset(self):
        return ...  # i.e. return a custom queryset

class Foo(models.Model):
    foo_manager = FooManager()
If, outside the Foo model definition (e.g. in view code or in the shell), you do:
Foo.objects = FooManager()
Foo.objects.all()
you'll get an exception in the Django internal code on Foo.objects.all() due to a variable named lookup_model being None.
However, if you instead do:
Foo.objects = Foo.foo_manager
Foo.objects.all()
The Foo.objects.all() will work as expected, i.e. as if objects had been defined to be FooManager() in the model definition in the first place.
I believe this behavior is due to Django working its "magic" in creating managers during model definition (just as it works magic in creating model fields).
My question: is there any reason NOT to assign objects to an alternate manager in this way outside of the model definition? It seems to work fine, but I don't fully understand the internals so want to make sure.
In case you are wondering, the context is that I have a large code base with many typical references to objects. I want to have this code base work on different databases dynamically, i.e. based on a request URL parameter. My plan is to use middleware that sets objects for all relevant models to managers that point to the appropriate database. The rest of the app code would then go on its merry way, using objects without ever having to know anything has changed.
The trouble is that this is not at all thread safe. Doing this will change the definition for all requests being served by that process, until something else changes it again. That is very likely to have all sorts of unexpected effects.
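To make the hazard concrete, a sketch of the kind of interleaving that can bite you (managers_by_db and the tenant aliases are made up):

# Request A (thread 1) runs the middleware:
Foo.objects = managers_by_db["tenant_a"]

# Before A's view runs, request B (thread 2) runs the same middleware:
Foo.objects = managers_by_db["tenant_b"]

# Request A's view now silently reads tenant B's data:
Foo.objects.all()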