I was wondering if there's a pythonic equivalent of the RequestDispatcher.forward(request, response) that I'm used to from Java servlet programming? It's a common enough technique in Java, and enables you to do, say, a little preprocessing of a particular type of request, and then hand over to another url handler. This all happens inside the server, which is an enormous time saver.
The nearest thing I can see in the GAE/Python documentation is RequestHandler.redirect(), but that's hopeless. For one thing, there's an extra round trip to the browser. For another, there's no guarantee the redirect will actually be followed once it's out of my hands, which makes me a little twitchy. (Semantically it's just wrong too, since a redirect implies that the original resource may be unavailable or have moved, which ain't the case.)
There's something that sounds tantalisingly close in webapp2, described at http://webapp-improved.appspot.com/api/webapp2.html#webapp2.RedirectHandler; but from what I can guess from the rather sketchy documentation, it's just for doing a kind of url-rewriting, which is not at all what I want.
I don't know about GAE, but in django you can do something like this:
def view2(request):
    do_some_stuff()
    # ... build and return an HttpResponse here

def view1(request):
    do_some_stuff()
    return view2(request)  # <-- note this line: hand the request straight to view2
In GAE maybe you can do this:
return View.action.__func__(self, ...)
(though this, depending on the function, might screw things up, if, for example, View.action uses self.something_which_should_be_here_but_isnt_because_self_is_some_other_thing)
The easiest way to do this is simply to refactor the relevant code into a function that you call from both handlers, or to put the functionality in a base class both handlers extend. Which is more suitable depends on your app.
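For what it's worth, here is a minimal sketch of the base-class approach with webapp2; the handler names and the preprocess() helper are made up for illustration:

import webapp2

class BaseHandler(webapp2.RequestHandler):
    def preprocess(self):
        # shared preprocessing both handlers need
        self.user_agent = self.request.headers.get('User-Agent', '')

class FirstHandler(BaseHandler):
    def get(self):
        self.preprocess()
        self.response.write('first: %s' % self.user_agent)

class SecondHandler(BaseHandler):
    def get(self):
        self.preprocess()
        self.response.write('second: %s' % self.user_agent)

app = webapp2.WSGIApplication([('/first', FirstHandler),
                               ('/second', SecondHandler)])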
A lot of my mock usage is in unit tests, but I am not sure if I can use the mock library in production. Consider the following trivial example that gets data from an external source.
class Receiver(object):
    def get_data(self):
        return _call_api(...)
Now, can I use the mock library to change the get_data() function for re-run purposes in production?
from mock import patch  # or: from unittest.mock import patch

with patch('Receiver.get_data') as mock_get_data:
    mock_get_data.return_value = [1, 2]  # side_effect = [1, 2] would return 1, then 2, on successive calls
    ...
Some might suggest writing another Rerun receiver as a better approach. I don't disagree, but I'm still raising this question for the sake of curiosity.
My questions include:
If no, what's the reason?
If yes, any caveats?
I would agree that for production use, a Receiver subclass that has an overridden get_data method would be much better.
The reason is simple -- if each type of receiver only receives data from a single source, then your code will be much easier to read and maintain. If the same Receiver can end up returning data from multiple sources, the code will be confusing and you'll end up needing to hunt down whether you were fetching data from one place or whether it's data that you explicitly set via mock, etc.
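For illustration, such a subclass might look like the sketch below; the RerunReceiver name and the canned data it returns are hypothetical:

class RerunReceiver(Receiver):
    """Receiver used for re-runs: returns previously captured data
    instead of calling the external API."""
    def __init__(self, canned_data):
        self.canned_data = canned_data

    def get_data(self):
        return self.canned_data

# RerunReceiver([1, 2]).get_data() -> [1, 2]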
No. If a function is supposed to behave a certain way in production, then code it to behave that way. If you need fallback or retry behavior, mock is not the right way to do that.
I've been racking my brain on this for the last few weeks and I just can't seem to understand it. I'm hoping you folks here can give me some clarity.
A LITTLE BACKGROUND
I've built an API to help serve a large website and, like all of us, am trying to keep the API as efficient as possible. Part of this efficiency is to NOT create an object that contains custom business logic over and over again (example: a service class) as requests are made. To give some personal background, I come from the Java world, so I'm used to using IoC or DI to handle object creation and injection into my classes, ensuring classes are NOT created over and over on a per-request basis.
WHAT I'VE READ
While looking at many Python IoC and DI posts I've become rather confused about how best to approach creating a given class without having to worry about the server getting overloaded with too many objects based on the number of requests it may be handling.
Some people say an IoC container or DI really isn't needed. But as I run my Django app, I find that unless I construct the object I want globally (at the top of the file) for views.py to use later, rather than within each view class or def within views.py, I run the risk of creating multiple instances of the same class, which from what I understand would cause memory bloat on the server.
So what's the right, Pythonic way to keep objects from being built over and over? Should I invest in using an IoC / DI or not? Can I safely rely on setting up my service.py files to just contain defs instead of classes that contain defs? Is the garbage collector just THAT efficient, so I don't even have to worry about it?
I've purposely not placed any code in this post since this seems like a general question, but I can provide a few code examples if that helps.
Thanks
From a confused engineer that wants to be as pythonic as possible
You come from a background where everything needs to be a class. I've programmed web apps in Java too, and sometimes it's harder to unlearn old things than to learn new ones. I understand.
In Python / Django you wouldn't make anything a class unless you need many instances and need to keep state.
For a service that's hardly the case. You'll notice that in Java-style web apps some services are made singletons, which is just a workaround for this and a rather big anti-pattern in Python.
Pythonic
Python is flexible enough that a "service class" isn't required; you'd just have a Python module (e.g. services.py) with a number of functions, the emphasis being on each function taking something in and returning something, in a completely stateless fashion.
# services.py
# This is a module: it doesn't keep any state within.
# It may read and write to the DB, do some processing etc., but it doesn't remember things.
from myapp.models import Score  # "myapp" is a placeholder for your app's name

def get_scores(student_id):
    return Score.objects.filter(student=student_id)


# views.py
# Receives HTTP requests.
from myapp import services

def view_scores(request, student_id):
    scores = services.get_scores(student_id)
    # e.g. render the scores queryset in a template and return the HTML page
Notice how if you need to swap out the service, you'll just be swapping out a single Python module (just a file really), so Pythonistas hardly bother with explicit interfaces and other abstractions.
Memory
Now, per each "django worker process", you'd have that one services module, used over and over for all the requests that come in; when the Score queryset is no longer referenced in memory, it'll be cleaned up.
I saw your other post, and well, instantiating a ScoreService object for each request, or keeping an instance of it in the global scope, is just unnecessary. The above example does the job with one module in memory and doesn't need us to be smart about it.
And if you did need to keep state between several requests, keeping it in live instances of ScoreService would be a bad idea anyway, because now every user might need an instance of their own; that's not viable (too many live objects keeping context). Not to mention that an instance is only accessible from the same process unless you have some sharing mechanism in place.
Keep state in a datastore
In case you want to keep state between requests, you'd keep the state in a datastore. When a request comes in, you hit the services module again to get the context back from the datastore, pick up where you left off and do your business, return your HTTP response, and then the unused things get garbage collected.
The emphasis being on keeping things stateless, where any given HTTP request can be processed on any given django process, and all state objects are garbage collected after the response is returned and objects go out of scope.
This may not be the fastest request/response cycle we can pull off, but it's scalable as hell.
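To make the datastore idea concrete, here's a rough sketch in Django; the model and field names are made up for illustration:

# models.py
from django.db import models

class QuizProgress(models.Model):
    # state that must survive between requests lives in the database,
    # not in a long-lived service object
    student_id = models.IntegerField()
    last_question = models.IntegerField(default=0)


# services.py
def resume_quiz(student_id):
    # fetch the saved context and pick up where we left off
    progress, _created = QuizProgress.objects.get_or_create(student_id=student_id)
    return progress.last_question

def save_progress(student_id, question_number):
    # write the new context back so any worker process can pick it up next time
    QuizProgress.objects.update_or_create(
        student_id=student_id,
        defaults={'last_question': question_number},
    )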
Look at some major web apps written in Django
I suggest you look at some open-source Django projects and see how they're organized; you'll find that a lot of the things you're busting your brain over, Djangonauts just don't bother with.
I am working on a framework installer script. The script needs to modify the user's AppDelegate file and inject a few lines of code at the beginning or end of the applicationDidFinishLaunching and applicationWillTerminate methods.
Some options I've thought about:
Parse the source code, and insert lines at correct positions. (Can be difficult to get right and work for everyone's code, just about equivalent to writing a compiler...)
Subclass the AppDelegate file (is this possible?)
Categories??
Which of these is the best option? Any other suggestions?
If you really need to make this something that modifies the AppDelegate with no intervention at all from the developer, and you can modify the xcodeproj and the nib but not the source, there is a way to do it.
First, make sure your classes get compiled in, and an instance of your class gets created in the nib.
Now, here's what you do:
Define a -[AHHackClass applicationDidFinishLaunching] method that does your extra stuff, then calls the [self originalApplicationDidFinishLaunching].
In -[AHHackClass awakeFromNib], use objc runtime calls to copy the -[AHHackClass applicationDidFinishLaunching] method onto the application delegate's class as originalApplicationDidFinishLaunching, then use method swizzling to swap the two methods' implementations.
Do the same to swizzle applicationWillTerminate.
See JRSwizzle for some code that makes the method swizzling much easier, and MethodSwizzling at CocoaDev for some background.
However, there may be a much easier way to do this: Does your extra stuff really need to be called from the app delegate's applicationDidFinishLaunching and applicationWillTerminate methods? Can't you just set up to listen for notifications in your awakeFromNib and handle things there?
And if, for some reason, you can't do that, can you just put a line in the instructions to the developer to call your method from their applicationDidFinishLaunching method?
One solution I am currently considering:
Add NewAppDelegate.m/h file that subclasses AppDelegate.
This subclass, does what I want, and then calls the super methods.
Find/replace AppDelegate with NewAppDelegate in main.m
This seems pretty simple and robust. Thoughts on this? Will this work for all/most projects?
Pyramid uses gettext *.po files for translations, a very good and stable way to internationalize an application. Its one disadvantage is that it cannot be changed from the app itself. I need some way to give a normal user the ability to change the translations on his own. Django allows changes to the file directly and, after the change, restarts the whole app. I do not have that freedom, because the changes will be quite frequent.
Since I could not find any package that will help me with this task, I decided to override the Localizer. My idea is based on using a Translation Domain, like Zope projects use, and making the Localizer search for a registered domain and, if none is found, fall back to the default translation strategy.
The problem is that I could not find a good way to plug a custom translation solution into the Localizer itself. All I could think of was to reimplement the get_localizer method and rewrite the whole Localizer. But there are several things that would need to be copy-pasted here, such as interpolation of mappings and other tweaks related to translation strings.
I don't know how much you have in there, but I did something similar a while ago (and will have to do it once again). The implementation is pretty simple...
If you can be sure that all calls will be handled through _() or something similar, you can provide your own function. It will look something like this:
def _(msgid):
    # look for a user-supplied translation in the database first
    doc = db.translations.find_one({'key': msgid, 'locale': request.locale_name})
    if doc:
        return doc['value']
    # fall back to the regular gettext translation
    return real_gettext(msgid)
This is pretty simple... then you need something that will dump the database into the file...
But I guess overriding the localizer makes more sense... I did it a long time ago, and overriding the function was easier than searching in the code.
The plus side of the Localizer approach is that it will work everywhere. The monkey patch is pretty cool, but it's also pretty ugly. If I had to do it again, I'd provide my own localizer that loads from a database first and only then falls back to its own value. The reason for the database is that if someone shuts down the server and the file hasn't been modified, you won't see the changes.
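If you go the "own localizer" route, a rough sketch could wrap Pyramid's default localizer and consult the database first. This is only an illustration: the db.translations collection is carried over from the example above, and mapping interpolation is not handled for the database hits.

from pyramid.i18n import get_localizer

class DBFirstLocalizer(object):
    """Checks the database for a translation first and falls back
    to the regular gettext-based localizer."""
    def __init__(self, request, db):
        self.default = get_localizer(request)
        self.locale_name = request.locale_name
        self.db = db

    def translate(self, tstring, domain=None, mapping=None):
        doc = self.db.translations.find_one(
            {'key': str(tstring), 'locale': self.locale_name})
        if doc:
            # note: mapping interpolation is not applied to database values here
            return doc['value']
        return self.default.translate(tstring, domain=domain, mapping=mapping)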
If the DB is more than you need, then the Localizer alone is good enough and you can update the file on every change. If the server gets restarted, it will load the new files... You'll have to compile the catalog first.
I've noticed I have the same piece of code sitting at the top of several of my controllers. They tend to look like this:
def app_description(app):
    """ Dictionary describing an app. """
    return {'name': app.app,
            'id': app.id,
            'is_new': app.is_new(),
            'created_on': app.created_on.strftime("%m/%d/%Y"),
            'configured': app.configured}
I'll call this from a couple different actions in the controller, but generally not outside that controller. It accesses properties. It calls methods. It formats opaque objects (like dates).
My question is: is this controller code, or model code?
The case for controller:
It defines my API.
It's currently only used in that module.
There doesn't seem to be any logic here.
The case for model:
It seems like a description of the data, which the model should be responsible for.
It feels like I might want to use this in other controllers. Haven't gotten there yet, but these functions are still pretty new, so they might.
Attaching a function to the object it clearly belongs to seems better than leaving it as a module-level function.
It could be more succinctly defined on the model. Something like having the top-level model object define .description(), and the subclasses just define a black/whitelist of properties, plus override the method itself to call functions. I'm pretty sure that would be fewer lines of code (as it would save me the repetition of things like 'name': app.name), which seems like a good thing.
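To make that concrete, here's roughly the shape I have in mind; the base class, the _description_fields whitelist, and the attribute names are purely illustrative:

class BaseModel(object):
    # subclasses list the attributes they want exposed
    _description_fields = []

    def description(self):
        out = {}
        for field in self._description_fields:
            value = getattr(self, field)
            # call methods, leave plain attributes as they are
            out[field] = value() if callable(value) else value
        return out

class App(BaseModel):
    _description_fields = ['name', 'id', 'is_new', 'configured']

    def description(self):
        desc = super(App, self).description()
        # anything that needs special formatting gets overridden here
        desc['created_on'] = self.created_on.strftime("%m/%d/%Y")
        return desc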
Not sure which framework you are using, but I would suggest creating this helper functionality in its own class and putting it in a shared folder like lib/.
Alternatively you could have an application helper module that just has a bunch of these helpful application-wide functions.
Either way, I'd keep it away from both the model and the controller.
The answer I finally decided on:
In the short term, having these methods in the controllers is fine. If they define the output, then, OK, they can stay there. They're only used in that controller anyway.
There are a couple of things to watch out for, which indicate they've grown up and need to go elsewhere:
In one case, I needed access to a canonical serialization of the object. At that point, it moved into the model, as a model method.
In another case, I found that I was formatting all timestamps the same. I have a standard #ajaxify decorator that does things like set Content-Type headers, do JSON encoding, etc. In this case, I moved the standard datetime formatting in there -- when the JSON encoder hits a datetime (formerly unserializable), it always treats it the same (seconds since the epoch, for me). There's a sketch of that after this list.
In yet a third case, I realized that I was re-using this function in a couple controllers. For that, I pulled it out into a common class (like another answer suggested) and used that to define my "Web API". I'd used this pattern before -- it makes sense for grouping similarly-used data (like timeseries data, or top-N lists).
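For illustration, the datetime handling can live in a custom JSON encoder that the decorator uses, so every response formats timestamps identically; the decorator and encoder names here are made up:

import calendar
import json
from datetime import datetime

class AjaxifyEncoder(json.JSONEncoder):
    """Sketch of the encoder behind a hypothetical @ajaxify decorator:
    every datetime serializes the same way, as seconds since the epoch."""
    def default(self, obj):
        if isinstance(obj, datetime):
            # treat the datetime as UTC and emit an integer timestamp
            return calendar.timegm(obj.utctimetuple())
        return super(AjaxifyEncoder, self).default(obj)

# json.dumps({'created_on': datetime.utcnow()}, cls=AjaxifyEncoder)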
I suspect there's more, but basically, I don't think they're all as similar as I thought they were initially. I'm currently happy thinking about them as a convention for simple objects in our (small-ish, new-ish) codebase, with the understanding that after a few iterations, a better solution may present itself. In the meantime, they stay in the controller and define my AJAXy-JSON-only interface.