Background
So let's say I'm making an app for GAE, and I want to use API hooks.
BIG EDIT: In the original version of this question, I described my use case, but some folks correctly pointed out that it was not really suited for API Hooks. Granted! Consider me helped. But now my issue is academic: I still don't know how to use hooks in practice, and I'd like to. I've rewritten my question to make it much more generic.
Code
So I make a model like this:
class Model(db.Model):
    user = db.UserProperty(required=True)

    def pre_put(self):
        # Sets a value, raises an exception, whatever. Use your imagination
        pass
And then I create a db_hooks.py:
from google.appengine.api import apiproxy_stub_map

def patch_appengine():
    def hook(service, call, request, response):
        assert service == 'datastore_v3'
        if call == 'Put':
            for entity in request.entity_list():
                entity.pre_put()

    apiproxy_stub_map.apiproxy.GetPreCallHooks().Append('preput',
                                                        hook,
                                                        'datastore_v3')
Being TDD-addled, I'm making all this using GAEUnit, so in gaeunit.py, just above the main method, I add:
import db_hooks
db_hooks.patch_appengine()
And then I write a test that instantiates and puts a Model.
Question
While patch_appengine() is definitely being called, the hook never is. What am I missing? How do I make the pre_put function actually get called?
Hooks are a little low level for the task at hand. What you probably want is a custom property class. DerivedProperty, from aetycoon, is just the ticket.
Bear in mind, however, that the 'nickname' field of the user object is probably not what you want - per the docs, it's simply the user part of the email field if they're using a gmail account, otherwise it's their full email address. You probably want to let users set their own nicknames, instead.
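For reference, here is a rough sketch of how a DerivedProperty-style property works under the old db API. This approximates aetycoon's implementation rather than reproducing it exactly, and the Profile model is a hypothetical example:

from google.appengine.ext import db

class DerivedProperty(db.Property):
    def __init__(self, derive_func, *args, **kwargs):
        super(DerivedProperty, self).__init__(*args, **kwargs)
        self.derive_func = derive_func

    def __get__(self, model_instance, model_class):
        if model_instance is None:
            return self  # accessed on the class itself
        # recomputed on every read, and therefore fresh at every put
        return self.derive_func(model_instance)

    def __set__(self, model_instance, value):
        raise db.DerivedPropertyError('Derived property cannot be set.')

class Profile(db.Model):
    user = db.UserProperty(required=True)
    lowercase_nickname = DerivedProperty(lambda self: self.user.nickname().lower())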
The issue here is that, within the context of the hook() function, an entity is not an instance of db.Model as you are expecting.
In this context, entity is the protocol buffer class confusingly also referred to as an entity (entity_pb). Think of it like a JSON representation of your real entity: all the data is there, and you could build a new instance from it, but there is no reference to the memory-resident instance that is waiting for its callback.
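To make that concrete, here is a hedged sketch of what the hook can actually see (the logging is illustrative; entity here is an entity_pb.EntityProto):

import logging

def hook(service, call, request, response):
    assert service == 'datastore_v3'
    if call == 'Put':
        for entity in request.entity_list():
            # entity is a protocol buffer, not your db.Model subclass,
            # so entity.pre_put() does not exist; its raw properties do
            for prop in entity.property_list():
                logging.info('putting property %s', prop.name())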
Monkey patching all of the various put/delete methods is the best way to set up Model-level callbacks, as far as I know†
Since there don't seem to be many resources on how to do this safely with the newer async calls, here's a BaseModel that implements before_put, after_put, before_delete and after_delete hooks:
import logging
from google.appengine.ext import db

class HookedModel(db.Model):
    def before_put(self):
        logging.error("before put")

    def after_put(self):
        logging.error("after put")

    def before_delete(self):
        logging.error("before delete")

    def after_delete(self):
        logging.error("after delete")

    # the sync methods delegate to the async ones, so the patches below
    # only need to wrap put_async/delete_async
    def put(self):
        return self.put_async().get_result()

    def delete(self):
        return self.delete_async().get_result()

    def put_async(self):
        return db.put_async(self)

    def delete_async(self):
        return db.delete_async(self)
Inherit your model classes from HookedModel and override the before_xxx/after_xxx methods as required.
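For example, a minimal usage sketch (the Account model is hypothetical):

class Account(HookedModel):
    email = db.StringProperty()

    def before_put(self):
        # normalize data just before it is written
        if self.email:
            self.email = self.email.lower()

account = Account(email='Foo@Example.com')
account.put()  # once the patch below is installed: before_put, write, after_put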
Place the following code somewhere that will get loaded globally in your application (like main.py if you use a pretty standard-looking layout). This is the part that calls our hooks:
def normalize_entities(entities):
    if not isinstance(entities, (list, tuple)):
        entities = (entities,)
    return [e for e in entities if hasattr(e, 'before_put')]

# monkeypatch put_async to call entity.before_put
db_put_async = db.put_async

def db_put_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_put()
    a = db_put_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_put()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.put_async = db_put_async_hooked

# monkeypatch delete_async to call entity.before_delete
db_delete_async = db.delete_async

def db_delete_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_delete()
    a = db_delete_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_delete()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.delete_async = db_delete_async_hooked
You can save or destroy your instances via model.put() or any of the db.put(), db.put_async(), etc. methods and get the desired effect.
†would love to know if there is an even better solution!?
I don't think that Hooks are really going to solve this problem. The Hooks will only run in the context of your AppEngine application, but the user can change their nickname outside of your application using Google Account settings. If they do that, it won't trigger any logic implemented in your hooks.
I think that the real solution to your problem is for your application to manage its own nickname that is independent of the one exposed by the Users entity.
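If it helps, a minimal sketch of that idea (the UserProfile model and its fields are hypothetical):

from google.appengine.ext import db

class UserProfile(db.Model):
    user = db.UserProperty(required=True)
    # owned by the application, so it doesn't silently change when the
    # user edits their Google Account settings
    nickname = db.StringProperty(default='')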
Related
I have a database handler that uses the SQLAlchemy ORM to communicate with a database. Following SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function has done inside that context manager?
EDIT: I realized the file structure mattered due to the complexity it introduced. I re-structured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens on 'MockSession()' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock, as it's easier to control the methods/data flow with a custom mock class.
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin(). Returning self from begin() allows us to test the values later.
Context managers rely on the magic methods __enter__ and __exit__, with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables, allowing us to test them later.
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that when you pass your mock class, you want to patch in an instance of the class and not the class itself. The parentheses make a world of difference (see the two-line sketch below).
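To make the distinction concrete, the two calls side by side:

monkeypatch.setattr(handler, "Session", MockSession)    # wrong: Session is now the class itself
monkeypatch.setattr(handler, "Session", MockSession())  # right: Session is an inspectable instance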
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
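One such suggestion, sketched under the assumption that the standard library's unittest.mock is acceptable here: a MagicMock can stand in for the hand-rolled class, since it supports the context-manager protocol out of the box (Handler, Session, and delete_stuff are the names from the question):

from unittest.mock import MagicMock

import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff_with_magicmock(monkeypatch):
    handler = Handler()
    session = MagicMock()
    # begin() returns a context manager whose __enter__ yields the session itself
    session.begin.return_value.__enter__.return_value = session
    monkeypatch.setattr(handler, "Session", session)

    dlt.delete_stuff(handler)

    session.execute.assert_called_once()
    session.commit.assert_called_once()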
I would like to use Django signals to trigger a celery task like so:
def delete_content(sender, instance, **kwargs):
    task_id = uuid()
    task = delete_libera_contents.apply_async(kwargs={"instance": instance}, task_id=task_id)
    task.wait(timeout=300, interval=2)
But I'm always running into kombu.exceptions.EncodeError: Object of type MusicTracks is not JSON serializable
Now I'm not sure how to treat the MusicTracks instance, as it's a model class instance. How can I properly pass such instances to my task?
In my tasks.py I have the following:
@app.task(name="Delete Libera Contents", queue='high_priority_tasks')
def delete_libera_contents(instance, **kwargs):
    libera_backend = instance.file.libera_backend
    ...
Never send an instance in a Celery task; you should only send plain values, for example the instance's primary key, and then inside the Celery task find the instance via this pk and do your logic there.
Your code should be like this:
views.py
def delete_content(sender, **kwargs):
    task_id = uuid()
    task = delete_libera_contents.apply_async(kwargs={"instance_pk": sender.pk}, task_id=task_id)
    task.wait(timeout=300, interval=2)
tasks.py
@app.task(name="Delete Libera Contents", queue='high_priority_tasks')
def delete_libera_contents(instance_pk, **kwargs):
    instance = Instance.objects.get(pk=instance_pk)
    libera_backend = instance.file.libera_backend
    ...
You can find this rule in the Celery documentation (I can't find the link). One of the reasons becomes clear if you imagine this situation:
you send your instance to the Celery task (and it is delayed, for whatever reason, for 5 minutes)
then your project applies some logic to this instance before your task has finished
then the task's turn comes and it uses the old version of the instance, so it works with stale, effectively corrupted data
(this is the reason as I understand it; it is not from the documentation)
First off, sorry for making the question a bit confusing, especially for the people that have already written an answer.
In my case, the delete_content signal can be triggered from three different models, so it actually looks like this:
@receiver(pre_delete, sender=MusicTracks)
@receiver(pre_delete, sender=Movies)
@receiver(pre_delete, sender=TvShowEpisodes)
def delete_content(sender, instance, **kwargs):
    delete_libera_contents.delay(instance_pk=instance.pk)
So every time one of these models triggers a delete action, this signal will also trigger a celery task to actually delete the stuff in the background (all stored on S3).
As I cannot and should not pass instances around directly, as pointed out by @oruchkin, I pass instance.pk to the Celery task, where I then have to find the instance, since the task doesn't know which model triggered the delete action:
@app.task(name="Delete Libera Contents", queue='high_priority_tasks')
def delete_libera_contents(instance_pk, **kwargs):
    if Movies.objects.filter(pk=instance_pk).exists():
        instance = Movies.objects.get(pk=instance_pk)
    elif MusicTracks.objects.filter(pk=instance_pk).exists():
        instance = MusicTracks.objects.get(pk=instance_pk)
    elif TvShowEpisodes.objects.filter(pk=instance_pk).exists():
        instance = TvShowEpisodes.objects.get(pk=instance_pk)
    else:
        # logger.exception returns None and can't be raised, so log, then raise
        logger.exception("Task: 'Delete Libera Contents', reports: No instance found (code: JFN4LK) - Warning")
        raise ValueError("No instance found (code: JFN4LK)")
    libera_backend = instance.file.libera_backend
You might ask why I do not simply pass the sender from the signal to the Celery task. I tried that as well, and as already pointed out, I cannot pass classes or instances, so it fails with:
kombu.exceptions.EncodeError: Object of type ModelBase is not JSON serializable
So it really seems I have to obtain the instance by hand, using the if-elif-else chain in the Celery task.
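For what it's worth, a hedged sketch of an alternative that avoids the if-elif-else chain by sending the model identity along with the pk, via Django's contenttypes framework (the model and task names come from the question; the extra task arguments are assumptions):

from django.contrib.contenttypes.models import ContentType

@receiver(pre_delete, sender=MusicTracks)
@receiver(pre_delete, sender=Movies)
@receiver(pre_delete, sender=TvShowEpisodes)
def delete_content(sender, instance, **kwargs):
    ct = ContentType.objects.get_for_model(sender)
    # app_label and model are plain strings, so they serialize to JSON fine
    delete_libera_contents.delay(app_label=ct.app_label, model=ct.model,
                                 instance_pk=instance.pk)

@app.task(name="Delete Libera Contents", queue='high_priority_tasks')
def delete_libera_contents(app_label, model, instance_pk, **kwargs):
    model_cls = ContentType.objects.get(app_label=app_label, model=model).model_class()
    instance = model_cls.objects.get(pk=instance_pk)
    libera_backend = instance.file.libera_backend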
So I'm writing tests for my Django application, and I have successfully mocked quite a few external API calls that aren't needed for tests. However, one is tripping me up: send_sms. To start, here is the code:
a/models.py:
from utils.sms import send_sms

...

class TPManager(models.Manager):
    def notification_for_job(self, job):
        ...
        send_sms()
        ...

class TP(models.Model):
    objects = TPManager()
    ...
p/test_models.py:
@patch('a.models.send_sms')
@patch('p.signals.send_mail')
def test_tradepro_review_job_deleted(self, send_mail, send_sms):
    job = Job.objects.create(
        tradeuser=self.tradeuser,
        location=location,
        category=category,
        details="sample details for job"
    )
The Job object creation triggers TP.objects.notification_for_job via its perform_create method here:
p/views.py:
def perform_create(self, serializer):
    job = serializer.save(tradeuser=self.request.user.tradeuser)
    if settings.DEV_MODE:
        from a.models import TP
        job.approved = True
        job.save()
        TP.objects.notification_for_job(job)
I have tried mocking a.models.TP.objects.notification_for_job, utils.sms.send_sms, and a.models.TPManager.notification_for_job, all to no avail. This is a pretty complex flow, but I believe I have tried the main mock candidates here, and I was wondering if anybody knows how to properly mock either the notification_for_job function or the send_sms function, mostly just to prevent these API calls that inevitably fail in my test environment.
Any ideas are greatly appreciated!
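For anyone hitting the same wall, two facts worth verifying first (they don't by themselves explain why the attempts above failed, but they are the usual sticking points): unittest.mock patches a name where it is looked up, and stacked @patch decorators inject mocks bottom-up, i.e. the decorator closest to the function supplies the first mock argument. A quick sketch under those assumptions:

from unittest.mock import patch

# a/models.py does `from utils.sms import send_sms`, so the copy to patch
# lives at a.models.send_sms, not utils.sms.send_sms
@patch('a.models.send_sms')      # applied last -> last mock argument
@patch('p.signals.send_mail')    # applied first -> first mock argument
def test_tradepro_review_job_deleted(self, send_mail, send_sms):
    ...
    send_sms.assert_called_once()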
Is there such a thing as application-scope Python variables in Flask? I'd like to implement some primitive messaging between users, and a shared data cache. Of course, it is possible to implement this via a database, but I wanted to know whether there is a db-free and perhaps faster approach. Ideally, the shared variable would be a live Python object, but my needs would be satisfied with strings and ints, too.
Edit: complemented with a (non-working) example
from flask import g

@app.route('/store/<name>')
def view_hello(name=None):
    g.name = name
    return "Storing " + g.name

@app.route("/retrieve")
def view_listen():
    n = g.name
    return "Retrieved: " + n
When trying to retrieve g.name, this triggers the error:
AttributeError: '_RequestGlobals' object has no attribute 'name'
I'm unsure whether this is a good idea or not, but I've been using this to share data easily between requests:
from flask import Flask

class MyServer(Flask):
    def __init__(self, *args, **kwargs):
        super(MyServer, self).__init__(*args, **kwargs)
        # instantiate your variables here
        self.messages = []

app = MyServer(__name__)

@app.route("/")
def foo():
    app.messages.append("message")
Since Flask 0.10, flask.g would be the way to go.
In earlier versions, flask.g is stored in the request context and is cleared between requests. If you're using an older version, you should store your app-level state on flask.current_app.
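As an illustration of the current_app route, a minimal sketch (the shared_messages attribute is hypothetical, and this only shares state within a single process, not across workers):

from flask import Flask, current_app

app = Flask(__name__)
app.shared_messages = []  # lives as long as this process does

@app.route('/store/<name>')
def store(name):
    # current_app proxies the real application object during a request
    current_app.shared_messages.append(name)
    return "Storing " + name

@app.route('/retrieve')
def retrieve():
    return "Retrieved: " + ", ".join(current_app.shared_messages)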
My goal: In Pyramid, to call another view-callable, and to get a Response object back without knowing any details about that view-callable.
In my Pyramid application, say I have a view "foo" which is defined using a view_config decorator:
#view_config(route_name="foo",
renderer="foo.jinja2")
def foo_view(request):
return {"whereami" : "foo!"}
Now say that I want to route "bar" to a view that does the same thing for the time being, so it internally calls foo_view and returns its Response:
#view_config(route_name="bar")
def bar_view(request):
return foo_view(request)
...but wait! That doesn't work, since foo_view doesn't return a Response; its renderer does.
So, this will work:
#view_config(route_name="bar",
renderer="foo.jinja2")
def bar_view(request):
return foo_view(request)
as it will apply the same renderer as foo_view did. But this is bad, as I now must repeat myself by copying the renderer value AND having to know the renderer of the view being called.
So, I am hoping that there is some function available in Pyramid that allows calling another view-callable and getting a Response object back, without knowing or caring how it was rendered:
@view_config(route_name="bar")
def bar_view(request):
    response = some_function_that_renders_a_view_callable(foo_view, request)
    return response
What would some_function_that_renders_a_view_callable be?
pyramid.view.render_view appears to search for a view by name; I don't want to give my views names.
(Note: Returning HTTPFound to cause the client to redirect to the target route is what I am trying avoid. I want to "internally" redirect).
Yep. There are some concerns:
it doesn't return a Response
predicates/renderer
permissions
request properties associated with the old request
That's why you should not call a view from another view as a plain function, unless you know what you're doing.
The Pyramid creators made an awesome tool for server-side redirects: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/subrequest.html
You can invoke a view using request.invoke_subrequest:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.request import Request

def view_one(request):
    subreq = Request.blank('/view_two')
    response = request.invoke_subrequest(subreq)
    return response

def view_two(request):
    request.response.body = 'This came from view_two'
    return request.response

if __name__ == '__main__':
    config = Configurator()
    config.add_route('one', '/view_one')
    config.add_route('two', '/view_two')
    config.add_view(view_one, route_name='one')
    config.add_view(view_two, route_name='two')
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
When /view_one is visited in a browser, the text printed in the browser pane will be "This came from view_two". The view_one view used the pyramid.request.Request.invoke_subrequest() API to obtain a response from another view (view_two) within the same application when it executed. It did so by constructing a new request with a URL that it knew would match the view_two view registration, and passed that new request along to pyramid.request.Request.invoke_subrequest(). The view_two view callable was invoked, and it returned a response. The view_one view callable then simply returned the response it obtained from the view_two view callable.
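Applied back to the original question, a hedged sketch of what bar_view might look like (reusing the Request import from the example above, and assuming "foo" is the route name, as in the question; use_tweens=True additionally runs the tween stack):

def bar_view(request):
    # build a URL for the route foo_view is registered under
    subreq = Request.blank(request.route_path('foo'))
    # the subrequest goes through routing and rendering, so a fully
    # rendered Response comes back regardless of foo_view's renderer
    return request.invoke_subrequest(subreq, use_tweens=True)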
I was struggling with this as well. I have a solution using the render_to_response method, though I'm sure there's a "more correct" way to do it. Until someone posts it, however, here is how I handled this:
from pyramid.renderers import render_to_response

@view_config(route_name="foo", renderer="foo.mak")
def foo_view(request):
    return {'stuff': 'things', '_renderer': 'foo.mak'}

def bar_view(request):
    values = foo_view(request)
    renderer = values['_renderer']
    return render_to_response(renderer, values)
(Pyramid 1.3)
This requires a renderer to be used, but by declaring that renderer in the original view's return values, you can retrieve it in another view without knowing what it is. I suspect the need to do this isn't easily findable because there are other, better methods for accomplishing tasks solved by this solution.
Another shortcoming is that it relies on a direct import of the view callable. It would be nice if it could be looked up directly by route.
The Pyramid documentation here indicates that leaving the name keyword argument out of view_config will cause the view to be registered by the function itself (rather than a string):
Such a registration... implies that the view name will be *my_view*
So, in your case you should be able to use pyramid.view.render_view or pyramid.view.render_view_to_response referencing foo_view directly:
@view_config(route_name="bar")
def bar_view(request):
    return pyramid.view.render_view_to_response(None, request, name=foo_view)
Update:
Yep, you're right: passing the view function does not work.
It's interesting, but taking your example code and applying the route_name to the config did not work for me. However, the following example, which just gives each view a name, sets the route URL and gives the view a name. In this fashion render_view_to_response works as advertised. Naming your views may not be what you want, but this configuration accomplishes the same thing as your example code without added configuration.
#view_config(name="foo")
def foo_view(request):
# returning a response here, in lieu of having
# declared a renderer to delegate to...
return Response('Where am i? `{0[whereami]}'.format({"whereami" : "foo!"}))
#view_config(name="bar")
def bar_view(request):
# handles the response if bar_view has a renderer
return render_view_to_response(None, request, name='foo')
#view_config(name="baz")
def baz_view(request):
# presumably this would not work if foo_view was
# not returning a Response object directly, as it
# skips over the rendering part. I think you would
# have to declare a renderer on this view in that case.
return foo_view(request)
if __name__ == '__main__':
config = Configurator()
config.scan()
app = config.make_wsgi_app()
serve(app, host='127.0.0.1', port='5000')
Not the precise solution you asked for, but a solution to the problem you describe:
Create a view class of which both foo and bar are methods. Then bar can call self.foo().
Common view configuration, such as the template name, can be applied to the class, and then you can decorate each method with just its route name.
In short, the following should meet your needs, if I understand the problem correctly.
from pyramid.view import view_config, view_defaults

@view_defaults(renderer="foo.jinja2")
class WhereaboutsAreFoo(object):
    @view_config(route_name="foo")
    def foo_view(self):
        return {"whereami": "foo!"}

    @view_config(route_name="bar")
    def bar_view(self):
        return self.foo_view()
Can't you do something like this:

@view_config(name="baz")
def baz_view(request):
    return HTTPFound(location=request.route_path('foo'))