I'm using a lot of werkzeug.local.LocalProxy objects in my Flask app. They're supposed to be transparent stand-ins for the objects they wrap, but they aren't quite, since they don't respond to type() or isinstance() correctly.
SQLAlchemy doesn't like them at all. If I make a LocalProxy to a SQLAlchemy record, SQLAlchemy considers it to be None. If I pass it a LocalProxy to a simpler type, it just says it's the wrong type.
Here's an example of Flask-SQLAlchemy having a bad time with LocalProxy.
How do you guys deal with this problem? Just call _get_current_object() a lot? It'd be pretty cool if SQLAlchemy or Flask-SQLAlchemy could automatically handle these LocalProxy objects more gracefully, especially considering Flask-Login uses them, and pretty much everybody uses that, right?
I'm considering adding this function to my project and wrapping any of my LocalProxy objects in it before passing them to SQLAlchemy:
from werkzeug.local import LocalProxy
def real(obj):
if isinstance(obj, LocalProxy):
return obj._get_current_object()
return obj
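For example, with Flask-Login's current_user (itself a LocalProxy), I'd unwrap it before handing it to the ORM. This is only an illustrative sketch; Post and db are placeholder names from a typical Flask-SQLAlchemy setup, not code from my actual project:
from flask_login import current_user

user = real(current_user)  # a plain model instance, safe to hand to SQLAlchemy
post = Post(title="Hello", author=user)  # 'Post' is a hypothetical model
db.session.add(post)
db.session.commit()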
I patched the drivers used by SQLAlchemy, but I fear that it is not the most generic solution.
from flask_sqlalchemy import SQLAlchemy as FlaskSQLAlchemy
from sqlalchemy.engine import Engine
from werkzeug.local import LocalProxy
class SQLAlchemy(FlaskSQLAlchemy):
"""Implement or overide extension methods."""
def apply_driver_hacks(self, app, info, options):
"""Called before engine creation."""
# Don't forget to apply hacks defined on parent object.
super(SQLAlchemy, self).apply_driver_hacks(app, info, options)
if info.drivername == 'sqlite':
from sqlite3 import register_adapter
def adapt_proxy(proxy):
"""Get current object and try to adapt it again."""
return proxy._get_current_object()
register_adapter(LocalProxy, adapt_proxy)
elif info.drivername == 'postgresql+psycopg2': # pragma: no cover
from psycopg2.extensions import adapt, register_adapter
def adapt_proxy(proxy):
"""Get current object and try to adapt it again."""
return adapt(proxy._get_current_object())
register_adapter(LocalProxy, adapt_proxy)
elif info.drivername == 'mysql+pymysql': # pragma: no cover
from pymysql import converters
def escape_local_proxy(val, mapping):
"""Get current object and try to adapt it again."""
return converters.escape_item(
val._get_current_object(),
self.engine.dialect.encoding,
mapping=mapping,
)
converters.encoders[LocalProxy] = escape_local_proxy
Original source can be found here.
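To use it, you instantiate this subclass in place of the stock flask_sqlalchemy.SQLAlchemy, so apply_driver_hacks runs and registers the adapters when the engine is created. A minimal sketch, assuming a plain single-file Flask setup:
from flask import Flask

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'
db = SQLAlchemy(app)  # the subclass defined above, not flask_sqlalchemy.SQLAlchemy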
I have a database handler that uses the SQLAlchemy ORM to communicate with a database. Following SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function that uses that context manager has done inside it?
EDIT: I realized the file structure mattered due to the complexity it introduced. I restructured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable
def delete_stuff(handler):
stmt = delete(SomeTable)
with handler.Session.begin() as session:
session.execute(stmt)
session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler
def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler
class MockSession:
def __init__(self):
self.execute_params = []
self.commit_called = False
def execute(self, *args, **kwargs):
self.execute_params.append(["call", args, kwargs])
return self
def commit(self):
self.commit_called = True
return self
def begin(self):
return self
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
pass
def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens on 'MockSession' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock as it's easier to control the methods/data flow with a custom mock class
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin. Returning self in begin allows us to test the values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables, allowing us to make assertions later.
This relies on monkeypatch (there are other ways I'm sure), but what's important to note is that when you pass your mock class you want to patch in an instance of the class and not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
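One less hand-rolled variant, since MagicMock came up in the notes above: MagicMock supports the context-manager protocol out of the box, so the same assertions can be made without a custom class. A rough sketch, assuming Handler() can be constructed with no arguments as in the original test:
import delete_things_from_table as dlt
from db_handler import Handler
from unittest.mock import MagicMock

def test_delete_stuff_with_magicmock(monkeypatch):
    handler = Handler()
    mock_session = MagicMock()
    # begin() must return a context manager whose __enter__ yields the session itself
    mock_session.begin.return_value.__enter__.return_value = mock_session
    monkeypatch.setattr(handler, "Session", mock_session)
    dlt.delete_stuff(handler)
    # execute was called once, with the DELETE statement as its first argument
    mock_session.execute.assert_called_once()
    assert str(mock_session.execute.call_args[0][0]) == "DELETE FROM some_table"
    # commit was called
    mock_session.commit.assert_called_once()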
I have a function that returns an SQLAlchemy engine object
create_db.py:
from sqlalchemy import create_engine
def init_db():
return create_engine("sqlite://", echo=True, future=True)
and I have a test that attempts to use pytest's monkeypatch to mock the call to create_engine.
test_db.py:
import sqlalchemy
from create_db import init_db
def test_correct_db_path_selected(monkeypatch):
def mocked_create_engine():
return "test_connection_string"
monkeypatch.setattr(sqlalchemy, "create_engine", mocked_create_engine())
engine = init_db()
assert engine == "test_connection_string"
When I run pytest, the test is failing as a real sqlalchemy engine object is getting returned, not the mocked string.
AssertionError: assert Engine(sqlite://) == 'test_connection_string'
I've tried the following calls to setattr but they all fail in the same manner:
monkeypatch.setattr("sqlalchemy.engine.create.create_engine", mocked_create_engine)
monkeypatch.setattr(sqlalchemy.engine.create, "create_engine", mocked_create_engine)
monkeypatch.setattr(sqlalchemy.engine, "create_engine", mocked_create_engine)
I've gotten the basic examples from the pytest docs to work, but they don't cover mocking a plain function from a library. Does anyone have any suggestions on what I'm doing wrong?
So I've found a solution for my problem, but I'm still not clear on why the above code doesn't work.
If I change my create_db.py to directly call sqlalchemy.create_engine, the mocking function works.
create_db.py:
import sqlalchemy
def init_db():
return sqlalchemy.create_engine("sqlite://")
test_db.py:
import sqlalchemy
from create_db import init_db
class MockEngine:
def __init__(self, path):
self.path = path
def test_correct_db_path_selected(monkeypatch):
def mocked_create_engine(path):
return MockEngine(path=path)
monkeypatch.setattr(sqlalchemy, "create_engine", mocked_create_engine)
engine = init_db()
assert engine.path == "sqlite://"
I don't mind changing my code to make it more testable, but I'd still like to know if it's possible to mock the original create_engine call. I'll leave the question and answer up in case anyone else runs into the same problem.
Edit:
I found a solution that doesn't involve changing the code to be tested. The following call to setattr will mock a function that the module under test imported directly into its own namespace:
monkeypatch.setattr(create_db, "create_engine", mocked_create_engine)
This works because it tells monkeypatch to patch the create_engine name inside create_db.py, which is the name that module actually calls.
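Putting it together, the full test under that approach would look roughly like this (my sketch, reusing the names from above):
import create_db
from create_db import init_db

def test_correct_db_path_selected(monkeypatch):
    def mocked_create_engine(path, **kwargs):
        # accept the echo/future keyword arguments that init_db passes through
        return "test_connection_string"
    # patch the 'create_engine' name that create_db.py imported, not sqlalchemy's own
    monkeypatch.setattr(create_db, "create_engine", mocked_create_engine)
    assert init_db() == "test_connection_string"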
I have the following code for which I'm attempting to create a test (still a work in progress):
from core.tests import BaseTestCase
from core.views import get_request
from entidades.forms import InstituicaoForm
from mock import patch
class InstituicaoFormTestCase(BaseTestCase):
def setUp(self):
super(InstituicaoFormTestCase, self).setUp()
    @patch('get_request', return_value={'user': 'usuario_qualquer'})
def test_salva_instituicao_quando_informaram_convenio():
import pdb
pdb.set_trace()
form = InstituicaoForm()
It fails because when I try to create an InstituicaoForm, get_request is called:
def get_request():
return getattr(THREAD_LOCAL, 'request', None)
and it throws this error:
entidades/tests.py:11: in <module>
class InstituicaoFormTestCase(BaseTestCase):
entidades/tests.py:16: in InstituicaoFormTestCase
    @patch('get_request', return_value={'user': 'usuario_qualquer'})
.tox/unit/local/lib/python2.7/site-packages/mock/mock.py:1670: in patch
getter, attribute = _get_target(target)
.tox/unit/local/lib/python2.7/site-packages/mock/mock.py:1522: in _get_target
(target,))
E TypeError: Need a valid target to patch. You supplied: 'get_request'
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /home/vinicius/telessaude/.tox/unit/local/lib/python2.7/site-packages/mock/mock.py(1522)_get_target()
-> (target,))
What am I doing wrong? How should I mock this get_request() method?
I think the specific thing you're trying to do can be done like this:
@patch('core.views.get_request', return_value={'user': 'usuario_qualquer'})
But you should also look at the Django testing documentation, if you haven't already. You can use the testing client to fake the web request.
If you want to experiment with mock tests that don't access a database, check out Django Mock Queries. (I'm a small contributor to that project.) I've also tried mocking views, but it's fiddly.
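For completeness, applied to the test case in the question the decorator would look roughly like this (my sketch; note that @patch injects the mock as an extra argument after self):
from mock import patch
from core.tests import BaseTestCase
from entidades.forms import InstituicaoForm

class InstituicaoFormTestCase(BaseTestCase):
    @patch('core.views.get_request', return_value={'user': 'usuario_qualquer'})
    def test_salva_instituicao_quando_informaram_convenio(self, mock_get_request):
        form = InstituicaoForm()
        # assertions about the form go here
One caveat: if entidades.forms imports the function directly (from core.views import get_request), the target may need to be 'entidades.forms.get_request' instead, since patch has to replace the name where it is looked up.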
I've been using gaesessions for a while and it's been working great. It's simple and fast.
But today I started a new project (GAE v1.3.7) and can't get it to work: get_current_session() just returns None.
I've split the code into a new project that uses nothing but gaesessions:
import os
import logging

from google.appengine.ext import webapp
from google.appengine.ext.webapp import util
from google.appengine.ext.webapp import template
from gaesessions import get_current_session
class MainHandler(webapp.RequestHandler):
def get(self):
session = get_current_session()
logging.info(session)
path = os.path.join(os.path.dirname(__file__), 'index.html')
self.response.out.write(template.render(path, { }))
And in the log it just says None.
The strange part is that it still works in my other projects (I'm guessing that's because the db.Model for Session is already present). I tested both the version I use there and the latest version (1.04), which I downloaded fresh.
I've looked at the source code for gaesessions and it kind of makes sense that it returns None:
_current_session = None
def get_current_session():
"""Returns the session associated with the current request."""
return _current_session
And I can't find anywhere that the Session class is instantiated, but then again it could be my lacking Python skills.
Does anyone use gaesessions and know what's happening?
I think you have missed something in the installation process.
In any case, if you scroll down the source code you will also notice this part, which is what actually sets the _current_session variable:
def __call__(self, environ, start_response):
# initialize a session for the current user
global _current_session
_current_session = Session(lifetime=self.lifetime, no_datastore=self.no_datastore, cookie_only_threshold=self.cookie_only_thresh, cookie_key=self.cookie_key)
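The installation step that is easy to miss is wiring that middleware into your app, since __call__ only runs if the middleware wraps your WSGI application. With gaesessions this is normally done in appengine_config.py, roughly like this (the cookie_key value is a placeholder you should replace with your own secret):
from gaesessions import SessionMiddleware

def webapp_add_wsgi_middleware(app):
    # wrap every WSGI app so SessionMiddleware.__call__ runs and sets _current_session
    app = SessionMiddleware(app, cookie_key="some-long-random-secret")
    return app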
Background
So let's say I'm making an app for GAE, and I want to use API hooks.
BIG EDIT: In the original version of this question, I described my use case, but some folks correctly pointed out that it was not really suited for API Hooks. Granted! Consider me helped. But now my issue is academic: I still don't know how to use hooks in practice, and I'd like to. I've rewritten my question to make it much more generic.
Code
So I make a model like this:
class Model(db.Model):
user = db.UserProperty(required=True)
    def pre_put(self):
        # Sets a value, raises an exception, whatever. Use your imagination
        pass
And then I create a db_hooks.py:
from google.appengine.api import apiproxy_stub_map
def patch_appengine():
def hook(service, call, request, response):
assert service == 'datastore_v3'
if call == 'Put':
for entity in request.entity_list():
entity.pre_put()
apiproxy_stub_map.apiproxy.GetPreCallHooks().Append('preput',
hook,
'datastore_v3')
Being TDD-addled, I'm making all this using GAEUnit, so in gaeunit.py, just above the main method, I add:
import db_hooks
db_hooks.patch_appengine()
And then I write a test that instantiates and puts a Model.
Question
While patch_appengine() is definitely being called, the hook never is. What am I missing? How do I make the pre_put function actually get called?
Hooks are a little low level for the task at hand. What you probably want is a custom property class. DerivedProperty, from aetycoon, is just the ticket.
Bear in mind, however, that the 'nickname' field of the user object is probably not what you want - per the docs, it's simply the user part of the email field if they're using a gmail account, otherwise it's their full email address. You probably want to let users set their own nicknames, instead.
The issue here is that within the context of the hook() function an entity is not an instance of db.Model as you are expecting.
In this context, entity is the protocol buffer class confusingly referred to as entity (entity_pb). Think of it like a JSON representation of your real entity: all the data is there, and you could build a new instance from it, but there is no reference to the memory-resident instance that is waiting for its callback.
Monkey patching all of the various put/delete methods is the best way to set up Model-level callbacks as far as I know†
Since there don't seem to be many resources on how to do this safely with the newer async calls, here's a BaseModel that implements before_put, after_put, before_delete & after_delete hooks:
import logging

from google.appengine.ext import db


class HookedModel(db.Model):
def before_put(self):
logging.error("before put")
def after_put(self):
logging.error("after put")
def before_delete(self):
logging.error("before delete")
def after_delete(self):
logging.error("after delete")
def put(self):
return self.put_async().get_result()
def delete(self):
return self.delete_async().get_result()
def put_async(self):
return db.put_async(self)
def delete_async(self):
return db.delete_async(self)
Inherit your model-classes from HookedModel and override the before_xxx,after_xxx methods as required.
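For instance, a hypothetical model that overrides one of the hooks might look like this (Account and email are just illustrative names):
class Account(HookedModel):
    email = db.StringProperty(required=True)

    def before_put(self):
        # normalize data right before the entity is written
        self.email = self.email.lower()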
Place the following code somewhere that will get loaded globally in your application (like main.py if you use a pretty standard looking layout). This is the part that calls our hooks:
from google.appengine.ext import db


def normalize_entities(entities):
if not isinstance(entities, (list, tuple)):
entities = (entities,)
return [e for e in entities if hasattr(e, 'before_put')]
# monkeypatch put_async to call entity.before_put
db_put_async = db.put_async
def db_put_async_hooked(entities, **kwargs):
ents = normalize_entities(entities)
for entity in ents:
entity.before_put()
a = db_put_async(entities, **kwargs)
get_result = a.get_result
def get_result_with_callback():
for entity in ents:
entity.after_put()
return get_result()
a.get_result = get_result_with_callback
return a
db.put_async = db_put_async_hooked
# monkeypatch delete_async to call entity.before_delete
db_delete_async = db.delete_async
def db_delete_async_hooked(entities, **kwargs):
ents = normalize_entities(entities)
for entity in ents:
entity.before_delete()
a = db_delete_async(entities, **kwargs)
get_result = a.get_result
def get_result_with_callback():
for entity in ents:
entity.after_delete()
return get_result()
a.get_result = get_result_with_callback
return a
db.delete_async = db_delete_async_hooked
You can save or destroy your instances via model.put(), or via any of the db.put(), db.put_async(), etc. functions, and get the desired effect.
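For example, with the illustrative Account model from above:
acct = Account(email="Person@Example.com")
acct.put()     # runs Account.before_put (lowercasing the email), then after_put
acct.delete()  # runs the default before_delete and after_delete from HookedModel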
†would love to know if there is an even better solution!?
I don't think that hooks are really going to solve this problem. The hooks will only run in the context of your App Engine application, but the user can change their nickname outside of your application using their Google Account settings. If they do that, it won't trigger any logic implemented in your hooks.
I think that the real solution to your problem is for your application to manage its own nickname that is independent of the one exposed by the Users entity.
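A minimal sketch of that idea, assuming the plain db API used elsewhere in this thread:
class UserProfile(db.Model):
    # application-managed nickname, independent of users.User.nickname()
    user = db.UserProperty(required=True)
    nickname = db.StringProperty()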