When one of my unit tests deletes a SQLAlchemy object, the object triggers an after_delete event which triggers a Celery task to delete a file from the drive.
The task runs with CELERY_ALWAYS_EAGER = True when testing.
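For reference, a minimal sketch of the kind of test configuration this implies (the class name and TESTING flag are illustrative; CELERY_ALWAYS_EAGER and CELERY_EAGER_PROPAGATES_EXCEPTIONS are standard Celery 3.x setting names):

class TestConfig(object):
    TESTING = True
    # run tasks synchronously in the test process instead of sending them to a worker
    CELERY_ALWAYS_EAGER = True
    # re-raise task exceptions so the test sees them
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True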
Here is a gist to reproduce the issue easily.
The example has two tests. One triggers the task in the event, the other outside the event. Only the one in the event closes the connection.
To quickly reproduce the error you can run:
git clone https://gist.github.com/5762792fc1d628843697.git
cd 5762792fc1d628843697
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
python test.py
The stack:
$ python test.py
E
======================================================================
ERROR: test_delete_task (__main__.CeleryTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 73, in test_delete_task
db.session.commit()
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 150, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 776, in commit
self.transaction.commit()
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit
self._prepare_impl()
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl
self.session.flush()
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush
self._flush(objects)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush
transaction.rollback(_capture_exception=True)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 63, in __exit__
compat.reraise(type_, value, traceback)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush
transaction.rollback(_capture_exception=True)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 393, in rollback
self._assert_active(prepared_ok=True, rollback_ok=True)
File "/home/brice/Code/5762792fc1d628843697/venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 223, in _assert_active
raise sa_exc.ResourceClosedError(closed_msg)
ResourceClosedError: This transaction is closed
----------------------------------------------------------------------
Ran 1 test in 0.014s
FAILED (errors=1)
I think I found the problem - it's in how you set up your Celery task. If you remove the app context call from your celery setup, everything runs fine:
class ContextTask(TaskBase):
    abstract = True

    def __call__(self, *args, **kwargs):
        # deleted --> with app.app_context():
        return TaskBase.__call__(self, *args, **kwargs)
There's a big warning in the SQLAlchemy docs about never modifying the session during after_delete events: http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.MapperEvents.after_delete
So I suspect the with app.app_context(): is being called during the delete, trying to attach to and/or modify the session that Flask-SQLAlchemy stores in the app object, and therefore the whole thing is bombing.
Flask-SQLAlchemy does a lot of magic behind the scenes for you, but you can bypass this and use SQLAlchemy directly. If you need to talk to the database during the delete event, you can create a new session to the db:
@celery.task()
def my_task():
    # obviously here I create a new object
    session = db.create_scoped_session()
    session.add(User(id=13, value="random string"))
    session.commit()
    return
But it sounds like you don't need this, you're just trying to delete an image path. In that case, I would just change your task so it takes a path:
# instance will call the task
@event.listens_for(User, "after_delete")
def after_delete(mapper, connection, target):
    my_task.delay(target.value)

@celery.task()
def my_task(image_path):
    os.remove(image_path)
Hopefully that's helpful - let me know if any of that doesn't work for you. Thanks for the very detailed setup, it really helped in debugging.
Similar to the answer suggested by deBrice, but using an approach similar to Rachel's.
class ContextTask(TaskBase):
    abstract = True

    def __call__(self, *args, **kwargs):
        import flask
        # tests will be run in the unittest app context
        if flask.current_app:
            return TaskBase.__call__(self, *args, **kwargs)
        else:
            # actual workers need to enter the worker app context
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
Ask, the creator of Celery, suggested this solution on GitHub:
from celery import signals

def make_celery(app):
    ...

    @signals.task_prerun.connect
    def add_task_flask_context(sender, **kwargs):
        if not sender.request.is_eager:
            sender.request.flask_context = app.app_context().__enter__()

    @signals.task_postrun.connect
    def cleanup_task_flask_context(sender, **kwargs):
        flask_context = getattr(sender.request, 'flask_context', None)
        if flask_context is not None:
            flask_context.__exit__(None, None, None)
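For completeness, a hedged sketch of how this factory might be wired up (the Flask app construction is illustrative and not part of the original snippet, and it assumes make_celery returns the configured Celery instance):

from flask import Flask

app = Flask(__name__)
# the signal handlers above push/pop an app context around each non-eager (real worker) task;
# eager tasks run in the caller's process and keep whatever context is already active
celery = make_celery(app)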
I am working from the cookiecutter Flask template, which uses the application factory pattern. I had Celery working for tasks that did not use the application context, but one of my tasks does need it; it makes a database query and updates a database object. Right now I don't have a circular import error (though I've had them with other attempts) but a maximum recursion depth error.
I consulted this blog post about how to use Celery with the application factory pattern, and I'm trying to follow this Stack Overflow answer closely, since it has a structure apparently also derived from cookiecutter Flask.
Relevant portions of my project structure:
cookiecutter_mbam
│ celeryconfig.py
│
└───cookiecutter_mbam
| __init__.py
│ app.py
│ run_celery.py
│
└───utility
| celery_utils.py
|
└───derivation
| tasks.py
|
└───storage
| tasks.py
|
└───xnat
tasks.py
__init__.py:
"""Main application package."""
from celery import Celery
celery = Celery('cookiecutter_mbam', config_source='cookiecutter_mbam.celeryconfig')
Relevant portion of app.py:
from cookiecutter_mbam import celery

def create_app(config_object='cookiecutter_mbam.settings'):
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask(__name__.split('.')[0])
    app.config.from_object(config_object)
    init_celery(app, celery=celery)
    register_extensions(app)
    # ...
    return app
run_celery.py:
from cookiecutter_mbam.app import create_app
from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery
app = create_app(config_object='cookiecutter_mbam.settings')
init_celery(app, celery)
celeryconfig.py:
broker_url = 'redis://localhost:6379'
result_backend = 'redis://localhost:6379'
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
enable_utc = True
imports = {'cookiecutter_mbam.xnat.tasks', 'cookiecutter_mbam.storage.tasks', 'cookiecutter_mbam.derivation.tasks'}
Relevant portion of celery_utils.py:
def init_celery(app, celery):
    """Add flask app context to celery.Task"""
    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)
    celery.Task = ContextTask
    return celery
When I try to start the worker using celery -A cookiecutter_mbam.run_celery:celery worker I get a RecursionError: maximum recursion depth exceeded while calling a Python object error. (I also have tried several other ways to invoke the worker, all with the same error.) Here's an excerpt from the stack trace:
Traceback (most recent call last):
File "/Users/katie/anaconda/bin/celery", line 11, in <module>
sys.exit(main())
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/__main__.py", line 16, in main
_main()
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in handle_argv
return self.execute(command, argv)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 420, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
*self.parse_options(prog_name, argv, command))
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 398, in parse_options
self.parser = self.create_parser(prog_name, command)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 414, in create_parser
self.add_arguments(parser)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 277, in add_arguments
default=conf.worker_state_db,
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 429, in __getitem__
return getitem(k)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 278, in __getitem__
return mapping[_key]
File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
if key in self.data:
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
if key in self.data:
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
I understand the basic sense of this error -- something is calling itself infinitely. Maybe create_app. But I can't see why, and I don't know how to go about debugging this.
I'm also getting this when I try to load my site:
File "~/cookiecutter_mbam/cookiecutter_mbam/xnat/tasks.py", line 14, in <module>
@celery.task
AttributeError: module 'cookiecutter_mbam.celery' has no attribute 'task'
I did not have this problem when I was using the make_celery method described here, but that method creates circular import problems when you need your tasks to access the application context. Pointers on how to do this correctly with the Cookiecutter Flask template would be much appreciated.
I'm suspicious of that bit of code that's making the Flask app available to celery. It's skipping over some essential code by going directly to run(). (See https://github.com/celery/celery/blob/master/celery/app/task.py#L387)
Try calling the inherited __call__. Here's a snippet from one of my (working) apps.
# Arrange for tasks to have access to the Flask app
TaskBase = celery.Task

class ContextTask(TaskBase):
    def __call__(self, *args, **kwargs):
        with app.app_context():
            return TaskBase.__call__(self, *args, **kwargs)  ## << here

celery.Task = ContextTask
I also don't see where you're creating an instance of Celery and configuring it. I assume you have
celery = Celery(__name__)
and then need to
celery.config_from_object(...)
from somewhere within init_celery()
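Putting those two suggestions together, here is a hedged sketch of what init_celery could look like (module names are taken from the question; the essential change is calling the inherited __call__ rather than self.run()):

from celery import Celery

celery = Celery(__name__)

def init_celery(app, celery):
    """Configure Celery and make every task run inside a Flask app context."""
    celery.config_from_object('cookiecutter_mbam.celeryconfig')  # assumed config module path
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                # call the inherited __call__ so Celery's own setup code still runs
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery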
This is solved. I had my celeryconfig.py in the wrong place. I needed to move it to the package directory, not the parent repo directory. It is incredibly unintuitive/uninformative that a misplaced config file, rather than causing an "I can't find that file"-type error, causes an infinite recursion. But at least I finally saw it and corrected it.
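For anyone else hitting this, the corrected layout (mirroring the structure above; only the config file moves) looks roughly like this, which also makes 'cookiecutter_mbam.celeryconfig' importable as referenced in __init__.py:

cookiecutter_mbam
│
└───cookiecutter_mbam
    │ __init__.py
    │ app.py
    │ celeryconfig.py
    │ run_celery.py
    │ ...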
Getting the following error running a celery task, even with a Flask application context:
raised unexpected: RuntimeError('Working outside of application context.\n\nThis typically means that you attempted to use functionality that needed\nto interface with the current application object in some way. To solve\nthis, set up an application context with app.app_context(). See the\ndocumentation for more information.',)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/app/example.py", line 172, in start_push_task
}, data=data)
File "/app/push.py", line 65, in push
if user and not g.get('in_celery_task') and 'got_user' not in g:
File "/usr/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/python3.6/site-packages/werkzeug/local.py", line 306, in _get_current_object
return self.__local()
File "/usr/lib/python3.6/site-packages/flask/globals.py", line 44, in _lookup_app_object
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
Any way to fix this?
For me, the issue was that I had import celery instead of from app import celery.
Here's some more of my setup code for anyone who stumbles across here in the future:
app.py
def make_celery(app):
    app.config['broker_url'] = 'amqp://rabbitmq:rabbitmq@rabbit:5672/'
    app.config['result_backend'] = 'rpc://rabbitmq:rabbitmq@rabbit:5672/'
    celery = Celery(app.import_name, backend=app.config['result_backend'], broker=app.config['broker_url'])
    celery.conf.update(app.config)

    class ContextTask(Task):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.test_request_context():
                g.in_celery_task = True
                res = self.run(*args, **kwargs)
                return res

    celery.Task = ContextTask
    celery.config_from_object(__name__)
    celery.conf.timezone = 'UTC'
    return celery

celery = make_celery(app)
In the other file:
from app import celery
I am getting this when using a Scrapy parsing function (which can sometimes take up to 10 minutes) inside a Celery task.
I use:
- Django==1.6.5
- django-celery==3.1.16
- celery==3.1.16
- psycopg2==2.5.5 (I used also psycopg2==2.5.4)
[2015-07-19 11:27:49,488: CRITICAL/MainProcess] Task myapp.parse_items[63fc40eb-c0d6-46f4-a64e-acce8301d29a] INTERNAL ERROR: InterfaceError('connection already closed',)
Traceback (most recent call last):
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/celery/app/trace.py", line 284, in trace_task
uuid, retval, SUCCESS, request=task_request,
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/celery/backends/base.py", line 248, in store_result
request=request, **kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/backends/database.py", line 29, in _store_result
traceback=traceback, children=self.current_task_children(request),
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 42, in _inner
return fun(*args, **kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 181, in store_result
'meta': {'children': children}})
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 87, in update_or_create
return get_queryset(self).update_or_create(**kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 70, in update_or_create
obj, created = self.get_or_create(**kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 376, in get_or_create
return self.get(**lookup), False
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 304, in get
num = len(clone)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 77, in __len__
self._fetch_all()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 857, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 220, in iterator
for row in compiler.results_iter():
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 713, in results_iter
for rows in self.execute_sql(MULTI):
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 785, in execute_sql
cursor = self.connection.cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 160, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 134, in _cursor
return self.create_cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 134, in _cursor
return self.create_cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 137, in create_cursor
cursor = self.connection.cursor()
InterfaceError: connection already closed
Unfortunately this is a problem with the django + psycopg2 + celery combo.
It's an old and unsolved problem.
Take a look at this thread to understand it:
https://github.com/celery/django-celery/issues/121
Basically, when Celery starts a worker, it forks the database connection from the django.db framework. If this connection drops for some reason, it doesn't create a new one. Celery can't do anything about this, since there is no way to detect the dropped database connection through the django.db libraries. Django doesn't notify you when it happens, because it just opens a connection per WSGI call (there is no connection pool). I had the same problem in a huge production environment with a lot of worker machines, and sometimes these machines lost connectivity with the postgres server.
I solved it by putting each Celery master process under a Linux supervisord handler with a watcher, and by implementing a decorator that handles psycopg2.InterfaceError; when that exception occurs, the decorator dispatches a syscall so that supervisord restarts the Celery process gracefully with SIGINT.
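For what it's worth, a rough sketch of that decorator approach (entirely illustrative, since the original code is not shown; the exact signalling mechanism is an assumption):

import functools
import os
import signal

import psycopg2

def restart_on_interface_error(func):
    """Hypothetical decorator: when the db connection is gone, SIGINT the process so supervisord restarts it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except psycopg2.InterfaceError:
            os.kill(os.getpid(), signal.SIGINT)  # assumed: supervisord is configured to bring the process back up
            raise
    return wrapper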
Edit:
Found a better solution. I implemented a celery task baseclass like this:
from django.db import connection
import celery

class FaultTolerantTask(celery.Task):
    """Implements after return hook to close the invalid connection.
    This way, django is forced to serve a new connection for the next
    task.
    """
    abstract = True

    def after_return(self, *args, **kwargs):
        connection.close()

@celery.task(base=FaultTolerantTask)
def my_task():
    # my database dependent code here
    pass
I believe it will fix your problem too.
Guys and emanuelcds,
I had the same problem, now I have updated my code and created a new loader for celery:
from djcelery.loaders import DjangoLoader
from django import db

class CustomDjangoLoader(DjangoLoader):
    def on_task_init(self, task_id, task):
        """Called before every task."""
        for conn in db.connections.all():
            conn.close_if_unusable_or_obsolete()
        super(CustomDjangoLoader, self).on_task_init(task_id, task)
This is of course only if you are using djcelery; it will also require something like this in the settings:
CELERY_LOADER = 'myproject.loaders.CustomDjangoLoader'
os.environ['CELERY_LOADER'] = CELERY_LOADER
I still have to test it, I will update.
If you are running into this when running tests, you can either change the test to use the TransactionTestCase class instead of TestCase, or add the pytest.mark.django_db(transaction=True) mark. This kept my db connection alive from the creation of the pytest-celery fixtures through to the database calls.
Github issue - https://github.com/Koed00/django-q/issues/167
For context, I am using pytest-celery with celery_app and celery_worker as fixtures in my tests. I am also trying to hit the test db in the tasks referenced in these tests.
If someone could explain why switching to transaction=True keeps the connection open, that would be great!
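For reference, a minimal sketch of the test shape being described (celery_app and celery_worker are the real pytest-celery fixture names; the task import and assertion are placeholders):

import pytest

from myapp.tasks import my_task  # placeholder: a task that queries the test database

@pytest.mark.django_db(transaction=True)  # real transactions, so the connection stays usable for the worker thread
def test_task_can_reach_test_db(celery_app, celery_worker):
    result = my_task.delay()
    assert result.get(timeout=10) is not None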
I'm trying to figure out how to setup Test Driven Development for GAE.
I start the tests with:
nosetests -v --with-gae
I keep getting the error:
InternalError: table "dev~guestbook!!Entities" already exists
The datastore doesn't exist until I create it in setUp(), so why am I getting an error that the entities table already exists?
I'm using the code from the GAE tutorial.
Here is my testing code in functional_tests.py:
import sys, os, subprocess, time, unittest, shlex, logging
sys.path.append("/usr/local/google_appengine")
sys.path.append("/usr/local/google_appengine/lib/yaml/lib")
sys.path.append("/usr/local/google_appengine/lib/webapp2-2.5.2")
sys.path.append("/usr/local/google_appengine/lib/django-1.5")
sys.path.append("/usr/local/google_appengine/lib/cherrypy")
sys.path.append("/usr/local/google_appengine/lib/concurrent")
sys.path.append("/usr/local/google_appengine/lib/docker")
sys.path.append("/usr/local/google_appengine/lib/requests")
sys.path.append("/usr/local/google_appengine/lib/websocket")
sys.path.append("/usr/local/google_appengine/lib/fancy_urllib")
sys.path.append("/usr/local/google_appengine/lib/antlr3")
from selenium import webdriver
from google.appengine.api import memcache, apiproxy_stub, apiproxy_stub_map
from google.appengine.ext import db
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
from google.appengine.tools.devappserver2 import devappserver2
class NewVisitorTest(unittest.TestCase):

    def setUp(self):
        # Start the dev server
        cmd = "/usr/local/bin/dev_appserver.py /Users/Bryan/work/GoogleAppEngine/guestbook/app.yaml --port 8080 --storage_path /tmp/datastore --clear_datastore --skip_sdk_update_check"
        self.dev_appserver = subprocess.Popen(shlex.split(cmd),
                                              stdout=subprocess.PIPE)
        time.sleep(2)  # Important, let dev_appserver start up

        self.testbed = testbed.Testbed()
        self.testbed.setup_env(app_id='dermal')
        self.testbed.activate()
        self.testbed.init_user_stub()

        # Create a consistency policy that will simulate the High Replication consistency model.
        # With a probability of 1, the datastore should be available.
        self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=1)
        # Initialize the datastore stub with this policy.
        self.testbed.init_datastore_v3_stub(datastore_file="/tmp/datastore/datastore.db", use_sqlite=True, consistency_policy=self.policy)
        self.testbed.init_memcache_stub()
        self.datastore_stub = apiproxy_stub_map.apiproxy.GetStub('datastore_v3')

        # setup the dev_appserver
        APP_CONFIGS = ['app.yaml']

        # setup client to make sure the admin user exists
        from guestbook import Author, Greeting
        if not Author.query(Author.email == "bryan@mail.com").get():
            logging.info("create Admin")
            client = Author(
                email="bryan@mail.com",
            ).put()
        assert Author.query(Author.email == "bryan@mail.com").get()

        self.browser = webdriver.Firefox()
        self.browser.implicitly_wait(3)

    def tearDown(self):
        self.browser.quit()
        self.testbed.deactivate()
        self.dev_appserver.terminate()

    def test_submit_anon_greeting(self):
        self.browser.get('http://localhost:8080')
        self.browser.find_element_by_name('content').send_keys('Anonymous test post')
        self.browser.find_element_by_name('submit').submit()
        assert 'Anonymous test post' in self.browser.page_source
Here is the traceback:
test_submit_anon_greeting (functional_tests.NewVisitorTest) ... INFO 2015-05-11 14:41:40,516 devappserver2.py:745] Skipping SDK update check.
INFO 2015-05-11 14:41:40,594 api_server.py:190] Starting API server at: http://localhost:59656
INFO 2015-05-11 14:41:40,598 dispatcher.py:192] Starting module "default" running at: http://localhost:8080
INFO 2015-05-11 14:41:40,600 admin_server.py:118] Starting admin server at: http://localhost:8000
WARNING 2015-05-11 14:41:45,008 tasklets.py:409] suspended generator _run_to_list(query.py:964) raised InternalError(table "dev~guestbook!!Entities" already exists)
ERROR 2015-05-11 14:41:45,009 webapp2.py:1552] table "dev~guestbook!!Entities" already exists
Traceback (most recent call last):
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/Users/Bryan/work/GoogleAppEngine/guestbook/guestbook.py", line 50, in get
greetings = greetings_query.fetch(10)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/utils.py", line 142, in positional_wrapper
return wrapped(*args, **kwds)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/query.py", line 1187, in fetch
return self.fetch_async(limit, **q_options).get_result()
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 325, in get_result
self.check_success()
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 368, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/query.py", line 964, in _run_to_list
batch = yield rpc
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 454, in _on_rpc_completion
result = rpc.get_result()
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/datastore/datastore_query.py", line 2870, in __query_result_hook
self._batch_shared.conn.check_rpc_success(rpc)
File "/Users/Bryan/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1342, in check_rpc_success
raise _ToDatastoreError(err)
InternalError: table "dev~guestbook!!Entities" already exists
It looks like there are a couple of things happening here.
First, it looks like you are using NoseGAE --with-gae. The plugin handles setting up and tearing down your testbed so you don't have to. This means that you do not need any of the self.testbed code and actually it can cause conflicts internally. Either switch to doing it the NoseGAE way, or don't use the --with-gae flag. If you stick with NoseGAE, it has an option --gae-datastore that lets you set the path to the datastore that it will use for your tests. Then inside your test class, set the property nosegae_datastore_v3 = True to have it set up for you:
class NewVisitorTest(unittest.TestCase):
    # enable the datastore stub
    nosegae_datastore_v3 = True
Second, the way that dev_appserver / sqlite work together, the appserver loads the sqlite db file into memory and works with it there. When the app server exits, it flushes the database contents back to disk. Since you are using the same datastore for your tests as the dev_appserver.py process you are opening for selenium, they may or may not see the fixture data you set up inside your test.
Here is an example from https://github.com/Trii/NoseGAE/blob/master/nosegae.py#L124-L140
class MyTest(unittest.TestCase):
    nosegae_datastore_v3 = True
    nosegae_datastore_v3_kwargs = {
        'datastore_file': '/tmp/nosegae.sqlite3',
        'use_sqlite': True
    }

    def test_something(self):
        entity = MyModel(name='NoseGAE')
        entity.put()
        self.assertIsNotNone(entity.key.id())
I guess this line might be the faulty one:
self.testbed.init_datastore_v3_stub(datastore_file="/tmp/datastore/datastore.db", use_sqlite=True, consistency_policy=self.policy)
Setting datastore_file="/tmp/datastore/datastore.db" indicates that you want to reuse this existing datastore in your tests.
The python code documentation says:
The 'datastore_file' argument can be the path to an existing
datastore file, or None (default) to use an in-memory datastore
that is initially empty.
Personally, I use this in my tests:
def setUp(self):
    self.testbed = testbed.Testbed()
    self.testbed.activate()
    self.testbed.init_datastore_v3_stub(
        consistency_policy=datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
    )
    self.testbed.init_memcache_stub()

def tearDown(self):
    self.testbed.deactivate()
I've written this simple code to test my model:
class NDBTestCase(unittest.TestCase):
    def setUp(self):
        logging.getLogger().setLevel(logging.INFO)
        # First, create an instance of the Testbed class.
        self.testbed = testbed.Testbed()
        # Then activate the testbed, which prepares the service stubs for use.
        self.testbed.activate()
        # Next, declare which service stubs you want to use.
        self.testbed.init_datastore_v3_stub()
        self.testbed.init_memcache_stub()
        self.owner = m_User(username="owner")
        self.owner.put()

    def tearDown(self):
        self.testbed.deactivate()

    def testClub(self):
        # this id is manually assigned
        club = m_Club(id="1", name="test", email="test@test.com", description="desc", url="example.com",
                      owners=[self.owner.key], training_type=["balance", "stability"], tags=["test", "trento"])
        club.put()
In the model, owners is declared like this: owners = ndb.KeyProperty(kind="User", repeated=True)
If I run this code with unittest it works perfectly.
I tried to run it with unittest and NoseGAE and it fails with a problem with the Key:
Traceback (most recent call last):
File "/Users/stefano/Documents/SW/gymcentral/tester_ndb.py", line 25, in setUp
self.owner.put()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/model.py", line 3379, in _put
return self._put_async(**ctx_options).get_result()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 325, in get_result
self.check_success()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 368, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/context.py", line 810, in put
key = yield self._put_batcher.add(entity, options)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 371, in _help_tasklet_along
value = gen.send(val)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/context.py", line 350, in _put_tasklet
ent._key = key
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/model.py", line 1363, in __set__
self._set_value(entity, value)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/model.py", line 1513, in _set_value
value = _validate_key(value, entity=entity)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/model.py", line 1481, in _validate_key
raise datastore_errors.BadValueError('Expected Key, got %r' % value)
BadValueError: Expected Key, got Key('User', 1)
Any idea why?
I run the test from the console with this command: nosetests tester_ndb.py --with-gae
Could you try running nose with the flag --without-sandbox?
There seems to be an older issue on the former tracker about the same thing.
https://code.google.com/p/nose-gae/issues/detail?id=60
I recently took over maintaining NoseGAE so I will look into the root cause as well but I am assuming it is something internal to the App Engine SDK.
EDIT: --without-sandbox was removed in NoseGAE 0.4.0 when I migrated it to dev_appserver2
I've gotten weird errors like this as well. I think there is a bug somewhere in Google's code. I got around it by tweaking my code to do the same thing in a slightly different way.
Try changing
self.owner.put()
to
ndb.put(self.owner)