If I try to run a test via PyCharm, I get this exception
...bin/python /usr/local/pycharm-4.5.1/helpers/pycharm/utrunner.py ...src/foo/foo/tests/FooEditTest.py::FooEditTest::test_issue_add true
Testing started at 15:59 ...
Traceback (most recent call last):
File "/usr/local/pycharm-4.5.1/helpers/pycharm/utrunner.py", line 139, in <module>
module = loadSource(a[0])
File "/usr/local/pycharm-4.5.1/helpers/pycharm/utrunner.py", line 41, in loadSource
module = imp.load_source(moduleName, fileName)
File "...src/foo/foo/tests/FooEditTest.py", line 45, in <module>
from foo.views.issue.forward import forward
File "...src/foo/foo/views/issue/forward.py", line 29, in <module>
class ForwardForm(forms.Form):
File "...src/foo/foo/views/issue/forward.py", line 36, in ForwardForm
group=forms.ModelChoiceField(Group.objects.exclude(groupconfig__no_issue=True).extra(
File "...python2.7/site-packages/django/db/models/manager.py", line 92, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "...python2.7/site-packages/django/db/models/query.py", line 698, in exclude
return self._filter_or_exclude(True, *args, **kwargs)
File "...python2.7/site-packages/django/db/models/query.py", line 707, in _filter_or_exclude
clone.query.add_q(~Q(*args, **kwargs))
File "...python2.7/site-packages/django/db/models/sql/query.py", line 1331, in add_q
clause, require_inner = self._add_q(where_part, self.used_aliases)
File "...python2.7/site-packages/django/db/models/sql/query.py", line 1358, in _add_q
current_negated=current_negated, connector=connector)
File "...python2.7/site-packages/django/db/models/sql/query.py", line 1182, in build_filter
lookups, parts, reffed_aggregate = self.solve_lookup_type(arg)
File "...python2.7/site-packages/django/db/models/sql/query.py", line 1120, in solve_lookup_type
_, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "...python2.7/site-packages/django/db/models/sql/query.py", line 1383, in names_to_path
field, model, direct, m2m = opts.get_field_by_name(name)
File "...python2.7/site-packages/django/db/models/options.py", line 416, in get_field_by_name
cache = self.init_name_map()
File "...python2.7/site-packages/django/db/models/options.py", line 445, in init_name_map
for f, model in self.get_all_related_m2m_objects_with_model():
File "...python2.7/site-packages/django/db/models/options.py", line 563, in get_all_related_m2m_objects_with_model
cache = self._fill_related_many_to_many_cache()
File "...python2.7/site-packages/django/db/models/options.py", line 577, in _fill_related_many_to_many_cache
for klass in self.apps.get_models():
File "...python2.7/site-packages/django/utils/lru_cache.py", line 101, in wrapper
result = user_function(*args, **kwds)
File "...python2.7/site-packages/django/apps/registry.py", line 168, in get_models
self.check_models_ready()
File "...python2.7/site-packages/django/apps/registry.py", line 131, in check_models_ready
raise AppRegistryNotReady("Models aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
Process finished with exit code 1
I use group=forms.ModelChoiceField(Group.objects.exclude(...))
This line gets executed during import. The call to django.setup() has not been made at that point.
I have no clue how to solve this:
Should I call django.setup()? But where to insert this line?
Avoid using MyModel.objects.filter(...) at import time? This would need a big refactoring, since we have several ModelChoiceFields.
You're currently running the tests using PyCharm's Python unit test runner pycharm-4.5.1/helpers/pycharm/utrunner.py.
This test runner is great for low-level unit tests (subclassed from unittest.TestCase) that don't touch Django features like the ORM.
If the tests rely on Django-specific things like the database, then your TestCases probably need to be subclassed from django.test.testcases.TestCase, and you'll need to go through PyCharm's Django test manager django_test_manage.py, which takes care of setting up the test database, initialising the model registry, and so on.
Run > Edit Configurations > Add New Configuration (+ button) > Django Tests
Set the target to foo.foo.tests.FooEditTest and make sure to put DJANGO_SETTINGS_MODULE=... in the environment variables if it doesn't find your settings.
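For the test class itself, a minimal sketch of what that looks like (the Group model here is only a stand-in; your real test will touch your own models):

from django.test import TestCase
from django.contrib.auth.models import Group

class FooEditTest(TestCase):
    # Run via the Django Tests configuration: the app registry and the
    # test database are already set up before this method executes.
    def test_issue_add(self):
        Group.objects.create(name='support')
        self.assertEqual(Group.objects.count(), 1)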
I faced the same problem while running test cases and solved it by adding this at the beginning of the test file, with the import statements:
import os
import sys
from django.core.handlers.wsgi import WSGIHandler
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
application = WSGIHandler()
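Alternatively, since the traceback is from Django 1.7+, you can call django.setup() explicitly after setting the settings module and before importing anything that defines models or forms. A minimal sketch (the settings path is an assumption):

import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')  # assumed settings module
django.setup()  # populates the app registry so AppRegistryNotReady is no longer raised

# only now import modules that define forms/models, e.g. foo.views.issue.forward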
Please show your exact code (especially the extra() clause). Calling filter() or exclude() at import time is not bad in itself, because querysets are lazy, but you might be evaluating the queryset here, which is what caused the exception.
Do not evaluate querysets at import time, because import statements are executed only once: e.g., if a new group is created, it won't be available as a choice for your ModelChoiceField.
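If you do want to keep the queryset out of import time (and keep the choices fresh) without dropping ModelChoiceField, one option is to assign the queryset in the form's __init__ instead of at class level. A rough sketch based on the field in the traceback, assuming Group is django.contrib.auth's Group with a related GroupConfig model:

from django import forms
from django.contrib.auth.models import Group

class ForwardForm(forms.Form):
    # placeholder queryset; the real one is built per form instance below
    group = forms.ModelChoiceField(queryset=Group.objects.none())

    def __init__(self, *args, **kwargs):
        super(ForwardForm, self).__init__(*args, **kwargs)
        # built at instantiation time, not at import time
        self.fields['group'].queryset = Group.objects.exclude(
            groupconfig__no_issue=True)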
A bit hard to say without seeing your code, but I had a similar issue once. For me it was related to a module that was being imported by my models and that also contained an import of my models.
ModelA --- imports --> service -- imports --> ModelA
I solved this by moving the import in my service to the method that needed it. So instead of putting it at the top of the service module, I limited the scope of the import to the method that needed it, thus avoiding importing the models while initializing the service module, as shown below. Phew, I wish I could draw it for you :)
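In code, the change looks roughly like this (module and model names are made up for illustration):

# service.py -- hypothetical sketch of breaking the ModelA <-> service cycle
def build_default_record():
    # deferred import: executed when the function runs,
    # not when service.py itself is imported
    from myapp.models import ModelA
    return ModelA(name='default')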
I want to set up a machine learning pipeline that is callable by Flask, but I am facing some issues. These links are for the documentation I have read so far:
https://exploreflask.com/en/latest/views.html#view-decorators
https://flask.palletsprojects.com/en/1.1.x/api/#flask.Flask
Let me explain the pipeline I have in mind:
pull a dataframe from a PostgreSQL database
encode said dataframe to make it ready for most algorithms
split up the data
feed to a pipeline and determine accuracy
store the model in a pickle file
What is working so far:
All parts are working as a regular script
I can just slap all the steps into one huge flask file with one decorator and it would run as well (my emergency solution)
The File Structure
The encoder script:
# Flask main thread
# makes flask start this part as application and not as module
app = Flask('encoder_module')

@app.route('/df_encoder')
def df_encoder(rng=4):
    # encoding stuff
    return df
The Pipeline script (random forest regressor here)
app = Flask('pipeline_module')

@app.route('/pipeline_rfr')
def pipeline_rfr():
    # pipeline stuff
    return grid_search_rfr
The pickle module:
app = Flask('pickle_module')

@app.route('/store_reg_pickle')
def store_pickle():
    """
    Usage of a Pickle Model - Storage of a trained Model
    """
    model = grid_search_rfr
    # specify file name in letter strings
    model_file = "regression_model"
    with open(model_file, mode='wb') as m_f:
        pickle.dump(model, m_f)
    print(f"Model saved in: {os.getcwd()}")
    return model_file
The Main Flask File
# packages
from flask import Flask
from encoder_main_thread import df_encoder
from rfr_pipeline_function import pipeline_rfr
from pickle_call import store_pickle

app = Flask(__name__.split('.')[0])

@app.route('/regression_pipe')
@df_encoder
@pipeline_rfr
@store_reg_pickle
def regression_pipe():
    return 'pipeline done'
The problem is that the return value of the encoder cannot be a dataframe, only a string, tuple, etc.
Is there a workaround for this?
I actually want the dataframe to be passed seamlessly to the pipeline and eventually stored in the pickle file, which is then saved in the folder.
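A workaround I can think of is to serialize the dataframe (e.g. with to_json()) so the view returns something Flask accepts, and rebuild it in the next stage, roughly like the sketch below, but that is not the seamless hand-off I am after:

import pandas as pd
from flask import Flask

app = Flask(__name__)

@app.route('/df_encoder')
def df_encoder():
    df = pd.DataFrame({'a': [1, 2, 3]})  # stand-in for the real encoding step
    return df.to_json(orient='split')    # a JSON string is a valid Flask response

# the next stage could rebuild the frame with:
# df = pd.read_json(json_string, orient='split')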
For some reason it cannot detect the pickle file import and throws the following error:
Use a production WSGI server instead.
* Debug mode: off
Traceback (most recent call last):
File "C:\ANACONDA3\Scripts\flask-script.py", line 9, in <module>
sys.exit(main())
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 967, in main
cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "C:\ANACONDA3\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "C:\ANACONDA3\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\ANACONDA3\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\ANACONDA3\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\ANACONDA3\lib\site-packages\click\decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "C:\ANACONDA3\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 848, in run_command
app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 305, in __init__
self._load_unlocked()
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 330, in _load_unlocked
self._app = rv = self.loader()
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 388, in load_app
app = locate_app(self, import_name, name)
File "C:\ANACONDA3\lib\site-packages\flask\cli.py", line 240, in locate_app
__import__(module_name)
File "C:\Users\bill-\OneDrive\Dokumente\Docs Bill\TA_files\functions_scripts_storage\flask_test\flask_regression_pipeline.py", line 18, in <module>
@store_reg_pickle
NameError: name 'store_reg_pickle' is not defined
If you wish, I could upload the entire scripts, but that is a lot to look through, and since everything works as one long regular piece of code, the mistake must be somewhere in my Flask setup.
I am getting this when using a Scrapy parsing function (that can sometimes take up to 10 minutes) inside a Celery task.
I use:
- Django==1.6.5
- django-celery==3.1.16
- celery==3.1.16
- psycopg2==2.5.5 (I used also psycopg2==2.5.4)
[2015-07-19 11:27:49,488: CRITICAL/MainProcess] Task myapp.parse_items[63fc40eb-c0d6-46f4-a64e-acce8301d29a] INTERNAL ERROR: InterfaceError('connection already closed',)
Traceback (most recent call last):
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/celery/app/trace.py", line 284, in trace_task
uuid, retval, SUCCESS, request=task_request,
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/celery/backends/base.py", line 248, in store_result
request=request, **kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/backends/database.py", line 29, in _store_result
traceback=traceback, children=self.current_task_children(request),
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 42, in _inner
return fun(*args, **kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 181, in store_result
'meta': {'children': children}})
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 87, in update_or_create
return get_queryset(self).update_or_create(**kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/djcelery/managers.py", line 70, in update_or_create
obj, created = self.get_or_create(**kwargs)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 376, in get_or_create
return self.get(**lookup), False
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 304, in get
num = len(clone)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 77, in __len__
self._fetch_all()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 857, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/query.py", line 220, in iterator
for row in compiler.results_iter():
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 713, in results_iter
for rows in self.execute_sql(MULTI):
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 785, in execute_sql
cursor = self.connection.cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 160, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 134, in _cursor
return self.create_cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 134, in _cursor
return self.create_cursor()
File "/home/mo/Work/python/pb-env/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 137, in create_cursor
cursor = self.connection.cursor()
InterfaceError: connection already closed
Unfortunately this is a problem with the django + psycopg2 + celery combo.
It's an old and unsolved problem.
Take a look at this thread to understand it:
https://github.com/celery/django-celery/issues/121
Basically, when celery starts a worker, it forks a database connection
from the django.db framework. If this connection drops for some reason, it
doesn't create a new one. Celery has nothing to do with this problem,
since there is no way to detect when the database connection is dropped
using the django.db libraries. Django doesn't notify you when it happens,
because it just starts a connection and receives a wsgi call (no
connection pool). I had the same problem in a huge production
environment with a lot of machine workers, and sometimes these
machines lost connectivity with the postgres server.
I solved it by putting each celery master process under a linux
supervisord handler and a watcher, and by implementing a decorator that
handles psycopg2.InterfaceError: when it happens, the decorator
dispatches a syscall to make supervisord gracefully restart the celery
process with SIGINT.
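The decorator itself is not shown here, but a rough sketch of the idea (the name and the exact restart mechanics are assumptions) would be:

import functools
import os
import signal

import psycopg2

def restart_worker_on_dead_connection(func):
    """Hypothetical sketch: on a dead DB connection, signal the
    supervisord-managed celery master so it restarts gracefully."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except psycopg2.InterfaceError:
            os.kill(os.getppid(), signal.SIGINT)  # SIGINT the parent (celery master) process
            raise
    return wrapper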
Edit:
Found a better solution. I implemented a celery task baseclass like this:
from django.db import connection
import celery

class FaultTolerantTask(celery.Task):
    """Implements an after-return hook to close the invalid connection.
    This way, django is forced to serve a new connection for the next
    task.
    """
    abstract = True

    def after_return(self, *args, **kwargs):
        connection.close()

@celery.task(base=FaultTolerantTask)
def my_task():
    # my database dependent code here
    pass
I believe it will fix your problem too.
Guys and emanuelcds,
I had the same problem; I have now updated my code and created a new loader for celery:
from djcelery.loaders import DjangoLoader
from django import db
class CustomDjangoLoader(DjangoLoader):
    def on_task_init(self, task_id, task):
        """Called before every task."""
        for conn in db.connections.all():
            conn.close_if_unusable_or_obsolete()
        super(CustomDjangoLoader, self).on_task_init(task_id, task)
This of course is only if you are using djcelery. It will also require something like this in the settings:
CELERY_LOADER = 'myproject.loaders.CustomDjangoLoader'
os.environ['CELERY_LOADER'] = CELERY_LOADER
I still have to test it, I will update.
If you are running into this when running tests, then you can either change the test to use the TransactionTestCase class instead of TestCase, or add the mark pytest.mark.django_db(transaction=True). This kept my db connection alive from the creation of the pytest-celery fixtures to the database calls.
Github issue - https://github.com/Koed00/django-q/issues/167
For context, I am using pytest-celery with celery_app and celery_worker as fixtures in my tests. I am also trying to hit the test db in the tasks referenced in these tests.
If someone could explain why switching to transaction=True keeps the connection open, that would be great!
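For reference, a hedged sketch of that setup (the task module is hypothetical):

import pytest

from myapp.tasks import my_db_task  # hypothetical task that hits the database

@pytest.mark.django_db(transaction=True)
def test_my_db_task(celery_app, celery_worker):
    # transaction=True keeps the real connection usable for both the
    # pytest-celery worker and the task's ORM calls
    result = my_db_task.delay()
    assert result.get(timeout=10) is not None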
I've just been updating to Django 1.3 and have come across an odd error. I'm getting the following error from some code which works with version 1.2.7.
FieldError: Cannot resolve keyword 'email_config_set' into field. Choices are: id, name, site, type
The odd thing being email_config_set is a related name for a ManyToMany field. I'm not sure why django is trying to resolve it into a field.
The error occurs deep inside django:
Traceback (most recent call last):
File "./core/driver.py", line 268, in run
self.init_norm()
File "./driver/emailevent/background.py", line 130, in init_norm
self.load_config()
File "./driver/emailevent/background.py", line 71, in load_config
events = list(config.events.select_related())
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/manager.py", line 168, in select_related
return self.get_query_set().select_related(*args, **kwargs)
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/fields/related.py", line 497, in get_query_set
return superclass.get_query_set(self).using(db)._next_is_sticky().filter(**(self.core_filters))
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/query.py", line 550, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/query.py", line 568, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/sql/query.py", line 1194, in add_q
can_reuse=used_aliases, force_having=force_having)
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/sql/query.py", line 1069, in add_filter
negate=negate, process_extras=process_extras)
File "/usr/local/lib/python2.6/site-packages/Django-1.3.1-py2.6.egg/django/db/models/sql/query.py", line 1260, in setup_joins
"Choices are: %s" % (name, ", ".join(names)))
FieldError: Cannot resolve keyword 'email_config_set' into field. Choices are: id, name, site, type
Any pointers or tips would be welcome.
The problem in this case was due to dynamic models and their creation order. More specifically, some models which were dynamically created after others were not yet defined when the _meta caches were filled, which therefore led to errors. Clearing the caches or changing the creation order resolves this problem.
See also https://groups.google.com/d/msg/django-users/RJlV5_ribZk/P6tv4QlJN4EJ
I have a model which represents settings in my system, and I use it from another part of my application, so the import chain has 3 levels: WORKING CODE <- Module <- Model
Model Variables
from django.db import models
class Variables(models.Model):
    key = models.CharField(max_length = 20, verbose_name = 'Variable')
    value = models.CharField(max_length = 1024)

    class Meta:
        app_label = 'core'

    def __unicode__(self):
        return '%s: %s' % (self.key, self.value,)
Here is the code I'm using it from
Module variables.py
from core.models.storage import Variables
def get_var(name):
    return Variables.objects.get(key = name)
Module config.py
var = get_var('some_key')
When I use this stuff from the django shell everything works well, but when I call the get_var function I get an ImportError exception.
storage.py
from django.db import models
class Variables(models.Model):
    key = models.CharField(max_length = 20, verbose_name = 'Variable')
    value = models.CharField(max_length = 1024)

    class Meta:
        app_label = 'core'

    def __unicode__(self):
        return '%s: %s' % (self.key, self.value,)
File "monitor_cli.py", line 19, in
print worker.get_provider()
File "/home/sultan/Project/monitor/app/worker.py", line 14, in get_provider
print Variables.objects.get(pk=1)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 132, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 341, in get
clone = self.filter(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 550, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 568, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1172, in add_q
can_reuse=used_aliases, force_having=force_having)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1060, in add_filter
negate=negate, process_extras=process_extras)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1226, in setup_joins
field, model, direct, m2m = opts.get_field_by_name(name)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 307, in get_field_by_name
cache = self.init_name_map()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 337, in init_name_map
for f, model in self.get_all_related_m2m_objects_with_model():
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 414, in get_all_related_m2m_objects_with_model
cache = self._fill_related_many_to_many_cache()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/options.py", line 428, in _fill_related_many_to_many_cache
for klass in get_models():
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 167, in get_models
self._populate()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 61, in _populate
self.load_app(app_name, True)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/loading.py", line 76, in load_app
app_module = import_module(app_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
ImportError: No module named c
Get rid of the models directory and put all your models in a models.py file, unless you have a VERY good reason not to. Change your imports to from core.models import Variable (rename your Variables class to Variable - django models should be named in the singular, not the plural).
The problem probably has to do with the fact that your models live in namespaces other than models, i.e. models.storage. The django infrastructure expects certain things to be in certain places. If you're going to put models in separate namespaces like you're doing, you should be importing them from the __init__.py file within the models module. Again, don't do this unless you have a very good reason.
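Concretely, the layout suggested above would look something like this (a sketch based on the model from the question):

# core/models.py
from django.db import models

class Variable(models.Model):
    key = models.CharField(max_length=20, verbose_name='Variable')
    value = models.CharField(max_length=1024)

    def __unicode__(self):
        return '%s: %s' % (self.key, self.value)

# elsewhere:
# from core.models import Variable
# Variable.objects.get(key=name)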
Finally, when asking questions of this nature, you should provide a lot more information. You should show the code of all of the relevant files. You should provide detail on what you've done that deviates from the convention of django (in this case, a models directory rather than a models.py file). You should also show relevant settings from settings.py, in this case, your INSTALLED_APPS setting. You should also tell us what version of django and python you are using; this is sometimes relevant. More information upfront is much better than less information.
I've decided to try out this project:
http://code.google.com/p/appengine-admin/wiki/QuickStart
For the sake of the experiment, I took the demo guestbook shipped with App Engine. The import part looks like this:
import cgi
import datetime
import wsgiref.handlers
from google.appengine.ext import db
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google import appengine_admin
The db model and the admin look like this:
class Greeting(db.Model):
    author = db.UserProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

class AdminGreeting(appengine_admin.ModelAdmin):
    model = Greeting
    listFields = ('author', 'content', 'date')
    editFields = ('author', 'content', 'date')

appengine_admin.register(AdminGreeting)
Yet I get this exception, trying to run the site:
File "/home/<username>/python/google_appengine/google/appengine/tools/ dev_appserver.py", line 2875, in _HandleRequest
base_env_dict=env_dict)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 387, in Dispatch
base_env_dict=base_env_dict)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 2162, in Dispatch
self._module_dict)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 2080, in ExecuteCGI
reset_modules = exec_script(handler_path, cgi_path, hook)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1976, in ExecuteOrImportScript
exec module_code in script_module.__dict__
File "/home/<username>/python/google_appengine/demos/guestbook/guestbook.py", line 37, in <module>
appengine_admin.register(AdminGreeting)
File "/home/<username>/python/google_appengine/google/appengine_admin/model_register.py", line 120, in register
modelAdminInstance = modelAdminClass()
File "/home/<username>/python/google_appengine/google/appengine_admin/model_register.py", line 64, in __init__
self._extractProperties(self.listFields, self._listProperties)
File "/home/<username>/python/google_appengine/google/appengine_admin/model_register.py", line 76, in _extractProperties
storage.append(PropertyWrapper(getattr(self.model, propertyName), propertyName))
File "/home/<username>/python/google_appengine/google/appengine_admin/model_register.py", line 17, in __init__
logging.info("Caching info about property '%s'" % name)
File "/usr/lib/python2.6/logging/__init__.py", line 1451, in info
root.info(*((msg,)+args), **kwargs)
File "/usr/lib/python2.6/logging/__init__.py", line 1030, in info
self._log(INFO, msg, args, **kwargs)
File "/usr/lib/python2.6/logging/__init__.py", line 1142, in _log
record = self.makeRecord(self.name, level, fn, lno, msg, args, exc_info, func, extra)
File "/usr/lib/python2.6/logging/__init__.py", line 1117, in makeRecord
rv = LogRecord(name, level, fn, lno, msg, args, exc_info, func)
File "/usr/lib/python2.6/logging/__init__.py", line 272, in __init__
from multiprocessing import current_process
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1089, in decorate
return func(self, *args, **kwargs)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1736, in load_module
return self.FindAndLoadModule(submodule, fullname, search_path)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1089, in decorate
return func(self, *args, **kwargs)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1638, in FindAndLoadModule
description)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1089, in decorate
return func(self, *args, **kwargs)
File "/home/<username>/python/google_appengine/google/appengine/tools/dev_appserver.py", line 1589, in LoadModuleRestricted
description)
File "/usr/lib/python2.6/multiprocessing/__init__.py", line 83, in <module>
import _multiprocessing
ImportError: No module named _multiprocessing
INFO 2009-04-25 23:34:27,628 dev_appserver.py:2934] "GET / HTTP/1.1" 500 -
Any idea what could have gone wrong?
You appear to be using Python 2.6 (given that some of the messages are from files in /usr/lib/python2.6 ...!), but Google App Engine needs Python 2.5 (any 2.5.x version will do), so you should install and use that to run the App Engine SDK.
Google App Engine only supports Python 2.5, and you're on a newer version.
By the look of your directories, you might be on Linux (or is it a Mac?). On, say, Ubuntu you can "sudo apt-get install python2.5" (it won't affect your Python 2.6 at all), and then rather than:
<path-to-gae>/dev_appserver.py ...
do
python2.5 <path-to-gae>/dev_appserver.py ...
This is better than just blithely developing on 2.6 and deploying on 2.5 which is surely asking for hassles later on.
As others have said, this problem occurs in Python 2.6. I used the fix proposed in this comment in the App Engine issue tracker:
A quickfix is to create a file in your app's root named _multiprocessing.py with
the contents:
import multiprocessing
This way it's possible to import the _multiprocessing module.
It worked for me using Python 2.6.2
Cheers,
Kaji
To make this work on my local machine (with 2.6) and on GAE, I used:
import sys, logging
if sys.version[:3] == "2.6": logging.logMultiprocessing = 0
Just do the following at the top of your something.py:
import logging
logging.logMultiprocessing = 0
import logging
logging.logMultiprocessing = 0
Worked for me
It's weird that I had been using GAE with python2.6 (probably 2.6.1) and everything worked fine.
But now I get the same _multiprocessing import error (python2.6.2).
import logging
logging.logMultiprocessing = 0
Worked for me too. Before uploading to GAE, comment those lines.