Unable to use Python's threading.Thread in Django app

I am trying to create a web application as a front end to another Python app. I have the user enter data into a form; upon submitting, the data is saved in a database and passed to a thread object. The thread is kicked off strictly in response to a user action. My problem is that I can import threading, but cannot access threading.Thread. When the thread ends, it will update the server, so when the user views the job information, they'll see the results.
View:
@login_required(login_url='/login')
def createNetworkView(request):
    if request.method == "POST":
        # grab my variables from POST
        job = models.MyJob()
        # load my variables into MyJob object
        job.save()
        t = CreateNetworkThread(job.id, my, various, POST, inputs, here)
        t.start()
        return HttpResponseRedirect("/viewJob?jobID=" + str(job.id))
    else:
        return HttpResponseRedirect("/")
My thread class:
import threading  # this works

print "About to make thread object"  # this works, I see this in the log

class CreateNetworkThread(threading.Thread):  # failure here
    def __init__(self, jobid, blah1, blah2, blah3):
        threading.Thread.__init__(self)

    def run(self):
        doCoolStuff()
        updateDB()
I get:
Exception Type: ImportError
Exception Value: cannot import name Thread
However, if I run python on the command line, I can import threading and also do from threading import Thread. What's the deal?
I have seen other suggestions, like How to use thread in Django and Celery, but those seemed like overkill, and I don't see how that example could import threading and use threading.Thread when I can't.
Thank you.
Edit: I'm using Django 1.4.1, Python 2.7.3, Ubuntu 12.10, SQLite for the DB, and I'm running the web application with ./manage.py runserver.

This was a silly issue on my end. I had made a file called "threading.py", and someone suggested I delete it, which I did (or thought I did). Because I was using Eclipse, the PyDev (Python) plugin only deleted the threading.py file I had created and hid the corresponding *.pyc file. A stale threading.pyc was still shadowing the standard library module, even though PyDev has an option, which I had enabled, to delete orphaned .pyc files.
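A quick way to confirm this kind of shadowing is to check which file Python actually imported:

import threading
# if this prints a path inside your project instead of the standard
# library, a local threading.py (or stale threading.pyc) is shadowing it
print threading.__file__

Deleting the stray .py and .pyc files restores the standard library module.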


How to handle simultaneous requests in Django?

I have a standard function-based view in Django which receives some parameters via POST after the user has clicked a button, computes something and then returns a template with context.
from django.http import HttpResponse, HttpResponseRedirect
from django.template import loader
from django.urls import reverse  # django.core.urlresolvers on older Django
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def myview(request, param1, param2):
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    '''Calculate and database r/w'''
    template = loader.get_template('showData.html')
    return HttpResponse(template.render(context, request))
It works with no problem as long as one request is processed at a time (tested both with runserver and on an Apache server).
However, when I use two devices and click the button simultaneously on each, the requests get mixed up, run concurrently, and the website ends up throwing a 500 error, or a 404, or sometimes succeeds but cannot GET static files (again, tested both with runserver and Apache).
How can I force Django to finish the execution of the current request before starting the next?
Or is there a better way to tackle this?
Any light on this will be appreciated. Thanks!
To coordinate threads within a single server process, use a re-entrant lock:

from threading import RLock

lock = RLock()

and then within myview:

lock.acquire()
try:
    ...  # get template, render it
finally:
    lock.release()
You might start your server with $ uwsgi --processes 1 --threads 2 ...
Django's development server on a local machine is not meant for a production environment, so it processes one request at a time. In production you need a WSGI server, like uWSGI; with that, your app can be set up to serve more than one request at a time. Check https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/uwsgi/
I'm posting my solution in case it's of any help to others.
In the end I configured Apache with pre-forking to isolate requests from each other. According to the documentation, pre-forking is advised for sites using non-thread-safe libraries (my case, apparently).
With this fix Apache handles simultaneous requests well. However, I'd still be glad to hear if someone else has other suggestions!
There should be ways to rewrite the code so that things do not get mixed up (at least in many cases this is possible).
One of the prerequisites (if your server uses threading) is to write thread-safe code.
This means not using global variables (which is bad practice anyway), or protecting them with locks,
and making no calls to functions that aren't thread safe (or protecting those with locks too); see the sketch below.
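For instance, a minimal sketch of guarding shared module-level state with a lock (the counter and its names are hypothetical):

import threading

_counter = 0
_counter_lock = threading.Lock()

def increment_counter():
    global _counter
    # without the lock, two threads could read the same value and
    # write back the same result, silently losing an increment
    with _counter_lock:
        _counter += 1
        return _counter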
As you don't provide any details, we cannot help with this (this = finding a way to not make the whole request blocking, while keeping data integrity).
Otherwise you could use a mutex/lock that works across multiple processes.
You could, for example, try to acquire a lock file with
https://pypi.org/project/filelock/ and block until the file is unlocked by the other view.
Example code (after pip installing filelock):

from filelock import FileLock

lock = FileLock("my.lock")

def myview(request, param1, param2):
    with lock:
        if request.method == 'POST':
            return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
        '''Calculate and database r/w'''
        template = loader.get_template('showData.html')
        return HttpResponse(template.render(context, request))
If you use uwsgi, then you could look at the uwsgi implementation of locks:
https://uwsgi-docs.readthedocs.io/en/latest/Locks.html
Here is the example code from the uwsgi documentation:

import time
import uwsgi  # the uwsgi module is only importable when running under uWSGI

def use_lock_zero_for_important_things():
    uwsgi.lock()  # Implicit parameter 0
    # Critical section
    uwsgi.unlock()  # Implicit parameter 0

def use_another_lock():
    uwsgi.lock(1)
    time.sleep(1)  # Take that, performance! Ha!
    uwsgi.unlock(1)

Entry point for a Django app which is simply a redis subscribe loop that needs access to models - no urls/views

I currently have an external non-Django Python process: a simple redis subscribe loop that munges the messages it receives and inserts the result into a user mailbox (a redis list), which my main app accesses on request.
My listener now needs access to models, so it makes sense (to me) to make this a Django app. Being a loop, however, I imagine it's probably best to run this as a separate process.
Edit: removed my own proposed solution using AppConfig.ready() and running the separate process via gunicorn.
What I'm doing is pretty simple, but I'm a bit confused as to where the entry point for this app should be. Any ideas?
Any help/suggestions would be appreciated,
-Scott
I went ahead with @DanielRoseman's suggestion and used a management command as the entry point.
I simply added a management command 'runsubscriber', which looks like:
my_app/management/commands/runsubscriber.py
from django.core.management.base import BaseCommand
from my_app.redis_subscribe_loop import RedisSubscribeLoop

class Command(BaseCommand):
    def handle(self, *args, **options):
        rsl = RedisSubscribeLoop()
        try:
            rsl.start()
        except KeyboardInterrupt:
            rsl.stop()
I can now run this as a separate process via ./manage.py runsubscriber
and kill it with ^C. My stop() looks like:
my_app/redis_subscribe_loop.py

def stop(self):
    self.rc.punsubscribe()  # unsubscribe from all channels
    self.rc.close()

so that it shuts down cleanly.
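For reference, a minimal sketch of what a RedisSubscribeLoop built on redis-py's pub/sub API might look like (the channel pattern and the munging step are hypothetical):

import redis

class RedisSubscribeLoop(object):
    def __init__(self):
        # pubsub() returns an object with subscribe/listen/unsubscribe methods
        self.rc = redis.StrictRedis().pubsub()

    def start(self):
        self.rc.psubscribe('mailbox.*')  # hypothetical channel pattern
        for message in self.rc.listen():  # blocks, yielding messages as they arrive
            if message['type'] == 'pmessage':
                # munge message['data'] and push the result onto the
                # user's mailbox list / write it through your models here
                print(message['data'])

    def stop(self):
        self.rc.punsubscribe()  # unsubscribe from all channels
        self.rc.close()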
Thanks for all your help,
-Scott

WAMP with Django App using Python 2.7 AND 3.4 - is it possible?

The essence of the question is: is it possible to use WAMP notifications in a Django application supporting both Python 2.7 and 3.4, given that code should keep running and only be interrupted if a Remote Procedure Call is made? (That is, it's not just waiting for an RPC to come.)
How we want to use WAMP: the program has a Javascript frontend with a Python/Django backend. One of the things we do is start a function in the backend when a button in the frontend is clicked. This sometimes takes too much time though, so we allow the user to cancel that by clicking another button. This one makes a Remote Procedure Call, which will cause the function to stop earlier (it changes a variable that is checked in the function). There might be other needs for RPC or Pub/Sub in the future too.
We got it working with Python 2.7 using the autobahn_sync module, but it uses Twisted, which hasn't yet been fully ported to Python 3.x. That's why we would need another way of getting our WAMP notification to work on 3.x.
asyncio is supported, and from the Crossbar documentation it seemed it could be used instead of Twisted, but we can't get it to work without blocking the code that should be running in parallel (code added below). And there doesn't seem to be an equivalent of autobahn_sync that uses asyncio instead of Twisted.
We're new to WAMP and there might be something we're missing.
Here's the code (with Python 3.4) we're testing using asyncio. It blocks the rest of the function:

from asyncio import coroutine, new_event_loop, set_event_loop
from autobahn import wamp
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class MyComponent(ApplicationSession):
    @wamp.register("com.function.cancel")
    def cancel(self):
        print("called cancel procedure")
        # do things here
        return "Canceled"

    @coroutine
    def onJoin(self, details):
        res = yield from self.register(self)
        print("{} procedures registered.".format(len(res)))

def function_triggered_in_frontend():
    loop = new_event_loop()
    set_event_loop(loop)
    runner = ApplicationRunner(url=u"ws://localhost:8080/ws", realm=u"realm1")
    runner.run(MyComponent)
    # and now the function should continue working on other things,
    # but doesn't, because it doesn't return from runner.run()
    print("do stuff")
How can we register the procedure and still return from the runner.run() call? In the Python 2.7 test with autobahn_sync we could simply do:
from autobahn_sync import register, run, AlreadyRunningError

@register('com.function.cancel')
def cancel_generate_jobs():
    print("called cancel procedure")

def function_triggered_in_frontend():
    try:
        run(realm=u"realm1")
    except AlreadyRunningError:
        logger.info('AutobahnSync instance already started.')
    # and then the code would continue being processed here, as we want
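One hedged workaround to consider (a sketch only, not a tested recipe): run the asyncio ApplicationRunner in a daemon thread with its own event loop, reusing the MyComponent session class from above, so that runner.run() blocks that background thread instead of the view. Note that some autobahn versions install signal handlers inside run(), which only works in the main thread; if that applies to your version, this sketch would need adjusting:

import threading
from asyncio import new_event_loop, set_event_loop
from autobahn.asyncio.wamp import ApplicationRunner

def _run_wamp_in_background():
    # each thread must create and install its own asyncio event loop
    set_event_loop(new_event_loop())
    runner = ApplicationRunner(url=u"ws://localhost:8080/ws", realm=u"realm1")
    runner.run(MyComponent)  # blocks this background thread only

def function_triggered_in_frontend():
    t = threading.Thread(target=_run_wamp_in_background)
    t.daemon = True  # setter form works on both Python 2.7 and 3.4
    t.start()
    print("do stuff")  # the caller continues immediately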

Django post_syncdb signal handler not getting called?

I have a myapp/management/__init__.py that is registering a post_syncdb handler like so:

from django.db.models import signals
from features import models as features

def create_features(app, created_models, verbosity, **kwargs):
    print "Creating features!"
    # Do stuff...

signals.post_syncdb.connect(create_features, sender=features)
I've verified the following:
Both features and myapp are in settings.INSTALLED_APPS
myapp.management is getting loaded prior to the syncdb running (verified via a print statement at the module level)
The features app is getting processed by syncdb, and it is emitting a post_syncdb signal (verified by examining syncdb's output with --verbosity=2).
I'm using the exact same idiom for another pair of apps, and that handler is called correctly. I've compared the two modules and found no relevant differences between the invocations.
However, myapp.management.create_features is never called. What am I missing?
Try putting it in your models.py.
Just came across the same issue, and the way I solved it was to remove the sender from the connect() call and check for the app inside the callback function:

from django.db.models import signals
from features import models as features

def create_features(app, created_models, verbosity, **kwargs):
    if app != features:  # this works, as it compares models module instances
        return
    print "Creating features!"
    # Do stuff...

signals.post_syncdb.connect(create_features)

That way you can keep the handlers in your management module as the Django docs suggest. I agree that it should work the way you expected; you could probably dig into the implementation of the Signal class in django.dispatch to find out why it doesn't.
The point is the sender. Your custom callback is called only if the sender matches. In my case the sender was the app's models module, and the signal is not emitted for it when syncdb is run again, i.e. when the synced models already exist in the database. The documentation says this, but without the proper emphasis:

sender
    The models module that was just installed. That is, if syncdb just installed an app called "foo.bar.myapp", sender will be the foo.bar.myapp.models module.

So my solution was to drop the database and install my app again.

Halting Django's dev server via page request?

I'm looking at writing a portable, light-weight Python app. As the "GUI toolkit" I'm most familiar with — by a wide margin! — is HTML/CSS/JS, I thought to use Django as a framework for the project, using its built-in "development server" (manage.py runserver).
I've been banging on a proof-of-concept for a couple hours and the only real problem I've encountered so far is shutting down the server once the user has finished using the app. Ideally, I'd like there to be a link on the app's pages which shuts down the server and closes the page, but nothing I see in the Django docs suggests this is possible.
Can this be done? For that matter, is this a reasonable approach for writing a small, portable GUI tool?
One brute force approach would be to let the process kill itself, like:

# somewhere in a views.py
import os
import signal

def shutdown(request):
    os.kill(os.getpid(), signal.SIGKILL)  # SIGKILL == 9

Note: os.kill is only available on Unix (a Windows alternative may be something like this: http://metazin.wordpress.com/2008/08/09/how-to-kill-a-process-in-windows-using-python/)
Take a look at the source code for python manage.py runserver:
http://code.djangoproject.com/browser/django/trunk/django/core/management/commands/runserver.py
When you hit Ctrl+C, all it does is call sys.exit(0), so you could probably just do the same.
Edit: Based on your comment, it seems Django is catching SystemExit when it calls your view function. Did you try calling sys.exit in another thread?

import sys
import threading

thread = threading.Thread(target=sys.exit, args=(0,))
thread.start()

Edit: Never mind, sys.exit from another thread only terminates that thread, which is not well documented in the Python docs. =(
This works for me on Windows:

import os

def shutdown(request):
    os._exit(0)
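A hedged variation, if you want a confirmation page to actually reach the browser before the process dies (the one-second grace period is an arbitrary assumption):

import os
import threading

from django.http import HttpResponse

def shutdown(request):
    # defer the hard exit briefly so the response can be sent first
    threading.Timer(1.0, lambda: os._exit(0)).start()
    return HttpResponse("Server shutting down.")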
