Halting Django's dev server via page request?

I'm looking at writing a portable, light-weight Python app. As the "GUI toolkit" I'm most familiar with — by a wide margin! — is HTML/CSS/JS, I thought to use Django as a framework for the project, using its built-in "development server" (manage.py runserver).
I've been banging on a proof-of-concept for a couple hours and the only real problem I've encountered so far is shutting down the server once the user has finished using the app. Ideally, I'd like there to be a link on the app's pages which shuts down the server and closes the page, but nothing I see in the Django docs suggests this is possible.
Can this be done? For that matter, is this a reasonable approach for writing a small, portable GUI tool?

One brute force approach would be to let the process kill itself, like:
# somewhere in a views.py
def shutdown(request):
    import os
    os.kill(os.getpid(), 9)  # 9 == SIGKILL on Unix
Note: os.kill is only available on Unix (a Windows alternative may be something like this: http://metazin.wordpress.com/2008/08/09/how-to-kill-a-process-in-windows-using-python/)

Take a look at the source code for python manage.py runserver:
http://code.djangoproject.com/browser/django/trunk/django/core/management/commands/runserver.py
When you hit Ctrl+C, all it does is call sys.exit(0), so you could probably just do the same.
Edit: Based on your comment, it seems Django is catching SystemExit when it calls your view function. Did you try calling sys.exit in another thread?
import sys
import threading

thread = threading.Thread(target=sys.exit, args=(0,))
thread.start()
Edit: Never mind; sys.exit from another thread only terminates that thread, which is not well documented in the Python docs. =(

This works for me on Windows:
def shutdown(request):
    import os
    os._exit(0)
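Putting the answers together, here is a minimal sketch of a shutdown view that still returns a page before the process dies (the one-second threading.Timer delay and the message text are my own assumptions, there to give the response time to reach the browser):
# views.py -- hypothetical shutdown endpoint for a local-only tool
import os
import threading

from django.http import HttpResponse

def shutdown(request):
    # Schedule the hard exit slightly in the future so this response
    # can be delivered to the browser before the process dies.
    threading.Timer(1.0, os._exit, args=(0,)).start()
    return HttpResponse("Server shutting down; you can close this window.")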

How to handle simultaneous requests in Django?

I have a standard function-based view in Django which receives some parameters via POST after the user has clicked a button, computes something and then returns a template with context.
@csrf_exempt
def myview(request, param1, param2):
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    '''Calculate and database r/w'''
    template = loader.get_template('showData.html')
    return HttpResponse(template.render(context, request))
It works with no problem as long as one request is processed at the time (tested both with runserver and in an Apache server).
However, when I use two devices and click the button simultaneously on each, both requests get mixed up, run concurrently, and the website ends up throwing a 500 error, or a 404, or sometimes succeeds but fails to GET the static files (again, tested both with runserver and Apache).
How can I force Django to finish the execution of the current request before starting the next?
Or is there a better way to tackle this?
Any light on this will be appreciated. Thanks!
To coordinate threads within a single server process, use
from threading import RLock
lock = RLock()
and then within myview:
lock.acquire()
... # get template, render it
lock.release()
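A minimal sketch of the whole view with the lock applied, using a with block so the lock is released even if rendering raises (the context contents are a placeholder):
from threading import RLock

from django.http import HttpResponse, HttpResponseRedirect
from django.template import loader
from django.urls import reverse

lock = RLock()

def myview(request, param1, param2):
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    with lock:  # only one thread computes and renders at a time
        # ... calculate and do the database r/w here ...
        context = {}  # placeholder for the computed data
        template = loader.get_template('showData.html')
        return HttpResponse(template.render(context, request))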
You might start your server with $ uwsgi --processes 1 --threads 2 ...
Django's development server on a local machine is not for production environments, and it processes one request at a time. In production you need a WSGI server, like uWSGI; with that, your app can be set up to serve more than one request at a time. Check https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/uwsgi/
I'm posting my solution in case it's of any help to others.
Finally, I configured Apache with pre-forking to isolate requests from each other. According to the documentation, pre-forking is advised for sites using non-thread-safe libraries (apparently my case).
With this fix, Apache handles simultaneous requests well. However, I will still be glad to hear if someone else has other suggestions!
There should be ways to rewrite the code such that things do not get mixed up (at least in many cases this is possible).
One of the prerequisites (if your server uses threading) is to write thread-safe code.
This means not using global variables (which is bad practice anyway), or protecting them with locks,
and making no calls to functions that aren't thread-safe (or protecting those with locks too).
As you don't provide any details, we cannot help with this (this = finding a way to not make the whole request blocking, but keep data integrity).
Otherwise you could use a mutex/lock that works across multiple processes.
You could, for example, try to access a locked file
(https://pypi.org/project/filelock/) and block until the file is unlocked by the other view.
Example code (after pip installing filelock):
from filelock import FileLock

lock = FileLock("my.lock")

with lock:
    if request.method == 'POST':
        return HttpResponseRedirect(reverse("app1:view_name", args=[param1, param2]))
    '''Calculate and database r/w'''
    template = loader.get_template('showData.html')
    return HttpResponse(template.render(context, request))
If you use uwsgi, then you could look at the uwsgi implementation of locks:
https://uwsgi-docs.readthedocs.io/en/latest/Locks.html
Here is the example code from the uwsgi documentation:
def use_lock_zero_for_important_things():
    uwsgi.lock()  # Implicit parameter 0
    # Critical section
    uwsgi.unlock()  # Implicit parameter 0

def use_another_lock():
    uwsgi.lock(1)
    time.sleep(1)  # Take that, performance! Ha!
    uwsgi.unlock(1)

How to write python script to run automatically at 11:30 pm everyday? [duplicate]

I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically.
Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.
Does anyone know how to set this up?
To clarify: I know I can set up a cron job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).
I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
One solution that I have employed is to do this:
1) Create a custom management command, e.g.
python manage.py my_cool_command
2) Use cron (on Linux) or at (on Windows) to run my command at the required times.
This is a simple solution that doesn't require installing a heavy AMQP stack. However, there are nice advantages to using something like Celery, mentioned in the other answers. In particular, with Celery it is nice to not have to spread your application logic out into crontab files. That said, the cron solution works quite nicely for a small to medium sized application where you don't want a lot of external dependencies.
EDIT:
In later versions of Windows, the at command is deprecated (Windows 8, Server 2012 and above); you can use schtasks.exe for the same purpose.
**** UPDATE ****
This is the new link to the Django docs for writing custom management commands.
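For illustration, a minimal sketch of such a command (the app name, message, and update logic are placeholders; the file goes in yourapp/management/commands/my_cool_command.py):
# yourapp/management/commands/my_cool_command.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run the periodic database calculations/updates."

    def handle(self, *args, **options):
        # ... walk the database and apply the updates here ...
        self.stdout.write("Updates complete.")
A matching crontab entry for the 11:30 pm requirement in the title might look like:
30 23 * * * /path/to/venv/bin/python /path/to/project/manage.py my_cool_command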
Celery is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion (see periodic tasks). Depending on your app, it might be worth a gander.
Celery is pretty easy to set up with django (docs), and periodic tasks will actually skip missed tasks in case of a downtime. Celery also has built-in retry mechanisms, in case a task fails.
We've open-sourced what I think is a structured app that Brian's solution above alludes to. We would love any/all feedback!
https://github.com/tivix/django-cron
It comes with one management command:
./manage.py runcrons
That does the job. Each cron is modeled as a class (so it's all OO), each cron runs at a different frequency, and we make sure the same cron type doesn't run in parallel (in case the crons themselves take longer to run than their frequency!).
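As a hedged sketch of what such a cron class might look like, following the pattern in the project's README (the class name, frequency, and body are placeholders):
from django_cron import CronJobBase, Schedule

class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 60  # run every hour

    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'myapp.my_cron_job'  # unique identifier for this job

    def do(self):
        pass  # ... the periodic work goes here ...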
If you're using a standard POSIX OS, you use cron.
If you're using Windows, you use at.
Write a Django management command to
1) Figure out what platform they're on.
2) Either execute the appropriate at command for your users, or update the crontab for your users.
Interesting new pluggable Django app: django-chronograph
You only have to add one cron entry which acts as a timer, and you have a very nice Django admin interface into the scripts to run.
Look at Django Poor Man's Cron, which is a Django app that makes use of spambots, search-engine indexing robots, and the like to run scheduled tasks at approximately regular intervals.
See: http://code.google.com/p/django-poormanscron/
I had exactly the same requirement a while ago, and ended up solving it using APScheduler (User Guide).
It makes scheduling jobs super simple, and keeps it independent from request-based execution of code. Following is a simple example.
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
job = None

def tick():
    print('One tick!')

def start_job():
    global job
    job = scheduler.add_job(tick, 'interval', seconds=3600)
    try:
        scheduler.start()
    except:
        pass
Hope this helps somebody!
Django APScheduler for Scheduler Jobs. Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code to be executed later, either just once or periodically. You can add new jobs or remove old ones on the fly as you please.
note: I'm the author of this library
Install APScheduler
pip install apscheduler
View file function to call
File name: scheduler_jobs.py
def FirstCronTest():
    print("")
    print("I am executed..!")
Configuring the scheduler
Make an execute.py file and add the code below:
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

# Your functions are written in scheduler_jobs; import and register them here.
import scheduler_jobs

scheduler.add_job(scheduler_jobs.FirstCronTest, 'interval', seconds=10)
scheduler.start()
Link the file for execution
Now, add the line below at the bottom of your urls.py file:
import execute
You can check the full code here:
https://github.com/devchandansh/django-apscheduler
Brian Neal's suggestion of running management commands via cron works well, but if you're looking for something a little more robust (yet not as elaborate as Celery) I'd look into a library like Kronos:
# app/cron.py
import kronos

@kronos.register('0 * * * *')
def task():
    pass
RabbitMQ and Celery have more features and task handling capabilities than Cron. If task failure isn't an issue, and you think you will handle broken tasks in the next call, then Cron is sufficient.
Celery & AMQP will let you handle the broken task, and it will get executed again by another worker (Celery workers listen for the next task to work on), until the task's max_retries attribute is reached. You can even invoke tasks on failure, like logging the failure, or sending an email to the admin once the max_retries has been reached.
And you can distribute Celery and AMQP servers when you need to scale your application.
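As a hedged illustration of that retry behaviour (the task name and body are placeholders; bind=True, max_retries, and self.retry are standard Celery options):
from celery import shared_task

@shared_task(bind=True, max_retries=3, default_retry_delay=60)
def update_records(self):
    try:
        pass  # ... do the periodic database work ...
    except Exception as exc:
        # Re-queue the task; a worker will pick it up again,
        # at most max_retries times.
        raise self.retry(exc=exc)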
I personally use cron, but the Jobs Scheduling parts of django-extensions look interesting.
Although not part of Django, Airflow is a more recent project (as of 2016) that is useful for task management.
Airflow is a workflow automation and scheduling system that can be used to author and manage data pipelines. A web-based UI provides the developer with a range of options for managing and viewing these pipelines.
Airflow is written in Python and is built using Flask.
Airflow was created by Maxime Beauchemin at Airbnb and open-sourced in the spring of 2015. It joined the Apache Software Foundation's incubation program in the winter of 2016. Here is the Git project page and some additional background information.
Put the following at the top of your cron.py file:
#!/usr/bin/python
import os, sys
sys.path.append('/path/to/')         # the parent directory of the project
sys.path.append('/path/to/project')  # these lines are only needed if not on the path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproj.settings'
# imports and code below
I just thought of this rather simple solution:
Define a view function do_work(req, param) like you would any other view, with URL mapping, returning an HttpResponse, and so on.
Set up a cron job with your timing preferences (or use at or Scheduled Tasks on Windows) which runs curl http://localhost/your/mapped/url?param=value.
You can pass parameters simply by adding them to the URL.
Tell me what you guys think.
[Update] I'm now using the runjob command from django-extensions instead of curl.
My cron looks something like this:
@hourly python /path/to/project/manage.py runjobs hourly
... and so on for daily, monthly, etc. You can also set it up to run a specific job.
I find it more manageable and cleaner. It doesn't require mapping a URL to a view; you just define your job class and crontab and you're set.
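For reference, a hedged sketch of what one of those job classes might look like with django-extensions (the module path and job body are placeholders):
# myapp/jobs/hourly/update_stats.py
from django_extensions.management.jobs import HourlyJob

class Job(HourlyJob):
    help = "Recompute hourly statistics."

    def execute(self):
        pass  # ... the work that `manage.py runjobs hourly` triggers ...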
After this block of code, I can write anything just like in my views.py :)
#######################################
import os, sys
sys.path.append('/home/administrator/development/store')
os.environ['DJANGO_SETTINGS_MODULE'] = 'store.settings'
from django.core.management import setup_environ
from store import settings
setup_environ(settings)
#######################################
From: http://www.cotellese.net/2007/09/27/running-external-scripts-against-django-models/
You should definitely check out django-q!
It requires no additional configuration and quite possibly has everything needed to handle any production issues on commercial projects.
It's actively developed and integrates very well with Django, the Django ORM, Mongo, and Redis. Here is my configuration:
# django-q
# -------------------------------------------------------------------------
# See: http://django-q.readthedocs.io/en/latest/configure.html
Q_CLUSTER = {
    # Match recommended settings from docs.
    'name': 'DjangoORM',
    'workers': 4,
    'queue_limit': 50,
    'bulk': 10,
    'orm': 'default',

    # Custom Settings
    # ---------------
    # Limit the amount of successful tasks saved to Django.
    'save_limit': 10000,

    # See https://github.com/Koed00/django-q/issues/110.
    'catch_up': False,

    # Number of seconds a worker can spend on a task before it's terminated.
    'timeout': 60 * 5,

    # Number of seconds a broker will wait for a cluster to finish a task before
    # presenting it again. This needs to be longer than `timeout`, otherwise the
    # same task will be processed multiple times.
    'retry': 60 * 6,

    # Whether to force all async() calls to be run with sync=True (making them synchronous).
    'sync': False,

    # Redirect worker exceptions directly to Sentry error reporter.
    'error_reporter': {
        'sentry': RAVEN_CONFIG,
    },
}
Yes, the methods above are great, and I tried some of them. At last, I found a method like this:
from threading import Timer

INTERVAL = 3600  # seconds between runs

def sync():
    # do something...
    sync_timer = Timer(INTERVAL, sync)
    sync_timer.start()

sync()
Just like recursion.
OK, I hope this method can meet your requirements. :)
A more modern solution (compared to Celery) is Django Q:
https://django-q.readthedocs.io/en/latest/index.html
It has great documentation and is easy to grok. Windows support is lacking, because Windows does not support process forking. But it works fine if you create your dev environment using the Windows Subsystem for Linux.
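A hedged sketch of registering a recurring function with Django Q (the dotted function path is a placeholder):
from django_q.models import Schedule
from django_q.tasks import schedule

# Register a daily run of a function, referenced by its dotted path.
schedule('myapp.tasks.nightly_update', schedule_type=Schedule.DAILY)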
I had something similar to your problem today.
I didn't want to have it handled by the server through cron (and most of the libs were just cron helpers in the end).
So I created a scheduling module and attached it to the __init__.
It's not the best approach, but it helps me to have all the code in a single place, with its execution tied to the main app.
I use Celery to create my periodic tasks. First you need to install it as follows:
pip install django-celery
Don't forget to register django-celery in your settings, and then you can do something like this:
from celery import task
from celery.decorators import periodic_task
from celery.task.schedules import crontab
from celery.utils.log import get_task_logger

@periodic_task(run_every=crontab(minute="0", hour="23"))
def do_every_midnight():
    # your code
    pass
I am not sure whether this will be useful for anyone. Since I had to let other users of the system schedule jobs without giving them access to the actual server's (Windows) Task Scheduler, I created this reusable app.
Please note users have access to one shared folder on the server where they can create the required command/task/.bat file. This task can then be scheduled using this app.
The app name is Django_Windows_Scheduler.
If you want something more reliable than Celery, try TaskHawk which is built on top of AWS SQS/SNS.
Refer: http://taskhawk.readthedocs.io
For simple dockerized projects, I could not really see any existing answer that fit.
So I wrote a very barebones solution with no need for external libraries or triggers, which runs on its own. No external os-level cron is needed, and it should work in every environment.
It works by adding a middleware: middleware.py
import threading

def should_run(name, seconds_interval):
    from application.models import CronJob
    from django.utils.timezone import now
    try:
        c = CronJob.objects.get(name=name)
    except CronJob.DoesNotExist:
        CronJob(name=name, last_ran=now()).save()
        return True
    if (now() - c.last_ran).total_seconds() >= seconds_interval:
        c.last_ran = now()
        c.save()
        return True
    return False

class CronTask:
    def __init__(self, name, seconds_interval, function):
        self.name = name
        self.seconds_interval = seconds_interval
        self.function = function

def cron_worker(*_):
    if not should_run("main", 60):
        return
    # customize this part:
    from application.models import Event
    tasks = [
        CronTask("events", 60 * 30, Event.clean_stale_objects),
        # ...
    ]
    for task in tasks:
        if should_run(task.name, task.seconds_interval):
            task.function()

def cron_middleware(get_response):
    def middleware(request):
        response = get_response(request)
        threading.Thread(target=cron_worker).start()
        return response
    return middleware
models/cron.py:
from django.db import models

class CronJob(models.Model):
    name = models.CharField(max_length=10, primary_key=True)
    last_ran = models.DateTimeField()
settings.py:
MIDDLEWARE = [
    # ...
    'application.middleware.cron_middleware',
    # ...
]
A simple way is to write a custom shell command (see the Django documentation) and execute it using a cron job on Linux. However, I would highly recommend using a message broker like RabbitMQ coupled with Celery. Maybe you can have a look at
this tutorial.
One alternative is to use Rocketry:
from rocketry import Rocketry
from rocketry.conds import daily, after_success

app = Rocketry()

@app.task(daily.at("10:00"))
def do_daily():
    ...

@app.task(after_success(do_daily))
def do_after_another():
    ...

if __name__ == "__main__":
    app.run()
It also supports custom conditions:
from pathlib import Path

@app.cond()
def file_exists(file):
    return Path(file).exists()

@app.task(daily & file_exists("myfile.csv"))
def do_custom():
    ...
And it also supports Cron:
from rocketry.conds import cron

@app.task(cron('*/2 12-18 * Oct Fri'))
def do_cron():
    ...
It can be integrated quite nicely with FastAPI, and I think it could be integrated with Django as well, as Rocketry is essentially just a sophisticated loop that can spawn async tasks, threads, and processes.
Disclaimer: I'm the author.
Another option, similar to Brian Neal's answer, is to use RunScripts.
Then you don't need to set up commands. This has the advantage of a more flexible or cleaner folder structure.
The script file must implement a run() function (see the sketch below); this is what gets called when you run the script. You can import any models or other parts of your Django project to use in these scripts.
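A minimal sketch of such a script, assuming django-extensions is installed (the model and loop body are placeholders):
# yourapp/scripts/update_records.py
from yourapp.models import Record  # hypothetical model

def run():
    # django-extensions calls this function when the script is run.
    for record in Record.objects.all():
        pass  # ... recompute and save whatever needs updating ...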
And then, just
python manage.py runscript path.to.script

Python application using Twisted stops running after user logs off of Windows XP

I inherited a project using the Twisted Python library. The application is terminating after the user logs off of Windows XP.
The Python code has been converted to an executable using bbfreeze. In addition, the bbfreeze-generated executable is registered as a Windows service using instsrv.exe and srvany.exe.
I've taken a simple chat example from the Twisted website, created an executable from it with bbfreeze, and registered it with instsrv and srvany, and the same problem occurs: the executable stops running after the user logs off.
I'm inclined to think that something about Windows XP and the Twisted library is causing the application to terminate or stop running. In particular, I think it might be something within the reactor code that's causing the application to stop running. However, I haven't been able to confirm this.
Has anybody else seen this or have any ideas on what might be causing this?
Thanks,
Mark
"Logging Off" MSDN page says that on log off,
WM_QUERYENDSESSION is sent to every window [on the current desktop];
CTRL_LOGOFF_EVENT is sent to every process.
ejabberd service stops on user logoff/login suggests that a process can terminate if it has no handler for CTRL_LOGOFF_EVENT.
Judging by "I can also reproduce this with a simple chat sample" comment, Twisted is the culprit. Folks on the Internet do report that Twisted services fail in this manner sometimes: Re: SIGBREAK on windows.
Twisted has an internal logging facility. An example of using it is in the answer to Twisted starting/stopping factory/protocol less noisy log messages. Had you used it, you would already have seen the "received SIGBREAK..." message pointing to the root cause.
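As a hedged sketch of a possible workaround, assuming pywin32 is available, you could install a console control handler that swallows the logoff event before Windows applies its default terminate-the-process action:
import win32api
import win32con

def ctrl_handler(event):
    # Returning True tells Windows the event was handled, so the
    # default action (terminating the process) is skipped.
    if event == win32con.CTRL_LOGOFF_EVENT:
        return True
    return False

win32api.SetConsoleCtrlHandler(ctrl_handler, True)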
BTW, I use code like the snippet below to log unhandled exceptions in my scripts. This is always a good idea if the script is ever to be run unattended (or by others that you diagnose problems for :^) ).
# set up logging #####################################
import sys, os, logging

logfile = os.path.splitext(os.path.basename(sys.argv[0]))[0] + ".log"
logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    filename=logfile,
    level=logging.DEBUG)
l = logging.getLogger()

# to avoid multiple copies after restart from a pdb prompt
if len(l.handlers) <= 1:
    l.addHandler(logging.StreamHandler(sys.stdout))

# hook to log unhandled exceptions
def excepthook(type, value, traceback):
    logging.exception("Unhandled exception occurred", exc_info=(type, value, traceback))
    old_excepthook(type, value, traceback)

old_excepthook = sys.excepthook
sys.excepthook = excepthook
# ####################################################

Python, using os.system - Is there a way for Python script to move past this without waiting for call to finish?

I am trying to use Python (through the Django framework) to make a Linux command-line call, and I have tried both os.system and os.popen, but with both of them the Python script hangs after making the call, because the call instantiates a server (so it never "finishes", as it's meant to be long-running). I know that for something like this you can use something like Celery, but I figured there would be a simple way to just make a command-line call and not be "tied into it", so the script can move past it. I am wondering if I am doing something wrong... thanks for any advice.
I am making the call like this currently
os.system("command_to_start_server")
also tried:
response = os.popen("command_to_start_server")
I'm not sure, but I think the subprocess module, with its Popen, is much more flexible than os.popen. If I recall correctly, it includes asynchronous process spawning, which I think is what you're looking for.
Edit: It's been a while since I used the subprocess module, but if I'm not mistaken, subprocess.Popen returns immediately, and your program only blocks waiting for output when you try to communicate with the process (such as reading its output) via Popen.communicate.
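A minimal sketch of the non-blocking spawn (the server command is the placeholder from the question):
import subprocess

# Popen returns immediately; the server keeps running in the background.
server = subprocess.Popen(["command_to_start_server"])

# ... the rest of the script continues without waiting ...
# Calling server.communicate() or server.wait() later would block.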
You can use django-celery. django-celery provides Celery integration for Django. Celery is a task queue/job queue based on distributed message passing.
See http://ask.github.com/celery/getting-started/first-steps-with-django.html for a tutorial on how to use it.
Try:
os.system("command_to_start_server &>/dev/null &")

Killing Python webservers

I am looking for a simple Python webserver that is easy to kill from within code. Right now, I'm playing with Bottle, but I can't find any way at all to kill it in code. If you know how to kill Bottle (in code, no Ctrl+C) that would be super, but I'll take anything that's Python, simple, and killable.
We use this.
import os
os._exit(3)
To crash in a 'controlled' way.
If you want to kill a process from Python on a Unix-like platform, you can send signals equivalent to Ctrl-C at the console using Python's os module, e.g.
import os
import signal

# Get this process's PID
pid_of_process = os.getpid()
# Send the interrupt signal to this process
os.kill(pid_of_process, signal.SIGINT)
Raise an exception and handle it in main, or use sys.exit.
Try putting
import sys
at the top and the command
sys.exit(0)
in the code that handles the "kill request".
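For Bottle specifically, a hedged sketch: run the app on wsgiref's reference server, whose shutdown() method exists for exactly this, and call it from another thread inside a route (the route path and port are placeholders):
import threading
from wsgiref.simple_server import make_server

from bottle import Bottle

app = Bottle()
server = make_server('localhost', 8080, app)

@app.route('/kill')
def kill():
    # shutdown() must be called from another thread, or
    # serve_forever() would deadlock waiting on itself.
    threading.Thread(target=server.shutdown).start()
    return "Shutting down..."

server.serve_forever()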
