web2py scheduling multiple tasks - python

In my web2py app I am using the scheduler. So far I have one scheduled task, which runs a subprocess (an external exe file/application) when called from a controller.
Now I want to add another task which will do some background work.
My code in scheduler.py so far is:
def runWoshiEngine(scriptId, path):
    # import os, sys
    # import time
    import subprocess
    print "runWoshiEngine in progress......"
    p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout=subprocess.PIPE, cwd=path)
    return dict(status=1)

from gluon.scheduler import Scheduler
scheduler = Scheduler(db, heartbeat=1)
So this way the scheduler task is started on the client's request.
After starting my app I run the scheduler worker with the command:
python web2py.py -K myapp
Now I want to add another function which will run every hour to do some background work.
What would you recommend, and how do I add it to the scheduler? Whenever I add anything to the code above, my initial task (the exe app) is no longer started. (See the sketch below.)
thank you
best regards
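
For reference, a minimal sketch of one way to do this (not from the original post; the hourly function, task name, and timings are illustrative): define both functions before instantiating the Scheduler, then queue the hourly job once as a repeating task with scheduler.queue_task. The exe task can still be queued from the controller as before.

# scheduler.py -- sketch only; hourlyWork and 'hourly_work' are made-up names
def runWoshiEngine(scriptId, path):
    import subprocess
    subprocess.Popen(['woshi_engine.exe', scriptId], shell=True,
                     stdout=subprocess.PIPE, cwd=path)
    return dict(status=1)

def hourlyWork():
    # put the background work here
    return dict(status=1)

from gluon.scheduler import Scheduler
scheduler = Scheduler(db, heartbeat=1)

# queue the repeating task only once (guarded so it is not re-queued on every request)
if db(db.scheduler_task.task_name == 'hourly_work').isempty():
    scheduler.queue_task(hourlyWork,
                         task_name='hourly_work',
                         period=3600,   # run every hour
                         repeats=0,     # 0 = repeat forever
                         timeout=600)

The same worker started with python web2py.py -K myapp should then pick up both the on-demand exe task and the hourly task.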

Related

Use Systemd Watchdog with Python. Multiprocessing

How to reset Systemd Watchdog using Python? I'm implementing a watchdog for a multi-threaded picture detection software with many dependencies. Previously, the service started a shell script, but now it starts the Python file directly. However, the watchdog implementation is not functioning correctly. Is there a more effective alternative? The goal is to restart the "Picture Detection Main Application" service if the program gets stuck in a loop for 30 seconds or more.
The service file in the systemd folder:
[Unit]
Description=Picturedetection Main application
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=user
WorkingDirectory=/home/user/detection/
ExecStart=/usr/bin/python3 /home/user/detection/picturedetection.py
Environment=TF_CUDNN_USE_AUTOTUNE=0
WatchdogSec=30
Restart=always
WatchdogTimestamp=30
[Install]
WantedBy=multi-user.target
The Python main I currently use:
import sys
import syslog
from multiprocessing import Queue
from DetectionDefines import Detection_Version as OV
import time

print("OPTICONTROL START")
syslog.syslog(syslog.LOG_NOTICE, "PICTUREDETECTION START --- Version " + OV.major + "." + OV.minor)

from config.Config import Config as conf
from prediction.ImageFeed import ImageFeed
from prediction.ResultHandler import ResultHandler
from dataflow.CommServer import CommServer
from dataflow.FTLComm import FTLComm
from dataflow.MiniHTTPServer import MiniHTTPServer
from dataflow.GraphDownloader import GraphDownloader
from tools.Logger import Logger
from dataflow.FTPHandler import FTPHandler
from tools.FileJanitor import FileJanitor
from prediction.PredictionPipeline import PredictionPipeline

# Watchdog test
import os
import time
import systemd

# Communication
CommServer().start()
FTLComm()

# Experimental, not working right now. Probably even delete
test = Logger("<WATCHDOGWATCHDOG> ")

def WatchdogReset():
    test.notice("WATCHDOG has been reseted")
    with open("/dev/watchdog", "w") as f:
        f.write("1")
# End of experimental

# Other subprocesses
MiniHTTPServer().start()
FileJanitor().start()
FTPHandler().start()
GraphDownloader().start()

# Detection subprocesses
img_queue = Queue(maxsize=1)
rst_queue = Queue(maxsize=conf.result_buffer)
ImageFeed(img_queue).start()
ResultHandler(rst_queue).start()

while True:
    # CUDA / TensorFlow need to be in the main process
    PredictionPipeline(img_queue, rst_queue).predict()
    systemd.daemon.notify("WATCHDOG=1")
Additionally, I want to ensure that the program restarts if it gets stuck in an infinite loop. However, this is a multi-threaded program. Will it still be able to restart while other processes are running?
I attempted to activate the watchdog using the method above, but it seems to have no effect: the script restarts every 30 seconds. I considered the possibility of an error in my implementation, but an "os"-based check didn't resolve the issue either.
I also attempted to use a custom "FileWatchdog" that sends error messages and restarts the service by executing a shell script. However, this requires superuser rights, and I don't want to distribute software with a hardcoded password. I also believe this solution would pose a challenge in the long term.
I found the solution.
Instead I used the sdnotify library, which you can install via pip. Then I check whether the current processes are still alive.
Like this:
import sdnotify
from tools.Logger import Logger
from tools import Watchdog

test = Logger("<WATCHDOGWATCHDOG> ")
n = sdnotify.SystemdNotifier()
n.notify("READY=1")

imdfg = ImageFeed(img_queue)
rslt = ResultHandler(rst_queue)
imdfg.start()
rslt.start()

if Watchdog.check(imdfg):
    n.notify("WATCHDOG=1")
    test.notice("OPTICONTROL_WATCHDOG Reset")
time.sleep(2)

# Watchdog file
from multiprocessing import process

def check(prc):
    return prc.is_alive()
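
For context, here is a minimal, self-contained sketch of this pattern (my own consolidation, not the poster's exact code; the work function stands in for the detection processes): notify systemd once the service is ready, then keep sending WATCHDOG=1 only while the monitored process is alive, so systemd restarts the service when it dies or hangs.

import time
from multiprocessing import Process

import sdnotify

def work():
    # stand-in for ImageFeed / ResultHandler style workers
    while True:
        time.sleep(1)

if __name__ == '__main__':
    worker = Process(target=work)
    worker.start()

    n = sdnotify.SystemdNotifier()
    n.notify("READY=1")

    # pet the watchdog only while the worker is alive; the interval must be
    # comfortably shorter than WatchdogSec in the unit file
    while worker.is_alive():
        n.notify("WATCHDOG=1")
        time.sleep(2)
    # once notifications stop, systemd's watchdog will restart the service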

Can we add jobs to a running scheduler in APScheduler?

I started a BackgroundScheduler in one file and ran it. Then from another file I accessed the scheduler instance and added a job. My thought was that the instance would add the job and it would run. I am new to these scheduling mechanisms. What I did is:
In one file, Main.py:
import time
from apscheduler.schedulers.background import BackgroundScheduler

class Main:
    a = 2
    sched = BackgroundScheduler()
    sched.start()

while True:
    time.sleep(5)
In another file, Bm.py:
from Main import Main

class Bm(Main):
    def timed_job():
        print 'aa'

    Main.sched.add_job(timed_job, 'interval', seconds=1)
I thought this would work, but it did not. I need to do this from a separate file because I am building a task manager that runs jobs, and I need to be able to add or remove jobs at any time. So how can we add and remove jobs to/from a running APScheduler?
UPDATE:
This is confusing. I added a function printme to Main.py and did sched.add_job(printme, 'interval', seconds=5); it prints 'me' as expected, but when I run Bm.py it also prints 'me', when it was supposed to print 'aa'.
def printme():
    print 'me'

while True:
    # time.sleep(5)
    sched.add_job(printme, 'interval', seconds=5)
    if (input() is 'q'):
        sched.shutdown()
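
For what it's worth, a common pattern (a minimal sketch, not from this thread; the module name scheduler_holder and the job id are made up) is to keep a single module-level scheduler that every other module imports, and to add or remove jobs on it by id:

# scheduler_holder.py -- hypothetical module that owns the one scheduler instance
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()
sched.start()

# tasks.py -- any module in the same process can import that instance and manage jobs
import time
from scheduler_holder import sched

def timed_job():
    print('aa')

sched.add_job(timed_job, 'interval', seconds=1, id='timed_job')
# later, remove it by id whenever it is no longer needed:
# sched.remove_job('timed_job')

if __name__ == '__main__':
    while True:
        time.sleep(5)   # keep the process alive; jobs only run inside this process

Note that a BackgroundScheduler only runs jobs inside the process that started it. Running Bm.py as a separate script imports Main.py, which executes its module-level code (including the printme job and the blocking while loop) before Bm.py ever adds its own job, and that second process has its own independent scheduler; that is most likely why 'me' is printed instead of 'aa'.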

Launching and waiting a GUI app to finish with Python

I need to launch a GUI application, wait for the application to quit, and then start the other processes.
import subprocess
res = subprocess.check_output(["/usr/bin/open", "-a", "/Applications/Mou.app", "p.py"])
print "Finished"
... start the other processes
However, the process returns right away without waiting for the Mou.app to finish.
How can I make the Python process wait? I use Mac OS X.
According to the open man page, the -W flag causes open to wait until the app exits.
Therefore try:
import subprocess
res = subprocess.check_output(["/usr/bin/open", "-a", "-W", "/Applications/Mou.app", "p.py"])
print "Finished"

Celery auto reload on ANY changes

I want to make Celery reload itself automatically when there are changes to modules in CELERY_IMPORTS in settings.py.
I tried giving parent modules in the hope that changes in child modules would also be detected, but it did not detect changes in child modules. That makes me think the detection is not done recursively by Celery. I searched the documentation but did not find an answer to my problem.
It is really tedious to add every Celery-related module of my project to CELERY_IMPORTS just to detect changes.
Is there a way to tell Celery to "auto-reload yourself when there is any change anywhere in the project"?
Thank You!
Celery --autoreload doesn't work and it is deprecated.
Since you are using django, you can write a management command for that.
Django has autoreload utility which is used by runserver to restart WSGI server when code changes.
The same functionality can be used to reload celery workers. Create a separate management command called celery. Write a function to kill the existing worker and start a new worker. Then hook this function to autoreload as follows.
import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


def restart_celery():
    cmd = 'pkill celery'
    subprocess.call(shlex.split(cmd))
    cmd = 'celery worker -l info -A foo'
    subprocess.call(shlex.split(cmd))


class Command(BaseCommand):

    def handle(self, *args, **options):
        print('Starting celery worker with autoreload...')

        # For Django>=2.2
        autoreload.run_with_reloader(restart_celery)

        # For django<2.1
        # autoreload.main(restart_celery)
Now you can run the celery worker with python manage.py celery, which will autoreload when the codebase changes.
This is only for development purposes; do not use it in production. Code taken from my other answer here.
You can manually include additional modules with -I|--include. Combine this with GNU tools like find and awk and you'll be able to find all .py files and include them.
$ celery -A app worker --autoreload --include=$(find . -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//')
Let's explain it:
find . -name "*.py" -type f
find searches recursively for all files containing .py. The output looks something like this:
./app.py
./some_package/foo.py
./some_package/bar.py
Then:
awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=','
This line takes the output of find as input and removes all occurrences of ./. Then it replaces all / with a .. The last sub() replaces .py with an empty string. ORS replaces all newlines with ,. This outputs:
app,some_package.foo,some_package.bar,
The last command, sed removes the last ,.
So the command that is being executed looks like:
$ celery -A app worker --autoreload --include=app,some_package.foo,some_package.bar
If you have a virtualenv inside your source you can exclude it by adding -path .path_to_your_env -prune -o:
$ celery -A app worker --autoreload --include=$(find . -path .path_to_your_env -prune -o -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//')
You can use watchmedo
pip install watchdog
Start celery worker indirectly via watchmedo
watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO
More detailed
I used the watchdog watchmedo utility; it works great, but for some reason the PyCharm debugger was not able to debug the subprocess spawned by watchmedo.
So if your project has werkzeug as dependency, you can use the werkzeug._reloader.run_with_reloader function to autoreload celery worker on code change. Plus it works with PyCharm debugger.
"""
Filename: celery_dev.py
"""
import sys
from werkzeug._reloader import run_with_reloader
# this is the celery app path in my application, change it according to your project
from web.app import celery_app
def run():
# create copy of "argv" and remove script name
argv = sys.argv.copy()
argv.pop(0)
# start the celery worker
celery_app.worker_main(argv)
if __name__ == '__main__':
run_with_reloader(run)
Sample PyCharm debug configuration.
NOTE:
This is a private werkzeug API and is working as of Werkzeug==2.0.3. It may stop working in future versions. Use at your own risk.
OrangeTux's solution didn't work out for me, so I wrote a little Python script to achieve more or less the same. It monitors file changes using inotify, and triggers a celery restart if it detects an IN_MODIFY, IN_ATTRIB, or IN_DELETE.
#!/usr/bin/env python
"""Runs a celery worker, and reloads on a file change. Run as ./run_celery [directory]. If
directory is not given, default to cwd."""

import os
import sys
import signal
import time
import multiprocessing
import subprocess
import threading

import inotify.adapters

CELERY_CMD = tuple("celery -A amcat.amcatcelery worker -l info -Q amcat".split())
CHANGE_EVENTS = ("IN_MODIFY", "IN_ATTRIB", "IN_DELETE")
WATCH_EXTENSIONS = (".py",)


def watch_tree(stop, path, event):
    """
    @type stop: multiprocessing.Event
    @type event: multiprocessing.Event
    """
    path = os.path.abspath(path)

    for e in inotify.adapters.InotifyTree(path).event_gen():
        if stop.is_set():
            break

        if e is not None:
            _, attrs, path, filename = e

            if filename is None:
                continue

            # only react to files with a watched extension
            if not any(filename.endswith(ename) for ename in WATCH_EXTENSIONS):
                continue

            if any(ename in attrs for ename in CHANGE_EVENTS):
                event.set()


class Watcher(threading.Thread):
    def __init__(self, path):
        super(Watcher, self).__init__()
        self.celery = subprocess.Popen(CELERY_CMD)
        self.stop_event_wtree = multiprocessing.Event()
        self.event_triggered_wtree = multiprocessing.Event()
        self.wtree = multiprocessing.Process(
            target=watch_tree,
            args=(self.stop_event_wtree, path, self.event_triggered_wtree))
        self.wtree.start()
        self.running = True

    def run(self):
        while self.running:
            if self.event_triggered_wtree.is_set():
                self.event_triggered_wtree.clear()
                self.restart_celery()
            time.sleep(1)

    def join(self, timeout=None):
        self.running = False
        self.stop_event_wtree.set()
        self.celery.terminate()
        self.wtree.join()
        self.celery.wait()
        super(Watcher, self).join(timeout=timeout)

    def restart_celery(self):
        self.celery.terminate()
        self.celery.wait()
        self.celery = subprocess.Popen(CELERY_CMD)


if __name__ == '__main__':
    watcher = Watcher(sys.argv[1] if len(sys.argv) > 1 else ".")
    watcher.start()

    signal.signal(signal.SIGINT, lambda signal, frame: watcher.join())
    signal.pause()
You should probably change CELERY_CMD, or any other global variables.
There was an issue in @AlexTT's answer; I don't know if I should comment on his answer or post this as an answer.
You can use watchmedo
pip install watchdog
Start celery worker indirectly via watchmedo
watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery -A <app> worker --concurrency=1 --loglevel=INFO
This is the way I made it work in Django:
# worker_dev.py (put it next to manage.py)
from django.utils import autoreload


def run_celery():
    from projectname import celery_app

    celery_app.worker_main(["-Aprojectname", "-linfo", "-Psolo"])


print("Starting celery worker with autoreload...")
autoreload.run_with_reloader(run_celery)
Then run python worker_dev.py. This has the advantage of working inside a Docker container.
This is a huge adaptation from Suor's code.
I made a custom Django command which can be called like this:
python manage.py runcelery
So, every time the code changes, celery's main process is gracefully killed and then executed again.
Change the CELERY_COMMAND variable as you wish.
# File: runcelery.py
import os
import signal
import subprocess
import time

import psutil
from django.core.management.base import BaseCommand
from django.utils import autoreload

DELAY_UNTIL_START = 5.0
CELERY_COMMAND = 'celery --config my_project.celeryconfig worker --loglevel=INFO'


class Command(BaseCommand):
    help = ''

    def kill_celery(self, parent_pid):
        os.kill(parent_pid, signal.SIGTERM)

    def run_celery(self):
        time.sleep(DELAY_UNTIL_START)
        subprocess.run(CELERY_COMMAND.split(' '))

    def get_main_process(self):
        for process in psutil.process_iter():
            if process.ppid() == 0:  # PID 0 has no parent
                continue
            parent = psutil.Process(process.ppid())
            if process.name() == 'celery' and parent.name() == 'celery':
                return parent
        return

    def reload_celery(self):
        parent = self.get_main_process()
        if parent is not None:
            self.stdout.write('[*] Killing Celery process gracefully..')
            self.kill_celery(parent.pid)
        self.stdout.write('[*] Starting Celery...')
        self.run_celery()

    def handle(self, *args, **options):
        autoreload.run_with_reloader(self.reload_celery)

Using a Python script to start and stop the Google App Engine dev_appserver during continuous integration testing

I'm trying to write a Python script that will enable me to start the Google App Engine dev_appserver using coverage.py, fetch the /test url from the app that I launch, wait for the server to finish returning the page, then shutdown the dev_appserver, and then generate a report.
My challenge is how to launch the dev_appserver in the background so that I can do the http fetch and then how to shut down the dev_appserver before generating my report.
I'm heading towards something like this:
# get_gae_coverage.py
# Launch dev_appserver with coverage.py
coverage run --source=./ /usr/local/bin/dev_appserver.py --clear_datastore --use_sqlite .
#Fetch /test
urllib.urlopen('http://localhost:8080/test').read()
# Shutdown dev_appserver somehow
# ??
# Generate coverage report
coverage report
What is the best way to write a python script to do this?
You should go with subprocess.Popen:
import os
import signal
import subprocess
import time
import urllib

# your_flag_list is a placeholder for the coverage/dev_appserver flags you need
coverage_proc = subprocess.Popen(
    ['coverage', 'run'] + your_flag_list,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)

time.sleep(5)  # Find the correct sleep value

urllib.urlopen('http://localhost:8080/test').read()
time.sleep(1)

os.kill(coverage_proc.pid, signal.SIGINT)
Here you can find another approach to test if the server is up and running:
line = proc.stdout.readline()
while '] Running application' not in line:
    line = proc.stdout.readline()
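
Alternatively (my own sketch, not part of the original answer; wait_for_server is a made-up helper), you can poll the /test URL until the dev server accepts connections instead of guessing a sleep value:

# Python 3 sketch: retry until the dev server answers, then return the response body
import time
import urllib.request
from urllib.error import URLError

def wait_for_server(url, timeout=30.0, interval=0.5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            return urllib.request.urlopen(url).read()
        except URLError:
            time.sleep(interval)   # server not accepting connections yet
    raise RuntimeError("dev_appserver did not come up in time")

wait_for_server('http://localhost:8080/test')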
Threading is one way to accomplish this kind of task: start the dev_appserver in a thread (or in the main thread) and, while it is running, run and collect the results using the coverage module; then kill the dev_appserver Python process from another thread and you will have the results from coverage.
Here is a sample snippet which runs dev_appserver.py in a thread and then kills the Python process after waiting 10 seconds. You can modify the end method so that, instead of waiting 10 seconds, it waits just a few seconds (to let the Python process start), then runs the coverage testing, and once that is done kills the appserver and finishes coverage.
import threading
import subprocess
import time

hold_process = []

def start():
    print 'In the start process'
    proc = subprocess.Popen(['/usr/bin/python', 'dev_appserver.py', 'yourapp'])
    hold_process.append(proc)

def end():
    time.sleep(10)
    proc = hold_process.pop(0)
    print 'Killing the appserver process'
    proc.kill()

t = threading.Thread(name='startprocess', target=start)
t.daemon = True
w = threading.Thread(name='endprocess', target=end)

t.start()
w.start()

t.join()
w.join()
