Django Celery: Function only runs once within a while True loop

Background: I am using Celery to run a Python script within a Django environment to measure weight from a load cell using a Raspberry Pi. I am measuring a bottle of water and relaying the readings to the web server.
The Python code below runs perfectly when run outside of Celery, but when it is run by a Celery worker, the value returned by the hx.get_weight(5) call within the while loop never changes after the first iteration.
The code runs fine and hx.get_weight(5) returns a different value every time (the weight is changing) when not run by a Celery worker, so I believe Celery is causing the issue.
import logging
import sys
import time

import RPi.GPIO as GPIO
from celery import Celery

from .hx711 import HX711

app = Celery('robobud', broker='redis://localhost:6379/0')
log = logging.getLogger(__name__)

@app.task
def pump(program_id, amount1, amount2):
    def cleanAndExit():
        print("Cleaning...")
        GPIO.cleanup()
        print("Bye!")
        sys.exit()

    ### Weight class
    hx = HX711(5, 6)
    hx.set_reading_format("LSB", "MSB")
    hx.set_reference_unit(112)
    hx.reset()
    hx.tare()

    ### Weight
    while True:
        try:
            val = hx.get_weight(5)
            print(val)
            hx.power_down()
            hx.power_up()
            time.sleep(1)
        except (KeyboardInterrupt, SystemExit):
            cleanAndExit()
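For what it's worth, a task defined with @app.task like the one above is dispatched to the worker asynchronously; a minimal sketch of the call site (the argument values are placeholders, not from the original post):

# Send the task to the Celery worker via the Redis broker;
# the worker executes pump() in its own process.
pump.delay(program_id=1, amount1=250, amount2=250)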

Related

Use Systemd Watchdog with Python. Multiprocessing

How to reset Systemd Watchdog using Python? I'm implementing a watchdog for a multi-threaded picture detection software with many dependencies. Previously, the service started a shell script, but now it starts the Python file directly. However, the watchdog implementation is not functioning correctly. Is there a more effective alternative? The goal is to restart the "Picture Detection Main Application" service if the program gets stuck in a loop for 30 seconds or more.
Below is the service file from the systemd folder:
[Unit]
Description=Picturedetection Main application
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=user
WorkingDirectory=/home/user/detection/
ExecStart=/usr/bin/python3 /home/user/detection/picturedetection.py
Environment=TF_CUDNN_USE_AUTOTUNE=0
WatchdogSec=30
Restart=always
WatchdogTimestamp=30
[Install]
WantedBy=multi-user.target
Below is the Python main I currently use:
import sys
import syslog
from multiprocessing import Queue
from DetectionDefines import Detection_Version as OV
import time

print("OPTICONTROL START")
syslog.syslog(syslog.LOG_NOTICE, "PICTUREDETECTION START --- Version " + OV.major + "." + OV.minor)

from config.Config import Config as conf
from prediction.ImageFeed import ImageFeed
from prediction.ResultHandler import ResultHandler
from dataflow.CommServer import CommServer
from dataflow.FTLComm import FTLComm
from dataflow.MiniHTTPServer import MiniHTTPServer
from dataflow.GraphDownloader import GraphDownloader
from tools.Logger import Logger
from dataflow.FTPHandler import FTPHandler
from tools.FileJanitor import FileJanitor
from prediction.PredictionPipeline import PredictionPipeline

# Watchdog test
import os
import systemd.daemon

# Communication
CommServer().start()
FTLComm()

# Experimental -- not working right now. Probably even delete
test = Logger("<WATCHDOGWATCHDOG> ")
def WatchdogReset():
    test.notice("WATCHDOG has been reset")
    with open("/dev/watchdog", "w") as f:
        f.write("1")
# End of experimental

# Other subprocesses
MiniHTTPServer().start()
FileJanitor().start()
FTPHandler().start()
GraphDownloader().start()

# Detection subprocesses
img_queue = Queue(maxsize = 1)
rst_queue = Queue(maxsize = conf.result_buffer)
ImageFeed(img_queue).start()
ResultHandler(rst_queue).start()

while True:
    # CUDA / TensorFlow need to be in the main process
    PredictionPipeline(img_queue, rst_queue).predict()
    systemd.daemon.notify("WATCHDOG=1")
Additionally, I want to ensure that the program restarts if it gets stuck in an infinite loop. However, this is a multi-threaded program. Will it still be able to restart while other processes are running?
I attempted to activate the watchdog using this method, but it seems to have no effect: the script restarts every 30 seconds. I considered the possibility of an error in my implementation, but querying via "os" didn't resolve the issue.
I also attempted to use a custom "FileWatchdog" that sends error messages and restarts the service by executing a shell script. However, this requires superuser rights, and I don't want to distribute software with a hardcoded password. I also believe this solution would pose a challenge in the long term.
I found the solution.
Instead, I used the sdnotify library, which you can install via pip. Then I checked whether the current processes are still alive, like this:
import time

import sdnotify
from tools.Logger import Logger
from tools import Watchdog

test = Logger("<WATCHDOGWATCHDOG> ")
n = sdnotify.SystemdNotifier()
n.notify("READY=1")

imdfg = ImageFeed(img_queue)
rslt = ResultHandler(rst_queue)
imdfg.start()
rslt.start()

# Pet the watchdog periodically, but only while the feed process is alive
while True:
    if Watchdog.check(imdfg):
        n.notify("WATCHDOG=1")
        test.notice("OPTICONTROL_WATCHDOG Reset")
    time.sleep(2)

# Watchdog file (tools/Watchdog.py)
def check(prc):
    return prc.is_alive()
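For anyone adapting this: below is a self-contained sketch of the same pattern, reduced to a single monitored process (the worker function is a placeholder, not part of the original code). Note that for systemd to accept the READY/WATCHDOG messages, the unit usually needs Type=notify (or an explicit NotifyAccess=) alongside WatchdogSec=.

import time
import sdnotify
from multiprocessing import Process

def worker():
    while True:
        time.sleep(1)  # stand-in for the real detection work

if __name__ == "__main__":
    n = sdnotify.SystemdNotifier()
    n.notify("READY=1")  # tell systemd that startup is complete
    prc = Process(target=worker, daemon=True)
    prc.start()
    while True:
        if prc.is_alive():
            n.notify("WATCHDOG=1")  # pet the watchdog only while the worker lives
        time.sleep(2)  # must be well below WatchdogSec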

Prefect Python program runs before scheduled time

My program always runs on startup instead of at the scheduled time. Maybe I'm just misunderstanding something, but I can't say why it's doing this.
from prefect import flow, task, get_run_logger
from prefect.deployments import Deployment
from prefect.orion.schemas.schedules import (
    CronSchedule,
    IntervalSchedule,
    RRuleSchedule,
)

@task(retries=2, retry_delay_seconds=5)
def say_hello():
    print("hello")
    return True

@flow(name="leonardo_dicapriflow")
def leonardo_dicapriflow(name: str):
    say_hello()
    return

deployment = Deployment.build_from_flow(
    flow=leonardo_dicapriflow,
    name="email-deployment",
    version=1,
    tags=["demo"],
    schedule=CronSchedule(cron="55 11 3 10 *"),
)
deployment.apply()
leonardo_dicapriflow("Leo")
[Photo of my error screen]
Just remove the last line of the code and it will be fine:
leonardo_dicapriflow("Leo")
Apparently the flow was running twice, and one of the runs happened to be on startup.
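In other words, the script should end after applying the deployment; a minimal sketch of the corrected tail of the script (identical to the original apart from dropping the direct call):

deployment = Deployment.build_from_flow(
    flow=leonardo_dicapriflow,
    name="email-deployment",
    version=1,
    tags=["demo"],
    schedule=CronSchedule(cron="55 11 3 10 *"),
)
deployment.apply()
# No direct call to leonardo_dicapriflow() here -- the agent
# runs the flow at the scheduled time instead.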

Can we add jobs to a running scheduler in APScheduler?

I started a BackgroundScheduler in one file and ran it. Then, from another file, I accessed the scheduler instance and added a job. My thought was that the instance would add the job and the job would run. I am new to these scheduling mechanisms. What I did is:
In one file, Main.py:
import time
from apscheduler.schedulers.background import BackgroundScheduler

class Main:
    a = 2
    sched = BackgroundScheduler()
    sched.start()

    while True:
        time.sleep(5)
From the other file, Bm.py:
from Main import Main

class Bm(Main):
    def timed_job():
        print('aa')

    Main.sched.add_job(timed_job, 'interval', seconds=1)
I thought this would work, but it did not. I need to do it this way from a separate file because I am building a task manager that runs jobs, and I need to be able to add or remove jobs at any time. So how can we add and remove jobs to/from a running APScheduler?
UPDATE :
This is confusing. I added a function printme to Main.py and did sched.add_job(printme, 'interval', seconds=5); it prints "me" as expected, but when I run Bm.py it also prints "me", when it was supposed to print "aa".

def printme():
    print('me')

while True:
    # time.sleep(5)
    sched.add_job(printme, 'interval', seconds=5)
    if input() == 'q':
        sched.shutdown()
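For reference, APScheduler does allow adding and removing jobs on a scheduler that is already running; a minimal sketch (independent of the files above) using an explicit job id:

import time
from apscheduler.schedulers.background import BackgroundScheduler

def tick():
    print('tick')

scheduler = BackgroundScheduler()
scheduler.start()

# Add a job to the already-running scheduler, keyed by an explicit id.
scheduler.add_job(tick, 'interval', seconds=1, id='tick_job')
time.sleep(5)

# Remove it again by the same id, then stop the scheduler.
scheduler.remove_job('tick_job')
scheduler.shutdown()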

Django Celery - Passing an object to the views and between tasks using RabbitMQ

This is the first time I'm using Celery, and honestly, I'm not sure I'm doing it right. My system has to run on Windows, so I'm using RabbitMQ as the broker.
As a proof of concept, I'm trying to create a single object where one task sets the value, another task reads the value, and I also want to show the current value of the object when I go to a certain url. However I'm having problems sharing the object between everything.
This is my celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cesGroundStation.settings')
app = Celery('cesGroundStation')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
The object I'm trying to share is:
class SchedulerQ():
    item = 0

    def setItem(self, item):
        self.item = item

    def getItem(self):
        return self.item
This is my tasks.py
from celery import shared_task
from time import sleep
from scheduler.schedulerQueue import SchedulerQ

schedulerQ = SchedulerQ()

@shared_task()
def SchedulerThread():
    print("Starting Scheduler")
    counter = 0
    while(1):
        counter += 1
        if(counter > 100):
            counter = 0
        schedulerQ.setItem(counter)
        print("In Scheduler thread - " + str(counter))
        sleep(2)
    print("Exiting Scheduler")

@shared_task()
def RotatorsThread():
    print("Starting Rotators")
    while(1):
        item = schedulerQ.getItem()
        print("In Rotators thread - " + str(item))
        sleep(2)
    print("Exiting Rotators")

@shared_task()
def setSchedulerQ(schedulerQueue):
    schedulerQ = schedulerQueue

@shared_task()
def getSchedulerQ():
    return schedulerQ
I'm starting my tasks in my apps.py. I'm not sure if this is the right place, as the tasks/workers don't seem to work until I start the workers in a separate console where I run celery -A cesGroundStation -l info.
from django.apps import AppConfig
from scheduler.schedulerQueue import SchedulerQ
from scheduler.tasks import SchedulerThread, RotatorsThread, setSchedulerQ, getSchedulerQ

class SchedulerConfig(AppConfig):
    name = 'scheduler'

    def ready(self):
        schedulerQ = SchedulerQ()
        setSchedulerQ.delay(schedulerQ)
        SchedulerThread.delay()
        RotatorsThread.delay()
In my views.py I have this:
def schedulerQ():
    queue = getSchedulerQ.delay()
    return HttpResponse("Your list: " + queue)
The Django app runs without errors; however, this is my output from celery -A cesGroundStation -l info: [screenshot of the Celery command output]
First, it seems to start multiple "SchedulerThread" tasks; secondly, the "SchedulerQ" object isn't being passed to the Rotators, as it never reads the updated value.
And if I go to the URL that shows the views.schedulerQ view, I get this error: [screenshot of the Django views error]
I have very, very little experience with Python, Django, and web development in general, so I have no idea where to start with that last error. Solutions suggest using Redis to pass the object to the views, but I don't know how I'd do that using RabbitMQ. Later on, the schedulerQ object will implement a queue, and the scheduler and rotators will act more as a producer/consumer pair, with the view showing the contents of the queue, so I believe using the database might be too resource-intensive. How can I share this object across all tasks, and is this even the right approach?
The right approach is to use a persistence layer, such as a database or a result backend, to store the information you want to share between tasks (in this example, what you are currently putting in your class).
Celery operates on a distributed message-passing paradigm. A good way to distill that idea for this example: your module is executed independently every time a task is dispatched. Whenever a task is dispatched to Celery, you must assume it is running in a separate interpreter, loaded independently of other tasks, so that SchedulerQ class is instantiated anew each time.
You can share information between tasks in the ways described in the docs linked previously, and some best-practice tips discuss data persistence concerns.
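To make that concrete, here is a minimal sketch of sharing a value between tasks through Django's cache framework (this assumes a shared cache backend such as Redis or Memcached is configured in settings; the key name is made up for illustration):

from celery import shared_task
from django.core.cache import cache

@shared_task
def set_item(value):
    # Persist the value where every worker process can see it.
    cache.set('scheduler_item', value, timeout=None)

@shared_task
def get_item():
    # Any worker (or a Django view) can read the shared value back.
    return cache.get('scheduler_item', 0)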

Python BackgroundScheduler program crashing when run from another module

I am trying to build an application that will run a bash script every 10 minutes. I am using apscheduler to accomplish this, and when I run my code from the terminal it works like clockwork. However, when I try to run the code from another module it crashes. I suspect that the calling module is waiting for the "schedule" module to finish and then crashes when that never happens.
Error code
/bin/bash: line 1: 13613 Killed ( python ) < /tmp/vIZsEfp/26
shell returned 137
The function that calls the scheduler:
def shedual_toggled(self, widget):
    prosessSchedular.start_background_checker()
The scheduler program:
counter = 0

def schedul_check():
    """Set up to call the process checker every 10 mins."""
    global counter
    print("%s check ran" % (counter))
    counter += 1
    app = prosessCheckerv3.call_bash()  # calls the bash file
    if app == False:
        print("error with bash")
        return False
    else:
        prosessCheckerv3.build_snap_shot(app)

def start_background_checker():
    scheduler = BackgroundScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()
    while True:
        time.sleep(2)

if __name__ == '__main__':
    start_background_checker()
This program simply calls another one every 10 minutes. As a side note, I have been trying to stay as far away from multi-threading as possible, but if that is required, so be it.
Well, I managed to figure it out myself. The issue is that GTK+ is not thread-safe, so the timed module either needs to run in another thread, or you can release/enter the GTK thread lock before/after calling the module.
I just did it like this:
def shedual_toggeld(self, widget):
    onOffSwitch = widget.get_active()
    # After GTK has logically finished all GUI work, run a thread for the toggle button
    thread = threading.Thread(target=self.call_schedual, args=(onOffSwitch,))
    thread.daemon = True
    thread.start()

def call_schedual(self, onOffSwitch):
    if onOffSwitch == True:
        self.sch.start_background_checker()
    else:
        self.sch.stop_background_checker()
This article goes through it in more detail; hopefully someone else will find it useful.
http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness
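As a complement to the article: if the background thread ever needs to touch the GUI (the reverse direction of the problem above), the usual pattern is to hand the update back to the GTK main loop instead of calling GTK from the thread; a minimal PyGObject sketch (the label parameter stands in for a real widget):

import threading
from gi.repository import GLib

def update_label(label, text):
    # Runs inside the GTK main loop, so touching the widget is safe here.
    label.set_text(text)
    return False  # returning False removes the idle callback

def start_background_work(label):
    def work():
        result = "done"  # stand-in for the real, long-running job
        # Never touch GTK widgets from this thread; hand the
        # update back to the main loop instead.
        GLib.idle_add(update_label, label, result)
    threading.Thread(target=work, daemon=True).start()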
