Prefect Python program runs before scheduled time

My program always runs on startup instead of at the scheduled time. Maybe I'm just misunderstanding something, but I can't tell why it's doing this.
from prefect import flow, task, get_run_logger
from prefect.deployments import Deployment
from prefect.orion.schemas.schedules import (
    CronSchedule,
    IntervalSchedule,
    RRuleSchedule,
)

@task(retries=2, retry_delay_seconds=5)
def say_hello():
    print("hello")
    return True

@flow(name="leonardo_dicapriflow")
def leonardo_dicapriflow(name: str):
    say_hello()
    return

deployment = Deployment.build_from_flow(
    flow=leonardo_dicapriflow,
    name="email-deployment",
    version=1,
    tags=["demo"],
    schedule=CronSchedule(cron="55 11 3 10 *"),
)
deployment.apply()
leonardo_dicapriflow("Leo")
(Photo of the error screen omitted.)

Remove the last line of the code and it will be fine:
leonardo_dicapriflow("Leo")
Apparently the flow was running twice: once for the scheduled deployment, and once for the direct call at the bottom of the file, which executed immediately at startup.
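The "running twice" behavior makes sense once you notice that a call placed at module level executes the moment the script starts, independently of whatever schedule the deployment registers. A minimal stdlib illustration of that distinction (no Prefect involved; all names here are hypothetical):

```python
runs = []  # records each execution of the "flow"

def my_flow():
    runs.append("ran")

# Registering a schedule is just bookkeeping -- nothing runs yet.
schedule = {"cron": "55 11 3 10 *", "flow": my_flow}
print(len(runs))  # 0: registration alone does not execute the flow

# A direct call at the bottom of the file, however, runs immediately at startup.
my_flow()
print(len(runs))  # 1: this is the extra "startup" run
```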

Related

Django Celery: function only runs once within a while True loop

Background: I am using Celery to run a Python script within a Django environment to measure weight from a load cell using a Raspberry Pi. I am measuring a bottle of water, and the script relays the information to the web server.
The Python code below runs perfectly when run in a Python environment outside of Celery, but when it is run by a Celery worker, the value from the hx.get_weight(5) call within the while loop never changes after the first iteration.
This code runs fine and hx.get_weight(5) returns a different value every time (the weight is changing) when not run by a Celery worker. So, I believe it is Celery that is causing this issue.
import logging
import sys
import time

import RPi.GPIO as GPIO
from celery import Celery

from .hx711 import HX711, L287

app = Celery('robobud', broker='redis://localhost:6379/0')
log = logging.getLogger(__name__)

@app.task
def pump(program_id, amount1, amount2):
    def cleanAndExit():
        print("Cleaning...")
        GPIO.cleanup()
        print("Bye!")
        sys.exit()

    ### Weight class
    hx = HX711(5, 6)
    hx.set_reading_format("LSB", "MSB")
    hx.set_reference_unit(112)
    hx.reset()
    hx.tare()

    ### Weight
    while True:
        try:
            val = hx.get_weight(5)
            print(val)
            hx.power_down()
            hx.power_up()
            time.sleep(1)
        except (KeyboardInterrupt, SystemExit):
            cleanAndExit()

Multithreading in AWS Lambda using Python 3

I am trying to implement multithreading in AWS Lambda. This is sample code that mirrors the structure of the original code I am trying to execute in Lambda.
import threading
import time

def this_will_await(arg, arg2):
    print("Hello User")
    print(arg, arg2)

def this_should_start_then_wait():
    print("This starts")
    timer = threading.Timer(3.0, this_will_await, ["b", "a"])
    timer.start()
    print("This should execute")

this_should_start_then_wait()
On my local machine, this code works fine. The output I receive is:
This starts
This should execute
.
.
.
Hello User
('b', 'a')
Those three dots represent the roughly 3 seconds it waited before completing the execution.
Now when I execute the same thing in AWS Lambda, I only receive:
This starts
This should execute
I think it's not calling the this_will_await() function.
Have you tried adding timer.join()? You'll need to join the Timer thread because otherwise the Lambda environment will kill off the thread when the parent thread finishes.
This code in a Lambda function:
import threading
import time

def this_will_await(arg, arg2):
    print("Hello User")
    print(arg, arg2)

def this_should_start_then_wait():
    print("This starts")
    timer = threading.Timer(3.0, this_will_await, ["b", "a"])
    timer.start()
    timer.join()
    print("This should execute")

def lambda_handler(event, context):
    return this_should_start_then_wait()
Produces this output:
This starts
Hello User
b a
This should execute
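An alternative that avoids managing join() by hand is the stdlib concurrent.futures module, where calling .result() on a future blocks until the worker finishes, so the Lambda environment cannot freeze or kill the thread mid-flight. A sketch of the same flow under that approach (function names mirror the example above; the delay is shortened from 3 seconds for brevity):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def this_will_await(arg, arg2):
    print("Hello User")
    print(arg, arg2)
    return arg + arg2

def delayed():
    time.sleep(0.1)  # stands in for the Timer's 3-second delay (shortened here)
    return this_will_await("b", "a")

def this_should_start_then_wait():
    print("This starts")
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(delayed)
        result = future.result()  # blocks until the worker is done, like timer.join()
    print("This should execute")
    return result

print(this_should_start_then_wait())
```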

Python script restarts itself. Why?

I have a Python script that runs for around 16 hours a day and schedules multiple specific jobs at specific times of the day. Every day, usually after running for a few hours (maybe 7), it all of a sudden starts a second instance of itself and schedules all the jobs a second time, meaning that every single job gets executed twice consecutively.
I am using Python 3.6.3, apscheduler, sqlalchemy and a few other libraries. I was thinking that maybe my misunderstanding of BackgroundScheduler may be the problem, or the while loop at the end of the following code sample:
Here is a simplified code sample.
import sys
import time
from datetime import datetime, timedelta, date

import telepot
from apscheduler.schedulers.background import BackgroundScheduler
from sqlalchemy import create_engine, asc
from sqlalchemy.orm import sessionmaker

... (code omitted for readability)
EDIT: (addition of the jobA method)

def jobA(Session, var):
    print("job A")
    session = Session()
    entity = session.query(Entity).filter(...).first()
    try:
        all = method(entity, ...)
    except:
        print("Exception was thrown")
        session.commit()
        session.close()
        return []
    session.commit()
    session.close()
    return all

def jobB(...):
    ...

def jobC(...):
    ...
if __name__ == "__main__":
    ENGINE = create_engine(DB)
    Session = sessionmaker(bind=ENGINE)
    Loggers.http_console_debug()
    ...
    TelegramBot = telepot.Bot(Constants.BOT_TOKEN)
    TelegramBot.sendMessage(chat_id=Constants.CHAT_ID, text="Started\n")
    ...
    try:
        scheduler = BackgroundScheduler(use_reloader=False)
        scheduler.start()
        i = 1
        for var in data:
            i += 1
            ...
            start_date_time = datetime.combine(var.date, var.time)
            scheduler.add_job(jobA, 'date',
                              run_date=start_date_time - timedelta(minutes=15), args=[Session, var])
            scheduler.add_job(jobB, 'date',
                              run_date=start_date_time - timedelta(minutes=10), args=[Session, var])
            scheduler.add_job(jobC, 'date',
                              run_date=start_date_time - timedelta(minutes=3), args=[Session, var])
            scheduler.add_job(jobD, 'date',
                              run_date=start_date_time + timedelta(minutes=11), args=[Session, var])
            print(scheduler.get_jobs())
    except:
        print("Exception has been thrown")
        session.commit()
        session.close()
    while True:
        time.sleep(1)
        sys.stdout.flush()
This happens both when I run it from the command line and when I run it from PyCharm. Running pgrep gives interesting results:
pgrep python
After the program has been started, it returns a single process number. However, after a few hours it returns two separate processes, meaning the Python script is now running twice, with no input from anybody.
In short, my Python script starts a second time for no reason while it is running, and I am trying to figure out how to prevent this from happening, so that the script runs only once in that period and schedules its jobs only once.
Thank you so much.
EDIT: Honestly, I don't know what causes this script to start on its own a second time in a separate process. I found this as an example of the use of apscheduler, and I am looking into fixing this bug of it restarting itself.
Here is a link that I used as inspiration for using APScheduler (date-based scheduler):
https://pythonadventures.wordpress.com/2013/08/06/apscheduler-examples/
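Whatever launches the second interpreter (a supervisor restart, an IDE reloader, a duplicate cron entry) has to be tracked down outside Python, but the script can at least defend itself against duplicate registration: APScheduler's add_job accepts an id= and replace_existing=True, so a second scheduling pass replaces a job instead of duplicating it. The same dedup idea, sketched with only the stdlib sched module (all names here are hypothetical):

```python
import sched
import time

registry = {}  # job_id -> queued event, so re-scheduling replaces instead of duplicating
s = sched.scheduler(time.time, time.sleep)

def add_job_once(job_id, delay, fn):
    """Cancel any previously queued event with this id, then queue the new one."""
    old = registry.get(job_id)
    if old is not None and old in s.queue:
        s.cancel(old)
    registry[job_id] = s.enter(delay, 1, fn)

ran = []
# Simulate the same job being scheduled twice (e.g. by a second startup pass).
add_job_once("jobA", 0.01, lambda: ran.append("jobA"))
add_job_once("jobA", 0.01, lambda: ran.append("jobA"))
s.run()
print(ran)  # the duplicate registration was replaced, so jobA ran once
```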

Can we add jobs to a running scheduler in APScheduler?

I started a BackgroundScheduler in one file and ran it. Then from another file I accessed the scheduler instance and added a job. My thought was that the instance would add the job and it would run. I am new to these scheduling mechanisms. What I did is:
In one file, Main.py:
import time
from apscheduler.schedulers.background import BackgroundScheduler

class Main:
    a = 2
    sched = BackgroundScheduler()
    sched.start()

while True:
    time.sleep(5)
From the other file, Bm.py:
from Main import Main

class Bm(Main):
    def timed_job():
        print('aa')

Main.sched.add_job(timed_job, 'interval', seconds=1)
I thought this would work, but it did not. I need to do it this way from a separate file because I am building a task manager that runs jobs, and I need to be able to add or remove jobs at any time. So how can we add and remove jobs to/from a running APScheduler?
UPDATE:
This is confusing. I added a function printme to Main.py and did sched.add_job(printme, 'interval', seconds=5); it prints 'me' as expected, but when I run Bm.py it also prints 'me', when it was supposed to print 'aa'.
def printme():
    print('me')

while True:
    # time.sleep(5)
    sched.add_job(printme, 'interval', seconds=5)
    if input() == 'q':
        sched.shutdown()
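One likely reason Bm.py never gets to add its job: importing Main executes Main.py's module-level code, including the while True loop, so the add_job line below the import is probably never reached. The usual pattern is one shared scheduler object that stays alive in a background thread while callers add and remove jobs by id (APScheduler supports exactly this: add_job(..., id=...) and scheduler.remove_job(job_id), and add_job also returns a Job handle with a .remove() method). The shape of such a manager, sketched with only the stdlib (all names here are hypothetical):

```python
import threading
import time

class TaskManager:
    """Minimal stand-in for a running scheduler that accepts add/remove at any time."""

    def __init__(self):
        self.jobs = {}                    # job_id -> callable
        self.lock = threading.Lock()
        self.stop = threading.Event()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while not self.stop.is_set():
            with self.lock:
                current = list(self.jobs.values())
            for fn in current:            # run every registered job each tick
                fn()
            time.sleep(0.01)

    def add_job(self, job_id, fn):
        with self.lock:
            self.jobs[job_id] = fn

    def remove_job(self, job_id):
        with self.lock:
            self.jobs.pop(job_id, None)

    def shutdown(self):
        self.stop.set()
        self.thread.join()

mgr = TaskManager()
hits = []
mgr.add_job("aa", lambda: hits.append("aa"))   # added while the loop is running
time.sleep(0.05)
mgr.remove_job("aa")                           # removed while the loop is running
mgr.shutdown()
print(len(hits) > 0)  # True: the job ran between add and remove
```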

Python BackgroundScheduler program crashing when ran from another module

I am trying to build an application that will run a bash script every 10 minutes. I am using apscheduler to accomplish this, and when I run my code from the terminal it works like clockwork. However, when I try to run the code from another module it crashes. I suspect that the calling module is waiting for the "schedule" module to finish, and then crashes when that never happens.
Error code
/bin/bash: line 1: 13613 Killed ( python ) < /tmp/vIZsEfp/26
shell returned 137
Function that calls the scheduler:

def shedual_toggled(self, widget):
    prosessSchedular.start_background_checker()

Schedule program:

def schedul_check():
    """Set up to call the process checker every 10 mins."""
    print("%s check ran" % counter)
    counter += 1
    app = prosessCheckerv3.call_bash()  # calls the bash file
    if app == False:
        print("error with bash")
        return False
    else:
        prosessCheckerv3.build_snap_shot(app)

def start_background_checker():
    scheduler = BackgroundScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()
    while True:
        time.sleep(2)

if __name__ == '__main__':
    start_background_checker()
This program simply calls another one every 10 minutes. As a side note, I have been trying to stay as far away from multithreading as possible, but if that is required, so be it.
Well, I managed to figure it out myself. The issue is that GTK+ is not thread-safe, so the timed module either needs to be run in another thread, or else you can release/enter the GTK thread lock before/after calling the module.
I just did it like this.
def shedual_toggeld(self, widget):
    onOffSwitch = widget.get_active()
    """ After main GTK has logically finished all GUI work, run thread on toggle button """
    thread = threading.Thread(target=self.call_schedual, args=(onOffSwitch,))
    thread.daemon = True
    thread.start()

def call_schedual(self, onOffSwitch):
    if onOffSwitch == True:
        self.sch.start_background_checker()
    else:
        self.sch.stop_background_checker()
This article goes through it in more detail. Hopefully someone else will find this useful.
http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness
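The start/stop pair used above (start_background_checker / stop_background_checker) can be built on threading.Event so the worker thread exits cleanly instead of being killed with the process. A minimal sketch (the class and method names are hypothetical; the real code would invoke the bash script where the tick counter is incremented, and use a 10-minute interval rather than the short one used here for illustration):

```python
import threading
import time

class BackgroundChecker:
    def __init__(self, interval=0.01):
        self.interval = interval
        self._stop = threading.Event()
        self._thread = None
        self.ticks = 0

    def _run(self):
        # wait() doubles as an interruptible sleep: it returns True (and ends
        # the loop) as soon as stop is requested, instead of sleeping blindly.
        while not self._stop.wait(self.interval):
            self.ticks += 1  # real code would call the bash script here

    def start_background_checker(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def stop_background_checker(self):
        self._stop.set()
        self._thread.join()

checker = BackgroundChecker()
checker.start_background_checker()
time.sleep(0.05)
checker.stop_background_checker()
print(checker.ticks > 0)  # True: the loop ran until stop was requested
```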
