Python Schedule not working in Flask

I am importing Schedule into a Flask project. The project uses WSGI, but I know little about the relationship between Flask and WSGI. I have three main files:
wsgi.py: automatically generated by another tool.
app.py: where I handle client requests.
test.py: used to test Schedule.
I want to start a long-running task when the server launches. Here is the relevant part of wsgi.py:
# -*- coding: utf-8 -*-
from threading import Thread
import test

t = Thread(target=test.job)
t.start()

if __name__ == '__main__':
    ...
As you can see, I start a thread and let the job run in it. Here is my test.py:
import schedule

def job():
    schedule.every(1).seconds.do(pr)

def pr():
    print("I'm working...")
My problem is that the job never starts.

I found my problem: I never let schedule execute the pending jobs. Now wsgi.py looks like this:
# -*- coding: utf-8 -*-
from threading import Thread
import schedule  # needed for schedule.every() below
import test

schedule.every(1).seconds.do(test.job)
t = Thread(target=test.run_schedule)
t.start()

if __name__ == '__main__':
    ...
And test.py:
import schedule
import time

start_time = time.time()

def job():
    print("I'm working..." + str(time.time() - start_time))

def run_schedule():
    while True:
        schedule.run_pending()
        time.sleep(1)
To run the scheduler in a separate thread, I create a thread whose loop wakes up every second. On each pass, run_pending() executes any job whose interval has elapsed (in my case one second).
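For completeness, here is a minimal self-contained sketch of the same pattern next to a Flask app (the route and app name are illustrative, not from the original post):

import threading
import time

import schedule
from flask import Flask

app = Flask(__name__)

def job():
    print("I'm working...")

def run_schedule():
    # check for pending jobs roughly once per second
    while True:
        schedule.run_pending()
        time.sleep(1)

schedule.every(1).seconds.do(job)

# daemon=True so the thread dies together with the server process
t = threading.Thread(target=run_schedule, daemon=True)
t.start()

@app.route('/')
def index():
    return 'ok'

if __name__ == '__main__':
    app.run()

Note that under a multi-worker WSGI server this thread starts once per worker process, so the job may run more often than expected.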

Related

How to parse 100 JSON files from Wireshark?

I need to run one program 100 times, with one simulation per hour (each simulation lasts one hour). From every simulation I want to generate a different JSON file. The program must stop only when the user types 'exit'. I tried to make this work:
from apscheduler.schedulers.blocking import BlockingScheduler
import os

def simulation():
    os.system("./program")
    print("tshark -i tun0 -T ek > path/packets_one_hour.json")

scheduler = BlockingScheduler()
scheduler.add_job(simulation, 'interval', hours=1)
scheduler.start()
My problem is that the script runs without ever stopping, so I can't generate 100 different JSON files. I need to quit after every hour so that a new, different JSON file is produced, but I don't know how to do that.
Referring to this example from the apscheduler docs, you could do the following:
import os
import threading
from apscheduler.schedulers.blocking import BlockingScheduler

def json_dump():
    # Put your dump script here
    pass

def simulation():
    os.system("./program")
    t = threading.Timer(3600, json_dump)
    t.start()

if __name__ == '__main__':
    scheduler = BlockingScheduler()
    scheduler.add_job(simulation, 'interval', hours=1, id='id_sim')
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
    try:
        scheduler.start()
    except (KeyboardInterrupt, SystemExit):
        pass
Honestly, I haven't tried it. Let me know if it works.
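If the scheduler should also stop itself after the 100th run, one way to extend the sketch above (the MAX_RUNS constant and the counter are illustrative additions, not part of the original answer):

import os
from apscheduler.schedulers.blocking import BlockingScheduler

MAX_RUNS = 100          # the question asks for exactly 100 simulations
runs = {'count': 0}     # mutable counter shared with the job
scheduler = BlockingScheduler()

def simulation():
    runs['count'] += 1
    os.system("./program")
    if runs['count'] >= MAX_RUNS:
        # wait=False is required when shutting the scheduler down from inside a job
        scheduler.shutdown(wait=False)

scheduler.add_job(simulation, 'interval', hours=1, id='id_sim')
scheduler.start()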

Can we add jobs to a running scheduler in APScheduler?

I started a BackgroundScheduler in one file and ran it. Then, from another file, I accessed the scheduler instance and added a job. My thought was that the instance would add the job and it would run. I am new to these scheduling mechanisms. What I did is the following.
In one file, Main.py:
import time
from apscheduler.schedulers.background import BackgroundScheduler

class Main:
    a = 2
    sched = BackgroundScheduler()
    sched.start()

while True:
    time.sleep(5)
From the other file, Bm.py:
from Main import Main

class Bm(Main):
    def timed_job():
        print 'aa'
    Main.sched.add_job(timed_job, 'interval', seconds=1)
I thought this would work, but it did not. I need to do it this way from a separate file because I am building a task manager that runs jobs, and I need to be able to add or remove jobs whenever needed. So how can we add and remove jobs to/from a running APScheduler?
UPDATE:
This is confusing. I added a function printme to Main.py and called sched.add_job(printme, 'interval', seconds=5); it prints 'me' as expected, but when I run Bm.py it also prints 'me', when it was supposed to print 'aa'.
def printme():
    print 'me'

while True:
    # time.sleep(5)
    sched.add_job(printme, 'interval', seconds=5)
    if (input() is 'q'):
        sched.shutdown()
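A common pattern for sharing one running scheduler between files (a sketch, not from the original thread; the scheduler_instance module name is illustrative) is to create the scheduler in its own module and import that module everywhere. Because Python caches imported modules, every file sees the same instance:

# scheduler_instance.py (illustrative module name)
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()
sched.start()

# Bm.py, or any other file: import the shared instance
from scheduler_instance import sched

def timed_job():
    print('aa')

# jobs can be added to and removed from the running scheduler by id
sched.add_job(timed_job, 'interval', seconds=1, id='aa_job')
sched.remove_job('aa_job')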

Asynchronous Function Call

I would like to learn how to call a function asynchronously in Python 3. I think Tornado can do this. Currently, my code prints nothing on the command line:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

async def count(end):
    """Print message when start equals end."""
    start = 0
    while True:
        if start == end:
            print('start = {0}, end = {1}'.format(start, end))
            break
        start = start + 1

def main():
    # Start counting.
    yield count(1000000000)
    # This should print while count is running.
    print('Count is running. Async!')

if __name__ == '__main__':
    main()
Thanks
To call an async function, you need an event loop to run it. If you have a Tornado app, it provides such a loop, which allows you to make your handlers asynchronous:
from tornado.web import Application, RequestHandler, url
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

async def do_something_asynchronous():
    # e.g. call another service, read from a database, etc.
    return {'something': 'something'}

class YourAsyncHandler(RequestHandler):
    async def get(self):
        payload = await do_something_asynchronous()
        self.write(payload)

application = Application([
    url(r'/your_url', YourAsyncHandler, name='your_url')
])

http_server = HTTPServer(application)
http_server.listen(8000, address='0.0.0.0')
IOLoop.instance().start()
Outside of a Tornado app you can get the event loop from any number of providers, including the built-in asyncio library:
import asyncio

event_loop = asyncio.get_event_loop()
try:
    event_loop.run_until_complete(do_something_asynchronous())
finally:
    event_loop.close()
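Applied to the original count example, a minimal asyncio sketch (requires Python 3.7+ for asyncio.run; the numbers are illustrative). The periodic await is what gives other coroutines a chance to run while counting:

import asyncio

async def count(end):
    start = 0
    while start < end:
        start += 1
        if start % 1000000 == 0:
            await asyncio.sleep(0)  # yield control to the event loop

async def main():
    task = asyncio.create_task(count(10000000))
    print('Count is running. Async!')  # prints while count is still in progress
    await task
    print('Count finished.')

asyncio.run(main())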

APScheduler will not stop

I have Python code, developed for a website, that among other things creates an Excel sheet and then converts it into a JSON file. This code needs to run continuously unless it is killed by the website administrator.
To this end, I am using APScheduler.
The code runs perfectly without APScheduler, but when I add the scheduling code one of two things happens: 1) it runs forever and will not stop even with Ctrl+C, so I have to kill it with Task Manager, or 2) it runs only once and then stops.
Code that doesn't stop:
from apscheduler.scheduler import Scheduler
import logging
import time

logging.basicConfig()
sched = Scheduler()
sched.start()

(...)
code to make excel sheet and json file
(...)

@sched.interval_schedule(seconds=15)
def job():
    excelapi_final()

while True:
    time.sleep(10)
sched.shutdown(wait=False)
Code that runs only once and then stops:
from apscheduler.scheduler import Scheduler
import logging
import time

logging.basicConfig()
sched = Scheduler()

(...)
#create excel sheet and json file
(...)

@sched.interval_schedule(seconds=15)
def job():
    excelapi_final()

sched.start()
while True:
    time.sleep(10)
sched.shutdown(wait=False)
I understand from other questions, a few tutorials, and the documentation that sched.shutdown should allow the code to be killed by Ctrl+C; however, that is not working. Any ideas? Thanks in advance!
You could use standalone mode:
sched = Scheduler(standalone=True)
and then start the scheduler like this:
try:
    sched.start()
except KeyboardInterrupt:
    logging.debug('Got SIGTERM! Terminating...')
Your corrected code should look like this:
from apscheduler.scheduler import Scheduler
import logging
import time

logging.basicConfig()
sched = Scheduler(standalone=True)

(...)
code to make excel sheet and json file
(...)

@sched.interval_schedule(seconds=15)
def job():
    excelapi_final()

try:
    sched.start()
except KeyboardInterrupt:
    logging.debug('Got SIGTERM! Terminating...')
This way the program will stop when Ctrl+C is pressed.
You can gracefully shut it down:
import signal
from apscheduler.scheduler import Scheduler
import logging
import time

logging.basicConfig()
sched = Scheduler()

(...)
#create excel sheet and json file
(...)

@sched.interval_schedule(seconds=15)
def job():
    excelapi_final()

sched.start()

def gracefully_exit(signum, frame):
    print('Stopping...')
    sched.shutdown()

signal.signal(signal.SIGINT, gracefully_exit)
signal.signal(signal.SIGTERM, gracefully_exit)
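As a side note, apscheduler.scheduler.Scheduler and interval_schedule belong to the old APScheduler 2.x API. On APScheduler 3.x, a rough equivalent of the corrected code would be the following sketch (untested against the asker's excelapi_final):

from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', seconds=15)
def job():
    excelapi_final()  # the excel/json function from the question

try:
    sched.start()  # blocks; Ctrl+C raises KeyboardInterrupt
except (KeyboardInterrupt, SystemExit):
    pass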

Running APScheduler in a Python script as a daemon?

I have a job.py with the following code:
import datetime
import logging
import sys
import os
from apscheduler.scheduler import Scheduler
from src.extractors.pExtractor import somejob

def run_job():
    start = datetime.datetime.now()
    logging.debug('Proposal extraction job starting')
    somejob.main()
    end = datetime.datetime.now()
    duration = end - start
    logging.debug('job completed, took ' + str(duration.seconds) + ' seconds')

def main():
    logging.basicConfig(filename='/tmp/pExtractor.log', level=logging.DEBUG,
                        format='%(levelname)s[%(asctime)s]: %(message)s')
    sched = Scheduler()
    sched.start()
    sched.add_interval_job(run_job, minutes=2)

if __name__ == '__main__':
    main()
When I run this on the command prompt, it exits immediately:
INFO[2012-04-03 13:31:02,825]: Started thread pool with 0 core threads and 20 maximum threads
INFO[2012-04-03 13:31:02,827]: Scheduler started
INFO[2012-04-03 13:31:02,827]: Added job "run_job (trigger: cron[minute='2'], next run at: 2012-04-03 14:02:00)" to job store "default"
INFO[2012-04-03 13:31:02,828]: Shutting down thread pool
How can I make this run as a daemon?
Write your main() as below.
import time

def main():
    [... your_code_as_in_your_question ...]
    # Keep the main thread alive; otherwise the scheduler shuts down when main() returns.
    while True:
        time.sleep(1)
Additionally, it shouldn't hurt to consider PEP 3143 (Standard daemon process library).
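PEP 3143 is implemented by the third-party python-daemon package. A minimal sketch of daemonizing the script with it (assuming python-daemon is installed):

import daemon

# run main() detached from the terminal as a well-behaved Unix daemon
with daemon.DaemonContext():
    main()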
