How to continuously integrate (CI) a forever-running critical Python application?

I have written a crypto-trading bot in Python which runs 24/7. I want to continuously integrate new features and bug fixes into this app through the CI software Jenkins. The problem is that I can't just kill the app, check out the latest revision and restart it, because the bot might have active trades still waiting to be sold (currently in a trade) at any given time. Killing the app would make the bot lose track of its orders. I was thinking of dumping my active trade data into a database, killing the app, updating it, restarting it and loading the trade data from the database to restore the bot's awareness of its trades. But I am not sure if this is the best way to do it. Is there a better way?

To sum up:
To save state, consider using Redis.
Listen to system signals to trigger a graceful shutdown, like so:
import signal
...
def handler_stop_signals(*args, **kwargs):
    """Handle system signals.
    Only SIGTERM is expected to trigger this."""
    Log.log(__name__).info('Gracefully shutting down')
    my_process.shutdown()

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, handler_stop_signals)
No need for a separate process, CLI, or API: register a listener and this callback will kick in once the signal is issued.
For more info, see the documentation for the signal module.
Then, when you want to deploy a new version with Jenkins, just do your service stop or kill, then deploy and start.
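To make the Redis part concrete, here is a minimal sketch of saving and restoring active-trade state around a graceful shutdown, assuming the redis-py package; the key name, the trade-data shape and the Bot class are illustrative placeholders, not part of the original answer:

import json
import signal
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def save_state(active_trades):
    # Dump the open trades so a restarted process can pick them up.
    r.set('bot:active_trades', json.dumps(active_trades))

def load_state():
    # Restore open trades on startup; empty list if nothing was saved.
    raw = r.get('bot:active_trades')
    return json.loads(raw) if raw else []

def handler_stop_signals(*args, **kwargs):
    save_state(my_bot.active_trades)
    my_bot.shutdown()

if __name__ == '__main__':
    my_bot = Bot(active_trades=load_state())  # Bot is a placeholder class
    signal.signal(signal.SIGTERM, handler_stop_signals)
    my_bot.run()

On the next deploy, the restarted process calls load_state() and carries on with the same open trades.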

Related

How to use an async timer

I have a Python Server running on Win10, communicating with an Android App (written in JavaScript) as a Client, using Sockets.
While the App is in the foreground everything works OK. Once the App is sent to the background (depending on available memory on the device), communication stops, and the Server hangs waiting for a reply from the Client.
I could find no way to keep the Android App in the foreground, and I do not have the source code.
The only solution I could think of is to have an async timer which, after (say) 60 seconds, signals the Server to terminate the App and re-open the Server, waiting for the App to re-connect.
Here is my PSEUDO code:
OpenSocket()  # waits for Client to connect
while True:
    send(message)        # to Client
    start async.timer    # sleep for 60 seconds
    if timer != finished:
        reset timer
        rec = receive.message()  # do processing
    else:
        send(TerminatingMessage)
        OpenSocket()
I would appreciate any help to code the above !
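For the timer part specifically, here is a hedged sketch of the same idea using a plain blocking socket with settimeout instead of an explicit async timer; the host, port, message contents and TerminatingMessage payload are placeholders:

import socket

HOST, PORT = '0.0.0.0', 5000          # placeholders
TERMINATING_MESSAGE = b'TERMINATE'    # placeholder payload

def open_socket():
    # Wait for the Client (the Android App) to connect.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(60)  # any recv() now gives up after 60 seconds
    return srv, conn

srv, conn = open_socket()
while True:
    conn.sendall(b'message')      # to Client
    try:
        rec = conn.recv(4096)     # waits at most 60 seconds
        # ... do processing on rec ...
    except socket.timeout:
        try:
            conn.sendall(TERMINATING_MESSAGE)
        except OSError:
            pass                  # Client may already be gone
        conn.close()
        srv.close()
        srv, conn = open_socket() # wait for the App to re-connect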
When an app is in the background on Android it's rapidly killed off, within 2 minutes. To prevent that you could use a foreground service. That will increase the length of time it stays alive, but will not help permanently. It also will not help if the phone screen is turned off: when the screen is off you enter a power-saving mode called Doze, where apps only get a brief window for IO every 15 minutes.
Basically, this type of architecture is inappropriate for an Android app. Your best bet is to rearchitect. High-priority push messages can restart your app if it's killed while backgrounded and can override Doze, so long as you don't send too many. Beyond that, we'd need to know how often your app sends and receives data, and how important it is that it be timely, to help further.

How to detect system ACPI G2/S5 Soft Off event with python on linux

I am working on an app using Google's Compute Engine and would like to use preemptible instances.
I need my code to respond to the 30s warning Google gives via an ACPI G2 Soft Off signal, which they send when they are going to take away your VM, as described here: https://cloud.google.com/compute/docs/instances/preemptible.
How do I detect this event in my Python code running on the machine and react to it accordingly? (In my case I need to put the job the VM was working on back on a queue of open jobs so that a different machine can take it.)
I am not answering the question directly, but I think that your actual intent is different:
The G2 power button event is generated both by preemption of a VM and by the gcloud compute instances stop command (or the corresponding API, which it calls);
I am assuming that you want to react specially only on instance preemption.
Avoid a common misunderstanding
GCE does not send a "30s termination warning" with the power button event. It just sends the normal, honest power button soft-off event that immediately initiates shutdown of the system.
The "warning" part that comes with it is simple: “Here is your power button event, shutdown the OS ASAP, because you have 30s before we pull the plug off the wall socket. You've been warned!”
You have two system services that you can combine in different ways to get the desired behavior.
1. Use the fact that the system is shutting down upon ACPI G2
The most kosher (and, AFAIK, the only supported) way of handling the ACPI power button event is to let the system handle it, and execute what you want in the instance shutdown script. On a systemd-managed machine, the default GCP shutdown script is simply invoked by a Type=oneshot service's ExecStop= command (see systemd.service(5)). The script is run relatively late in the shutdown sequence.
If you must ensure that the shutdown script is run after (or before) one of your services is sent a signal to terminate, you can modify the service dependencies. Things to keep in mind:
After and Before are reversed on shutdown: if X is started after Y, then it's stopped before Y.
The After dependency ensures that the service in the sequence is told to terminate before the shutdown script is run. It does not ensure that the service has already terminated.
The shutdown script is run when the google-shutdown-scripts.service is stopped as part of system shutdown.
With all that in mind, you can do sudo systemctl edit google-shutdown-scripts.service. This will create an empty configuration override file and open your $EDITOR, where you can put your After and Before dependencies, for example,
[Unit]
# Make sure that shutdown script is run (synchronously) *before* mysvc1.service is stopped.
After=mysvc1.service
# Make sure that mysvc2.service is sent a command to stop before the shutdown script is run
Before=mysvc2.service
You may specify as many After or Before clauses as you want, 0 or more of each. Read systemd.unit(5) for more information.
2. Use GCP metadata
There is an instance metadatum, v1/instance/preempted. If the instance is preempted, its value is TRUE; otherwise it is FALSE.
GCP has thorough documentation on working with instance metadata. In short, there are two ways you can use this (or any other) metadata value:
Query its value at any time, e.g. in the shutdown script. The curl(1) equivalent:
curl -sfH 'Metadata-Flavor: Google' \
'http://169.254.169.254/computeMetadata/v1/instance/preempted'
Run an HTTP request that will complete (200) when the metadatum changes. The only change that can ever happen to it is from FALSE to TRUE, as preemption is irreversible.
curl -sfH 'Metadata-Flavor: Google' \
'http://169.254.169.254/computeMetadata/v1/instance/preempted?wait_for_change=true'
Caveat: the metadata server may return a 503 response if it is temporarily unavailable (this is very rare, but happens), so certain retry logic is required. This is especially true for the long-running second form (with ?wait_for_change=true), as the pending request may return at any time with code 503. Your code should be ready to handle this and restart the query. curl does not return the HTTP error code directly, but if you are scripting it you can use the fact that x=$(curl ...) yields an empty string on failure; your criterion for positive detection of preemption is then [[ $x == TRUE ]].
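For completeness, here is a small Python sketch of the blocking wait_for_change query with retry on 503, using only the standard library; the one-second retry delay is arbitrary:

import time
import urllib.request
import urllib.error

METADATA_URL = ('http://169.254.169.254/computeMetadata/'
                'v1/instance/preempted?wait_for_change=true')

def wait_for_preemption():
    # Block until the preempted metadatum flips to TRUE, retrying on 503.
    while True:
        req = urllib.request.Request(METADATA_URL,
                                     headers={'Metadata-Flavor': 'Google'})
        try:
            with urllib.request.urlopen(req) as resp:
                if resp.read().decode().strip() == 'TRUE':
                    return
        except urllib.error.HTTPError as e:
            if e.code != 503:
                raise
        except urllib.error.URLError:
            pass  # transient network hiccup; retry
        time.sleep(1)  # brief pause before re-issuing the request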
Summary
If you want to detect that the VM is shutting down for any reason, use Google-provided shutdown script.
If you also need to distinguish whether the VM was in fact preempted, as opposed to stopped with gcloud compute instances stop <vmname> (which also sends the power button event!), query the preempted metadata in the shutdown script.
Run a pending HTTP request for the metadata change, and react to it accordingly. This will complete successfully only when the VM is preempted (but may complete with an error at any time too).
If the daemon that you run is your own, you can also directly query the preempted metadata from the code path which handles the termination signal, if you need to distinguish between different shutdown reasons.
It is not impossible that the real decision point is whether you have an "active job" that you want to return to the "queue", or not: if your service is requested to stop while holding an active job, just return it, regardless of the reason why you are being stopped. But I cannot comment on this, not knowing your actual design.
I think the simplest way to handle GCP preemption is using SIGTERM.
The SIGTERM signal is a generic signal used to cause program
termination. Unlike SIGKILL, this signal can be blocked, handled, and
ignored. It is the normal way to politely ask a program to terminate. https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
This does depend on shutdown scripts, which are run on a "best effort" basis. In practice, shutdown scripts are very reliable for short scripts.
In your shutdown script:
echo "Running shutdown script"
preempted = curl "http://metadata.google.internal/computeMetadata/v1/instance/preempted" -H "Metadata-Flavor: Google"
if $preempted; then
PID="$(pgrep -o "python")"
echo "Send SIGTERM to python"
kill "$PID"
sleep infinity
fi
echo "Shutting down"
In main.py:
import signal
import os

def sigterm_handler(sig, frame):
    print("Got SIGTERM")
    os.environ["IS_PREEMPTED"] = "True"  # environment variable values must be strings
    # Call cleanup functions

signal.signal(signal.SIGTERM, sigterm_handler)

if __name__ == "__main__":
    print("Main")

Web2Py - configure a scheduler

I have an application written in Web2Py that contains some modules. I need to call some functions out of a module on a periodic basis, say once daily. I have been trying to get a scheduler working for that purpose but am not sure how to get it working properly. I have referred to this and this to get started.
I have a scheduler.py model file in the models directory, which contains code like this:
from gluon.scheduler import Scheduler
from Module1 import Module1

def daily_task():
    module1 = Module1()
    module1.action1(arg1, arg2, arg3)

daily_task_scheduler = Scheduler(db, tasks=dict(my_daily_task=daily_task))
In default.py I have following code for the scheduler:
def daily_periodic_task():
    daily_task_scheduler.queue_task('daily_running_task', repeats=0, period=60)
[For testing I am running it with a 60-second period; for daily I plan to use period=86400.]
In my Module1 class (in Module1.py), I have this kind of code:
def action1(self, arg1, arg2, arg3):
    for row in db().select(db.table1.ALL):
        row.processed = 'processed'
        row.update_record()
One of the issues I am facing is that I don't clearly understand how to make this scheduler automatically handle the execution of action1 on a daily basis.
When I launch my application using a command like python web2py.py -K my_app, it shows this in the console:
web2py Web Framework
Created by Massimo Di Pierro, Copyright 2007-2015
Version 2.11.2-stable+timestamp.2015.05.30.16.33.24
Database drivers available: sqlite3, imaplib, pyodbc, pymysql, pg8000
starting single-scheduler for "my_app"...
However, when I see the browser at:
http://127.0.0.1:8000/my_app/default/daily_periodic_task
I just see "None" as text displayed on the screen and I don't see any changes produced by the scheduled task in my database table.
While when I see the browser at:
http://127.0.0.1:8000/my_app/default/index
I get an error stating This web page is not available, basically indicating my application never got started.
When I start my application normally using python web2py.py my application loads fine but I don't see any changes produced by the scheduled task in my database table.
I am unable to figure out what I am doing wrong here and how to properly use the scheduler with Web2Py. Basically, I need to know how I can start my application normally along with the scheduled tasks properly running in the background.
Any help in this regard would be highly appreciated.
Running python web2py.py starts the built-in web server, enabling web2py to respond to HTTP requests (i.e., serving web pages to a browser). This has nothing to do with the scheduler and will not result in any scheduled tasks being run.
To run scheduled tasks, you must start one or more background workers via:
python web2py.py -K myapp
The above does not start the built-in web server and therefore does not enable you to visit web pages. It simply starts a worker process that will be available to execute scheduled tasks.
Also, note that the above does not actually result in any tasks being scheduled. To schedule a task, you must insert a record in the db.scheduler_task table, which you can do via any of the usual methods of inserting records (including using appadmin) or programmatically via the scheduler.queue_task method (which is what you use in your daily_periodic_task action).
Note, you can simultaneously start the built-in web server and a scheduler worker process via:
python web2py.py -a yourpassword -K myapp -X
So, to schedule a daily task and have it actually executed, you need to (a) start a scheduler worker and (b) schedule the task. You can schedule the task by visiting your daily_periodic_task action, but note that you only need to visit that action once, as once the task has been scheduled, it remains in effect indefinitely (given that you have set repeats=0).
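As a sketch of what that daily_periodic_task controller action could look like for a real daily run (the values are only illustrative), note that the name passed to queue_task has to match a key in the tasks dict given to the Scheduler, which, given Scheduler(db, tasks=dict(my_daily_task=daily_task)) in the models file above, should be my_daily_task rather than daily_running_task:

def daily_periodic_task():
    # repeats=0 means "repeat forever"; period=86400 runs the task once a day.
    # The task name must match a key in the tasks dict given to the Scheduler,
    # i.e. 'my_daily_task' here, not 'daily_running_task'.
    result = daily_task_scheduler.queue_task('my_daily_task',
                                             repeats=0, period=86400)
    return str(result)  # show the queued task's id/uuid instead of "None"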
If the task does not appear to be working, it is possible there is something wrong with the task itself that is resulting in an error.

What is the proper way to handle periodic housekeeping tasks in Python/Pyramid/CherryPy?

I have a python web-app that uses Pyramid/CherryPy for the webserver.
It has a few periodic housekeeping tasks that need to be run - Clearing out stale sessions, freeing their resources, etc...
What is the proper way to manage this? I can fairly easily just run an additional "housekeeping" thread (and use a separate scheduler, like APScheduler), but having a separate thread reach into the running server thread(s) just seems like a really clumsy solution. CherryPy is already running the server in a (multi-threaded) event loop, so it seems like it should be possible to somehow schedule periodic events through that.
I was led to this answer by @fumanchu's answer, but I wound up using an instance of the cherrypy.process.plugins.BackgroundTask plugin:
import cherrypy

def doHousekeeping():
    print("Housekeeper!")

def runServer():
    cherrypy.tree.graft(wsgi_server.app, "/")

    # Unsubscribe the default server
    cherrypy.server.unsubscribe()

    # Instantiate a new server object
    server = cherrypy._cpserver.Server()

    # Configure the server object
    server.socket_host = "0.0.0.0"
    server.socket_port = 8080
    server.thread_pool = 30

    # Subscribe this server
    server.subscribe()

    cherrypy.engine.housekeeper = cherrypy.process.plugins.BackgroundTask(2, doHousekeeping)
    cherrypy.engine.housekeeper.start()

    # Start the server engine (Option 1 *and* 2)
    cherrypy.engine.start()
    cherrypy.engine.block()
Results in doHousekeeping() being called at 2 second intervals within the CherryPy event loop.
It also doesn't involve doing something as silly as dragging in the entire OS just to call a task periodically.
Have a look at the "main" channel at http://cherrypy.readthedocs.org/en/latest/progguide/extending/customplugins.html
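A closely related option is CherryPy's stock Monitor plugin, which runs a callback on the engine bus at a fixed frequency; a minimal sketch (the 60-second frequency, the plugin name and the clear_stale_sessions body are only examples):

import cherrypy
from cherrypy.process.plugins import Monitor

def clear_stale_sessions():
    # Housekeeping work goes here.
    print("Housekeeper!")

# Run clear_stale_sessions every 60 seconds on the CherryPy engine bus.
Monitor(cherrypy.engine, clear_stale_sessions,
        frequency=60, name="Housekeeper").subscribe()

cherrypy.engine.start()
cherrypy.engine.block()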
Do yourself a favour and just use cron. No need to roll your own scheduling software.

Detecting computer/program shutdown in Python?

I have a Python script that runs in a loop regularly making adjustments to my lighting system. When I shut down my computer, I'd like my script to detect that, and turn off the lights altogether.
How do I detect my computer beginning to shut down in Python?
Or, assuming Windows sends Python a "time to shut down" notice, how do I intercept that to kill my lights and exit the loop?
This is the wrong way to go about performing an action at system shutdown time. The job of the shutdown process is to stop running processes and then switch off power; if you try to detect this happening from within your program and react by getting some last action in, it's a race between the OS and your program over who gets to go first. More likely than not your program will have been stopped before it manages to perform the necessary action.
Instead, you should hook into the normal protocol for doing things at shutdown. This tells the shutdown utility to send an explicit signal to your program and wait for it to be acknowledged, which gives you enough time (within reason) to do what you have to do. How exactly to register to be notified varies with the OS, so this is more of an OS-specific question than a Python question.
You should react to the WM_ENDSESSION message.
This message is sent when the user logs off or the computer gets shut down.
If you want to react to Sleep/Hibernate as well, you'll need to handle WM_POWERBROADCAST with PBT_APMSUSPEND.
But I don't know how to do that in Python. I guess it depends on your windowing framework, since you need to have a window and a message loop to receive messages.
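For a plain console script with no window or message loop, one common workaround is to register a console control handler via ctypes and kernel32's SetConsoleCtrlHandler. A hedged sketch follows; which control events (logoff vs. shutdown) a console process actually receives depends on how it is run (interactively or as a service), and turn_off_lights is a placeholder for the asker's lighting code:

import ctypes
import time

def turn_off_lights():
    # Placeholder for the asker's lighting cleanup.
    print("Lights off")

# BOOL WINAPI HandlerRoutine(DWORD dwCtrlType)
HANDLER_ROUTINE = ctypes.WINFUNCTYPE(ctypes.c_uint, ctypes.c_uint)

CTRL_LOGOFF_EVENT = 5     # user is logging off
CTRL_SHUTDOWN_EVENT = 6   # system is shutting down (delivered to services)

def _on_console_event(ctrl_type):
    if ctrl_type in (CTRL_LOGOFF_EVENT, CTRL_SHUTDOWN_EVENT):
        turn_off_lights()
        return 1  # TRUE: event handled
    return 0      # FALSE: let the next handler run

# Keep a module-level reference so the callback is not garbage-collected.
_handler = HANDLER_ROUTINE(_on_console_event)
ctypes.windll.kernel32.SetConsoleCtrlHandler(_handler, True)

while True:
    time.sleep(1)  # the lighting-adjustment loop would go here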
