How to check if a remote process has finished - long polling? - Python

I am using Python 2.7.
I am using the paramiko module to trigger a remote action. The process is long-running, typically 15-20 minutes, and at the end it generates a file locally.
I would prefer not to block a thread until the process is finished. Is there a way to trigger another action once the file is generated?
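None of the related answers below covers paramiko directly, but one possible sketch is to run the command in a small watcher thread that polls the channel's exit status and then fires a callback. The names run_remote_job and on_done are hypothetical, and the watcher thread spends nearly all of its time asleep rather than blocked on I/O:

import threading
import time

import paramiko


def run_remote_job(host, user, command, on_done, poll_interval=30):
    # Start `command` over SSH and call on_done(exit_status) when it finishes.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    stdin, stdout, stderr = client.exec_command(command)
    channel = stdout.channel

    def watch():
        # Poll instead of blocking on recv_exit_status().
        while not channel.exit_status_ready():
            time.sleep(poll_interval)
        status = channel.recv_exit_status()
        client.close()
        on_done(status)  # trigger the follow-up action on the generated file

    watcher = threading.Thread(target=watch)
    watcher.daemon = True  # don't keep the interpreter alive just for this
    watcher.start()
    return watcher

Passing a callback that checks for the expected output file (or simply reacts to the exit status) keeps the main thread free for other work.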

Related

When using the python script as CGI, Subprocess Popen not able to run background process

I am trying to launch another Python script in the background from a CGI Python script, without waiting for it to complete. When I run the same code from a Linux shell it works: the second script runs in the background. But when I run it through CGI, the front end keeps loading until the other script completes instead of returning immediately.
In other words, it works from the Linux shell; only when I moved to CGI does the parent script appear to wait for the other process to complete.
python1.py:
import subprocess
import sys

# senddata is defined earlier in the CGI script
command = [sys.executable, 'python2.py', str(senddata)]
proc = subprocess.Popen(command, shell=False, stdin=None, stdout=None, stderr=None, close_fds=True)
print("Content-type: text/html\r\n\r\n")
print("The script is running in background! Expect an email in 10 minutes.")
python2.py:
This script takes 2-5 minutes to execute and then sends an email to the group.
The expected output is to have this message:
The script is running in background! Expect an email in 10 minutes.
And run python2.py in background without waiting for it to complete.
The webserver will keep the client response active (causing the client to stay in a "loading" state) until all of the output from the CGI program has been collected and forwarded to the client. The way the webserver knows that all output has been collected is that it sees the standard output stream of the CGI process being closed. That normally happens when the CGI process exits.
The reason why you're having this problem is that when subprocess.Popen is told to execute a program with stdout=None, the spawned program will share its parent's standard output stream. Here that means that your background program shares the CGI process's standard output. That means that from the webserver's point of view that stream remains open until both of the processes exit.
To fix this, launch the background process with stdout=subprocess.PIPE. If the background process misbehaves when its stdout gets closed as the CGI process dies, try launching it with stdout=open('/dev/null', 'w') instead (note the write mode: /dev/null must be opened for writing when it is used as stdout).
The background process's stdin and stderr will have the same issue; they will be shared with the CGI process. AFAIK sharing those will not cause trouble as long as the background process does not attempt to read from its standard input, but if it does do that (or if you just want to be cautious) you can treat them the same way as you treat stdout and either set them to subprocess.PIPE or associate them with /dev/null.
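Putting that advice together, a minimal sketch of the adjusted python1.py might look like this (os.devnull is used instead of a literal '/dev/null'; senddata and python2.py come from the question):

import os
import subprocess
import sys

# Detach the child from the CGI script's standard streams so the webserver
# sees this script's stdout close as soon as the script itself exits.
devnull = open(os.devnull, 'r+')
command = [sys.executable, 'python2.py', str(senddata)]
proc = subprocess.Popen(command, shell=False,
                        stdin=devnull, stdout=devnull, stderr=devnull,
                        close_fds=True)

print("Content-type: text/html\r\n\r\n")
print("The script is running in background! Expect an email in 10 minutes.")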
More detail is at https://docs.python.org/2/library/subprocess.html#popen-constructor

How to use systemd to run a Python script forever and restart it if it dies halfway, on a Raspberry Pi 3?

I have read that upstart is obsolete in favor of systemd on the Raspberry Pi 3.
My question is: how do I run a Python script
a) forever, unless I manually kill it, and
b) so that it restarts automatically if it dies due to an exception or simply stops running, without any human intervention?
My Python script itself already uses modules like schedule and while True loops to keep running certain jobs every few seconds.
I am just worried that it will die/stop (which it did) after some indeterminate amount of time.
If it stops, all I want is for it to restart.
Currently, I run the script by double-clicking it to open it in Python IDLE (2.7) and then choosing Run Module.
What is the best way to run a Python script continuously, non-stop, and have it restart automatically when it dies or stops for whatever reason?
See this picture where it suddenly stops by itself at around 5 a.m.
I think you should take a look at Supervisor, the Python-based process manager. Supervisor will manage the restart in the event of a crash or even a machine restart.
http://supervisord.org/
An easier method might be to handle the failure within your script. If it is failing due to some exception, wrap the offending code in a try/except block and handle it gracefully within the script.
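As a rough sketch of that idea, the existing while True loop could be wrapped so that an unexpected exception is logged and the loop simply carries on (schedule comes from the question; the log file name is illustrative):

import logging
import time

import schedule  # already used by the question's script

logging.basicConfig(filename='myscript.log', level=logging.INFO)

while True:
    try:
        schedule.run_pending()
        time.sleep(1)
    except Exception:
        # Log the failure and keep going instead of letting the script die.
        logging.exception("Job raised an exception; continuing")
        time.sleep(5)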
That said, this post has the information you need to use systemd to execute a bash script:
https://unix.stackexchange.com/questions/47695/how-to-write-startup-script-for-systemd
Within that bash script, you can run your Python script, catch its return value (a failure, in your case), and react appropriately.
Something like this:
#!/bin/bash
python ~/path/to/my/script/myScript.py
if [ $? -ne 0 ]; then
    # handle the failure here, e.g. log it and restart the script
    echo "myScript.py exited with an error" >&2
fi
If that won't work either, you can create a script whose sole job is to call the other script and handle its failures, and use systemd to call that script.
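For the systemd side, a minimal unit file might look like the sketch below; the myscript.service name, the paths, and the pi user are assumptions, and Restart=always is what tells systemd to bring the script back up if it dies for any reason:

# /etc/systemd/system/myscript.service  (hypothetical name and paths)
[Unit]
Description=Keep myScript.py running
After=network.target

[Service]
ExecStart=/usr/bin/python /home/pi/path/to/my/script/myScript.py
Restart=always
RestartSec=5
User=pi

[Install]
WantedBy=multi-user.target

After saving the file, sudo systemctl daemon-reload, sudo systemctl enable myscript.service and sudo systemctl start myscript.service would register and start it, and systemctl status myscript.service shows whether it is running.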

Is the Python subprocess blocking the I/O?

I am using CherryPy as a web server. After a request comes in, it may have to run a very long process. I don't want the web server tied up handling that process, so I moved the work into a separate script and call that script via subprocess. But it seems that the subprocess call waits for the process to finish. Is there something I can do so that, once the subprocess has been launched, it executes in the background on its own? Thanks.
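No answer is quoted here, but the CGI discussion above suggests a sketch: start the script with subprocess.Popen, never call wait() or communicate() on it, and point its standard streams at /dev/null so nothing ties it back to the CherryPy worker. The long_job.py name is hypothetical:

import os
import subprocess
import sys

def launch_long_job(args):
    # Start long_job.py and return immediately; do not wait() on it.
    devnull = open(os.devnull, 'r+')
    return subprocess.Popen(
        [sys.executable, 'long_job.py'] + list(args),
        stdin=devnull, stdout=devnull, stderr=devnull,
        close_fds=True)

# Inside a CherryPy handler you would just call launch_long_job(...) and
# return a response right away; the child keeps running on its own.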

How do I create multiple processes that will pull messages from RabbitMQ?

Say I have a Python script that pulls messages off a queue and processes them:
process_queue_emails.py
Now I want to somehow run multiple instances of this file at once; how would I do that? I need them to run in the background, and I'm guessing on a separate port? (This is on Ubuntu.)
So I want to write messages to the queue in my web application, and then have these worker processes (the .py file above) receive and respond to the messages in parallel, i.e. I need to run each of them in its own process.
The zdaemon module can be used to write daemonized processes. Or you can look into Python's multiprocessing module. An alternative is using supervisord to start arbitrary scripts or programs as daemons.
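As one illustration of the multiprocessing route, a sketch like the following starts several workers, each consuming the queue in its own process. consume_forever is a hypothetical stand-in for whatever process_queue_emails.py actually does:

import multiprocessing

NUM_WORKERS = 4  # number of parallel consumers; adjust as needed

def consume_forever(worker_id):
    # Hypothetical stand-in for the body of process_queue_emails.py:
    # connect to RabbitMQ here and loop over incoming messages forever.
    pass

if __name__ == '__main__':
    workers = [multiprocessing.Process(target=consume_forever, args=(i,))
               for i in range(NUM_WORKERS)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()  # keeps this launcher alive while the workers run

Each worker holds its own connection to the broker, so the messages are handled in parallel; no extra ports are needed, since these are just separate OS processes talking to the same RabbitMQ port.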

Subversion post-commit hook

I have created a Subversion post-commit hook to send out an email every time a commit is made. I'm calling a Python script from the post-commit file in /var/svn/repos/hooks:
REPOS="$1"
REV="$2"
~/svnnotify.py $REV
But the problem is that the svn commit command takes longer to terminate because it waits for the Python script to finish first. Is there any way around this?
Thank you.
Try adding an ampersand (&) after the line that calls your script to put it in the background and return immediately.
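Applied to the hook above, that would look roughly like this; redirecting the script's output is a common extra precaution so nothing keeps the hook's streams open:

#!/bin/sh
REPOS="$1"
REV="$2"
# Run the notifier in the background so the hook (and svn commit) returns immediately.
~/svnnotify.py "$REV" > /dev/null 2>&1 &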
Call a batch file, and in that batch file execute the Python script in the background by adding an ampersand (&) at the end of the command.
Maybe put the update in a simple queue that gets scooped up by a script invoked from cron, which sends a message if something is sitting in the queue.
The queue could be a simple file in /tmp, an SQLite file, or a MySQL table.
If it's taking noticeably long to send the e-mail, maybe there's something up with the code in the notify script. It shouldn't take that long to put an e-mail in the local mail queue.
