Checking if cherrypy is active - python

I have a process that starts up CherryPy, runs a task, and then needs to check whether it is still running or has completed its task. I am running Python 2.6.7:
while True:
    if t1.isAlive():
        cherrypy.engine.start()
    else:
        cherrypy.engine.stop()
        print "server down"

There are several ways to do it; it depends on what you really want. Do you want to check whether a process is still alive and kicking, or do you need some feedback information?
Or do you just have to check for an output (file/log/db)? Give us some more information; some code examples would clarify your problem.
I suggest you take a look at the PIDFile plugin. With this plugin you can even do the check from another process, independent of which process started the CherryPy instance.
Just after the start you initialize the PIDFile plugin, and anywhere outside you check whether the file exists. The only caveat: it can be fooled by zombie processes or if the PID file doesn't get erased.
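A rough sketch of that approach (the path is made up; adjust it to your setup, and note this assumes the standard CherryPy plugin API):

# in the process that starts CherryPy
import cherrypy
from cherrypy.process.plugins import PIDFile

PIDFile(cherrypy.engine, '/tmp/cherrypy_task.pid').subscribe()
cherrypy.engine.start()

# in the process that does the checking
import os

if os.path.isfile('/tmp/cherrypy_task.pid'):
    print "server appears to be up"
else:
    print "server down"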

Related

Guaranteeing calling to destruction on process termination

After reading A LOT of material on the subject I still couldn't find any actual solution to my problem (there might not be any).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies and more).
Initializing the connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):
# start of code...
with programmable_load(args) as programmable_load_instance:
    programmable_load_instance.do_something()
# rest of code...
So I went for a different solution:
class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run in debug mode and quit debugging.
In these cases the process terminates without running any destructors.
I understand that the OS will reclaim all the process's memory at this point, but is there any way to clean up in an organized manner?
And if not, is there a way to make the 'quit debugging' action pass through a certain set of functions? Does the Python process know it got a 'quit debugging' event or does it treat it as a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the
process are terminated immediately with no chance to run additional
code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on MS Windows.
As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not when it is just killed (as is likely the case when stopping the debugging session). Similarly, if you force your computer to turn off (e.g. by long-pressing the power button or removing power), it won't be called either.
There is no 'solution' to that, for obvious reasons: your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing a process is to definitely kill it now; if it first called your clean-up code, that could delay it, which defeats the purpose. That is why there are signals to ask your process to stop. This is not Python specific; the same concept applies across operating systems.
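A minimal sketch of the atexit approach (close_all_connections and open_drivers are hypothetical names standing in for your own driver bookkeeping):

import atexit

open_drivers = []  # hypothetical registry of live driver instances

def close_all_connections():
    # walk the registry and close every driver that is still open
    for driver in open_drivers:
        driver.close_connection()

# called on normal interpreter shutdown, but NOT on a hard kill
atexit.register(close_all_connections)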
Bonus (design suggestion, not a solution): I would argue that you can still make use of a context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope: move the context further up, to the application level. Then it is clear where the boundary is and you don't need any magic (you are probably also aware of contextlib.contextmanager, which makes that a breeze).
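For illustration, a sketch of such an application-level context manager built with contextlib.contextmanager (programmable_load is the class from the question; everything else is made up for the example):

from contextlib import contextmanager

@contextmanager
def application():
    load = programmable_load()
    try:
        yield load
    finally:
        # runs on normal exit and on exceptions inside the with-block
        load.close_connection()

# the whole program lives inside one with-block
with application() as load:
    load.do_something()
    # ... rest of the program ...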
I haven't tested this properly as I don't have Wing IDE installed here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import sys
import win32api

if __name__ == "__main__":
    def callback(sig, func=None):
        print("Exit handler called!")

    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)

    print("Press Enter to quit")
    input()
    print("Bye!")
It will be able to handle CTRL+C and CTRL+BREAK signals.

python restart after unexpected exit with `while true` loop

I wrote a Python script with a while True loop to fetch email attachments, but sometimes I found that it would exit unexpectedly on the server.
I ran it locally for more than 4 hours with no problem, so I can confirm that the code is correct.
So is there some kind of mechanism to restart the Python script when it exits unexpectedly, such as process monitoring? I am a novice in Linux.
Remark: I run this Python script like python attachment.py & in a shell script.
While @triplee's comment will definitely do the trick, I would worry that there is something going on that you would be better off understanding, namely why the script is failing.
Without further details, it's difficult to speculate about what might be happening. As a first debugging effort, you might try wrapping the entire body of the while True loop in a try ... except ... block, and use the except block to log the error and/or the program state. That is,
while True:
    try:
        # ... do some stuff ...
        pass
    except Exception:
        # ... log the exception, print to screen, record the values of key variables, etc. ...
        continue
This would allow you to understand what is happening during the failure, and to write more robust code that handles that event.
I ran it locally for more than 4 hours with no problem, so I can confirm that the code is correct.
You would be surprised by the number of bugs that only reveal themselves after months, if not years, of correct operation... What you have confirmed is that the code does not break immediately, but unless you have tested it against all possible corner cases in the input (including badly formatted ones), you cannot confirm that it will never break.
That is the reason why a program that is intended to run unattended should be carefully designed to always (try to*) leave a trace before exiting. try: except: and the logging module are your best friends here.
* Of course, in case of a system crash or a power outage there is nothing you can do at the user-program level...
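A rough sketch of the try/except-plus-logging idea (process_one_batch is a hypothetical stand-in for the attachment-fetching work):

import logging
import time

logging.basicConfig(filename='attachment.log', level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

while True:
    try:
        process_one_batch()  # hypothetical: fetch and save the attachments
    except Exception:
        # logging.exception records the full traceback in the log file
        logging.exception("attachment loop failed, retrying in 60s")
        time.sleep(60)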
You can try using Supervisor to manage your process. Supervisor lets you configure the behaviour on process exit and will try to restart the process for you.
Here is an example configuration (on Ubuntu; see the official documentation for details):
[program:nodehook]
command=/usr/bin/node /srv/http.js
directory=/srv
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/webhook/nodehook.err.log
stdout_logfile=/var/log/webhook/nodehook.out.log
user=www-data
environment=SECRET_PASSPHRASE='this is secret',SECRET_TWO='another secret'

running 2 python scripts without them affecting each other

I have 2 Python scripts I'm trying to run side by side. However, each of them has to open, close and reopen independently of the other. Also, one of the scripts runs inside a shell script.
Flaskserver.py & ./pyinit.sh
Flaskserver.py is just a Flask server that needs to be restarted every now and again to load a new page (I can't define all the pages, as the HTML is interchangeable). pyinit.sh runs as xinit ./pyinit.sh (it's selenium-webdriver Python code).
So when the Flask server changes and restarts, ./pyinit.sh needs to wait about 20 seconds and then restart as well.
Either one of these can produce errors, so I need to be able to check whether the Flask server has an error before restarting ./pyinit.sh; if ./pyinit.sh errors, I need to set the Flask server to a default value and then relaunch both of them.
I know a little about subprocess, but I'm unsure how it can deal with errors and stop/start code.
Rather than using subprocess, I would recommend you run each task in a separate thread using the threading library.
Multithreading will not solve the problem if global variables collide; running them as separate scripts might avoid that, but then you may collide on something else, such as a log file.
Now, if you keep both tasks running from a single process that takes care of keeping them separated and assigning different global variables where necessary, you should be able to keep better control. Using things like join and Lock from the threading library will also ensure that they don't collide, and it should be easy to put one thread to sleep while the other is running (as per waiting 20 seconds).
You can keep a thread list as a global variable, as well as your lock. I have done this successfully with CherryPy's server, for example. For more details about multithreading, look into the question I linked above; it's very well explained.
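A minimal sketch of that structure (the run_flask and run_pyinit bodies are placeholders for launching Flaskserver.py and ./pyinit.sh; the supervising script, thread list and lock are the parts being illustrated):

import threading
import time

log_lock = threading.Lock()   # shared lock, e.g. for a common log file
threads = []                  # global thread list

def run_flask():
    with log_lock:
        print("starting Flask server")   # placeholder: launch/relaunch Flaskserver.py here

def run_pyinit():
    time.sleep(20)                       # give the Flask server time to come back up
    with log_lock:
        print("starting pyinit")         # placeholder: launch ./pyinit.sh here

for target in (run_flask, run_pyinit):
    t = threading.Thread(target=target)
    t.start()
    threads.append(t)

for t in threads:
    t.join()                             # wait for both workers to finish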

Python Daemon: checking to have one daemon run at all times

myalert.py
from daemon import Daemon
import os, time, sys

class alertDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(1)

if __name__ == "__main__":
    alert_pid = '/tmp/ex.pid'

    # if the pid file doesn't exist, run
    if os.path.isfile(alert_pid):  # is this check enough?
        sys.exit(0)

    daemon = alertDaemon(alert_pid)
    daemon.start()
Given that no other programs or users will create the pid file:
1) Is there a case where the pid file does not exist, yet the daemon process is still running?
2) Is there a case where the pid file exists, yet the daemon isn't running?
Because if the answer is yes to at least one of the questions above, then simply checking for the existence of the pid file isn't enough if my goal is to have one daemon running at all times.
Q: If I have to check for the process, I am hoping to avoid something like a system call to ps -ef and grepping for the name of the script. Is there a standard way of doing this?
Note: the script, myalert.py, will be a cronjob.
The python-daemon library, which is the reference implementation for PEP 3143 ("Standard daemon process library"), handles this by using a file lock (via the lockfile library) on the pid file you pass to the DaemonContext object. The underlying OS guarantees that the file lock will be released when the daemon process exits, even if it exits uncleanly. Here's a simple usage example:
import daemon
from daemon.pidfile import PIDLockFile

context = daemon.DaemonContext(
    pidfile=PIDLockFile('/var/run/spam.pid'),
)

with context:
    main()
So, if a new instance starts up, it doesn't have to determine whether the process that created the existing pid file is still running via the pid itself: if it can acquire the file lock, then no other instance is running (since that instance would be holding the lock); if it can't acquire the lock, then another daemon instance must be running.
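A hedged sketch of what that start-up check could look like, assuming the lockfile-style API exposed by PIDLockFile (is_locked() / read_pid()):

import sys
import daemon
from daemon.pidfile import PIDLockFile

pidfile = PIDLockFile('/var/run/spam.pid')

# if the lock is held, another instance already owns the pid file
if pidfile.is_locked():
    print("daemon already running with pid %s" % pidfile.read_pid())
    sys.exit(0)

with daemon.DaemonContext(pidfile=pidfile):
    main()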
The only way you'd run into trouble is if someone came along and manually deleted the pid file while the daemon was running. But I don't think you need to worry about someone deliberately breaking things in that way.
Ideally, python-daemon would be part of the standard library, as was the original goal of PEP 3143. Unfortunately, the PEP got deferred, essentially because there was no one willing to actually do the remaining work needed to get it added to the standard library:
Further exploration of the concepts covered in this PEP has been
deferred for lack of a current champion interested in promoting the
goals of the PEP and collecting and incorporating feedback, and with
sufficient available time to do so effectively.
Several ways in which I have seen this implemented:
Check whether the pidfile exists -> if so, exit with an error message like "pidfile exists -- remove it if you're sure no process is running".
Check whether the pidfile exists -> if so, check whether a process with that pid exists -> if that's the case, die, telling the user "process is running...". The risk of a conflicting (reused for another process) PID number is so small that it is simply ignored; tell the user how to make the program start again in case an error occurred.
Hint: to check for a process's existence, you can check for the /proc/<pid> directory.
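For example (a sketch that only works on Linux-style systems with /proc; reading the pid back from the pidfile is assumed):

import os

def pid_running(pidfile_path):
    try:
        with open(pidfile_path) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False
    # on Linux, /proc/<pid> exists only while the process is alive
    return os.path.isdir("/proc/%d" % pid)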
Also make sure you do everything possible to remove the pidfile when your script exits, e.g.:
Wrap your code in a try .. finally:
# check for and create the pidfile here
try:
    pass  # your application logic
finally:
    pass  # remove the pidfile
You can even install signal handlers (via the signal module) to remove pidfile upon receiving signals that would not normally raise an exception, but instead exit directly.
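A sketch of that (remove_pidfile is a hypothetical cleanup helper):

import signal
import sys

def handle_term(signum, frame):
    remove_pidfile()   # hypothetical cleanup helper
    sys.exit(0)

# by default SIGTERM terminates the process without raising an exception,
# so install an explicit handler for it
signal.signal(signal.SIGTERM, handle_term)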

Find out if a process running as root exists

I have a program that needs to know if a certain process (also part of the program, but running as a daemon) owned by root exists. The process is started from within the program using pkexec so that the program itself can run as a normal user.
Normally, if I need to know whether a process is running, I would use os.kill(pid, 0) and catch the resulting exception. Unfortunately, in this case Python simply raises OSError: [Errno 1] Operation not permitted, regardless of whether the process exists or not.
Apart from manually parsing the output of ps aux | grep myprogram, is there a simple way of knowing whether the process exists without resorting to an external library like psutil? psutil seems like an awfully large dependency to add for such a simple task.
os.geteuid()
"Return the current process’s effective user id."
root's effective uid is zero:
import os

if os.geteuid() == 0:
    print('running as root')
else:
    print('no root for you')
If you know the pid you can use psutil:
import psutil

if psutil.Process(the_pid).is_running():
    print('Process is running')
else:
    print('Process is *not* running')
Bonus points: this works with python from 2.4 to 3.3 and with linux, OS X, Windows, FreeBSD, Sun Solaris and probably more.
Checking whether /proc/the-pid exists only works on *nix machines, not on Windows.
Note also that simply checking /proc/the-pid is not enough to conclude that the process is running: the OS is free to reuse pids, so if the process ended and a different process was spawned with the same pid, you are screwed.
You must also save the creation time of the original process somewhere. Then, to check whether the process exists, first check /proc/the-pid and then check that the creation time of that process matches what you saved. psutil does this automatically.
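For illustration, a sketch of that check done with psutil (saved_create_time is a value you would have stored when the daemon was started; this assumes a recent psutil where create_time() is a method):

import psutil

def daemon_still_running(pid, saved_create_time):
    try:
        proc = psutil.Process(pid)
        # a reused pid will have a different creation time
        return proc.create_time() == saved_create_time
    except psutil.NoSuchProcess:
        return False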
