Python daemon exits silently in urlopen

I have a Python daemon started from an init.d script. The daemon optionally reads an array of IDs from a server through a REST interface; otherwise it uses an array of pre-defined IDs.
logger.info("BehovsBoBoxen control system: bbb_domoticz.py starting up")
if DOMOTICZ_IN or DOMOTICZ_OUT:
#
# build authenticate string to access Domoticz server
#
p = urllib2.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, DOMOTICZ_URL, USERNAME, PASSWORD)
handler = urllib2.HTTPBasicAuthHandler(p)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
if DOMOTICZ_IN:
#
# Find all temperature sensors in Domoticz and populate sensors array
#
url= "http://"+DOMOTICZ_URL+"/json.htm?type=devices&filter=temp&used=true&order=Name"
logger.debug('Reading from %s',url)
response=urllib2.urlopen(url)
data=json.loads(response.read())
logger.debug('Response is %s',json.dumps(data, indent=4, sort_keys=True))
for i in range(len(data["result"])):
a=data["result"][i]["Description"]
ini=a.find('%room')
if ini != -1:
ini=ini+6
rIndex=int(a[ini:])
logger.info('Configure room id %s with Domoticz sensor idx: %s', rIndex, data["result"][i]["idx"])
sensors[rIndex]=data["result"][i]["idx"]
The daemon is started from an init.d script at boot. Everything works perfectly if I use the option with predefined IDs, i.e. I don't use the REST interface. The daemon starts at boot, and I can stop and restart the daemon with the command
sudo service start/stop/restart
However, if I use the other option (read IDs from the server), the daemon does not start at boot. In the log file I find one single line ("...bbb_domoticz.py starting up"). Hence, the daemon exits silently right after this, probably in one of the following urllib2 calls; the subsequent logger.debug('Reading...') never shows up in the log file.
But the strange thing is that if I manually start the daemon with a copy of the init.d script in my home directory, the daemon starts. If I run the init.d script from /etc/init.d, the daemon immediately exits, just as it does at boot. But once I have started the daemon with the script in my home directory, I can continue to start/stop/restart it with the service command.
So my takeaway from this is that something goes wrong in urllib2 unless I have managed to start the daemon once from my home directory. It puzzles me that I don't get any traceback or anything when the daemon exits.
Any idea how to nail down this problem?
Edit: Inspired by the answer suggesting module-specific logging, I tried to add logging to urllib2. However, I can't figure out how to make this module use my logging handler. Help on this is appreciated.
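A minimal sketch of one way to get more visibility (assuming the url, logger, and auth setup from the code above): urllib2 does not use the logging module at all; its debug traces go to stdout via httplib when debuglevel is set, so the closest options are enabling those traces and logging exceptions around each call yourself:
import logging
import urllib2

# Enable urllib2/httplib debug traces; note they are printed to stdout,
# not routed through the logging module.
opener = urllib2.build_opener(
    urllib2.HTTPHandler(debuglevel=1),
    urllib2.HTTPSHandler(debuglevel=1),
)  # include the HTTPBasicAuthHandler from above as well if you need auth
urllib2.install_opener(opener)

try:
    response = urllib2.urlopen(url)
except Exception:
    # A daemon has no terminal, so a traceback is lost unless logged explicitly.
    logging.getLogger(__name__).exception('urlopen failed for %s', url)
    raise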

Related

How to detect system ACPI G2/S5 Soft Off event with python on linux

I am working on an app using Google's Compute Engine and would like to use preemptible instances.
I need my code to respond to the 30-second warning Google gives via an ACPI G2 Soft Off signal when they are going to take away your VM, as described here: https://cloud.google.com/compute/docs/instances/preemptible.
How do I detect this event in my Python code running on the machine and react to it accordingly? (In my case I need to put the job the VM was working on back on a queue of open jobs so that a different machine can pick it up.)
I am not answering the question directly, but I think that your actual intent is different:
The G2 power button event is generated both by preemption of a VM and by the gcloud compute instances stop command (or the corresponding API, which it calls);
I am assuming that you want to react specially only on instance preemption.
Avoid a common misunderstanding
GCE does not send a "30s termination warning" with the power button event. It just sends the normal, honest power button soft-off event that immediately initiates shutdown of the system.
The "warning" part that comes with it is simple: “Here is your power button event, shutdown the OS ASAP, because you have 30s before we pull the plug off the wall socket. You've been warned!”
You have two system services that you can combine in different ways to get the desired behavior.
1. Use the fact that the system is shutting down upon ACPI G2
The most kosher (and, AFAIK, the only supported) way of handling the ACPI power button event is to let the system handle it and execute what you want in the instance shutdown script. On a systemd-managed machine, the default GCP shutdown script is simply invoked by a Type=oneshot service's ExecStop= command (see systemd.service(5)). The script is run relatively late in the shutdown sequence.
If you must ensure that the shutdown script runs after (or before) one of your services is sent a signal to terminate, you can adjust the service dependencies. Things to keep in mind:
After and Before are reversed on shutdown: if X is started after Y, then it's stopped before Y.
The After dependency ensures that the service in the sequence is told to terminate before the shutdown script is run. It does not ensure that the service has already terminated.
The shutdown script is run when the google-shutdown-scripts.service is stopped as part of system shutdown.
With all that in mind, you can do sudo systemctl edit google-shutdown-scripts.service. This will create an empty configuration override file and open your $EDITOR, where you can put your After and Before dependencies, for example,
[Unit]
# Make sure that shutdown script is run (synchronously) *before* mysvc1.service is stopped.
After=mysvc1.service
# Make sure that mysvc2.service is sent a command to stop before the shutdown script is run
Before=mysvc2.service
You may specify as many After or Before clauses as you want, zero or more of each. Read systemd.unit(5) for more information.
2. Use GCP metadata
There is an instance metadatum, v1/instance/preempted. If the instance is preempted, its value is TRUE; otherwise it is FALSE.
GCP has a thorough documentation on working with instance metadata. In short, there are two ways you can use this (or any other) metadata value:
Query its value at any time, e.g. in the shutdown script. The curl(1) equivalent:
curl -sfH 'Metadata-Flavor: Google' \
'http://169.254.169.254/computeMetadata/v1/instance/preempted'
Run an HTTP request that will complete (200) when the metadatum changes. The only change that can ever happen to it is from FALSE to TRUE, as preemption is irreversible.
curl -sfH 'Metadata-Flavor: Google' \
'http://169.254.169.254/computeMetadata/v1/instance/preempted?wait_for_change=true'
Caveat: the metadata server may return a 503 response if it is temporarily unavailable (this is rare, but happens), so certain retry logic is required. This is especially true for the long-running second form (with ?wait_for_change=true), as the pending request may return at any time with code 503. Your code should be ready to handle this and restart the query. curl does not return the HTTP error code directly, but if you are scripting it you can use the fact that x=$(curl ...) produces an empty string on failure; your criterion for positive detection of preemption is then [[ $x == TRUE ]].
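A minimal Python sketch of that retry logic (my own illustration; only the metadata URL and header come from the documentation above):
import time
import urllib.error
import urllib.request

METADATA_URL = ('http://169.254.169.254/computeMetadata/v1/'
                'instance/preempted?wait_for_change=true')

def wait_for_preemption():
    # Block until the instance is preempted, restarting the request on 503.
    while True:
        req = urllib.request.Request(METADATA_URL,
                                     headers={'Metadata-Flavor': 'Google'})
        try:
            body = urllib.request.urlopen(req).read().decode().strip()
            if body == 'TRUE':
                return  # preemption is irreversible, so we are done
        except urllib.error.HTTPError as e:
            if e.code != 503:
                raise
        time.sleep(1)  # brief pause before re-issuing the hanging request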
Summary
If you want to detect that the VM is shutting down for any reason, use Google-provided shutdown script.
If you also need to distinguish whether the VM was in fact preempted, as opposed to stopped with gcloud compute instances stop <vmname> (which also sends the power button event!), query the preempted metadata in the shutdown script.
Alternatively, run a pending HTTP request for the metadata change and react to it accordingly. This completes successfully only when the VM is preempted (but may also complete with an error at any time).
If the daemon that you run is your own, you can also directly query the preempted metadata from the code path which handles the termination signal, if you need to distinguish between different shutdown reasons.
It may well be that the real decision point is whether you are holding an "active job" that you want to return to the "queue": if your service is asked to stop while holding an active job, just return it, regardless of the reason you are being stopped. But I cannot comment on this without knowing your actual design.
I think the simplest way to handle GCP preemption is using SIGTERM.
The SIGTERM signal is a generic signal used to cause program termination. Unlike SIGKILL, this signal can be blocked, handled, and ignored. It is the normal way to politely ask a program to terminate. (Source: https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html)
This does depend on shutdown scripts, which are run on a "best effort" basis. In practice, shutdown scripts are very reliable for short scripts.
In your shutdown script:
echo "Running shutdown script"
preempted=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/preempted" -H "Metadata-Flavor: Google")
if [ "$preempted" = "TRUE" ]; then
    PID="$(pgrep -o "python")"
    echo "Send SIGTERM to python"
    kill "$PID"
    sleep infinity
fi
echo "Shutting down"
In main.py:
import signal
import os

def sigterm_handler(sig, frame):
    print("Got SIGTERM")
    os.environ["IS_PREEMPTED"] = "TRUE"  # environment values must be strings
    # Call cleanup functions

signal.signal(signal.SIGTERM, sigterm_handler)

if __name__ == "__main__":
    print("Main")

Python subprocess -- close Django server and Docker container with Ctrl-C, return to terminal

I'm trying to figure out how to properly close out my script that's supposed to start up a Django server running in a docker container (boot2docker, on Mac OS X). Here's the pertinent code block:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        subprocess.call('./startdockerdjango.sh', shell=True)
    except KeyboardInterrupt:
        return
Where startdockerdjango.sh takes care of setting the environment variables that Docker needs and starts the server up. The script overall is supposed to know whether to do first-time setup and initialization or simply start the container and server; catching the CalledProcessError means that first-time setup was already done and that the container and server can just be started up.
The startup works fine, but when a user presses Ctrl-C to stop the server, the server stops normally and yet the process that started it apparently keeps running. If I press Return, I get back to the normal terminal prompt. If I run any shell command, like ls, it is carried out and then I can return to the terminal.
I want to change the code so that if a user presses Ctrl-C, the server and the container it runs in stop normally, and afterward the process stops and the whole script exits. How can this be done? I don't want to just kill or terminate the process upon KeyboardInterrupt, since then the server and container won't be able to stop normally but will be killed off abruptly.
UPDATE:
I recently tried the following according to Padraic Cunningham's comment:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        startproc = subprocess.Popen('./startdockerdjango.sh')
    except KeyboardInterrupt:
        startproc.send_signal(SIGTERM)
        startproc.wait()
        return
This was my attempt to send a TERM signal to the server so it would shut down gracefully, and then to wait() for the process (startproc) to complete. However, it just makes the container and server end abruptly, which is exactly what I was trying to prevent. The same thing happens if I try SIGINT instead. What, if anything, am I doing wrong in this second approach? I still want the same overall behavior as before: a single Ctrl-C that stops the container and server and then exits the script.
You might want to create the process using Popen. It will give you a little more control over how you manage the child process.
env = {"MY_ENV_VAR": "some value"}
proc = subprocess.Popen("./dockerdjango.sh", env=env)
try:
proc.wait()
except KeyboardInterupt:
proc.terminate() # on linux this gives the a chance to clean up,
# or even ignore the signal entirely
# use proc.send_signal(...) and the module signal to send other signals.
# or proc.kill() if you wish to be kill the process immediately.
If you set the environment variables in Python, it will also result in fewer child processes that need to be killed.
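One caveat worth adding (my note, not the answerer's): passing env= replaces the child's entire environment, so it is usually safer to extend a copy of the current one:
import os
import subprocess

# Inherit the parent's environment and add to it, rather than replacing it outright.
env = dict(os.environ, MY_ENV_VAR="some value")
proc = subprocess.Popen("./startdockerdjango.sh", env=env)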
In the end, it wasn't worth the effort to have the script know whether to do first-time initialization or just container and server startup. Instead, the script just attempts first-time setup and then tells the user to run docker-compose up after a successful setup. This is a better solution for my specific situation than trying to figure out how to have Ctrl-C properly shut down the server and then exit the script.
To reset the Django server subprocess, execute in your terminal:
$ sudo lsof -i tcp:8080
$ sudo lsof -i tcp:8080|awk '{print $2}'|cut -d/ -f 1|xargs kill

Windows python script to run a server and continue

I am working on a Python script to launch a server, perhaps in the background or in a different process, and then do some further processing. Once the rest of the processing is over, the script should kill the launched server.
For Example
server_cmd = 'launch_server.exe -source '+ inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running in an infinite loop listening for requests. Is there a good way to send this process to the background so that it doesn't wait for command-line input?
I am using Python 2.7.8
It's odd that your script does not continue after launching the server command. In the subprocess module, Popen starts another child process while the parent process (your script) moves on.
However, there is already a bug in your code: cmd_pid is an int object and has no terminate method. You should call terminate on the subprocess.Popen object instead.
Making a small change resolved the problem:
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
server_proc.terminate()
Thanks Xu for the correction on terminate.
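For completeness, a sketch of the corrected flow (assuming launch_server.exe and inputfile from the question):
import subprocess

server_cmd = 'launch_server.exe -source ' + inputfile
server_proc = subprocess.Popen(server_cmd)  # returns immediately; the server runs as a child process

# ... continue doing some processing ...

server_proc.terminate()  # ask the server to exit once processing is done
server_proc.wait()       # reap the child so no stray process lingers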

Simple HTTPServer python on background

I have this script, and I do not know how to keep it running in the background, because when I close the session it closes too. I tried putting it in crontab, but then it does not find index.html and shows the listing of files in / instead.
#! /opt/python3/bin/python3
from http.server import HTTPServer, CGIHTTPRequestHandler
port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()
Basically you are asking how to detach the program from your shell ... here are a few options:
nohup ./scriptname.py >/dev/null 2>&1 & # send the program to the background; nohup keeps it alive when the session closes
Use GNU screen (or similar): run your program via screen and you can bring it back up when you log back in
Daemonize your program properly
Update:
Recently I have not written a single daemon in Python. The days of forking twice or using a daemon library seem to be well behind us. I currently use supervisord and have heard good things about circus. These are just a few extra options you can use to deploy Python daemons.
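As for the crontab symptom in the original question (a file listing instead of index.html): cron starts the script with a different working directory, so one fix is to chdir to the docroot first. A minimal sketch, with a hypothetical path:
#! /opt/python3/bin/python3
import os
from http.server import HTTPServer, CGIHTTPRequestHandler

# When launched from cron, the working directory is not the docroot, which is
# why the server shows a directory listing for / instead of index.html.
os.chdir('/path/to/docroot')  # hypothetical path; point it at the directory holding index.html

port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()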

python, running command line servers - they're not listening properly

I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) via the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine, logs (via pexpect) to the screen fine, and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it up manually, by typing commands on the command line; starting it via subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
Thank you.
basic example:
note:
"-pz" just adds ./ebin to the module search path for the erl VM (the library search path)
"-run" runs moduleName without any parameters
command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact() # Give control of the child to the user
All of this works correctly, which is strange. I have logging inside my code and all the log messages are output as they should. The server wouldn't listen even when I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem while working on sshsplit (https://launchpad.net/sshsplit).
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
openargs = ["ssh", "-ND", "3000"]
print "Launching %s" %(" ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only let you see exactly what command you are launching, but should also pass the values to the executable correctly. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory or configuration file?).
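Since the question's command uses the relative path ./ebin, the working directory is another concrete thing to pin down; Popen lets you set it (and the environment) explicitly. A sketch with a hypothetical path:
import subprocess

p = subprocess.Popen(
    ["erl", "-pz", "./ebin", "-run", "moduleName"],
    cwd="/path/to/app",   # hypothetical: run from the app's directory so ./ebin resolves
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)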
