Related
Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
During t.watch(), I want to be able to close this application from the console.
I tried using CTRL+C, and it works. However, if I click the console window's exit button, it doesn't work. I checked many related questions about this, but it seems I cannot find the right answer.
What I want to do:
The console can be closed while the program is still running. To handle that, when the exit button is pressed I want to clean up the objects (delete the created temporary files), roll back temporary changes, etc.
Question:
How can I handle console exit?
How can I integrate it into object destructors (__exit__())?
Is it even possible? (How about py2exe?)
Note: the code will be compiled with py2exe; I hope the effect is the same.
You may want to have a look at signals. When a *nix terminal is closed with a running process, that process receives a couple of signals. For instance, the following code waits for the SIGHUP hangup signal and writes a final message. This code works under OS X and Linux. I know you are specifically asking about Windows, but you might want to give it a shot, or investigate what signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)

print('Waiting for the final blow...')
#signal.pause() # does not work under windows
sleep(10) # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
Update:
Actually, the closest thing in Windows is win32api.SetConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press <enter> to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that the file is gone (it should be).
You can simply call foo.close() to delete it manually in your code.
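For the cleanup-on-exit use case, the context-manager form is handy too. A minimal sketch (the filename suffix is just illustrative); the file is deleted as soon as the with-block exits:
import tempfile

with tempfile.NamedTemporaryFile(suffix='.txt') as tmp:
    tmp.write(b'new line\n')  # write like any other file object
    tmp.flush()
    print(tmp.name)           # full path under the system temp directory
# leaving the with-block closes the handle and deletes the file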
I have a dtrace snippet run via a python script; the dtrace snippet is such that it generates data when CTRL-C is issued to it. So I defined a signal_handler in the python script to catch CTRL-C from the user and relay it to the dtrace invocation made via subprocess.Popen, but I am unable to get any output in my log file. Here is the script:
import signal
import subprocess
import time

# dtrace_script is defined elsewhere in the original script
Proc = []
signal_posted = False

def signal_handler(sig, frame):
    print("Got CTRL-C!")
    global signal_posted
    signal_posted = True
    global Proc
    Proc.send_signal(signal.SIGINT) #Signal posting from handler

def execute_hotkernel():
    #
    # Generate the .out output file
    #
    fileout = "hotkernel.out"
    fileo = open(fileout, "w+")
    global Proc
    Proc = subprocess.Popen(['/usr/sbin/dtrace', '-n', dtrace_script], stdout = fileo)
    while Proc.poll() is None:
        time.sleep(0.5)

def main():
    signal.signal(signal.SIGINT, signal_handler) # Change our signal handler
    execute_hotkernel()

if __name__ == '__main__':
    main()
Since I have the file hotkernel.out set as stdout in the subprocess.Popen command, I was expecting the output from dtrace to be redirected to hotkernel.out on CTRL-C, but it is empty. What is missing here?
I have a similar issue.
In my case, it's a shell script that runs until you hit Control-C and then prints out summary information. When I run it using subprocess.Popen, whether with a PIPE or a file object for stdout, I either don't get the information (with a file object) or it hangs when I try to call stdout.readline().
I finally tried running the subprocess from the interpreter and discovered that, with a PIPE, I could get the last line of output after the SIGINT if I call stdout.readline() (where it hangs), hit Control-C (in the interpreter), and then call stdout.readline() again.
I do not know how to emulate this in a script, for file output or for a PIPE. I did not try the file output in the interpreter.
EDIT:
I finally got back to this and determined that it's actually pretty easy to reproduce outside of python, and it really has nothing to do with python.
/some_cmd_that_ends_on_sigint
(enter control-c)
*data from stdout in event handler*
Works
/some_cmd_that_ends_on_sigint | tee some.log
(enter control-c)
*Nothing sent to stdout in event handler prints to the screen or the log*
Where's my log?
I ended up just adding a file stream in the event handler (in the some_cmd_that_ends_on_sigint source) that writes the data to a (possibly secondary) log. It works, if a bit awkwardly. You get the data on the screen when running without any piping, and I can also read it when piped, or from python, via the secondary log.
I'm trying to write a Python script that will enable me to start the Google App Engine dev_appserver using coverage.py, fetch the /test url from the app that I launch, wait for the server to finish returning the page, then shut down the dev_appserver, and then generate a report.
My challenge is how to launch the dev_appserver in the background so that I can do the http fetch and then how to shut down the dev_appserver before generating my report.
I'm heading towards something like this:
# get_gae_coverage.py
# Launch dev_appserver with coverage.py (shell command):
#   coverage run --source=./ /usr/local/bin/dev_appserver.py --clear_datastore --use_sqlite .

# Fetch /test
urllib.urlopen('http://localhost:8080/test').read()

# Shutdown dev_appserver somehow
# ??

# Generate coverage report (shell command):
#   coverage report
What is the best way to write a python script to do this?
You should go with subprocess.Popen:
import os
import signal
import subprocess
import time
import urllib

coverage_proc = subprocess.Popen(
    ['coverage', 'run', your_flag_list],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)

time.sleep(5) # Find the correct sleep value
urllib.urlopen('http://localhost:8080/test').read()
time.sleep(1)
os.kill(coverage_proc.pid, signal.SIGINT)
Here is another approach you can use to test whether the server is up and running, by reading its startup output:
line = coverage_proc.stdout.readline()
while '] Running application' not in line:
    line = coverage_proc.stdout.readline()
threading is the way to accomplish this kind of task. Namely, you start the dev_appserver in a thread (or in the main thread) and, while it is running, run and collect the results using the coverage module; then kill the dev_appserver python process in another thread, and you will have the results from coverage.
Here is a sample snippet which runs dev_appserver.py in a thread and then waits for 10 seconds before killing the python process. You can modify the end method so that, instead of waiting for 10 seconds, it waits just a few seconds (to let the python process start), then does the coverage testing, and once that is done, kills the appserver and finishes coverage.
import threading
import subprocess
import time

hold_process = []

def start():
    print 'In the start process'
    proc = subprocess.Popen(['/usr/bin/python', 'dev_appserver.py', 'yourapp'])
    hold_process.append(proc)

def end():
    time.sleep(10)
    proc = hold_process.pop(0)
    print 'Killing the appserver process'
    proc.kill()

t = threading.Thread(name='startprocess', target=start)
t.daemon = True
w = threading.Thread(name='endprocess', target=end)
t.start()
w.start()
t.join()
w.join()
I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, for instance by turning it into a daemon or service in Linux? Would I also need a never-ending loop in the program, or can it be done by just having the code re-executed multiple times?
You have two options here.

1. Make a proper cron job that calls your script. Cron is the common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script to a crontab, or place a symlink to it in a special directory, and the daemon handles the job of launching it in the background. You can read more at Wikipedia. A variety of different cron daemons exist, but your GNU/Linux system should have one already installed.

2. Use some kind of python approach (a library, for example) so your script can daemonize itself. Yes, it will require a simple event loop (where your events are timer triggers, possibly provided by a sleep function).

I wouldn't recommend choosing option 2, because you would in fact be duplicating cron's functionality. The Linux system paradigm is to let multiple simple tools interact to solve your problems. Unless there are additional reasons why you need a daemon (beyond triggering periodically), choose the other approach.

Also, if you use daemonize with a loop and a crash happens, no one will check the mail after that (as pointed out by Ivan Nevostruev in comments to this answer), whereas if the script is added as a cron job, it will just trigger again.
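For illustration, a minimal crontab entry (the script path is hypothetical) that runs the mail check every five minutes and appends output to a log:
*/5 * * * * /usr/bin/python /home/you/check_mail.py >> /home/you/check_mail.log 2>&1
You add it with crontab -e; cron then takes care of launching the script in the background on schedule.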
Here's a nice class that is taken from here:
#!/usr/bin/env python
import sys, os, time, atexit
from signal import SIGTERM
class Daemon:
    """
    A generic daemon class.

    Usage: subclass the Daemon class and override the run() method
    """
    def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.pidfile = pidfile

    def daemonize(self):
        """
        do the UNIX double-fork magic, see Stevens' "Advanced
        Programming in the UNIX Environment" for details (ISBN 0201563177)
        http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
        """
        try:
            pid = os.fork()
            if pid > 0:
                # exit first parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # decouple from parent environment
        os.chdir("/")
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid > 0:
                # exit from second parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = file(self.stdin, 'r')
        so = file(self.stdout, 'a+')
        se = file(self.stderr, 'a+', 0)
        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        # write pidfile
        atexit.register(self.delpid)
        pid = str(os.getpid())
        file(self.pidfile, 'w+').write("%s\n" % pid)

    def delpid(self):
        os.remove(self.pidfile)

    def start(self):
        """
        Start the daemon
        """
        # Check for a pidfile to see if the daemon already runs
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if pid:
            message = "pidfile %s already exists. Daemon already running?\n"
            sys.stderr.write(message % self.pidfile)
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """
        Stop the daemon
        """
        # Get the pid from the pidfile
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if not pid:
            message = "pidfile %s does not exist. Daemon not running?\n"
            sys.stderr.write(message % self.pidfile)
            return # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, SIGTERM)
                time.sleep(0.1)
        except OSError, err:
            err = str(err)
            if err.find("No such process") > 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print str(err)
                sys.exit(1)

    def restart(self):
        """
        Restart the daemon
        """
        self.stop()
        self.start()

    def run(self):
        """
        You should override this method when you subclass Daemon. It will be called after the process has been
        daemonized by start() or restart().
        """
You should use the python-daemon library; it takes care of everything.
From PyPI: Library to implement a well-behaved Unix daemon process.
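Minimal usage looks something like this (a sketch; check_mail stands in for your own polling loop):
import time
import daemon

def check_mail():
    while True:
        # ... check the mailbox and hand new mail to the external program ...
        time.sleep(60)

with daemon.DaemonContext():
    check_mail()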
Assuming that you really do want your loop to run 24/7 as a background service, and for a solution that doesn't involve injecting libraries into your code, you can simply create a service template, since you are using Linux:
[Unit]
Description = <Your service description here>
# Assuming you want to start after network interfaces are made available
After = network.target

[Service]
Type = simple
ExecStart = python <Path of the script you want to run>
# User and group to run the script as
User = <user>
Group = <group>
# Restart when there are errors
Restart = on-failure
SyslogIdentifier = <Name of logs for the service>
RestartSec = 5
TimeoutStartSec = infinity

[Install]
# Start in the regular multi-user runlevel
WantedBy = multi-user.target
Place that file in your systemd unit folder (usually /etc/systemd/system/), in a *.service file, and install it using the following systemctl commands (these will likely require sudo privileges):
systemctl daemon-reload
systemctl enable <service file name without .service extension>
systemctl start <service file name without .service extension>
You can then check that your service is running by using the command:
systemctl | grep running
You can use fork() to detach your script from the tty and have it continue to run, like so:
import os, sys

fpid = os.fork()
if fpid != 0:
    # parent process: exit here; the child (PID fpid) keeps running as the daemon
    sys.exit(0)
Of course, you also need to implement an endless loop, like:
from time import sleep

while True:
    do_your_check()
    sleep(5)
Hope this gets you started.
You can also make the python script run as a service using a shell script. First, create a shell script to run the python script, like this (scriptname is an arbitrary name):
#!/bin/sh
script='/home/.. full path to script'
/usr/bin/python $script &
Now make a file in /etc/init.d/scriptname:
#! /bin/sh

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DAEMON=/home/.. path to shell script scriptname created to run python script
PIDFILE=/var/run/scriptname.pid

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

case "$1" in
  start)
     log_daemon_msg "Starting feedparser"
     start_daemon -p $PIDFILE $DAEMON
     log_end_msg $?
   ;;
  stop)
     log_daemon_msg "Stopping feedparser"
     killproc -p $PIDFILE $DAEMON
     PID=`ps x | grep feed | head -1 | awk '{print $1}'`
     kill -9 $PID
     log_end_msg $?
   ;;
  force-reload|restart)
     $0 stop
     $0 start
   ;;
  status)
     status_of_proc -p $PIDFILE $DAEMON scriptname && exit 0 || exit $?
   ;;
  *)
     echo "Usage: /etc/init.d/scriptname {start|stop|restart|force-reload|status}"
     exit 1
   ;;
esac

exit 0
Now you can start and stop your python script with the commands /etc/init.d/scriptname start and /etc/init.d/scriptname stop.
A simple and well-supported option is Daemonize.
Install it from Python Package Index (PyPI):
$ pip install daemonize
and then use it like:
import os, sys
from daemonize import Daemonize

def main():
    # your code here
    pass

if __name__ == '__main__':
    myname = os.path.basename(sys.argv[0])
    pidfile = '/tmp/%s' % myname  # any name
    daemon = Daemonize(app=myname, pid=pidfile, action=main)
    daemon.start()
Ubuntu has a very simple way to manage a service.
For python, the difference is that ALL the dependencies (packages) have to be in the same directory that the main file is run from.
I just managed to create such a service to provide weather info to my clients.
Steps:
Create your python application project as you normally do.
Install all dependencies locally like:
sudo pip3 install package_name -t .
Create your command line variables and handle them in code (if you need any)
Create the service file. Something (minimalist) like:
[Unit]
Description=1Droid Weather middleware provider
[Service]
Restart=always
User=root
WorkingDirectory=/home/ubuntu/weather
ExecStart=/usr/bin/python3 /home/ubuntu/weather/main.py httpport=9570 provider=OWMap
[Install]
WantedBy=multi-user.target
Save the file as myweather.service (for example).
Make sure that your app runs if started from the working directory:
python3 main.py httpport=9570 provider=OWMap
The service file produced above, named myweather.service (it is important to have the .service extension), will be treated by the system as the name of your service. That is the name you will use to interact with your service.
Copy the service file:
sudo cp myweather.service /lib/systemd/system/myweather.service
Refresh the daemon registry:
sudo systemctl daemon-reload
Stop the service (if it was running):
sudo service myweather stop
Start the service:
sudo service myweather start
Check the log (this is where your print statements go):
tail -f /var/log/syslog
Or check the status with:
sudo service myweather status
Go back to the start for another iteration if needed.
This service is now running, and even if you log out it will not be affected.
And YES, if the host is shut down and restarted, this service will be restarted...
cron is clearly a great choice for many purposes. However, it doesn't create a service or daemon as you requested in the OP; cron just runs jobs periodically (meaning the job starts and stops), and no more often than once a minute. There are issues with cron: for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better approach than creating a native Python daemon.
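For illustration, a minimal supervisord program section (names and paths are hypothetical) that keeps the script running and restarts it if it crashes:
[program:mailchecker]
command=/usr/bin/python /home/you/check_mail.py
autostart=true
autorestart=true
stdout_logfile=/var/log/mailchecker.out.log
stderr_logfile=/var/log/mailchecker.err.log
Drop this into supervisord's configuration (e.g. a file under /etc/supervisor/conf.d/) and use supervisorctl to start, stop, and monitor the process.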
How about using the nohup command on Linux?
I use it for running my commands on my Bluehost server.
Please advise if I am wrong.
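For example (a sketch with a hypothetical script name; the redirection keeps the output around after you log out):
nohup python myscript.py > myscript.log 2>&1 &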
If you are using a terminal (ssh or something) and you want to keep a long-running script working after you log out from the terminal, you can try this:
screen
Install it with apt-get install screen.
Create a virtual terminal inside (named abc, say): screen -dmS abc
Now we connect to abc: screen -r abc
So now we can run the python script: python keep_sending_mails.py
From now on, you can directly close your terminal; the python script will keep running rather than being shut down, because keep_sending_mails.py is a child process of the virtual screen session rather than of the terminal (ssh).
If you want to go back and check your script's running status, you can use screen -r abc again.
First, read up on mail aliases. A mail alias will do this inside the mail system without you having to fool around with daemons or services or anything of the sort.
You can write a simple script that will be executed by sendmail each time a mail message is sent to a specific mailbox.
See http://www.feep.net/sendmail/tutorial/intro/aliases.html
If you really want to write a needlessly complex server, you can do this:
nohup python myscript.py &
That's all it takes. Your script simply loops and sleeps.
import time

def do_the_work():
    # one round of polling -- checking email, whatever.
    pass

while True:
    time.sleep(600) # 10 min.
    try:
        do_the_work()
    except:
        pass
I would recommend this solution. You need to inherit from it and override the run method.
import sys
import os
from signal import SIGTERM
from abc import ABCMeta, abstractmethod

class Daemon(object):
    __metaclass__ = ABCMeta

    def __init__(self, pidfile):
        self._pidfile = pidfile

    @abstractmethod
    def run(self):
        pass

    def _daemonize(self):
        # decouple threads
        pid = os.fork()

        # stop first thread
        if pid > 0:
            sys.exit(0)

        # write pid into a pidfile
        with open(self._pidfile, 'w') as f:
            print >> f, os.getpid()

    def start(self):
        # if daemon is already started, throw an error
        if os.path.exists(self._pidfile):
            raise Exception("Daemon is already started")

        # create and switch to daemon thread
        self._daemonize()

        # run the body of the daemon
        self.run()

    def stop(self):
        # check the pidfile exists
        if os.path.exists(self._pidfile):
            # read pid from the file
            with open(self._pidfile, 'r') as f:
                pid = int(f.read().strip())

            # remove the pidfile
            os.remove(self._pidfile)

            # kill daemon
            os.kill(pid, SIGTERM)
        else:
            raise Exception("Daemon is not started")

    def restart(self):
        self.stop()
        self.start()
To create something that runs like a service, you can use the Cement framework. Cement is a CLI framework on which you can deploy your application.
The first thing you must do is install the Cement framework.
Command line interface of the app:
interface.py
from cement.core.foundation import CementApp
from cement.core.controller import CementBaseController, expose
import YourApp

class MyBaseController(CementBaseController):
    class Meta:
        label = 'base'
        description = "your application description"
        arguments = [
            (['-r', '--run'],
             dict(action='store_true', help='Run your application')),
            (['-v', '--version'],
             dict(action='version', version="Your app version")),
            (['-s', '--stop'],
             dict(action='store_true', help="Stop your application")),
        ]

    @expose(hide=True)
    def default(self):
        if self.app.pargs.run:
            # Start running your app from here!
            YourApp.yourApp()
        if self.app.pargs.stop:
            # Stop your application
            YourApp.yourApp.stop()

class App(CementApp):
    class Meta:
        label = 'Uptime'
        base_controller = 'base'
        handlers = [MyBaseController]

with App() as app:
    app.run()
YourApp.py class:
import threading

class yourApp:
    def __init__(self):
        self.loger = log_exception.exception_loger()  # author's own logger (module not shown)
        thread = threading.Thread(target=self.start, args=())
        thread.daemon = True
        thread.start()

    def start(self):
        # Do everything you want here
        pass

    def stop(self):
        # Do whatever is needed to stop your application
        pass
Keep in mind that your app must run in a thread for this to work as a daemon.
To run the app, just do this on the command line:
python interface.py --help
Use whatever service manager your system offers; for example, under Ubuntu use upstart. This will handle all the details for you, such as starting on boot and restarting on crash.
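For illustration, a minimal upstart job (hypothetical name and path, saved as /etc/init/mailchecker.conf) might look like:
description "mail checking script"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/python /home/you/check_mail.py
You can then control it with sudo start mailchecker and sudo stop mailchecker.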
You can run a process as a subprocess inside a script, or from another script, like this:
import subprocess

subprocess.Popen(arguments, close_fds=True, stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)
Or use a ready-made utility
https://github.com/megashchik/d-handler
I am working on a daemon in which I need to embed an HTTP server. I am attempting to do it with BaseHTTPServer; when I run it in the foreground it works fine, but when I try to fork the daemon into the background, it stops working. My main application continues to work, but BaseHTTPServer does not.
I believe this has something to do with the fact that BaseHTTPServer sends log data to STDOUT and STDERR. I am redirecting those to files. Here is the code snippet:
# Start the HTTP Server
server = HTTPServer((config['HTTPServer']['listen'], config['HTTPServer']['port']), HTTPHandler)

# Fork our process to detach if not told to stay in foreground
if options.foreground is False:
    try:
        pid = os.fork()
        if pid > 0:
            logging.info('Parent process ending.')
            sys.exit(0)
    except OSError, e:
        sys.stderr.write("Could not fork: %d (%s)\n" % (e.errno, e.strerror))
        sys.exit(1)

    # Second fork to put into daemon mode
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent, print eventual PID before
            print 'Daemon has started - PID # %d.' % pid
            logging.info('Child forked as PID # %d' % pid)
            sys.exit(0)
    except OSError, e:
        sys.stderr.write("Could not fork: %d (%s)\n" % (e.errno, e.strerror))
        sys.exit(1)

    logging.debug('After child fork')

    # Detach from parent environment
    os.chdir('/')
    os.setsid()
    os.umask(0)

    # Close stdin
    sys.stdin.close()

    # Redirect stdout, stderr
    sys.stdout = open('http_access.log', 'w')
    sys.stderr = open('http_errors.log', 'w')

# Main Thread Object for Stats
threads = []

logging.debug('Kicking off threads')

while ...
    lots of code here
    ...

server.serve_forever()
Am I doing something wrong here, or is BaseHTTPServer somehow prevented from being daemonized?
Edit: Updated the code to demonstrate the additional, previously missing code flow, and to note that log.debug output in my forked, background daemon shows I am hitting code after the fork.
After a bit of googling, I finally stumbled over the BaseHTTPServer documentation, and after that I ended up with:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from SocketServer import ThreadingMixIn

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""

server = ThreadedHTTPServer((config['HTTPServer']['listen'], config['HTTPServer']['port']), HTTPHandler)
server.serve_forever()
This for the most part comes after I fork, and it ended up resolving my problem.
Here's how to do this with the python-daemon library:
from BaseHTTPServer import (HTTPServer, BaseHTTPRequestHandler)
import contextlib

import daemon

from my_app_config import config

# Make the HTTP Server instance.
server = HTTPServer(
    (config['HTTPServer']['listen'], config['HTTPServer']['port']),
    BaseHTTPRequestHandler)

# Make the context manager for becoming a daemon process.
daemon_context = daemon.DaemonContext()
daemon_context.files_preserve = [server.fileno()]

# Become a daemon process.
with daemon_context:
    server.serve_forever()
As usual for a daemon, you need to decide how you will interact with the program after it becomes a daemon. For example, you might register a systemd service, or write a PID file, etc. That's all outside the scope of the question though.
In particular, it's outside the scope of the question to ask: once it's become a daemon process (necessarily detached from any controlling terminal), how do I stop the daemon process? That's up to you to decide, as part of defining the program's behaviour.
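For example, python-daemon can also manage a PID file for you (a sketch; the path is hypothetical, and daemon.pidfile ships with the library):
import daemon
import daemon.pidfile

daemon_context = daemon.DaemonContext(
    pidfile=daemon.pidfile.TimeoutPIDLockFile('/var/run/myhttpd.pid'))
daemon_context.files_preserve = [server.fileno()]

with daemon_context:
    server.serve_forever()
A stop command can then read the PID from that file and send the process a signal.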
You start by instantiating an HTTPServer, but you don't actually tell it to start serving in any of the supplied code. In your child process, try calling server.serve_forever().
See this for reference
A simple solution that worked for me was to override the BaseHTTPRequestHandler method log_message(), so we prevent any kind of writing to stdout and avoid problems when daemonizing.
class CustomRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        pass

    # ...
    # rest of custom class code
    # ...
Just use daemontools or some other similar script instead of rolling your own daemonizing process. It is much better to keep this out of your script.
Also, your best option: don't use BaseHTTPServer. It is really bad. There are many good HTTP servers for python, e.g. CherryPy or Paste. Both include ready-to-use daemonizing scripts.
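For comparison, a minimal CherryPy server is only a few lines (a sketch; the handler is illustrative):
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "hello"

cherrypy.quickstart(Root())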
Since this has solicited answers since I originally posted, I thought I'd share a little info.
The issue with the output has to do with the fact that the default handler for the logging module uses a StreamHandler. The best way to handle this is to create your own handlers. In the case where you want to use the default logging module, you can do something like this:
# Get the default logger
default_logger = logging.getLogger('')

# Add your own handler ('myotherhandler' is a placeholder -- e.g. a logging.FileHandler)
default_logger.addHandler(myotherhandler)

# Remove the default stream handler -- iterate over a copy of the
# list, since removeHandler() mutates it
for handler in default_logger.handlers[:]:
    if isinstance(handler, logging.StreamHandler):
        default_logger.removeHandler(handler)
Also, at this point I have moved to using the very nice Tornado project for my embedded http servers.
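For reference, an embedded Tornado server in its minimal form looks something like this (a sketch; the handler and port are illustrative):
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello")

application = tornado.web.Application([(r"/", MainHandler)])
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()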