python-daemon and logging: set logging level interactively - python

I have a python-daemon process that logs to a file via a ThreadedTCPServer (inspired by the cookbook example: https://docs.python.org/2/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network, as I will have many such processes writing to the same file). I am controlling the spawning of the daemon process using subprocess.Popen from an ipython console, and this is how the application will be run. I can successfully write to the log file from both the main ipython process and the daemon process, but I am unable to change the level of both by simply setting the level of the root logger in ipython. Is this something that should be possible? Or will it require custom functionality to set the logging level of the daemon separately?
Edit: As requested, here is an attempt to provide a pseudo-code example of what I am trying to achieve. I hope that this is a sufficient description.
daemon_script.py
import logging
import daemon
from other_module import function_to_run_as_daemon

class Daemon(object):
    def __init__(self):
        self.daemon_name = __name__
        logging.basicConfig()  # <--- required, or I don't get any log messages
        self.logger = logging.getLogger(self.daemon_name)
        self.logger.debug("Created logger successfully")

    def run(self):
        # basicConfig() put its handler on the root logger, so preserve that stream
        with daemon.DaemonContext(files_preserve=[logging.getLogger().handlers[0].stream]):
            self.logger.debug("Daemonised successfully - about to enter function")
            function_to_run_as_daemon()

if __name__ == "__main__":
    d = Daemon()
    d.run()
Then in ipython I would run something like:
>>> import logging
>>> rootlogger = logging.getLogger()
>>> rootlogger.info( "test" )
INFO:root:"test"
>>> subprocess.Popen( ["python" , "daemon_script.py"] )
DEBUG:__main__:"Created logger successfully"
DEBUG:__main__:"Daemonised successfully - about to enter function"
# Now I'm finished debugging and testing; I want to reduce the level for all the loggers by changing the level of the handler
# Note that I also tried changing the level of the root handler, but saw no change
>>> rootlogger.handlers[0].setLevel(logging.INFO)
>>> rootlogger.info( "test" )
INFO:root:"test"
>>> print( rootlogger.debug("test") )
None
>>> subprocess.Popen( ["python" , "daemon_script.py"] )
DEBUG:__main__:"Created logger successfully"
DEBUG:__main__:"Daemonised successfully - about to enter function"
I think that I may not be approaching this correctly, but it's not clear to me what would work better. Any advice would be appreciated.

The logger you create in your daemon won't be the same as the logger you made in ipython. You could test this to be sure, by just printing out both logger objects themselves, which will show you their memory addresses.
I think a better pattern would be to pass whether or not you want to be in "debug" mode when you run the daemon. In other words, call Popen like this:
subprocess.Popen( ["python" , "daemon_script.py", "debug"] )
It's up to you: you could pass a string meaning "debug mode is on" as above, or you could pass the numeric value of the log level that means "debug", e.g.:
subprocess.Popen( ["python" , "daemon_script.py", "10"] )
(https://docs.python.org/2/library/logging.html#levels)
Then in the daemon's __init__, use sys.argv, for example, to get that argument and use it:
...
import sys

def __init__(self):
    self.daemon_name = __name__
    logging.basicConfig()  # <--- required, or I don't get any log messages
    log_level = int(sys.argv[1])  # Probably don't actually just blindly convert it without error handling
    self.logger = logging.getLogger(self.daemon_name)
    self.logger.setLevel(log_level)
...
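Tying the two pieces together, here is a minimal sketch of what daemon_script.py could look like with the level passed in from ipython; the fallback level and the argument handling are assumptions for illustration, not part of the original question:

import logging
import sys

DEFAULT_LEVEL = logging.WARNING  # assumed fallback when no level argument is given

def parse_level(argv):
    # Accept a numeric level such as "10" (DEBUG); fall back to the default otherwise.
    try:
        return int(argv[1])
    except (IndexError, ValueError):
        return DEFAULT_LEVEL

if __name__ == "__main__":
    level = parse_level(sys.argv)
    logging.basicConfig(level=level)     # applies the level to the root handler
    logger = logging.getLogger(__name__)
    logger.setLevel(level)               # and to this module's logger
    logger.debug("debug logging is on")  # only emitted when the parent asked for DEBUG
    logger.info("daemon starting")

From ipython the daemon would then be launched with, for example, subprocess.Popen(["python", "daemon_script.py", str(logging.DEBUG)]) while testing, and str(logging.INFO) once you are done debugging.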

Related

Unwanted timestamp when using print in python

I am using autobahn[twisted] to achieve some WAMP communication. While subscribed to a topic and receiving its feed, I print it. When I do, I get something like this:
2016-09-25T21:13:29+0200 (u'USDT_ETH', u'12.94669009', u'12.99998074', u'12.90000334', u'0.00035594', u'18396.86929477', u'1422.19525455', 0, u'13.14200000', u'12.80000000')
I have sacrificed too many hours trying to take it out. And yes, I tested printing other things; they print without this timestamp. This is my code:
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner

class PushReactor(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        print "subscribed"
        yield self.subscribe(self.onTick, u'ticker')

    def onTick(self, *args):
        print args

if __name__ == '__main__':
    runner = ApplicationRunner(u'wss://api.poloniex.com', u'realm1')
    runner.run(PushReactor)
How can I remove this timestamp?
Well, sys.stderr and sys.stdout are redirected to a twisted logger.
You need to change the logging format before running your app.
See: https://twistedmatrix.com/documents/15.2.1/core/howto/logger.html
How to reproduce
You can reproduce your problem with this simple application:
from autobahn.twisted.wamp import ApplicationRunner

if __name__ == '__main__':
    print("hello1")
    runner = ApplicationRunner(u'wss://api.poloniex.com', u'realm1')
    print("hello2")
    runner.run(None)
    print("hello3")
When the process is killed, you'll see:
hello1
hello2
2016-09-26T14:08:13+0200 Received SIGINT, shutting down.
2016-09-26T14:08:13+0200 Main loop terminated.
2016-09-26T14:08:13+0200 hello3
During application launch, stdout (and stderr) are redirected to a file-like object (of class twisted.logger._io.LoggingFile).
Every call to print or write is turned into a Twisted log message (one for each line).
The redirection is done in the class twisted.logger._global.LogBeginner; look at the beginLoggingTo method.
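One possible way out, sketched under the assumption that starting the global Twisted logger yourself before runner.run() is honoured by your autobahn/txaio version (treat it as a starting point, not a guaranteed fix): begin logging to an observer that writes only the message text, with no timestamp.

import sys
from twisted.logger import globalLogBeginner, formatEvent

def plain_observer(event):
    # Write just the formatted message, without the timestamp prefix.
    sys.__stdout__.write(formatEvent(event) + "\n")

# Must run before ApplicationRunner.run(); redirectStandardIO=False keeps print going to the real stdout.
globalLogBeginner.beginLoggingTo([plain_observer], redirectStandardIO=False)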

Python multiprocessing module does not work

I am trying to write a spider with the multiprocessing module.
Here is my Python code:
# -*- coding:utf-8 -*-
import multiprocessing
import requests

class SpiderWorker(object):
    def __init__(self, q):
        self._q = q

    def run(self):
        def _crawl_item(url):
            respon = requests.get("http://www.baidu.com")
            if respon.ok:
                print respon.url
        while True:
            rst = self._q.get()
            _crawl_item(rst)

def general_worker():
    q = multiprocessing.Queue()
    CPU_COUNT = multiprocessing.cpu_count()
    worker_processes = [
        multiprocessing.Process(target=SpiderWorker(q).run)
        for i in range(CPU_COUNT)
    ]
    map(lambda process: process.start(), worker_processes)
    return q, worker_processes
Maybe my approach to processes is wrong. Every time I run this code, my processes tell me:
<Process(Process-1, stopped[SIGSEGV])>
I hope someone can help.
The major problem here is that you don't have any information on why your processes fail. It could be gevent, but it could just as easily be something else. So learning the actual reason why your processes get terminated is the first step before doing anything else.
What you need is multiprocessing.log_to_stderr():
class SpiderWorker(object):
    # ...
    def run(self):
        logger = multiprocessing.log_to_stderr()
        logger.setLevel(multiprocessing.SUBDEBUG)
        try:
            pass  # Here goes your original run() code
        except Exception:
            logger.exception('whoopsie')
What this code does:
Creates a special logger which will transmit its information to the main process and dump it to stderr (the console, by default).
Configures this logger to report everything, including some internal multiprocessing module events (just in case; you probably don't need them).
Wraps your entire code in a catch-all statement so that whatever happens there cannot escape your notice.
Runs the .exception() method on the logger, which not only logs the message (it's meaningless anyway, as we don't know what actually happens) but, most importantly, logs the entire error traceback - which we actually need.
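For illustration, a sketch of the question's worker with this logging pattern folded into run(); the requests call and the queue usage are the ones from the question, the rest is only an example (and note that a real SIGSEGV kills the interpreter before any except clause runs, so the traceback helps mainly with Python-level failures):

import multiprocessing
import requests

class SpiderWorker(object):
    def __init__(self, q):
        self._q = q

    def run(self):
        # Send this worker's log records to the parent's stderr.
        logger = multiprocessing.log_to_stderr()
        logger.setLevel(multiprocessing.SUBDEBUG)
        try:
            while True:
                url = self._q.get()
                respon = requests.get(url)
                if respon.ok:
                    logger.info(respon.url)
        except Exception:
            # Any Python-level error in the worker now shows up with a full traceback.
            logger.exception('worker died')
            raise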

How can I save my log file in Python when the process is killed

I am learning the logging module in Python.
However, if I log like this
logging.basicConfig(filename='mylog.log', format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG)
while 1:
    logging.debug("something")
    time.sleep(1)
and interrupt the process with a Ctrl-C event (or the process is killed), I get nothing from the log file.
Can I save as many of the logs as possible, whatever happens?
————
EDIT
The question seems to have become more complex:
I have imported scipy, numpy, and pyaudio in my script, and I get:
forrtl: error (200): program aborting due to control-C event
instead of KeyboardInterrupt
I have read this question: Ctrl-C crashes Python after importing scipy.stats
and added these lines to my script:
import _thread
import win32api

def handler(dwCtrlType, hook_sigint=_thread.interrupt_main):
    if dwCtrlType == 0:  # CTRL_C_EVENT
        hook_sigint()
        return 1  # don't chain to the next handler
    return 0  # chain to the next handler

win32api.SetConsoleCtrlHandler(handler, 1)  # register the handler
then:
try:
    main()
except KeyboardInterrupt:
    print("exit manually")
    exit()
Now, the script stops without any info if I use Ctrl+C; print("exit manually") does not appear. Of course, there are no logs.
Solved
A stupid mistake!
I ran the script while the working directory was System32, but looked for the log in the script's path.
After I changed the path like this, all is well:
logging.basicConfig(filename=os.path.dirname(sys.argv[0])+os.sep+'mylog.log',format='%(asctime)s - %(levelname)s - %(message)s', level=logging.DEBUG)
When you log using logging.debug, logging.info, ..., logging.critical, you're using the root logger. I assume you're not doing anything to configure logging that you haven't shown, so you're running with the out-of-the-box, default configuration. (This is set up for you by the first call to logging.debug, which calls logging.basicConfig()).
The default logging level of the root logger is logging.WARNING (as mentioned in e.g. https://docs.python.org/3/howto/logging.html#logging-basic-tutorial). Thus, nothing you log with logging.debug or logging.info will appear :) If you change logging.debug to logging.warning (or .error or .critical), you will see logging output.
For your code to work as is, set the logging level of the root logger to logging.DEBUG before the loop:
import logging
import time

# logging.getLogger() returns the root logger
logging.getLogger().setLevel(logging.DEBUG)

while 1:
    logging.debug("something")
    time.sleep(1)
For the CTRL + C event use a try-except to catch the KeyboardInterrupt exception.
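A minimal sketch of that last point, reusing the basicConfig call from the question; logging.shutdown() flushes and closes the handlers, so buffered records still reach the file when the loop is interrupted:

import logging
import time

logging.basicConfig(filename='mylog.log',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    level=logging.DEBUG)
try:
    while 1:
        logging.debug("something")
        time.sleep(1)
except KeyboardInterrupt:
    logging.warning("interrupted by Ctrl-C, shutting down")
finally:
    logging.shutdown()  # flush and close all handlers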

redirect subprocess log to wxpython txt ctrl

I would like to capture the log output from a Python-based subprocess. Here is part of my code. How do I redirect my log to this text ctrl as well?
Here is mytest.py:
import logging
log = logging.getLogger('test')

class MyTestClass():
    def TestFunction(self):
        log.info("start function")
        # runs for 5 - 10 mins and has lots of log statements
        print "some stuff"
        log.info("after Test Function")
        # for now
        return a, b
        #sys.exit(2)

if __name__ == "__main__":
    myApp = MyTestClass()
    myApp.TestFunction()
I am doing something of this sort in my main GUI:
class WxLog(logging.Handler):
    def __init__(self, ctrl):
        logging.Handler.__init__(self)
        self.ctrl = ctrl

    def emit(self, record):
        if self.ctrl:
            self.ctrl.AppendText(self.format(record) + "\n")
and in my GUI:
self.log = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE | wx.TE_RICH2)
#logging.basicConfig(level=logging.INFO)
self.logr = logging.getLogger('')
self.logr.setLevel(logging.INFO)
hdlr = WxLog(self.log)
hdlr.setFormatter(logging.Formatter('%(message)s '))
self.logr.addHandler(hdlr)
#snip
prog = os.path.join(mydir, "mytest.py")
params = [sys.executable, prog]
# Start the subprocess
outmode = subprocess.PIPE
errmode = subprocess.STDOUT
self._proc = subprocess.Popen(params,
                              stdout=outmode,
                              stderr=errmode,
                              shell=True)
# Read from stdout while there is output from the process
while self._proc.poll() == None:
    txt = self._proc.stdout.readline()
    print txt
    # also direct log to txt ctrl
txt = 'Return code was ' + str(self._proc.returncode) + '\n'
# direct
self.logr.info("On end ")
You can try following the suggestion in this post.
Update: You can set the logger in the subprocess to use a SocketHandler and set up a socket server in the GUI to listen for messages from the subprocess, using the technique in the linked-to post to actually make things appear in the GUI. A working socket server is included in the logging documentation.
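As a sketch of the subprocess side of that suggestion, mytest.py could attach a SocketHandler so its records are sent to a socket server run by the GUI process; the host and port below are just the logging module's defaults, and the receiving end still has to be implemented (e.g. as in the cookbook's socket server example):

import logging
import logging.handlers

log = logging.getLogger('test')
log.setLevel(logging.INFO)
# Pickled LogRecords are sent over TCP to whoever is listening on this port.
log.addHandler(logging.handlers.SocketHandler('localhost',
                                              logging.handlers.DEFAULT_TCP_LOGGING_PORT))

log.info("start function")  # shows up in the GUI once the server feeds it to WxLog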
I wrote an article about how I redirect a few things like ping and traceroute using subprocess to my TextCtrl widget here: http://www.blog.pythonlibrary.org/2010/06/05/python-running-ping-traceroute-and-more/
That might help you figure it out. Here's a more generic article that doesn't use subprocess: http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/
I haven't tried redirecting with the logging module yet, but that may be something I'll do in the future.

Log output of multiprocessing.Process

Is there a way to log the stdout output from a given Process when using the multiprocessing.Process class in python?
The easiest way might be to just override sys.stdout. Slightly modifying an example from the multiprocessing manual:
from multiprocessing import Process
import os
import sys

def info(title):
    print title
    print 'module name:', __name__
    print 'parent process:', os.getppid()
    print 'process id:', os.getpid()

def f(name):
    sys.stdout = open(str(os.getpid()) + ".out", "w")
    info('function f')
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    q = Process(target=f, args=('fred',))
    q.start()
    p.join()
    q.join()
And running it:
$ ls
m.py
$ python m.py
$ ls
27493.out 27494.out m.py
$ cat 27493.out
function f
module name: __main__
parent process: 27492
process id: 27493
hello bob
$ cat 27494.out
function f
module name: __main__
parent process: 27492
process id: 27494
hello fred
There are only two things I would add to Mark Rushakoff's answer. When debugging, I found it really useful to change the buffering parameter of my open() calls to 0.
sys.stdout = open(str(os.getpid()) + ".out", "a", buffering=0)
Otherwise it's madness, because when tail -fing the output file the results can be very intermittent. buffering=0 makes tail -f work great.
And for completeness, do yourself a favor and redirect sys.stderr as well.
sys.stderr = open(str(os.getpid()) + "_error.out", "a", buffering=0)
Also, for convenience you might dump that into a separate process class if you wish:
class MyProc(Process):
    def run(self):
        # Define the logging in run(), MyProc's entry function when it is .start()-ed
        #     p = MyProc()
        #     p.start()
        self.initialize_logging()
        print 'Now output is captured.'
        # Now do stuff...

    def initialize_logging(self):
        sys.stdout = open(str(os.getpid()) + ".out", "a", buffering=0)
        sys.stderr = open(str(os.getpid()) + "_error.out", "a", buffering=0)
        print 'stdout initialized'
Here's a corresponding gist.
You can set sys.stdout = Logger() where Logger is a class whose write method (immediately, or accumulating until a \n is detected) calls logging.info (or any other way you want to log); a minimal sketch is included after this answer. An example of this in action.
I'm not sure what you mean by "a given" process (who's given it, what distinguishes it from all others...?), but if you mean you know what process you want to single out that way at the time you instantiate it, then you could wrap its target function (and that only) -- or the run method you're overriding in a Process subclass -- into a wrapper that performs this sys.stdout "redirection" -- and leave other processes alone.
Maybe if you nail down the specs a bit I can help in more detail...?
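Here is that minimal sketch; the class name, the line-buffering behaviour, and the choice of logging.info are illustrative assumptions rather than a fixed recipe:

import logging
import sys

class Logger(object):
    """File-like object that forwards complete lines to logging.info."""
    def __init__(self):
        self._buffer = ""

    def write(self, text):
        # Accumulate until a newline is seen, then emit one log record per line.
        self._buffer += text
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            if line:
                logging.info(line)

    def flush(self):
        # Nothing is buffered here beyond partial lines; kept for file-object compatibility.
        pass

sys.stdout = Logger()  # from here on, anything printed goes through logging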
Here is a simple and straightforward way of capturing stdout for multiprocessing.Process using io.TextIOWrapper:
import app
import io
import sys
from multiprocessing import Process

def run_app(some_param):
    out_file = open(sys.stdout.fileno(), 'wb', 0)
    sys.stdout = io.TextIOWrapper(out_file, write_through=True)
    app.run()

app_process = Process(target=run_app, args=('some_param',))
app_process.start()
# Use app_process.terminate() for python <= 3.7.
app_process.kill()
The log_to_stderr() function is the simplest solution.
From PYMOTW:
multiprocessing has a convenient module-level function to enable logging called log_to_stderr(). It sets up a logger object using logging and adds a handler so that log messages are sent to the standard error channel. By default, the logging level is set to NOTSET so no messages are produced. Pass a different level to initialize the logger to the level of detail desired.
import logging
from multiprocessing import Process, log_to_stderr

print("Running main script...")

def my_process(my_var):
    print(f"Running my_process with {my_var}...")

# Initialize logging for multiprocessing.
log_to_stderr(logging.DEBUG)

# Start the process.
my_var = 100
process = Process(target=my_process, args=(my_var,))
process.start()
process.kill()
This code will output both print() statements to stderr.
