Receiving the logging messages of a python process opened via subprocess - python

I have a python program that uses pygame. I want to create another pygame window for some additional content and have a separate script for that. I use socket and localhost for communication.
I am using subprocess to run the script that displays the second pygame window. This script has a number of logging messages that are not displayed on the stdout of the terminal I am using. Is there a way to redirect the logging messages so that they are printed to the console alongside the logging messages of the main program?
So far I have set up a logwrapper that captures the logged output:
import collections
import io
import logging
import logging.config

level = logging.DEBUG

def getLogger(module_name):
    wrapper = LogWrapper(module_name)
    return wrapper.getLogger(), wrapper

class LogWrapper():
    def __init__(self, module_name):
        self.module_name = module_name
        self.log_capture_string = None
        self.log_trace = []

    @property
    def trace(self):
        values = self.log_capture_string.getvalue()
        self.log_trace = self.log_trace + values.split("\n")
        return self.log_trace

    def getLogger(self, **kwargs):
        ### Create the logger ('configuration' is a dictConfig dict defined elsewhere)
        logging.config.dictConfigClass(configuration).configure()
        logger = logging.getLogger(self.module_name)
        logger.setLevel(level)
        ### Setup the console handler with a FIFOIO object
        self.log_capture_string = FIFOIO(32768)
        ch = logging.StreamHandler(self.log_capture_string)
        ch.setLevel(logging.DEBUG)
        ### Optionally add a formatter
        ### Add the console handler to the logger
        logger.addHandler(ch)
        logger.info("set up logwrap for {}".format(self.module_name))
        return logger
class FIFOIO(io.TextIOBase):
    def __init__(self, size, *args):
        self.maxsize = size
        io.TextIOBase.__init__(self, *args)
        self.deque = collections.deque()

    def getvalue(self):
        return ''.join(self.deque)

    def write(self, x):
        self.deque.append(x)
        self.shrink()

    def shrink(self):
        if self.maxsize is None:
            return
        size = sum(len(x) for x in self.deque)
        while size > self.maxsize:
            x = self.deque.popleft()
            size -= len(x)
But after running the main program, calling subprogram.logwrapper.trace upon termination doesn't capture any of the error messages from the runtime, only the initialization message, so I am looking for a better way to access this information.
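For reference, a minimal sketch of the kind of redirection I have in mind (child.py and the "[child] " prefix are placeholders; the child script would just call logging.basicConfig(level=logging.DEBUG) so its records go to its stderr, which the parent relays):

import subprocess
import sys
import threading

def relay(pipe, prefix="[child] "):
    # Echo every line the child writes to its stderr onto ours.
    for line in iter(pipe.readline, ''):
        sys.stderr.write(prefix + line)
    pipe.close()

proc = subprocess.Popen([sys.executable, "child.py"],
                        stderr=subprocess.PIPE, universal_newlines=True)
threading.Thread(target=relay, args=(proc.stderr,), daemon=True).start()
proc.wait()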

Related

What's the worst level log that I just logged?

I've added logs to a Python 2 application using the logging module.
Now I want to add a closing statement at the end, dependent on the worst thing logged.
If the worst thing logged had the INFO level or lower, print "SUCCESS!"
If the worst thing logged had the WARNING level, write "SUCCESS!, with warnings. Please check the logs"
If the worst thing logged had the ERROR level, write "FAILURE".
Is there a way to get this information from the logger? Some built in method I'm missing, like logging.getWorseLevelLogSoFar?
My current plan is to replace all log calls (logging.info et al) with calls to wrapper functions in a class that also keeps track of that information.
I also considered somehow releasing the log file, reading and parsing it, then appending to it. This seems worse than my current plan.
Are there other options? This doesn't seem like a unique problem.
I'm using the root logger and would prefer to continue using it, but can change to a named logger if that's necessary for the solution.
As you said yourself, I think writing a wrapper function would be the neatest and fastest approach. The problem is that you need a global variable if you're not working within a class:
worst_log_lvl = logging.NOTSET

def write_log(logger, lvl, msg):
    global worst_log_lvl
    logger.log(lvl, msg)
    if lvl > worst_log_lvl:
        worst_log_lvl = lvl
or make worst_log_lvl a member of a custom class, where you emulate the signature of logging.Logger, which you use instead of the actual logger:
class CustomLoggerWrapper(object):
    def __init__(self):
        # setup of your custom logger
        self.logger = logging.getLogger(__name__)
        self.worst_log_lvl = logging.NOTSET
    def _log(self, lvl, msg):
        self.worst_log_lvl = max(self.worst_log_lvl, lvl)
        self.logger.log(lvl, msg)
    def debug(self, msg):
        self._log(logging.DEBUG, msg)
    # repeat for other functions like info() etc.
As you're only using the root logger, you could attach a filter to it which keeps track of the level:
import argparse
import logging
import random

LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']

class LevelTrackingFilter(logging.Filter):
    def __init__(self):
        self.level = logging.NOTSET

    def filter(self, record):
        self.level = max(self.level, record.levelno)
        return True

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('maxlevel', metavar='MAXLEVEL', default='WARNING',
                        choices=LEVELS,
                        nargs='?', help='Set maximum level to log')
    options = parser.parse_args()
    maxlevel = getattr(logging, options.maxlevel)
    logger = logging.getLogger()
    logger.addHandler(logging.NullHandler())  # needs Python 2.7
    filt = LevelTrackingFilter()
    logger.addFilter(filt)
    for i in range(100):
        level = getattr(logging, random.choice(LEVELS))
        if level > maxlevel:
            continue
        logger.log(level, 'message')
    if filt.level <= logging.INFO:
        print('SUCCESS!')
    elif filt.level == logging.WARNING:
        print('SUCCESS, with warnings. Please check the logs.')
    else:
        print('FAILURE')

if __name__ == '__main__':
    main()
There's a "good" way to get this done automatically by using context filters.
TL;DR I've built a package that has the following contextfilter baked in. You can install it with pip install ofunctions.logger_utils then use it with:
from ofunctions import logger_utils
logger = logger_utils.logger_get_logger(log_file='somepath', console=True)
logger.error("Oh no!")
logger.info("Anyway...")
# Now get the worst called loglevel (result is equivalent to logging.ERROR level in this case)
worst_level = logger_utils.get_worst_logger_level(logger)
Here's the long solution which explains what happens under the hood:
Let's build a contextfilter class that can be injected into logging:
import logging
import sys

class ContextFilterWorstLevel(logging.Filter):
    """
    This class records the worst loglevel that was called by the logger.
    Allows to change default logging output or record events.
    """

    def __init__(self):
        self._worst_level = logging.INFO
        if sys.version_info[0] < 3:
            super(ContextFilterWorstLevel, self).__init__()
        else:
            super().__init__()

    @property
    def worst_level(self):
        """
        Returns worst log level called
        """
        return self._worst_level

    @worst_level.setter
    def worst_level(self, value):
        # type: (int) -> None
        if isinstance(value, int):
            self._worst_level = value

    def filter(self, record):
        # type: (logging.LogRecord) -> bool
        """
        A filter can change the default log output
        This one simply records the worst log level called
        """
        # Examples
        # record.msg = f'{record.msg}'.encode('ascii', errors='backslashreplace')
        # When using this filter, something can be added to logging.Formatter like '%(something)s'
        # record.something = 'value'
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True
Now inject this filter into your logger instance:
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())
logger.warning("One does not simply inject a filter into logging")
Now we can iterate over the attached filters and extract the worst recorded loglevel like this:
for flt in logger.filters:
    if isinstance(flt, ContextFilterWorstLevel):
        print(flt.worst_level)
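A small convenience wrapper around that loop may be handy; the helper name get_worst_level is mine, not part of the package:

def get_worst_level(logger, reset=False):
    """Return the worst level seen by a ContextFilterWorstLevel on `logger`,
    optionally resetting it for the next phase of the program."""
    for flt in logger.filters:
        if isinstance(flt, ContextFilterWorstLevel):
            level = flt.worst_level
            if reset:
                flt.worst_level = logging.INFO  # the filter's starting value
            return level
    return logging.NOTSET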

How-To: Python TimedRotatingFileHandler for multiple process-instances and files?

I just got thrown into the deep end with my new contract. The current system uses the python logging module to do timed log-file rotation. The problem is that the log-file of the process running as a daemon gets rotated correctly, while the log-file of the process instances that get created and destroyed when done never rotates. I now have to find a solution to this problem. After 2 days of research on the internet and in the python documentation I'm only halfway out of the dark. Since I'm new to the logging module I can't see the answer to the problem; I'm probably looking with my eyes closed!
The process is started with:
python /admin/bin/fmlog.py -l 10 -f /tmp/fmlog/fmapp_log.log -d
where:
-l 10 => DEBUG logging-level
-f ... => Filename to log to for app-instance
-d => run as daemon
The following shows a heavily edited version of my code:
#!/usr/bin/env python
from comp.app import app, yamlapp
...
from comp.utils.log4_new import *

# Exceptions handling class
class fmlogException(compException): pass

class fmlog(app):
    # Fmlog application class
    def __init__(self, key, config, **kwargs):
        # Initialise the required variables
        app.__init__(self, key, config, **kwargs)
        self._data = {'sid': self._id}
        ...

    def process(self, tid=None):
        if tid is not None:
            self.logd("Using thread '%d'." % (tid), data=self._data)
        # Run the fmlog process
        self.logi("Processing this '%s'" % (filename), data=self._data)
        ...

    def __doDone__(self, success='Failure', msg='', exception=None):
        ...
        self.logd("Process done!")

if __name__ == '__main__':
    def main():
        with yamlapp(filename=config, cls=fmlog, configcls=fmlogcfg, sections=sections, loglevel=loglevel, \
                     logfile=logfile, excludekey='_dontrun', sortkey='_priority', usethreads=threads, maxthreads=max, \
                     daemon=daemon, sleep=sleep) as a:
            a.run()
    main()
The yamlapp process (a sub-class of app) is instantiated and runs as a daemon until manually stopped. This process creates one or more instance(s) of the fmlog class and calls the process() function when needed (certain conditions are met). Up to x instances can be created per thread if the yamlapp process is run in thread-mode.
The app process code:
#!/usr/bin/env python
...
from comp.utils.log4_new import *

class app(comp.base.comp, logconfig, log):
    def __init__(self, cls, **kwargs):
        self.__setdefault__('_configcls', configitem)
        self.__setdefault__('_daemon', True)
        self.__setdefault__('_maxthreads', 5)
        self.__setdefault__('_usethreads', False)
        ...
        comp.base.comp.__init__(self, **kwargs)
        logconfig.__init__(self, prog(), **getlogkwargs(**kwargs))
        log.__init__(self, logid=prog())

    def __enter__(self):
        self.logi(msg="Starting application '%s:%s' '%d'..." % (self._cls.__name__, \
                  self.__class__.__name__, os.getpid()))
        return self

    def ...

    def run(self):
        ...
        if self._usethreads:
            ...
        while True:
            self.logd(msg="Start of run iteration...")
            if not self._usethreads:
                while not self._q.empty():
                    item = self._q.get()
                    try:
                        item.process()
            self.logd(msg="End of run iteration...")
            time.sleep(self._sleep)
The logging config and setup is done via the log4_new.py classes:
#!/usr/bin/env python
import logging
import logging.handlers
import re

class logconfig(comp):
    def __init__(self, logid, **kwargs):
        comp.__init__(self, **kwargs)
        self.__setdefault__('_logcount', 20)
        self.__setdefault__('_logdtformat', None)
        self.__setdefault__('_loglevel', DEBUG)
        self.__setdefault__('_logfile', None)
        self.__setdefault__('_logformat', '[%(asctime)-15s][%(levelname)5s] %(message)s')
        self.__setdefault__('_loginterval', 'S')
        self.__setdefault__('_logintervalnum', 30)
        self.__setdefault__('_logsuffix', '%Y%m%d%H%M%S')
        self._logid = logid
        self.__loginit__()

    def __loginit__(self):
        format = logging.Formatter(self._logformat, self._logdtformat)
        if self._logfile:
            hnd = logging.handlers.TimedRotatingFileHandler(self._logfile, when=self._loginterval, interval=self._logintervalnum, backupCount=self._logcount)
            hnd.suffix = self._logsuffix
            hnd.extMatch = re.compile(strftoregex(self._logsuffix))
        else:
            hnd = logging.StreamHandler()
        hnd.setFormatter(format)
        l = logging.getLogger(self._logid)
        for h in l.handlers:
            l.removeHandler(h)
        l.setLevel(self._loglevel)
        l.addHandler(hnd)

class log():
    def __init__(self, logid):
        self._logid = logid

    def __log__(self, msg, level=DEBUG, data=None):
        l = logging.getLogger(self._logid)
        l.log(level, msg, extra=data)

    def logd(self, msg, **kwargs):
        self.__log__(level=DEBUG, msg=msg, **kwargs)

    def ...

    def logf(self, msg, **kwargs):
        self.__log__(level=FATAL, msg=msg, **kwargs)

def getlogkwargs(**kwargs):
    logdict = {}
    for key, value in kwargs.iteritems():
        if key.startswith('log'): logdict[key] = value
    return logdict
Logging is done as expected: logs from yamlapp (a sub-class of app) are written to fmapp_log.log, and logs from fmlog are written to fmlog.log.
The problem is that fmapp_log.log is rotated as expected, but fmlog.log is never rotated. How do I solve this? I know the process must run continuously for the rotation to happen; that is why only one logger is used. I suspect another handler must be created for the fmlog process which must never be destroyed when the process exits.
Requirements:
The app (framework or main) log and the fmlog (process) log must be to different files.
Both log-files must be time-rotated.
Hopefully someone will understand the above and be able to give me a couple of pointers.
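Not having the full framework in front of me, one hedged pointer: TimedRotatingFileHandler only rolls over inside the process that owns it, at emit time, so handlers living in short-lived fmlog instances rarely survive long enough to rotate, and several processes sharing one file will race on the rename. A common remedy is to keep the single TimedRotatingFileHandler in the long-running daemon and have the short-lived instances ship their records to it over a socket, e.g.:

import logging
import logging.handlers

# In each short-lived fmlog instance: no file handler at all, just a
# SocketHandler pointing at the long-running daemon.
logger = logging.getLogger('fmlog')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT))

# In the daemon: a TCP listener (the logging cookbook ships a full
# LogRecordStreamHandler recipe) unpickles each record and hands it to a
# logger whose only handler is the TimedRotatingFileHandler for fmlog.log,
# so rotation happens in one continuously-running process.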

Writing Python output to either screen or filename

I am writing some code that will output a log to either the screen, or a file, but not both.
I thought the easiest way to do this would be to write a class:
class WriteLog:
    "write to screen or to file"

    def __init__(self, stdout, filename):
        self.stdout = stdout
        self.logfile = open(filename, 'a')

    def write(self, text):
        self.stdout.write(text)
        self.logfile.write(text)

    def close(self):
        self.stdout.close()
        self.logfile.close()
And then call it something like this:
output = WriteLog(sys.stdout, 'log.txt')
However, I'm not sure how to allow for switching between the two, i.e. there should be an option within the class that will set WriteLog to either use stdout, or filename. Once that option has been set I just use WriteLog without any need for if statements etc.
Any ideas? Most of the solutions I see online are trying to output to both simultaneously.
Thanks.
Maybe something like this? It uses the symbolic name 'stdout' or 'stderr' in the constructor, or a real filename. The use of if is limited to the constructor. By the way, I think you're trying to prematurely optimize (which is the root of all evil): you're trying to save time on ifs while in real life the program will spend much more time in file I/O, making the potential waste on your ifs negligible.
import sys

class WriteLog:
    def __init__(self, output):
        self.output = output
        if output == 'stdout':
            self.logfile = sys.stdout
        elif output == 'stderr':
            self.logfile = sys.stderr
        else:
            self.logfile = open(output, 'a')

    def write(self, text):
        self.logfile.write(text)

    def close(self):
        if self.output != 'stdout' and self.output != 'stderr':
            self.logfile.close()

    def __del__(self):
        self.close()

if __name__ == '__main__':
    a = WriteLog('stdout')
    a.write('This goes to stdout\n')
    b = WriteLog('stderr')
    b.write('This goes to stderr\n')
    c = WriteLog('/tmp/logfile')
    c.write('This goes to /tmp/logfile\n')
I'm not an expert in it, but try the logging library: you can have a logger with two handlers, one for a file and one for a stream, and then add/remove handlers dynamically.
I like the suggestion about using the logging library. But if you want to hack out something yourself, maybe passing in the file handle is worth considering.
import sys

class WriteLog:
    "write to screen or to file"

    def __init__(self, output):
        self.output = output

    def write(self, text):
        self.output.write(text)

    def close(self):
        self.output.close()

logger = WriteLog(open('c:/temp/log.txt', 'a'))
logger.write("I write to the log file.\n")
logger.close()

sysout = WriteLog(sys.stdout)
sysout.write("I write to the screen.\n")
You can utilize the logging library to do something similar to this. The following function will set up a logging object at the INFO level.
import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    # Create console handler
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        console_log.setFormatter(formatter)
        logger.addHandler(console_log)
    # Log file
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % (file_name), 'a', encoding='UTF-8')
        file_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        file_log.setFormatter(formatter)
        logger.addHandler(file_log)
    return logger
You pass it the name of your log, and where you wish to log (either to the file, the console, or both). Then you can use this function in your code like this:
logger = setup_logging("mylog", log_to_file=True, log_to_console=False)
logger.info('Message')
This example will log to a file named mylog.log (in the current directory) and have output like this:
2014-11-05 17:20:29,933 - INFO - root - Message
This function has areas for improvement (if you wish to add more functionality). Right now it logs to both the console and the file at log level INFO, on the .setLevel(logging.INFO) lines. This could be set dynamically if you wish.
Additionally, as it is now, you can easily add standard logging lines (logger.debug('Message'), logger.critical('DANGER!')) without modifying a class. In these examples, the debug messages won't print (because the level is set to INFO) and the critical ones will.
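For instance, a possible variant (an untested sketch; the level parameter is my addition, not part of the original function) that lets the caller pick the handler level:

import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False,
                  level=logging.INFO):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter(
        '%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(level)   # was hard-coded to INFO
        console_log.setFormatter(formatter)
        logger.addHandler(console_log)
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % file_name, 'a',
                                       encoding='UTF-8')
        file_log.setLevel(level)      # was hard-coded to INFO
        file_log.setFormatter(formatter)
        logger.addHandler(file_log)
    return logger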

Python logging: log only to handlers not to root

I have the following class:
class Log(object):
    # class new
    # new is used instead of init because __new__ is able to return (where __init__ can't)
    def __new__(self, name, consolelevel, filelevel):
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')
        # Create consolehandler and set formatting (level is set in the ROOT)
        consolehandler = StreamHandler()
        consolehandler.setFormatter(formatter)
        # Create filehandler, set level and set formatting
        filehandler = FileHandler(name + '.log')
        filehandler.setLevel(filelevel)
        filehandler.setFormatter(formatter)
        # Create the root logger, add console and file logger. Set the rootlevel == consolelevel.
        self.root = logging.getLogger(name)
        # causing me problems....
        self.root.setLevel(consolelevel)
        self.root.addHandler(consolehandler)
        self.root.addHandler(filehandler)
        self.root.propagate = True
        return self.root

    # Close the logger object
    def close():
        # to be implemented
        pass
I use this class to log to the console and to a file (depending on the set level). The problem is that the logger's level seems to take precedence over the levels of the added handlers. Is there a way to disable this? Now I set the root level to the same level as the console level, but this does not work...
Any advice?
Thanks in advance and with best regards,
JR
A problem that I can see in your code is that it will add more handlers whenever you instantiate the Log class. You probably do not want this.
Keep in mind that getLogger always returns the same instance when called with the same argument; it basically implements the singleton pattern.
Hence when you later call addHandler it will add a new handler every time.
The way to deal with logging is to create a logger at the module level and use it.
Also I'd avoid using __new__. In your case you can use a simple function. And note that your Log.close method won't work, because your __new__ method does not return a Log instance, and thus the returned logger doesn't have that method.
Regarding the level of the logger, I don't understand why you do not set the level on the consolehandler but on the whole logger.
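To make both points concrete, here is a minimal sketch (the names are mine): the logger itself stays at DEBUG so records reach the handlers, each handler filters at its own level, and the handlers guard prevents duplicates when the function is called twice for the same name:

import logging

def get_logger(name, consolelevel=logging.INFO, filelevel=logging.DEBUG):
    logger = logging.getLogger(name)
    if not logger.handlers:  # getLogger is a singleton per name; add handlers once
        logger.setLevel(logging.DEBUG)  # let the handlers do the level filtering
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')
        console = logging.StreamHandler()
        console.setLevel(consolelevel)
        console.setFormatter(formatter)
        logger.addHandler(console)
        filehandler = logging.FileHandler(name + '.log')
        filehandler.setLevel(filelevel)
        filehandler.setFormatter(formatter)
        logger.addHandler(filehandler)
    return logger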
This is a simplified version of the module I am making. The module contains a few classes that all need logging functionality. Each class logs to a different file, and it should also be possible to change the file handler levels between classes (e.g. gamepad class: console.debug and filehandler.info; MQTT class: console.info and filehandler.debug).
Therefore I thought that setting up a log class would be the easiest way. Please bear in mind that I usually do electronics but am now combining it with python. So my skills are pretty basic....
#!/bin/env python2.7
from __future__ import division
from operator import *
import logging
from logging import FileHandler
from logging import StreamHandler
import pygame
import threading
from pygame.locals import *
import mosquitto
import time
from time import sleep
import sys

class ConsoleFileLogger(object):
    # class constructor
    def __init__(self, filename, loggername, rootlevel, consolelevel, filelevel):
        # logger levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
        # define a root logger and set its ROOT logging level
        logger = logging.getLogger(loggername)
        logger.setLevel(rootlevel)
        # define a Handler which writes messages or higher to the sys.stderr (Console)
        self.console = logging.StreamHandler()
        # set the logging level
        self.console.setLevel(consolelevel)
        # define a Handler which writes messages to a logfile
        self.logfile = logging.FileHandler(filename + '.log')
        # set the logging level
        self.logfile.setLevel(filelevel)
        # create formatter and add it to the handlers
        formatter = logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s')
        self.console.setFormatter(formatter)
        self.logfile.setFormatter(formatter)
        # add the handlers to the root logger
        logger.addHandler(self.console)
        logger.addHandler(self.logfile)
        self._logger = logger

    # return a new instance of the logger
    def set(self):
        return self._logger

    # Stop and remove the ConsoleFileLogger object
    def remove(self):
        self._logger.removeHandler(self.console)
        self._logger.removeHandler(self.logfile)
        self.logfile.close()

class Gamepad():
    # class constructor
    def __init__(self, mqttgamepad):
        self.logger = ConsoleFileLogger('BaseLogFiles/Gamepad', 'Gamepad', logging.INFO, logging.INFO, logging.INFO).set()
        if joystickcount == 0:
            self.logger.error('No gamepad connected')
        elif joystickcount == 1:
            self.gamepad = pygame.joystick.Joystick(0)
            self.gamepad.init()
            self.logger.debug('Joystick name %s', self.gamepad.get_name())
            self.logger.debug('nb of axes = %s', self.gamepad.get_numaxes())
            self.logger.debug('nb of balls = %s', self.gamepad.get_numballs())
            self.logger.debug('nb of buttons = %s', self.gamepad.get_numbuttons())
            self.logger.debug('nb of mini joysticks = %s', self.gamepad.get_numhats())
        elif joystickcount > 1:
            self.logger.error('only one gamepad is allowed')

    def run(self):
        self.logger.debug('gamepad running')

class MQTTClient():
    def __init__(self, clientname):
        self.logger = ConsoleFileLogger('BaseLogFiles/MQTT/Pub', clientname, logging.DEBUG, logging.DEBUG, logging.DEBUG).set()
        self.logger.debug('test')

    def run(self):
        self.logger.info('Connection MQTT Sub OK')

def main():
    logger = ConsoleFileLogger('BaseLogFiles/logControlCenterMain', 'ControlCenterMain', logging.DEBUG, logging.DEBUG, logging.DEBUG).set()
    mqttclient = MQTTClient("MQTTClient")
    mqttclient.connect()
    gamepad = Gamepad(mqttclient)
    if gamepad.initialized():
        gamepadthread = threading.Thread(target=gamepad.run)
        gamepadthread.start()
    mqtttpubhread = threading.Thread(target=mqttclient.run)
    mqtttpubhread.start()
    logger.info('BaseMain started')
    # Monitor the running program for a KeyboardInterrupt exception
    # If this is the case all threads and other methods can be closed the correct way :)
    while 1:
        try:
            sleep(1)
        except KeyboardInterrupt:
            logger.info('Ctrl-C pressed')
            gamepad.stop()
            mqttclient.stop()
            logger.info('BaseMain stopped')
            sys.exit(0)

if __name__ == '__main__':
    main()

How do I inherit from logging.Handler to log to a GtkTextView?

I am relatively new to Python and I'm developing my first Python GUI (slowly). One of the third-party modules I want to use uses Python's logging framework. I would like their logs to go to a GtkTextView. I know where their logger variable is, so I can call logger.addHandler.
How do I inherit from logging.Handler correctly?
My implementation based on the source of StreamHandler is
class GtkTextViewHandler(logging.Handler):
    def __init__(self, tv):
        logging.Handler.__init__(self)
        self.tv = tv
        self.tbf = tv.get_buffer()
        self.formatter = None

    def emit(self, record):
        try:
            msg = self.format(record)
            fs = "%s\n"
            self.tbf.insert(self.tbf.get_end_iter(), fs % msg)
            # get_end_iter lives on the buffer, not the view
            self.tv.scroll_to_iter(self.tbf.get_end_iter(), 0.0, False, 0, 0)
        except Exception:
            self.handleError(record)
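One caveat, hedged because it depends on where the third-party module logs from: Gtk widgets may only be touched from the main loop, so if records can arrive from worker threads the buffer update should be marshalled with GLib.idle_add. A sketch of emit adjusted that way (PyGObject/GTK 3 assumed):

import logging
from gi.repository import GLib

class GtkTextViewHandler(logging.Handler):
    def __init__(self, tv):
        logging.Handler.__init__(self)
        self.tv = tv
        self.tbf = tv.get_buffer()

    def emit(self, record):
        try:
            msg = self.format(record)
            # Hand the widget update to the Gtk main loop; emit() itself
            # may be running on any thread.
            GLib.idle_add(self._append, msg + "\n")
        except Exception:
            self.handleError(record)

    def _append(self, text):
        self.tbf.insert(self.tbf.get_end_iter(), text)
        self.tv.scroll_to_iter(self.tbf.get_end_iter(), 0.0, False, 0, 0)
        return False  # do not reschedule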
