Redirect TF_CPP logs to file in Python

I'm running a Python script in which I intend to save the TensorFlow logs (regardless of their severity) generated during execution to a file, so that they don't pop up on the console.
Here's the code snippet:
# Setting up Info Logger
import logging, os, sys
from logging.handlers import RotatingFileHandler

def setup_logger(fileName=None, level=logging.NOTSET):
    if fileName is None:
        fileName = u'./test.log'
    FORMAT = '%(asctime)s -- %(name)s -- %(levelname)s -- %(message)s'
    logger_file_handler = RotatingFileHandler(fileName, 'w')
    logger_file_handler.setLevel(level)
    formatter = logging.Formatter(FORMAT)
    logger_file_handler.setFormatter(formatter)

    tf_logger = logging.getLogger('tensorflow')
    tf_logger.addHandler(logger_file_handler)
    tf_logger.setLevel(logging.NOTSET)
    tf_logger.propagate = False

    logging.captureWarnings(True)
    logger = logging.getLogger(__name__.split('.')[0])
    warnings_logger = logging.getLogger('py.warnings')
    logger.addHandler(logger_file_handler)
    logger.setLevel(level)
    warnings_logger.addHandler(logger_file_handler)

# Redirect all logs to disk
setup_logger(level=logging.DEBUG)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
Problem
1) The script saves all logs except the ones from TensorFlow.
2) If I comment out os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2', I get the following line on the console:
2019-12-10 11:51:07.501527: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
What I've tried
I found out that the TF_CPP messages are written to sys.stderr (correct me if I'm wrong), which is what gets printed on the console. But redirecting stderr to the log file may lead to instability, as it would send even the tracebacks to the log file.
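For illustration, this is the kind of OS-level redirection I'm wary of (a sketch, with an illustrative file name; it points file descriptor 2 at the log file before TensorFlow is imported, which captures the TF_CPP messages but also everything else written to stderr, tracebacks included):
import os

# Sketch: point the OS-level stderr (fd 2) at the log file *before*
# importing tensorflow, so the C++ TF_CPP messages land in the file.
log_fd = os.open('./test.log', os.O_WRONLY | os.O_CREAT | os.O_APPEND)
os.dup2(log_fd, 2)

import tensorflow as tf  # the dso_loader lines now go to ./test.log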
What I expect
I intend to get TensorFlow logs (like the one above) to appear in the log file.
Kindly give your valuable opinion/solution regarding this problem.
Thanks!
Preetkaran
Python Ver: 3.7.5
TensorFlow Ver: 2.0.0

Related

Python: How to color logs while printing to a file?

A little context for what I am doing: I am running some Python scripts through a different programming language on an industrial controller. Since I am not running the Python scripts directly, I can't watch any print or log statements from the terminal, so I need to send the detailed logs to a log file.
Since we log a lot of information when debugging, I wanted to find a way to color the log file, the way coloredlogs colors logs printed to the terminal. I looked at coloredlogs, but it appears it can only print colored logs to files when they are viewed with Vim. Does anyone know a way to write colored logs to a file in Python that can be opened with a program such as WordPad? (Maybe an .rtf file.)
One solution is to use the Windows PowerShell Get-Content cmdlet to print a file that contains ANSI escape sequences, which color the log.
For example:
import coloredlogs
import logging
# Create a logger object.
logger = logging.getLogger(__name__)
# Create a filehandler object
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# Create a ColoredFormatter to use as formatter for the FileHandler
formatter = coloredlogs.ColoredFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
# Install the coloredlogs module on the root logger
coloredlogs.install(level='DEBUG')
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")
When opening a Windows PowerShell you can use Get-Content .\spam.log to print the logs in color.

Python logging does not produce newline after each log (Linux)

I have used the same Python script on Windows, where it worked fine and produced several logs each time it was run. The problem is that when I ran the script on Linux, the logging put all of the logs onto one line.
I have tried adding \n in different places, such as in the formatter and in each message itself.
This is how the logging is set up:
# This is the setup of the logging system for the program.
logger = logging.getLogger(__name__)
# Sets the name of the log file to 'login.log'
handler = logging.FileHandler(config.path['logger'])
# Sets up the format of the log file: Time, Function Name, Error Level (e.g. Warning Info, Critical),
# and then the message that follows the format.
formatter = logging.Formatter('%(asctime)-5s %(funcName)-20s %(levelname)-10s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# Sets the lowest level of messages to info.
logger.setLevel(logging.INFO)
And here is how each log is made:
logger.warning('%-15s' % client + ' Failed: Logout Error')
Thanks in advance
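If the Linux-written file is being inspected in a Windows editor that expects CRLF line endings, all records can appear to run together on one line. A possible workaround (a sketch, assuming Python 3.2+, where logging.StreamHandler exposes a terminator attribute) is to force CRLF explicitly:
import logging

handler = logging.FileHandler('login.log')  # stands in for config.path['logger']
handler.setFormatter(logging.Formatter(
    '%(asctime)-5s %(funcName)-20s %(levelname)-10s %(message)s'))
handler.terminator = '\r\n'  # emit Windows-style line endings on every record

logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.warning('%-15s' % 'client-001' + ' Failed: Logout Error')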

python: logging output from external API to logger module

I am a Python beginner. My script logs output to a file (say, example.log) using the basic Python logging module. However, my script also makes some third-party API calls (for example, parse_the_file) over which I don't have any control. I want to capture the output (usually printed on the console) produced by the API in my example.log. The following example code works partially, but the problem is that the contents get overwritten as soon as I start logging the API output to my log file.
#!/usr/bin/env python
import logging
import sys
import os
from common_lib import * # import additional modules
logging.basicConfig(filename='example.log', filemode='w', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.') # message goes to log file.
sys.stdout = open('example.log','a')
metadata_func=parse_the_file('/opt/metadata.txt') # output goes to log file but OVERWRITES the content
sys.stdout = sys.__stdout__
logging.debug('This is a second log message.') # message goes to log file.
I know there have been posts about similar questions on this site, but I haven't found a workaround/solution that works in this scenario.
Try sharing a single file object between logging and stdout. The overwriting happens because two independent handles on the same file each keep their own file offset, so one handle's writes can land on top of what the other appended:
import logging
import os
import sys

log_file = open('example.log', 'a')
logging.basicConfig(stream=log_file, level=logging.DEBUG)
logging.debug("Test")

# Redirect the Python-level streams to the same file object
sys.stdout = log_file
sys.stderr = log_file

# Save the original file descriptors, then point fds 1 and 2 at the
# log file so output from child processes is captured as well
stdout_fd = os.dup(1)
stderr_fd = os.dup(2)
os.dup2(log_file.fileno(), 1)
os.dup2(log_file.fileno(), 2)

os.system("echo foo")  # the child's output now lands in example.log

# Restore the original descriptors and Python-level streams
os.dup2(stdout_fd, 1)
os.dup2(stderr_fd, 2)
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
However, this will not format the captured output like your log records. If you want that, you can try something like http://plumberjack.blogspot.com/2009/09/how-to-treat-logger-like-output-stream.html
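The idea from that post, roughly sketched (the class and logger names here are illustrative, not the post's exact code): wrap a logger in a file-like object and assign it to sys.stdout, so anything printed is re-emitted as formatted log records.
import logging
import sys

class StreamToLogger:
    # File-like object that forwards each written line to a logger
    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level
    def write(self, message):
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())
    def flush(self):
        pass  # nothing is buffered here

logging.basicConfig(filename='example.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')
sys.stdout = StreamToLogger(logging.getLogger('STDOUT'))
print('captured and formatted')  # appears in example.log as a log record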

Python logging module logs on Mac, but not Linux

I am experiencing an issue with the logging module in my app. I am working in Eclipse against the LDT Python (Py 2.7) interface (rather than PyDev) on my MacBook Pro. The logging module works through Eclipse; however, when I transfer my app over to a RHEL 5 machine with Python 2.7, logging does not seem to work at all. It does not throw any exceptions; it just does not log anything to console or file (it creates the file, though).
Code:
# Initialize logging
log = logging.getLogger('pepPrep')
# Log to stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# Log to file
logname = 'pepPrep.' + datetime.datetime.now().strftime("%Y%m%d_%H:%M") + '.log'
filelog = logging.FileHandler(logname)
filelog.setLevel(logging.DEBUG)
# set a format
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
filelog.setFormatter(formatter)
# add the handler to the root logger
log.addHandler(console)
log.addHandler(filelog)
log.info('This is a test')
log.debug('This is a test2')
Any pointers on how I can make this work?
The default threshold for logging is WARNING, so INFO and DEBUG messages are not output by default. To see them, lower the level on the root logger, e.g.
logging.getLogger().setLevel(logging.DEBUG)
You can confirm this is your problem by doing
log.warning('This is a test3')
before adding that setLevel, and confirming that the warning is actually output.
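A minimal sketch of the effect (the logger name is taken from the question; the handler is a plain StreamHandler):
import logging

log = logging.getLogger('pepPrep')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
log.addHandler(console)

log.info('dropped')    # filtered: the inherited WARNING threshold wins
logging.getLogger().setLevel(logging.DEBUG)
log.info('now shown')  # 'pepPrep' inherits the lower root level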

How do I configure the Python logging module in Django?

I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
bytes = 1024 * 1024 # 1 MB
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple of Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache) and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration, which overrode the one I set up (it modified the root logger, for some reason; not very kosher for a Django app!). I removed that code and everything works as expected.
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects (one way to guard against that is sketched after this answer).
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to see it's what you'd expect to see.
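A sketch of such a guard (the path and sizes are illustrative): configure the root logger only if it has no handlers yet, so a second import of settings.py is a no-op.
import logging
import logging.handlers

root = logging.getLogger()
if not root.handlers:  # settings.py may be imported twice; configure once
    handler = logging.handlers.RotatingFileHandler(
        '/tmp/my_app.log', maxBytes=1024 * 1024, backupCount=7)
    handler.setFormatter(logging.Formatter(
        u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)'))
    root.addHandler(handler)
    root.setLevel(logging.DEBUG)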
I used this with success (although it does not rotate):
# in settings.py
import logging
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(funcName)s %(lineno)d \
\033[35m%(message)s\033[0m',
    datefmt = '[%d/%b/%Y %H:%M:%S]',
    filename = '/tmp/my_django_app.log',
    filemode = 'a'
)
I'd suggest trying an absolute path, too.
I guess logging stops when Apache forks the process. After that happens, because all file descriptors were closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses the relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no "current directory" once the process has been daemonized. Try using an absolute log_dir path. Hope that helps.
