When I try to log from my script, it always generates basic logging output. There is no configuration for this.
The code is the following:
import logging
import sys

def set_logger():
    res_logger = logging.getLogger(__name__)
    res_logger.setLevel(logging.INFO)
    file_handler = logging.FileHandler(filename='%s.log' % __file__)
    file_handler.setLevel(logging.INFO)
    stdout_handler = logging.StreamHandler(sys.stdout)
    stdout_handler.setLevel(logging.WARNING)
    # both handlers share the same format
    formatter = logging.Formatter('[%(asctime)s]\t{%(filename)s:%(lineno)d}\t%(levelname)s\t- %(message)s')
    file_handler.setFormatter(formatter)
    stdout_handler.setFormatter(formatter)
    res_logger.addHandler(file_handler)
    res_logger.addHandler(stdout_handler)
    return res_logger

logger = set_logger()
logger.error("TEST")
With this setup, the output should be [2018-04-17 15:50:31,601] {filename.py:60} ERROR - TEST, both on stdout and in the filename.py.log file.
However, the actual output is
ERROR:__main__:TEST
[2018-04-17 15:57:37,756] {filename.py:60} ERROR - TEST
What's even weirder: I can't use logging.error('test') to produce basic output, even if I never declare a logger; it just produces nothing. And this ONLY happens in this script. If I copy-paste the same code into a new script, it all works fine. There is no logging configuration being done anywhere in the script: all nine occurrences of "logging" are the ones shown above, and "logger" is only ever used to emit log messages.
I can't see why it's not working.
Something else is almost certainly configuring logging and adding another handler that writes to the console. For example, if I copy your code to footest.py, add import logging, sys at the top, and run it using
python -c "import footest"
I get the output you would expect. But if I insert a line
logging.debug('foo')
and then run the python command again, I get the extra line. The logging.debug('foo') call doesn't output anything itself, since the default level is WARNING, but it implicitly calls logging.basicConfig() and attaches a handler to the root logger; that handler is then used when the logger.error('TEST') at the bottom executes and the record propagates to the root.
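For illustration, here is a minimal sketch of the effect (demo.py is a hypothetical file name): the stray module-level call installs a root handler as a side effect, and that handler later duplicates the output.

# demo.py -- hypothetical minimal reproduction
import logging
import sys

logging.debug('foo')  # prints nothing (default level is WARNING), but since
                      # the root logger has no handlers yet, this call runs
                      # logging.basicConfig() and attaches a stderr handler

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(sys.stdout))

logger.error('TEST')
# -> 'TEST' from our stdout handler, plus 'ERROR:__main__:TEST' from the
#    root handler installed as a side effect of logging.debug('foo')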
Apologies if this question is similar to others posted on SO, but I have tried many of the answers given and could not achieve what I am attempting to do.
I have some code that calls an external module:
import trafilatura
# after obtaining article_html
text = trafilatura.extract(article_html, language='en')
This sometimes prints out a warning on the console, which comes from the following code in the trafilatura module:
# at the top of the file
LOGGER = logging.getLogger(__name__)
# in the method that I'm calling
LOGGER.warning('HTML lang detection failed')
I'd like these and other messages produced by the module not to be printed directly to the console, but stored somewhere so that I can edit them and decide what to do with them. (Specifically, I want to save the messages in slightly modified form, but only under certain circumstances.) I am not using the logging library in my own code.
I have tried the following solution suggestions:
import io
import contextlib

buf = io.StringIO()
with contextlib.redirect_stderr(buf):  # I also tried redirect_stdout
    text = trafilatura.extract(article_html, language='en')
and
import sys

buf = io.StringIO()
sysout = sys.stdout
syserr = sys.stderr
sys.stdout = sys.stderr = buf
text = trafilatura.extract(article_html, language='en')
sys.stdout = sysout
sys.stderr = syserr
However, in both cases buf remains empty and trafilatura still prints its logging messages to the console. When I test the redirects above with other calls (e.g. print("test")), they catch those just fine, so apparently LOGGER.warning() from trafilatura is not writing to stderr or stdout?
I thought I could set a different output stream for trafilatura's LOGGER, but it uses a NullHandler, so I could neither figure out its stream target nor work out how to change it:
# from trafilatura's top-level __init__.py
logging.getLogger(__name__).addHandler(NullHandler())
Any ideas? Thanks in advance.
The idea here is to work within Python's standard logging library. Adding a NullHandler is actually the recommended practice for libraries that create a logger, because it prevents logging from falling back to stderr when no logging configuration is present.
What is likely happening is that those records propagate to the root logger, which has had a handler attached somewhere else. You can stop that by getting the module's logger in your own code and disabling propagation:
import logging

# assuming that "trafilatura" is the __name__ of the module:
logger = logging.getLogger("trafilatura")
logger.propagate = False
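Since the goal here is to capture and possibly rewrite the messages rather than just silence them, another option is to attach your own handler to that logger. A minimal sketch (CollectHandler is a made-up name, not part of trafilatura or the logging API):

import logging

class CollectHandler(logging.Handler):
    """Stores emitted log records in a list instead of printing them."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record)

collector = CollectHandler()
trafilatura_logger = logging.getLogger("trafilatura")
trafilatura_logger.addHandler(collector)
trafilatura_logger.propagate = False  # keep the messages off the console

# after calling trafilatura.extract(...), inspect what was logged:
for record in collector.records:
    message = record.getMessage()  # e.g. 'HTML lang detection failed'
    # edit the message and decide what to do with it here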
I am coding a project with multiple modules for legacy software in Python 2.7. I log statements from all the modules to a single file called debg.log. The code is run from the main Python file, "Automation.py", which then calls multiple modules, one of which is, say, called "builds.py". I want the log statements to appear in debg.log in the same sequence in which the statements are executed across the modules.
For example: say Automation.py runs first and logs "This is the main file." and "It calls the other modules". Then it calls the other module, builds.py, which logs "I am called by the first file". Then control returns to the main file, Automation.py, which logs "Program has ended."
In my program, when I use the logger, debg.log ends up containing only "Program has ended." So basically the message from the second module never makes it into the file.
Now, I have to make sure the output in debg.log is fresh each time the program is executed, so I have to use mode='w'; the logs should not be appended to those of previous runs. That is where the problem lies: if I put mode='w' in builds.py, its handler truncates the logs from Automation.py, and when control goes back to Automation.py, its handler truncates the logs from builds.py. Please explain how I should organize the file mode so that everything is printed sequentially and nothing gets truncated.
Below is the logger part of the code of the main module
# Small snippet of the logger code in Automation.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
file_handler = logging.FileHandler('debg.log', mode='w')
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
This is the logger part of the second module
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
file_handler = logging.FileHandler('debg.log', mode='w')
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
As I said, I have tried several different scenarios by changing the file mode so that it prints to the log file correctly, but it did not work. The only time it did print sequentially was when the mode was omitted, so it fell back to the default, which is append. But then the logs just kept appending to debg.log across runs, which is not what I want.
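One common way to get the behavior described above (a sketch, assuming the module layout from the question) is to open debg.log exactly once: configure a single handler with mode='w' in Automation.py, and have builds.py create a named logger without any handlers of its own, so its records propagate to the root configuration and land in the file in execution order:

# Automation.py -- configure logging once, at startup
import logging

logging.basicConfig(filename='debg.log', filemode='w',
                    level=logging.INFO, format='%(message)s')
logger = logging.getLogger(__name__)

logger.info('This is the main file.')
logger.info('It calls the other modules')

import builds
builds.run()                      # logs through the same root configuration

logger.info('Program has ended.')

# builds.py -- no handlers here, just a named logger that propagates
import logging

logger = logging.getLogger(__name__)

def run():
    logger.info('I am called by the first file')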
I tried to build a logger that also logs the full output from the Python interpreter, i.e. what you normally see on the console.
This is my attempt; it did not work:
import logging
import sys

logger = logging.getLogger()
stdoutStreamHandler = logging.StreamHandler(sys.stdout)
stderrStreamHandler = logging.StreamHandler(sys.stderr)
fileHandler = logging.FileHandler(logpath)  # logpath is defined elsewhere
logger.addHandler(stdoutStreamHandler)
logger.addHandler(stderrStreamHandler)
Unsure what you mean by logging the full output from the Python interpreter: do you mean the command-line interpreter, or the background interpreter at run-time?
Either way this may help: trace
It will let you execute any code under the tracer, and it can also write the results out to a file:
import sys
import trace

# create a Trace object, telling it what to ignore, and whether to
# do tracing or line-counting or both.
tracer = trace.Trace(
    ignoredirs=[sys.prefix, sys.exec_prefix],
    trace=0,
    count=1)

# run the new command using the given tracer
tracer.run('main()')

# make a report, placing output in the current directory
r = tracer.results()
r.write_results(show_missing=True, coverdir=".")
I have not yet used it myself, but it may be possible to execute
logger.debug(r)
which would create a record in your log file at the debug level. You could, of course, change that to info, warning, error or critical.
I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the logs to propagate up and the top-level logger to print only the info statement. But both the debug and the info statements get shown.
Related links:
From this IPython issue, it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the ipython_notebook_config file, only affects the internal IPython logging level. The other is the IPython logger, accessed with get_ipython().parent.log.
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
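For example, based on the description above, the IPython application logger could be adjusted like this (a sketch; get_ipython() is only defined inside an IPython/Jupyter session):

# inside a notebook cell
import logging

app_log = get_ipython().parent.log   # the IPython application's logger
app_log.setLevel(logging.DEBUG)      # lower IPython's internal threshold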
With current ipython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup and logging.getLogger().setLevel(logging.DEBUG) has no effect, i.e. no info/debug messages are printed.
Inside ipython, you have to change an ipython configuration setting (and possibly work around ipython bugs), as well. For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
For just getting the debug messages of one module (cf. the __name__ value in that module) you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the Notebook creates a root logger by default (which is different from IPython default behavior!). The solution is to get at the handler of the notebook logger, and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works.
Adding another solution because it was easier for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
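For instance, a minimal sketch with a hypothetical module mymod illustrates this:

# mymod.py (hypothetical module)
import logging

def do_work():
    logging.getLogger(__name__).info("working...")

# in the notebook, after logging.basicConfig(level=20):
import mymod
mymod.do_work()
# >> INFO:mymod:working...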
I want to output some strings to a log file, and I want the log file to be continuously updated.
I have looked into Python's logging module and found that it seems to be mostly about formatting and concurrent access.
Please let me know if I am missing something, or if there is any other way of doing it.
Usually I do the following:
# logging
import logging

LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)

# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The first part initialises logging's basic configuration. After that, I set up a console handler that prints logging information separately. Usually my console output is set to show only errors (logging.ERROR), while the detailed output goes to the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
Doug Hellmann has a nice guide.
To add my 10 cents with regard to using logging: I only recently discovered the logging module and was put off at first, maybe because it initially looks like a lot of work, but it's really simple and incredibly handy.
This is the setup that I use. It is similar to Mkind's answer, but includes a timestamp.
# Set up logging
import logging

log = "bot.log"
logging.basicConfig(filename=log, level=logging.DEBUG,
                    format='%(asctime)s %(message)s', datefmt='%d/%m/%Y %H:%M:%S')
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file
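A minimal sketch along the lines of that page (example.log is just an illustrative file name):

import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')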