Python Fabric logging errors

I am trying to understand how fabric's logger module works.
I run on the command line:
$ fab -I task-1
I of course get output to the console showing me the execution of the task on each of the remote hosts connected to.
But how can I redirect the error output to a logfile on my local machine and put a timestamp on it?
Does Fabric's logger module provide this, or should I use Python's logging module? Either way, I am not sure how to implement it.

Unfortunately, Fabric does not feature logging to a file (see issue #57), but there is a workaround using Python's logging module, which I find pretty nice.
First, configure your logger:
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename='out.log',
    filemode='a',
)
Then wrap the portions of your code that are likely to throw errors in a try/except block like this:
try:
    # code that may raise goes here
    ...
except Exception:
    logging.exception('Error:')
The logger will write 'Error:' followed by the exception's stack trace to "out.log".
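To put the pieces together, here is a minimal sketch of how this might look inside a fabfile (Fabric 1.x style; the task name and remote command are hypothetical). Note that in Fabric 1.x a failed run() aborts with SystemExit unless warn_only is set, so that is what the handler catches:

import logging

from fabric.api import run, task

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename='out.log',
    filemode='a',
)

@task
def task_1():
    try:
        run('some-command')  # hypothetical remote command
    except SystemExit:
        # Fabric 1.x aborts with SystemExit when a remote command fails;
        # log the failure with a timestamp, then re-raise so the run still aborts
        logging.exception('task-1 failed:')
        raise

With filemode='a' and the asctime format above, every failure is appended to out.log with a timestamp.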

Related

Python 3: syslog.syslog works but logging.handlers.SysLogHandler logs nothing

I have a number of scripts that use logging.error(msg) style logging messages.
I originally had these handled by a FileHandler but now I need to change it over to a SysLogHandler. I can't get the SysLogHandler to work, by which I mean there are no exceptions but nothing ends up in the logs.
On the other hand, I can get messages logged through syslog.syslog with this code:
import syslog
syslog.openlog(ident="LOG_IDENTIFIER", logoption=syslog.LOG_PID, facility=syslog.LOG_LOCAL3)
syslog.syslog('syslog test msg local3')
and this is what I'm using for the SysLogHandler:
import logging
import logging.handlers

logger = logging.getLogger('myLogger')
handler = logging.handlers.SysLogHandler(address='/dev/log', facility=19)
logger.addHandler(handler)
logging.error('sysloghandler test msg local3')
facility=19 because that's the value of SysLogHandler.LOG_LOCAL3.
I'm thinking the issue is the address setting. The log files are located in /var/log, but that doesn't seem to work. I've also tried not specifying an address (so it defaults to localhost port 514); still nothing gets logged.
I need to know either how to determine the correct address, or how to troubleshoot the issue if that isn't the problem.
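For reference, a minimal sketch of a configuration that works on typical Linux systems, where syslogd listens on the /dev/log socket. Two details differ from the snippet above: the message is sent through logger.error rather than logging.error (the root logger never received the handler), and the facility uses the named constant:

import logging
import logging.handlers

logger = logging.getLogger('myLogger')
logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(
    address='/dev/log',  # the local syslog socket on most Linux systems
    facility=logging.handlers.SysLogHandler.LOG_LOCAL3,  # == 19
)
logger.addHandler(handler)
logger.error('sysloghandler test msg local3')  # logger.error, not logging.error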

Change the level of log messages printed to an IPython/Jupyter notebook

I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the logs to propagate up and the top-level logger to print only the info statement. But both the debug and the info statements get shown.
Related links:
From this IPython issue, it seems there are two different levels of logging to be aware of. One, set in the ipython_notebook_config file, only affects IPython's internal logging level. The other is the IPython logger, accessed with get_ipython().parent.log (see the sketch after these links).
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
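As an illustration, a short sketch of adjusting that second logger from inside a notebook cell (get_ipython() is only defined in a running IPython session):

import logging

ip = get_ipython()                     # provided by IPython in a running session
ip.parent.log.setLevel(logging.DEBUG)  # the application logger mentioned above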
With current IPython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup and logging.getLogger().setLevel(logging.DEBUG) has no effect, i.e. no info/debug messages are printed.
Inside IPython, you also have to change an IPython configuration setting (and possibly work around IPython bugs). For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
To get the debug messages of just one module (cf. the __name__ value in that module), you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the notebook creates a root logger by default (which differs from IPython's default behavior!). The solution is to get at the handler of the notebook logger and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works.
Adding another solution, because it was simpler for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
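To illustrate that last point, a sketch with a hypothetical module, assuming the basicConfig(level=20) call above was run at kernel startup:

# mymodule.py (hypothetical)
import logging

def do_work():
    logging.getLogger(__name__).info('working')

# in the notebook
import mymodule
mymodule.do_work()
>> INFO:mymodule:working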

How can I see log messages when unit testing in PyCharm?

I'm sure this is a simple fix, but I'd like to view log messages in the PyCharm console while running a unit test. The modules I'm testing have their own loggers, and normally I'd set a root logger to catch the debugging messages at a certain level, and pipe the other logs to a file. But I can't figure out how this works with unit tests.
I'm using the unittest2 module, and using PyCharm's automatic test discovery (which probably is based on nose, but I don't know).
I've tried fooling with the run configurations, but there doesn't seem to be a straightforward way to do this.
The PyCharm documentation isn't particularly helpful here either, if any of you work there.
Edit: it DOES appear that the console catches critical-level log messages. I want to know if there is a way to configure it to catch debug-level messages.
This post (Pycharm unit test interactive debug command line doesn't work) suggests adding the -s option to the build configuration, which does not produce the desired result.
The only solution I've found is to do the normal logging setup at the top of the file containing the tests (assuming you're just running one test or test class): logging.basicConfig(level=logging.DEBUG). Make sure you put that before most import statements, or the default logging for those modules will already have been set (that was a hard one to figure out!). A sketch of that ordering follows below.
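Here is the sketch (module and test names are hypothetical):

# test_mymodule.py
import logging
logging.basicConfig(level=logging.DEBUG)  # before the imports below, so it wins

import unittest
import mymodule  # this module's logger output is now visible at DEBUG level

class TestMyModule(unittest.TestCase):
    def test_logs_are_visible(self):
        logging.getLogger(__name__).debug('shown in the PyCharm test console')

if __name__ == '__main__':
    unittest.main()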
In PyCharm 2016.3.2 I also needed to add the --nologcapture option to nosetests as a parameter:
Run > Edit Configurations > Defaults > Python tests > Nosetests: activate the checkbox for the Params option and add --nologcapture.
In Edit Configurations:
delete old configurations
go to Defaults / Python tests / Nosetests
add --nologcapture to "Additional Arguments"
Worked like a "charm" for me in PyCharm 2017.2.3. Thanks bott.
My problem was similar: I only saw messages with level WARNING or higher while running tests in PyCharm. A colleague suggested configuring the logger in the __init__.py in my tests directory.
# in tests/__init__.py
import logging
import sys

# reconfiguring the logger here will also affect test runs in the PyCharm IDE
log_format = '%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format=log_format)
Then I can simply log in the test code like:
import logging
logging.info('whatever')
I experienced this problem suddenly after updating PyCharm. I had been running my tests with the Unittest runner, and PyCharm decided the default should now be the pytest runner. When I changed the default test runner back to Unittest, the logs appeared as I expected.
You can change the default test runner in Project Settings/Preferences >> Tools >> Python Integrated Tools
I assume adding the --nologcapture flag as noted in other answers would probably have worked in my case as well, but I prefer a solution that's exposed by the IDE rather than a manual override.
By the way, choosing Unittest also fixed an additional error I got when trying to run tests: No module named 'nose'.
The -s option mentioned in the question, in combination with adding a StreamHandler using stream=sys.stdout, did work for me.
import logging
import sys

logger = logging.getLogger('foo')
logger.addHandler(logging.StreamHandler(stream=sys.stdout))
logger.warning('foo')
(This is the same -s option suggested in "Pycharm unit test interactive debug command line doesn't work".)
Add a stream handler, pointing to sys.stdout if doctest or pytest is running:
import logging
import sys

logger = logging.getLogger()

def setup_doctest_logger(log_level: int = logging.DEBUG):
    """
    :param log_level:
    :return:

    >>> logger.info('test')  # there is no output in pycharm by default
    >>> setup_doctest_logger()
    >>> logger.info('test')  # now we have the output we want
    test
    """
    if is_pycharm_running():
        logger_add_streamhandler_to_sys_stdout()
    logger.setLevel(log_level)

def is_pycharm_running() -> bool:
    return 'docrunner.py' in sys.argv[0] or 'pytest_runner.py' in sys.argv[0]

def logger_add_streamhandler_to_sys_stdout():
    stream_handler = logging.StreamHandler(stream=sys.stdout)
    logger.addHandler(stream_handler)

Writing a log file from a Python program

I want to output some strings to a log file, and I want the log file to be continuously updated.
I have looked into Python's logging module and found that it is mostly about formatting and concurrent access.
Please let me know if I am missing something, or if there is another way of doing it.
Usually I do the following:
# logging
import logging

LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)

# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The logging part initialises logging's basic configuration. After that, I set up a console handler that prints logging information separately. Usually my console output is set to show only errors (logging.ERROR), while the detailed output goes to the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
Doug Hellmann has a nice guide.
To add my ten cents on using logging: I only recently discovered the logging module and was put off at first. It may look like a lot of work initially, but it's really simple and incredibly handy.
This is the setup that I use. It is similar to Mkind's answer, but includes a timestamp.
# set up logging
import logging

log = "bot.log"
logging.basicConfig(
    filename=log,
    level=logging.DEBUG,
    format='%(asctime)s %(message)s',
    datefmt='%d/%m/%Y %H:%M:%S',
)
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file

Where can I check tornado's log file?

I think there was a default log file, but I haven't found it yet.
Sometimes HTTP request processing throws an exception on the screen, but I suspect it also goes somewhere on disk; otherwise I wouldn't know what went wrong during a long-running test.
P.S.: writing an exception handler is another topic; first I'd like an answer to this question.
I found something here:
https://groups.google.com/forum/?fromgroups=#!topic/python-tornado/px4R8Tkfa9c
But it also didn't mention where I can find those logs.
Tornado uses the standard Python logging module by default. Here are the definitions:
access_log = logging.getLogger("tornado.access")
app_log = logging.getLogger("tornado.application")
gen_log = logging.getLogger("tornado.general")
It doesn't write to files by default. You can run it under supervisord and define in the supervisord config where the log files should be located; supervisord will capture Tornado's output and write it to files.
Alternatively, you can do it this way:
tornado.options.options['log_file_prefix'].set('/opt/logs/my_app.log')
tornado.options.parse_command_line()
But in this case, measure performance first. I don't suggest writing to files directly from a Tornado application if the task can be delegated.
FYI: parse_command_line just enables pretty console logging.
With newer versions, you can do:
args = sys.argv
args.append("--log_file_prefix=/opt/logs/my_app.log")
tornado.options.parse_command_line(args)
or, as @ColeMaclean mentioned, pass
--log_file_prefix=PATH
at the command line.
There's no logfile by default.
You can use the --log_file_prefix=PATH command line option to set one.
Tornado just uses the Python stdlib's logging module, if you're trying to do anything more complicated.
Use RotatingFileHandler:
import logging
from logging.handlers import RotatingFileHandler

log_path = "/path/to/tornado.access.log"
logger_ = logging.getLogger("tornado.access")
logger_.setLevel(logging.INFO)
logger_.propagate = False
handler = RotatingFileHandler(log_path, maxBytes=1024 * 1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter(
    "[%(name)s][%(asctime)s][%(levelname)s][%(pathname)s:%(lineno)d] > %(message)s"))
logger_.addHandler(handler)
