logging will not write to file in python 2.7.4 - python

I cannot get the logging module to write to a file for the life of me and I have no idea what is the problem.
I am running:
form = "%(asctime)s - %(levelname)s - %(message)s"
logging.basicConfig(logfile='/home/gabriel/Developement/cl/cl.log',level=logging.DEBUG, format=form)
logging.debug("oh")
logging.info("oh!")
logging.warning("OH!")
logging.error("OH NO!")
I'm completely unsure of what's going on: the file is not created, nor is it written to. However, Python does not raise an exception. I've tried running with Python 2.7.4 and IPython. Please let me know what diagnostic steps I can take; I wish I could provide more information, but I don't know what is relevant.

Change logfile to filename, like so:
logging.basicConfig(filename='/home/gabriel/Developement/cl/cl.log', level=logging.DEBUG, format=form)
You can see the keyword arguments accepted by basicConfig in the logging module's documentation.

Related

Python Fabric logging errors

I am trying to understand how fabric's logger module works.
I run on the command line:
$ fabfile -I task-1
I of course get output to the console showing me the execution of the task on each of the remote hosts connected to.
But how can I redirect the error output to a logfile on my local machine and put a timestamp on it?
Does Fabric's logger module provide this, or should I use Python's logging module? Either way, I am not sure how to implement it.
Unfortunately, Fabric does not support logging to a file out of the box (see issue #57), but there is a workaround using the logging module, which I find pretty nice.
First, configure your logger:
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a',
)
And then wrap the portions of your code that are likely to raise errors in a try/except block like this:
try:
    # code
except Exception:
    logging.exception('Error:')
The logger will print 'Error:' and the exception's stack trace to "out.log".

Python logging formatting by level

I'm using python's logging library, but I want the debug logs to have a different format than the warning and error logs. Is this possible?
ETA: I want warnings and errors to appear as:
%(levelname)s: %(message)s
but debug statements to appear as
DEBUG: (only Brian cares about this) : %(message)s
All the other questions I've seen are about changing the format, but that changes it for EVERYTHING.
First of all, double-check if you really need this. A log output with different record formats is prone to be rather hard to read by both humans and machines.
Maybe what you actually need is different formats for different log destinations (console vs file) which will also have different verbosity (the file will have a debug log with additional information).
Now, the way to do it is with a custom Formatter:
class MultiformatFormatter(logging.Formatter):
    def __init__(self, <args>):
        <...>

    def format(self, record):
        if record.levelno <= logging.DEBUG:
            s = <generate string one way>
        else:
            s = <generate string another way>
        return s

<...>
# for each handler that this should apply to
handler.setFormatter(MultiformatFormatter(<args>))
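Filling in the placeholders with the two formats from the question gives a concrete sketch (the logger name and handler wiring are illustrative):

```python
import logging
import sys

class MultiformatFormatter(logging.Formatter):
    """Pick a format string based on the record's level."""

    def __init__(self):
        logging.Formatter.__init__(self)
        # Format requested for debug records in the question.
        self._debug_fmt = logging.Formatter(
            "DEBUG: (only Brian cares about this) : %(message)s")
        # Format for everything else.
        self._default_fmt = logging.Formatter("%(levelname)s: %(message)s")

    def format(self, record):
        if record.levelno <= logging.DEBUG:
            return self._debug_fmt.format(record)
        return self._default_fmt.format(record)

handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(MultiformatFormatter())

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("noisy detail")  # -> DEBUG: (only Brian cares about this) : noisy detail
logger.warning("watch out")   # -> WARNING: watch out
```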

Logging on console and in different files in python 2.7

I need help with logging in python 2.7.
I have 3 different functions in a file called hydra; let's name them a, b, and c.
I have created 3 processes to run those functions separately.
I have separate logging in those functions, which works just fine and logs in 3 different log files.
Now, I want them all to log on console as well as write in the file.
I start logging as:
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(name)s.%(funcName)s +%(lineno)s: %(levelname)-8s [%(process)d] %(message)s',
                    filename=logfile,
                    filemode='a')
What all I tried:
Creating a stream handler and adding it to the root, but it didn't work for me.
Any help is appreciated.
I'm happy to clarify any doubts; forgive me if I missed any details.
logging.getLogger().addHandler(logging.StreamHandler())
Adding the line above in all 3 functions resolved my issue.

writing a log file from python program

I want to output some strings to a log file and I want the log file to be continuously updated.
I have looked into Python's logging module and found that it is mostly about formatting and concurrent access.
Please let me know if I am missing something, or about any other way of doing it.
Usually I do the following:
# logging
LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)
# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The logging part initialises logging's basic configurations. In the following I set up a console handler that prints out some logging information separately. Usually my console output is set to output only errors (logging.ERROR) and the detailed output in the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
Doug Hellmann has a nice guide.
To add my ten cents on using logging: I only recently discovered the logging module and was put off at first. It may look like a lot of work initially, but it's really simple and incredibly handy.
This is the setup that I use. It's similar to Mkind's answer, but includes a timestamp.
# Set up logging
log = "bot.log"
logging.basicConfig(filename=log, level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%d/%m/%Y %H:%M:%S')
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file

Where can I check tornado's log file?

I think there is a default log file, but I haven't found it yet.
Sometimes the HTTP request process throws an exception on the screen, but I suspect it also goes somewhere on disk; otherwise I wouldn't know what went wrong during a long-running test.
P.S.: writing an exception handler is another topic; first I'd like an answer to this question.
I found something here:
https://groups.google.com/forum/?fromgroups=#!topic/python-tornado/px4R8Tkfa9c
But it also didn't mention where I can find those logs.
Tornado uses the standard Python logging module by default.
Here are the logger definitions:
access_log = logging.getLogger("tornado.access")
app_log = logging.getLogger("tornado.application")
gen_log = logging.getLogger("tornado.general")
It doesn't write to files by default. You can run Tornado under supervisord and define in the supervisord config where the log files will be located; it will capture Tornado's output and write it to files.
Alternatively, you can take this approach:
tornado.options.options['log_file_prefix'].set('/opt/logs/my_app.log')
tornado.options.parse_command_line()
But in that case, measure performance. I don't suggest writing to files directly from a Tornado application if it can be delegated.
FYI: parse_command_line just enables pretty console logging.
With newer versions, you may do
args = sys.argv
args.append("--log_file_prefix=/opt/logs/my_app.log")
tornado.options.parse_command_line(args)
or, as @ColeMaclean mentioned, providing
--log_file_prefix=PATH
at the command line.
There's no logfile by default.
You can use the --log_file_prefix=PATH command line option to set one.
Tornado just uses the Python stdlib's logging module, if you're trying to do anything more complicated.
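Because those names are ordinary stdlib loggers, you can attach your own handler to them without any Tornado-specific API (the filename is illustrative):

```python
import logging

# "tornado.application" is just a stdlib logger name; attaching a
# FileHandler to it captures Tornado's application-level messages.
app_log = logging.getLogger("tornado.application")
app_log.setLevel(logging.INFO)

handler = logging.FileHandler("tornado_app.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
app_log.addHandler(handler)

app_log.error("something went wrong in a request handler")
```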
Use RotatingFileHandler:
import logging
from logging.handlers import RotatingFileHandler
log_path = "/path/to/tornado.access.log"
logger_ = logging.getLogger("tornado.access")
logger_.setLevel(logging.INFO)
logger_.propagate = False
handler = RotatingFileHandler(log_path, maxBytes=1024*1024*1024, backupCount=3)
handler.setFormatter(logging.Formatter("[%(name)s][%(asctime)s][%(levelname)s][%(pathname)s:%(lineno)d] > %(message)s"))
logger_.addHandler(handler)
