I started using Python logging to tidy up the messages generated by some code. I have the statement
logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.INFO)
in the main Python module and was expecting the log file to be created from scratch every time the program runs. However, Python appends the new messages to the existing file. I wrote the program below to test what is going on, but now no log file is created at all and the messages appear in the console (Spyder 5). I am running WinPython 3.9 on a Windows 10 machine.
import datetime
import logging
logging.basicConfig(filename='my_log.log', filemode='w', level=logging.INFO)
t = datetime.datetime.now()
logging.info("Something happened at %s" % t)
All suggestions welcome - thanks!
Update: following @bzu's suggestion, I changed my code to this:
import datetime
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.FileHandler('my_log.log', 'w')
logger.addHandler(handler)
t = datetime.datetime.now()
logging.info("Something happened at %s" % t)
This created the log file but nothing was written to it. If I change the final line to
logger.info("Something happened at %s" % t)
then the message appears in the log file. However, I want to log messages from different modules. These modules import logging, so they know about logging in general, but I don't see how they can access the specific logger object.
If you run the program with plain Python (python log_test_file.py), it should work as expected.
I suspect the problem with Spyder is that it overrides the logging module's config.
To avoid this kind of problem, the recommended way is to use a custom logger per file:
import logging
logger = logging.getLogger(__name__)
# boilerplate: can be converted into a method and reused
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler = logging.FileHandler(filename='my_log.log', mode='w', encoding='utf-8')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("Something happened")
After some searching on the web, adding force=True to logging.basicConfig (available since Python 3.8) seemed to solve the problem; it removes and closes any handlers already attached to the root logger before applying the new configuration, i.e.:
logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.INFO, force=True)
Hi, I want to create a logging module that will allow me to set the logging level and pass a message to the function. Although this doesn't work, it's my idea, and I need some guidance.
import logging

def log_message(loglevel, message):
    logging.getLogger().setLevel(loglevel.upper())
    logging.basicConfig(level=logging.getLevelName, filename="mylog.log",
                        filemode="w", format="%(asctime)s - %(levelname)s - %(message)s")
    logging.getLogger(message)

log_message("info", 'this is a test')
This implementation may not be thread-safe and should not be used at production level; it just fixes what is not working in your function.
import logging

# map logging level names to their numeric values
LEVELS = {"NOTSET": logging.NOTSET, "DEBUG": logging.DEBUG, "INFO": logging.INFO,
          "WARNING": logging.WARNING, "ERROR": logging.ERROR, "CRITICAL": logging.CRITICAL}

# single file handler shared by every call
handler = logging.FileHandler("mylog.log", "a")
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)

logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(LEVELS["DEBUG"])

# your function
def log_message(logLevel, message):
    # map the logging level name to its int value and log the message
    logger.log(LEVELS[logLevel.upper()], message)

log_message("info", "asd")
Well, it is said that setLevel just specifies the minimum threshold for handling a log record, so it should be fine:
Logger.setLevel() specifies the lowest-severity log message a logger will handle, where debug is the lowest built-in severity level and critical is the highest built-in severity.
Still, I would not recommend this for production, as you need a better-written logging function. Please refer to https://docs.python.org/3/howto/logging.html#basic-logging-tutorial and https://docs.python.org/3/howto/logging.html#advanced-logging-tutorial for a better logging setup.
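As an aside, a slightly tighter variant is possible (a sketch, relying on the documented behavior that logging.getLevelName() maps a level name string back to its numeric value), which makes the hand-written LEVELS dict optional:

import logging

def log_message(log_level, message):
    # getLevelName() returns the numeric level when given a name such as "INFO"
    logging.getLogger().log(logging.getLevelName(log_level.upper()), message)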
I have a main process which makes use of several other modules, and these modules in turn use other modules. I need all the logs to go into a single log file. Because of the TimedRotatingFileHandler, my log behaves differently after midnight; I found out why, but I am not clear on how to solve it.
Below is log_config.py which is used by all other modules to get the logger and log.
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
All other modules call
logger = log_config.get_logger(__name__)
and use it to log.
I came to know about QueueHandler and QueueListener, but I'm not sure how to use them in my code.
How can I use these to serialize logs to a single file?
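For what it's worth, here is a minimal sketch of how QueueHandler and QueueListener might be wired into the log_config.py above (names follow that snippet; treat it as an outline, not a drop-in fix): every module's logger only puts records on a shared queue, and a single listener thread owns the one TimedRotatingFileHandler, so rollover happens in exactly one place.

import logging
import queue
from logging.handlers import QueueHandler, QueueListener, TimedRotatingFileHandler

log_queue = queue.Queue(-1)  # unbounded queue shared by every logger

# the only handler that touches the file; rollover happens here alone
file_handler = TimedRotatingFileHandler("my_app.log", when='midnight')
file_handler.setFormatter(logging.Formatter("%(asctime)s — %(name)s — %(message)s"))

# background thread that drains the queue and writes through file_handler
listener = QueueListener(log_queue, file_handler)
listener.start()

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    # modules only enqueue records; they never open the log file themselves
    logger.addHandler(QueueHandler(log_queue))
    logger.propagate = False
    return logger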
I like using the Python logging module because it standardizes my application and makes it easier to get metrics. The problem I face is that for every application (or file.py) I keep putting this at the top of my code:
import logging
import os
import time

logger = logging.getLogger(__name__)

if not os.path.exists('log'):
    os.makedirs('log')

logName = time.strftime("%Y%m%d.log")
hdlr = logging.FileHandler('log/%s' % logName)
logger.setLevel(logging.INFO)

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(funcName)s %(levelname)s - %(message)s')
ch.setFormatter(formatter)
hdlr.setFormatter(formatter)

logger.addHandler(ch)
logger.addHandler(hdlr)
This is tedious and repetitive. Is there a better way to do this?
How do people log for a large application with multiple modules?
Take a look at logging.basicConfig().
If you wrap basicConfig() in a function, then you can just import your function and pass specific args (e.g. log filename, format, level, etc.).
It will help condense the code a bit and make it more extensible.
For example:

import logging

def test_logging(filename, format):
    logging.basicConfig(filename=filename, format=format, level=logging.DEBUG)

    # test
    logging.info('Info test...')
    logging.debug('Debug test...')
Then just import the test_logging() function into other programs.
Hope that this helps.
Read Using logging in multiple modules from the Logging Cookbook.
What you need to do is use the getLogger() function to get a logger with pre-defined settings:
import logging
logger = logging.getLogger('some_logger')
You set those settings just once at application startup time.
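For instance, a minimal sketch (file and logger names are illustrative): configure the root logger once in your entry point, and every getLogger() call in other modules picks the settings up through propagation.

# main.py -- run once at application startup
import logging

logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# some_module.py -- no configuration needed here
import logging

logger = logging.getLogger('some_logger')
logger.info('configured once, used everywhere')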
OK, here's the code where I set up everything:
if __name__ == '__main__':
    app.debug = False
    applogger = app.logger
    file_handler = FileHandler("error.log")
    file_handler.setLevel(logging.DEBUG)
    applogger.setLevel(logging.DEBUG)
    applogger.addHandler(file_handler)
    app.run(host='0.0.0.0')
What happens is:
- error.log gets created
- Nothing is ever written to it
- Despite not adding a StreamHandler and setting debug to False, I still get everything on STDOUT (this might be correct, but still seems weird)
Am I totally off here somewhere, or what is happening?
Why not do it like this:
if __name__ == '__main__':
    init_db()  # or whatever you need to do
    import logging
    logging.basicConfig(filename='error.log', level=logging.DEBUG)
    app.run(host="0.0.0.0")
If you now start your application, you'll see that error.log contains:
INFO:werkzeug: * Running on http://0.0.0.0:5000/
For more info, visit http://docs.python.org/2/howto/logging.html
Okay, as you insist that you cannot have two handlers with the method I showed you, I'll add an example that makes this quite clear. First, add this logging code to your main:
import logging, logging.config, yaml

logging.config.dictConfig(yaml.safe_load(open('logging.conf')))
Now also add some debug code, so that we see that our setup works:
logfile = logging.getLogger('file')
logconsole = logging.getLogger('console')
logfile.debug("Debug FILE")
logconsole.debug("Debug CONSOLE")
All that is left is the "logging.conf" file. Let's use this:
version: 1
formatters:
  hiformat:
    format: 'HI %(asctime)s - %(name)s - %(levelname)s - %(message)s'
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: hiformat
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: errors.log
loggers:
  console:
    level: DEBUG
    handlers: [console]
    propagate: no
  file:
    level: DEBUG
    handlers: [file]
    propagate: no
root:
  level: DEBUG
  handlers: [console,file]
This config is more complicated than needed, but it also shows some features of the logging module.
Now, when we run our application, we see this output (from the werkzeug and console loggers):
HI 2013-07-22 16:36:13,475 - console - DEBUG - Debug CONSOLE
HI 2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
Also note that the custom formatter with the "HI" was used.
Now look at the "errors.log" file. It contains:
2013-07-22 16:36:13,475 - file - DEBUG - Debug FILE
2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
OK, my failure stemmed from two misconceptions:
1) Flask will apparently ignore all your custom logging unless it is running in production mode
2) debug=False is not enough to let it run in production mode. You have to wrap the app in some sort of WSGI server to do so
After I started the app from gevent's WSGI server (and moved logging initialization to a more appropriate place), everything seems to work fine.
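For reference, a rough sketch of what "starting the app from gevent's WSGI server" can look like (host and port are illustrative):

from gevent.pywsgi import WSGIServer

# serve the Flask app through a WSGI server instead of app.run()
http_server = WSGIServer(('0.0.0.0', 5000), app)
http_server.serve_forever()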
The output you see in the console of your app is from the underlying Werkzeug logger that can be accessed through logging.getLogger('werkzeug').
Your logging can function in both development and release by also adding handlers to that logger as well as the Flask one.
More information and example code: Write Flask Requests to an Access Log.
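A short sketch of that idea (the log filename is made up): attach the same handler to both loggers so request logs and application logs end up in one place.

import logging

file_handler = logging.FileHandler('app.log')  # hypothetical log file

# Werkzeug logs requests through its own logger; Flask logs through app.logger
logging.getLogger('werkzeug').addHandler(file_handler)
app.logger.addHandler(file_handler)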
This works:
if __name__ == '__main__':
    import logging
    logFormatStr = '[%(asctime)s] p%(process)s {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s'
    logging.basicConfig(format=logFormatStr, filename="global.log", level=logging.DEBUG)
    formatter = logging.Formatter(logFormatStr, '%m-%d %H:%M:%S')
    fileHandler = logging.FileHandler("summary.log")
    fileHandler.setLevel(logging.DEBUG)
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setLevel(logging.DEBUG)
    streamHandler.setFormatter(formatter)
    app.logger.addHandler(fileHandler)
    app.logger.addHandler(streamHandler)
    app.logger.info("Logging is set up.")
    app.run(host='0.0.0.0', port=8000, threaded=True)
I didn't like the other answers, so I kept at it, and it seems like I had to set up my logging config AFTER Flask did its own setup:
@app.before_first_request
def initialize():
    logger = logging.getLogger("your_package_name")
    logger.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    formatter = logging.Formatter(
        """%(levelname)s in %(module)s [%(pathname)s:%(lineno)d]:\n%(message)s"""
    )
    ch.setFormatter(formatter)
    logger.addHandler(ch)
My app is structured like:

/package_name
    __main__.py <- where I put my logging configuration
    __init__.py <- convenience for myself, not necessary
    /tests
    /package_name <- actual Flask app
        __init__.py
        /views
        /static
        /templates
    /lib

following these directions: http://flask.pocoo.org/docs/0.10/patterns/packages/
Why not take a dive into the code and see...
The module we land on is flask.logging.py, which defines a function named create_logger(app). Inspecting that function will give a few clues as to potential culprits when troubleshooting logging issues with Flask.
EDIT: this answer was meant for Flask prior to version 1. The flask.logging.py module has changed considerably since then. The answer still helps with some general caveats and advice regarding Python logging, but be aware that some of Flask's peculiarities in that regard have been addressed in version 1 and might no longer apply.
The first possible cause of conflicts in that function is this line:
logger = getLogger(app.logger_name)
Let's see why:
The variable app.logger_name is set in the Flask.__init__() method to the value of import_name, which is itself the receiving parameter of Flask(__name__). That is, app.logger_name is assigned the value of __name__, which will likely be the name of your main package; for this example, let's call it 'awesomeapp'.
Now, imagine that you decided to configure and create your own logger manually. What do you think the chances are that, if your project is named "awesomeapp", you would also use that name to configure your logger? I think it's pretty likely.
my_logger = logging.getLogger('awesomeapp') # doesn't seem like a bad idea
fh = logging.FileHandler('/tmp/my_own_log.log')
my_logger.setLevel(logging.DEBUG)
my_logger.addHandler(fh)
It makes sense to do this... except for a few problems.
When the Flask.logger property is invoked for the first time it will in turn call the function flask.logging.create_logger() and the following actions will ensue:
logger = getLogger(app.logger_name)
Remember how you named your logger after the project and how app.logger_name shares that name too? What happens in the line of code above is that the function logging.getLogger() has now retrieved your previously created logger and the following instructions are about to mess with it in a way that will have you scratching your head later. For instance
del logger.handlers[:]
Poof, you just lost all the handlers you may have previously registered with your logger.
Other things happen within the function too; without going into much detail, it creates and registers two logging.StreamHandler objects that can spit out to sys.stderr and/or Response objects, one for log level 'debug' and another for 'production':
class DebugLogger(Logger):
    def getEffectiveLevel(self):
        if self.level == 0 and app.debug:
            return DEBUG
        return Logger.getEffectiveLevel(self)

class DebugHandler(StreamHandler):
    def emit(self, record):
        if app.debug and _should_log_for(app, 'debug'):
            StreamHandler.emit(self, record)

class ProductionHandler(StreamHandler):
    def emit(self, record):
        if not app.debug and _should_log_for(app, 'production'):
            StreamHandler.emit(self, record)

debug_handler = DebugHandler()
debug_handler.setLevel(DEBUG)
debug_handler.setFormatter(Formatter(DEBUG_LOG_FORMAT))

prod_handler = ProductionHandler(_proxy_stream)
prod_handler.setLevel(ERROR)
prod_handler.setFormatter(Formatter(PROD_LOG_FORMAT))

logger.__class__ = DebugLogger
logger.addHandler(debug_handler)
logger.addHandler(prod_handler)
With the above details brought to light, it should become clearer why our manually configured logger and handlers misbehave when Flask gets involved. The new information gives us new options, though. If you still want to keep separate handlers, the simplest approach is to name your logger something different from the project (e.g. my_logger = getLogger('awesomeapp_logger')). Another approach, if you want to be consistent with the logging protocols in Flask, is to register a logging.FileHandler object on Flask.logger using a similar approach to Flask's own:
import logging

def set_file_logging_handler(app):
    logging_path = app.config['LOGGING_PATH']

    class DebugFileHandler(logging.FileHandler):
        def emit(self, record):
            # if your app is configured for debugging
            # and the logger has been set to DEBUG level (the lowest)
            # push the message to the file
            if app.debug and app.logger.level == logging.DEBUG:
                super(DebugFileHandler, self).emit(record)

    debug_file_handler = DebugFileHandler('/tmp/my_own_log.log')
    app.logger.addHandler(debug_file_handler)

app = Flask(__name__)
# the config presumably has the debug settings for your app
app.config.from_object(config)
set_file_logging_handler(app)
app.logger.info('show me something')
Logging Quick start
Note: this code alone will not work with more than one log file inside a class or across imports.

import logging
import os  # for the current working directory path

path = os.getcwd()
logFormatStr = '%(asctime)s %(levelname)s - %(message)s'
logging.basicConfig(filename=os.path.join(path, 'logOne.log'), format=logFormatStr, level=logging.DEBUG)
logging.info('default message')
For multiple log files, create an instance of a logger using the logging.getLogger() method: each log file requires its own logger instance, and we cannot create multiple log files with the same instance. Create the new logger instance with __name__ or a hardcoded string; __name__ is preferred, since it identifies the exact module the call comes from:
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
The logging levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL:
DEBUG----Detailed information, typically of interest only when diagnosing problems.
INFO----Confirmation that things are working as expected.
WARNING----An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.
ERROR----Due to a more serious problem, the software has not been able to perform some function.
CRITICAL----A serious error, indicating that the program itself may be unable to continue running.
Create a new Formatter:
formatter = logging.Formatter('%(asctime)s %(levelname)s - %(message)s')
Create a new FileHandler:
file_handler = logging.FileHandler(os.path.join(path, 'logTwo.log'))
Set the formatter on the FileHandler and add the file_handler to the logger instance:
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
Add a message to logTwo.log via the logger instance, and to logOne.log via the root logger, each subject to its respective level:
logger.info("message for logTwo")
logging.debug("message for logOne")
For more details, see the Python logging documentation.