Previously I used this logging pattern:
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
fh = logging.FileHandler("logs.log", 'w', encoding="utf-8")
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
log.addHandler(fh)
And my log file had these messages:
2019-08-21 11:08:08,271 - INFO - Started
2019-08-21 11:08:08,271 - INFO - Connecting to Google Sheets...
2019-08-21 11:08:11,857 - INFO - Successfuly connected to Google Sheet
2019-08-21 11:08:11,869 - ERROR - Not found: 'TG'
2019-08-21 11:08:11,869 - DEBUG - Getting values from Sheets...
2019-08-21 11:08:12,452 - DEBUG - Got new event row: "Flex - Flex"
2019-08-21 11:08:12,453 - DEBUG - Done. Values:
...
It looked ugly, so I changed it to this:
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='logs.log', filemode='w'
)
log = logging.getLogger()
log = logging.getLogger()
Now my log file looks like this:
2019-08-21 11:14:02,374 - INFO - Started
2019-08-21 11:14:02,374 - INFO - Connecting to Google Sheets...
2019-08-21 11:14:02,406 - DEBUG - [b'eyJ0eX...jcifQ', b'eyJ...NvbSJ9', b'f7BQ...dE2w']
2019-08-21 11:14:02,407 - INFO - Refreshing access_token
2019-08-21 11:14:03,448 - DEBUG - Starting new HTTPS connection (1): www.googleapis.com:443
2019-08-21 11:14:04,447 - DEBUG - https://www.googleapis.com:443 "GET /drive/v3/files?q=mimeType%3D%27application%2Fvnd.google-apps.spreadsheet%27&pageSize=1000&supportsTeamDrives=True&includeTeamDriveItems=True HTTP/1.1" 200 None
2019-08-21 11:14:04,450 - DEBUG - Starting new HTTPS connection (1): sheets.googleapis.com:443
2019-08-21 11:14:05,782 - DEBUG - https://sheets.googleapis.com:443 "GET /v4/spreadsheets/1q6...cTI?includeGridData=false HTTP/1.1" 200 None
2019-08-21 11:14:05,899 - INFO - Successfuly connected to Google Sheet
2019-08-21 11:14:05,901 - ERROR - Not found: 'TG'
2019-08-21 11:14:05,902 - DEBUG - Getting values from Sheets...
2019-08-21 11:14:06,426 - DEBUG - https://sheets.googleapis.com:443 "GET /v4/spreadsheets/1q6...cTI/values/%D0%9B%D0%B8%D1%81%D1%821 HTTP/1.1" 200 None
2019-08-21 11:14:06,543 - DEBUG - Got new event row: xxx
2019-08-21 11:14:06,544 - DEBUG - Done. Values: xxx
2019-08-21 11:14:06,544 - DEBUG - Getting line...
2019-08-21 11:14:06,550 - DEBUG - Starting new HTTPS connection (1): api.site.com:443
2019-08-21 11:14:07,521 - DEBUG - https://api.site.com:443 "GET /v1/fix...?Id=33 HTTP/1.1" 200 6739
I'm receiving some debug logs from requests that I never produce in my code.
How do I turn them off?
I found that this is because of the requests module.
The reason for all the log messages from the requests module is the piece of code below:
logging.basicConfig(
    level=logging.DEBUG  # This sets the root logger to DEBUG
)
logging.basicConfig changes the configuration of the root logger of your program.
Since you've used the requests module here, and requests uses urllib3, it is urllib3 that prints those debug messages.
Fix:
You can use your first logger-initialisation code to configure your logger, or you can use the code below:
logging.basicConfig(
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='logs.log', filemode='w'
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) # Here you are changing the level of your logger alone
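Another common approach, if you'd rather keep DEBUG on the root logger for your own messages, is to raise the level of the library loggers themselves. A minimal sketch, covering the logger names mentioned in this thread:

```python
import logging

# Keep full DEBUG output for your own code on the root logger ...
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
)

# ... but raise the level of the chatty library loggers, so only their
# WARNING-and-above records reach the root handler. Child loggers such
# as "urllib3.connectionpool" inherit this level automatically.
for name in ("urllib3", "requests"):
    logging.getLogger(name).setLevel(logging.WARNING)
```

With this in place, the HTTPS-connection debug lines from urllib3 are filtered out at the library loggers before they can propagate to the root handler.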
You can try the following code. In this case you don't change the basic config of the logging module; you configure only your own logger instance obtained from logging.getLogger. With this implementation, other modules shouldn't be affected. Furthermore, you can handle the console and the file separately, which makes the logger more configurable.
Code:
import logging
# Create a custom logger
logger = logging.getLogger(__name__)
# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler("logs.log", "w", encoding="utf-8")
c_handler.setLevel(logging.INFO)
f_handler.setLevel(logging.DEBUG)
# Create formatters and add it to handlers
c_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
f_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)
# Add handlers to the logger
logger.addHandler(c_handler)
logger.addHandler(f_handler)
logger.warning("This is a warning")
logger.error("This is an error")
Output:
$ python test.py
2019-08-21 08:55:10,579 - WARNING - This is a warning
2019-08-21 08:55:10,580 - ERROR - This is an error
As stated in @noufel13's answer, the reason for the additional log messages showing up is that you set the root logger's level to DEBUG and add a handler to it.
By default, importing logging instantiates a RootLogger object with the level WARNING and no handlers attached.
Importing requests instantiates a bunch of library loggers, in particular
urllib3.util.retry, urllib3.util, urllib3, urllib3.connection, urllib3.response, urllib3.connectionpool, urllib3.poolmanager and requests.
This is done so that developers using these libraries can easily enable debug output to test and troubleshoot their code, or simply get logging output without modifying third-party code, just by configuring the root logger accordingly.
All these loggers are created with default values, the relevant being level NOTSET, propagate True and disabled False.
Level NOTSET (emphasis mine):
[...] causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger
propagate True:
[...] events logged to this logger will be passed to the handlers of higher level (ancestor) loggers [...] The constructor sets this attribute to True
disabled False:
disabled is an undocumented attribute of the Logger class
These loggers always exist once you import requests in your program, and they always emit their log messages.
These messages are not visible by default because all of these loggers only have a NullHandler attached ...
This handler does nothing. It's intended to be used to avoid the
"No handlers could be found for logger XXX" one-off warning. This is
important for library code, which may contain code to log events. If a user
of the library does not configure logging, the one-off warning might be
produced; to avoid this, the library developer simply needs to instantiate
a NullHandler and add it to the top-level logger of the library module or
package.
... and because their ultimate ancestor, the root logger, also comes without any handlers and is set to level WARNING by default.
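In library code, the convention the docstring describes is a one-liner. A sketch, where the package name `mylib` is purely illustrative:

```python
import logging

# A library attaches a NullHandler to its top-level logger so that,
# if the application never configures logging, records are silently
# discarded instead of triggering the "No handlers could be found" warning.
logging.getLogger("mylib").addHandler(logging.NullHandler())

# Library modules then log as usual; nothing is printed until the
# application configures the root logger or "mylib" itself.
logging.getLogger("mylib.submodule").debug("invisible by default")
```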
basicConfig only affects the root logger. The way you use it, a specific Formatter is created and attached to a newly instantiated FileHandler that, in turn, is attached to the root logger. Also, the root logger's level is set to DEBUG.
Now, all the messages from the requests and urllib3 loggers, after traversing the hierarchy, end up at the root logger and are written to the file via the root's FileHandler.
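The propagation described above can be observed with a small self-contained sketch (the logger name is chosen for illustration):

```python
import logging

# Configure only the root logger, much as basicConfig would.
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler())

# A library-style logger: no handlers of its own, level NOTSET.
lib = logging.getLogger("somelib.module")

# Its records travel up the dotted-name hierarchy until the
# root logger's handler emits them.
lib.debug("emitted by the root logger's StreamHandler")
```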
How to stop that
As per your original recipe, and as outlined by @milanbalazs, continue to create and configure dedicated loggers for your intended purpose and leave the root logger alone.
You state that you find programmatic configuration rather ugly; I somewhat agree.
The logging.config module offers various ways for you to configure your logging.
For example, you could use dictConfig() and provide the configuration via a dictionary that you could potentially import from another, dedicated module.
The following configures a logger named test the same way your basicConfig() approach does for the root logger:
import logging.config

cfg_dict = {
    "version": 1,
    "formatters": {
        "default": {
            "format": '%(asctime)s - %(levelname)s - %(message)s',
        }
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "formatter": "default",
            "filename": "logs.log",
            "mode": "w",
        }
    },
    "loggers": {
        "test": {
            "level": "DEBUG",
            "handlers": ["file"],
        }
    }
}

logging.config.dictConfig(cfg_dict)
log = logging.getLogger("test")
The logging.config module further supports configuration files via fileConfig(), and a socket listener that accepts configuration data in the formats suitable for dict and file configuration.
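For completeness, a sketch of the same test-logger configuration in the fileConfig() format; the file name logging.ini is illustrative, and the config is written out programmatically here only so the example is self-contained:

```python
import logging
import logging.config
import textwrap

# Hypothetical logging.ini, equivalent to the dictConfig example above.
ini = textwrap.dedent("""\
    [loggers]
    keys=root,test

    [handlers]
    keys=file

    [formatters]
    keys=default

    [logger_root]
    level=WARNING
    handlers=

    [logger_test]
    level=DEBUG
    handlers=file
    qualname=test
    propagate=0

    [handler_file]
    class=FileHandler
    formatter=default
    args=("logs.log", "w")

    [formatter_default]
    format=%(asctime)s - %(levelname)s - %(message)s
    """)

with open("logging.ini", "w", encoding="utf-8") as f:
    f.write(ini)

logging.config.fileConfig("logging.ini", disable_existing_loggers=False)
log = logging.getLogger("test")
log.debug("configured via fileConfig")
```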
If you'd rather configure and use the root logger, you can either disable all "unwanted" loggers or instruct them not to propagate.
import logging
import requests
for logger in logging.Logger.manager.loggerDict.values():
    logger.propagate = False
    # -- OR --
    logger.disabled = True

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='logs.log',
    filemode='w'
)
log = logging.getLogger()
Note: Both the Manager's loggerDict and the Logger's disabled attributes are undocumented. While they are not explicitly marked as internal implementation details via Python's naming convention (i.e. a leading-underscore name), I would hesitate to consider them part of the official logging API and thus expect them to be potentially subject to change.
Related
I have the following minimalist example of a logging test based on the Logging Cookbook:
import logging
logger = logging.getLogger('test')
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - '
'%(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
print(logger.handlers)
logger.debug('hello world')
The above produces the following output:
$ python test_log.py
[<StreamHandler <stderr> (DEBUG)>]
As I've defined a handler and set the log level to debug, I was expecting the hello world log message to show up in the sample above.
If a logger's level isn't explicitly set, the system looks up the level of ancestor loggers until it gets to a logger whose level is explicitly set. In this case, it's the root logger which is the parent of the logger named 'test'. Setting the level of either this logger or the root logger to DEBUG will result in the log message being output. See this part of the documentation for the event information flow in Python logging.
So if we call logger.debug('hello world') we are generating a DEBUG event. However, the logger acts as a first filter before passing the event to its handlers. So if the logger's level is of higher severity (like INFO), it filters out the event and the handler never receives it, even if the event matched the handler's level (DEBUG).
So the logger's level should be as low as the lowest level of its handlers. If you don't want the rest of the handlers to log DEBUG messages, you need to explicitly set their level higher instead of leaving it unset and therefore effectively inherited from the logger.
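Applied to the snippet above, the fix the answer describes is a single extra line, setting the logger's own level (setting the root logger's level instead would also work):

```python
import logging

logger = logging.getLogger('test')
logger.setLevel(logging.DEBUG)  # without this, the effective level is the root's WARNING

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(ch)

logger.debug('hello world')  # now passes the logger's filter and reaches the handler
```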
I am logging to a log file in a Flask application under gunicorn and nginx using the following setup:
import logging
from logging.handlers import RotatingFileHandler

def setup_logging():
    stream_handler = logging.StreamHandler()
    formatter = logging.Formatter('[%(asctime)s][PID:%(process)d][%(levelname)s][%(name)s.%(funcName)s()] %(message)s')
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel("DEBUG")
    logging.getLogger().addHandler(stream_handler)

    file_handler = RotatingFileHandler("log.txt", maxBytes=100000, backupCount=10)
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    file_handler.setLevel("DEBUG")
    logging.getLogger().addHandler(file_handler)
    logging.getLogger().setLevel("DEBUG")
Then I initialize logging prior to creating the app:
setup_logging()
def create_app(config_name):
    app = Flask(__name__)
Then in modules I am logging using:
import logging
logger = logging.getLogger(__name__)
x = 2
logger.debug('x: {0}'.format(x))
Logging works OK on my local machine - both to stdout and log.txt
However when I run the application on a remote server nothing gets written to log.txt. I have deployed as a user with read and write permission on log.txt on the remote system.
I have tried initializing the app on the remote server with DEBUG = True; still nothing is written to the log file. The only way I can view any logs is by looking at the /var/log/supervisor/app-stdout---supervisor-nnn.log files, but these don't show all logging output.
Using the answer from HolgerShurig here (Flask logging - Cannot get it to write to a file on the server log file) I get only named-logger output (i.e. no output from module-level logging):
2017-10-21 00:32:45,125 - file - DEBUG - Debug FILE
Running the same code on my local machine I get:
2017-10-21 08:35:39,046 - file - DEBUG - Debug FILE
2017-10-21 08:35:42,340 - werkzeug - INFO - * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
2017-10-21 08:38:46,236 [MainThread ] [INFO ] 127.0.0.1 - - [21/Oct/2017 08:38:46] "[37mGET /blah/blah HTTP/1.1[0m" 200 -
I then changed the logging config to:
def setup_logging(app):
    stream_handler = logging.StreamHandler()
    formatter = logging.Formatter('[%(asctime)s][PID:%(process)d][%(levelname)s][%(lineno)s][%(name)s.%(funcName)s()] %(message)s')
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel(Config.LOG_LEVEL)
    app.logger.addHandler(stream_handler)

    file_handler = RotatingFileHandler(Config.LOGGING_FILE, maxBytes=Config.LOGGING_MAX_BYTES, backupCount=Config.LOGGING_BACKUP_COUNT)
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    file_handler.setLevel(Config.LOG_LEVEL)

    loggers = [app.logger]
    for logger in loggers:
        logger.addHandler(file_handler)

    app.logger.setLevel(Config.LOG_LEVEL)
    app.logger.debug('this message should be recorded in the log file')
and call it just after creating the Flask app:
setup_logging(app)
In each module I am still using:
import logging
logger = logging.getLogger(__name__)
# for example
def example():
    logger.debug('debug')
    logger.info('info')
    logger.warn('warn')
When I run the application on the server with:
gunicorn manage:app
The only thing printed in the log.txt file is:
2017-10-21 02:48:32,982 DEBUG: this message should be recorded in the log file [in /../../__init__.py:82]
But locally, MainThread processes are shown as well.
Any ideas?
If your configuration works on your local machine but not on your remote server, then your problem is about permissions on the file or the directory where the log file resides.
Here's something that could help you.
Besides that, here's a Gist which could give another perspective regarding logging configuration for Flask applications.
I'm building a Python application that also has a web interface, using the Flask web framework.
It runs on Flask's internal server in debug/dev mode, and in production it runs on Tornado as a WSGI container.
This is how I've set up my logger:
import logging
import logging.handlers
import sys

log_formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(message)s')
file_handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=10)
file_handler.setFormatter(log_formatter)
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(log_formatter)
log = logging.getLogger('myAppLogger')
log.addHandler(file_handler)
log.addHandler(console_handler)
To add my logger to the Flask app I tried this:
app = Flask('system.web.server')
app.logger_name = 'myAppLogger'
But the logs still go to Flask's default log handler, and in addition I couldn't find out how to customize the log handlers for the Tornado web server either.
Any help is much appreciated, thanks in advance.
AFAIK, you can't change the default logger in Flask. You can, however, add your handlers to the default logger:
app = Flask('system.web.server')
app.logger.addHandler(file_handler)
app.logger.addHandler(console_handler)
Regarding my comment above - "Why would you want to run Flask in tornado ...", ignore that. If you are not seeing any performance hit, then clearly there's no need to change your setup.
If, however, in future you'd like to migrate to a multithreaded container, you can look into uwsgi or gunicorn.
I managed to do it with multiple handlers, each doing their own thing, so that ERROR logs will not also show up in the INFO log and end up as duplicate entries:
app.py
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)
# Set format that both loggers will use:
formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
# Set logger A for known errors
log_handler = RotatingFileHandler('errors.log', maxBytes=10000, backupCount=1)
log_handler.setFormatter(formatter)
log_handler.setLevel(logging.INFO)
a = logging.getLogger('errors')
a.addHandler(log_handler)
# Set logger B for bad exceptions
exceptions_handler = RotatingFileHandler('exceptions.log', maxBytes=10000, backupCount=1)
exceptions_handler.setFormatter(formatter)
exceptions_handler.setLevel(logging.ERROR)
b = logging.getLogger('exceptions')
b.addHandler(exceptions_handler)
...
whatever_file_where_you_want_to_log.py
import logging
import traceback
# Will output known error messages to 'errors.log'
logging.getLogger('errors').error("Cannot connect to database, timeout")
# Will output the full stack trace to 'exceptions.log', when trouble hits the fan
logging.getLogger('exceptions').error(traceback.format_exc())
I've created a TimedRotatingFileHandler for the Flask server, but all the logs generated by werkzeug are still thrown to the console.
How can I redirect these logs to the log file (test.log) with the log-rotating functionality?
Code snippet:
import logging
import logging.handlers

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
# add a file handler
fh = logging.handlers.TimedRotatingFileHandler("test.log",when='M',interval=1,backupCount=0)
fh.setLevel(logging.DEBUG)
# create a formatter and set the formatter for the handler.
frmt = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
fh.setFormatter(frmt)
# add the Handler to the logger
log.addHandler(fh)
app.run(host='0.0.0.0', debug=True)
The below logs are still thrown on the console.
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
192.168.1.6 - - [25/Jun/2015 07:11:13] "GET / HTTP/1.1" 200 -
192.168.1.6 - - [25/Jun/2015 07:11:13] "GET /static/js/jquery-1.11.2/jquery-1.11.2.min.js HTTP/1.1" 304 -
192.168.1.6 - - [25/Jun/2015 07:11:13] "GET /static/js/main/main.js HTTP/1.1" 304 -
192.168.1.6 - - [25/Jun/2015 07:11:13] "GET /favicon.ico HTTP/1.1" 404 -
Adding the log-file handler to the werkzeug logger does not solve the problem either:
logging.getLogger('werkzeug').addHandler(fh)
To change werkzeug logging you need to override logging.root.handlers, like:
import logging
from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler('log.log', maxBytes=10000, backupCount=1)
handler.setLevel(logging.ERROR)
logging.root.handlers = [handler]
See here for details.
It looks like werkzeug sends its messages to the root logger rather than a named logger.
If you try this, it should start logging to your file:
logging.getLogger().addHandler(fh)
If you want it to stop logging to the console then you'll have to remove the stream handler from the root logger. You can see the handlers for the root logger by using
>>> print logging.getLogger().handlers
[<logging.StreamHandler object at 0x.....>, <logging.handlers.TimedRotatingFileHandler object at 0x.....>]
In order to direct the Flask server/app log to a file handler, and also send werkzeug to the same file handler, you may want to do something like the following:
handler = RotatingFileHandler('test.log', maxBytes=1000000, backupCount=5)
handler.setLevel(logging.DEBUG)
app.logger.addHandler(handler)
log = logging.getLogger('werkzeug')
log.setLevel(logging.DEBUG)
log.addHandler(handler)
app.run(host='0.0.0.0', debug=True)
addHandler alone will not solve this problem; a logger can have many handlers. Configure the werkzeug logger explicitly instead:
from logging.config import dictConfig

dictConfig({
    'version': 1,
    'handlers': {
        'file.handler': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'test.log',
            'maxBytes': 4194304,  # 4 MB
            'backupCount': 10,
            'level': 'DEBUG',
        },
    },
    'loggers': {
        'werkzeug': {
            'level': 'DEBUG',
            'handlers': ['file.handler'],
        },
    },
})
By the way, this is not a problem at all. In a development environment, logs in the console are convenient; in production, applications will not run in a console anyway.
I was able to find the main issue: app.run(debug=True) should be replaced by app.run(), because Flask's werkzeug uses the same mechanism to show the beautified error page in the browser.
I have a pretty weird problem with the logging facility I use inside Django/Python. Logging has not worked since I upgraded to Django 1.3. It seems to be related to the logging level and the DEBUG setting in the settings.py file.
1) When I log INFO messages and DEBUG = False, the logging won't happen; my file doesn't get appended.
2) When I log WARNING messages and DEBUG = False, the logging works perfectly the way I want; the file gets appended.
3) When I log INFO messages and DEBUG = True, the logging seems to work; the file gets appended.
How can I log INFO messages with DEBUG = False? It worked before Django 1.3... is there some mysterious setting somewhere that does the trick? Below is some sample code:
views.py:
import logging
import logging.handlers

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/opt/general.log',
                    filemode='a')

def create_log_file(filename, log_name, level=logging.INFO):
    handler = logging.handlers.TimedRotatingFileHandler(filename, 'midnight', 7, backupCount=10)
    handler.setLevel(level)
    formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s', '%a, %Y-%m-%d %H:%M:%S')
    handler.setFormatter(formatter)
    logging.getLogger(log_name).addHandler(handler)

create_log_file('/opt/test.log', 'testlog')
logger_test = logging.getLogger('testlog')
logger_test.info('testing the logging functionality')
With this code the logging does not work in Django 1.3 with DEBUG set to False in the settings.py file. Whereas when I do this:
logger_test.warning('testing the logging functionality')
This works perfectly when DEBUG is set to False. The levels DEBUG and INFO aren't logged, but WARNING, ERROR and CRITICAL are doing their job...
Does anyone have an idea?
Since Django 1.3 contains its own logging configuration, you need to ensure that anything you're doing doesn't clash with it. For example, if the root logger has handlers already configured by Django by the time your module first gets imported, your basicConfig() call won't have any effect.
What you're describing is the normal logging situation - WARNINGs and above get handled, while INFO and DEBUG are suppressed by default. It looks as if your basicConfig() is not having any effect; you should consider replacing your basicConfig() call with the appropriate logging configuration in settings.py, or at least investigate the root logger's level and which handlers are attached to it at the time of your basicConfig() call.
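Since Django 1.3 passes the LOGGING setting to logging.config.dictConfig(), the idiomatic fix is to express the configuration there instead of calling basicConfig() at import time. A sketch, where the handler name and file paths mirror the question and are illustrative:

```python
# settings.py (sketch): Django passes this dict to logging.config.dictConfig()
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
    },
    'handlers': {
        'timed_file': {
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'filename': '/opt/test.log',
            'when': 'midnight',
            'backupCount': 10,
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'testlog': {
            'handlers': ['timed_file'],
            'level': 'INFO',  # INFO records are handled regardless of the DEBUG setting
        },
    },
}
```

Because this runs as part of Django's own logging setup, it cannot be clobbered by Django's defaults the way a late basicConfig() call can.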