I have a tornado application and a custom logger method. My code to build and use the custom logger is the following:
def create_logger():
    """
    This function creates the logger functionality to be used throughout the Python application
    :return: bool - true if successful
    """
    # Configuring the logger
    filename = "PythonLogger.log"
    # Change the current working directory to the logs folder, so that the log files are written in it.
    os.chdir(os.path.normpath(os.path.normpath(os.path.dirname(os.path.abspath(__file__)) + os.sep + os.pardir + os.sep + os.pardir + os.sep + 'logs')))
    # Create the log file
    logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w')
    # Creating the logger
    logger = logging.getLogger()
    # Setting the threshold of the logger to NOTSET, so every record is processed
    logger.setLevel(logging.NOTSET)
    logger.log(0, 'The logger is initialized')
    return True
def log_info_message(msg):
    """
    Utility for message logging with level 20 (INFO)
    :param msg:
    :return:
    """
    return logging.getLogger().log(20, msg)
In the code, I initialize the logger and write a first message to it before the Tornado application initializes:
if __name__ == '__main__':
    # Logger initialization
    create_logger()
    # First log message
    log_info_message('Initiating Python application')
    # Starting Tornado
    tornado.options.parse_command_line()
    # Specifying which app exactly is being started
    server = tornado.httpserver.HTTPServer(test.app)
    server.listen(options.port)
    try:
        if 'Windows_NT' not in os.environ.values():
            server.start(0)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Then let's say my GET request method is as follows (only the interesting lines):
class API(tornado.web.RequestHandler):
    def get(self):
        self.write('Get request ')
        logging.getLogger("tornado.access").log(20, 'Hello')
        logging.getLogger("tornado.application").log(20, '1')
        logging.getLogger("tornado.general").log(20, '2')
        log_info_message('Received a GET request at: ' + datetime.datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))
What I see is a difference between local testing and testing on the server.
A) Locally, I can see the log message from the initial script run, and then the log messages from requests (after the Tornado app has initialized), in my log file along with the Tornado logs.
B) On the server, I only see the first message. My log messages from accepted GET requests never appear, and neither do the messages produced by Tornado's loggers, although Tornado's loggers do show up when there's an error. I guess that means Tornado is somehow re-initializing the logger and making mine and its three loggers write to some other file (though somehow that does not apply when errors happen?).
I am aware that Tornado uses its own three loggers, but I would like to use mine as well, alongside Tornado's, and have them all write to the same file. Basically, I want to reproduce that local behaviour on the server, while of course also keeping it when an error happens.
How could I achieve this?
Thanks in advance!
P.S.: if I add a name to the logger, say logging.getLogger('Example'), and change the log_info_message function to return logging.getLogger('Example').log(20, msg), Tornado's loggers fail and raise errors. So that option destroys its own loggers...
It seems the only problem was that, on the server side, Tornado was setting the minimum level for a log message to be written to the log file higher (a minimum of 40 was required). So logging.getLogger().log(20, msg) would not write to the log file, but logging.getLogger().log(40, msg) would.
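For reference, those numeric levels correspond to the named constants in the logging module, so a server-side minimum of 40 means only ERROR and above get written:
import logging

print(logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR)  # 10 20 30 40
print(logging.getLevelName(20), logging.getLevelName(40))  # INFO ERROR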
I would like to understand why, so if anybody knows, your knowledge would be more than welcome. For the time being, that solution is working, though.
tornado.log defines options that can be used to customise logging via the command line (check tornado.options) - one of them is logging, which defines the log level used. You are likely using this on the server and setting it to error.
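For example (a sketch, assuming the entry point is server.py and a port option as in the question):
python server.py --port=8888 --logging=error   # only ERROR (40) and above are written
python server.py --port=8888 --logging=info    # level-20 messages appear again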
When debugging logging, I suggest you create a RequestHandler that logs or returns the structure of the existing loggers by inspecting the root logger. Once you see the structure, it is much easier to understand why it works the way it does.
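A minimal sketch of such a handler (the class name LoggerDebugHandler is my own; logging.Logger.manager.loggerDict is undocumented but commonly used for this kind of inspection):
import logging
import tornado.web

class LoggerDebugHandler(tornado.web.RequestHandler):
    # Dumps the level, effective level, propagate flag and handlers of every logger.
    def get(self):
        root = logging.getLogger()
        lines = ['root: level=%s handlers=%r' % (logging.getLevelName(root.level), root.handlers)]
        for name in sorted(logging.Logger.manager.loggerDict):
            lg = logging.getLogger(name)
            lines.append('%s: level=%s effective=%s propagate=%s handlers=%r' % (
                name, logging.getLevelName(lg.level),
                logging.getLevelName(lg.getEffectiveLevel()),
                lg.propagate, lg.handlers))
        self.set_header('Content-Type', 'text/plain')
        self.write('\n'.join(lines))
Comparing the output locally and on the server should show immediately which level or handler differs.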
I have a problem sending a logging stream over UDP to a targeted third-party machine that collects log streams.
I only got a couple of lines of the init to appear on the remote console. The encoding over UDP should be UTF-8, too.
## setup_logging function body (formatter and SYSLOG_IP are assumed to be defined elsewhere)
import logging
import logging.handlers

def setup_logging(name, log_level):
    # Console handler
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    # Syslog handler sending records over UDP; note address must be a keyword tuple
    socketHandler = logging.handlers.SysLogHandler(address=(SYSLOG_IP, 514))
    socketHandler.setFormatter(formatter)
    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.setLevel(log_level)
        logger.addHandler(handler)
        logger.addHandler(socketHandler)
    return logger
The remote server gets just the beginning of the logging in the end results, probably because it expects the UDP payload to be UTF-8 and then parses it.
Is there a way to change the logging encoding to UTF-8 using SysLogHandler or any other logging handler?
Problem solved. The third-party syslog was parsing based on RFC 3164:
The MSG part has two fields known as the TAG field and the CONTENT field.
So, the formatter would be something like:
formatter_syslog = logging.Formatter(fmt='foo: %(levelname)s - %(module)s.%(funcName)s - %(message)s')
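Putting it together, a minimal sketch (the tag foo and the collector address are placeholders):
import logging
import logging.handlers

SYSLOG_IP = '192.0.2.10'  # placeholder collector address

formatter_syslog = logging.Formatter(fmt='foo: %(levelname)s - %(module)s.%(funcName)s - %(message)s')

syslog_handler = logging.handlers.SysLogHandler(address=(SYSLOG_IP, 514))
syslog_handler.setFormatter(formatter_syslog)

logger = logging.getLogger('udp-demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(syslog_handler)
logger.info('parsed as TAG "foo", the rest as CONTENT')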
Previously I used this logging pattern
import logging

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
fh = logging.FileHandler("logs.log", 'w', encoding="utf-8")
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
log.addHandler(fh)
And my log file had these messages:
2019-08-21 11:08:08,271 - INFO - Started
2019-08-21 11:08:08,271 - INFO - Connecting to Google Sheets...
2019-08-21 11:08:11,857 - INFO - Successfuly connected to Google Sheet
2019-08-21 11:08:11,869 - ERROR - Not found: 'TG'
2019-08-21 11:08:11,869 - DEBUG - Getting values from Sheets...
2019-08-21 11:08:12,452 - DEBUG - Got new event row: "Flex - Flex"
2019-08-21 11:08:12,453 - DEBUG - Done. Values:
...
It looks ugly and I changed it to this:
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s - %(levelname)s - %(message)s',
    filename = 'logs.log', filemode = 'w'
)
log = logging.getLogger()
Now my log file looks like this:
2019-08-21 11:14:02,374 - INFO - Started
2019-08-21 11:14:02,374 - INFO - Connecting to Google Sheets...
2019-08-21 11:14:02,406 - DEBUG - [b'eyJ0eX...jcifQ', b'eyJ...NvbSJ9', b'f7BQ...dE2w']
2019-08-21 11:14:02,407 - INFO - Refreshing access_token
2019-08-21 11:14:03,448 - DEBUG - Starting new HTTPS connection (1): www.googleapis.com:443
2019-08-21 11:14:04,447 - DEBUG - https://www.googleapis.com:443 "GET /drive/v3/files?q=mimeType%3D%27application%2Fvnd.google-apps.spreadsheet%27&pageSize=1000&supportsTeamDrives=True&includeTeamDriveItems=True HTTP/1.1" 200 None
2019-08-21 11:14:04,450 - DEBUG - Starting new HTTPS connection (1): sheets.googleapis.com:443
2019-08-21 11:14:05,782 - DEBUG - https://sheets.googleapis.com:443 "GET /v4/spreadsheets/1q6...cTI?includeGridData=false HTTP/1.1" 200 None
2019-08-21 11:14:05,899 - INFO - Successfuly connected to Google Sheet
2019-08-21 11:14:05,901 - ERROR - Not found: 'TG'
2019-08-21 11:14:05,902 - DEBUG - Getting values from Sheets...
2019-08-21 11:14:06,426 - DEBUG - https://sheets.googleapis.com:443 "GET /v4/spreadsheets/1q6...cTI/values/%D0%9B%D0%B8%D1%81%D1%821 HTTP/1.1" 200 None
2019-08-21 11:14:06,543 - DEBUG - Got new event row: xxx
2019-08-21 11:14:06,544 - DEBUG - Done. Values: xxx
2019-08-21 11:14:06,544 - DEBUG - Getting line...
2019-08-21 11:14:06,550 - DEBUG - Starting new HTTPS connection (1): api.site.com:443
2019-08-21 11:14:07,521 - DEBUG - https://api.site.com:443 "GET /v1/fix...?Id=33 HTTP/1.1" 200 6739
I am receiving some requests debug logs that I didn't produce in my code.
How do I turn them off?
I found out that it is because of the requests module.
The reason for all the log messages from the requests module is the piece of code below:
logging.basicConfig(
    level = logging.DEBUG  # This sets the root logger to DEBUG
)
logging.basicConfig changes the configuration of the root logger of your program.
Since you've used the requests module here, and requests uses urllib3, it is urllib3 that prints those debug messages.
Fix:
You can use your first logger initialisation code to configure your logger; alternatively, you can use the code below:
logging.basicConfig(
    format = '%(asctime)s - %(levelname)s - %(message)s',
    filename = 'logs.log', filemode = 'w'
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # Here you are changing the level of your logger alone
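Another option, if you want to keep the root logger at DEBUG for your own messages, is to raise the level of just the noisy library loggers; a common pattern, not part of the fix above:
import logging

logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s - %(levelname)s - %(message)s',
    filename = 'logs.log', filemode = 'w'
)

# Only WARNING and above from these libraries will reach the file now.
logging.getLogger('urllib3').setLevel(logging.WARNING)
logging.getLogger('requests').setLevel(logging.WARNING)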
You can try the following code. In this case you don't change the basic config of the logging module; you configure only your own instance from logging.getLogger. If you use this implementation, it shouldn't affect other modules. Furthermore, you can handle the console and the file separately, so this logger is more configurable.
Code:
import logging

# Create a custom logger
logger = logging.getLogger(__name__)
# Let the logger pass everything through; the handlers below filter by their own levels.
# (Without this, the logger inherits the root's WARNING level and INFO/DEBUG never reach the handlers.)
logger.setLevel(logging.DEBUG)

# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler("logs.log", "w", encoding="utf-8")
c_handler.setLevel(logging.INFO)
f_handler.setLevel(logging.DEBUG)

# Create formatters and add them to the handlers
c_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
f_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
c_handler.setFormatter(c_format)
f_handler.setFormatter(f_format)

# Add handlers to the logger
logger.addHandler(c_handler)
logger.addHandler(f_handler)

logger.warning("This is a warning")
logger.error("This is an error")
Output:
>>> python test.py
2019-08-21 08:55:10,579 - WARNING - This is a warning
2019-08-21 08:55:10,580 - ERROR - This is an error
As stated in #noufel13's answer, the reason for the additional log messages showing up is that you set the root logger's level to DEBUG and add a handler to it.
By default, importing logging instantiates a RootLogger object with the level WARNING and no handlers attached.
Importing requests instantiates a bunch of library loggers, in particular
urllib3.util.retry, urllib3.util, urllib3, urllib3.connection, urllib3.response, urllib3.connectionpool, urllib3.poolmanager and requests.
This is done so that developers using these libraries can easily enable debug output to test and troubleshoot their code, or just have logging output without modifying third-party code, simply by configuring the root logger accordingly.
All these loggers are created with default values, the relevant being level NOTSET, propagate True and disabled False.
Level NOTSET (emphasis mine):
[...] causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger
propagate True:
[...] events logged to this logger will be passed to the handlers of higher level (ancestor) loggers [...] The constructor sets this attribute to True
disabled False:
disabled is an undocumented attribute of the Logger class
These loggers always exist when you import requests in your program. Also, they always emit their log messages.
These messages are not visible by default because all of these loggers only have a NullHandler attached ...
This handler does nothing. It's intended to be used to avoid the
"No handlers could be found for logger XXX" one-off warning. This is
important for library code, which may contain code to log events. If a user
of the library does not configure logging, the one-off warning might be
produced; to avoid this, the library developer simply needs to instantiate
a NullHandler and add it to the top-level logger of the library module or
package.
... and because their ultimate ancestor, the root logger, also comes without any handlers and is set to level WARNING per default.
basicConfig only affects the root logger. The way you use it, a specific Formatter is created and attached to a newly instantiated FileHandler that, in turn, is attached to the root logger. Also, the root logger's level is set to DEBUG.
Now, all the messages from the requests and urllib3 loggers, after traversing the hierarchy and ending up at the root logger, are logged to the file handled by the root's FileHandler.
How to stop that
As per your original recipe, and as outlined by #milanbalazs, continue to create and configure dedicated loggers for your intended purpose and leave the root logger alone.
You state that you find programmatic configuration rather ugly; I somewhat agree.
The logging.config module offers various ways for you to configure your logging.
For example, you could use dictConfig() and provide the configuration via a dictionary that you could potentially import from another, dedicated module.
The following configures a logger named test the same way your basicConfig() approach does for the root logger:
import logging.config

cfg_dict = {
    "version": 1,
    "formatters": {
        "default": {
            "format": '%(asctime)s - %(levelname)s - %(message)s',
        }
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "formatter": "default",
            "filename": "logs.log",
            "mode": "w",
        }
    },
    "loggers": {
        "test": {
            "level": "DEBUG",
            "handlers": ["file"],
        }
    }
}
logging.config.dictConfig(cfg_dict)
log = logging.getLogger("test")
The logging.config module further supports configuration files via fileConfig(), and a socket listener for configuration data in the formats suitable for dict and file configuration.
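For illustration, a rough fileConfig() equivalent of the dictConfig above might look like this (a sketch; save it as, say, logging.ini and load it with logging.config.fileConfig('logging.ini')):
[loggers]
keys=root,test

[handlers]
keys=file

[formatters]
keys=default

[logger_root]
level=WARNING
handlers=

[logger_test]
level=DEBUG
handlers=file
qualname=test

[handler_file]
class=FileHandler
formatter=default
args=('logs.log', 'w')

[formatter_default]
format=%(asctime)s - %(levelname)s - %(message)s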
If you'd rather configure and use the root logger, you can either disable all "unwanted" loggers or instruct them not to propagate.
import logging
import requests

for logger in logging.Logger.manager.loggerDict.values():
    logger.propagate = False
    # -- OR --
    logger.disabled = True

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='logs.log',
    filemode='w'
)

log = logging.getLogger()
Note: Both the Manager's loggerDict and the Logger's disabled attributes are undocumented. While they are not explicitly marked as internal implementation details via Python's naming convention (i.e. a leading underscore), I would hesitate to consider them part of the official logging API and would thus expect them to be potentially subject to change.
I am trying to add logging to a medium-sized Python project with minimal disruption. I would like to log from multiple modules to rotating files silently (without printing messages to the terminal). I have tried to modify this example, and I almost have the functionality I need, except for one issue.
Here is how I am attempting to configure things in my main script:
import logging
import logging.handlers
import my_module
LOG_FILE = 'logs\\logging_example_new.out'
#logging.basicConfig(level=logging.DEBUG)
# Define a Handler which writes messages to rotating log files.
handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=100000, backupCount=1)
handler.setLevel(logging.DEBUG) # Set logging level.
# Create message formatter.
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
# Tell the handler to use this format
handler.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(handler)
# Now, we can log to the root logger, or any other logger. First the root...
logging.debug('Root debug message.')
logging.info('Root info message.')
logging.warning('Root warning message.')
logging.error('Root error message.')
logging.critical('Root critical message.')
# Use a Logger in another module.
my_module.do_something() # Call function which logs a message.
Here is a sample of what I am trying to do in modules:
import logging

def do_something():
    logger = logging.getLogger(__name__)
    logger.debug('Module debug message.')
    logger.info('Module info message.')
    logger.warning('Module warning message.')
    logger.error('Module error message.')
    logger.critical('Module critical message.')
Now, here is my problem. I currently get the messages logged into rotating files silently, but I only get warning, error, and critical messages, despite setting handler.setLevel(logging.DEBUG).
If I uncomment logging.basicConfig(level=logging.DEBUG), then I get all the messages in the log files, but I also get the messages printed to the terminal.
How do I get all messages above the specified threshold into my log files without outputting them to the terminal?
Thanks!
Update:
Based on this answer, it appears that calling logging.basicConfig(level=logging.DEBUG) automatically adds a StreamHandler to the root logger, and that you can remove it. When I removed it, leaving only my RotatingFileHandler, messages no longer printed to the terminal. I am still wondering why I have to use logging.basicConfig(level=logging.DEBUG) to set the message level threshold when I am already setting handler.setLevel(logging.DEBUG). If anyone can shed a little more light on these issues, it would still be appreciated.
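For reference, a minimal sketch of that removal (my own adaptation of the setup above; the FileHandler check matters because FileHandler subclasses StreamHandler):
import logging
import logging.handlers

LOG_FILE = 'logs\\logging_example_new.out'

logging.basicConfig(level=logging.DEBUG)
root = logging.getLogger('')

# Drop the console StreamHandler that basicConfig attached, keeping file-based handlers.
for h in list(root.handlers):
    if isinstance(h, logging.StreamHandler) and not isinstance(h, logging.FileHandler):
        root.removeHandler(h)

handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=100000, backupCount=1)
handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
root.addHandler(handler)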
You need to set the logging level on the logger itself as well. I believe that by default, the logging level on the logger is logging.WARNING.
Ex.
root_logger = logging.getLogger('')
root_logger.setLevel(logging.DEBUG)
# Add the handler to the root logger
root_logger.addHandler(handler)
The logger's log level determines what the logger will actually log (i.e. what messages get handed to its handlers). The handler's log level determines what it will actually handle (i.e. what messages are actually output to the file, stream, etc.). So you could potentially have multiple handlers attached to a logger, each handling a different log level.
Here's an SO answer that explains the way this works
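A short sketch of the two gates in action (the names are my own):
import logging

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)       # gate 1: the logger passes DEBUG and up to its handlers

console = logging.StreamHandler()
console.setLevel(logging.WARNING)    # gate 2a: the console shows only WARNING and up
logfile = logging.FileHandler('demo.log')
logfile.setLevel(logging.DEBUG)      # gate 2b: the file receives everything the logger passes on

logger.addHandler(console)
logger.addHandler(logfile)

logger.debug('file only')            # passes gate 1, filtered out by the console handler
logger.warning('file and console')   # passes both gates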
I'm having a new problem where the logger writes about every other line without the format, just the message.
My code:
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

# Set up logging
LOG_FILE = argv[0][:-3] + '.log'
logging.basicConfig(
    filename=LOG_FILE,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
logger.addHandler(handler)

def main():
    target, command, notify_address, wait = get_args(argv)
    logger.info('Checking status of %s every %d minutes.' % (target, wait))
    logger.info('Running %s and sending output to %s when online.' % (command, notify_address))
The vars returned from get_args() are all strings, even though wait is a number.
Note that I am not receiving any errors in my IDE or when running.
The output I am getting in my log file:
2015-02-12 16:26:27,483 - INFO - Checking status of <ip address> every 30 minutes.
Running <arbitrary bash command string> and sending output to <my email address> when online.
2015-02-12 16:26:27,483 - INFO - Running <arbitrary bash command string> and sending output to <my email address> when online.
What is causing the second logger.info() call to print twice, and only once formatted properly?
I have another script that logs perfectly; I have no idea what I've done here. (I copy/pasted the logging setup section to be safe.)
Are you using loggers at different levels of your code? It sounds like the log messages could be propagating upwards. Try adding
logger.propagate = False
after you add the handler. You can check out the Python docs for a more detailed explanation here, but the relevant text below sounds exactly like what you're seeing.
Note If you attach a handler to a logger and one or more of its ancestors, it may emit the same record multiple times. In general, you should not need to attach a handler to more than one logger - if you just attach it to the appropriate logger which is highest in the logger hierarchy, then it will see all events logged by all descendant loggers, provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.
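Applied to the setup in the question, that "handlers only on the root logger" scenario would look roughly like this (a sketch; it replaces both the basicConfig call and the extra handler on the module logger):
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

LOG_FILE = argv[0][:-3] + '.log'

# One formatted handler at the top of the hierarchy; records from every
# module-level logger propagate up to it and are written exactly once.
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))

root = logging.getLogger('')
root.setLevel(logging.INFO)
root.addHandler(handler)

logger = logging.getLogger(__name__)
logger.info('written once, with the format applied')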