Having a new problem where the logger is writing about every other line without the format, just the message.
My code:
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

# Set up logging
LOG_FILE = argv[0][:-3] + '.log'
logging.basicConfig(
    filename=LOG_FILE,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
logger.addHandler(handler)

def main():
    target, command, notify_address, wait = get_args(argv)
    logger.info('Checking status of %s every %d minutes.' % (target, wait))
    logger.info('Running %s and sending output to %s when online.' % (command, notify_address))
The vars returned from get_args() are all strings, even though wait is a number.
Note that I am not receiving any errors in my IDE or when running.
The output I am getting in my log file:
2015-02-12 16:26:27,483 - INFO - Checking status of <ip address> every 30 minutes.
Running <arbitrary bash command string> and sending output to <my email address> when online.
2015-02-12 16:26:27,483 - INFO - Running <arbitrary bash command string> and sending output to <my email address> when online.
What is causing the second logger.info() to print twice, and only once formatted properly?
I have another script that logs perfectly; I have no idea what I've done differently here. (I copy/pasted the logging setup section to be safe.)
Are you using loggers at different levels of your code? It sounds like the log messages could be propagating upwards. Try adding
logger.propagate = False
after you add the handler. You can check out the Python docs for a more detailed explanation here, but the relevant text below sounds exactly like what you're seeing.
Note If you attach a handler to a logger and one or more of its ancestors, it may emit the same record multiple times. In general, you should not need to attach a handler to more than one logger - if you just attach it to the appropriate logger which is highest in the logger hierarchy, then it will see all events logged by all descendant loggers, provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.
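In this particular snippet, basicConfig() has likely already attached a formatted FileHandler to the root logger, so the extra RotatingFileHandler (which has no formatter of its own) writes a second, unformatted copy of every record that propagates up. A minimal sketch of a cleaned-up setup that keeps only the rotating handler:

from sys import argv
import logging
from logging.handlers import RotatingFileHandler

LOG_FILE = argv[0][:-3] + '.log'
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
# Give the rotating handler the same format basicConfig was providing
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False  # keep records from also reaching any root handlers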
Related
I have a tornado application and a custom logger method. My code to build and use the custom logger is the following:
import logging
import os

def create_logger():
    """
    This function creates the logger functionality to be used throughout the Python application
    :return: bool - true if successful
    """
    # Configuring the logger
    filename = "PythonLogger.log"
    # Change the current working directory to the logs folder, so that the log files are written in it.
    os.chdir(os.path.normpath(os.path.normpath(os.path.dirname(os.path.abspath(__file__)) + os.sep + os.pardir + os.sep + os.pardir + os.sep + 'logs')))
    # Create the logs file
    logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w')
    # Creating the logger
    logger = logging.getLogger()
    # Setting the threshold of the logger to NOTSET
    logger.setLevel(logging.NOTSET)
    logger.log(0, 'The logger is initialized')
    return True

def log_info_message(msg):
    """
    Utility for message logging with code 20
    :param msg:
    :return:
    """
    return logging.getLogger().log(20, msg)
In the code, I initialize the logger and already write a message to it before the Tornado application initialization:
if __name__ == '__main__':
    # Logger initialization
    create_logger()
    # First log message
    log_info_message('Initiating Python application')
    # Starting Tornado
    tornado.options.parse_command_line()
    # Specifying what app exactly is being started
    server = tornado.httpserver.HTTPServer(test.app)
    server.listen(options.port)
    try:
        if 'Windows_NT' not in os.environ.values():
            server.start(0)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Then let's say my method get of HTTP request is as follows (only interesting lines):
class API(tornado.web.RequestHandler):
    def get(self):
        self.write('Get request ')
        logging.getLogger("tornado.access").log(20, 'Hello')
        logging.getLogger("tornado.application").log(20, '1')
        logging.getLogger("tornado.general").log(20, '2')
        log_info_message('Received a GET request at: ' + datetime.datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))
What I see is a difference between local testing and testing on the server.
A) Locally, I can see the log message from the first script run, and the log messages from requests (after initializing the Tornado app), both in my log file and in the Tornado logs.
B) On the server, I only see the first message. I don't see my log messages when GET requests are accepted, and I only see Tornado's loggers when there's an error; otherwise I don't see the messages produced by Tornado's loggers either. I guess that means Tornado is somehow re-initializing the logger and making mine, and its own three, write to some other file (though somehow that does not apply when errors happen??).
I am aware that Tornado uses its own three loggers, but I would like to use mine as well, keep Tornado's, and have them all write to the same file. Basically, I want to reproduce that local behaviour on the server, including when errors happen, of course.
How could I achieve this?
Thanks in advance!
P.S.: if I add a name to the logger, say logging.getLogger('Example'), and change the log_info_message function to return logging.getLogger('Example').log(20, msg), Tornado's loggers fail and raise errors. So that option breaks its own loggers...
It seems the only problem was that, on the server side, Tornado was setting the minimum level for a log message to be written to the log file higher (a minimum of 40 was required). So logging.getLogger().log(20, msg) would not write to the log file, but logging.getLogger().log(40, msg) would.
I would like to understand why, so if anybody knows, your knowledge would be more than welcome. For the time being, that solution is working, though.
tornado.log defines options that can be used to customise logging via the command line (check tornado.options) - one of them is logging, which defines the log level used. You are likely using this on the server and setting it to error.
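For example, if the app is started on the server with something like the following (the entry-point name here is made up):

python server.py --logging=error

then only records of level 40 (ERROR) and above reach the log file, while starting it with --logging=info would let your level-20 messages through.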
When debugging logging I suggest you create a RequestHandler that will log or return the structure of the existing loggers by inspecting the root logger. When you see the structure it is much easier to understand why it works the way it works.
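A sketch of such a debugging handler (the class name and route are made up; it only reads the existing logger tree):

import logging
import tornado.web

class LoggerDebugHandler(tornado.web.RequestHandler):
    def get(self):
        root = logging.getLogger()
        lines = ['root level=%s handlers=%r' % (logging.getLevelName(root.level), root.handlers)]
        # loggerDict holds every named logger created so far (plus placeholders)
        for name, obj in sorted(logging.Logger.manager.loggerDict.items()):
            if isinstance(obj, logging.Logger):
                lines.append('%s level=%s propagate=%s handlers=%r' % (
                    name, logging.getLevelName(obj.level), obj.propagate, obj.handlers))
        self.write('\n'.join(lines))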
I'm writing a Glue ETL job, and I'm trying to log using Python's default logger.
The problem is that all the log messages I'm printing using the logger appear in the error stream of the job.
If I print directly to stdout (using print), I see the printed messages in the regular cloudwatch log stream.
I tried to redirect my logger to stdout, but I still got the same result: the messages appear in the error stream.
Does anyone know how I can use logger, and still see my messages in the log cloudwatch stream? (and not in the error cloudwatch stream)
This is the code sample I'm using to test:
import logging
import sys
MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=MSG_FORMAT, datefmt=DATETIME_FORMAT, stream=sys.stdout)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info("Test log message. This appear on the job error cloudwatch stream")
print("This is a print. This appear on the job log cloudwatch stream")
I ended up adding
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
to my logger definition.
That resolved the problem
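Putting it together, the working version of the snippet above would look roughly like this:

import logging
import sys

MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'

logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Explicit stdout handler instead of relying on basicConfig's default stream
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(MSG_FORMAT, datefmt=DATETIME_FORMAT))
logger.addHandler(handler)

logger.info("Test log message. This should now appear on the job log cloudwatch stream")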
I am writing a script that connects to N hosts via SSH ... queries the 3rd-party system, extracts data, and then displays all the collected data in a certain format.
I want to log all the actions the script executes, as well as any exceptions encountered, both to the console and to a log file, so that the user can see what is happening while the script is running (if you have used Ansible: just like the output we get on the console and in the logs when running playbooks).
Expected output
[timestamp]: connecting machine 1
[timestamp]: connection established
[timestamp]: querying database xyz
[timestamp]: ERR: invalid credentials
[timestamp]: aborting data extraction
[timestamp]: connection closed
[timestamp]: ---------------------------
[timestamp]: connecting machine 2
[timestamp]: connection established
[timestamp]: querying database xyz
[timestamp]: extraction complete
[timestamp]: closing the connection
I hope I was able to explain it correctly: logging actions and exceptions with timestamps for the whole script and all the data iterations.
Please advise, if possible with an example script that uses the technique. Thanks
You can have a look here for some more detailed guidance. Here's how I usually set up logging on my stuff:
import logging
import sys
...
logger = logging.getLogger()
log_handler = logging.StreamHandler(sys.stdout)
log_handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(funcName)s - line %(lineno)d"))
log_handler.setLevel(logging.DEBUG)
logger.addHandler(log_handler)
logger.setLevel(logging.DEBUG)
This will produce output like this, for any event from DEBUG upwards:
2017-05-16 13:30:03,193 - root - INFO - Starting execution - main - line 35
2017-05-16 13:30:03,206 - root - DEBUG - Config file grabbed successfully - readConfig - line 71
...
2017-05-15 13:30:26,792 - root - WARNING - Reached maximum number of attempts (3) for this request; skipping request. - main - line 79
2017-05-15 13:30:26,797 - root - ERROR - Failed to grab item. Unfortunately, this is a showstopper :( - main - line 79
The above is produced by a line in the main function of my app, that reads:
logger.info("Starting execution")
Another line in my readConfig function:
logging.debug("Config file grabbed successfully")
And another two lines in main again:
logging.warning("Reached maximum number of attempts ({max_attempts}) for this request; skipping request.".format(max_attempts = max_tries))
...
logging.error("Failed to grab item. Unfortunately, this is a showstopper :(")
Then it's a matter of how much information and context you need in each log entry. Have a look here at formatting the entries, and here at formatters. I have these sent to me via email whenever the app is triggered by crontab, by adding MAILTO = root to the top of my crontab file and making sure my system email is properly set up.
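For instance, the top of the crontab might look like this (the schedule and path are made up):

MAILTO=root
# run the app hourly; anything it writes to stdout/stderr gets emailed
0 * * * * /usr/bin/python /opt/myapp/main.py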
If you want to set it to go to the console and a file, you'll just need to set two different handlers. This answer provides a good example, where you'd set a StreamHandler to log to the console, and a FileHandler to log to a file. So instead of setting it up as I mentioned above I usually do, you could try:
import logging
...
# Set up logging and formatting
logger = logging.getLogger()
logFormatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(funcName)s - line %(lineno)d")
# Set up the console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logger.addHandler(consoleHandler)
# Set up the file handler (logPath and fileName are assumed to be defined elsewhere)
fileHandler = logging.FileHandler("{0}/{1}.log".format(logPath, fileName))
fileHandler.setFormatter(logFormatter)
logger.addHandler(fileHandler)
# Set up logging levels
consoleHandler.setLevel(logging.DEBUG)
fileHandler.setLevel(logging.DEBUG)
logger.setLevel(logging.DEBUG)
Check out the logging module here; there's a nice example section with both basic and advanced applications. Doing things in the format you've described appears to be covered in the tutorial.
I am trying to add logging to a medium size Python project with minimal disruption. I would like to log from multiple modules to rotating files silently (without printing messages to the terminal). I have tried to modify this example and I almost have the functionality I need except for one issue.
Here is how I am attempting to configure things in my main script:
import logging
import logging.handlers
import my_module
LOG_FILE = 'logs\\logging_example_new.out'
#logging.basicConfig(level=logging.DEBUG)
# Define a Handler which writes messages to rotating log files.
handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=100000, backupCount=1)
handler.setLevel(logging.DEBUG) # Set logging level.
# Create message formatter.
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
# Tell the handler to use this format
handler.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(handler)
# Now, we can log to the root logger, or any other logger. First the root...
logging.debug('Root debug message.')
logging.info('Root info message.')
logging.warning('Root warning message.')
logging.error('Root error message.')
logging.critical('Root critical message.')
# Use a Logger in another module.
my_module.do_something() # Call function which logs a message.
Here is a sample of what I am trying to do in modules:
import logging
def do_something():
logger = logging.getLogger(__name__)
logger.debug('Module debug message.')
logger.info('Module info message.')
logger.warning('Module warning message.')
logger.error('Module error message.')
logger.critical('Module critical message.')
Now, here is my problem. I currently get the messages logged into rotating files silently, but I only get warning, error, and critical messages, despite setting handler.setLevel(logging.DEBUG).
If I uncomment logging.basicConfig(level=logging.DEBUG), then I get all the messages in the log files but I also get the messages printed to the terminal.
How do I get all messages above the specified threshold into my log files without outputting them to the terminal?
Thanks!
Update:
Based on this answer, it appears that calling logging.basicConfig(level=logging.DEBUG) automatically adds a StreamHandler to the root logger, and you can remove it. When I removed it, leaving only my RotatingFileHandler, messages no longer printed to the terminal. I am still wondering why I have to use logging.basicConfig(level=logging.DEBUG) to set the message level threshold when I am already setting handler.setLevel(logging.DEBUG). If anyone can shed a little more light on these issues, it would still be appreciated.
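Removing that console handler might look like this (a sketch; the exact type check assumes basicConfig added a plain StreamHandler):

import logging

root = logging.getLogger()
for h in list(root.handlers):
    # FileHandler subclasses StreamHandler, so match the exact type
    if type(h) is logging.StreamHandler:
        root.removeHandler(h)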
You need to set the logging level on the logger itself as well. I believe that by default, the logging level on the logger is logging.WARNING.
Ex.
root_logger = logging.getLogger('')
root_logger.setLevel(logging.DEBUG)
# Add the handler to the root logger
root_logger.addHandler(handler)
The logger's log level determines what the logger will actually log (i.e. what messages will actually get handed to the handlers). The handler's log level determines what it will actually handle (i.e. what messages are actually output to the file, stream, etc.). So you could potentially have multiple handlers attached to a logger, each handling a different log level.
Here's an SO answer that explains the way this works
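For example, one logger can feed a chatty file and a quiet console (a sketch with made-up names):

import logging

logger = logging.getLogger('app')
logger.setLevel(logging.DEBUG)  # the logger must let records through first

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)    # everything reaches the file

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)  # only errors reach the console

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug('file only')
logger.error('file and console')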
There is a script, written in Python, which parses sensor data and events from a number of servers via IPMI. It then sends graph data to one server and error logs to another. The logging server is syslog-ng + MySQL.
So the task is to store logs by owner, not by the script host.
Some code example:
import logging
import logging.handlers
loggerCent = logging.getLogger(prodName + 'Remote')
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
formatter = logging.Formatter('%(name)s: %(levelname)s: %(message)s')
loggerCent.setLevel(logging.INFO)
loggerCent.addHandler(ce)
loggerCent.warning('TEST MSG')
So I need to extend the code so I can tell syslog-ng that the log belongs to another host, or find some other solution.
Any ideas?
UPD:
So it looks like there is a way to use LoggerAdapter, but how do I use it:
loggerCent = logging.getLogger(prodName + 'Remote')
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)-15s %(name)-5s %(levelname)-8s host: %(host)-15s %(message)s')
loggerCent.addHandler(ce)
loggerCent2 = logging.LoggerAdapter(loggerCent,
{'host': '123.231.231.123'})
loggerCent2.warning('TEST MSG')
Watching the traffic with tcpdump, I see nothing about the host set in the LoggerAdapter in the message.
What am I doing wrong?
UPD2:
Well, I didn't find a way to send the host to syslog-ng. It is possible to send the first host in the chain, but I really can't find a way to send it through Python logging.
Anyway, I made a parser in syslog-ng:
CSV Parser
parser ipmimon_log {
    csv-parser(
        columns("LEVEL", "UNIT", "MESSAGE")
        flags(escape-double-char,strip-whitespace)
        delimiters(";")
        quote-pairs('""[](){}')
    );
};

log {
    source(s_net);
    parser(ipmimon_log);
    destination(d_mysql_ipmimon);
};

log {
    source(s_net);
    destination(d_mysql_norm);
    flags(fallback);
};
Then I send the logs to syslog-ng delimited by ;
You are missing the critical step of actually adding your Formatter to your Handler. Try:
loggerCent = logging.getLogger(prodName + 'Remote')
loggerCent.setLevel(logging.DEBUG)
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
formatter = logging.Formatter('%(host)s;%(message)s')
ce.setFormatter(formatter)
loggerCent.addHandler(ce)
loggerCent2 = logging.LoggerAdapter(loggerCent, {'host': '123.231.231.123'})
loggerCent2.warning('TEST MSG')
Note that you won't be running basicConfig any more, so you will be responsible for attaching a Handler and setting a log level for the root logger yourself (or you will get 'no handler' errors).
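A minimal sketch of that manual root setup (the handler choice here is just an example):

import logging

root = logging.getLogger()
root.setLevel(logging.WARNING)            # whatever level basicConfig would have set
root.addHandler(logging.StreamHandler())  # or a FileHandler, SysLogHandler, etc.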