How to send Python logs to syslog?

I am using logging to generate logs in my Python (2.7) project. I create one instance of a logger in each Python file, like this: LOGGER = logging.getLogger(__name__).
I want to send all the logs to syslog. The approaches I have found use SysLogHandler and need the address of the syslog socket (/dev/log, /var/run/syslog, etc.) to be specified, and that address can differ between platforms.
Is there a generic way to send the logs to syslog?

As you found, you configure logging to use the SysLogHandler. As you also noted, the address will differ between systems and configurations, so you will need to make it a configurable part of your application.
If you are configuring logging programmatically, you could implement some heuristics to search for the various UNIX domain socket paths you expect to find; see the sketch below.
Note that syslog can also use UDP or TCP sockets to log to a remote syslog server instead of a UNIX domain socket on the local machine.
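A minimal sketch of such a heuristic, assuming the usual socket locations on Linux (/dev/log), macOS (/var/run/syslog) and FreeBSD (/var/run/log); extend the candidate list for your platforms:
import logging
import logging.handlers
import os

def make_syslog_handler():
    # Probe well-known UNIX domain socket paths; the first one that
    # exists wins. The list is an assumption -- adjust as needed.
    for candidate in ('/dev/log', '/var/run/syslog', '/var/run/log'):
        if os.path.exists(candidate):
            return logging.handlers.SysLogHandler(address=candidate)
    # No local socket found: fall back to syslog's UDP port on localhost.
    return logging.handlers.SysLogHandler(address=('localhost', 514))

LOGGER = logging.getLogger(__name__)
LOGGER.addHandler(make_syslog_handler())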

Alternatively, you can send everything to a log file and redirect stdout and stderr through the logging system as well:
import logging
import os
import sys

class StreamToLogger(object):
    """File-like object that redirects writes to a logger instance."""
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.log_level, line.rstrip())

    def flush(self):
        # Required for file-like objects; nothing is buffered here.
        pass

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename=os.path.join('/var/log', 'logfile.log'),
    filemode='a'
)

sys.stdout = StreamToLogger(logging.getLogger('STDOUT'), logging.INFO)
sys.stderr = StreamToLogger(logging.getLogger('STDERR'), logging.ERROR)
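With this redirection in place, ordinary print output and uncaught tracebacks land in the log file alongside regular log records.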

Related

Issue in sending python logs to Splunk using splunk_hec_handler

I am using the Python logging library to push logs to Splunk; the package uses Splunk's HEC (HTTP Event Collector) to do so.
The issue I am facing is that, out of the many logger statements in my application, I want only a selected few to go to Splunk, not all of them.
So I created the method below, which converts string logs into JSON (key/value) and pushes them to Splunk, and I call it right after each logger statement I want pushed. But all the other logger statements, which I don't want sent to Splunk, are getting pushed as well.
Why is this happening?
import json
import logging
import requests
from splunk_hec_handler import SplunkHecHandler

class Test:
    def __init__(self):
        self.logger = logging.getLogger('myapp')

    def method_test(self, url, data, headers):
        response = requests.post(url=url, data=json.dumps(data), headers=abc.headers)
        # I don't want to push this log message to Splunk, but it is also getting pushed
        self.logger.debug(f"request code:{response.request.url} request body:{response.request.body}")
        # I wish to send this log to Splunk
        self.logger.debug(f"response code:{response.status_code} response body:{response.text}")
        log_dic = {'response_code': response.status_code, 'response_body': response.text}
        splunklogger = self.logging_override(log_dic, self.splunk_host,
                                             self.index_token, self.port,
                                             self.proto, self.ssl_verify,
                                             self.source)
        splunklogger.info(log_dic)
        return response

    def logging_override(self, log_dict: dict, splunk_host, index_token, splunk_port,
                         splunk_proto, ssl_ver, source_splnk):
        """
        Logs custom fields in JSON key/value form, as defined in the log_dict
        dictionary, and pushes the logs to the Splunk server.
        """
        splunklogger = logging.getLogger()
        splunklogger.setLevel(logging.INFO)
        stream_handler = logging.StreamHandler()
        basic_dict = {"time": "%(asctime)s", "level": "%(levelname)s"}
        full_dict = {**basic_dict, **log_dict}
        stream_formatter = logging.Formatter(json.dumps(full_dict))
        stream_handler.setFormatter(stream_formatter)
        if not splunklogger.handlers:
            splunklogger.addHandler(stream_handler)
        splunklogger.handlers[0] = stream_handler
        splunk_handler = SplunkHecHandler(splunk_host,
                                          index_token,
                                          port=splunk_port, proto=splunk_proto,
                                          ssl_verify=ssl_ver, source=source_splnk)
        splunklogger.addHandler(splunk_handler)
        return splunklogger
I believe the problem is with your calls to logging.getLogger: when you configure your app logger you specify a logger name, but when you configure the Splunk logger you don't specify one, so you are getting, configuring, and attaching the SplunkHecHandler to the root logger.
As events come in to the lower-level loggers, by default they propagate up to their ancestors (e.g. the root logger) and thus get emitted to Splunk.
I suspect an easy solution would be to look at your logger names: attach the Splunk handler to a dedicated named logger below your component rather than to the root logger, or look into logger propagation; see the sketch below. The logging documentation covers Logger objects and their propagation.
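A minimal sketch of that idea, assuming the same splunk_hec_handler package as in the question (the connection settings are placeholders): attach the handler to a dedicated named logger and disable propagation, so only records logged through that logger reach Splunk.
import logging
from splunk_hec_handler import SplunkHecHandler

# Placeholder connection settings, for illustration only.
splunk_handler = SplunkHecHandler('splunk.example.com', 'hec-token',
                                  port=8088, proto='https')

# Dedicated logger: only events logged here go to Splunk.
splunk_logger = logging.getLogger('myapp.splunk')
splunk_logger.setLevel(logging.INFO)
splunk_logger.addHandler(splunk_handler)
splunk_logger.propagate = False  # keep Splunk records out of the root handlers

app_logger = logging.getLogger('myapp')
app_logger.debug('stays local')             # handled by whatever 'myapp'/root has configured
splunk_logger.info({'response_code': 200})  # goes to Splunk only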

Tornado substituting custom logger on server (not on local computer)

I have a tornado application and a custom logger method. My code to build and use the custom logger is the following:
import logging
import os

def create_logger():
    """
    This function creates the logger functionality to be used throughout the Python application
    :return: bool - true if successful
    """
    # Configuring the logger
    filename = "PythonLogger.log"
    # Change the current working directory to the logs folder, so that the log file is written in it.
    os.chdir(os.path.normpath(os.path.normpath(os.path.dirname(os.path.abspath(__file__)) + os.sep + os.pardir + os.sep + os.pardir + os.sep + 'logs')))
    # Create the log file
    logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w')
    # Creating the logger
    logger = logging.getLogger()
    # Setting the threshold of the logger to NOTSET
    logger.setLevel(logging.NOTSET)
    logger.log(0, 'Logger initialized')
    return True

def log_info_message(msg):
    """
    Utility for message logging with code 20 (INFO)
    :param msg:
    :return:
    """
    return logging.getLogger().log(20, msg)
In the code, I initialize the logger and already write a message to it before the Tornado application initialization:
if __name__ == '__main__':
    # Logger initialization
    create_logger()
    # First log message
    log_info_message('Initiating Python application')

    # Starting Tornado
    tornado.options.parse_command_line()
    # Specifying which app exactly is being started
    server = tornado.httpserver.HTTPServer(test.app)
    server.listen(options.port)
    try:
        if 'Windows_NT' not in os.environ.values():
            server.start(0)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Then let's say my method get of HTTP request is as follows (only interesting lines):
class API(tornado.web.RequestHandler):
    def get(self):
        self.write('Get request ')
        logging.getLogger("tornado.access").log(20, 'Hello')
        logging.getLogger("tornado.application").log(20, '1')
        logging.getLogger("tornado.general").log(20, '2')
        log_info_message('Received a GET request at: ' + datetime.datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))
What I see is a difference between testing locally and testing on the server.
A) Locally, I can see the log message from the initial script run, and the log messages for requests (after the Tornado app initializes), both in my log file and in Tornado's logs.
B) On the server, I only see the first message, not my log messages when GET requests are accepted. I do see Tornado's loggers when there is an error, but otherwise I don't even see the messages produced by Tornado's loggers. I guess that means Tornado is somehow re-initializing the logger, making mine and its three loggers write to some other file (though somehow that does not apply when errors happen?).
I am aware that Tornado uses its own three loggers, but I would like to use mine as well, keep Tornado's, and have them all write to the same file. Basically, reproduce the local behaviour on the server, and also keep it when an error happens, of course.
How could I achieve this?
Thanks in advance!
P.S.: if I add a name to the logger, say logging.getLogger('Example'), and change log_info_message to return logging.getLogger('Example').log(20, msg), Tornado's logger fails and raises an error. So that option destroys its own loggers...
It seems the only problem was that, on the server side, Tornado was setting a higher minimum level for a log message to be written to the log file (a minimum of 40 was required), so logging.getLogger().log(20, msg) would not write to the log file but logging.getLogger().log(40, msg) would.
I would like to understand why, so if anybody knows, your knowledge would be more than welcome. For the time being, that solution is working.
tornado.log defines options that can be used to customise logging via the command line (check tornado.options); one of them is logging, which defines the log level used. You are likely using this on the server and setting it to error.
When debugging logging, I suggest you create a RequestHandler that logs or returns the structure of the existing loggers by inspecting the root logger, as in the sketch below. When you see the structure, it is much easier to understand why it works the way it works.
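A minimal sketch of such a debugging handler (the LoggersHandler name is illustrative; logging.Logger.manager.loggerDict is an internal but long-stable attribute):
import logging
import tornado.web

class LoggersHandler(tornado.web.RequestHandler):
    """Dump each logger's level, handlers and propagate flag as plain text."""
    def get(self):
        root = logging.getLogger()
        lines = ['root: level=%s handlers=%r' % (
            logging.getLevelName(root.level), root.handlers)]
        for name, logger in sorted(logging.Logger.manager.loggerDict.items()):
            if isinstance(logger, logging.PlaceHolder):
                continue  # placeholders have no level/handlers of their own
            lines.append('%s: level=%s handlers=%r propagate=%s' % (
                name, logging.getLevelName(logger.level),
                logger.handlers, logger.propagate))
        self.set_header('Content-Type', 'text/plain')
        self.write('\n'.join(lines))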

stream logging over udp

I have a problem sending a logging stream over UDP to a targeted third-party machine that collects log streams.
Only a couple of initialization lines appear on the remote console. The encoding over UDP should be UTF-8, too.
import logging
import logging.handlers

# setup_logging function body (formatter, name, log_level and SYSLOG_IP
# come from the surrounding code)
def setup_logging(name, formatter, log_level, SYSLOG_IP):
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    socketHandler = logging.handlers.SysLogHandler(address=(SYSLOG_IP, 514))
    socketHandler.setFormatter(formatter)
    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.setLevel(log_level)
        logger.addHandler(handler)
        logger.addHandler(socketHandler)
    return logger
In the end, the remote server receives just the beginning of the logging, probably because it expects to receive UTF-8 over UDP and then parses it.
Is there a way to change the logging encoding to UTF-8, using SysLogHandler or any other logging handler?
Problem solved. The third-party syslog server was parsing based on RFC 3164:
The MSG part has two fields known as the TAG field and the CONTENT field.
So the formatter would be something like
formatter_syslog = logging.Formatter(fmt='foo: %(levelname)s - %(module)s.%(funcName)s - %(message)s')
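For completeness, a sketch of wiring that formatter into a SysLogHandler (the IP is a placeholder; the 'foo:' prefix becomes the RFC 3164 TAG field that the receiver splits on):
import logging
import logging.handlers

# TAG-prefixed format so an RFC 3164 parser can split TAG from CONTENT.
formatter_syslog = logging.Formatter(
    fmt='foo: %(levelname)s - %(module)s.%(funcName)s - %(message)s')

socketHandler = logging.handlers.SysLogHandler(address=('192.0.2.10', 514))
socketHandler.setFormatter(formatter_syslog)

logger = logging.getLogger('udp_demo')
logger.setLevel(logging.INFO)
logger.addHandler(socketHandler)
logger.info('hello syslog')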

Python logging sometimes writing message to file but not format

I'm having a new problem where the logger writes about every other line without the format, just the message.
My code:
from sys import argv

import logging
from logging.handlers import RotatingFileHandler

# Set up logging
LOG_FILE = argv[0][:-3] + '.log'
logging.basicConfig(
    filename=LOG_FILE,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
logger.addHandler(handler)

def main():
    target, command, notify_address, wait = get_args(argv)
    logger.info('Checking status of %s every %d minutes.' % (target, wait))
    logger.info('Running %s and sending output to %s when online.' % (command, notify_address))
The vars returned from get_args() are all strings, even though wait is a number.
Note that I am not receiving any errors in my IDE or when running.
The output I am getting in my log file:
2015-02-12 16:26:27,483 - INFO - Checking status of <ip address> every 30 minutes.
Running <arbitrary bash command string> and sending output to <my email address> when online.
2015-02-12 16:26:27,483 - INFO - Running <arbitrary bash command string> and sending output to <my email address> when online.
What is causing the second logger.info() to print twice, and only once formatted properly?
I have another script that logs perfectly; I have no idea what I've done differently here. (I copy/pasted the logging setup section to be safe.)
Are you using loggers at different levels of your code? It sounds like the log messages could be propagating upwards. Try adding
logger.propagate = False
after you add the handler. You can check out the Python docs for a more detailed explanation, but the relevant text below sounds exactly like what you're seeing.
Note If you attach a handler to a logger and one or more of its ancestors, it may emit the same record multiple times. In general, you should not need to attach a handler to more than one logger - if you just attach it to the appropriate logger which is highest in the logger hierarchy, then it will see all events logged by all descendant loggers, provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.
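In this particular setup, the duplicate most likely comes from having two handlers pointed at the same file: the one basicConfig attaches to the root logger (with the format) and the RotatingFileHandler added to the module logger (with no formatter, hence the bare message). A sketch of one fix, assuming rotation is what you want: configure only the rotating handler, attach the formatter to it, and skip basicConfig entirely.
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

LOG_FILE = argv[0][:-3] + '.log'

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# One handler, with the format attached to it, so each record is written once, formatted.
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
logger.propagate = False  # nothing bubbles up to root handlers

logger.info('Checking status of %s every %s minutes.', '203.0.113.5', 30)  # illustrative values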

How to set source host address in Python Logging?

There is a script, written in Python, which parses sensor data and events from a number of servers via IPMI. It then sends graph data to one server and error logs to another. The logging server is syslog-ng + MySQL.
So the task is to store logs by owner, not by the script host.
Some code example:
import logging
import logging.handlers
loggerCent = logging.getLogger(prodName + 'Remote')
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
formatter = logging.Formatter('%(name)s: %(levelname)s: %(message)s')
loggerCent.setLevel(logging.INFO)
loggerCent.addHandler(ce)
loggerCent.warning('TEST MSG')
So I need to extend the code so I can tell syslog-ng that the log belongs to another host, or find some other solution.
Any ideas?
UPD:
So it looks like there is a way to do this with LoggerAdapter. But how can I use it?
loggerCent = logging.getLogger(prodName + 'Remote')
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)-15s %(name)-5s %(levelname)-8s host: %(host)-15s %(message)s')
loggerCent.addHandler(ce)
loggerCent2 = logging.LoggerAdapter(loggerCent,
                                    {'host': '123.231.231.123'})
loggerCent2.warning('TEST MSG')
Looking at the message with tcpdump, I see nothing about the host from the LoggerAdapter.
What am I doing wrong?
UPD2:
Well, I didn't find a way to send the host to syslog-ng through Python logging. It is possible to send the first host in the chain, but I really couldn't find a way to override it.
Anyway, I made a parser in syslog-ng:
CSV parser:
parser ipmimon_log {
    csv-parser(
        columns("LEVEL", "UNIT", "MESSAGE")
        flags(escape-double-char,strip-whitespace)
        delimiters(";")
        quote-pairs('""[](){}')
    );
};
log {
    source(s_net);
    parser(ipmimon_log);
    destination(d_mysql_ipmimon);
};
log {
    source(s_net);
    destination(d_mysql_norm);
    flags(fallback);
};
Then I send the logs to syslog-ng, delimited by ;
You are missing the critical step of actually adding your Formatter to your Handler. Try:
loggerCent = logging.getLogger(prodName + 'Remote')
loggerCent.setLevel(logging.DEBUG)
ce = logging.handlers.SysLogHandler(address=('192.168.1.11', 514), facility='daemon')
formatter = logging.Formatter('%(host)s;%(message)s')
ce.setFormatter(formatter)
loggerCent.addHandler(ce)
loggerCent2 = logging.LoggerAdapter(loggerCent, {'host': '123.231.231.123'})
loggerCent2.warning('TEST MSG')
Note that you won't be running basicConfig any more, so you will be responsible for attaching a Handler and setting a log level for the root logger yourself (or you will get 'no handler' errors).
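For instance, a minimal sketch of a hand-rolled root-logger setup (the level and handler choice are illustrative):
import logging

# Minimal root-logger configuration so records that propagate to the root
# are handled instead of triggering 'no handlers could be found' warnings.
root = logging.getLogger()
root.setLevel(logging.WARNING)
root.addHandler(logging.StreamHandler())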
