I have a Python library for which I want to use structured logging. I've been targeting python-json-logger, which automatically converts a dict passed to a logger's extra kwarg to JSON:
In [1]: import logging
In [2]: from pythonjsonlogger import jsonlogger
In [3]: logger = logging.getLogger()
In [4]: handler = logging.StreamHandler()
In [5]: formatter = jsonlogger.JsonFormatter()
In [6]: handler.setFormatter(formatter)
In [7]: logger.addHandler(handler)
In [8]: logger.warning("test", extra={"username": "jashugan"})
{"message": "test", "username": "jashugan"}
This means that if the application using my library wants JSON-formatted logs, it can configure its logger as above.
However, if the application using my library doesn't do any configuration, then the information I'm storing in extra gets completely lost:
In [1]: import logging
In [2]: logger = logging.getLogger()
In [3]: logger.warning("test", extra={"username": "jashugan"})
test
Is there a way that I could use extra to create structured logs and have the information show up automatically if the application doesn't configure logging to render it specifically?
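One possible approach, sketched below, is for the library to install a fallback formatter only when the application hasn't configured logging itself. This is not a python-json-logger feature; the ExtraAwareFormatter class and the "mylib" logger name are illustrative assumptions. It works by comparing a record's attributes against the set every LogRecord carries, so anything left over must have come from extra:
import logging

# Attributes present on every LogRecord; anything else came from `extra`.
_STANDARD_ATTRS = set(vars(logging.LogRecord(
    "x", logging.INFO, __file__, 0, "", (), None)))
_STANDARD_ATTRS |= {"message", "asctime", "taskName"}

class ExtraAwareFormatter(logging.Formatter):
    """Append any extra fields to the plain-text message."""
    def format(self, record):
        base = super().format(record)
        extras = {k: v for k, v in vars(record).items()
                  if k not in _STANDARD_ATTRS}
        return f"{base} {extras}" if extras else base

# Library code: attach the fallback only if the app configured nothing.
logger = logging.getLogger("mylib")
if not logging.getLogger().handlers:
    handler = logging.StreamHandler()
    handler.setFormatter(ExtraAwareFormatter("%(message)s"))
    logger.addHandler(handler)

logger.warning("test", extra={"username": "jashugan"})
# test {'username': 'jashugan'}
Whether this is desirable is a design choice: an application that later configures python-json-logger on the root logger would see each record twice unless the fallback handler is removed.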
Related
I have a Python script that gets data from a website, loads it into a CSV file, and then loads the CSV data into a database.
I want to write the log information to a file, so I have added the logging module to capture timestamps and other info.
But I'm not sure how to log:
whether the file loaded or not, and
what caused an error when loading data into the table.
Script:
import logging
import pandas as pd
from sqlalchemy import create_engine
from urllib.parse import quote

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='myapp.log',
                    filemode='w')

logging.debug('A debug message')
logging.info('Connected to DB')
logging.error('This is an error message')

db_connection_str = "DB_Connection"  # placeholder for the real connection string
ds_connection = create_engine(db_connection_str)

a = pd.read_html("https://www.pininterest.com")
df = pd.DataFrame(a[0])
df_cd = pd.read_csv('c:/users/mohan/file.csv')
df_final = pd.read_csv('file.csv')
df_final.to_sql('Table', ds_connection, index=False)
logging.info('Completed Successfully')
Please suggest whether the logging I have set up is correct, and any improvements.
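To capture whether the load succeeded and what caused a failure, one common pattern is to wrap the load in try/except and use logging.exception, which logs at ERROR level and appends the full traceback. This is a sketch reusing df_final and ds_connection from the script above:
import logging

try:
    df_final.to_sql('Table', ds_connection, index=False)
except Exception:
    # logging.exception records the traceback, so the log file
    # shows exactly why the load into the table failed.
    logging.exception('Failed to load data into the table')
else:
    logging.info('File loaded into the table successfully')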
I would like to add a dictionary as an additional object in the logs.
I have a dictionary which contains the 'message'.
This is my approach:
import logging

message = {'message': body_dict['message']}  # body_dict comes from elsewhere in the application

logger_with_message_details = logging.getLogger()
handler = logging.StreamHandler()
# Formatter takes a format string, not a dict
json_formatter = logging.Formatter('{"message": "%(message)s"}')
handler.setFormatter(json_formatter)
logger_with_message_details.addHandler(handler)

logger_with_message_details = logging.LoggerAdapter(logger_with_message_details, message)
logger_with_message_details.info("Message details.")
logger_with_message_details.info("Message details extracted.", extra=message)
I tried both log calls, but neither output contains the object :(
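For comparison, here is a minimal working sketch (the logger name "demo" and the details payload are placeholders). With the stock logging.Formatter, fields passed via extra only appear in the output if the format string references them by name:
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
# %(details)s picks up the key supplied via extra below
handler.setFormatter(logging.Formatter('{"message": "%(message)s", "details": "%(details)s"}'))
logger.addHandler(handler)

logger.info("Message details extracted.", extra={"details": "payload from body_dict"})
# {"message": "Message details extracted.", "details": "payload from body_dict"}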
I have this formatter for Django:
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
The file name I get is
views.py
That is confusing, as it's hard to tell which module that views.py belongs to.
Is there any way to get the app name in the logger formatter?
Use pathname instead of filename in your logging configuration.
FORMAT = "[%(pathname)s:%(lineno)s - %(funcName)20s() ] %(message)s"
There are also other variables you can use — check the logging module documentation for a list.
Note that if you're acquiring a Logger instance using logger = logging.getLogger(__name__) (which is a common way to do it), you can also retrieve the module name (e.g. myapp.views) using %(name)s.
This is (arguably) better practice, but it will not work if you're doing e.g. logger = logging.getLogger("mylogger") or logger = logging.getLogger().
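A short sketch of the difference (myapp.views is a hypothetical module path):
import logging

FORMAT = "[%(name)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)

# Inside myapp/views.py, __name__ is "myapp.views", so the logger name
# makes the originating app unambiguous:
logger = logging.getLogger(__name__)
logger.warning("something happened")
# e.g. [myapp.views:9 -             <module>() ] something happened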
Currently I have everything logged to one log file, but I want to separate it out into multiple log files. I looked at the logging section of the Python documentation, but it doesn't discuss this.
log_format = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(filename=os.path.join(OUT_DIR, 'user.log'),
                    format=log_format, level=logging.INFO, datefmt='%Y-%m-%d %H:%M:%S')
Currently this is how I do the logging. What I want is for different types of errors or information to be logged to different log files. At the moment, logging.info('Logging IN') and logging.error('unable to login') go to the same log file. I want to separate them. Do I need to create another logging object to support logging to another file?
What you /could/ do (I haven't dug into the logging module too much, so there may be a better way to do this) is use a stream rather than a file object:
In [1]: class LogHandler(object):
   ...:     def write(self, msg):
   ...:         print('a :%s' % msg)
   ...:         print('b :%s' % msg)
   ...:
In [3]: import logging
In [4]: logging.basicConfig(stream=LogHandler())
In [5]: logging.critical('foo')
a :CRITICAL:root:foo
b :CRITICAL:root:foo
In [6]: logging.warning('bar')
a :WARNING:root:bar
b :WARNING:root:bar
Edit with further handling:
You could then do something like this (the files are opened in append mode, so they are created on demand):
import logging

class LogHandler(object):
    format = '%(levelname)s %(message)s'
    files = {
        'ERROR': 'error.log',
        'CRITICAL': 'error.log',
        'WARNING': 'warn.log',
    }

    def write(self, msg):
        # The format above puts the level name first, so everything up
        # to the first space tells us which file the record belongs in.
        type_ = msg[:msg.index(' ')]
        with open(self.files.get(type_, 'log.log'), 'a') as f:
            f.write(msg)

logging.basicConfig(format=LogHandler.format, stream=LogHandler())
logging.critical('foo')
logging.critical('foo')
This would allow you to split your logging into various files based on conditions in your log messages. If what you're looking for isn't found, it simply defaults to log.log.
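That said, a more conventional sketch of the same idea (the file names are arbitrary) uses one FileHandler per destination, with a filter deciding which records each file accepts; this avoids parsing the formatted message by hand:
import logging

def handler_for(filename, levels):
    # FileHandler that only accepts records at the given levels.
    h = logging.FileHandler(filename)
    h.addFilter(lambda record: record.levelno in levels)
    return h

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(handler_for('error.log', {logging.ERROR, logging.CRITICAL}))
root.addHandler(handler_for('warn.log', {logging.WARNING}))

root.error('goes to error.log')
root.warning('goes to warn.log')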
I created this solution from docs.python.org/2/howto/logging-cookbook.html
Simply create two logging file handlers, assign their logging level and add them to your logger.
import os
import logging
current_path = os.path.dirname(os.path.realpath(__file__))
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# to log debug messages
debug_log = logging.FileHandler(os.path.join(current_path, 'debug.log'))
debug_log.setLevel(logging.DEBUG)
# to log error messages
error_log = logging.FileHandler(os.path.join(current_path, 'error.log'))
error_log.setLevel(logging.ERROR)
logger.addHandler(debug_log)
logger.addHandler(error_log)
logger.debug('This message should go in the debug log')
logger.info('and so should this message')
logger.warning('and this message')
logger.error('This message should go in both the debug log and the error log')
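Note that neither handler sets a formatter in this snippet, so the files will contain only the raw messages. Attaching a Formatter to the handlers above (the format string here is just an example) adds timestamps and levels:
formatter = logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s')
debug_log.setFormatter(formatter)
error_log.setFormatter(formatter)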
If I want CherryPy's access log to stay at a fixed size, how would I go about using rotating log files?
I've already tried http://www.cherrypy.org/wiki/Logging, which seems out of date or has information missing.
Look at http://docs.python.org/library/logging.html.
You probably want to configure a RotatingFileHandler
http://docs.python.org/library/logging.html#rotatingfilehandler
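A minimal sketch (the file name and limits are arbitrary examples): once access.log reaches maxBytes it is renamed to access.log.1 and a fresh file is started, keeping at most backupCount old files.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('access.log', maxBytes=10 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

logger = logging.getLogger('access')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('logged and rotated automatically')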
"I've already tried http://www.cherrypy.org/wiki/Logging, which seems out of date, or has information missing."
Try adding:
import logging
import logging.handlers
import cherrypy # you might have imported this already
and instead of
log = app.log
maybe try
log = cherrypy.log
The CherryPy documentation of the custom log handlers shows this very example.
Here is the slightly modified version that I use in my app:
import logging
from logging import handlers

import cherrypy

def setup_logging():
    log = cherrypy.log

    # Remove the default FileHandlers if present.
    log.error_file = ""
    log.access_file = ""

    maxBytes = getattr(log, "rot_maxBytes", 10000000)
    backupCount = getattr(log, "rot_backupCount", 1000)

    # Make a new RotatingFileHandler for the error log.
    fname = getattr(log, "rot_error_file", "log\\error.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(logging.DEBUG)
    h.setFormatter(cherrypy._cplogging.logfmt)
    log.error_log.addHandler(h)

    # Make a new RotatingFileHandler for the access log.
    fname = getattr(log, "rot_access_file", "log\\access.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(logging.DEBUG)
    h.setFormatter(cherrypy._cplogging.logfmt)
    log.access_log.addHandler(h)

setup_logging()
CherryPy does its logging using the standard Python logging module. You will need to change it to use a RotatingFileHandler. This handler takes care of everything for you, including rotating the log when it reaches the configured maximum size.