Logging in GCP and locally - python

I am building a system that is intended to run on virtual machines in Google Cloud Platform, but as a backup it may be run locally as well. My current issue is with logging: I have two loggers, a local logger and a cloud logger, and both work.
Cloud logger
import logging
import logging.config
import traceback

import yaml
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.oauth2 import service_account

CREDS = google.cloud.logging.Client(
    project=PROJECT,
    credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))

class GoogleLogger(CloudLoggingHandler):
    def __init__(self, client=CREDS):
        super(GoogleLogger, self).__init__(client)

def setup_logging():
    """
    This function can be invoked in order to set up logging based on a yaml config in the
    root dir of this project.
    """
    try:
        with open(LOGGING_CONFIG, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
    except Exception:
        print('Error in Logging Configuration. Using default configs')
        print(traceback.format_exc())
        logging.basicConfig(level=logging.INFO)
logging.yaml
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s | %(levelname)s | %(module)s : [%(filename)s: %(lineno)d] - %(message)s"
  json:
    class: logger.JsonFormatter
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: complex
  cloud:
    class: logger.GoogleLogger
    formatter: json
    level: INFO
loggers:
  cloud:
    level: INFO
    handlers: [console, cloud]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
I use setup_logging() to set everything up like so:
setup_logging()
logger = logging.getLogger(<type_of_logger>)
where <type_of_logger> can be "cloud" or "__main__". "__main__" only logs locally; "cloud" logs both to GCP Stackdriver Logs and locally.
Now if I'm not running on GCP, an error gets thrown here:
CREDS = google.cloud.logging.Client(
    project=PROJECT,
    credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))
What is the best way around this? The class GoogleLogger(CloudLoggingHandler): always gets defined at import time, and if the code isn't running in GCP, it breaks.
One idea is to wrap the class in a try/except block, but that sounds like a horrible idea. How do I make my code smart enough to choose the right logger automatically, and completely ignore GoogleLogger when running locally?
Edit (Traceback)
File "import_test.py", line 2, in <module>
from logger import setup_logging
File "/Users/daudn/Documents/clean_space/tgs_workflow/utils/logger.py", line 16, in <module>
class GoogleLogger(CloudLoggingHandler):
File "/Users/daudn/Documents/clean_space/tgs_workflow/utils/logger.py", line 23, in GoogleLogger
project=PROJECT, credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))
File "/usr/local/lib/python3.7/site-packages/google/cloud/logging/client.py", line 123, in __init__
self._connection = Connection(self, client_info=client_info)
File "/usr/local/lib/python3.7/site-packages/google/cloud/logging/_http.py", line 39, in __init__
super(Connection, self).__init__(client, client_info)
TypeError: __init__() takes 2 positional arguments but 3 were given

This is due to bad inheritance.
Try passing the client into the parent's __init__ instead:
class LoggingHandlerInherited(CloudLoggingHandler):
    def __init__(self):
        super().__init__(client=google.cloud.logging.Client())
Make sure that you have GOOGLE_APPLICATION_CREDENTIALS set in your environment.
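To make the logger selection automatic, you can try to obtain credentials at setup time and fall back to local-only logging when none are available. A minimal sketch of that idea (setup_logging_auto and the fallback message are illustrative, not from the original code):
import logging

def setup_logging_auto():
    # always configure local logging first
    logging.basicConfig(level=logging.INFO)
    try:
        # google.auth.default() succeeds on GCE/GKE via the metadata server,
        # or locally when GOOGLE_APPLICATION_CREDENTIALS points at a key file
        import google.auth
        import google.cloud.logging
        from google.cloud.logging.handlers import CloudLoggingHandler
        credentials, project = google.auth.default()
        client = google.cloud.logging.Client(project=project, credentials=credentials)
        logging.getLogger().addHandler(CloudLoggingHandler(client))
    except Exception:
        logging.getLogger(__name__).info('No GCP credentials found; logging locally only.')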

Related

Python logging module doesn't work in Sagemaker

I've got a SageMaker instance running a Jupyter notebook. I'd like to use Python's logging module to write to a log file, but it doesn't work.
My code is pretty straightforward:
import logging
logger = logging.getLogger()
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.debug("starting log...")
This should write a line to my file taxi_training.log but it doesn't.
I tried using the reload function from importlib, and I also tried setting the output stream to sys.stdout explicitly. Nothing is logged to the file or in CloudWatch.
Do I need to add anything to my SageMaker instance for this to work properly?
The Python logging module requires a logging level and one or more handlers to process output. By default, the logging level is set to WARNING (30), with a last-resort handler that writes to stderr. If a logging level and/or handler is not explicitly defined, these settings are inherited from the parent root logger settings. These settings can be verified by adding the following lines to the bottom of your code:
# Verify levels and handlers
print("Parent Logger: "+logger.parent.name)
print("Parent Level: "+str(logger.parent.level))
print("Parent Handlers: "+str(logger.parent.handlers))
print("Logger Level: "+str(logger.level))
print("Logger Handlers: "+str(logger.handlers))
The easiest way to instantiate a handler and set a logging level is by running the logging.basicConfig() function (see the documentation). This will set a logging level and a stderr StreamHandler at the root logger level, which will propagate to any child loggers created in the same code. Here is an example using the code provided:
import logging
logger = logging.getLogger('log')
logging.basicConfig(level=logging.INFO) # set logging level and a stderr handler
logger.info(5)
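Applied to the original SageMaker snippet, the missing piece is an explicit level on the logger; a sketch of the corrected version:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # without this, the default WARNING level filters out debug()
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.debug("starting log...")  # now written to taxi_training.log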

Call a parent class's method from child class but don't see in the log file

I would like to call a parent class's method from a child class, but I don't see the logging entry in the log file when I run the following command:
python.exe B.py
If I call the printA() method in the A.py code then I see the logging entry.
The following Python code is the A.py file:
#!/usr/bin/env python
import logging
import logging.config
import yaml

with open('logging.yaml', 'r') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)
logger = logging.getLogger(__name__)

class A:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def printA(self):
        logger.info('Name: {}'.format(self.name))
        logger.info('Value: {}'.format(self.value))
B.py file:
#!/usr/bin/env python
from A import *
import logging
import logging.config
import yaml

with open('logging.yaml', 'r') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)
logger = logging.getLogger(__name__)

class B(A):
    def __init__(self, name, value):
        super(B, self).__init__(name, value + 1)

b = B('Name', 1)
b.printA()
logging.yaml file:
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: simple
    filename: debug.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
root:
  level: DEBUG
  handlers: [console, debug_file_handler]
The actual result is an empty log file. What should I change in my source code to make the logging work?
Will be grateful for any help.
You are configuring and re-configuring the logging module, and each time you call logging.config.dictConfig() you leave the disable_existing_loggers parameter at its default of True. See the Dictionary Schema Details section of the documentation:
disable_existing_loggers - whether any existing loggers are to be disabled. This setting mirrors the parameter of the same name in fileConfig(). If absent, this parameter defaults to True. This value is ignored if incremental is True.
So each time you call dictConfig() any logging.Logger() instances are being disabled.
Your code works if you only call dictConfig() once, before you use logging.getLogger(__name__) to create your single Logger() object. But when you expanded to two modules, your from A import * line imports A, executes dictConfig() and creates a Logger() before control returns to the B module, which then runs dictConfig() again (the logger reference you create in B is otherwise unused anywhere).
You only need to configure logging once, preferably as early as possible from the main entry point (the script you run with Python). If your code has already created Logger() instances you want to continue to use, you either need to set incremental to True (but understand that only a subset of your configuration will be applied then), or set disable_existing_loggers to False.
Remember that you can always update the dictionary you load from the .yaml file, so you could just use:
config['disable_existing_loggers'] = False
before you pass config to logging.config.dictConfig().
I'd use an if __name__ == '__main__': guard to ensure that you only configure logging at that point. Don't run top-level code in a module that alters global configuration without such a guard:
if __name__ == '__main__':
    # this module is used as a script, configure logging
    with open('logging.yaml', 'r') as f:
        config = yaml.safe_load(f.read())
    # do not disable any logger objects already created before this point
    config['disable_existing_loggers'] = False
    logging.config.dictConfig(config)
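Under that scheme, the modules themselves only create loggers and never configure logging at import time; a sketch of what A.py reduces to:
# A.py -- no dictConfig() call at import time
import logging

logger = logging.getLogger(__name__)

class A:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def printA(self):
        logger.info('Name: {}'.format(self.name))
        logger.info('Value: {}'.format(self.value))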

Set logging levels

I'm trying to use the standard library to debug my code:
This works fine:
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info('message')
I can't make the logger work for the lower levels:
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
I don't get any output from either of those.
What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root logger if it's already set up:
This function does nothing if the root logger already has handlers configured for it.
To set the level on root explicitly, do logging.getLogger().setLevel(logging.DEBUG). But ensure you've called basicConfig() beforehand so the root logger initially has some setup. I.e.:
import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
Also note that "Loggers" and their "Handlers" both have distinct independent log levels. So if you've previously explicitly loaded some complex logger config in you Python script, and that has messed with the root logger's handler(s), then this can have an effect, and just changing the loggers log level with logging.getLogger().setLevel(..) may not work. This is because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.
I use the following setup for logging.
YAML-based config
Create a yaml file called logging.yaml like this:
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: simple
    filename: Thrift.log
loggers:
  qsoWidget:
    level: INFO
    handlers: [console, file]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
Python - the main module
The "main" module should look like this:
import logging.config
import logging
import yaml

with open('logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.info("Contest is starting")
Sub Modules/Classes
These should start like this:
import logging

class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug('{} initialized'.format(__name__))
Hope that helps you...
In my opinion, this is the best approach for the majority of cases.
Configuration via an INI file
Create a file named logging.ini in the project root directory as below:
[loggers]
keys=root

[logger_root]
level=DEBUG
handlers=screen,file

[formatters]
keys=simple,verbose

[formatter_simple]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s

[formatter_verbose]
format=[%(asctime)s] %(levelname)s [%(filename)s %(name)s %(funcName)s (%(lineno)d)]: %(message)s

[handlers]
keys=file,screen

[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=verbose
level=WARNING
# fileConfig passes handler constructor arguments via args:
# filename, when, interval, backupCount
args=('debug.log', 'midnight', 1, 5)

[handler_screen]
class=StreamHandler
formatter=simple
level=DEBUG
args=(sys.stdout,)
Then configure it as below:
import logging
from logging.config import fileConfig
fileConfig('logging.ini')
logger = logging.getLogger('dev')
name = "stackoverflow"
logger.info(f"Hello {name}!")
logger.critical('This message should go to the log file.')
logger.error('So should this.')
logger.warning('And this, too.')
logger.debug('Bye!')
If you run the script, the console output will be:
2021-01-31 03:40:10,241 [INFO] dev: Hello stackoverflow!
2021-01-31 03:40:10,242 [CRITICAL] dev: This message should go to the log file.
2021-01-31 03:40:10,243 [ERROR] dev: So should this.
2021-01-31 03:40:10,243 [WARNING] dev: And this, too.
2021-01-31 03:40:10,243 [DEBUG] dev: Bye!
And the debug.log file should contain:
[2021-01-31 03:40:10,242] CRITICAL [my_loger.py dev <module> (12)]: This message should go to the log file.
[2021-01-31 03:40:10,243] ERROR [my_loger.py dev <module> (13)]: So should this.
[2021-01-31 03:40:10,243] WARNING [my_loger.py dev <module> (14)]: And this, too.
All done.
I wanted to leave the default logger at WARNING level but have detailed lower-level logging for my own code. But it wouldn't show anything. Building on the other answer, it's critical to run logging.basicConfig() beforehand:
import logging
logging.basicConfig()
logging.getLogger('foo').setLevel(logging.INFO)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('info')
logging.getLogger('foo').setLevel(logging.DEBUG)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('debug')
Output, as expected:
INFO:foo:info
INFO:foo:info
DEBUG:foo:debug
For a logging solution across modules, I did this:
# cfg.py
import logging
logging.basicConfig()
logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.info('active')

# main.py
import cfg
cfg.logger.info('main')
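Because loggers are process-wide singletons keyed by name, any other module can fetch the same configured logger without importing cfg; an illustrative sketch (other_module.py is hypothetical):
# other_module.py -- fetches the logger configured in cfg.py by name
import logging
logger = logging.getLogger('foo')
logger.info('from other_module')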

Sharing external module/function between own classes

Question about sharing 'functions' between classes.
Situation:
All my own code is in one file.
I'm using python-daemon to daemonize my script.
The script uses a class (Doorcamdaemon) to initiate and run.
It imports another class (Doorcam) which has a looping function.
I'm using a sample script for the daemon functions, and it shows how to use the logging module.
The logging works from the main part of the script and in the Doorcamdaemon class, but not in the Doorcam class.
class Doorcamdaemon():
    def __init__(self):
        # skipping some content, not related to this issue
        self.Doorcam = Doorcam()

    def run(self):
        self.Doorcam.startListening()  # looping function

class Doorcam:
    def __init__(self):
        # skipping some content, not related to this issue
        pass

    def startListening(self):
        while True:
            logger.info('Hello')

app = Doorcamdaemon()

logger = logging.getLogger("DoorcamLog")
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/doorcam.log")
handler.setFormatter(formatter)
logger.addHandler(handler)

daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve = [handler.stream]
daemon_runner.do_action()
The error returned is:
$ ./Doorcam.py start
Traceback (most recent call last):
  File "./Doorcam.py", line 211, in <module>
    app = Doorcamdaemon()
  File "./Doorcam.py", line 159, in __init__
    self.doorcam=Doorcam()
  File "./Doorcam.py", line 18, in __init__
    logger.info('Doorcam started capturing')
NameError: global name 'logger' is not defined
So my obvious question: How can I make it work in the Doorcam class as well?
Try moving the line
app = Doorcamdaemon()
to after the lines that create and set up logger. The traceback is telling you:
logger doesn't exist at line 18, where Doorcam's constructor tries to use it;
Doorcamdaemon tries to construct a Doorcam at line 159, in its own constructor.
So you can't create a Doorcamdaemon if logger isn't defined yet.
Some of the content you omitted in Doorcam.__init__ was related to this issue :)
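Concretely, the tail of the script just needs reordering so logger exists before any Doorcam is constructed; a sketch based on the code above:
logger = logging.getLogger("DoorcamLog")
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/doorcam.log")
handler.setFormatter(formatter)
logger.addHandler(handler)

app = Doorcamdaemon()  # logger is now defined when Doorcam.__init__ runs

daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve = [handler.stream]
daemon_runner.do_action()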

Flask logging - Cannot get it to write to a file

Ok, here's the code where I set up everything:
if __name__ == '__main__':
    app.debug = False
    applogger = app.logger
    file_handler = FileHandler("error.log")
    file_handler.setLevel(logging.DEBUG)
    applogger.setLevel(logging.DEBUG)
    applogger.addHandler(file_handler)
    app.run(host='0.0.0.0')
What happens is:
error.log gets created.
Nothing is ever written to it.
Despite not adding a StreamHandler and setting debug to False, I still get everything to STDOUT (this might be correct, but still seems weird).
Am I totally off here somewhere, or what is happening?
Why not do it like this:
if __name__ == '__main__':
    init_db()  # or whatever you need to do
    import logging
    logging.basicConfig(filename='error.log', level=logging.DEBUG)
    app.run(host="0.0.0.0")
If you now start your application, you'll see that error.log contains:
INFO:werkzeug: * Running on http://0.0.0.0:5000/
For more info, visit http://docs.python.org/2/howto/logging.html
Okay, as you insist that you cannot have two handlers with the method I showed you, I'll add an example that makes this quite clear. First, add this logging code to your main:
import logging, logging.config, yaml

with open('logging.conf') as f:
    logging.config.dictConfig(yaml.safe_load(f))
Now also add some debug code, so that we see that our setup works:
logfile = logging.getLogger('file')
logconsole = logging.getLogger('console')
logfile.debug("Debug FILE")
logconsole.debug("Debug CONSOLE")
All that is left is the "logging.conf" file (despite the name, it's YAML). Let's use that:
version: 1
formatters:
  hiformat:
    format: 'HI %(asctime)s - %(name)s - %(levelname)s - %(message)s'
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: hiformat
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: errors.log
loggers:
  console:
    level: DEBUG
    handlers: [console]
    propagate: no
  file:
    level: DEBUG
    handlers: [file]
    propagate: no
root:
  level: DEBUG
  handlers: [console,file]
This config is more complicated than needed, but it also shows some features of the logging module.
Now, when we run our application, we see this output (from the werkzeug and console loggers):
HI 2013-07-22 16:36:13,475 - console - DEBUG - Debug CONSOLE
HI 2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
Also note that the custom formatter with the "HI" was used.
Now look at the "errors.log" file. It contains:
2013-07-22 16:36:13,475 - file - DEBUG - Debug FILE
2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
Ok, my failure stemmed from two misconceptions:
1) Flask will apparently ignore all your custom logging unless it is running in production mode.
2) debug=False is not enough to let it run in production mode. You have to wrap the app in some sort of WSGI server to do so.
After I started the app from gevent's WSGI server (and moved logging initialization to a more appropriate place), everything seems to work fine.
The output you see in the console of your app is from the underlying Werkzeug logger that can be accessed through logging.getLogger('werkzeug').
Your logging can function in both development and release by also adding handlers to that logger as well as the Flask one.
More information and example code: Write Flask Requests to an Access Log.
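For example, a handler created for the Flask app logger can be attached to the werkzeug logger as well, so requests and application messages land in the same file (a sketch, assuming app is your Flask instance):
import logging

file_handler = logging.FileHandler('error.log')
file_handler.setLevel(logging.DEBUG)

app.logger.addHandler(file_handler)                      # application messages
logging.getLogger('werkzeug').addHandler(file_handler)   # request log lines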
This works:
if __name__ == '__main__':
    import logging
    logFormatStr = '[%(asctime)s] p%(process)s {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s'
    logging.basicConfig(format=logFormatStr, filename="global.log", level=logging.DEBUG)
    formatter = logging.Formatter(logFormatStr, '%m-%d %H:%M:%S')
    fileHandler = logging.FileHandler("summary.log")
    fileHandler.setLevel(logging.DEBUG)
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setLevel(logging.DEBUG)
    streamHandler.setFormatter(formatter)
    app.logger.addHandler(fileHandler)
    app.logger.addHandler(streamHandler)
    app.logger.info("Logging is set up.")
    app.run(host='0.0.0.0', port=8000, threaded=True)
I didn't like the other answers, so I kept at it, and it seems like I had to make my logging config AFTER Flask did its own setup:
@app.before_first_request
def initialize():
    logger = logging.getLogger("your_package_name")
    logger.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    formatter = logging.Formatter(
        """%(levelname)s in %(module)s [%(pathname)s:%(lineno)d]:\n%(message)s"""
    )
    ch.setFormatter(formatter)
    logger.addHandler(ch)
My app is structured like:
/package_name
    __main__.py  <- where I put my logging configuration
    __init__.py  <- convenience for myself, not necessary
    /tests
    /package_name  <- actual Flask app
        __init__.py
        /views
        /static
        /templates
    /lib
Following these directions: http://flask.pocoo.org/docs/0.10/patterns/packages/
Why not take a dive in the code and see...
The module we land on is flask/logging.py, which defines a function named create_logger(app). Inspecting that function will give a few clues as to potential culprits when troubleshooting logging issues with Flask.
EDIT: this answer was meant for Flask prior to version 1. The flask.logging.py module has considerably changed since then. The answer still helps with some general caveats and advices regarding python logging, but be aware that some of Flask's peculiarities in that regard have been addressed in version 1 and might no longer apply.
The first possible cause of conflicts in that function is this line:
logger = getLogger(app.logger_name)
Let's see why:
The variable app.logger_name is set in the Flask.__init__() method to the value of import_name, which is itself the receiving parameter of Flask(__name__). That is, app.logger_name is assigned the value of __name__, which will likely be the name of your main package; let's call it 'awesomeapp' for this example.
Now, imagine that you decided to configure and create your own logger manually. What do you think the chances are that you would also name your logger 'awesomeapp' if that's your project's name? I think it's pretty likely.
my_logger = logging.getLogger('awesomeapp') # doesn't seem like a bad idea
fh = logging.FileHandler('/tmp/my_own_log.log')
my_logger.setLevel(logging.DEBUG)
my_logger.addHandler(fh)
It makes sense to do this... except for a few problems.
When the Flask.logger property is invoked for the first time it will in turn call the function flask.logging.create_logger() and the following actions will ensue:
logger = getLogger(app.logger_name)
Remember how you named your logger after the project and how app.logger_name shares that name too? What happens in the line of code above is that the function logging.getLogger() has now retrieved your previously created logger and the following instructions are about to mess with it in a way that will have you scratching your head later. For instance
del logger.handlers[:]
Poof, you just lost all the handlers you may have previously registered with your logger.
Other things happen within the function too. Without going much into detail: it creates and registers two logging.StreamHandler objects that can spit out to sys.stderr and/or Response objects, one for log level 'debug' and another for 'production'.
class DebugLogger(Logger):
    def getEffectiveLevel(self):
        if self.level == 0 and app.debug:
            return DEBUG
        return Logger.getEffectiveLevel(self)

class DebugHandler(StreamHandler):
    def emit(self, record):
        if app.debug and _should_log_for(app, 'debug'):
            StreamHandler.emit(self, record)

class ProductionHandler(StreamHandler):
    def emit(self, record):
        if not app.debug and _should_log_for(app, 'production'):
            StreamHandler.emit(self, record)

debug_handler = DebugHandler()
debug_handler.setLevel(DEBUG)
debug_handler.setFormatter(Formatter(DEBUG_LOG_FORMAT))

prod_handler = ProductionHandler(_proxy_stream)
prod_handler.setLevel(ERROR)
prod_handler.setFormatter(Formatter(PROD_LOG_FORMAT))

logger.__class__ = DebugLogger
logger.addHandler(debug_handler)
logger.addHandler(prod_handler)
With the above details brought to light, it should become clearer why our manually configured logger and handlers misbehave when Flask gets involved. The new information gives us new options, though. If you still want to keep separate handlers, the simplest approach is to name your logger something different from the project (e.g. my_logger = getLogger('awesomeapp_logger')). Another approach, if you want to be consistent with the logging protocols in Flask, is to register a logging.FileHandler object on Flask.logger using a similar approach to Flask's:
import logging

def set_file_logging_handler(app):
    logging_path = app.config['LOGGING_PATH']

    class DebugFileHandler(logging.FileHandler):
        def emit(self, record):
            # if your app is configured for debugging
            # and the logger has been set to DEBUG level (the lowest)
            # push the message to the file
            if app.debug and app.logger.level == logging.DEBUG:
                super(DebugFileHandler, self).emit(record)

    debug_file_handler = DebugFileHandler(logging_path)
    app.logger.addHandler(debug_file_handler)

app = Flask(__name__)
# the config presumably has the debug settings for your app
app.config.from_object(config)
set_file_logging_handler(app)
app.logger.info('show me something')
Logging Quick start
This code will not work with more than one log file inside a class or import.
import logging
import os  # for cwd path

path = os.getcwd()
logFormatStr = '%(asctime)s %(levelname)s - %(message)s'
logging.basicConfig(filename=os.path.join(path, 'logOne.log'), format=logFormatStr, level=logging.DEBUG)
logging.info('default message')
For multiple log files:
Create an instance of logging using the logging.getLogger() method; each log file requires its own logger instance. We can create multiple log files, but not with the same instance.
Create the new logger instance with __name__ or a hardcoded string; __name__ is preferred, as it identifies the exact module the log call comes from:
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
The logging levels are INFO, DEBUG, ERROR, CRITICAL and WARNING:
DEBUG: detailed information, typically of interest only when diagnosing problems.
INFO: confirmation that things are working as expected.
WARNING: an indication that something unexpected happened, or indicative of some problem in the near future (e.g. 'disk space low'). The software is still working as expected.
ERROR: due to a more serious problem, the software has not been able to perform some function.
CRITICAL: a serious error, indicating that the program itself may be unable to continue running.
Create a new formatter:
formatter = logging.Formatter('%(asctime)s %(levelname)s - %(message)s')
Create a new file handler:
file_handler = logging.FileHandler(os.path.join(path, 'logTwo.log'))
Set the formatter on the file handler and add it to the logger instance:
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
Add a message to each log file with the respective logger and level:
logger.info("message for logTwo")    # goes to logTwo.log (and propagates to logOne.log)
logging.debug("message for logOne")  # goes to logOne.log via the root logger
For more details, see the Python logging HOWTO: https://docs.python.org/3/howto/logging.html
