Question about sharing 'functions' between classes.
Situation:
All my own code is in one file.
I'm using python-daemon to daemonize my script.
The script uses a class (Doorcamdaemon) to initiate and run.
It imports another class (Doorcam) which has a looping function.
I'm using a sample script for the daemon functions, and it shows how to use the logging module.
The logging works from the main part of the script and in the Doorcamdaemon class, but not in the Doorcam class.
class Doorcamdaemon():
    def __init__(self):
        # skipping some content, not related to this issue
        self.Doorcam = Doorcam()

    def run(self):
        self.Doorcam.startListening()  # looping function

class Doorcam:
    def __init__(self):
        # skipping some content, not related to this issue

    def startListening(self):
        while True:
            logger.info('Hello')

app = Doorcamdaemon()

logger = logging.getLogger("DoorcamLog")
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/doorcam.log")
handler.setFormatter(formatter)
logger.addHandler(handler)

daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve = [handler.stream]
daemon_runner.do_action()
The error returned is:
$ ./Doorcam.py start
Traceback (most recent call last):
File "./Doorcam.py", line 211, in <module>
app = Doorcamdaemon()
File "./Doorcam.py", line 159, in __init__
self.doorcam=Doorcam()
File "./Doorcam.py", line 18, in __init__
logger.info('Doorcam started capturing')
NameError: global name 'logger' is not defined
So my obvious question: How can I make it work in the Doorcam class as well?
Try moving the line
app = Doorcamdaemon()
to after the lines that create and set up logger. The traceback is telling you:
logger doesn't exist at line 18, where Doorcam's constructor tries to use it;
Doorcamdaemon tries to construct a Doorcam at line 159, in its own constructor.
So you can't create a Doorcamdaemon if logger isn't defined yet.
Some of the content you omitted in Doorcam.__init__ was related to this issue :)
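Concretely, the bottom of the script would become (a minimal sketch, assuming the rest of Doorcam.py is unchanged):

# Create and configure the logger first...
logger = logging.getLogger("DoorcamLog")
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("/var/log/doorcam.log")
handler.setFormatter(formatter)
logger.addHandler(handler)

# ...then construct the daemon, whose Doorcam constructor logs
app = Doorcamdaemon()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve = [handler.stream]
daemon_runner.do_action()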
Related
Coding this in Python as a newbie, I am trying to understand how to create a single instance of a logging class in a separate module, and to access it from my Python script. There are several files in this automation script, and I want to record the logs in a file while displaying them in the console at the same time, so I am using two handlers: FileHandler() and StreamHandler(). The initialization of the logger is in a separate file called debugLogs.py, and I access this file from multiple Python modules in the script. But if the separate modules call debugLogs.py, it creates multiple instances of the logger, which means everything gets printed multiple times, which is not what I want. That is why I need a singleton so that just one instance is created. How do you suggest I go about doing that? I have included my version of debugLogs.py below, along with how I plan to call it from other modules.
#debugLogs.py
import logging
import logging.handlers

# open readme file and read name of latest log file created
def get_name():
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
        latestLog = line
    logfile_name = latestLog[:-1]
    return logfile_name

class Logger(object):
    _instance = None

    def __new__(self, logfile_name):
        if not self._instance:
            self._instance = super(Logger, self).__new__(self)
            logger = logging.getLogger(__name__)
            logger.setLevel(logging.INFO)
            formatter = logging.Formatter('%(message)s')
            file_handler = logging.FileHandler(logfile_name)
            file_handler.setFormatter(formatter)
            stream_handler = logging.StreamHandler()
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
        return self._instance
So get_name() reads the name of the latest log file, already recorded in the readme file latestLogNames.txt, and the logs go into that latest log file.
I understand that my singleton class code is not right, and I am confused about how to initialize the whole class structure. But somehow I would have to pass that logfile_name value to the class. So I am planning to call this logger from a different module with something like this:
#differentModule.py
import debugLogs
logger = debugLogs.Logger(debugLogs.get_name())
And then I would use logger.info("...") to print the logs as well as store them in the file. Please tell me how to restructure debugLogs.py and how to call it from the different modules of my script.
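One way to restructure it (a sketch under assumptions, not a drop-in answer): let the module itself act as the singleton and guard against configuring twice. logging.getLogger already returns the same object for the same name, so the only thing to prevent is re-adding handlers. The logger name "automationLogs" and the helper get_logger are illustrative choices, not from the original post.

# debugLogs.py
import logging

def get_name():
    # read the name of the latest log file from latestLogNames.txt
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
    return line.rstrip("\n")

def get_logger():
    logger = logging.getLogger("automationLogs")  # hypothetical fixed name
    if logger.handlers:
        # already configured by an earlier import; reuse it
        return logger
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(message)s')
    file_handler = logging.FileHandler(get_name())
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    logger.addHandler(logging.StreamHandler())
    return logger

Every other module then does:

# differentModule.py
import debugLogs
logger = debugLogs.get_logger()
logger.info("...")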
I am building a system that is intended to run on Virtual Machines in Google Cloud Platform. However, as a form of backup, it may be run locally as well. That being said, my issue currently is with logging. I have two loggers, both work, a local logger and a cloud logger.
Cloud logger
import logging
import logging.config
import traceback
import yaml
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.oauth2 import service_account

# PROJECT, CREDENTIAL_FILE and LOGGING_CONFIG are defined elsewhere in the module
CREDS = google.cloud.logging.Client(
    project=PROJECT,
    credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))

class GoogleLogger(CloudLoggingHandler):
    def __init__(self, client=CREDS):
        super(GoogleLogger, self).__init__(client)

def setup_logging():
    """
    This function can be invoked in order to set up logging based on a yaml config in the
    root dir of this project
    """
    try:
        # with open('logging.yaml', 'rt') as f:
        with open(LOGGING_CONFIG, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
    except Exception:
        print('Error in Logging Configuration. Using default configs')
        print(traceback.format_exc())
        logging.basicConfig(level=logging.INFO)
logging.yaml
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s | %(levelname)s | %(module)s : [%(filename)s: %(lineno)d] - %(message)s"
  json:
    class: logger.JsonFormatter
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: complex
  cloud:
    class: logger.GoogleLogger
    formatter: json
    level: INFO
loggers:
  cloud:
    level: INFO
    handlers: [console, cloud]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
I use setup_logging() to set everything up like so:
setup_logging()
logger = logging.getLogger(<type_of_logger>)
where <type_of_logger> can be "cloud" or "__main__".
"__main__" only logs locally; "cloud" logs both to GCP Stackdriver Logs and locally.
Now if I'm not running on GCP, an error gets thrown here:
CREDS = google.cloud.logging.Client(
    project=PROJECT, credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))
What is the best way around this? The class GoogleLogger(CloudLoggingHandler) always gets evaluated at import time, and if the code isn't running in GCP it breaks.
An idea is to wrap the class in a try/except block, but that sounds like a horrible idea. How do I make my code smart enough to choose the right logger automatically, and, when running locally, completely ignore the GoogleLogger?
Edit (Traceback)
File "import_test.py", line 2, in <module>
from logger import setup_logging
File "/Users/daudn/Documents/clean_space/tgs_workflow/utils/logger.py", line 16, in <module>
class GoogleLogger(CloudLoggingHandler):
File "/Users/daudn/Documents/clean_space/tgs_workflow/utils/logger.py", line 23, in GoogleLogger
project=PROJECT, credentials=service_account.Credentials.from_service_account_file(CREDENTIAL_FILE))
File "/usr/local/lib/python3.7/site-packages/google/cloud/logging/client.py", line 123, in __init__
self._connection = Connection(self, client_info=client_info)
File "/usr/local/lib/python3.7/site-packages/google/cloud/logging/_http.py", line 39, in __init__
super(Connection, self).__init__(client, client_info)
TypeError: __init__() takes 2 positional arguments but 3 were given
This is due to bad inheritance. Try passing the client into the parent's __init__ instead:

class LoggingHandlerInherited(CloudLoggingHandler):
    def __init__(self):
        super().__init__(client=google.cloud.logging.Client())

Make sure that you have GOOGLE_APPLICATION_CREDENTIALS in your environment.
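As for choosing the logger automatically: one option (a sketch, not from the answer above; make_handler and the google.auth.default() probe are illustrative assumptions) is to stop creating the client at import time and instead build the handler lazily, falling back to a plain StreamHandler when no GCP credentials can be found:

import logging
import google.auth
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler

def make_handler():
    """Return a cloud handler on GCP, a console handler elsewhere."""
    try:
        # Raises DefaultCredentialsError when no credentials are available
        credentials, project = google.auth.default()
        client = google.cloud.logging.Client(project=project, credentials=credentials)
        return CloudLoggingHandler(client)
    except Exception:
        return logging.StreamHandler()

logger = logging.getLogger("cloud")
logger.setLevel(logging.INFO)
logger.addHandler(make_handler())

Because nothing touches the network until make_handler() runs, importing the module locally no longer breaks.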
I'm trying to write a module to use in different scripts:
import logging
from logging.handlers import RotatingFileHandler

_logger_name = "Nagios"
_print_format = "%(asctime)s - %(levelname)s - %(message)s"
_level = logging.DEBUG

class Log():
    def __init__(self, log_file, logger_name=_logger_name, level=_level):
        self.log_file = log_file
        self.logger_name = logger_name
        self.level = level

    def getLog(self):
        """
        Return the logging object
        """
        _logger = logging.getLogger(self.logger_name)
        _logger.setLevel(self.level)
        _logger.addHandler(self._rotateLog())
        return _logger

    def _rotateLog(self):
        """
        Rotate the log files if they exceed the size limit
        """
        rh = RotatingFileHandler(self.log_file,
                                 maxBytes=20*1024*1024, backupCount=2)
        formatter = logging.Formatter(_print_format)
        rh.setFormatter(formatter)
        return rh

log = Log("kdfnknf").getLog()
log("hello")
I see the following error:
Traceback (most recent call last):
File "nagiosLog.py", line 45, in <module>
log("hello")
TypeError: 'Logger' object is not callable
Any idea why I'm getting this error? When debugging with pdb I do see that the object is returned, and when printing dir(log) I don't see the Logger module in it.
Am I missing something here?
log("Hello")
This is wrong.
Correct is
log.info("Hello")
log must be printed with logging level i.e. info/error/warning
See the logging docs: you have to call a level method; you can't just call the Logger itself:

Logger.info(msg, *args, **kwargs)
Logs a message with level INFO on this logger. The arguments are interpreted as for debug().

or

Logger.warning(msg, *args, **kwargs)
Logs a message with level WARNING on this logger. The arguments are interpreted as for debug().
so instead, do:
log.info("Test info level logging...")
My unittest module breaks when testing my main file because my main file references a logger that was not initialized.
We have the following simple example.
logger_main.py:
import logging

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
    some_func()
logger_main_tests.py:
import unittest
import logging
from logger_main import some_func

class Test(unittest.TestCase):
    def setUp(self):
        logging.basicConfig(level=logging.DEBUG)
        logger = logging.getLogger(__name__)

    def testName(self):
        self.assertEqual(some_func(), 1)

if __name__ == "__main__":
    unittest.main()
logger_main.py runs as expected, however, logger_main_tests.py gives the following error.
Finding files... done.
Importing test modules ... done.
======================================================================
ERROR: testName (logger_main_tests.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main_tests.py", line 11, in testName
self.assertEqual(some_func(), 1)
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main.py", line 4, in some_func
logger.info(__name__ + " started ")
NameError: name 'logger' is not defined
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (errors=1)
The error makes sense, since some_func() is trying to use a logger that doesn't exist in its scope. I would like to figure out how to set up my unit tests with a logger (set at the DEBUG level) so that any logger.info or logger.debug statement inside my functions (such as some_func()) in logger_main.py is printed at the appropriate level.
Move this line outside of your main declaration:

logger = logging.getLogger(__name__)

The main block defines where the logs go, but it should not be used to define the logger. Your module should declare the logger in its global context.
You can (and should) define as many loggers as you need; it is common to have one per file or class, declared after your imports so it is available anywhere in the code that follows.
import logging

logger = logging.getLogger(__name__)

def some_func():
    .....
This is because the logger is only being defined when __name__ == "__main__".
Generally you'll define one logger per file, something like:
logger = logging.getLogger(__name__)
at the top of the file, after the imports.
Also, notice that the logger defined in Test.setUp is local to that method, so it won't do anything.
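Putting both answers together, logger_main.py would become (a sketch; the only change is where the logger is declared, basicConfig stays under the main guard):

import logging

# Module-level logger: available to some_func() even when imported
logger = logging.getLogger(__name__)

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    some_func()

The test's setUp can then call logging.basicConfig(level=logging.DEBUG) to configure the root logger that these messages propagate to.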
Consider this code:
import logging
print "print"
logging.error("log")
I get:
print
ERROR:root:log
Now if I import a third-party module at the beginning of the previous code and rerun it, I get only:
print
There are some previous questions about this, but here I cannot touch the module I'm importing.
The code of the third-party module is here: http://atlas-sw.cern.ch/cgi-bin/viewcvs-atlas.cgi/offline/DataManagement/DQ2/dq2.clientapi/lib/dq2/clientapi/DQ2.py?view=markup, but my question is more general: independently of the module I'm importing, I want clean logging that works in the expected way.
Some (non-working) proposed solutions:
from dq2.clientapi.DQ2 import DQ2
import logging
del logging.root.handlers[:]
from dq2.clientapi.DQ2 import DQ2
import logging
logging.disable(logging.NOTSET)
logs = logging.getLogger('root')
logs.error("Some error")
The next one works, but produces some additional errors:
from dq2.clientapi.DQ2 import DQ2
import logging
reload(logging)
I get:
print
ERROR:root:log
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43- opt/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 1509, in shutdown
h.close()
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 705, in close
del _handlers[self]
KeyError: <logging.StreamHandler instance at 0x2aea031f7248>
Error in sys.exitfunc:
Traceback (most recent call last):
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 1509, in shutdown
h.close()
File "/afs/cern.ch/sw/lcg/external/Python/2.6.5/x86_64-slc5-gcc43-opt/lib/python2.6/logging/__init__.py", line 705, in close
del _handlers[self]
KeyError: <logging.StreamHandler instance at 0x2aea031f7248>
from dq2.clientapi.DQ2 import DQ2
import logging
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
logger.addHandler(ch)
logger.error("log")
It depends on what the other module is doing; e.g. if it's calling logging.disable then you can call logging.disable(logging.NOTSET) to reset it.
You could try reloading the logging module:
from importlib import reload
logging.shutdown()
reload(logging)
The problem is this will leave the third-party module with its own copy of logging in an unusable state, so could cause more problems later.
To completely clear existing logging configuration from the root logger, this might work:
root = logging.getLogger()
list(map(root.removeHandler, root.handlers))
list(map(root.removeFilter, root.filters))
However, this doesn't reset to the "default", this clears everything. You'd have to then add a StreamHandler to achieve what you want.
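For example (a sketch of that last step; logging.BASIC_FORMAT is the "%(levelname)s:%(name)s:%(message)s" layout the default configuration uses):

import logging

root = logging.getLogger()
list(map(root.removeHandler, root.handlers))
list(map(root.removeFilter, root.filters))

# Restore something equivalent to the default setup
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(logging.BASIC_FORMAT))
root.addHandler(handler)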
A more complete solution that doesn't invalidate any loggers. It should work unless some module does something strange, like holding a reference to a filterer or a handler.
def reset_logging():
    manager = logging.root.manager
    manager.disabled = logging.NOTSET
    for logger in manager.loggerDict.values():
        if isinstance(logger, logging.Logger):
            logger.setLevel(logging.NOTSET)
            logger.propagate = True
            logger.disabled = False
            logger.filters.clear()
            handlers = logger.handlers.copy()
            for handler in handlers:
                # Copied from `logging.shutdown`.
                try:
                    handler.acquire()
                    handler.flush()
                    handler.close()
                except (OSError, ValueError):
                    pass
                finally:
                    handler.release()
                logger.removeHandler(handler)
Needless to say, you must set up your logging after running reset_logging().
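For example (assuming the snippet from the question):

reset_logging()
logging.basicConfig()   # reconfigure from a clean slate
logging.error("log")    # prints ERROR:root:log again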