I'm trying to run Robot Framework code from a plain Python script. When I run the following code, the console prints 'No handlers could be found for logger "RobotFramework"'.
How do I log/print to the console the AssertionError raised in another module? Or, alternatively, capture all records sent to the logger "RobotFramework"?
from ExtendedSelenium2Library import ExtendedSelenium2Library
from robot.api import logger

class firsttest():
    def googleit(self):
        self.use_url = 'https://google.ca'
        self.use_browser = 'chrome'
        s2l = ExtendedSelenium2Library()
        s2l.open_browser(self.use_url, self.use_browser)
        s2l.maximize_browser_window()
        try:
            s2l.page_should_contain('this text does not exist on page')
        except:
            logger.console('failed')

runit = firsttest()
runit.googleit()
I get the following console output:
failed
No handlers could be found for logger "RobotFramework"
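That warning is Python's logging module complaining that a record was emitted on the "RobotFramework" logger while no handler was configured anywhere for it. A minimal sketch of silencing the warning and surfacing those records, assuming only the logger name taken from the warning itself:

```python
import logging

# Attach a console handler to the logger named in the warning so that
# records emitted by Robot Framework's internals become visible.
rf_logger = logging.getLogger("RobotFramework")
rf_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()  # writes to stderr by default
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s: %(message)s"))
rf_logger.addHandler(handler)
```

To print the raised AssertionError itself rather than a fixed string, catch it explicitly (`except AssertionError as e:`) and pass `str(e)` to `logger.console`.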
I'm trying to write all the stack-trace output that appears in the console to a file; better still would be to both display it in the console and write it to a file. I'm running a PyQt application with multiple modules.
I'm using the approach from Print exception with stack trace to file.
It works fine if I copy that approach:
import logging
from logging.handlers import RotatingFileHandler
import traceback

logger = logging.getLogger("Rotating Log")
logger.setLevel(logging.ERROR)
handler = RotatingFileHandler("log.txt", maxBytes=10000, backupCount=5)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

try:
    1/0
except Exception as e:
    logger.error(str(e))
    logger.error(traceback.format_exc())
But if I change the "try:" block to run my application instead, I get no output:
try:
    _ventana = arcMainWindow()
    _ventana.show()
    app.exec_()
For example, the application connects the handler below, and if the called function contains an error such as print(9/0), I get no output.
self.myQTbutton.clicked.connect(lambda: mymodule.myfunction(self))
I've not been able to find a simple way to ensure that a traceback from any function in any module gets written to the same log file. I tried importing the logger but couldn't get it to import correctly. I'd appreciate any help with this, as I'm new to this aspect of Python.
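One way to capture a traceback from any module in one place is to install a custom sys.excepthook that routes uncaught exceptions through the root logger. This is a sketch reusing the RotatingFileHandler setup from the question; whether exceptions raised inside Qt slots reach the hook depends on the PyQt version in use:

```python
import logging
import sys
import traceback
from logging.handlers import RotatingFileHandler

# Configure the root logger once; every module that does
# logging.getLogger(__name__) inherits these handlers.
handler = RotatingFileHandler("log.txt", maxBytes=10000, backupCount=5)
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
root = logging.getLogger()
root.setLevel(logging.ERROR)
root.addHandler(handler)
root.addHandler(logging.StreamHandler())  # also echo to the console

def log_uncaught(exc_type, exc_value, exc_tb):
    # Format the full traceback and send it to both handlers.
    text = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    root.error("Uncaught exception:\n%s", text)

# Every exception that is not caught anywhere is now logged instead of
# only being printed to stderr.
sys.excepthook = log_uncaught
```

With this in place at startup (before app.exec_()), any traceback that would otherwise only scroll past in the console ends up in log.txt as well.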
I have a library I'm working on which makes use of Click, contained within a Docker image. I'm trying to test it with pytest using click.testing.CliRunner. I use logging to write logs, and I've specified in pyproject.toml that these logs should be emitted. When an exception is raised in my code, and only within Docker, I get the following exception from Click:
    except Exception as e:
        if not catch_exceptions:
            raise
        exception = e
        exit_code = 1
        exc_info = sys.exc_info()
    finally:
        sys.stdout.flush()
>       stdout = outstreams[0].getvalue()
E       ValueError: I/O operation on closed file.

/opt/conda/lib/python3.8/site-packages/click/testing.py:434: ValueError
I've managed to minimally reproduce this issue. My code looks something like this:
import logging

import click

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

@click.command()
@click.argument('value')
def main(value):
    logger.info(value)
    raise RuntimeError()
My tests look like this:
import pytest
from click.testing import CliRunner

from main import main

def test_main():
    runner = CliRunner()
    runner.invoke(main, ['hello'], catch_exceptions=False)
    assert True
And my pyproject.toml is:
[tool.pytest.ini_options]
log_cli = true
log_level = "INFO"
Removing the logging, the CliRunner, or pytest (i.e. running test_main directly) does not trigger the ValueError, and the RuntimeError is the only exception raised. Running this outside of a Docker container also does not raise the ValueError.
How can I avoid this error?
This code is available in a GitHub repo for reproduction; I reproduced the issue in a continuum/miniconda3 container.
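The Docker-specific cause can't be confirmed from the snippet alone, but one workaround consistent with the traceback is to configure logging at call time instead of import time, so the StreamHandler binds to whatever stream the CliRunner has installed for the current invocation; force=True (Python 3.8+) also drops handlers left over from a previous invocation that may point at an already-closed capture stream. A sketch:

```python
import logging

import click

logger = logging.getLogger(__name__)

@click.command()
@click.argument('value')
def main(value):
    # Configure the handler per invocation rather than at import time,
    # so it never outlives the stream CliRunner installs for this run.
    logging.basicConfig(level=logging.INFO, force=True)
    logger.info(value)
    raise RuntimeError()
```

The RuntimeError still surfaces through the runner as before; only the lifetime of the logging handler changes.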
Why isn't this code creating a log file? It used to work. I've run it from the command line and from the VS Code debugger.
I have logged at INFO and ERROR level and still get nothing.
It seems like an empty file should at least be created, but then again, maybe Python creates it lazily.
import logging
import argparse
import datetime
import sys
import platform

def main():
    print("something")
    logging.error("something")

if __name__ == '__main__':
    the_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    msg = "Start time: {}".format(the_time)
    print(msg)
    logging.info(msg)
    parser = argparse.ArgumentParser(prog='validation')
    parser.add_argument("-L", "--log", help="log level", required=False, default='INFO')
    args = parser.parse_args()
    numeric_level = getattr(logging, args.log.upper(), None)
    print(numeric_level)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: %s' % args.log.upper())
    print("setting log level to: {} for log file validate.log".format(args.log.upper()))
    logging.basicConfig(filename='validate.log', level=numeric_level,
                        format='%(asctime)s %(levelname)-8s %(message)s',
                        datefmt='%d-%b-%Y %H:%M:%S')
    logging.info("Python version: {}".format(sys.version))
    main()
The documentation of the function logging.basicConfig() says:
This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.
Now, does your root logger have handlers? Initially, no. However, when you call any of the module-level log functions (debug, info, error, etc.) while no handlers are configured, logging installs a default one. Therefore, calling basicConfig after these functions does nothing; in the script above, the logging.info(msg) near the top is what triggers this.
Running this program:
import logging

print(logging.getLogger().hasHandlers())
logging.error('test message')
print(logging.getLogger().hasHandlers())
produces:
$ python3 test.py
False
ERROR:root:test message
True
To fix your issue, just pass force=True to basicConfig (available since Python 3.8).
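Applied to a minimal example (the messages are placeholders), the fix looks like this:

```python
import logging

logging.error("early message")  # implicitly installs a stderr handler on the root logger

# Without force=True this call would now be a no-op; with it, the stale
# stderr handler is removed and the file handler takes its place.
logging.basicConfig(filename='validate.log', level=logging.INFO,
                    format='%(levelname)s %(message)s', force=True)

logging.info("now written to validate.log")
```

After this runs, validate.log contains the info message, while the early error only ever went to stderr.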
I would like to have:
- a main.log file capturing all logs above DEBUG level from main and imported modules
- the console showing only ERROR-level logs from main and its imported submodules.
Note: I may have no control over the error-handling logs of the imported submodules.
Here is the main.py code for this:
# main.py importing a submodule
import logging
import submodule

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# log to console
c_handler = logging.StreamHandler()
console_format = logging.Formatter("[%(levelname)s] %(message)s")
c_handler.setFormatter(console_format)
c_handler.setLevel = logging.INFO
logger.addHandler(c_handler)

logger.error("This is an error!!! Logged to console")

# log to file from main
logfile = "./logging/main.log"
f_handler = logging.FileHandler(filename=logfile)
f_format = logging.Formatter("%(asctime)s: %(name)-18s [%(levelname)-8s] %(message)s")
f_handler.setFormatter(f_format)
f_handler.setLevel = logging.DEBUG
logger.addHandler(f_handler)

logger.debug("This is a debug error. Not logged to console, but should log to file")
... and the submodule.py code ...
# submodule.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')

# log to console
c_handler = logging.StreamHandler()
c_handler.setFormatter(formatter)
logger.addHandler(c_handler)

logger.info("This is an info message from submodule, should be recorded in main.log!")
logger.debug("This is a debug message from submodule, also should be recorded in main.log!!")
When I run main.py, [ERROR] This is an error!!! Logged to console shows up correctly in the console. But the console also shows:
INFO:submodule:This is an info message from submodule, should be recorded in main.log!
[DEBUG] This is a debug error. Not logged to console, but should log to file
The main.log file shows only yy-mm-dd hh:mm:ss: __main__ [DEBUG   ] This is a debug error. Not logged to console, but should log to file. It does not show the logs from submodule.py.
I'd appreciate knowing:
Where am I going wrong?
What code correction is needed?
EDIT: Based on Dan D.'s suggestion, I changed submodule.py as follows:
# submodule.py
import logging

logger = logging.getLogger(__name__)

def logsomething():
    logger.info("This is an info message from submodule, should be recorded in main.log!")
    logger.debug("This is a debug message from submodule, also should be recorded in main.log!!")
... and the program now logs to console and file appropriately.
Q: If I want to change the message format for submodule.py only, can this be done through main.py?
Your submodule should just be:
import logging

logger = logging.getLogger(__name__)

logger.info("This is an info message from submodule, should be recorded in main.log!")
logger.debug("This is a debug message from submodule, also should be recorded in main.log!!")
Then your main module should be:
# main.py importing a submodule
import logging

logger = logging.getLogger(__name__)

# log to console
c_handler = logging.StreamHandler()
console_format = logging.Formatter("[%(levelname)s] %(message)s")
c_handler.setFormatter(console_format)
c_handler.setLevel(logging.INFO)
logging.getLogger().addHandler(c_handler)

# log to file from main
logfile = "./logging/main.log"
f_handler = logging.FileHandler(filename=logfile)
f_format = logging.Formatter("%(asctime)s: %(name)-18s [%(levelname)-8s] %(message)s")
f_handler.setFormatter(f_format)
f_handler.setLevel(logging.DEBUG)
logging.getLogger().addHandler(f_handler)

logging.getLogger().setLevel(logging.DEBUG)

import submodule

logger.error("This is an error!!! Logged to console")
logger.debug("This is a debug error. Not logged to console, but should log to file")
Edit: The handlers have to be added before the code in the submodule runs. To achieve this, the import submodule statement was moved after the code that sets up the handlers.
Normally, modules shouldn't make any top-level logging calls; then all imports can be done at the top, and the callables that use logging are invoked indirectly by the code in the if __name__ == "__main__": block after it sets up logging.
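As for the follow-up question (changing the message format for submodule.py only, from main.py): loggers are looked up by name, so main.py can fetch the submodule's logger and give it a dedicated handler and formatter. A sketch, assuming the module's logger is named submodule; the format string is a placeholder:

```python
import logging

# The same Logger object the submodule gets via getLogger(__name__).
sub_logger = logging.getLogger("submodule")

# A dedicated handler with a format used only for this module's records.
sub_handler = logging.StreamHandler()
sub_handler.setFormatter(logging.Formatter("SUBMODULE %(levelname)s: %(message)s"))
sub_logger.addHandler(sub_handler)

# Stop the records from also reaching the root handlers, which would
# print them a second time in the root format.
sub_logger.propagate = False
```

Note that with propagation off, the submodule's records no longer reach the root's file handler either; add a FileHandler here as well if the file copy should keep flowing.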
I want to find out how logging should be organised, given that I write many scripts and modules that should share similar logging. I want to set the logging appearance and level from the script, and have them propagate to my modules and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module

def main():
    verbose = True
    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())
    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)
    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")
    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging

log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that the module needs only minimal infrastructure to pick up the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other modules also use logging. This can result in output being printed twice, and in very detailed debug logging from modules that are not my own.
How should I code this so that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that log messages do not propagate to ancestor loggers. Here is the documentation for Logger.propagate; it defaults to True. So just:
import logging

log = logging.getLogger(__name__)
log.propagate = False
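Since a logger with propagate = False no longer hands records to the root handlers, the script should then attach its handler to its own top-level logger rather than to logging.root; third-party loggers keep whatever configuration their packages chose. A sketch with a plain StreamHandler and placeholder names ("myscript" stands in for your package):

```python
import logging

# Configure only this project's loggers: handler and level go on a
# package-level logger, not on the root logger.
log = logging.getLogger("myscript")
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler())
log.propagate = False  # keep records away from root's handlers

# Project modules use child loggers, which inherit the level and
# handler above but share nothing with other packages' loggers.
module_log = logging.getLogger("myscript.example_2_module")
module_log.debug("example DEBUG message in module")
```

Other packages' debug chatter never appears, because their loggers hang off root, which now has no handler of yours and keeps its default WARNING level.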