I'm using the PTB (python-telegram-bot) library and have many handlers like:
def some_handler(update, context):  # update is the new data from the user, context is all my data
    do_something()
And I want to notify the user if an error has occurred, like:
def some_handler(update, context):
    try:
        do_something()
    except Exception as e:
        notify_user(text="Some error occurred")
        logger.error(e)
To follow DRY and make the code nicer I wrote this decorator:
from functools import wraps

def bot_logger(text: str = ''):
    def decorator(function):
        @loguru_logger.catch
        @wraps(function)
        def wrapper(*args, **kwargs):
            try:
                return function(*args, **kwargs)
            except Exception as e:
                notify_user(text=f'Unknown error. {text} Please try again later')
                loguru_logger.error(e)
        return wrapper
    return decorator
But in most cases I'm getting a pretty obscure log record like:
2021-11-26 19:47:32.558 | ERROR | bot:wrapper:46 - There are no messages to send
Question:
How can I make this message as informative as a standard Python traceback?
What should I fix in the bot_logger decorator?
Logger setup:
from loguru import logger as loguru_logger

loguru_logger.add(
    sink="log_error.txt",
    filter=lambda record: record["level"].name == "ERROR",
    backtrace=True,
    format="{time} {level} {function}:{line} {message}",
    level="ERROR",
    rotation="1 MB",
    compression="zip",
    enqueue=True,
    diagnose=True,
)
P.S. I checked other similar questions:
Best practices for logging in Python
Python logging using a decorator
and others, but didn't find an answer.
Also, I tried different logger formats and parameters, but they don't change the log record much.
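For what it's worth, a minimal sketch of one possible fix, assuming the notify_user helper from the question is available: the obscure record comes from loguru_logger.error(e), which only logs str(e). Logging with logger.exception() instead attaches the exception to the record, so the sink's backtrace=True and diagnose=True options can render a full, standard-looking traceback:

from functools import wraps
from loguru import logger as loguru_logger

def bot_logger(text: str = ''):
    def decorator(function):
        @wraps(function)  # preserves __name__/__doc__ of the wrapped handler
        def wrapper(*args, **kwargs):
            try:
                return function(*args, **kwargs)
            except Exception:
                # notify_user is the question's own helper, assumed importable here
                notify_user(text=f'Unknown error. {text} Please try again later')
                # .exception() logs at ERROR level and appends the full traceback
                loguru_logger.exception('Unhandled error in {}', function.__name__)
        return wrapper
    return decorator

Note that @wraps does not change what loguru's {function} field shows here - the log call still happens inside wrapper's frame - which is why the traceback, not the format string, is what identifies the failing handler.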
Related
I have a class in which I log errors and raise them. However, I have different functions which expect that error.
Now, even if these functions catch the error properly, it is still logged. This leads to confusing log files in which multiple conflicting entries can be seen. For example:
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()
logger.setLevel(logging.INFO)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.info("It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Here, both messages show up in "./log". Preferably, I would like to suppress the first error log message if it is caught later on.
I have not seen an answer anywhere else. Maybe the way I am doing things suggests bad design. Any ideas?
You cannot really ask Python whether an exception will be caught later on. So your only choice is to log only after you know whether the exception was caught or not.
One possible solution (though I'm not sure if this will work in your context):
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        # logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.warning("It was not so stupid after all!")

try:
    Foo = Foo()
    Foo.foo_except()
    Foo.foo_error()
except Exception as exc:
    if isinstance(exc, RuntimeError):
        logger.error("%s", exc)
    raise
After some more thinking and several failed attempts, I arrived at the following answer.
Firstly, as @gelonida mentioned:
You cannot really ask Python whether an exception will be caught later on.
This implies that a log entry which accompanies a raised exception has to be written immediately, because if the exception is never caught, a deferred log entry would be missing from the file.
So instead of trying to control which log message gets written to file, we should implement a way to delete voided log messages from the file.
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()
logger.setLevel(logging.INFO)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.info("It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Following that logic, we should replace the line logger.info("It was not so stupid after all!") in the original example above with a function that deletes the last committed log message and logs the correct one instead!
One way to achieve this is to modify the logging class and add two components. Namely a log record history and a FileHandler which supports deletion of log records. Let's start with the log record history.
class RecordHistory:
    def __init__(self):
        self._record_history = []

    def write(self, record):
        self._record_history.append(record)

    def flush(self):
        pass

    def get(self):
        return self._record_history[-1]

    def pop(self):
        return self._record_history.pop()
This is basically a data container which implements the write and flush methods alongside some other conveniences. The write and flush methods are required by logging.StreamHandler. For more information visit the logging.handlers documentation.
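A quick usage sketch of the container on its own (with a hypothetical record string), showing the stream-like write method alongside the get/pop conveniences:

history = RecordHistory()
history.write("first record\n")   # a StreamHandler would call this with a formatted record
print(history.get())              # peek at the most recent record
print(history.pop())              # remove and return it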
Next, we modify the existing logging.FileHandler to support the revoke method. This method allows us to delete a specific log record.
import re

class RevokableFileHandler(logging.FileHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def revoke(self, record):
        with open(self.baseFilename, mode="r+") as log:
            # re.escape keeps regex metacharacters in the record
            # from being interpreted as a pattern
            substitute = re.sub(re.escape(record), "", log.read(), count=1)
            log.seek(0)
            log.write(substitute)
            log.truncate()
Finally, we modify the Logger class. Note however that we cannot inherit from logging.Logger directly, as stated here. Additionally, we add a logging.StreamHandler which pushes log records to our RecordHistory object. We also implement an addRevokableHandler method which registers all handlers that support revoking records.
import logging

class Logger(logging.getLoggerClass()):
    def __init__(self, name):
        super().__init__(name)
        self.revokable_handlers = []
        self.record_history = RecordHistory()
        stream_handler = logging.StreamHandler(stream=self.record_history)
        stream_handler.setLevel(logging.INFO)
        self.addHandler(stream_handler)

    def addRevokableHandler(self, handler):
        self.revokable_handlers.append(handler)
        super().addHandler(handler)

    def pop_and_log(self, level, msg):
        record = self.record_history.pop()
        for handler in self.revokable_handlers:
            handler.revoke(record)
        self.log(level, msg)
This leads to the following solution in the original code:
logging.setLoggerClass(Logger)
logger = logging.getLogger("root")
logger.setLevel(logging.INFO)

file_handler = RevokableFileHandler("./log")
file_handler.setLevel(logging.INFO)
logger.addRevokableHandler(file_handler)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.pop_and_log(logging.INFO, "It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Hopefully this lengthy answer can be of use to someone. Though it is still not clear to me if logging errors and info messages in such a way is considered bad code design.
I use assertions a lot in my code, and I would like to log any assertion errors that I have. After googling the problem, I didn't find a convenient solution.
So what I came up with is adding a method to the logging.Logger class.
import logging

def assertion(self, bool_condition, message):
    try:
        assert bool_condition, message
    except AssertionError:
        self.exception(message)
        raise

logging.Logger.assertion = assertion

""" apply log config """
log = logging.getLogger(__name__)
log.assertion(1 == 2, 'Assertion failed.')
It seems to do the job but I was wondering if it is a good practice to do so.
I am working with a class in Python that is part of a bigger program. The class calls different methods.
If there is an error in one of the methods, I would like the code to keep running, but after the program is finished I want to be able to see which methods had potential errors in them.
Below is roughly how I am structuring it at the moment, and this solution doesn't scale very well with more methods. Is there a better way to provide feedback (after the code has been fully run) as to which of the methods had a potential error?
class Class():
    def __init__(self):
        try:
            self.method_1()
        except:
            self.error_method1 = "Yes"
        try:
            self.method_2()
        except:
            self.error_method2 = "Yes"
        try:
            self.method_3()
        except:
            self.error_method3 = "Yes"
Although you could use sys.exc_info() to retrieve information about an Exception when one occurs, as I mentioned in a comment, doing so may not be required since Python's standard try/except mechanism seems adequate.
Below is a runnable example showing how to do so in order to provide "feedback" later about the execution of several methods of a class. This approach uses a decorator function, so should scale well since the same decorator can be applied to as many of the class' methods as desired.
from contextlib import contextmanager
from functools import wraps
import sys
from textwrap import indent

def provide_feedback(method):
    """ Decorator to trap exceptions and add messages to feedback. """
    @wraps(method)
    def wrapped_method(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self._feedback.append(
                '{!r} exception occurred in {}()'.format(exc, method.__qualname__))
    return wrapped_method

class Class():
    def __init__(self):
        with self.feedback():
            self.method_1()
            self.method_2()
            self.method_3()

    @contextmanager
    def feedback(self):
        self._feedback = []
        try:
            yield
        finally:
            # Example of what could be done with any exception messages.
            # They could instead be appended to some higher-level container.
            if self._feedback:
                print('Feedback:')
                print(indent('\n'.join(self._feedback), '  '))

    @provide_feedback
    def method_1(self):
        raise RuntimeError('bogus')

    @provide_feedback
    def method_2(self):
        pass

    @provide_feedback
    def method_3(self):
        raise StopIteration('Not enough foobar to go around')

inst = Class()
Output:
Feedback:
  RuntimeError('bogus') exception occurred in Class.method_1()
  StopIteration('Not enough foobar to go around') exception occurred in Class.method_3()
In my Python code, I expect that exceptions could be raised after calling the method requests.Session.request(), for example these:
requests.exceptions.ConnectTimeout
requests.exceptions.ReadTimeout
requests.exceptions.Timeout
When any of these expected exceptions is raised, I handle it appropriately, for example with a possible retry.
My question: I am using py.test for unit testing, and I purposely want to inject raised exceptions from specific parts of my code. For example, the function that calls requests.Session.request(), instead of returning a valid requests.Response, raises one of the requests exceptions.
I want to make sure that my code successfully handles expected and unexpected exceptions coming from other packages, including those from requests.
Maybe... Is there a @decorator that I could add to the aforementioned function to raise exceptions upon request during unit testing?
Suggestions for doing exception injection in unit testing? (Proper phrasing of my question would be greatly appreciated.)
Thanks for the responses!!!
Here is the entire singleton class that creates requests.Session and calls requests.Session.request():
class MyRequest(metaclass=Singleton):
    def __init__(self, retry_tries=3, retry_backoff=0.1, retry_codes=None):
        self.session = requests.session()
        if retry_codes is None:
            retry_codes = set(REQUEST_RETRY_HTTP_STATUS_CODES)
        self.session.mount(
            'http',
            HTTPAdapter(
                max_retries=Retry(
                    total=retry_tries,
                    backoff_factor=retry_backoff,
                    status_forcelist=retry_codes,
                ),
            ),
        )

    def request(self, request_method, request_url, **kwargs):
        try:
            return self.session.request(method=request_method, url=request_url, **kwargs)
        except Exception as ex:
            log.warning(
                "Session Request: Failed: {}".format(get_exception_message(ex)),
                extra={
                    'request_method': request_method,
                    'request_url': request_url,
                }
            )
            raise
You can make use of py.test raises, check it here: http://doc.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions
Taking into account your code, you could do something along the lines of the following:
from requests.exceptions import ConnectTimeout, ReadTimeout, Timeout
from unittest.mock import Mock, patch
import pytest

class TestRequestService:
    @patch('path_to_module.MyRequest')
    def test_custom_request(self, my_request_mock):
        my_request_mock.request.side_effect = ConnectTimeout
        with pytest.raises(ConnectTimeout):
            my_request_mock.request(Mock(), Mock())
Moreover, you could make use of pytest.mark.parametrize (http://doc.pytest.org/en/latest/parametrize.html) as well:
from requests.exceptions import ConnectTimeout, ReadTimeout, Timeout
from unittest.mock import Mock, patch
import pytest

class TestRequestService:
    @pytest.mark.parametrize("expected_exception", [ConnectTimeout, ReadTimeout, Timeout])
    @patch('path_to_module.MyRequest')
    def test_custom_request(self, my_request_mock, expected_exception):
        my_request_mock.request.side_effect = expected_exception
        with pytest.raises(expected_exception):
            my_request_mock.request(Mock(), Mock())
Here you can find some more examples about parametrize: http://layer0.authentise.com/pytest-and-parametrization.html
In my application I am catching the exception requests.exceptions.ConnectionError
and returning the message held in the expected variable below.
So the test looks like this:
import pytest
import requests

expected = {'error': 'cant connect to given url'}

class MockConnectionError:
    def __init__(self, *args, **kwargs):
        raise requests.exceptions.ConnectionError

def test_project_method(monkeypatch):
    monkeypatch.setattr("requests.get", MockConnectionError)
    response = project_method('http://some.url.com/')
    assert response == expected
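For context, project_method here is the answerer's own function under test; a hypothetical implementation consistent with the test above might look like this (the name and error dict come from the test, everything else is assumed):

import requests

def project_method(url):
    # hypothetical sketch of the function under test: convert a
    # connection failure into the error dict the test expects
    try:
        return requests.get(url).json()
    except requests.exceptions.ConnectionError:
        return {'error': 'cant connect to given url'}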
Patching, mocking and dependency injection are techniques to inject fake objects. Patching is sometimes hard to do right; dependency injection, on the other hand, requires that you change the code you want to test.
This is just a simple example of how to use dependency injection. First, the code we want to test:
import requests
...

def fetch_data(url, get=requests.get):
    return get(url).json()

# this is how we use fetch_data in productive code:
answer = fetch_data("www.google.com?" + term)
And this is then the test:
import pytest
from requests.exceptions import ConnectTimeout

def test_fetch():
    def get_with_timeout(url):
        raise ConnectTimeout("message")

    with pytest.raises(ConnectTimeout) as e:
        # and now we inject the fake get method:
        fetch_data("https://google.com", get=get_with_timeout)

    assert str(e.value) == "message"
In your example above, the mocking technique would be as follows:
def test_exception():
    class TimeoutSessionMock:
        # MyRequest.request() delegates to self.session.request(),
        # so that is the method the fake session must provide
        def request(self, *args, **kwargs):
            raise ConnectTimeout("message")

    mr = MyRequest()
    mr.session = TimeoutSessionMock()
    with pytest.raises(ConnectTimeout) as e:
        mr.request("get", "http://google.com")

    assert str(e.value) == "message"
I've got a hierarchy and code similar to this:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def cmd_a(self):
        BackendRequest().exec()

    def cmd_b(self):
        BackendRequest().exec()
The goal is to let a developer work with Frontend objects and Frontend exceptions only, within the cmd_x functions of Frontend.
Basically, I need a place to handle common BackendException types and raise FrontendException instead. For example:
class Frontend:
    def cmd_a(self):
        try:
            BackendRequest().exec()
        except BackendException as e:
            raise FrontendException()
And this will be repeated in each cmd_x function! It's so ugly! And it deals with Backend internals! I want to remove the repeated exception handling.
Any suggestions?
By the way, here is my own solution, but I find it ugly too, so look at it after you've tried to suggest something. Maybe you'll have suggestions about my solution as well.
class BaseFrontend:
    def exec_request(self, req):
        try:
            return req.exec()
        except BackendException as e:
            raise FrontendException

class Frontend(BaseFrontend):
    def cmd_a(self):
        return self.exec_request(BackendRequest())

    def cmd_b(self):
        return self.exec_request(BackendRequest())
EDIT: OK, yes, I know I don't need to create a lot of classes to build a simple API. But let's see what I need as the result:
class APIManager:
    def cmd_a(): ...
    def cmd_b(): ...
This manager needs to access an HTTP REST service to perform each command. So, if I get an error during a REST request, I need to raise an APIManagerException - I can't leak the raw pycurl exception, because the APIManager user doesn't know what pycurl is; he would be confused by a pycurl error when he passes a wrong ID as the argument of cmd_x.
So I need to raise informative exceptions for some common cases. Let it be just one exception - APIManagerException. But I don't want to repeat the try...except block each time, in each command, for each pycurl request. In fact, I want to handle certain errors in the commands (the cmd_x functions), not to parse pycurl errors.
You can create a decorator that wraps all Frontend calls, catches BackendExceptions, and raises FrontendException if they are thrown. (Honestly, though, it's not clear why Frontend and Backend are classes and not a set of functions.) See below:
class FrontendException(Exception):
    pass

class BackendException(Exception):
    pass

class BackendRequest:
    def exec(self):
        raise BackendException()

class Frontend:
    def back_raiser(func):
        def wrapped(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except BackendException:
                raise FrontendException
        return wrapped

    @back_raiser
    def cmd_a(self):
        BackendRequest().exec()

    @back_raiser
    def cmd_b(self):
        BackendRequest().exec()
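A short usage sketch of the decorated class: callers only ever see FrontendException, whichever cmd_x fails underneath.

try:
    Frontend().cmd_a()
except FrontendException:
    # the BackendException was translated by back_raiser
    print("handled a frontend-level error")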