How to create a custom validator in the Python Clint module

I am using the Python package Clint to prettify the input handling in my application. The package provides a validators module that can be combined with prompt to properly ask users for input.
I was looking for a way to implement a custom validator in Clint, since the module's list of built-in validator classes is relatively short:
[FileValidator, IntegerValidator, OptionValidator, PathValidator, RegexValidator, ValidationError]
So I wrote the code below:
from clint.textui import prompt, validators

class NumberValidator(object):
    message = 'Input is not valid.'

    def __init__(self, message=None):
        if message is not None:
            self.message = message

    def __call__(self, value):
        """
        Validates the input.
        """
        try:
            if int(value) > 10:
                return value
            else:
                raise ValueError()
        except (TypeError, ValueError):
            raise validators.ValidationError(self.message)

answer = prompt.query('Insert range in days:',
                      '365',
                      validators=[NumberValidator("Must be > 10")],
                      batch=False)
print(answer)
It works, but I find the solution a bit messy, because with this approach I have to create a new class every time I need to perform a different type of validation.
I think it would be better if the class could somehow be made dynamic with decorators, accepting a new function each time it is instantiated, but I am not very comfortable with decorators yet.
So I am asking for help to make this solution more Pythonic.

Not sure if this is the best way, but I may have found a better approach. In the code below I can create as many custom functions as I want (custom_validation_1, custom_validation_2, custom_validation_3...) and then just change the validators parameter in prompt.query.
from clint.textui import prompt, validators

class InputValidator(object):
    message = 'Input is not valid.'

    def __init__(self, fun, message=None, *args):
        if message is not None:
            self.message = message
        self.my_function = fun
        self.my_args = args

    def __call__(self, value):
        """
        Validates the input.
        """
        try:
            return self.my_function(value, *self.my_args)
        except (TypeError, ValueError):
            raise validators.ValidationError(self.message)

def custom_validation_1(value, number):
    if int(value) > int(number):
        return value
    else:
        raise ValueError

answer = prompt.query('Insert range in days:',
                      '365',
                      validators=[InputValidator(custom_validation_1,
                                                 "Must be greater than 10",
                                                 10)],
                      batch=False)
print(answer)
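As a follow-up on the decorator idea from the original question, here is a minimal sketch (the validator decorator and the greater_than_ten function are my own names, not part of Clint) of a decorator factory that wraps a plain checking function into a callable raising validators.ValidationError, which appears to be all prompt.query needs:

from clint.textui import prompt, validators

def validator(message='Input is not valid.'):
    """Decorator factory: wrap a plain checking function into a Clint-style validator."""
    def decorate(fun):
        def wrapper(value):
            try:
                return fun(value)
            except (TypeError, ValueError):
                raise validators.ValidationError(message)
        return wrapper
    return decorate

@validator("Must be greater than 10")
def greater_than_ten(value):
    # plain validation logic; raise ValueError to signal invalid input
    if int(value) > 10:
        return value
    raise ValueError

answer = prompt.query('Insert range in days:', '365',
                      validators=[greater_than_ten], batch=False)
print(answer)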


Pythonic Classes and the Zen of Python

The Zen of Python says:
“There should be one—and preferably only one—obvious way to do it.”
Let’s say I want to create a class that builds a financial transaction. The class should allow the user to build a transaction and then call a sign() method to sign the transaction in preparation for it to be broadcast via an API call.
The class will have the following parameters:
sender
recipient
amount
signer (private key for signing)
metadata
signed_data
All of these are strings except for amount, which is an int. All are required except for the last two: metadata is optional, and signed_data is created when the sign() method is called.
We would like all of the parameters to undergo some kind of validation before the signing happens, so we can reject badly formatted transactions by raising an appropriate error for the user.
This seems straightforward using a classic Python class and constructor:
class Transaction:

    def __init__(self, sender, recipient, amount, signer, metadata=None):
        self.sender = sender
        self.recipient = recipient
        self.amount = amount
        self.signer = signer
        if metadata:
            self.metadata = metadata

    def is_valid(self):
        # check that all required parameters are valid and exist and return True,
        # otherwise return False
        ...

    def sign(self):
        if self.is_valid():
            # sign transaction
            self.signed_data = "pretend signature"
        else:
            # raise InvalidTransactionError
            ...
Or with properties:
class Transaction:

    def __init__(self, sender, recipient, amount, signer, metadata=None):
        self._sender = sender
        self._recipient = recipient
        self._amount = amount
        self._signer = signer
        self._signed_data = None
        if metadata:
            self._metadata = metadata

    @property
    def sender(self):
        return self._sender

    @sender.setter
    def sender(self, sender):
        # validate value, raise InvalidParamError if invalid
        self._sender = sender

    @property
    def recipient(self):
        return self._recipient

    @recipient.setter
    def recipient(self, recipient):
        # validate value, raise InvalidParamError if invalid
        self._recipient = recipient

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, amount):
        # validate value, raise InvalidParamError if invalid
        self._amount = amount

    @property
    def signer(self):
        return self._signer

    @signer.setter
    def signer(self, signer):
        # validate value, raise InvalidParamError if invalid
        self._signer = signer

    @property
    def metadata(self):
        return self._metadata

    @metadata.setter
    def metadata(self, metadata):
        # validate value, raise InvalidParamError if invalid
        self._metadata = metadata

    @property
    def signed_data(self):
        return self._signed_data

    @signed_data.setter
    def signed_data(self, signed_data):
        # validate value, raise InvalidParamError if invalid
        self._signed_data = signed_data

    def is_valid(self):
        return (self.sender and self.recipient and self.amount and self.signer)

    def sign(self):
        if self.is_valid():
            # sign transaction
            self.signed_data = "pretend signature"
        else:
            # raise InvalidTransactionError
            print("Invalid Transaction!")
We can now validate each value when it's set, so by the time we go to sign we know we have valid parameters, and the is_valid() method only has to check that all required parameters have been set. This feels a little more Pythonic to me than doing all the validation in a single is_valid() method, but I am unsure whether all the extra boilerplate code is really worth it.
With dataclasses:
from dataclasses import dataclass

@dataclass
class Transaction:
    sender: str
    recipient: str
    amount: int
    signer: str
    metadata: str = None
    signed_data: str = None

    def is_valid(self):
        # check that all parameters are valid and exist and return True,
        # otherwise return False
        ...

    def sign(self):
        if self.is_valid():
            # sign transaction
            self.signed_data = "pretend signature"
        else:
            # raise InvalidTransactionError
            print("Invalid Transaction!")
Comparing this to Approach 1, this is pretty nice. It's concise, clean, and readable and already has __init__(), __repr__() and __eq__() methods built in. On the other hand, compared to Approach 2 we're back to validating all the inputs via a massive is_valid() method.
We could try to use properties with dataclasses but that's actually harder than it sounds. According to this blog post it can be done something like this:
@dataclass
class Transaction:
    sender: str
    _sender: field(init=False, repr=False)
    recipient: str
    _recipient: field(init=False, repr=False)
    . . .

    # properties for all parameters

    def is_valid(self):
        # if all parameters exist, return True,
        # otherwise return False
        ...

    def sign(self):
        if self.is_valid():
            # sign transaction
            self.signed_data = "pretend signature"
        else:
            # raise InvalidTransactionError
            print("Invalid Transaction!")
Is there one and only one obvious way to do this? Are dataclasses recommended for this kind of application?
As a general rule, and not limited to Python, it is a good idea to write code which "fails fast": that is, if something goes wrong at runtime, you want it to be detected and signalled (e.g. by throwing an exception) as early as possible.
Especially in the context of debugging, if the bug is that an invalid value is being set, you want the exception to be thrown at the time the value is set, so that the stack trace includes the method setting the invalid value. If the exception is thrown at the time the value is used, then you can't signal which part of the code caused the invalid value.
Of your three examples, only the second one allows you to follow this principle. It may require more boilerplate code, but writing boilerplate code is easy and doesn't take much time, compared to debugging without a meaningful stack trace.
By the way, if you have setters which do validation, then you should call these setters from your constructor too, otherwise it's possible to create an object with an invalid initial state.
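To illustrate that last point, here is a minimal sketch (only two fields shown; the specific checks are illustrative, and InvalidParamError is the placeholder error name from the question) of routing __init__ through the property setters so that validation also runs at construction time:

class InvalidParamError(ValueError):
    pass

class Transaction:

    def __init__(self, sender, amount):
        # assigning to the public names invokes the setters below,
        # so an invalid value is rejected at construction time
        self.sender = sender
        self.amount = amount

    @property
    def sender(self):
        return self._sender

    @sender.setter
    def sender(self, sender):
        if not isinstance(sender, str) or not sender:
            raise InvalidParamError("sender must be a non-empty string")
        self._sender = sender

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, amount):
        if not isinstance(amount, int) or amount <= 0:
            raise InvalidParamError("amount must be a positive int")
        self._amount = amount

Transaction("alice", 100)   # fine
Transaction("alice", -5)    # raises InvalidParamError during __init__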
Given your constraints, I think your dataclass approach can be improved to produce an expressive and idiomatic solution with very strong runtime assertions about the resulting Transaction instances, mostly by leveraging the __post_init__ mechanism:
from dataclasses import dataclass, asdict, field
from typing import Optional

@dataclass(frozen=True)
class Transaction:
    sender: str
    recipient: str
    amount: int
    signer: str
    metadata: Optional[str] = None
    signed_data: str = field(init=False)

    def is_valid(self) -> bool:
        ...  # implement your validity assertion logic

    def __post_init__(self):
        if self.is_valid():
            object.__setattr__(self, "signed_data", "pretend signature")
        else:
            raise ValueError(f"Invalid transaction with parameter list "
                             f"{asdict(self)}.")
This reduces the amount of code you have to maintain and understand to a degree where every written line relates to a meaningful part of your requirements, which is the essence of pythonic code.
Put into words, instances of this Transaction class may specify metadata but don't need to, and may not supply their own signed_data, something which was possible in your variant #3. Attributes can't be mutated any more after initialization (enforced by frozen=True), so an instance that is valid cannot be altered into an invalid state. And most importantly, since the validation is now part of the constructor, it is impossible for an invalid instance to exist. Whenever you are able to refer to a Transaction at runtime, you can be 100% sure that it passed the validity check and would do so again.
Since you based your question on python-zen conformity (referring to Beautiful is better than ugly and Simple is better than complex in particular), I'd say this solution is preferable to the property based one.
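For illustration, a quick usage sketch of the dataclass above (assuming is_valid() has been filled in to check the required fields):

tx = Transaction(sender="alice", recipient="bob", amount=100, signer="key")
print(tx.signed_data)   # "pretend signature", set in __post_init__
tx.amount = 0           # raises dataclasses.FrozenInstanceError: the instance is immutable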

How to provide feedback that an error occurred in the error handling methods of a class?

I am working with a class in Python that is part of a bigger program. The class calls several different methods.
If there is an error in one of the methods, I would like the code to keep running, but once the program has finished I want to be able to see which methods had errors in them.
Below is roughly how I am structuring it at the moment, and this solution doesn't scale very well with more methods. Is there a better way to provide feedback (after the code has been fully run) as to which of the methods had an error?
class Class():

    def __init__(self):
        try:
            self.method_1()
        except:
            self.error_method1 = "Yes"
        try:
            self.method_2()
        except:
            self.error_method2 = "Yes"
        try:
            self.method_3()
        except:
            self.error_method3 = "Yes"
Although you could use sys.exc_info() to retrieve information about an Exception when one occurs, as I mentioned in a comment, doing so may not be required since Python's standard try/except mechanism seems adequate.
Below is a runnable example showing how to do so in order to provide "feedback" later about the execution of several methods of a class. This approach uses a decorator function, so should scale well since the same decorator can be applied to as many of the class' methods as desired.
from contextlib import contextmanager
from functools import wraps
from textwrap import indent

def provide_feedback(method):
    """ Decorator to trap exceptions and add messages to feedback. """
    @wraps(method)
    def wrapped_method(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self._feedback.append(
                '{!r} exception occurred in {}()'.format(exc, method.__qualname__))
    return wrapped_method

class Class():

    def __init__(self):
        with self.feedback():
            self.method_1()
            self.method_2()
            self.method_3()

    @contextmanager
    def feedback(self):
        self._feedback = []
        try:
            yield
        finally:
            # Example of what could be done with any exception messages.
            # They could instead be appended to some higher-level container.
            if self._feedback:
                print('Feedback:')
                print(indent('\n'.join(self._feedback), ' '))

    @provide_feedback
    def method_1(self):
        raise RuntimeError('bogus')

    @provide_feedback
    def method_2(self):
        pass

    @provide_feedback
    def method_3(self):
        raise StopIteration('Not enough foobar to go around')

inst = Class()
Output:
Feedback:
RuntimeError('bogus') exception occurred in Class.method_1()
StopIteration('Not enough foobar to go around') exception occurred in Class.method_3()

Python 3 - raise statement

I am currently working through Learning Python by Mark Lutz and David Ascher, and I have come across a section of code that keeps raising errors. I am aware that the book was written with Python 2 in mind, unlike the Python 3 I am using. I was wondering if anyone knew a solution to my problem, as I have looked everywhere but have been unable to find one.
MyBad = 'oops'

def stuff():
    raise MyBad

try:
    stuff()
except MyBad:
    print('got it')
Basically, MyBad is not an exception, and the raise statement can only be used with exceptions.
To make MyBad an exception, you must make it a subclass of Exception. For instance, the following will work:
class MyBad(Exception):
    pass

def stuff():
    raise MyBad

try:
    stuff()
except MyBad:
    print('got it')
Output:
got it
However, it's better to raise an instance of an exception class, rather than the class itself, because it allows the use of parameters, usually describing the error. The following example illustrates this:
class MyBad(Exception):
    def __init__(self, message):
        super().__init__()
        self.message = message

def stuff(message):
    raise MyBad(message)

try:
    stuff("Your bad")
except MyBad as error:
    print('got it (message: {})'.format(error.message))
Output:
got it (message: Your bad)
You cannot raise a custom exception without creating a class (at least an empty one).
You can also add custom text by defining an __init__ method instead of using pass:
class MyBad(Exception):
    pass
    # def __init__(self, txt):
    #     print(txt)

def stuff():
    raise MyBad('test')

try:
    stuff()
except MyBad:
    print('got it')
If you use pass, you will get:
got it
If you use the commented-out __init__() instead, you will get:
test
got it

Exceptions that reflect error codes of a remote service

I'm working with an external service which reports errors by code.
I have the list of error codes and the associated messages. Say, the following categories exist: authentication error, server error.
What is the smartest way to implement these errors in Python so I can always look up an error by code and get the corresponding exception object?
Here's my straightforward approach:
class AuthError(Exception):
    pass

class ServerError(Exception):
    pass

map = {
    1: AuthError,
    2: ServerError,
}

def raise_code(code, message):
    """ Raise an exception by code """
    raise map[code](message)
Would like to see better solutions :)
Your method is correct, except that map should be renamed something else (e.g. ERROR_MAP) so it does not shadow the builtin of the same name.
You might also consider making the function return the exception rather than raising it:
def error(code, message):
    """ Return an exception by code """
    return ERROR_MAP[code](message)

def foo():
    raise error(code, message)
By placing the raise statement inside foo, you'd raise the error closer to where the error occurred, and there would be one or two fewer lines to trace through if the stack trace is printed.
Another approach is to create a polymorphic base class which, being instantiated, actually produces a subclass that has the matching code.
This is implemented by traversing __subclasses__() of the parent class and comparing the error code to the one defined in the class. If found, use that class instead.
Example:
class CodeError(Exception):
""" Base class """
code = None # Error code
def __new__(cls, code, *args):
# Pick the appropriate class
for E in cls.__subclasses__():
if E.code == code:
C = E
break
else:
C = cls # fall back
return super(CodeError, cls).__new__(C, code, *args)
def __init__(self, code, message):
super(CodeError, self).__init__(message)
# Subclasses with error codes
class AuthError(CodeError):
code = 1
class ServerError(CodeError):
code = 2
CodeError(1, 'Wrong password') #-> AuthError
CodeError(2, 'Failed') #-> ServerError
With this approach, it's trivial to associate error message presets, and even map one class to multiple codes with a dict.
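For example, here is a minimal sketch (the subclasses, codes, and messages are illustrative, not from the original answer) of mapping one class to several codes with a per-class dict of message presets:

class CodeError(Exception):
    """ Base class: subclasses declare the codes they handle. """
    codes = {}  # maps error code -> default message preset

    def __new__(cls, code, *args):
        # Pick the subclass whose codes dict contains this code
        for E in cls.__subclasses__():
            if code in E.codes:
                cls = E
                break
        return super(CodeError, cls).__new__(cls, code, *args)

    def __init__(self, code, message=None):
        # Fall back to the preset message for this code if none was given
        super(CodeError, self).__init__(message or self.codes.get(code))

class AuthError(CodeError):
    codes = {1: 'Authentication failed'}

class ServerError(CodeError):
    codes = {2: 'Server error', 3: 'Service unavailable'}

print(type(CodeError(3)).__name__)  # ServerError
print(CodeError(1))                 # Authentication failed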

how to mock an exception call in python?

I have a code like this:
def extract(data):
    if len(data) == 3:
        a = 3
    else:
        component = data.split("-")
        if len(component) == 3:
            a, b, c = component
        else:
            raise globals.myException("data1", "Incorrect format", data)
    return a, b, c
This is a simplified version. I want to mock the exception class globals.myException. I'm doing this:
def test_extract_data_throws_exception(self):
    with patch('globals.myException') as mock:
        mock.__init__("data1", "Incorrect format", "")
        with self.assertRaises(myException):
            self.assertEqual(extract(""), (""))
And I always get the error: "TypeError: exceptions must be old-style classes or derived from BaseException, not MagicMock"
EDIT: As @Aaron Digulla suggests, monkey patching is the correct solution. I am posting the solution to help others.
def test_extract_data_throws_exception(self):
    # monkey patching
    class ReplaceClass(myException):
        def __init__(self, module, message, detail=u''):
            pass

    globals.myException = ReplaceClass
    with self.assertRaises(myException):
        self.assertEqual(extract(""), (""))
The reason is that raise checks the type of its argument. It must be a string (a.k.a. an "old-style exception") or derived from BaseException.
Since a mock is neither, raise refuses to use it.
In this specific case, you either have to raise a real exception or use monkey patching (i.e. overwrite the symbol globals.myException in your test and restore it afterwards).
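A related option (a sketch of my own, not from the answers above; the FakeException name and the test body are assumptions) is to let unittest.mock.patch do the overwrite-and-restore for you by passing a real exception subclass as the replacement object:

from unittest import mock

class FakeException(Exception):
    """Stands in for globals.myException during the test."""
    def __init__(self, module, message, detail=u''):
        super().__init__(module, message, detail)

def test_extract_data_throws_exception(self):
    # patch() swaps in FakeException and restores the original
    # globals.myException automatically when the with-block exits
    with mock.patch('globals.myException', FakeException):
        with self.assertRaises(FakeException):
            extract("")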
