I have a function that processes an AWS Lambda context, looking for a query string parameter.
The return value for the function is always a Tuple that contains the error code and the returned value. In case of success, it returns (None, the_value_of_the_query_string). In case of failure, it returns (Exception, None).
The code is written to behave similarly to what is very commonly seen in the Go world.
Here is the line triggering the warning:
file_name = file_path.split("/")[-1]
And below is the code that takes you through everything.
class QSException(Exception):
    pass


def get_query_string(
    event: Dict, query_string: str
) -> Union[Tuple[QSException, None], Tuple[None, str]]:
    error = QSException()
    # [...snip...]
    if query_string not in event["queryStringParameters"]:
        return (error, None)
    return (None, event["queryStringParameters"][query_string])
def get_file(event: Dict, context: Dict) -> Dict:
    err, file_path = get_query_string(event, "file")
    if err is not None:
        message = {"message": "No file specified."}
        return {"statusCode": 403, "body": json.dumps(message)}

    # from here on I'm on the happy path
    file_name = file_path.split("/")[-1]
    # [...snip...]
    return {
        # [...bogus dict...]
    }
If you follow the code, I handle the error case first and return 403 on the unhappy path. That is, once I've handled the error I know for a fact that the error was None and my result was a str. So I would expect that calling .split("/") would work (which it does at runtime) and not trigger a typing warning.
Instead, I'm getting Item "None" of "Optional[str]" has no attribute "split" [union-attr].
So the question is how should typing look for this code so that I don't get this typing warning?
It is annotated correctly.
However, when you unpack the tuple with err, file_path = get_..., the connection between those two variables is lost.
A static code analyzer (mypy, pyright, ...) will now assume that err is an Optional[QSException] and file_path is an Optional[str]. Checking the type of the first variable has no effect on the inferred type of the second.
If you really want to keep that idiom, returning a tuple (exception, value), then just help the static code analyzers with asserts.
It's manual work (and therefore error prone), but I guess the tools are not clever enough to figure out the correct type in such a case.
err, file_path = get_query_string(event, "file")
if err:
    return ...
assert isinstance(file_path, str)
# now static code analyzers know the correct type
However Python is not the same language as Go, and has completely different idioms.
Returning such a tuple is an antipattern in Python. Python, unlike Go, has real exceptions. So use them.
def get_query_string(event: Dict, query_string: str) -> str:
    if query_string not in event["queryStringParameters"]:
        raise QSException()
    return event["queryStringParameters"][query_string]


def get_file(event: Dict, context: Dict) -> Dict:
    try:
        file_path = get_query_string(event, "file")
    except QSException:
        message = {"message": "No file specified."}
        return {"statusCode": 403, "body": json.dumps(message)}

    file_name = file_path.split("/")[-1]
Or alternatively just return an Optional[str] in case you don't want to raise an exception, or a Union[QSException, str].
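For reference, a minimal sketch of that Optional[str] variant (same logic as above, just without the tuple): since the check and the value are now the same variable, a plain None check is enough for mypy to narrow the type.

import json
from typing import Dict, Optional


def get_query_string(event: Dict, query_string: str) -> Optional[str]:
    if query_string not in event["queryStringParameters"]:
        return None
    return event["queryStringParameters"][query_string]


def get_file(event: Dict, context: Dict) -> Dict:
    file_path = get_query_string(event, "file")
    if file_path is None:
        message = {"message": "No file specified."}
        return {"statusCode": 403, "body": json.dumps(message)}
    # mypy narrows file_path from Optional[str] to str after the None check
    file_name = file_path.split("/")[-1]
    # ... rest of the happy path as before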
I have this mutation:
class AddStudentMutation(graphene.Mutation):
    class Arguments:
        input = StudentInputType()

    student = graphene.Field(StudentType)

    @classmethod
    @staff_member_required
    def mutate(cls, root, info, input):
        try:
            _student = Student.objects.create(**input)
        except IntegrityError:
            raise Exception("Student already created")
        return AddStudentMutation(student=_student)
Before executing the above mutation in graphiql, I add the request header "Authorization": "JWT <token>" so that the user is authorized.
But I get the error graphql.error.located_error.GraphQLLocatedError: 'NoneType' object has no attribute 'fields'.
The error doesn't occur when I remove the header. It also works fine when I include it in requests for queries. Am I doing something wrong? I need the authorization to happen for mutations too.
I tracked the traceback and it leads to the file .../site-packages\graphql_jwt\middleware.py. It appears to be a bug in the package, in the function allow_any(), line 18: field = info.schema.get_type(operation_name).fields.get(info.field_name). I'm new to this and need help.
I'm using graphene-django==2.15.0 and django-graphql-jwt==0.3.4
The allow_any function that comes with django-graphql-jwt expects to be used with queries, not mutations. You can override allow_any with a version that guards against this case with a plain try/except block:
def allow_any(info, **kwargs):
    try:
        operation_name = get_operation_name(info.operation.operation).title()
        operation_type = info.schema.get_type(operation_name)
        if hasattr(operation_type, 'fields'):
            field = operation_type.fields.get(info.field_name)
            if field is None:
                return False
        else:
            return False
        graphene_type = getattr(field.type, "graphene_type", None)
        return graphene_type is not None and issubclass(
            graphene_type, tuple(jwt_settings.JWT_ALLOW_ANY_CLASSES)
        )
    except Exception:
        return False
And in your settings.py you have to add the path to the overridden allow_any function:
GRAPHQL_JWT = {
    'JWT_ALLOW_ANY_HANDLER': 'path.to.middleware.allow_any',
}
I hope this solves your problem; it worked for me.
I have a very simple caching service that caches files on S3. Sometimes the file I am trying to cache locally does not exist on AWS S3, so in one of the modules that uses the caching service I prefer to return None if the file is not found.
However, I realize that I will be using the caching service in many other places, and my peers have told me that cache.cache_file() should still raise an error in this case, but a simpler one like FileNotFoundError, so that the caller doesn't have to do the if e.response["Error"]["Code"] == "404" check.
My Caching Code
import logging
import os
from pathlib import Path
from stat import S_IREAD, S_IRGRP, S_IROTH

from mylib import s3
from mylib.aws.clients import s3_client, s3_resource

logger = logging.getLogger(__name__)


class Cache:
    def _is_file_size_equal(self, s3_path_of_file: str, local_path: Path, file_name: str) -> bool:
        bucket, key = s3.deconstruct_s3_url(f"{s3_path_of_file}/{file_name}")
        s3_file_size = s3_resource().Object(bucket, key).content_length
        local_file_size = (local_path / file_name).stat().st_size
        return s3_file_size == local_file_size

    def cache_file(self, s3_path_of_file: str, local_path: Path, file_name: str) -> None:
        bucket, key = s3.deconstruct_s3_url(f"{s3_path_of_file}/{file_name}")
        if not (local_path / file_name).exists() or not self._is_file_size_equal(
            s3_path_of_file, local_path, file_name
        ):
            os.makedirs(local_path, exist_ok=True)
            s3_client().download_file(bucket, key, f"{local_path}/{file_name}")
            os.chmod(local_path / file_name, S_IREAD | S_IRGRP | S_IROTH)
        else:
            logger.info("Cached File is Valid!")
My Code that calls the Caching Code
def get_required_stream(environment: str, proxy_key: int) -> Optional[BinaryIO]:
    s3_overview_file_path = f"s3://{TRACK_BUCKET}/{environment}"
    overview_file = f"{some_key}.mv"
    local_path = _cache_directory(environment)
    try:
        cache.cache_file(s3_overview_file_path, local_path, overview_file)
        overview_file_cache = local_path / f"{proxy_key}.mv"
        return overview_file_cache.open("rb")
    except botocore.exceptions.ClientError as e:
        if e.response["Error"]["Code"] == "404":
            return None
        else:
            raise
Issue
Being new to Python, I am a little unsure how this would work. I assume it means that the code that calls the caching service, especially the except part, would look something like this:
except FileNotFoundError:
    return None
And in the caching service, where I have s3_client().download_file(bucket, key, f"{local_path}/{file_name}"), I would wrap that call in a try/except?
While this question probably comes across as trivial, I thought I would ask it here anyway since it is a good learning opportunity and helps me understand how to write clean code. I would love suggestions on how to achieve this, and to know whether my assumption is wrong.
def get_required_stream(environment: str, proxy_key: int) -> Optional[BinaryIO]:
    s3_overview_file_path = f"s3://{TRACK_BUCKET}/{environment}"
    overview_file = f"{some_key}.mv"
    local_path = _cache_directory(environment)
    # compute the local path up front so it is available in the except block
    overview_file_cache = local_path / f"{proxy_key}.mv"
    try:
        cache.cache_file(s3_overview_file_path, local_path, overview_file)
        return overview_file_cache.open("rb")
    except botocore.exceptions.ClientError as e:
        if e.response["Error"]["Code"] == "404":
            exc = FileNotFoundError()
            exc.filename = overview_file_cache
            raise exc
        raise
# then you can use your function like this
try:
    filedesc = get_required_stream(...)
except FileNotFoundError as e:
    print(f'{e.filename} not found')
If you do not want calling code to catch the botocore.exceptions.ClientError exception, you could wrap the body of your cache_file method in a try/except block and raise a specific exception. I would also go a step further and create a simple custom exception class that wraps botocore.exceptions.ClientError and exposes error_code and error_message from the boto exception. That way the caller doesn't have to catch FileNotFoundError for the missing-file case and then separately catch botocore.exceptions.ClientError for a different type of error (say a permission or network error). They can just catch the custom exception and inspect it for more details.
try:
    ...  # do something
except botocore.exceptions.ClientError as ex:
    raise YourCustomS3Exception(ex)  # YourCustomS3Exception needs to handle ex
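A minimal sketch of what such a wrapper could look like; the class name YourCustomS3Exception is just the placeholder from above, and the error_code/error_message attribute names are illustrative:

import botocore.exceptions


class YourCustomS3Exception(Exception):
    """Wraps botocore.exceptions.ClientError and exposes the useful parts."""

    def __init__(self, original: botocore.exceptions.ClientError) -> None:
        self.original = original
        self.error_code = original.response["Error"]["Code"]
        self.error_message = original.response["Error"].get("Message", "")
        super().__init__(f"S3 error {self.error_code}: {self.error_message}")


# inside Cache.cache_file, the download call would then be wrapped like this:
#     try:
#         s3_client().download_file(bucket, key, f"{local_path}/{file_name}")
#     except botocore.exceptions.ClientError as ex:
#         raise YourCustomS3Exception(ex) from ex
#
# and the caller catches a single exception type and inspects it:
#     except YourCustomS3Exception as err:
#         if err.error_code == "404":
#             return None
#         raise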
One clean way would be an additional file_exists() method in the Cache class, so every user of the cache can check before they attempt the actual caching/download, just like using Python's filesystem/path functions.
An exception can still occur if the file is deleted/becomes unreachable between the file_exists() call and the download, but I think in this rare case, the botocore exception is just fine.
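A minimal sketch of what such a file_exists() method might look like, assuming the same s3/s3_client helpers from the question and a head_object call (a 404 from the HEAD request means the object is missing):

import botocore.exceptions


class Cache:
    # ... existing _is_file_size_equal / cache_file methods ...

    def file_exists(self, s3_path_of_file: str, file_name: str) -> bool:
        """Return True if the object exists on S3, False if S3 reports a 404."""
        bucket, key = s3.deconstruct_s3_url(f"{s3_path_of_file}/{file_name}")
        try:
            s3_client().head_object(Bucket=bucket, Key=key)
            return True
        except botocore.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "404":
                return False
            raise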
What is the proper way to test a scenario where an exception will be raised while initializing my object? Given this snippet of code:
def __init__(self, snmp_node: str = "0", config_file_name: str = 'config.ini'):
    [...]
    self.config_file_name = config_file_name
    try:
        self.config_parser.read(self.config_file_name)
        if len(self.config_parser.sections()) == 0:
            raise FileNotFoundError
    except FileNotFoundError:
        msg = "Error msg"
        return msg
I tried the following test:
self.assertTrue("Error msg", MyObj("0", 'nonExistingIniFile.ini'))
But I got an error saying that __init__ may not return a str.
What is the proper way to handle such a situation? Or maybe some other workaround: I just want to be sure that if a user passes a wrong .ini file, the program won't accept it.
__init__ is required to return None. I think you are looking for self.assertRaises:
with self.assertRaises(FileNotFoundError):
    MyObj("0", 'nonExistingIniFile.ini')
In one application I have code which generates dynamic classes which reduces the amount of duplicated code considerably. But adding type-hints for mypy checking resulted in an error. Consider the following example code (simplified to focus on the relevant bits):
class Mapper:
    @staticmethod
    def action() -> None:
        raise NotImplementedError('Not yet implemented')


def magic(new_name: str) -> type:
    cls = type('%sMapper' % new_name.capitalize(), (Mapper,), {})

    def action() -> None:
        print('Hello')

    cls.action = staticmethod(action)
    return cls


MyCls = magic('My')
MyCls.action()
Checking this with mypy will result in the following error:
dynamic_type.py:15: error: "type" has no attribute "action"
dynamic_type.py:21: error: "type" has no attribute "action"
mypy is obviously unable to tell that the return value of the type call is a subclass of Mapper, so it complains that "type" has no attribute "action" when I assign to it.
Note that the code functions perfectly and does what it is supposed to, but mypy still complains.
Is there a way to flag cls as being a type of Mapper? I tried to simply append # type: Mapper to the line which creates the class:
cls = type('%sMapper' % new_name.capitalize(), (Mapper,), {}) # type: Mapper
But then I get the following errors:
dynamic_type.py:10: error: Incompatible types in assignment (expression has type "type", variable has type "Mapper")
dynamic_type.py:15: error: Cannot assign to a method
dynamic_type.py:15: error: Incompatible types in assignment (expression has type "staticmethod", variable has type "Callable[[], None]")
dynamic_type.py:16: error: Incompatible return value type (got "Mapper", expected "type")
dynamic_type.py:21: error: "type" has no attribute "action"
One possible solution is basically to:

- Type your magic function with the expected input and output types.
- Leave the contents of your magic function dynamically typed, with judicious use of Any and # type: ignore.

For example, something like this would work:
class Mapper:
    @staticmethod
    def action() -> None:
        raise NotImplementedError('Not yet implemented')


def magic(new_name: str) -> Mapper:
    cls = type('%sMapper' % new_name.capitalize(), (Mapper,), {})

    def action() -> None:
        print('Hello')

    cls.action = staticmethod(action)  # type: ignore
    return cls  # type: ignore


MyCls = magic('My')
MyCls.action()
It may seem slightly distasteful to leave a part of your codebase dynamically typed, but in this case, I don't think there's any avoiding it: mypy (and the PEP 484 typing ecosystem) deliberately does not try to handle super-dynamic code like this.
Instead, the best you can do is to cleanly document the "static" interface, add unit tests, and keep the dynamic portions of your code confined to as small a region as possible.
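If you would rather keep the ignores narrow, a variant sketch (not from the original answer) is to cast the result of the type(...) call to Type[Mapper]; the return line then type-checks on its own, though assigning the staticmethod still needs an ignore:

from typing import Type, cast


class Mapper:
    @staticmethod
    def action() -> None:
        raise NotImplementedError('Not yet implemented')


def magic(new_name: str) -> Type[Mapper]:
    # tell mypy that the dynamically created class is a Mapper subclass
    cls = cast(Type[Mapper], type('%sMapper' % new_name.capitalize(), (Mapper,), {}))

    def action() -> None:
        print('Hello')

    cls.action = staticmethod(action)  # type: ignore
    return cls


MyCls = magic('My')
MyCls.action()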
I'm writing an API wrapper to a couple of different web services.
I have a method that has an article url, and I want to extract text from it using alchemyapi.
def extractText(self):
    # All Extract Text Methods ---------------------------------------------------------
    # Extract page text from a web URL (ignoring navigation links, ads, etc.).
    if self.alchemyapi == True:
        self.full_text = self.alchemyObj.URLGetText(self.article_link)
which goes to the following code in the python wrapper
def URLGetText(self, url, textParams=None):
    self.CheckURL(url)
    if textParams == None:
        textParams = AlchemyAPI_TextParams()
    textParams.setUrl(url)
    return self.GetRequest("URLGetText", "url", textParams)

def GetRequest(self, apiCall, apiPrefix, paramObject):
    endpoint = 'http://' + self._hostPrefix + '.alchemyapi.com/calls/' + apiPrefix + '/' + apiCall
    endpoint += '?apikey=' + self._apiKey + paramObject.getParameterString()
    handle = urllib.urlopen(endpoint)
    result = handle.read()
    handle.close()
    xpathQuery = '/results/status'
    nodes = etree.fromstring(result).xpath(xpathQuery)
    if nodes[0].text != "OK":
        raise 'Error making API call.'
    return result
However, I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "text_proc.py", line 97, in __init__
self.alchemyObj.loadAPIKey("api_key.txt");
File "text_proc.py", line 115, in extractText
if self.alchemyapi == True:
File "/Users/Diesel/Desktop/AlchemyAPI.py", line 502, in URLGetText
return self.GetRequest("URLGetText", "url", textParams)
File "/Users/Diesel/Desktop/AlchemyAPI.py", line 618, in GetRequest
raise 'Error making API call.'
I know I'm somehow passing the url string to the api wrapper in a faulty format, but I can't figure out how to fix it.
The information provided is not actually very helpful to diagnose or solve the problem. Have you considered taking a look at the response from the server? You might inspect a complete traffic log using Fiddler.
Additionally, the SDK provided by Alchemy doesn't seem to be of - cough, cough - the greatest quality. Since it really consists only of around 600 lines of source code, I'd consider writing a shorter, more robust / pythonic / whatever SDK.
I might also add that right now, even the on-site demo at the Alchemy web site is failing, so maybe your problem is related to that. I really suggest taking a look at the traffic.
You should raise Exception or a subclass thereof, instead of a string.
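For example, the check in GetRequest() could then read (a minimal sketch of that one-line change):

if nodes[0].text != "OK":
    raise Exception('Error making API call.')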
You're getting the error because your function GetRequest() is raising a string as an exception:
if nodes[0].text != "OK":
    raise 'Error making API call.'
If that's not what you want, you have two options:

1. You can have the function return the string or None, or
2. You can pass the error message to a real subclass of Exception (as suggested by knutin).
In either case, if you are assigning that return value to a variable, you can handle it accordingly. Here is an example:
Option 1
Let's assume you decide to have GetRequest() return None:
def URLGetText(self, url, textParams=None):
    self.CheckURL(url)
    if textParams == None:
        textParams = AlchemyAPI_TextParams()
    textParams.setUrl(url)
    # Capture the value of GetRequest() before returning it
    retval = self.GetRequest("URLGetText", "url", textParams)
    if retval is None:
        print 'Error making API call.'  # print the error but still return
    return retval

def GetRequest(self, apiCall, apiPrefix, paramObject):
    # ...
    if nodes[0].text != "OK":
        return None
    return result
This option is a little ambiguous: how do you know whether it was really an error, or whether the return value truly was None?
Option 2
This is probably the better way to do it:
First, create a subclass of Exception:
class GetRequestError(Exception):
    """Error returned from GetRequest()"""
    pass
Then raise it in GetRequest():
def URLGetText(self, url, textParams=None):
    self.CheckURL(url)
    if textParams == None:
        textParams = AlchemyAPI_TextParams()
    textParams.setUrl(url)
    # Attempt to get a legit return value & handle errors
    try:
        retval = self.GetRequest("URLGetText", "url", textParams)
    except GetRequestError as err:
        print err  # prints 'Error making API call.'
        # handle the error here
        retval = None
    return retval

def GetRequest(self, apiCall, apiPrefix, paramObject):
    # ...
    if nodes[0].text != "OK":
        raise GetRequestError('Error making API call.')
    return result
This way you're raising a legitimate error when GetRequest() doesn't return the desired result, and you can then handle it with a try..except block: optionally print the error, stop the program there, or keep going (which is what I think you want to do, based on your question).
This is Shaun from AlchemyAPI. We just posted a new version of the python SDK that raises exceptions properly. You can get it here http://www.alchemyapi.com/tools/.
If you have any other feedback about the SDK, please message me. Thanks for using our NLP service.