I have a dictionary like the following:
OAUTH2_PROVIDER = {
    'SCOPES': {
        'read': 'Read scope',
        'write': 'Write scope',
        'userinfo': 'Access to user info',
        'full-userinfo': 'Access to full user info',
    },
    'DEFAULT_SCOPES': {
        'userinfo'
    },
    'ALLOWED_REDIRECT_URI_SCHEMES': ['http', 'https', 'rutube'],
    'PKCE_REQUIRED': import_string('tools.oauth2.is_pkce_required'),
    'OAUTH2_VALIDATOR_CLASS': 'oauth2.validator.OAuth2WithJwtValidator',
    'REFRESH_TOKEN_EXPIRE_SECONDS': 30 * 24 * 60 * 60,
    'ACCESS_TOKEN_EXPIRE_SECONDS': 3600,
}
I want to annotate the following key so that it is checked to always be an integer:
'REFRESH_TOKEN_EXPIRE_SECONDS': 30 * 24 * 60 * 60,
In Python 3.6 we don't have TypedDict. What can I replace it with?
Inside dictionaries, object types are preserved. You can set the type of a value inside the dictionary, since each value is its own object with its own type.
In your example, you can make the value an int with:
OAUTH2_PROVIDER['REFRESH_TOKEN_EXPIRE_SECONDS'] = int(
    OAUTH2_PROVIDER['REFRESH_TOKEN_EXPIRE_SECONDS']
)
print(type(OAUTH2_PROVIDER['REFRESH_TOKEN_EXPIRE_SECONDS']))  #=> <class 'int'>
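If the goal is only to verify the type at runtime rather than convert it, a plain isinstance check is enough (a minimal sketch, not from the original answer):
# Fail fast if the setting is not an int.
if not isinstance(OAUTH2_PROVIDER['REFRESH_TOKEN_EXPIRE_SECONDS'], int):
    raise TypeError('REFRESH_TOKEN_EXPIRE_SECONDS must be an int')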
EDIT:
Getting to OP's core issue of removing nested if statements here:
You can add a single if statement that forces an int or raises an error:
from datetime import timedelta
from django.core.exceptions import ImproperlyConfigured

# oauth2_settings and now are assumed to come from the surrounding code
REFRESH_TOKEN_EXPIRE_SECONDS = oauth2_settings.REFRESH_TOKEN_EXPIRE_SECONDS
if isinstance(REFRESH_TOKEN_EXPIRE_SECONDS, int):
    REFRESH_TOKEN_EXPIRE_SECONDS = timedelta(seconds=REFRESH_TOKEN_EXPIRE_SECONDS)
else:
    e = "REFRESH_TOKEN_EXPIRE_SECONDS must be an int"
    raise ImproperlyConfigured(e)
refresh_expire_at = now - REFRESH_TOKEN_EXPIRE_SECONDS
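As an aside, if a third-party dependency is acceptable, the TypedDict backport from the mypy_extensions package (newer typing_extensions also ships it) runs on Python 3.6. It only helps static checkers such as mypy, not at runtime; a minimal sketch:
from mypy_extensions import TypedDict

# Only the keys of interest are declared in this sketch.
OAuth2Expiry = TypedDict(
    'OAuth2Expiry',
    {'REFRESH_TOKEN_EXPIRE_SECONDS': int, 'ACCESS_TOKEN_EXPIRE_SECONDS': int},
)

expiry_settings: OAuth2Expiry = {
    'REFRESH_TOKEN_EXPIRE_SECONDS': 30 * 24 * 60 * 60,
    'ACCESS_TOKEN_EXPIRE_SECONDS': 3600,
}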
Here is the JSON response I get from an API request:
{
    "associates": [
        {
            "name": "DOE",
            "fname": "John",
            "direct_shares": 50,
            "direct_shares_details": {
                "shares_PP": 25,
                "shares_NP": 25
            },
            "indirect_shares": 50,
            "indirect_shares_details": {
                "first_type": {
                    "shares_PP": 25,
                    "shares_NP": 0
                },
                "second_type": {
                    "shares_PP": 25,
                    "shares_NP": 0
                }
            }
        }
    ]
}
However, on some occasions some values will be equal to None. In that case I handle it in my function for all the values that I know will be integers, but it doesn't work in this scenario for the nested keys inside indirect_shares_details:
{
    "associates": [
        {
            "name": "DOE",
            "fname": "John",
            "direct_shares": 50,
            "direct_shares_details": {
                "shares_PP": 25,
                "shares_NP": 25
            },
            "indirect_shares": None,
            "indirect_shares_details": None
        }
    ]
}
So when I run my function to get the API values and put them in a custom dict, I get an error because the keys simply don't exist in the response.
def get_shares_data(response):
    associate_from_api = []
    for i in response["associates"]:
        associate_data = {
            "PM_shares": round(company["Shares"], 2),
            "full_name": i["name"] + " " + i["fname"],
            "details": {
                "shares_in_PM": i["direct_shares"],
                "shares_PP_in_PM": i["direct_shares_details"]["shares_PP"],
                "shares_NP_in_PM": i["direct_shares_details"]["shares_NP"],
                "shares_directe": i["indirect_shares"],
                "shares_indir_PP_1": i["indirect_shares_details"]["first_type"]["shares_PP"],
                "shares_indir_NP_1": i["indirect_shares_details"]["first_type"]["shares_NP"],
                "shares_indir_PP_2": i["indirect_shares_details"]["second_type"]["shares_PP"],
                "shares_indir_NP_2": i["indirect_shares_details"]["second_type"]["shares_NP"],
            }
        }
        for key, value in associate_data["details"].items():
            if value != None:
                associate_data["details"][key] = value * associate_data["PM_shares"] / 100
            else:
                associate_data["calculs"][key] = 0.0
        associate_from_api.append(associate_data)
    return associate_from_api
I've tried conditioning the access to the nested keys on the parent key not being None, but I ended up declaring 3 different dictionaries inside if/else branches and it turned into a mess. Is there an efficient way to achieve this?
You can try accessing the values using dict.get('key') instead of accessing them directly, as in dict['key'].
Using the first approach, you will get None instead of KeyError if the key is not there.
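Applied to the nested keys from the question, the calls can be chained with empty-dict fallbacks so that a missing key or an explicit None both collapse to {} before going one level deeper. A rough sketch (field names taken from the question, not tested against the real API):
for i in response["associates"]:
    indirect = i.get("indirect_shares_details") or {}
    first_type = indirect.get("first_type") or {}
    second_type = indirect.get("second_type") or {}

    shares_indir_PP_1 = first_type.get("shares_PP") or 0
    shares_indir_NP_1 = first_type.get("shares_NP") or 0
    shares_indir_PP_2 = second_type.get("shares_PP") or 0
    shares_indir_NP_2 = second_type.get("shares_NP") or 0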
You can try pydantic
Install pydantic
pip install pydantic
# OR
conda install pydantic -c conda-forge
Define some models based on your response structure
from pydantic import BaseModel
from typing import List, Optional


# There are some common fields in your json response,
# so you can put them together.
class ShareDetail(BaseModel):
    shares_PP: int
    shares_NP: int


class IndirectSharesDetails(BaseModel):
    first_type: ShareDetail
    second_type: ShareDetail


class Associate(BaseModel):
    name: str
    fname: str
    direct_shares: int
    direct_shares_details: ShareDetail
    indirect_shares: int = 0  # Sets a default value for this field.
    indirect_shares_details: Optional[IndirectSharesDetails] = None


class ResponseModel(BaseModel):
    associates: List[Associate]
Use the ResponseModel.parse_* helper functions to parse the response. Here I use the parse_file function; you can also use parse_raw or parse_obj.
See: https://pydantic-docs.helpmanual.io/usage/models/#helper-functions
def main():
    res = ResponseModel.parse_file("./NullResponse.json",
                                   content_type="application/json")
    print(res.dict())


if __name__ == "__main__":
    main()
Then the response can be successfully parsed. And it automatically validates the input.
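From there you can rebuild the custom dict from the question, treating the missing indirect details as zero. A rough sketch (untested, field names from the question):
for a in res.associates:
    indirect = a.indirect_shares_details  # None when absent in the response
    shares_indir_PP_1 = indirect.first_type.shares_PP if indirect else 0
    shares_indir_NP_1 = indirect.first_type.shares_NP if indirect else 0
    shares_indir_PP_2 = indirect.second_type.shares_PP if indirect else 0
    shares_indir_NP_2 = indirect.second_type.shares_NP if indirect else 0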
I have a use case where I am reading some data from an API call but need to transform the data before inserting it into a database. The data comes in an integer format, and I need to save it as a string. The database does not offer a datatype conversion, so the conversion needs to happen in Python before inserting.
Within a config file I have like:
config = {"convert_fields": ["payment", "cash_flow"], "type": "str"}
Then within Python I am using the eval() function to check what type to convert the fields to.
So the code ends up looking like data['field'] = eval(config['type'])(data['field'])
Does anyone have a better suggestion for how I can dynamically convert these values, ideally without storing the Python class name in a config file?
To add: sure, I could just call str(), but at some point other fields may need to be converted to types other than string. So I want the conversion to be dynamic, driven by whatever is defined in the config file for the required conversion fields.
How about using getattr() and __builtins__, which I feel is a little better than exec()/eval() in this instance.
def cast_by_name(type_name, value):
    return getattr(__builtins__, type_name)(value)

print(cast_by_name("bool", 1))
Should spit back:
True
You will likely want to include some support for exceptions and perhaps defaults, but this should get you started.
@mistermiyagi points out a critical flaw: of course, eval is a builtin as well. We might want to limit this to safe types:
def cast_by_name(type_name, value):
    trusted_types = ["int", "float", "complex", "bool", "str"]  # others as needed
    if type_name in trusted_types:
        return getattr(__builtins__, type_name)(value)
    return value

print(cast_by_name("bool", 1))
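Wired up to the config from the question, usage could look roughly like this (an illustrative sketch; the data values are invented):
# Config taken from the question; data is hypothetical.
config = {"convert_fields": ["payment", "cash_flow"], "type": "str"}
data = {"payment": 100, "cash_flow": 25, "other": 1}

for field in config["convert_fields"]:
    data[field] = cast_by_name(config["type"], data[field])

print(data)  # {'payment': '100', 'cash_flow': '25', 'other': 1}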
Build up a conversion lookup dictionary in advance:
- Faster
- Easier to debug
config = {"convert_fields":
{"payment" : "str", "cash_flow" : "str", "customer_id" : "int", "name" : "name_it"}
}
def name_it(s : str):
return s.capitalize()
data_in = dict(
payment = 101.00,
customer_id = 3,
cash_flow = 1,
name = "bill",
city = "london"
)
convert_functions = {
#support builtins and custom functions
fieldname : globals().get(funcname) or getattr(__builtins__, funcname)
for fieldname, funcname in config["convert_fields"].items()
if not funcname in {"eval"}
}
print(f"{convert_functions=}")
data_db = {
fieldname :
#if no conversion is specified, use `str`
convert_functions.get(fieldname, str)(value)
for fieldname, value in data_in.items()
}
print(f"{data_db=}")
Output:
convert_functions={'payment': <class 'str'>, 'cash_flow': <class 'str'>, 'customer_id': <class 'int'>, 'name': <function name_it at 0x10f0fbe20>}
data_db={'payment': '101.0', 'customer_id': 3, 'cash_flow': '1', 'name': 'Bill', 'city': 'london'}
If the config could be stored in code rather than in a JSON-type file, I'd look into pydantic, though that is not exactly your problem space here:
from pydantic import BaseModel

class Data_DB(BaseModel):
    payment: str
    customer_id: int
    cash_flow: str
    # you'd need a custom validator to handle capitalization
    name: str
    city: str

pydata = Data_DB(**data_in)
print(f"{pydata=}")
print(pydata.dict())
Output:
pydata=Data_DB(payment='101.0', customer_id=3, cash_flow='1', name='bill', city='london')
{'payment': '101.0', 'customer_id': 3, 'cash_flow': '1', 'name': 'bill', 'city': 'london'}
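For the capitalization mentioned in the comment above, a custom validator could look roughly like this (pydantic v1 style, untested sketch):
from pydantic import BaseModel, validator

class Data_DB(BaseModel):
    payment: str
    customer_id: int
    cash_flow: str
    name: str
    city: str

    # Illustrative validator: capitalize the name on the way in.
    @validator("name")
    def capitalize_name(cls, v: str) -> str:
        return v.capitalize()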
I'm trying to use something like a ternary operator in Python to check whether a dictionary value exists: if it does, use it, otherwise leave it blank. For example, in the code below I want to get the value of creator and assignee, and if the value doesn't exist I want it to be ''. Is there a way to use a ternary operator in Python for this?
Here's my code:
in_progress_response = requests.request("GET", url, headers=headers, auth=auth).json()
issue_list = []
for issue in in_progress_response['issues']:
    # return HttpResponse( json.dumps( issue['fields']['creator']['displayName'] ) )
    issue_list.append(
        {
            "id": issue['id'],
            "key": issue['key'],
            # DOESN'T WORK
            "creator": issue['fields']['creator']['displayName'] ? '',
            "is_creator_active": issue['fields']['creator']['active'] ? '',
            "assignee": issue['fields']['assignee']['displayName'] ? '',
            "is_assignee_active": issue['fields']['assignee']['active'] ? '',
            "updated": issue['fields']['updated'],
        }
    )
return issue_list
Ternary operators in Python act as follows:
condition = True
foo = 3.14 if condition else 0
But for your particular use case, you should consider using dict.get(). The first argument specifies what you are trying to access, and the second argument specifies a default return value if the key does not exist in the dictionary.
some_dict = {'a' : 1}
foo = some_dict.get('a', '') # foo is 1
bar = some_dict.get('b', '') # bar is ''
You can use .get(…) to try to fetch an item from a dictionary and return an optional default value in case the dictionary does not contain the given key. You can thus implement this as:
"creator": issue.get('fields', {}).get('creator', {}).get('displayName', ''),
The same applies to the other items.
If you want to use something like a ternary, you can write:
value = issue['fields']['creator']['displayName'] if issue['fields']['creator'] else ""
I have a dictionary for creating insert statements for a test I'm doing. The insert value for the description field needs to have the id of the current row, WHICH I DO NOT HAVE until I run the program. Also, that ID increments by 1 each time I insert, and the description for each insert has to have its corresponding row_num.
I want to load a dictionary of all the fields in the table in advance, so I can use the information in it to create the insert and alter statements for my test. I don't want to hardcode the test_value of a field in the code; I want what's supposed to be in it to be defined in the dictionary, and calculated at runtime. The dictionary is meant to be a template for what I want the value of the field to be.
I am getting the max id from the database, and adding 1 to it. That's the row number. I want the value that's being inserted for the description to be, for example, Row Num: {row_num} - Num Inserts {num_inserts} - Wait Time {wait_time}. I have the num_inserts and the wait_time from a config file. They are defined in advance.
I am getting NameError: name 'row_num' is not defined no matter how I've tried to define row_num in this dictionary. When I import the dictionary, the row_num isn't available yet, hence the error.
Here's a small snippet of my database fields dictionary (users is the table in this example):
all_fields_dict = {
    'users': {
        'first_name': {
            'db_field': 'FirstName',
            'datatype': 'varchar(50)',
            'test_value': {utils.calc_field_value(['zfill', 'FirstName'])},  # another attempt that didn't work
            'num_bool': False
        },
        'username': {
            'db_field': 'username',
            'datatype': 'varchar(50)',
            'test_value': f"user{utils.get_random_str(5)}",  # this works, but it's a diff kind of calculation
            'num_bool': False,
        },
        'description': {
            'db_field': 'description',
            'datatype': 'text',
            'test_value': f"{utils.get_desc_info(row_num)}",  # one of my attempts - fails
            'num_bool': False,
        },
    }
}
Among other things, I have tried:
{row_num}:
test_value: f"{row_num"}
calling a function that returns the row num:
def get_row_num():
    return row_num

test_value: f"{utils.get_row_num()}"
calling a function that CALLS the get_row_num function:
def get_desc_info():
    row_num = get_row_num()
    return f"Row Num: {row_num} - Wait Time: {wait_time} - Total Inserts: {num_inserts}"
test_value: f"{utils.get_desc_info()}"
I've even tried creating a function with a switcher that returns the get_row_num function, if 'rnum' is passed in as the test_value
def calc_field_value(type):
    switcher = {
        'rnum': get_row_num(),
        # etc.
    }
    return switcher[type]

test_value: f"{utils.calc_field_value('rnum')}"
I've tried declaring it as global in just about every place I can think of.
I haven't tried eval, because of all the security warnings I've read about it.
Same thing, every single time.
Initialize test_value to some placeholder value, or simply don't set a value at all.
Then, later in the code when you do know the value, update the dict.
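A rough sketch of that suggestion, using the names from the question (get_max_id is a hypothetical stand-in for however you query the current max id):
# Placeholder at import time; the real value is not known yet.
all_fields_dict['users']['description']['test_value'] = None

# Later, at runtime, once the row number can be computed:
row_num = get_max_id('users') + 1  # hypothetical helper
all_fields_dict['users']['description']['test_value'] = (
    f"Row Num: {row_num} - Num Inserts {num_inserts} - Wait Time {wait_time}"
)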
Code I execute:
from py2neo import Graph, Node, Relationship
g = Graph(url + '/db/data/', username=username, password=password)
query = '''MATCH (n:Node) WHERE n.name='Test' RETURN n '''
tmp = g.run(query)
tmp = tmp.to_subgraph()
print(type(tmp.values))
print(tmp.values)
Result I get:
<class 'builtin_function_or_method'>
<built-in method values of Node object at 0x7f9da8b2d888>
What I expected is a string value, because the node looks like this:
n
{
  "name": "Test",
  "values": "Basic information",
  "type": "data"
}
The type property can be printed easily. Does someone have an idea? My assumption is a NULL value or some hidden function... or is values a keyword?
https://py2neo.org/v4/data.html?highlight=values#py2neo.data.Record.values
Yes, values is effectively a reserved name here: it is a built-in method on the returned Node object, so do not name a property values if you want to access that property as an attribute afterwards.
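If you still need to read that property, item access is not shadowed by the method, so a sketch like this should work (assuming the py2neo v4 API used above):
# Read properties by key instead of by attribute:
print(tmp['values'])  # -> "Basic information"
print(tmp['type'])    # -> "data"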