I'm having some trouble verifying the HMAC parameter coming from Shopify. The code I'm using per the Shopify documentation is returning an incorrect result.
Here's my annotated code:
import urllib.parse
import hmac
import hashlib

qs = "hmac=96d0a58213b6aa5ca5ef6295023a90694cf21655cf301975978a9aa30e2d3e48&locale=en&protocol=https%3A%2F%2F&shop=myshopname.myshopify.com&timestamp=1520883022"

# Parse the querystring
params = urllib.parse.parse_qs(qs)

# Extract the hmac value
value = params['hmac'][0]

# Remove parameters from the querystring per documentation
del params['hmac']
del params['signature']

# Recombine the parameters
new_qs = urllib.parse.urlencode(params)

# Calculate the digest
h = hmac.new(SECRET.encode("utf8"), msg=new_qs.encode("utf8"), digestmod=hashlib.sha256)

# Returns False!
hmac.compare_digest(h.hexdigest(), value)
That last step should, ostensibly, return True. Every step here follows what the Shopify docs outline, as noted in the comments.
At some point, recently, Shopify started including the protocol parameter in the querystring payload. This itself wouldn't be a problem, except for the fact that Shopify doesn't document that : and / are not to be URL-encoded when checking the signature. This is unexpected, given that they themselves do URL-encode these characters in the query string that is provided.
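To see the mismatch concretely, this is what the default urlencode does to the protocol value (a quick illustration; the rest of the code above is unchanged):

import urllib.parse

# The default urlencode percent-encodes ':' and '/', so the string we sign...
urllib.parse.urlencode([("protocol", "https://")])   # 'protocol=https%3A%2F%2F'
# ...differs from the raw 'protocol=https://' that Shopify used when computing its HMAC.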
To fix the issue, provide the safe parameter to urllib.parse.urlencode with the value :/ (fitting, right?). The full working code looks like this:
params = urllib.parse.parse_qsl(qs)
cleaned_params = []
hmac_value = dict(params)['hmac']

# Sort parameters and drop hmac/signature
for (k, v) in sorted(params):
    if k in ['hmac', 'signature']:
        continue
    cleaned_params.append((k, v))

new_qs = urllib.parse.urlencode(cleaned_params, safe=":/")

secret = SECRET.encode("utf8")
h = hmac.new(secret, msg=new_qs.encode("utf8"), digestmod=hashlib.sha256)

# Compare digests
hmac.compare_digest(h.hexdigest(), hmac_value)
Hope this is helpful for others running into this issue!
import hmac
import hashlib
...
# Inside your view in Django's views.py
params = request.GET.dict()
myhmac = params.pop('hmac')
# cast 'state' back to an int before rebuilding the query string
params['state'] = int(params['state'])

line = '&'.join([
    '%s=%s' % (key, value)
    for key, value in sorted(params.items())
])
print(line)

h = hmac.new(
    key=SHARED_SECRET.encode('utf-8'),
    msg=line.encode('utf-8'),
    digestmod=hashlib.sha256
)

# Cinderella? (do the digests match?)
print(hmac.compare_digest(h.hexdigest(), myhmac))
I want to build a function that reads a URL from a text file, saves it to a variable, and then inserts some values into the URL between existing ones.
Example URL: https://domains.livedns.co.il/API/DomainsAPI.asmx/NewDomain?UserName=apidemo#livedns.co.il&Password=demo
Let's say I want to inject some values between UserName and Password, save the result back to a file, and use it later.
I started writing the function and playing with the urllib parser, but I still don't understand how to do it.
What I've tried so far:
import os
from urllib.parse import urlsplit

def dlastpurchase():
    if os.path.isfile("livednsurl.txt"):
        apikeyfile = open("livednsurl.txt", "r")
        apikey = apikeyfile.read()
        url_parse = urlsplit(apikey)
        print(url_parse.geturl())

dlastpurchase()
Thanks in advance for any tips and help.
Here's a slightly more complex example that I believe you'll find interesting, and that you may enjoy improving (it handles some scenarios but may be lacking in others). It's written as functions to enable reuse elsewhere. Here we go.
Assuming we have a text file named 'urls.txt' that contains this URL:
https://domains.livedns.co.il/API/DomainsAPI.asmx/NewDomain?UserName=apidemo#livedns.co.il&Password=demo
from urllib.parse import urlparse, parse_qs, urlunparse

filename = 'urls.txt'
A function to parse the URL and return its query parameters, as well as the parsed URL object, which will be used to reconstruct the URL later on:
def parse_url(url):
    """Parse a given url and return its query parameters.

    Args:
        url (string): url string to parse

    Returns:
        parsed (tuple): the tuple object returned by urlparse
        query_parameters (dictionary): dictionary containing the query parameters as keys
    """
    try:
        # parse the url and get the query parameters from it
        parsed = urlparse(url)
        # parse the queries and return the dictionary containing them
        query_result = parse_qs(parsed.query)
        return (query_result, parsed)
    except Exception as error:
        print('something failed !!!')
        print(error)
        return False
A function to add a new query parameter or replace an existing one:
def insert_or_replace_word(query_dic, word, value):
    """Insert or replace a value for a query parameter of a url.

    Args:
        query_dic (dictionary): the dictionary containing the query parameters
        word (string): the query parameter to replace or insert values for
        value (string): the value to insert or use as replacement

    Returns:
        result (dictionary): the query dictionary after the insertion or replacement
    """
    try:
        query_dic[word] = value
        return query_dic
    except Exception as error:
        print('Something went wrong {0}'.format(error))
A function to format the query parameters and get them ready to reconstruct the new URL:
def format_query_strings(query_dic):
    """Format the final query dictionary into a string ready to be used to construct a new url.

    Args:
        query_dic (dictionary): final query dictionary after insertion or update
    """
    final_string = ''
    for key, value in query_dic.items():
        # unfortunately, query params from parse_qs come back as lists, so unwrap them before building the final string
        if isinstance(value, list):
            query_string = '{0}={1}'.format(key, value[0])
        else:
            query_string = '{0}={1}'.format(key, value)
        final_string += '{0}&'.format(query_string)
    # remove the extra & appended by the last iteration of the loop above
    if final_string.endswith('&'):
        final_string = final_string[:-1]
    return final_string
We check that everything works by reading in the text file, performing the operations above, and then saving the new URL to a new file:
with open(filename) as url_file:
    lines = url_file.readlines()

for line in lines:
    query_params, parsed = parse_url(line.strip())  # strip the trailing newline
    new_query_dic = insert_or_replace_word(query_params, 'UserName', 'newUsername')
    final = format_query_strings(new_query_dic)
    # urlunparse expects an iterable of length 6 in order to reconstruct the url
    new_url_object = [parsed.scheme, parsed.netloc, parsed.path, parsed.params, final, parsed.fragment]
    # this reconstructs the new url
    new_url = urlunparse(new_url_object)
    # create a new file and append the link inside of it
    with open('new_urls.txt', 'a') as new_file:
        new_file.write(new_url)
        new_file.write('\n')
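For what it's worth, the formatting step could also lean on the standard library instead of building the string by hand; urlencode with doseq=True unwraps the lists that parse_qs produces. A minimal sketch (note that, unlike the manual join above, urlencode also percent-encodes the values):

from urllib.parse import urlencode

# drop-in alternative to format_query_strings(new_query_dic)
final = urlencode(new_query_dic, doseq=True)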
You don't have to use fancy tools to do that. Just split the URL on the "?" character, then split the second part on the "&" character. Add your new params to the list you get, and merge them back onto the base URL.
url = "https://domains.livedns.co.il/API/DomainsAPI.asmx/NewDomain?UserName=apidemo#livedns.co.il&Password=demo"

base, params = url.split("?")
params = params.split("&")
# index 1 puts the new values between UserName and Password
params.insert(1, "new_user=yololo&new_passwd=hololo")

# re-attach the query string to the base url
base += "?"
for param in params:
    base += param + "&"
base = base.strip("&")
print(base)
I did it like this since you asked for inserting at a specific location. But URL params don't depend on order, so you can just append to the end of the URL for simplicity (see the sketch below), or edit the parameters in the list I showed.
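For example, appending is just string concatenation plus urlencode for the new values (the parameter names here are made up for illustration):

from urllib.parse import urlencode

url = "https://domains.livedns.co.il/API/DomainsAPI.asmx/NewDomain?UserName=apidemo#livedns.co.il&Password=demo"
# hypothetical new parameters; urlencode takes care of percent-encoding
new_url = url + "&" + urlencode({"new_user": "yololo", "new_passwd": "hololo"})
print(new_url)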
I don't actually want to make any calls to a website but I have to take this GET request and turn it into a JSON object.
?goal=NOT_GOAL_SETTING&kpi=lfa&sec_kpi=not_being_used&numdays=31&budget=13000000&channel=not_being_used&channel_max=not_being_used&brand=Ram&nameplate=namplate1_nameplate2_&target=800000&nameplate_min_spend=0_0_0_0_0_0_0&nameplate_max_spend=0_0_0_0_0_0_0&max_lfas=70000_100000_4000_400000_90000_15000_2000&search_digital_min_spend=0_0&search_digital_max_spend=0_0&search_digital_min_lfas=0_0&search_digital_max_lfas=0_0
I want every value that comes after the =, and I want to split those values on _.
A smaller request looks like this:
?variable1=1_2_3_4&variable2=string
What I want is the following:
{"variable1":[1,2,3,4], "variable2":"string"}
I've built a simple function for this before which uses urllib:
import sys
import urllib.parse

def parseGetUrl(url):
    result = {}
    for data in url.split("&"):
        key, val = urllib.parse.unquote(data).split("=")
        if val.find('_') != -1:
            val = val.split('_')
        result[key] = val
    return result

if __name__ == "__main__":
    url = sys.argv[1][1:]  # Gets the argument, then removes the leading '?'
    parsedData = parseGetUrl(url)
    print(parsedData)
You need to wrap your URL inside quotes ("):
python3 app.py "?goal=102&value=1_0_0_0"
Do note, though, that depending on which Python version you use, urllib might throw an error:
# python 3
import urllib.parse
...
key, val = urllib.parse.unquote(data).split("=")
# python 2
import urllib
...
key, val = urllib.unquote(data).split("=")
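For what it's worth, urllib.parse.parse_qs can also do the splitting on & and = for you. A minimal sketch that mirrors the function above (values stay strings, and anything containing _ is split into a list):

import urllib.parse

def parse_get_url(url):
    result = {}
    # lstrip('?') drops a leading '?'; parse_qs splits on '&' and '=' and unquotes
    for key, values in urllib.parse.parse_qs(url.lstrip('?')).items():
        val = values[0]  # parse_qs wraps every value in a list
        result[key] = val.split('_') if '_' in val else val
    return result

print(parse_get_url("?variable1=1_2_3_4&variable2=string"))
# {'variable1': ['1', '2', '3', '4'], 'variable2': 'string'}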
I am trying to convert boto3 DynamoDB conditional expressions (using types from boto3.dynamodb.conditions) to their string representation. Of course this could be hand coded, but naturally I would prefer to find something developed by AWS itself.
Key("name").eq("new_name") & Attr("description").begins_with("new")
would become
"name = 'new_name' and begins_with(description, 'new')"
I have been looking through the boto3 and botocore code, but so far without success; I assume it must exist somewhere in the codebase...
In the boto3.dynamodb.conditions module there is a class called ConditionExpressionBuilder. You can convert a condition expression to a string by doing the following:
from boto3.dynamodb.conditions import Attr, ConditionExpressionBuilder, Key

condition = Key("name").eq("new_name") & Attr("description").begins_with("new")
builder = ConditionExpressionBuilder()
expression = builder.build_expression(condition, is_key_condition=True)
expression_string = expression.condition_expression
expression_attribute_names = expression.attribute_name_placeholders
expression_attribute_values = expression.attribute_value_placeholders
I'm not sure why this isn't documented anywhere. I just randomly found it looking through the source code at the bottom of this page https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/dynamodb/conditions.html.
Unfortunately, this doesn't work for the paginator format string notation, but it should work for the Table.query() format.
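For reference, the builder swaps names and values for placeholders, so for the condition above the pieces come out roughly like this (the exact placeholder numbering may vary between boto3 versions):

print(expression_string)             # e.g. '#n0 = :v0 AND begins_with(#n1, :v1)'
print(expression_attribute_names)    # e.g. {'#n0': 'name', '#n1': 'description'}
print(expression_attribute_values)   # e.g. {':v0': 'new_name', ':v1': 'new'}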
Building on @Brian's answer with ConditionExpressionBuilder, I had to add DynamoDB's {'S': 'value'} type notation before executing the query.
I changed it with expression_attribute_values[':v0'] = {'S': pk_value}, where :v0 is the first Key/Attr in the condition. I'm not sure, but the same should work for subsequent values (:v0, :v1, :v2, ...).
Here is the full code, using pagination to retrieve only part of the data:
from typing import List, Optional

import boto3
from boto3.dynamodb.conditions import Attr, ConditionExpressionBuilder, Key

client_dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def get_items(pk_value: str, pagination_config: dict = None) -> Optional[List]:
    if pagination_config is None:
        pagination_config = {
            # Return only the first page of results when no pagination config is provided
            'PageSize': 300,
            'StartingToken': None,
            'MaxItems': None,
        }
    condition = Key("pk").eq(pk_value)
    builder = ConditionExpressionBuilder()
    expression = builder.build_expression(condition, is_key_condition=True)
    expression_string = expression.condition_expression
    expression_attribute_names = expression.attribute_name_placeholders
    expression_attribute_values = expression.attribute_value_placeholders
    # Changed here to make it compatible with DynamoDB's type notation
    expression_attribute_values[':v0'] = {'S': pk_value}
    paginator = client_dynamodb.get_paginator('query')
    page_iterator = paginator.paginate(
        TableName="TABLE_NAME",
        IndexName="pk_value_INDEX",
        KeyConditionExpression=expression_string,
        ExpressionAttributeNames=expression_attribute_names,
        ExpressionAttributeValues=expression_attribute_values,
        PaginationConfig=pagination_config
    )
    for page in page_iterator:
        resp = page
        break
    if ("Items" not in resp) or (len(resp["Items"]) == 0):
        return None
    return resp["Items"]
EDIT:
I used this question to get a string representation for a DynamoDB query whose paginator is not (yet) compatible with dynamodb conditions, but then I found a better solution in the Boto3 GitHub issues: https://github.com/boto/boto3/issues/2300
Replace the paginator with the one from the resource's meta client:
dynamodb_resource = boto3.resource("dynamodb")
paginator = dynamodb_resource.meta.client.get_paginator('query')
And now I can simply use Attr and Key directly.
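A minimal sketch of that workaround, with placeholder table/index names; since the paginator comes from the resource's client, it accepts condition objects and plain Python values directly:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb_resource = boto3.resource("dynamodb", region_name="us-east-1")
paginator = dynamodb_resource.meta.client.get_paginator('query')

page_iterator = paginator.paginate(
    TableName="TABLE_NAME",              # placeholder
    IndexName="pk_value_INDEX",          # placeholder
    KeyConditionExpression=Key("pk").eq("some_pk_value"),
)
for page in page_iterator:
    for item in page.get("Items", []):
        print(item)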
I'm trying to store the private/public keys as UTF-8 strings in a database. The problem is that when I bring them back into code, they are not the correct type. As bytes they print the same, as per the following code:
import nacl.utils
from nacl.public import PrivateKey, SealedBox
from nacl.encoding import Base64Encoder
import base64
prvkbob = PrivateKey.generate()
pubkbob = prvkbob.public_key
prvk_db = prvkbob.encode(Base64Encoder).decode('utf8')
pubk_db = pubkbob.encode(Base64Encoder).decode('utf8')
prvk = base64.b64decode(prvk_db.encode('utf8'))
shdk = base64.b64decode(pubk_db.encode('utf8'))
print(prvkbob)
print(prvk)
print(pubkbob)
print(shdk)
# It works with the original key
sealed_box = SealedBox(prvkbob)
# Error on key returned from database
sealed_box = SealedBox(prvk)
How do I initialize them as PublicKey or PrivateKey objects?
I might be a little late to the party, but I ran into a similar problem where it says:
nacl.exceptions.TypeError: Box must be created from a PrivateKey and a
PublicKey
This is easily fixed by instantiating a PublicKey or PrivateKey instance using the following lines:
imported_private_key = nacl.public.PrivateKey(bytes_that_are_a_key)
imported_public_key = nacl.public.PublicKey(bytes_that_are_a_key)
I hope this helps you or anyone else with the same problem.
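Applied to the question's code, the bytes recovered from the database just need to be wrapped before constructing the box. A minimal sketch reusing the prvk and shdk variables from the question:

from nacl.public import PrivateKey, PublicKey, SealedBox

# prvk and shdk are the raw bytes obtained via base64.b64decode above
unseal_box = SealedBox(PrivateKey(prvk))   # decrypting side needs the private key
seal_box = SealedBox(PublicKey(shdk))      # encrypting side only needs the public key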
Since you explicitly decoded the keys as UTF-8 (.decode('utf8')), you must first encode them back with the very same encoding (as you did). As @DisplayName said, you then just need to instantiate PrivateKey and PublicKey.
Since you plan to store the Base64 representations of those keys, you could do the following. Here are keys generated the way you wanted:
john_private = "OSEuOrw7BDANm2b0lwddBXUxN6OFGBLBDoFbqnkdMNU="
john_public = "bQNbTjHETLTc/RNJYa1mTDg0fQF70GsuIZFsrb43DQc="
paul_private = "ry860ekZ8T1UDTzvoPSlAVMEOjcVz3ODLYbjXfySns0="
paul_public = "G8608AL7TE2n3P10OLS8V/8wCaf/mzflCS/5qw/TzG4="
These two functions work with keys and messages stored as Base64 strings:
import base64
from nacl.public import Box, PrivateKey, PublicKey

def base64_to_bytes(key: str) -> bytes:
    return base64.b64decode(key.encode('utf-8'))

def encrypt_for_user(sender_private: str, receiver_public: str, message: str) -> str:
    sender_private = PrivateKey(base64_to_bytes(sender_private))
    receiver_public = PublicKey(base64_to_bytes(receiver_public))
    sender_box = Box(sender_private, receiver_public)
    return base64.b64encode(sender_box.encrypt(bytes(message, "utf-8"))).decode('utf-8')

def decrypt_for_user(receiver_private: str, sender_public: str, message: str) -> str:
    receiver_private = PrivateKey(base64_to_bytes(receiver_private))
    sender_public = PublicKey(base64_to_bytes(sender_public))
    receiver_box = Box(receiver_private, sender_public)
    return receiver_box.decrypt(base64.b64decode(message.encode('utf-8'))).decode('utf-8')
John sends a message to Paul:
message = encrypt_for_user(john_private,paul_public,"Hi Paul, 'up?")
print(message)
9BxTezSQVlxPU5evODskj4EIb5hXqIPnkQVuhpY2qoYvcnIaBgUVhkbN8baSytsmF4RSXdI=
Paul decrypts it:
decrypt_for_user(paul_private, john_public, message)
"Hi Paul, 'up?"
I am getting JIRA data using the following Python code.
How do I store the response for more than one key (my example shows only one key, but in general I get a lot of data) and print only the values corresponding to total, key, customfield_12830, and summary?
import requests
import json
import logging
import datetime
import base64
import urllib
serverURL = 'https://jira-stability-tools.company.com/jira'
user = 'username'
password = 'password'
query = 'project = PROJECTNAME AND "Build Info" ~ BUILDNAME AND assignee=ASSIGNEENAME'
jql = '/rest/api/2/search?jql=%s' % urllib.quote(query)
response = requests.get(serverURL + jql,verify=False,auth=(user, password))
print response.json()
response.json() output:
http://pastebin.com/h8R4QMgB
From the link you pasted to Pastebin and the JSON I saw, the response contains an issues list, where each issue holds key, fields (which contains the custom fields), self, id, and expand.
You can simply iterate through this response and extract the values for the keys you want. You can go about it like this:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = {
        'key': issue['key'],
        'customfield': issue['fields']['customfield_12830'],
        'total': issue['fields']['progress']['total']
    }
    x.append(temp)
print(x)
x is a list of dictionaries containing the data for the fields you mentioned. Let me know if I've been unclear anywhere or if this isn't what you're looking for.
PS: It is always advisable to use dict.get('keyname', None) to fetch values, since you can supply a default if the key is not found. I didn't do that here because I just wanted to show the approach.
Update: In the comments you (OP) mentioned that this raises an AttributeError. Try this code:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = dict()
    key = issue.get('key', None)
    if key:
        temp['key'] = key
    fields = issue.get('fields', None)
    if fields:
        customfield = fields.get('customfield_12830', None)
        temp['customfield'] = customfield
        progress = fields.get('progress', None)
        if progress:
            total = progress.get('total', None)
            temp['total'] = total
    x.append(temp)
print(x)
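The question also asked for summary; assuming the standard JIRA issue layout where it sits under fields, it can be collected the same way:

# inside the `if fields:` block above
summary = fields.get('summary', None)
temp['summary'] = summary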