Errors using Python's ECDSA lib to sign AWS messages - python

I'm signing my messages using my code below:
def sign_msg_hash(self, msg_hash: HexBytes):
    signature = self._kms_client.sign(
        KeyId=self._key_id,
        Message=msg_hash,
        MessageType="DIGEST",
        SigningAlgorithm="ECDSA_SHA_256",
    )
    act_signature = signature["Signature"]
    return act_signature
However, when trying to do the following:
sign = kms.sign_msg_hash(transaction)
vks = ecdsa.VerifyingKey.from_public_key_recovery_with_digest(
    sign, transaction, curve=ecdsa.SECP256k1, hashfunc=sha256
)
I get the following error:
ecdsa.util.MalformedSignature: Invalid length of signature, expected 64 bytes long, provided string is 72 bytes long
Now, when trying to use sign[:64] instead, it sometimes works but at other times gives me the following error:
raise SquareRootError("%d has no square root modulo %d" % (a, p))
ecdsa.numbertheory.SquareRootError: 6631794589973073742270549970789085262483305971731159368608959895351281521222 has no square root modulo 115792089237316195423570985008687907853269984665640564039457584007908834671663
I'm not sure if this has anything to do with the encoding of the signature from the KMS or not.

So after some digging, it turned out it was a matter of encodings, as the KMS sign function returns the signature in DER format.
Hence, what I did was the following:
I took the signature and decoded it to get the r and s values (the second argument below is the secp256k1 curve order):
decoding = ecdsa.util.sigdecode_der(sign, int("fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16))
I then concatenated the result (note that r and s should be in big-endian order):
new_sign = decoding[0].to_bytes(32, byteorder='big') + decoding[1].to_bytes(32, byteorder='big')
I then used the new sign:
vks = ecdsa.VerifyingKey.from_public_key_recovery_with_digest(
    new_sign, transaction, curve=ecdsa.SECP256k1, hashfunc=sha256
)
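Putting the two steps together, a minimal helper (just a consolidation of the snippets above; the long constant is the secp256k1 curve order) could look like:
from ecdsa.util import sigdecode_der

# secp256k1 curve order (the same constant passed to sigdecode_der above)
SECP256K1_ORDER = int(
    "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16
)

def der_to_raw64(der_sig: bytes) -> bytes:
    # Decode the DER blob into its (r, s) integers, then re-encode them as
    # the 64-byte big-endian r||s form that python-ecdsa expects.
    r, s = sigdecode_der(der_sig, SECP256K1_ORDER)
    return r.to_bytes(32, byteorder='big') + s.to_bytes(32, byteorder='big')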
Another thing I encountered was with kms_client.get_public_key()
First call the method:
kms_pub_key_bytes = kms_client.get_public_key(KeyId=key_id)["PublicKey"]
Then take the last 128 hex chars, which are the X and Y coordinates of the public key point:
kms_pre_sha3 = kms_pub_key_bytes.hex()[-128:]
Then do what you want with the public key.
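To make the slice above concrete, a short sketch (assuming, as is the case for this key type, that KMS returns a DER-encoded SubjectPublicKeyInfo ending in the uncompressed point 0x04 || X || Y):
kms_pub_key_bytes = kms_client.get_public_key(KeyId=key_id)["PublicKey"]
pub_xy = kms_pub_key_bytes[-64:]  # X || Y coordinates, 32 bytes each
assert pub_xy.hex() == kms_pub_key_bytes.hex()[-128:]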

Related

How can I fix scrypt's invalid parameter combination?

I have the following function call to scrypt() from hashlib using Python 3.7.9:
def aes_encrypt(msg, passwordStr):
    kdfSalt = urandom(16)  # 16 bytes == 128 bits
    hashedKey = scrypt(passwordStr.encode(), salt=kdfSalt, n=16384, r=16, p=1, dklen=64)  # 64 octets == 512 bits
When this code runs, I get the error:
File "aes_scrypt_hmac.py", line 69, in <module>
main()
File "aes_scrypt_hmac.py", line 38, in main
print(aes_encrypt(sampleData,testPassword))
File "aes_scrypt_hmac.py", line 18, in aes_encrypt
hashedKey = scrypt(passwordStr.encode(),salt=kdfSalt,n=16384,r=16,p=1, dklen=64)
ValueError: Invalid parameter combination for n, r, p, maxmem.
I have read the documentation for scrypt, and it does not spell out the expectations for the parameters, though it does link to the RFC, and these params seem valid per the RFC. maxmem's specific requirement is not mentioned in the documentation (e.g., what does 0 mean, and what is the unit of measurement?) or in the RFC.
I have to admit I'm not sure which API you plan on using. According to scrypt's Python package, the APIs are encrypt, decrypt and hash, and you are using something I can't find.
Your method is named aes_encrypt, but the variable is called hashedKey, so I'm not sure whether you are hashing or encrypting, and those are obviously different.
However, these references might help.
scrypt's python package implementation:
def hash(password, salt, N=1 << 14, r=8, p=1, buflen=64):
    """
    Compute scrypt(password, salt, N, r, p, buflen).

    The parameters r, p, and buflen must satisfy r * p < 2^30 and
    buflen <= (2^32 - 1) * 32. The parameter N must be a power of 2
    greater than 1. N, r and p must all be positive.

    Notes for Python 2:
      - `password` and `salt` must be str instances
      - The result will be a str instance

    Notes for Python 3:
      - `password` and `salt` can be both str and bytes. If they are str
        instances, they will be encoded with utf-8.
      - The result will be a bytes instance

    Exceptions raised:
      - TypeError on invalid input
      - scrypt.error if scrypt failed
    """
When I run the following script using Python3 and the PyPi scrypt package, everything works for me:
import scrypt
import os

def aes_encrypt(pwd):
    kdfSalt = os.urandom(16)  # 16 bytes == 128 bits
    return scrypt.hash(pwd.encode(), salt=kdfSalt, N=16384, r=16, p=1, buflen=64)
Go's scrypt package manual:
Key derives a key from the password, salt, and cost parameters, returning a byte slice of length keyLen that can be used as cryptographic key.
N is a CPU/memory cost parameter, which must be a power of two greater than 1. r and p must satisfy r * p < 2³⁰. If the parameters do not satisfy the limits, the function returns a nil byte slice and an error.
For example, you can get a derived key for e.g. AES-256 (which needs a 32-byte key) by doing:
dk, err := scrypt.Key([]byte("some password"), salt, 32768, 8, 1, 32)
The recommended parameters for interactive logins as of 2017 are N=32768, r=8 and p=1. The parameters N, r, and p should be increased as memory latency and CPU parallelism increases; consider setting N to the highest power of 2 you can derive within 100 milliseconds. Remember to get a good random salt.
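For completeness: if you'd rather stay with hashlib.scrypt, the ValueError appears to come from the memory cap. My understanding (an assumption based on OpenSSL's behaviour, not something the Python docs state) is that scrypt needs roughly 128 * n * r bytes of scratch space, so n=16384 with r=16 wants about 32 MiB, which is over the default maxmem limit. Passing a larger maxmem should make the original parameters work:
import hashlib
import os

kdfSalt = os.urandom(16)  # 16 bytes == 128 bits
# Assumption: n=16384, r=16 needs ~128 * n * r bytes = 32 MiB of scratch
# space; raising maxmem above that avoids the ValueError.
hashedKey = hashlib.scrypt(b"some password", salt=kdfSalt,
                           n=16384, r=16, p=1,
                           maxmem=64 * 1024 * 1024, dklen=64)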

How can I find the big endian key in a message?

I am trying to read a binary message from an ESP32 via a broker; I wrote a Python script where I subscribe to the topic. The message that I actually receive is:
b'\x00\x00\x00?'
This is a binary little-endian float message, but I don't know the key to decode it. Is there a way to find the decode key based on this data?
This is my python code:
import paho.mqtt.client as mqtt

def on_connect1(client1, userdata1, flags1, rc1):
    client1.subscribe("ESP32DevKit123/mytopic")

def on_message1(client1, userdata1, msg1):
    print(msg1.topic + " " + "TESTENZA: " + str(msg1.payload))

client1 = mqtt.Client()
client1.username_pw_set(username="myuser", password="mypassword")
client1.on_connect = on_connect1
client1.on_message = on_message1
client1.connect("linkclient", portnumber, 60)

def twosComplement_hex(hexval):
    bits = 16  # number of bits in a hexadecimal number format
    on_message1 = int(hexval, bits)
    if on_message1 & (1 << (bits-1)):
    on_message1 -= 1 << bits
    return on_message1

client1.loop_forever()
It also gives me an error on the line on_message1 -= 1 << bits; the error says: Expected indented block (Pylance). Any solutions?
The data you provided is b'\x00\x00\x00?' - I'm going to assume that this is 0000003f (please output hex with msg1.payload.hex()).
I'll also assume that by "float binary little endian" you mean a big-endian floating point (IEEE 754) - note that this does not match up with the algorithm you are using (two's complement). Plugging this input into an online tool indicates that the expected result ("Float - Big Endian (ABCD)") is 8.82818e-44 (it's worth checking with such a tool; sometimes the encoding may not be what you think it is!).
Let's unpack this using Python (see the struct docs for more information):
>>> from struct import unpack
>>> unpack('>f', b'\x00\x00\x00\x3f')[0]
8.828180325246348e-44
Notes:
The [0] is there because unpack returns a tuple (you can unpack more than one item from the input)
>f - the > means big-endian and the f float (standard size = 4 bytes)
The reason your original code gives the error "Expected indented block" is the lack of indentation in the line on_message1 -= 1 << bits (as it follows an if, it needs to be indented). The algorithm does not appear relevant to your task (but there may be details I'm missing).
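As a side note (my addition, same struct call as above): it's worth trying both byte orders, because the same four bytes read as a little-endian float give 0.5, which may well be the value the ESP32 intended to send:
from struct import unpack

payload = b'\x00\x00\x00?'        # 0x0000003f on the wire
print(unpack('>f', payload)[0])   # big-endian:    8.828180325246348e-44
print(unpack('<f', payload)[0])   # little-endian: 0.5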

How to use base32 in combination with hotp (one time passwords) in python?

For a university exercise I want to develop a simple HOTP server-client system in Python. In this case the client sends a password and a one-time password to the server. The server knows the secret, calculates the current HOTP and compares the values it receives. So far, so good. With plaintext this works perfectly fine and the calculated values are the same I get when I use the iOS app "OTP Auth". But there is also the possibility to calculate the OTP in combination with Base32. So I added a few lines to encode the plaintext to Base32, but now the output is not correct.
Let's assume we're using the secret "1234", so the plaintext output would be "110366". That's working. But if I encode the secret to Base32, the output should be "807244", yet my program calculates "896513". Does anybody know why this is happening?
I've already tried different secrets and checked it on different apps. Always the same result.
import hmac
import hashlib
import array
import base64

counter = 0
digits = 6  # number of digits

def hotp(secret, c):
    global digits
    counter = extendCounter(c)
    hmac_sha1 = hmac.new(secret, counter, hashlib.sha1).hexdigest()
    return truncate(hmac_sha1)[-digits:]

def truncate(hmac_sha1):
    offset = int(hmac_sha1[-1], 16)
    binary = int(hmac_sha1[(offset * 2):((offset * 2) + 8)], 16) & 0x7fffffff
    return str(binary)

def extendCounter(long_num):
    byte_array = array.array('B')
    for i in reversed(range(0, 8)):
        byte_array.insert(0, long_num & 0xff)
        long_num >>= 8
    return byte_array

def main():
    secret = "1234"
    bSecret = secret.encode("UTF-8")
    bSecret = base64.b32encode(bSecret)
    otp = hotp(bSecret, counter)
    one_time_password = otp
I expect 807244 as the output but the output is 896513
First, it's important to point out that the result of secret.encode('UTF-8') has exactly the same type as the result of base64.b32encode(bSecret) (and for that matter base64.b64encode(bSecret)) -- they all return bytes objects. Also worth noting is that the implementation of hmac in Python has no mention of base64/base32 encoding. So the short answer is that your expected result of 807244 is only valid if the shared secret is a base64/UTF-8 encoded blob.
This quick snippet shows that really you can give any bytes you like to hotp and it will come up with some result (because hotp is called multiple times in the example, counter is changed)
# ... everything from your example above ...
>>> secret = "1234"
>>> secret_bytes = secret.encode("UTF-8")
>>> secret_bytes
b'1234'
>>> b32_secret = base64.b32encode(secret_bytes)
>>> b32_secret
b'GEZDGNA='
>>> b64_secret = base64.b64encode(secret_bytes)
>>> b64_secret
b'MTIzNA=='
>>> hotp(secret_bytes, counter)  # just a UTF-8 blob works
'110366'
>>> hotp(b32_secret, counter)    # base32/UTF-8 also works
'896513'
>>> hotp(b64_secret, counter)    # base64/UTF-8 works as well
'806744'
If you have more detail of why you expected 807244 for a base32/UTF8 blob, I'll be happy to amend this answer.
Found the mistake:
Instead of encoding the secret to Base32, the secret must be treated as a Base32 value and decoded, i.e. base64.b32decode(bytes(saved_secret, 'utf-8')) instead of base64.b32encode(...).
So the correct main looks like this:
def main():
    secret = "V6X27L5P"  # Base32 value
    secret = base64.b32decode(bytes(secret, 'utf-8'))
    one_time_password = hotp(secret, counter)
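A handy way to sanity-check any HOTP implementation (my addition, not part of the original answer) is the RFC 4226 Appendix D test vectors: the ASCII secret "12345678901234567890" with counter 0 must yield 755224. A minimal self-contained sketch:
import hashlib
import hmac
import struct

def hotp_check(key: bytes, counter: int, digits: int = 6) -> str:
    # The counter is packed as an 8-byte big-endian integer (RFC 4226, 5.1).
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D: counter 0 -> "755224"
assert hotp_check(b"12345678901234567890", 0) == "755224"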

Python struct as networking data packets (unknown byte sequence)

I am working on a server engine in Python for my game made in GameMaker Studio 2. I'm currently having some issues with building and sending a packet.
I've successfully managed to establish a connection and send the first packet, but I can't find a solution for the following: if the first value in the packed struct equals a given message ID, unpack the rest of the data according to the matching sequence.
Example:
types = 'hhh'  # (message_id, x, y) example
message_id = 0
x = 200
y = 200
buffer = pack(types, message_id, x, y)
On the server side:
data = conn.recv(BUFFER_SIZE)
mid = unpack('h', data)[0]
if not data: break
if mid == 0:
    sequence = 'hhh'
    x = unpack(sequence, data)[1]
    y = unpack(sequence, data)[2]
It looks like your subsequent decoding is going to vary based on the message ID?
If so, you will likely want to use unpack_from which allows you to pull only the first member from the data (as written now, your initial unpack call will generate an exception because the buffer you're handing it is not the right size). You can then have code that varies the unpacking format string based on the message ID. That code could look something like this:
from struct import pack, unpack, unpack_from

while True:
    data = conn.recv(BUFFER_SIZE)
    # End of file, bail out of loop
    if not data:
        break
    mid = unpack_from('!h', data)[0]
    if mid == 0:
        # Message ID 0
        types = '!hhh'
        _, x, y = unpack(types, data)
        # Process message type 0
        ...
    elif mid == 1:
        types = '!hIIq'
        _, v, w, z = unpack(types, data)
        # Process message type 1
        ...
    elif mid == 2:
        ...
Note that we're unpacking the message ID again in each case along with the ID-specific parameters. You could avoid that if you like by using the optional offset argument to unpack_from:
x, y = unpack_from('!hh', data, offset=2)
One other note of explanation: If you are sending messages between two different machines, you should consider the "endianness" (byte ordering). Not all machines are "little-endian" like x86. Accordingly it's conventional to send integers and other structured numerics in a certain defined byte order - traditionally that has been "network byte order" (which is big-endian) but either is okay as long as you're consistent. You can easily do that by prepending each format string with '!' or '<' as shown above (you'll need to do that for every format string on both sides).
Finally, the above code probably works fine for a simple "toy" application but as your program increases in scope and complexity, you should be aware that there is no guarantee that your single recv call actually receives all the bytes that were sent and no other bytes (such as bytes from a subsequently sent buffer). In other words, it's often necessary to add a buffering layer, or otherwise ensure that you have received and are operating on exactly the number of bytes you intended.
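For example, one common pattern (a sketch, not tied to any particular library) is to loop on recv until the expected number of bytes has arrived, optionally with a length prefix on every message:
import socket
import struct

def recv_exact(conn: socket.socket, n: int) -> bytes:
    # Keep calling recv until exactly n bytes have arrived (or the peer closes).
    chunks = []
    remaining = n
    while remaining:
        chunk = conn.recv(remaining)
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def recv_message(conn: socket.socket) -> bytes:
    # Convention assumed here: each message is prefixed with a 2-byte
    # big-endian length field, followed by exactly that many payload bytes.
    (length,) = struct.unpack('!H', recv_exact(conn, 2))
    return recv_exact(conn, length)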
Could you unpack the whole data into a list and then check its elements in a loop? What is the reason for unpacking it 3 times? I guess you could unpack it once and then work with that result: check its length first, if it's not empty, check the first element, and if it's equal to the special one, continue parsing. Did you try it like that?

TLS MAC message verification

I'm developing an SSL decipher in Python, but I'm having some problems with HMAC verification:
I've extracted all the key material (client IV, MAC key and encryption key; server IV, MAC key and encryption key).
When I receive the first Application_Data message (0x17), I am able to decrypt it, but I am unable to verify message integrity.
On RFC 2246 (https://www.ietf.org/rfc/rfc2246.txt), tells:
The MAC is generated as:
HMAC_hash(MAC_write_secret, seq_num + TLSCompressed.type +
          TLSCompressed.version + TLSCompressed.length +
          TLSCompressed.fragment);
where "+" denotes concatenation.
seq_num
    The sequence number for this record.
hash
    The hashing algorithm specified by
    SecurityParameters.mac_algorithm.
Taking this as an example:
Chosen cipher_suite is TLS_RSA_WITH_AES_256_CBC_SHA256
client_mac = "some random stuff"
message_type = 0x17
message_version = 0x0303
encrypted_message_length = 1184 (IV|Message|MAC|Offset)
decrypted_message_length = 1122 (removing IV, MAC and offset)
message = "some message of length 1122"
client_mac is extracted from the keying material
message_type is 0x17 because, as an Application_Data message, the correct value is 0x17
message_version is 0x0303 as it's TLS 1.2
message length is 1122: after removing the preceding IV and the trailing MAC and padding, the message has a final length of 1122
seq_number is 1 as it's the first message
HMAC_SHA256 calculation, in python, is as follows:
import hashlib
import hmac

hmac.new(<client_mac>, label + message, hashlib.sha256).digest()
My question is, how do I calculate label?
As the RFC mentions, "+" denotes concatenation, but concatenation of what?
HEX values converted to strings:
"1" + "17" + "0303" + "462"
or INT values converted to strings:
"1" + "23" + "771" + "1122"?
Another thing to mention: what does TLSCompressed.version mean here?
0x0303
771
"1.2"
"12"
"TLS 1.2"
In this maillist (http://www.ietf.org/mail-archive/web/tls/current/msg14357.html) I found a supposed clarification of MAC values,
MAC(MAC_write_key, seq_num +
    TLSCipherText.type +
    TLSCipherText.version +
    length of ENC(content + padding + padding_length) +
    IV +
    ENC(content + padding + padding_length));
where the length is encoded as two bytes in the usual way.
but it makes no sense to me, because it's useless to re-encode decrypted values just to compute the MAC. And about the last line, "where the length is encoded as two bytes in the usual way": does it mean that I should use
struct.pack("!H", length)
then remove the "\x" and use this value? Or should I encode this value in hex and then concatenate it?
I'm a bit lost, because RFC are not clear about how values should be used.
I've been trying several combinations (even brute forcing), but none of them worked, I hope you can light my way.
Well, after digging a bit, I've managed to solve the issue.
RFC 5246, in section 6.2.3.1 (https://www.rfc-editor.org/rfc/rfc5246#section-6.2.3.1)
The MAC is generated as:
MAC(MAC_write_key, seq_num +
    TLSCompressed.type +
    TLSCompressed.version +
    TLSCompressed.length +
    TLSCompressed.fragment);
where "+" denotes concatenation.
But it does not specify the data size or the representation format (hex, string, ...).
The way every field must be represented is as follows:
seq_num:
Description: An int counter, starting at 0, which is incremented for every frame received or sent. For a TCP session, two sequence numbers must be used, one for the server and one for the client, each incremented every time that side sends a frame.
Representation: This value must be represented as an unsigned long long, 8 bytes.
Representation example:
struct.pack("!Q", seq_num)
TLSCompressed.type
Description: This field is extracted from the TLS record layer (the encrypted payload). For example, for an Application Data frame we must use 0x17.
Representation: This value must be represented as a signed char, 1 byte.
Representation example:
struct.pack("!b", TLSCompressed.type)
TLSCompressed.version
Description: This field is also extracted from the TLS record layer (the encrypted payload). For example, if the frame is transferred using TLS 1.2, we must use its hex representation 0x0303.
Representation: This value must be represented as an unsigned short, 2 bytes.
Representation example:
struct.pack("!H", TLSCompressed.version)
TLSCompressed.length
Description: This field represents the actual length of the decrypted payload.
Representation: This value must be represented as an unsigned short, 2 bytes.
Representation example:
struct.pack("!H", TLSCompressed.length)
TLSCompressed.fragment
Description: This field is the actual decrypted payload.
Representation: This value must be represented as a byte string.
As a Python example, the HMAC hashing for our previous example will be as follows:
hmac_digest = hmac.new(mac_secret, b'', digestmod=hashlib.sha256)
hmac_digest.update(struct.pack('!QbHH', seq_num, TLSCompressed.type, TLSCompressed.version, len(decrypted)))
hmac_digest.update(decrypted)
hmac_digest.digest()
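Wrapped up as a self-contained helper (a sketch of the recipe above; the names mac_key, fragment and decrypted_record are mine), the check could look like this:
import hashlib
import hmac
import struct

def tls12_record_mac(mac_key: bytes, seq_num: int, content_type: int,
                     version: int, fragment: bytes) -> bytes:
    # Pseudo-header: 8-byte seq_num, 1-byte type, 2-byte version and
    # 2-byte length, followed by the decrypted fragment itself
    # (RFC 5246, section 6.2.3.1, with HMAC-SHA256 for this cipher suite).
    header = struct.pack('!QbHH', seq_num, content_type, version, len(fragment))
    return hmac.new(mac_key, header + fragment, hashlib.sha256).digest()

# Usage sketch, once the IV, padding and MAC have been stripped:
# expected_mac = decrypted_record[-32:]   # SHA-256 MAC is 32 bytes
# fragment = decrypted_record[:-32]
# assert hmac.compare_digest(
#     tls12_record_mac(mac_key, 1, 0x17, 0x0303, fragment), expected_mac)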
