I have the following function call to scrypt() from hashlib using Python 3.7.9:
def aes_encrypt(msg, passwordStr):
    kdfSalt = urandom(16)  # 16 bytes == 128 bits
    hashedKey = scrypt(passwordStr.encode(), salt=kdfSalt, n=16384, r=16, p=1, dklen=64)  # 64 octets = 512 bits
When this code runs, I get the error:
File "aes_scrypt_hmac.py", line 69, in <module>
main()
File "aes_scrypt_hmac.py", line 38, in main
print(aes_encrypt(sampleData,testPassword))
File "aes_scrypt_hmac.py", line 18, in aes_encrypt
hashedKey = scrypt(passwordStr.encode(),salt=kdfSalt,n=16384,r=16,p=1, dklen=64)
ValueError: Invalid parameter combination for n, r, p, maxmem.
I have read the documentation for scrypt, and it does not spell out the constraints on the parameters; it does link to the RFC, and these parameters seem valid per the RFC. maxmem's specific requirement is not mentioned in the documentation (e.g. what does 0 mean, and what is the unit of measurement?) or in the RFC.
I have to admit I'm not sure which API you plan on using. According to scrypt's Python package, the APIs are encrypt, decrypt and hash, and you are using something I can't find.
Your function is named aes_encrypt, but the variable is called hashedKey, so I'm not sure whether you are hashing or encrypting, and those are obviously different.
However, these references might help.
scrypt's python package implementation:
def hash(password, salt, N=1 << 14, r=8, p=1, buflen=64):
    """
    Compute scrypt(password, salt, N, r, p, buflen).

    The parameters r, p, and buflen must satisfy r * p < 2^30 and
    buflen <= (2^32 - 1) * 32. The parameter N must be a power of 2
    greater than 1. N, r and p must all be positive.

    Notes for Python 2:
      - `password` and `salt` must be str instances
      - The result will be a str instance

    Notes for Python 3:
      - `password` and `salt` can be both str and bytes. If they are str
        instances, they will be encoded with utf-8.
      - The result will be a bytes instance

    Exceptions raised:
      - TypeError on invalid input
      - scrypt.error if scrypt failed
    """
When I run the following script using Python 3 and the PyPI scrypt package, everything works for me:
import scrypt
import os

def aes_encrypt(pwd):
    kdfSalt = os.urandom(16)  # 16 bytes == 128 bits
    return scrypt.hash(pwd.encode(), salt=kdfSalt, N=16384, r=16, p=1, buflen=64)
Go's scrypt package manual:
Key derives a key from the password, salt, and cost parameters, returning a byte slice of length keyLen that can be used as cryptographic key.
N is a CPU/memory cost parameter, which must be a power of two greater than 1. r and p must satisfy r * p < 2³⁰. If the parameters do not satisfy the limits, the function returns a nil byte slice and an error.
For example, you can get a derived key for e.g. AES-256 (which needs a 32-byte key) by doing:
dk, err := scrypt.Key([]byte("some password"), salt, 32768, 8, 1, 32)
The recommended parameters for interactive logins as of 2017 are N=32768, r=8 and p=1. The parameters N, r, and p should be increased as memory latency and CPU parallelism increases; consider setting N to the highest power of 2 you can derive within 100 milliseconds. Remember to get a good random salt.
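Incidentally, if you want to stay with hashlib.scrypt, the failure looks like a memory-cap issue rather than a problem with n, r, p themselves: scrypt needs roughly 128 * r * n bytes, which for n=16384, r=16 is exactly 32 MiB, right at OpenSSL's default limit (maxmem=0 appears to mean "use that default"). Here is a minimal sketch of the question's call with the cap raised; treat the exact maxmem value as an assumption to tune:
from hashlib import scrypt
from os import urandom

kdfSalt = urandom(16)
# 128 * r * n = 128 * 16 * 16384 = 32 MiB, so allow 64 MiB of headroom
hashedKey = scrypt(b'password', salt=kdfSalt, n=16384, r=16, p=1,
                   maxmem=2**26, dklen=64)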
Related
I'm signing my messages using my code below:
def sign_msg_hash(self, msg_hash: HexBytes):
    signature = self._kms_client.sign(
        KeyId=self._key_id,
        Message=msg_hash,
        MessageType="DIGEST",
        SigningAlgorithm="ECDSA_SHA_256",
    )
    act_signature = signature["Signature"]
    return act_signature
However, when trying to do the following:
sign = kms.sign_msg_hash(transaction)
vks = ecdsa.VerifyingKey.from_public_key_recovery_with_digest(
    sign, transaction, curve=ecdsa.SECP256k1, hashfunc=sha256
)
I get the following error:
ecdsa.util.MalformedSignature: Invalid length of signature, expected 64 bytes long, provided string is 72 bytes long
Now, when trying to use sign[:64] instead, it sometimes works but at other times gives me the following error:
    raise SquareRootError("%d has no square root modulo %d" % (a, p))
ecdsa.numbertheory.SquareRootError: 6631794589973073742270549970789085262483305971731159368608959895351281521222 has no square root modulo 115792089237316195423570985008687907853269984665640564039457584007908834671663
I'm not sure if this has anything to do with the encoding of the signature from the KMS or not.
So after some digging, it turned out it was a matter of encodings, as the KMS sign function returns the signature in DER format.
Hence, what I did was the following:
I took the signature and split it, and got the r and s values as follows:
decoding = ecdsa.util.sigdecode_der(sign, int("fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16))  # the second argument is the secp256k1 group order
I then concatenated the result (note that r and s should be in big-endian order):
new_sign = decoding[0].to_bytes(32, byteorder='big') + decoding[1].to_bytes(32, byteorder='big')
I then used the new sign:
vks = ecdsa.VerifyingKey.from_public_key_recovery_with_digest(
    new_sign, transaction, curve=ecdsa.SECP256k1, hashfunc=sha256
)
Another thing I encountered was with kms_client.get_public_key()
First call the method:
kms_pub_key_bytes = kms_client.get_public_key(KeyId=key_id)["PublicKey"]
Then take the last 128 hex characters, which are the x and y coordinates of the public key point (the preceding bytes are the DER wrapper):
kms_pre_sha3 = kms_pub_key_bytes.hex()[-128:]
Then do what you want with the public key.
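As a cross-check, python-ecdsa can parse that DER blob directly; a small sketch (untested against live KMS output, and it assumes the key really is a secp256k1 SubjectPublicKeyInfo):
from ecdsa import VerifyingKey

vk = VerifyingKey.from_der(kms_pub_key_bytes)  # parses the DER/SPKI wrapper
xy = vk.to_string()                            # 64 raw bytes: x || y, big-endian
assert xy.hex() == kms_pub_key_bytes.hex()[-128:]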
I am trying to read a binary message from an ESP32 using a broker; I wrote a Python script where I subscribe to the topic. The message that I actually receive is:
b'\x00\x00\x00?'
This is a float binary little-endian message, but I don't have the key to decode it. Is there a way to find the decode key based on this data?
This is my python code:
import paho.mqtt.client as mqtt

def on_connect1(client1, userdata1, flags1, rc1):
    client1.subscribe("ESP32DevKit123/mytopic")

def on_message1(client1, userdata1, msg1):
    print(msg1.topic + " " + "TESTENZA: " + str(msg1.payload))

client1 = mqtt.Client()
client1.username_pw_set(username="myuser", password="mypassword")
client1.on_connect = on_connect1
client1.on_message = on_message1
client1.connect("linkclient", portnumber, 60)

def twosComplement_hex(hexval):
    bits = 16  # Number of bits in a hexadecimal number format
    on_message1 = int(hexval, bits)
    if on_message1 & (1 << (bits-1)):
        on_message1 -= 1 << bits
    return on_message1

client1.loop_forever()
It also gives me an error on the line on_message1 -= 1 << bits; the error says: "Expected indented block" (Pylance). Any solutions?
The data you provided is b'\x00\x00\x00?' - I'm going to assume that this is 0000003f (please output hex with msg1.payload.hex()).
I'll also assume that by "float binary little endian" you mean a big-endian floating point (IEEE 754) - note that this does not match up with the algorithm you are using (two's complement). Plugging this input into an online tool indicates that the expected result ("Float - Big Endian (ABCD)") is 8.82818e-44 (it's worth checking with such a tool; sometimes the encoding may not be what you think it is!).
Let's unpack this using Python (see the struct docs for more information):
>>> from struct import unpack
>>> unpack('>f', b'\x00\x00\x00\x3f')[0]
8.828180325246348e-44
Notes:
The [0] is there because unpack returns a tuple (you can unpack more than one item from the input)
>f - the > means big-endian and the f float (standard size = 4 bytes)
The reason your original code gives the error "Expected indented block" is the lack of indentation in the line on_message1 -= 1 << bits (as it follows an if, it needs to be indented). The algorithm does not appear relevant to your task (but there may be details I'm missing).
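For completeness: if the payload really is little-endian, as stated in the question, the very same bytes decode to a far more plausible value (this is only a guess about what the ESP32 sends, so check it against a known reading):
>>> unpack('<f', b'\x00\x00\x00\x3f')[0]
0.5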
I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, bin_to_float() as shown by the output below:
float_to_bin(0.000000): '0'
Traceback (most recent call last):
File "binary-to-a-float-number.py", line 36, in <module>
float = bin_to_float(binary)
File "binary-to-a-float-number.py", line 9, in bin_to_float
return struct.unpack('>d', bf)[0]
TypeError: a bytes-like object is required, not 'str'
I tried to fix that by adding a bf = bytes(bf) right before the call to struct.unpack(), but doing so produced its own TypeError:
TypeError: string argument without an encoding
So my questions are is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python.
Here's the code that works in Python 2:
import struct

def bin_to_float(b):
    """ Convert binary string to a float. """
    bf = int_to_bytes(int(b, 2), 8)  # 8 bytes needed for IEEE 754 binary64
    return struct.unpack('>d', bf)[0]

def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    bytes = []
    for _ in range(nbytes):
        bytes.append(chr(n & 0xff))
        n >>= 8
    if minlen > 0 and len(bytes) < minlen:  # zero pad?
        bytes.extend((minlen-len(bytes)) * '0')
    return ''.join(reversed(bytes))  # high bytes at beginning

# tests

def float_to_bin(f):
    """ Convert a float into a binary string. """
    ba = struct.pack('>d', f)
    ba = bytearray(ba)
    s = ''.join('{:08b}'.format(b) for b in ba)
    s = s.lstrip('0')  # strip leading zeros
    return s if s else '0'  # but leave at least one

for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
    binary = float_to_bin(f)
    print('float_to_bin(%f): %r' % (f, binary))
    float = bin_to_float(binary)
    print('bin_to_float(%r): %f' % (binary, float))
    print('')
To make portable code that works with bytes in both Python 2 and 3, using libraries that literally use different data types between the two, you need to declare the types explicitly with the appropriate literal mark on every string (or add from __future__ import unicode_literals to the top of every module doing this). This step ensures your data types are correct internally in your code.
Secondly, decide to support Python 3 going forward, with fallbacks specific to Python 2. This means overriding str with unicode, and figuring out which methods/functions do not return the same types in both Python versions; those should be modified to return the correct (Python 3) type. Do note that bytes is a reserved word, too, so don't use that as a variable name.
Putting this together, your code will look something like this:
import struct
import sys

if sys.version_info < (3, 0):
    str = unicode
    chr = unichr

def bin_to_float(b):
    """ Convert binary string to a float. """
    bf = int_to_bytes(int(b, 2), 8)  # 8 bytes needed for IEEE 754 binary64
    return struct.unpack(b'>d', bf)[0]

def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    ba = bytearray(b'')
    for _ in range(nbytes):
        ba.append(n & 0xff)
        n >>= 8
    if minlen > 0 and len(ba) < minlen:  # zero pad?
        ba.extend((minlen-len(ba)) * b'0')
    return u''.join(str(chr(b)) for b in reversed(ba)).encode('latin1')  # high bytes at beginning

# tests

def float_to_bin(f):
    """ Convert a float into a binary string. """
    ba = struct.pack(b'>d', f)
    ba = bytearray(ba)
    s = u''.join(u'{:08b}'.format(b) for b in ba)
    s = s.lstrip(u'0')  # strip leading zeros
    return (s if s else u'0').encode('latin1')  # but leave at least one

for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
    binary = float_to_bin(f)
    print(u'float_to_bin(%f): %r' % (f, binary))
    float = bin_to_float(binary)
    print(u'bin_to_float(%r): %f' % (binary, float))
    print(u'')
I used the latin1 codec simply because that's where the byte mappings were originally defined, and it seems to work:
$ python2 foo.py
float_to_bin(0.000000): '0'
bin_to_float('0'): 0.000000
float_to_bin(1.000000): '11111111110000000000000000000000000000000000000000000000000000'
bin_to_float('11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): '1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float('1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): '100000000101001000101111000110101001111110111110011101101100100'
bin_to_float('100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): '100000000001001001000011111101110000010110000101011110101111111'
bin_to_float('100000000001001001000011111101110000010110000101011110101111111'): 3.141593
Again, but this time under Python 3.5:
$ python3 foo.py
float_to_bin(0.000000): b'0'
bin_to_float(b'0'): 0.000000
float_to_bin(1.000000): b'11111111110000000000000000000000000000000000000000000000000000'
bin_to_float(b'11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): b'1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float(b'1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): b'100000000101001000101111000110101001111110111110011101101100100'
bin_to_float(b'100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): b'100000000001001001000011111101110000010110000101011110101111111'
bin_to_float(b'100000000001001001000011111101110000010110000101011110101111111'): 3.141593
It's a lot more work, but in Python 3 you can more clearly see that the types are handled as proper bytes. I also changed your bytes = [] to a bytearray, to express more clearly what you were trying to do.
I had a different approach from @metatoaster's answer. I just modified int_to_bytes to use and return a bytearray:
def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    b = bytearray()
    for _ in range(nbytes):
        b.append(n & 0xff)
        n >>= 8
    if minlen > 0 and len(b) < minlen:  # zero pad?
        b.extend([0] * (minlen-len(b)))
    return bytearray(reversed(b))  # high bytes at beginning
This seems to work without any other modifications under both Python 2.7.11 and Python 3.5.1.
Note that I zero padded with 0 instead of '0'. I didn't do much testing, but surely that's what you meant?
In Python 3, integers have a to_bytes() method that can perform the conversion in a single call. However, since you asked for a solution that works on Python 2 and 3 unmodified, here's an alternative approach.
If you take a detour via hexadecimal representation, the function int_to_bytes() becomes very simple:
import codecs
def int_to_bytes(n, minlen=0):
    hex_str = format(n, "0{}x".format(2 * minlen))
    return codecs.decode(hex_str, "hex")
You might need some special case handling to deal with the case when the hex string gets an odd number of characters.
Note that I'm not sure this works with all versions of Python 3. I remember that pseudo-encodings weren't supported in some 3.x version, but I don't remember the details. I tested the code with Python 3.5.
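If you do hit that odd-length case, a hedged fix (my addition, not part of the original recipe) is to pad the hex string to an even number of digits before decoding:
def int_to_bytes(n, minlen=0):
    hex_str = format(n, "0{}x".format(2 * minlen))
    if len(hex_str) % 2:  # an odd digit count can't be hex-decoded
        hex_str = "0" + hex_str
    return codecs.decode(hex_str, "hex")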
I'm trying to RSA-encrypt a word two characters at a time, padding with a space, using Python, but I'm not sure how to go about it.
For example, if the encryption exponent were 8 and the modulus 37329 and the word was 'Pound', how would I go about it? I know I need to start with pow(ord('P'), and I need to take into consideration that the word is 5 characters long and that I have to do it 2 characters at a time, padding with a space. I'm not sure, but do I also need to use <<8 somewhere?
Thank you
Here's a basic example:
>>> msg = 2495247524
>>> code = pow(msg, 65537, 5551201688147) # encrypt
>>> code
4548920924688L
>>> plaintext = pow(code, 109182490673, 5551201688147) # decrypt
>>> plaintext
2495247524
See the ASPN cookbook recipe for more tools for working with the mathematical part of RSA-style public key encryption.
The details of how characters get packed and unpacked into blocks and how the numbers get encoded is a bit arcane. Here is a complete, working RSA module in pure Python.
For your particular packing pattern (2 characters at a time, padded with spaces), this should work:
>>> plaintext = 'Pound'
>>> plaintext += ' ' # this will get thrown away for even lengths
>>> for i in range(0, len(plaintext), 2):
group = plaintext[i: i+2]
plain_number = ord(group[0]) * 256 + ord(group[1])
encrypted = pow(plain_number, 8, 37329)
print group, '-->', plain_number, '-->', encrypted
Po --> 20591 --> 12139
un --> 30062 --> 2899
d --> 25632 --> 23784
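For reference, the packing step can be undone like this (my addition, not from the original answer; note that ord(group[0]) * 256 + ord(group[1]) is the same as (ord(group[0]) << 8) | ord(group[1]), which is where the << 8 from the question comes in). Reversing the exponentiation itself would require the matching private exponent:
>>> packed = 20591
>>> chr(packed // 256) + chr(packed % 256)
'Po'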
If you want to code the RSA encryption efficiently in Python, my GitHub repository may help you understand and interpret the mathematical definitions of RSA in Python:
Cryptogrphic Algoritms Implementation Using Python
RSA Key Generation
from random import randint
import math
# PrimeGen, lcm and modinv are helper functions defined elsewhere in the repository

def keyGen():
    ''' Generate Keypair '''
    i_p = randint(0, 20)
    i_q = randint(0, 20)
    # Instead of asking the user for prime numbers, which is not feasible,
    # pick two random indices; choosing higher primes is much more secure
    while i_p == i_q:
        i_q = randint(0, 20)  # re-draw so the two prime indices differ
    primes = PrimeGen(100)
    p = primes[i_p]
    q = primes[i_q]
    # computing n = p*q as part of the RSA algorithm
    n = p * q
    # Computing lamda(n), the Carmichael totient function.
    # In this case, the totient function is lcm(lamda(p), lamda(q)) = lcm(p-1, q-1).
    # We could instead apply Euler's totient function phi(n),
    # which sometimes results in a larger value than needed
    lamda_n = int(lcm(p - 1, q - 1))
    e = randint(1, lamda_n)
    # checking the following: whether e and lamda(n) are co-prime
    while math.gcd(e, lamda_n) != 1:
        e = randint(1, lamda_n)
    # Determine the modular multiplicative inverse
    d = modinv(e, lamda_n)
    # Return the key pairs
    # Public key pair: (e, n); private key pair: (d, n)
    return ((e, n), (d, n))
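PrimeGen, lcm and modinv are helpers from that repository; as a hedged stand-in for one of them, on Python 3.8+ modinv can be a one-liner, since pow supports modular inverses:
def modinv(e, m):
    # modular multiplicative inverse of e mod m; raises ValueError if gcd(e, m) != 1
    return pow(e, -1, m)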
I'm trying to create a Python client for Bacula, but I have some problems with the authentication.
The algorithm is :
import hmac
import base64
import re
...
challenge = re.search("auth cram-md5 (<.+>)", data).group(1)
# example challenge: '<...@...>'
passwd = 'b489c90f3ee5b3ca86365e1bae27186e'
hm = hmac.new(passwd, challenge).digest()
rep = base64.b64encode(hm).strip().rstrip('=')
# result with python : 9zKE3VzYQ1oIDTpBuMMowQ
# result with bacula client : 9z+E3V/YQ1oIDTpBu8MowB
Is there a simpler way than porting Bacula's implementation of base64?
int
bin_to_base64(char *buf, int buflen, char *bin, int binlen, int compatible)
{
   uint32_t reg, save, mask;
   int rem, i;
   int j = 0;

   reg = 0;
   rem = 0;
   buflen--;                      /* allow for storing EOS */
   /* ... the main loop did not survive the page: everything between a '<' and
      the next '>' was swallowed as HTML, leaving only fragments such as
      "for (i=0; i", ">= (rem - 6);" and "if (j" ... */
To verify your CRAM-MD5 implementation, it is best to use some simple test vectors and check combinations of (challenge, password, username) inputs against the expected output.
Here's one example (from http://blog.susam.in/2009/02/auth-cram-md5.html):
import hmac
username = 'foo@susam.in'
passwd = 'drowssap'
encoded_challenge = 'PDc0NTYuMTIzMzU5ODUzM0BzZGNsaW51eDIucmRzaW5kaWEuY29tPg=='
challenge = encoded_challenge.decode('base64')
digest = hmac.new(passwd, challenge).hexdigest()
response = username + ' ' + digest
encoded_response = response.encode('base64')
print encoded_response
# Zm9vQHN1c2FtLmluIDY2N2U5ZmE0NDcwZGZmM2RhOWQ2MjFmZTQwNjc2NzIy
That said, I've certainly found examples on the net where the response generated by the above code differs from the expected response stated on the relevant site, so I'm still not entirely clear as to what is happening in those cases.
I HAVE CRACKED THIS.
I ran into exactly the same problem you did, and have just spent about 4 hours identifying the problem, and reimplementing it.
The problem is the Bacula's base64 is BROKEN, AND WRONG!
There are two problems with it:
The first is that the incoming bytes are treated as signed, not unsigned. The effect of this is that, if a byte has its highest bit set (>127), it is treated as a negative number, and when it is combined with the "left over" bits from previous bytes, all of those leftover bits end up set to binary 1.
The second is that, after b64 has processed all the full 6-bit output blocks, there may be 0, 2 or 4 bits left over (depending on the input length modulo 3). The standard Base64 way to handle this is to shift the remaining bits up, so they are the HIGHEST bits in the last 6-bit block, and process them - Bacula leaves them as the LOWEST bits.
Note that some versions of Bacula may accept both the "Bacula broken base64 encoding" and the standard ones, for incoming authentication; they seem to use the broken one for their authentication.
def bacula_broken_base64(binarystring):
    b64_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
    remaining_bit_count = 0
    remaining_bits = 0
    output = ""
    for inputbyte in binarystring:
        inputbyte = ord(inputbyte)
        if inputbyte > 127:
            # REPRODUCING A BUG! set all the "remaining bits" to 1.
            remaining_bits = (1 << remaining_bit_count) - 1
        remaining_bits = (remaining_bits << 8) + inputbyte
        remaining_bit_count += 8
        while remaining_bit_count >= 6:
            # clean up:
            remaining_bit_count -= 6
            new64 = (remaining_bits >> remaining_bit_count) & 63  # 6 highest bits
            output += b64_chars[new64]
            remaining_bits &= (1 << remaining_bit_count) - 1
    if remaining_bit_count > 0:
        output += b64_chars[remaining_bits]
    return output
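To round this out, here is how the function plugs into the question's HMAC step; a hedged sketch assuming Python 2 strings as in the code above (on Python 3 the HMAC inputs would need to be bytes and an explicit digestmod):
import hmac

hm = hmac.new(passwd, challenge).digest()  # MD5 is the Python 2 default, which CRAM-MD5 uses
rep = bacula_broken_base64(hm)             # instead of base64.b64encode(hm).rstrip('=')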
I realize it's been 6 years since you asked, but perhaps someone else will find this useful.