I'm trying to generate an x5t parameter for a header to make a request to Azure using a certificate to authenticate.
In the example given in the docs here: https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-certificate-credentials, it says that the SHA-1 thumbprint 84E05C1D98BCE3A5421D225B140B36E86A3D5534 should give an x5t value of hOBcHZi846VCHSJbFAs26Go9VTQ=
When I try to convert this hash using the following, I find the x5t value to be ODRFMDVDMUQ5OEJDRTNBNTQyMUQyMjVCMTQwQjM2RTg2QTNENTUzNA==
What am I doing wrong in the conversion process?
import base64
x = "84E05C1D98BCE3A5421D225B140B36E86A3D5534"
x5t = base64.b64encode(x.encode()).decode()
print(x5t)
The given SHA-1 hash
84E05C1D98BCE3A5421D225B140B36E86A3D5534
is a long hexadecimal number. In your code you treat it as text (e.g. the characters "84"), but you need to interpret it as the hexadecimal representation of a byte array (e.g. the first byte is 0x84):
import base64
x = "84E05C1D98BCE3A5421D225B140B36E86A3D5534"
# Parse the hex string into raw bytes before Base64-encoding.
x5t = base64.b64encode(bytearray.fromhex(x))
print(x5t.decode())
The result is:
hOBcHZi846VCHSJbFAs26Go9VTQ=
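If you are starting from the certificate itself rather than a pre-computed thumbprint, the same value falls out of hashing the certificate's DER bytes. A minimal sketch (the cert.der path is hypothetical); note that JWS defines x5t as base64url-encoded (RFC 7515), which only differs from standard Base64 when the digest contains bytes that map to '+' or '/':
import base64
import hashlib

# Hypothetical path to a DER-encoded certificate.
with open("cert.der", "rb") as f:
    der = f.read()

# The thumbprint is the SHA-1 digest of the DER bytes; x5t is its
# base64url encoding.
thumbprint = hashlib.sha1(der).digest()
x5t = base64.urlsafe_b64encode(thumbprint).decode()
print(x5t)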
Cryptography noob here. I'm trying to write a script in NodeJS that encrypts a string and produces output that matches the output of my Python script that uses the cryptography.fernet library. My overall goal is to use the original key to encrypt messages in Node that will later be decrypted using Python.
Sample of my Python code:
from cryptography.fernet import Fernet
key = Fernet.generate_key() # For example: 6saGtiTFEXej729GUWSeAyQdIpRFdGhfY2XFUDpvsu8=
f = Fernet(key)
message = 'Hello World'
encoded = message.encode()
encrypted = f.encrypt(encoded)
Which produces the output: gAAAAABhJs_E-dDVp_UrLK6PWLpukDAM0OT5M6bfcqvVoCvg7r63NSi4OWOamLpABuYQG-5wsts_9h7cLbCsWmctArXcGqelXz_BXl_o2C7KM9o7_eq7VTc=
My Node script uses the built-in Crypto module and must also use the same 32-byte key that is being used in my Python program. I know that Fernet uses AES-128-CBC as its algorithm, so that's what I'm using for my Node script.
My NodeJS code:
const crypto = require("crypto");
const key = '6saGtiTFEXej729GUWSeAyQdIpRFdGhfY2XFUDpvsu8=';
const algorithm = 'aes-128-cbc';
const message = 'Hello World';
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv(algorithm, key, iv);
const encrypted = cipher.update(message, 'utf8', 'hex') + cipher.final('hex');
Which is giving me: Error: Invalid key length
My first problem is that I'm unsure how to convert the key so that it's the proper length. I also know from looking at fernet's source code that the key is split into two parts: the first 16 bytes are the signing_key and the last 16 bytes are the encryption_key - I haven't found much information on whether/how I need to deal with those two pieces of the original key in my Node implementation.
Since I'm new to this I'm a little confused on how to accomplish what I'm after. Any tips or advice is very much appreciated.
The specs for the Fernet format can be found on https://github.com/fernet/spec/blob/master/Spec.md
There they specify both the generating and the verifying steps; here are the generating steps, which should give enough information for your implementation:
Record the current time for the timestamp field.
Choose a unique IV.
Construct the ciphertext:
Pad the message to a multiple of 16 bytes (128 bits) per RFC 5652, section 6.3. This is the same padding technique used in PKCS #7 v1.5 and all versions of SSL/TLS (cf. RFC 5246, section 6.2.3.2 for TLS 1.2).
Encrypt the padded message using AES 128 in CBC mode with the chosen IV and user-supplied encryption-key.
Compute the HMAC field as described above using the user-supplied signing-key.
Concatenate all fields together in the format above.
base64url encode the entire token.
From this we can see that the signing key (the first half of the full key) is used in the HMAC, while the second half is used in AES-128-CBC, so dividing the key into those two halves (with proper conversion from the base64url string to bytes; note the key is not hex) should be enough for using the Node.js crypto module (https://nodejs.org/en/knowledge/cryptography/how-to-use-crypto-module/) to construct your own implementation, as sketched below.
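For reference, here is a minimal Python sketch of those generating steps, assuming the pyca/cryptography package for the primitives; the same steps map directly onto Node's crypto module:
import base64
import os
import struct
import time
from cryptography.hazmat.primitives import hashes, hmac, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def fernet_encrypt(key_b64, message):
    key = base64.urlsafe_b64decode(key_b64)            # 32 raw bytes
    signing_key, encryption_key = key[:16], key[16:]   # HMAC half, AES half
    iv = os.urandom(16)                                # unique IV
    # PKCS #7-pad the message to a multiple of 16 bytes, then AES-128-CBC.
    padder = padding.PKCS7(128).padder()
    padded = padder.update(message) + padder.finalize()
    encryptor = Cipher(algorithms.AES(encryption_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    # Version byte, 64-bit big-endian timestamp, IV, ciphertext...
    body = b"\x80" + struct.pack(">Q", int(time.time())) + iv + ciphertext
    # ...then HMAC-SHA256 over all of it, keyed with the signing half.
    h = hmac.HMAC(signing_key, hashes.SHA256())
    h.update(body)
    return base64.urlsafe_b64encode(body + h.finalize())
A token produced this way decrypts with Fernet(key).decrypt(token) from cryptography.fernet, which is a convenient way to check a port of the logic.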
My problem is the following.
For an AES encryption I generate a random key of size 32.
random_key = os.urandom(32)
Output: b'\xf3\xd9\x9a\n>\x99-k\xecv#VG\xecK\x9c\xbf\xcb\xae!\xb8&}\x01:\x9f0\xf1\x04\xb4\xd7\x81'
I need to store this key as a string, so I convert it as follows.
key_string = random_key.decode('latin-1')
Output: óÙ
>-kìv#VGìK¿Ë®!¸&}☺:♦´×
I use 'latin-1' because when I use 'utf-8' or anything else I get an error message. The string has a lot of spaces as well.
Later on I convert this string back to a byte string.
key_byte = key_string.encode()
Output: b'\xc3\xb3\xc3\x99\xc2\x9a\n>\xc2\x99-k\xc3\xacv#VG\xc3\xacK\xc2\x9c\xc2\xbf\xc3\x8b\xc2\xae!\xc2\xb8&}\x01:\xc2\x9f0\xc3\xb1\x04\xc2\xb4\xc3\x97\xc2\x81'
As you can see, the first byte string and the second don't match at all, not even in size.
I guess it's because of the decoding method.
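The mismatch comes from .encode() defaulting to UTF-8 while the string was produced with 'latin-1'; the codec has to match in both directions. A short sketch of two lossless round trips (Base64 is the more conventional way to store key bytes as text):
import base64
import os

random_key = os.urandom(32)

# latin-1 maps every byte value to a code point, so it round-trips,
# but only if the same codec is used on both sides:
assert random_key == random_key.decode('latin-1').encode('latin-1')

# Base64 is the usual way to store raw key material as printable text:
key_string = base64.b64encode(random_key).decode('ascii')
assert base64.b64decode(key_string) == random_key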
I am trying to implement the OS2IP algorithm in Python. However, I do not know how I can convert a character string, say "Men of few words are the best men.", into the octet format.
Use the .encode() method of str. For example:
"öä and ü".encode("utf-8")
displays
b'\xc3\xb6\xc3\xa4 and \xc3\xbc'
If you then want to convert this to an int, you can just use the int.from_bytes() method, e.g.
the_bytes = "öä and ü".encode("utf-8")
the_int = int.from_bytes(the_bytes, 'big')
print(the_int)
displays
236603614466389086088250300
In preparing for an RSA encryption, a padding algorithm is typically applied to the result of the first encoding step to pad the byte array out to the size of the RSA modulus, and then the padded byte array is converted to an integer. This padding step is critical to the security of RSA cryptography.
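Putting the pieces together, OS2IP (and its inverse I2OSP) from RFC 8017 reduce to one-liners in Python; a sketch:
def os2ip(octets):
    # RFC 8017, section 4.2: interpret the octet string as an
    # unsigned big-endian integer.
    return int.from_bytes(octets, 'big')

def i2osp(x, length):
    # RFC 8017, section 4.1: the inverse conversion, to a fixed length.
    return x.to_bytes(length, 'big')

n = os2ip("Men of few words are the best men.".encode("utf-8"))
assert i2osp(n, 34) == b"Men of few words are the best men."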
I am migrating a platform which used Passlib 1.6.2 to generate password hashes. The code to encrypt the password is (hash is called with default value for rounds):
from passlib.hash import pbkdf2_sha512 as pb
def hash(cleartext, rounds=10001):
    return pb.encrypt(cleartext, rounds=rounds)
The output format looks like (for the password "Patient3" (no quotes)):
$pbkdf2-sha512$10001$0dr7v7eWUmptrfW.9z6HkA$w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg
And "Testing123"
$pbkdf2-sha512$10001$2ZuTslYKAYDQGiPkfA.B8A$ChsEXEjanEToQcPJiuVaKk0Ls3n0YK7gnxsu59rxWOawl/iKgo0XSWyaAfhFV0.Yu3QqfehB4dc7yGGsIW.ARQ
I can see that this represents:
Algorithm SHA512
Iterations 10001
Salt 0dr7v7eWUmptrfW.9z6HkA (possibly)
The Passlib algorithm is defined on their site and reads:
All of the pbkdf2 hashes defined by passlib follow the same format, $pbkdf2-digest$rounds$salt$checksum.
$pbkdf2-digest$ is used as the Modular Crypt Format identifier ($pbkdf2-sha256$ in the example).
digest - this specifies the particular cryptographic hash used in conjunction with HMAC to form PBKDF2’s pseudorandom function for that particular hash (sha256 in the example).
rounds - the number of iterations that should be performed. this is encoded as a positive decimal number with no zero-padding (6400 in the example).
salt - this is the adapted base64 encoding of the raw salt bytes passed into the PBKDF2 function.
checksum - this is the adapted base64 encoding of the raw derived key bytes returned from the PBKDF2 function. Each scheme uses the digest size of its specific hash algorithm (digest) as the size of the raw derived key. This is enlarged by approximately 4/3 by the base64 encoding, resulting in a checksum size of 27, 43, and 86 for each of the respective algorithms listed above.
I found passlib.net, which looks a bit like an abandoned beta, and it uses '$6$' for the algorithm identifier. I could not get it to verify the password. I tried changing the algorithm to $6$, but I suspect that in effect changes the salt as well.
I also tried using PWDTK with various values for salt and hash, but it may have been I was splitting the shadow password incorrectly, or supplying $ in some places where I should not have been.
Is there any way to verify a password against this hash value in .NET? Or another solution which does not involve either a Python proxy or getting users to resupply a password?
The hash is verified by passing the password into the PBKDF2 HMAC-SHA-512 hash method and then comparing the resulting hash to the saved hash portion, converted back from the Base64 version.
Decode the saved hash from Base64 to binary, then separate the salt and hash fields.
Convert the password to binary using UTF-8 encoding.
PBKDF2-HMAC-SHA-512(toBinary(password), salt, 10001) == hash
Password: "Patient3"
$pbkdf2-sha512$10001$0dr7v7eWUmptrfW.9z6HkA$w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg
Breaks down to (with the strings converted to standard Base64: change '.' to '+' and add trailing '=' padding):
pbkdf2-sha512
10001
0dr7v7eWUmptrfW+9z6HkA==
w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7+hg==
Decoded to hex:
D1DAFBBFB796526A6DADF5BEF73E8790
C3D8FD00C5662803F5ECEA2C0AA0F10EFDA18ECBF394BA45F11ADAF08EE9FDBE7BE3AAE0859F16AE01230E9BD71B984BCF551E34BCE015AF350EB6F1D9BEFE86
Which makes sense: 16-byte (128-bit) salt and 64-byte (512-bit) SHA-512 hash.
Converting "Patient3" using UTF-8 to a binary array
Converting the salt from the modified Base64 encoding to a 16-byte binary array
Using an iteration count of 10001
Feeding this to PBKDF2 using HMAC with SHA-512
I get
C3D8FD00C5662803F5ECEA2C0AA0F10EFDA18ECBF394BA45F11ADAF08EE9FDBE7BE3AAE0859F16AE01230E9BD71B984BCF551E34BCE015AF350EB6F1D9BEFE86
Which when Base64 encoded, replacing '+' characters with '.' and stripping the trailing '=' characters returns:
w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg
I quickly knocked together a .NET implementation using zaph's logic and the code from JimmiTh's SO answer. I have put the code on GitHub (this is not supposed to be production-ready). It appears to work with more than a handful of examples from our user base.
As zaph said the logic was:
Split the hash to find the iteration count, salt and hashed password. (I have assumed the algorithm, but you'd verify it). You'll have an array of 5 values containing [0] - Nothing, [1] - Algorithm, [2] - Iterations, [3] - Salt and [4] - Hash
Turn the salt into standard Base64 encoding by replacing any '.' characters with '+' characters and appending "==".
Pass the password, salt and iteration count to the PBKDF2-HMAC-SHA512 generator.
Convert back to the original base64 format by replacing any '+' characters with '.' characters and stripping the trailing "==".
Compare to the original hash (element 4 in the split string) to this converted value and if they're equal you've got a match.
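For comparison, here is the same verification logic as a Python sketch using only the standard library (the question asks for .NET, but the steps translate one-for-one):
import base64
import hashlib
import hmac

def ab64_decode(data):
    # Passlib's "adapted" Base64: '.' instead of '+', '=' padding stripped.
    data = data.replace('.', '+')
    return base64.b64decode(data + '=' * (-len(data) % 4))

def verify(password, stored):
    # stored looks like: $pbkdf2-sha512$<rounds>$<salt>$<checksum>
    _, scheme, rounds, salt_b64, hash_b64 = stored.split('$')
    assert scheme == 'pbkdf2-sha512'
    derived = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'),
                                  ab64_decode(salt_b64), int(rounds))
    return hmac.compare_digest(derived, ab64_decode(hash_b64))

print(verify("Patient3", "$pbkdf2-sha512$10001$0dr7v7eWUmptrfW.9z6HkA$w9j9AMVmKAP17OosCqDxDv2hjsvzlLpF8Rra8I7p/b5746rghZ8WrgEjDpvXG5hLz1UeNLzgFa81Drbx2b7.hg"))  # True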
I wanted to use the SHA-1 algorithm to calculate the checksum of some data; the thing is that in Python, hashlib input is given as a string.
Is it possible to calculate SHA-1 in Python, but somehow give raw bytes as input?
I am asking because if I wanted to calculate the hash of a file, in C I would use the OpenSSL library and just pass raw bytes, but in Python I need to pass a string, so if I calculated the hash of some specific file I would get different results in the two languages.
In Python 2.x, str objects can be arbitrary byte streams. So yes, you can just pass the data into the hashlib functions as strs.
>>> import hashlib
>>> "this is binary \0\1\2"
'this is binary \x00\x01\x02'
>>> hashlib.sha1("this is binary \0\1\2").hexdigest()
'17c27af39d476f662be60be7f25c8d3873041bb3'
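In Python 3 the distinction is explicit: hashlib takes bytes, not str, so the equivalent is (same digest, since the byte values are identical):
import hashlib

# The same bytes as the Python 2 example, written as a bytes literal:
data = b"this is binary \0\1\2"
print(hashlib.sha1(data).hexdigest())
# 17c27af39d476f662be60be7f25c8d3873041bb3

# For a file, open in binary mode so raw bytes are hashed, matching
# what OpenSSL would see in C (the filename is hypothetical):
# with open("some_file", "rb") as f:
#     print(hashlib.sha1(f.read()).hexdigest())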