Why does this code generate a random password? - python

Here is a snippet of password-generating code. I have 2 questions about it; could you please help me understand?
1. urandom(6): the help for urandom says it returns n random bytes suitable for cryptographic use. So it returns 6 bytes, but are those 6 ASCII characters?
2. ord(c): this gets the decimal value of each of those bytes. Why convert to decimal here?
Help for urandom:
def urandom(n):  # real signature unknown; restored from __doc__
    """
    urandom(n) -> str

    Return n random bytes suitable for cryptographic use.
    """
    return ""
Python script:
from os import urandom
letters = "ABCDEFGHJKLMNPRSTUVWXYZ"
password = "".join(letters[ord(c) % len(letters)] for c in urandom(6))

urandom returns random bytes, each a value between 0 and 255. The sample code uses that value and the modulo operator (%) to convert it into a value between 0 and 22, so that it can index one of the 23 letters (I, O, and Q are excluded so they are not confused with digits).
Note that it is not a perfectly balanced algorithm, as it favours the first 3 letters (A, B, and C) slightly: 256 is not divisible by 23, and since 256 % 23 is 3, the byte values 253, 254, and 255 wrap around to A, B, and C as well.
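If the slight bias matters, here is a bias-free sketch (my addition, using the standard secrets module, Python 3.6+, rather than the modulo trick):

import secrets

letters = "ABCDEFGHJKLMNPRSTUVWXYZ"
# secrets.choice draws uniformly from the sequence, so no letter is
# favoured by a modulo wrap-around
password = "".join(secrets.choice(letters) for _ in range(6))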

The ord() function takes a string containing a single character and returns its Unicode index.
ex.
ord("A") => 65
ord("£") => 163
It is not used to get the decimal base of a byte as you put it, but rather its Unicode index (its place in the Unicode table).
P.S.: even though it returns the Unicode index, that doesn't mean its range equals the length of the Unicode table; a given Python build may not support the full character set. Here, each character comes from urandom, so ord() returns a value between 0 and 255.
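To tie this back to the snippet above (an illustration of mine): on Python 2, urandom returns a str, so ord(c) is needed to get each byte's value (0-255); on Python 3 it returns bytes, and iterating over bytes already yields integers, so ord would be unnecessary:

from os import urandom

letters = "ABCDEFGHJKLMNPRSTUVWXYZ"
# Python 3: each b is already an int in range(256); no ord() needed
password = "".join(letters[b % len(letters)] for b in urandom(6))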

Related

Convert binary to signed, little endian 16bit integer in Python

Trying to convert a binary list into signed 16-bit little-endian integers
input_data = [['1100110111111011','1101111011111111','0010101000000011'],['1100111111111011','1101100111111111','0010110100000011']]
Desired Output =[[-1074, -34, 810],[-1703, -39, 813]]
This is what I've got so far. It's been adapted from: Hex string to signed int in Python 3.2?,
Conversion from HEX to SIGNED DEC in python
results = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    convert = [int(y[4:6] + y[2:4], 16) for y in hex_convert]
    results.append(convert)
print(results)
output: [[64461, 65502, 810], [64463, 65497, 813]]
This works fine, but the above are unsigned integers. I need signed integers capable of handling negative values. I then tried a different approach:
results_2 = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    to_bytes = [bytes(j, 'utf-8') for j in hex_convert]
    split_bits = [int(k, 16) for k in to_bytes]
    convert_2 = [int.from_bytes(b, byteorder='little', signed=True) for b in to_bytes]
    results_2.append(convert_2)
print(results_2)
Output: [[108191910426672, 112589973780528, 56282882144304], [108191943981104, 112589235583024, 56282932475952]]
This result is even more wild than the first. I know my approach is wrong, and it doesn't help that I've never been able to get my head around binary conversion etc., but I feel I'm on the right path with:
(b, byteorder = 'little', signed = True)
but can't work out where I'm wrong. Any help explaining this concept would be greatly appreciated.
This result is even more wild than the first. I know my approach is wrong... but can't work out where I'm wrong.
The problem is in the conversion to bytes. Let's look at it a step at a time:
int(x, 2)
Fine; we treat the string as a base-2 representation of the integer value, and get that integer. Only problem is it's a) unsigned and b) big-endian.
hex(int(x,2))
What this does is create a string representation of the integer, in base 16, with a 0x prefix. Notably, there are two text characters per byte that we want. This is already heading down the wrong path.
You might have thought of using hexadecimal because you've seen \xAB style escapes inside string representations. This is a completely different thing. The string '\xAB' contains one character. The string '0xAB' contains four.
From there, everything else is still nonsense. Converting to bytes with a text encoding just means that the text character 0 for example is replaced with the byte value 48 (since in UTF-8 it's encoded with a single byte with that value). For this data we get the same results with UTF-8 that we would by assuming plain ASCII (since UTF-8 is "ASCII transparent" and there are no non-ASCII characters in the text).
So how do we do it?
We want to convert the integer from the first step into the bytes used to represent it. Just as there is a .from_bytes class method allowing us to create an integer from underlying bytes, there is an instance method allowing us to get the bytes that would represent the integer.
So, we use .to_bytes, specifying the length, signedness and endianness that was assumed when we created the int from the binary string - that gives us bytes that correspond to that string. Then, we re-create the integer from those bytes, except now specifying the proper signedness and endianness. The reason that .to_bytes makes us specify a length is because the integer doesn't have a particular length - there are a minimum number of bytes required to represent it, but you could use as many more as you like. (This is especially important if you want to handle signed values, since it will do sign-extension automatically.)
Thus:
results_2 = []
for i in input_data:
    values = [int(x, 2) for x in i]
    as_bytes = [x.to_bytes(2, byteorder='big', signed=False) for x in values]
    reinterpreted = [int.from_bytes(x, byteorder='little', signed=True) for x in as_bytes]
    results_2.append(reinterpreted)
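As a sanity check on a single value (my worked example, using the third entry of the input):

raw = '0010101000000011'
n = int(raw, 2)                                    # 10755 == 0x2a03
b = n.to_bytes(2, byteorder='big', signed=False)   # b'\x2a\x03'
print(int.from_bytes(b, byteorder='little', signed=True))  # 810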
But let's improve the organization of the code a bit. I will first make a function to handle a single integer value, and then we can use comprehensions to process the list. In fact, we can use nested comprehensions for the nested list.
def as_signed_little(binary_str):
    # This time, taking advantage of positional args and default values.
    as_bytes = int(binary_str, 2).to_bytes(2, 'big')
    return int.from_bytes(as_bytes, 'little', signed=True)

# And now we can do:
results_2 = [[as_signed_little(x) for x in i] for i in input_data]
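For comparison (my addition, not part of the original answer), the same reinterpretation can be done in one step with struct: pack the value as a big-endian unsigned short, then unpack it as a little-endian signed short.

import struct

def as_signed_little_struct(binary_str):
    # >H: big-endian unsigned 16-bit; <h: little-endian signed 16-bit
    return struct.unpack('<h', struct.pack('>H', int(binary_str, 2)))[0]

print(as_signed_little_struct('0010101000000011'))  # 810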

Convert hex-string to integer with python

Note that the problem is not hex to decimal but a string of hex values to integer.
Say I've got a string from a hexdump (e.g. '6c 02 00 00'), so I need to convert that into actual hex first, and then get the integer it represents... (this particular one would be 620 as an int16 and int32)
I tried a lot of things but confused myself more. Is there a quick way to do such a conversion in Python (preferably 3.x)?
update From Python 3.7 on, bytes.fromhex ignores whitespace, so the straightforward thing to do is parse the string into a bytes object, and then read it as an integer:
In [10]: int.from_bytes(bytes.fromhex("6c 02 00 00"), byteorder="little")
Out[10]: 620
original answer
Not only is that a string, but it is in little-endian order - meaning that just removing the spaces and using an int(xx, 16) call will not work. Neither does it have the actual byte values as 4 arbitrary 0-255 numbers (in which case struct.unpack would work).
I think a nice approach is to swap the components back into "human readable" order, and use the int call - thus:
number = int("".join("6c 02 00 00".split()[::-1]), 16)
What happens there: the first part of the expression is the split - it breaks the string at the spaces and provides a list with four strings, two digits in each. The [::-1] special slice comes next - it means roughly "give me the elements of the former sequence, starting at the end and going back one element at a time" - a common Python idiom to reverse any sequence.
This reversed sequence is passed to "".join(...) - which uses the empty string as a concatenator for every element in the sequence - the result of this call is "0000026c". With this value, we just call Python's int class, which accepts a second, optional parameter denoting the base in which the number in the first argument is expressed.
>>> int("".join("6c 02 00 00".split()[::-1]), 16)
620
Another option is to cumulatively add the conversion of each 2 digits, properly shifted according to their positional weight - this can also be done in a single expression using reduce, though a 4-line Python for loop would be more readable:
>>> from functools import reduce #not needed in Python2.x
>>> reduce(lambda x, y: x + (int(y[1], 16)<<(8 * y[0]) ), enumerate("6c 02 00 00".split()), 0)
620
update The OP just said he does not actually have the spaces in the string - in that case, one can use just about the same method, but taking each two digits instead of the split() call:
reduce(lambda x, y: x + (int(y[1], 16)<<(8 * y[0]//2) ), ((i, a[i:i+2]) for i in range(0, len(a), 2)) , 0)
(where a is the variable holding your digits, of course)
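For concreteness (my example), with a = '6c020000' the generator yields the pairs (0, '6c'), (2, '02'), (4, '00'), (6, '00'), and the reduce accumulates 108 + (2 << 8) = 620:

>>> from functools import reduce
>>> a = "6c020000"
>>> reduce(lambda x, y: x + (int(y[1], 16) << (8 * y[0] // 2)), ((i, a[i:i+2]) for i in range(0, len(a), 2)), 0)
620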
Or, convert it to an actual 4-byte number in memory using the hex codec, and unpack the number with struct - this may be more semantically correct for your code:
import codecs
import struct
struct.unpack("<I", codecs.decode("6c020000", "hex") )[0]
So the approach here is to turn each two digits into an actual byte in memory in the bytes object returned by the codecs.decode call, and to use struct to read the 4 bytes in the buffer as a single 32-bit integer.
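(On Python 3, my note, the codecs step can be replaced with bytes.fromhex to the same effect:)

>>> import struct
>>> struct.unpack("<I", bytes.fromhex("6c020000"))[0]
620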
You can use unhexlify() to convert the hex string to its binary form, and then use struct.unpack() to decode the little endian value into an int:
>>> from struct import unpack
>>> from binascii import unhexlify
>>> n = unpack('<i', unhexlify('6c 02 00 00'.replace(' ','')))[0]
>>> n
620
The format string '<i' means little endian signed integer. You can substitute with '<I' or '<L' for unsigned int or long (both 4 bytes).
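To see where the signedness in the format string matters (my illustration), here are the same four bytes read as '<i' versus '<I':

>>> unpack('<i', unhexlify('ffffffff'))[0]
-1
>>> unpack('<I', unhexlify('ffffffff'))[0]
4294967295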
If the data does not contain spaces this simplifies to
>>> n = unpack('<i', unhexlify('6c020000'))[0]

what does the following line of code do?

var=hashlib.md5(str(random.random())).hexdigest()[:16]
I was reading some Python code when I came across the above line.
Can anybody explain what it does?
The line creates a random 16 character hex string.
random.random() produces a random float value in the range [0.0, 1.0).
>>> import random
>>> random.random()
0.845295579640289
str() produces a string version of that random value.
>>> str(0.845295579640289)
'0.84529557964'
hashlib.md5() creates a MD5 message digest hash object, initialised with the string value.
>>> hashlib.md5('0.84529557964')
<md5 HASH object # 0x10074c530>
The hexdigest() method then produces the hash digest, expressed in hexadecimal. The MD5 algorithm produces 16 bytes of information, which when expressed in hexadecimal means 32 characters are produced:
>>> hashlib.md5('0.84529557964').hexdigest()
'5180b52225eac65bee1d6419e28ef397'
The [:16] slice picks out the first 16 characters, halving the digest to just the first 16 of its 32 characters:
>>> '5180b52225eac65bee1d6419e28ef397'[:16]
'5180b52225eac65b'
All in all, a rather verbose, inefficient and insecure way of producing a random 16 character hex value. I'd use os.urandom() instead, encoding to hex:
>>> import os
>>> os.urandom(8).encode('hex')
'a8cb7b56d476b556'
This produces a random 8-byte string value, which when expressed as hex, also produces 16 hex characters, entirely random.
My crypto-fu isn't that great, but I have the impression that the latter form is cryptographically stronger than taking half of an MD5 hash of a string of a floating-point pseudo-random value.
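(A note of mine: .encode('hex') is Python 2 only. On Python 3 you would use the bytes.hex() method, or the secrets module, to the same effect; the shown values are just examples, the output is random each run:)

>>> import os, secrets
>>> os.urandom(8).hex()
'a8cb7b56d476b556'
>>> secrets.token_hex(8)
'5180b52225eac65b'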
MD5 is not encryption but a hashing technique; it generates a 128-bit checksum, conventionally expressed as a 32-digit hex number in text format.
So hashlib.md5(str(random.random())).hexdigest() will give you that number as a string, and
[:16] will extract the first 16 digits of that hash and store them in var.
Read these references for more details.
Python Md5
Md5 Hash

How to convert floating point number in python?

How do I convert a floating point number to base-16, with 8 hexadecimal digits per 32-bit FLP number, in Python?
e.g.: input = 1.2717441261e+20; output wanted: 3403244E
If you want the byte values of the IEEE-754 representation, the struct module can do this:
>>> import struct
>>> f = 1.2717441261e+20
>>> struct.pack('f', f)
'\xc9\x9c\xdc`'
This is a string version of the bytes, which can then be converted into a string representation of the hex values:
>>> struct.pack('f', f).encode('hex')
'c99cdc60'
And, if you want it as a hex integer, parse it as such:
>>> s = struct.pack('f', f).encode('hex')
>>> int(s, 16)
3382500448
To display the integer as hex:
>>> hex(int(s, 16))
'0xc99cdc60'
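(Aside, my addition: .encode('hex') works only on Python 2; on Python 3, struct.pack returns a bytes object, which has a .hex() method. '<f' makes the little-endian byte order explicit, matching the native result above on x86:)

>>> import struct
>>> struct.pack('<f', 1.2717441261e+20).hex()
'c99cdc60'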
Note that this does not match the hex value in your question -- if your value is the correct one you want, please update the question to say how it is derived.
There are several possible ways to do so, but none of them leads to the result you wanted.
You can encode this float value into its IEEE-754 binary representation. This indeed yields a 32-bit number (if you use single precision), but it leads to a different result than yours, whichever endianness I assume:
import struct
struct.pack("<f", 1.2717441261e+20).encode("hex")
# -> 'c99cdc60'
struct.pack(">f", 1.2717441261e+20).encode("hex")
# -> '60dc9cc9'
struct.unpack("<f", "3403244E".decode("hex"))
# -> (687918336.0,)
struct.unpack(">f", "3403244E".decode("hex"))
# -> (1.2213533295835077e-07,)
As that didn't fit result-wise, I'll take the approaches from the other answers and include them here:
float.hex(1.2717441261e+20)
# -> '0x1.b939919e12808p+66'
That has nothing to do with 3403244E either, so maybe you want to clarify what exactly you mean.
There are surely other ways to do this conversion, but unless you specify which method you want, no one is likely to be able to help you.
There is something wrong with your expected output:
import struct
input = 1.2717441261e+20
buf = struct.pack(">f", input)
# %02x zero-pads, so bytes below 0x10 still print as two digits
print ''.join("%02x" % ord(c) for c in struct.unpack(">4c", buf))
Output :
60dc9cc9
Try float.hex(input) if input is already a float.
Try float.hex(input). This converts a number into a string representing it in base 16, and works with floats, unlike hex(). The string will begin with 0x, however, and will contain 13 hex digits after the point, so I can't help you with the 8-digits part.
Source: http://docs.python.org/2/library/stdtypes.html#float.hex

Appending ten digit integer to list concatenates some entries with an "L"

I wrote a small script to find n-digit primes in the digits of e (in relation to that old Google ad):
import math
# First 251 digits of e
e_digits = ("2"
"7182818284 5904523536 0287471352 6624977572 4709369995"
"9574966967 6277240766 3035354759 4571382178 5251664274"
"2746639193 2003059921 8174135966 2904357290 0334295260"
"5956307381 3232862794 3490763233 8298807531 9525101901"
"1573834187 9307021540 8914993488 4167509244 7614606680")
e_digits = e_digits.replace(" ", "")
digits = int(raw_input("Number of digits: "))
print "Finding ", str(digits) + "-digit primes in the first", len(e_digits), "digits of e."
numbers = []
primes = []
# Creates list of numbers based on variable digits
for n in range(0, len(e_digits) - (digits - 1)):
    test_number = e_digits[n:n+digits]
    numbers.append(int(test_number))

# Checks each number for divisors smaller than its sqrt, then appends to list primes
for n in numbers:
    n_sqrt = int(math.floor(math.sqrt(n)))
    div = []
    for i in range(2, n_sqrt + 1):
        if n % i == 0:
            div.append(i)
    if div == []:
        primes.append(n)

print primes
However, when I set digits = 10, this is printed:
[7427466391L, 7413596629L, 6059563073L, 3490763233L, 2988075319L, 1573834187, 7021540891L, 5408914993L]
All of the list entries except number six have been suffixed with an "L", and I have no clue why. The problem arises when I run the code in IDLE as well as in CMD, though only when appending ten-digit integers using this specific code.
In the if statement in the last for loop, the correct numbers are printed if I print n, or if I convert n to a string before appending. However, converting back to an integer again creates the same problem.
The problem also occurs with digits = 11, but not with digits < 10.
I cannot for the life of me find the error (or figure out if there is an error at all, really). Some advice on this would be greatly appreciated.
Your code is working just fine and what you see is normal. Those are literal representations of Python long integers.
When printing a list, the contents of a list are printed as representations, the same output as the repr() function would give. The alternative is to print individual elements of the list instead.
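For instance (my illustration, on Python 2): the list display uses repr() of each element, while str() of a long omits the suffix:

>>> primes = [7427466391L, 1573834187]
>>> print primes
[7427466391L, 1573834187]
>>> print ', '.join(str(p) for p in primes)
7427466391, 1573834187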
You don't need to worry about the long representation, however. That is just an implementation detail of Python integers leaking through:
>>> 1234567890
1234567890
>>> type(1234567890)
<type 'int'>
>>> 12345678901234567890
12345678901234567890L
>>> type(12345678901234567890)
<type 'long'>
Here, the Python interpreter prints the results of expressions as repr() representations too. Integers larger than sys.maxint automatically become long integers.
Quoting the documentation:
Plain integers (also just called integers) are implemented using long in C, which gives them at least 32 bits of precision (sys.maxint is always set to the maximum plain integer value for the current platform, the minimum value is -sys.maxint - 1). Long integers have unlimited precision.
and
Unadorned integer literals (including binary, hex, and octal numbers) yield plain integers unless the value they denote is too large to be represented as a plain integer, in which case they yield a long integer. Integer literals with an 'L' or 'l' suffix yield long integers ('L' is preferred because 1l looks too much like eleven!).
Comparisons and arithmetic between regular and long integers is fully supported and transparent:
Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where plain integer is narrower than long integer is narrower than floating point is narrower than complex. Comparisons between numbers of mixed type use the same rule.
Python 3 removed the distinction altogether.
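Under Python 3 (my illustration), the same large literal is a plain int and no suffix is printed:

>>> 12345678901234567890
12345678901234567890
>>> type(12345678901234567890)
<class 'int'>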
