I would like to be able to represent any string as a unique integer (meaning every integer could correspond to only one string, and a given string would always produce the same integer).
The obvious point is that this is how the computer already works: the string 'Hello' (for example) is stored as a number for each character, specifically one byte per character (assuming ASCII encoding).
But... I would like to perform arithmetic calculations over that number (encrypt it as a number using RSA).
The reason this is getting messy is that with a somewhat longer string like 'I am an average length string' I have more characters (29 in this case), and a 29-byte integer could get HUGE, maybe too much for the computer to handle (and even more so for bigger strings...?).
Basically, my question is: how should I do this? I wouldn't like to use any RSA module; it's a task I would like to implement myself.
Here's how to turn a string into a single number. As you suspected, the number will get very large, but Python can handle integers of arbitrary size. The usual way of working with encryption is to process fixed-size blocks of bytes rather than the whole message at once, but I'm assuming this is only for a learning experience. This assumes a byte string; if you have a Unicode string, you can encode it to UTF-8 first.
num = 0
for ch in my_string:       # iterating a bytes object yields ints in Python 3
    num = (num << 8) + ch  # parentheses matter: + binds tighter than <<
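To get the original byte string back, here is a minimal sketch of the reverse direction (it assumes the number was built as above; leading zero bytes are lost, since they don't change the value):

# recover the bytes from the integer, most significant byte first
restored = num.to_bytes((num.bit_length() + 7) // 8, byteorder='big')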
Suppose that you hash a string in python using a custom-made hash function named sash().
sash("hello world") returns something like 2769834847158000631.
What code (in Python) would implement a sash() function and an unsash() function such that unsash(sash("hello world")) returns "hello world"?
If you like, assume that the string contains ASCII characters only.
There are 128 ASCII characters.
Thus, each python string is like a natural number written in base 128.
A hash is fixed in size, whereas a string is not. Therefore there will be more possible strings than hash values, making it impossible to reverse.
In your example, you have an 11-character string containing 77 bits. Your corresponding integer would fit in 64 bits (actually 62 bits, but I will take 64 bits as what you might have been imagining). If we consider only 11-character strings (obviously there are far more), we have 2^77 possible strings. Assuming a 64-bit hash, there are only 2^64 hash values. Each hash value would have, on average, 2^77 / 2^64 = 2^13 = 8192 strings that map to it. So given just the hash value, you would have no idea which of those 8192 strings to decode it to.
If you don't mind a hash of unbounded size, then sure, you can simply consider the string itself to be the hash. Then no decoding required. You can get a little fancier, since you are limiting the characters to 0..127, and pack seven bits for each character into a string of bytes, reducing the size by 1/8th. This is effectively the base-128 number you are referring to. You may be able to get it smaller with compression if your 0..127 characters do not have the same probability. Then on average, the string can be compressed, with some possible strings necessarily getting larger instead of smaller.
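To make that concrete, here is a minimal sketch of an unbounded-size sash()/unsash() pair built on the base-128 idea (the function names come from the question; note that a string starting with chr(0) would lose its leading character, since leading zero digits don't change a number):

def sash(s):
    num = 0
    for ch in s:
        num = num * 128 + ord(ch)      # each ASCII character is one base-128 digit
    return num

def unsash(num):
    chars = []
    while num:
        chars.append(chr(num % 128))   # peel off base-128 digits, lowest first
        num //= 128
    return ''.join(reversed(chars))

With these, unsash(sash("hello world")) returns "hello world" - but sash's output grows with the string instead of staying a fixed size, so it is an encoding rather than a true hash.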
For this question, please assume python, but it doesn't necessarily matter.
Imagine you have an arbitrary ASCII string, for example:
jrioj4oi3m_=\.,ei9#
Sparing the extensive details, I need to pass this string as a "label" on to another program, but that program doesn't support "labels" containing "special characters" or even numbers. So I'm trying to encode an ASCII string into a string that uses an arbitrary subset of ASCII.
One very naive solution would be to convert the original string into binary, then convert 0s into "a" and 1s into "b". This works to solve my problem, but I would like to learn a better solution here, to become a better programmer.
First of all, what exactly is this problem called?
This is not exactly a hashing problem, because IIRC hashing generally involves encoding into a string that is shorter than the original, and involves collisions.
I need no collisions, and I don't really care how long the encoded string is, as long as it's shorter than the naive case. (Ideally it would be the shortest length possible given the subset)
In fact, it would be ideal to specify exactly what the allowed character set is, then use a generalized encoding algorithm to do the encoding.
Decoding would be nice to know also.
A simple solution would be to first convert to a hex encoding:
jrioj4oi3m_=.,ei9# => 6a72696f6a346f69336d5f3d2e2c65693923
and then translate any numbers into non-hex letters:
6a72696f6a346f69336d5f3d2e2c65693923 => waxswzwfwatuwfwzttwdvftdsescwvwztzst
So the output string would always be exactly twice the length of the input string and only ever contain characters in the range a-z.
This can be easily achieved in python like this:
>>> enc = str.maketrans('0123456789', 'qrstuvwxyz')
>>> dec = str.maketrans('qrstuvwxyz', '0123456789')
>>> s = 'jrioj4oi3m_=.,ei9#'
>>> x = s.encode('ascii').hex().translate(enc)
>>> x
'waxswzwfwatuwfwzttwdvftdsescwvwztzst'
>>> bytes.fromhex(x.translate(dec)).decode('ascii')
'jrioj4oi3m_=.,ei9#'
Interestingly, this actually turns out to be a really simple and common math problem: base conversion. As a programmer, you probably know, at least in theory, how to convert between base 2, 10, and 16 representations of a value. There are 95 printable ASCII characters, so any ASCII string can be considered a base-95 representation of a (probably very large) value. If your label only accepts 64 characters (uppercase, lowercase, digits, and 2 others, for instance), then you simply need to convert your base-95 representation into a base-64 representation of the same value.
Decoding is simply converting your base-64 representation back to the base-95 representation.
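Here is a minimal sketch of such a generalized converter, assuming a hypothetical 64-character destination alphabet of letters, digits, '-' and '_' (any alphabets work as long as encode and decode agree; a string beginning with the source alphabet's first character, the space, would lose that leading character):

import string

SRC = [chr(c) for c in range(32, 127)]              # the 95 printable ASCII characters
DST = string.ascii_letters + string.digits + '-_'   # hypothetical 64-character label alphabet

def encode(s):
    num = 0
    for ch in s:                                    # read s as a base-95 number
        num = num * len(SRC) + SRC.index(ch)
    digits = []
    while num:                                      # rewrite the value in base 64
        digits.append(DST[num % len(DST)])
        num //= len(DST)
    return ''.join(reversed(digits)) or DST[0]

def decode(s):
    num = 0
    for ch in s:                                    # read the label as a base-64 number
        num = num * len(DST) + DST.index(ch)
    digits = []
    while num:                                      # rewrite the value in base 95
        digits.append(SRC[num % len(SRC)])
        num //= len(SRC)
    return ''.join(reversed(digits))

Here decode(encode(s)) == s, and the output is only about 10% longer than the input (log 95 / log 64 ≈ 1.09), versus 2x for the hex approach.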
When I run a program such as this in Python:
x = b'francis'
print(x)
The output is b'francis'. If bytes are stored as 0s and 1s, why is it not printing them out?
You seem to be fundamentally confused, in a very common way. The data itself is a distinct concept from its representation, i.e. what you see when you attempt to print it out or otherwise display it. There may be multiple ways to represent the same data. This is just like how if I write 23 (in decimal) or 0x17 (hexadecimal) or 0o27 (octal) or 0b10111 (binary) or twenty-three (English), I am talking about the same number.
At some lower level below Python, everything is bytes, and each byte consists of bits; but it is not correct to say that the bytes "are in" 0s and 1s - just like how it is not correct to say that the number twenty-three "is in" decimal digits (or hexadecimal, octal or binary ones, or in English text characters).
The symbols 0 and 1 are just pictures that we draw on a screen to represent the state of those bits - if we choose to represent them individually. Sometimes, we choose larger groupings, and assign different symbols to various combinations of states. For example, we may interpret multiple bits as a single integer value in binary; or (using Unicode) we might further interpret that number as a "code point" (most of these are text characters; some are control characters, or portions of text characters).
A Python bytes object is a wrapper for a "raw" sequence of bytes. When you display it, Python uses a representation where each byte (grouping of 8 bits) corresponds to one or more symbols: bytes whose corresponding integer value is between thirty-two and one hundred twenty-six (inclusive) are (for historical reasons) represented using individual text characters (following the so-called ASCII encoding), while others are represented with a four-character "escape sequence" beginning with \x and followed by the hexadecimal representation of the number.
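A small demonstration (the byte values here are arbitrary):

>>> bytes([104, 105, 9, 200])
b'hi\t\xc8'

104 and 105 fall in the printable range and show as 'h' and 'i'; 9 is the tab control character and shows as the well-known escape \t; 200 is outside the ASCII range and shows as the hex escape \xc8.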
From the Python docs:

bytes and bytearray objects are sequences of integers (between 0 and 255), representing the ASCII value of single bytes.

So they are sequences of integers which represent ASCII values.
For conversion you can use:
import sys
int.from_bytes(b'\x11', byteorder=sys.byteorder) # => 17
bin(int.from_bytes(b'\x11', byteorder=sys.byteorder)) # => '0b10001'
The bytes object was intentionally designed to work like this: the repr uses the corresponding ASCII characters for bytes in the printable ASCII range, well-known backslash escapes for a few special ASCII control characters, and hex backslash escapes for everything else (and the str just is the repr).
The basic idea is that bytes can be used as an immutable array of integers from 0-255, but more often it's used as an immutable array of characters encoded in some ASCII-compatible charset.
In particular, one of the most common uses of bytes is for things like the headers in HTTP, SMTP, and other network protocols. These headers are generally entirely in pure ASCII, or at least pure ASCII keys with some values in pure ASCII and others in an ASCII-compatible charset—and you generally have to parse the ASCII headers to figure out what charset to use to decode the body. Being able to see those headers are ASCII characters is a lot more useful than just seeing them as a sequence of numbers.
Basically, everything on your computer is eventually represented by 0s and 1s.
The purpose of the b-notation isn't what you expected it to be.
I would like to refer you to a great answer that might help you understand what the b-notation is for and how to use it properly:
What does the 'b' character do in front of a string literal?
Good luck.
I've been reading about base64 conversion, and what I understand is that the encoded version of the original data will be 133% of the original size.
Then, I'm reading about how YouTube is able to have unique identifiers to their videos like FJZQSHn7fc and the reason was: an 11 character base64 string can map to a huge number.
Wait, say a huge number contains 20 characters; then wouldn't a base64-encoded string be 133% of that size, not shorter?
I'm very confused. Are there different types of base64 conversion (string to base64 vs. decimal to base64), one resulting in a bigger, and the other in a smaller, resulting string?
Each character in base 64 can encode 6 bits of data. Thus 11 characters can encode 6x11 = 66 bits of data.
2^66 = 73786976294838206464
73786976294838206464 (approximately 7.4 x 10^19 or 74 quintillion) possible identifiers is more than enough to distinguish unique YouTube videos for the foreseeable future.
It is unlikely that YouTube is using these strings of length 11 as encodings of smaller objects. You can use base64 (just a number in base 64 after all) without having to think of it as an encoding of something else, just like you can use bytes (binary numbers with 8 bits) without thinking of those bytes as being encodings of ascii characters. The only important question with an identifier scheme is if there are enough identifiers to go around. In this case there clearly are.
Think of it like this: you have a 64-bit number (called long in Java, for example).
Now, you can print that number in different ways:
As a binary number (base 2), printing 64 '0' or '1'
As a decimal number (base 10), printing up to 20 decimal digits
As a hexadecimal number (base 16), printing 16 hexadecimal digits
As a number in base 64, printing 11 "digits" in that base. You can use any graphical symbols as digits.
... you understand by now that there are many more possibilities ...
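A quick illustration of those representations (the value and the 64-symbol alphabet are arbitrary choices):

n = 2**63 + 12345                  # an arbitrary 64-bit value
print(format(n, 'b'))              # base 2: 64 binary digits
print(n)                           # base 10: up to 20 decimal digits
print(format(n, 'x'))              # base 16: 16 hexadecimal digits

ALPHABET = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
            'abcdefghijklmnopqrstuvwxyz0123456789-_')
digits = ''
while n:                           # base 64: at most 11 digits for 64 bits
    digits = ALPHABET[n % 64] + digits
    n //= 64
print(digits)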
It seems like they use the same base-64 numbers as the ones that are used in base64 encoding, that is, uppercase and lowercase letters, ordinary digits and 2 extra chars. Each character represents a 6-bit value. So you get 66 bits, and depending on the algorithm used, either the leading or trailing 2 bits are cut off to get a nice long value back.
You are confusing what things are being compared.
There are two statements, each comparing different things:
"base64 encoding is 133% bigger than original size"
"An 11 character base64 string can encode a huge number"
In the case of 1, they are normally referring to a string encoded maybe with ASCII using 8 bits a character, and comparing that with the same string encoded in base64. That is 133% bigger, because in base64 you can't use all 256 bit combinations in every byte.
In the case of 2, they are comparing using a numeric identifier, and then either encoding it as base64, or base10. In this case, base64 is a lot shorter than base10.
You can also think of the (1) case as comparing base256 against base64, and the (2) case as comparing base10 against base64.
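Both statements can be checked with a few lines of Python (the sample values are arbitrary):

import base64

s = b'hello world!'                      # 12 bytes of ASCII text
print(len(base64.b64encode(s)))          # 16: the base64 form is ~133% of the original

n = 2**66 - 1                            # a 'huge number': 73786976294838206463
print(len(str(n)))                       # 20 decimal digits
# yet the same value fits in just 11 base-64 digits (11 x 6 = 66 bits)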
When you say Base64, some would think of RFC 4648. If YouTube is using RFC 4648, then it's a 12-digit number where they're omitting the last digit because it is always '=', the padding character (the 65th element of the base64 alphabet). The 12 digits represent three blocks of four digits, and four digits yield 24 bits of information. YouTube video IDs would therefore be 64-bit, not 66-bit, if they're using the standard.
Those 64 bits might be representing an unsigned integer. YouTube used MySQL and then sharded MySQL through Vitess, so you could imagine them using an UNSIGNED BIGINT key internally that they encode via RFC 4648-compliant Base64 externally.
Clearly Tom Scott thinks YouTube is squeezing 66 bits out of their 11 characters; his video says so.
If he's wrong, then their frontend might allow you to specify four distinct video IDs for the same video. Those two extra bits' values do not affect the UNSIGNED BIGINT. Which two bits they are depends on endianness and other choices of encoding.
Regardless of whether YouTube is using standard or nonstandard encoding, they can represent 18446744073709551615 in 11 characters (since the padding character is always there and thus omitted for a 64-bit quantity).
Perhaps they use something like the following to compute a pseudorandom 64-bit integer when a new video is created:
import base64
import random

def Base64RandomSlug():
    # 64 bits of randomness, as 8 random bytes
    array = bytearray(random.getrandbits(8) for x in range(64 // 8))
    b = base64.urlsafe_b64encode(bytes(array))
    return b.decode('utf-8').rstrip('=')   # drop the trailing '=' padding
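Calling it yields an 11-character URL-safe slug; for example (output is random, so yours will differ):

>>> Base64RandomSlug()
'Pa3v-kXQx8s'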
I have been working on a program in which I am trying to convert a big binary value (given as a string) and pack it into a file. I have tried for days to make such a thing possible. Here is the code I had written to pack the large binary string.
binaryRecieved="11001010101....(Shortened)"
f=open(fileName,'wb')
m=long(binaryRecieved,2)
struct.pack('i',m)
f.write(struct.pack('i',m))
f.close()
quit()
I am left with the error
struct.pack('i',x)
struct.error: integer out of range for 'i' format code
My integer is out of range, so I was wondering if there is a different way of going about with this.
Thanks
Convert your bit string to a byte string: see for example this question Converting bits to bytes in Python. Then you can write the byte string to the file directly; no struct.pack call is needed for raw bytes (note that struct.pack('c', ...) accepts only a single byte, not a whole byte string).
For encoding m in big-endian order (like "ten" being written as "10" in normal decimal use) use:
def as_big_endian_bytes(i):
    out = bytearray()
    while i:
        out.append(i & 0xff)   # collect the lowest byte
        i = i >> 8
    out.reverse()              # bytes were collected low-to-high, so flip them
    return out
For encoding m in little-endian order (like "ten" being written as "01" in normal decimal use) use:
def as_little_endian_bytes(i):
    out = bytearray()
    while i:
        out.append(i & 0xff)   # lowest byte first: already little-endian
        i = i >> 8
    return out
Both functions work on numbers - as your code does - so the returned bytearray may be shorter than expected (for numbers, leading zeroes do not matter).
For an exact representation of a binary-digit string (which is only possible if its length is divisible by 8) you would have to do:
def as_bytes(s):
    assert len(s) % 8 == 0
    out = bytearray()
    for i in range(0, len(s), 8):   # take each 8-bit group (the range must run to len(s), not len(s)-8, or the last group is skipped)
        out.append(int(s[i:i+8], 2))
    return out
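For example:

>>> as_bytes('0100100001101001')
bytearray(b'Hi')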
In struct.pack you have used 'i', which represents a (typically 32-bit) integer and is therefore limited in range. For values that fit in 64 bits you can use 'q' (long long) instead; note that 'd' (double) would silently lose precision, since a double carries only 53 bits of mantissa. For arbitrarily large integers there is no suitable struct format code at all; use int.to_bytes() instead.
See Python struct for more information.
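A minimal sketch of the int.to_bytes approach for the original problem (the bit string here is a short stand-in for the real data; deriving the byte count from the string length also preserves leading zero bits):

binary_received = '110010101011'                 # example stand-in bit string
m = int(binary_received, 2)
num_bytes = (len(binary_received) + 7) // 8      # round up to whole bytes
with open('packed.bin', 'wb') as f:
    f.write(m.to_bytes(num_bytes, byteorder='big'))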