I'm doing a programming challenge and I'm going crazy with one of the problems. In the challenge, I need to compute the MD5 of a string. The string is given in the following form:
n[s], where n is a number and s is a string (which may itself contain such patterns). For example: b3[a2[c]] => baccaccacc
Everything went ok until I was given the following string:
1[2[3[4[5[6[7[8[9[10[11[12[13[a]]]]]]]]]]]]]
This string expands into a string of 6227020800 a's. That string is more than 6 GB, so it's nearly impossible to build and hash it directly in practical time. So, here is my question:
Are there any properties of MD5 that I can use here?
I know there has to be a way to do it quickly, and I suspect it's related to the fact that the whole string is just the same character repeated many times.
You have probably created a (recursive) function to produce the result as a single value. Instead you should use a generator to produce the result as a stream of bytes. These you can then feed byte by byte into your MD5 hash routine. The size of the stream does not matter this way; it will just have an impact on the computation time, not on the memory used.
Here's an example using a single-pass parser:
import re, sys, md5

def p(s, pos, callBack):
    while pos < len(s):
        m = re.match(r'(\d+)\[', s[pos:])
        if m:  # repetition?
            number = m.group(1)
            for i in range(int(number)):
                endPos = p(s, pos + len(number) + 1, callBack)
            pos = endPos
        elif s[pos] == ']':
            return pos + 1
        else:
            callBack(s[pos])
            pos += 1
    return pos + 1

digest = md5.new()

def feed(s):
    digest.update(s)
    sys.stdout.write(s)
    sys.stdout.flush()

end = p(sys.argv[1], 0, feed)
print
print "MD5:", digest.hexdigest()
print "finished parsing input at pos", end
All hash functions are designed to work with byte streams, so you should not first generate the whole string and only then hash it - you should write a generator that produces chunks of string data and feed them into the MD5 context.
Also, MD5 processes its input in 64-byte blocks, so it is a good idea to feed the context chunks whose length is a multiple of 64 bytes.
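A minimal sketch of that idea, assuming the expanded text comes from some generator that yields chunks of bytes (hash_stream and byte_iter are names made up here, not part of any library):

import hashlib

def hash_stream(byte_iter, block=64 * 1024):   # block is a multiple of MD5's 64-byte block size
    md5 = hashlib.md5()
    buf = bytearray()
    for piece in byte_iter:                    # byte_iter yields bytes objects
        buf += piece
        while len(buf) >= block:
            md5.update(buf[:block])
            del buf[:block]
    md5.update(buf)                            # hash whatever is left over
    return md5.hexdigest()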
Take advantage of the good properties of hashes:
import hashlib
cruncher = hashlib.md5()
chunk = 'a' * 100
for i in xrange(100000):
    cruncher.update(chunk)
print cruncher.hexdigest()
Tweak the number of rounds (x = 100000 in the snippet) and the length of the chunk (y = 100) so that x * y = 13!. The point is that you are feeding the algorithm chunks of your string (each one y characters long), one after the other, x times.
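For example, a sketch with one factorisation of 13! that divides evenly (6227020800 = 9600 * 648648; the split itself is an arbitrary choice):

import hashlib

cruncher = hashlib.md5()
chunk = b'a' * 9600              # y = 9600 characters per chunk
for i in range(648648):          # x = 648648 rounds, so x * y = 13!
    cruncher.update(chunk)
print(cruncher.hexdigest())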
I am trying to read bytes from an image, and get all the int (16 bit) values from that image.
After I parsed the image header, I got to the pixel values. The value that I get when a pair of bytes looks like b"\xd4\x00" is incorrect. In this case it should be 54272, not 3392.
These are the relevant parts of the code:
I use a generator to get the bytes:
import itertools

def osddef_generator(in_file):
    with open(in_file, mode='rb') as f:
        dat = f.read()
        for byte in dat:
            yield byte

def take_slice(in_generator, size):
    return ''.join(str(chr(i)) for i in itertools.islice(in_generator, size))

def take_single_pixel(in_generator):
    pix = itertools.islice(in_generator, 2)
    hex_list = [hex(i) for i in pix]
    hex_str = "".join(hex_list)[2:].replace("0x", '')
    intval = int(hex_str, 16)
    print("hex_list: ", hex_list)
    print("hex_str: ", hex_str)
    print("intval: ", intval)
After I get the header correctly using the take_slice method, I get to the part with the pixel values, where I use the take_single_pixel method.
Here, I get the bad results.
This is what I get:
hex_list: ['0xd4', '0x0']
hex_str: d40
intval: 3392
But the actual sequence of bytes that should be interpreted is: \xd4\x00, which equals 54272, so that my hex_list = ['0xd4', '0x00'] and hex_str = d400.
Something happens when I have a sequence of bytes when the second one is \x00.
Got any ideas? Thanks!
There are much better ways of converting bytes to integers:
int.from_bytes() takes bytes input, and a byte order argument:
>>> int.from_bytes(b"\xd4\x00", 'big')
54272
>>> int.from_bytes(b"\xd4\x00", 'little')
212
The struct.unpack() function lets you convert a whole series of bytes to integers following a pattern:
>>> import struct
>>> struct.unpack('!4H', b'\xd4\x00\xd4\x00\xd4\x00\xd4\x00')
(54272, 54272, 54272, 54272)
The array module lets you read binary data representing homogenous integer data into a memory structure efficiently:
>>> import array
>>> arr = array.array('H', fileobject.read())
However, array can't be told what byte order to use. You'd have to determine the current architecture byte order and call arr.byteswap() to reverse order if the machine order doesn't match the file order.
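A sketch of that check (fileobject is the open binary file from the surrounding discussion; the file is assumed to store big-endian 16-bit values):

import array, sys

arr = array.array('H', fileobject.read())   # 'H' = unsigned 2-byte integers
if sys.byteorder == 'little':               # machine order differs from the file's big-endian data
    arr.byteswap()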
When reading image data, it is almost always preferable to use the struct module to do the parsing. You generally then use file.read() calls with specific sizes; if the header consists of 10 bytes, use:
headerinfo = struct.unpack('<expected header pattern for 10 bytes>', f.read(10))
and go from there. For examples, look at the Pillow / PIL image plugins source code; here is how the Blizzard Mipmap image format header is read:
def _read_blp_header(self):
    self._blp_compression, = struct.unpack("<i", self.fp.read(4))

    self._blp_encoding, = struct.unpack("<b", self.fp.read(1))
    self._blp_alpha_depth, = struct.unpack("<b", self.fp.read(1))
    self._blp_alpha_encoding, = struct.unpack("<b", self.fp.read(1))
    self._blp_mips, = struct.unpack("<b", self.fp.read(1))

    self._size = struct.unpack("<II", self.fp.read(8))

    if self.magic == b"BLP1":
        # Only present for BLP1
        self._blp_encoding, = struct.unpack("<i", self.fp.read(4))
        self._blp_subtype, = struct.unpack("<i", self.fp.read(4))

    self._blp_offsets = struct.unpack("<16I", self.fp.read(16 * 4))
    self._blp_lengths = struct.unpack("<16I", self.fp.read(16 * 4))
Because struct.unpack() always returns tuples, you can assign individual elements in a tuple to name1, name2, ... names on the left-hand side, including single_name, = assignments to extract a single result.
The separate set of read calls above could also be compressed into fewer calls:
comp, enc, adepth, aenc, mips, *size = struct.unpack("<i4b2I", self.fp.read(16))
if self.magic == b"BLP1":
# Only present for BLP1
enc, subtype = struct.unpack("<2i", self.fp.read(8))
followed by specific attribute assignments.
I am working on a server engine in Python, for my game made in GameMaker Studio 2. I'm currently having some issues with making and sending a packet.
I've successfully managed to establish a connection and send the first packet, but I can't find a solution for handling data where, if the first value in the packed struct equals a given message ID, the rest of the buffer should be unpacked with a matching format.
Example:
types = 'hhh' #(message_id, x, y) example
message_id = 0
x = 200
y = 200
buffer = pack(types, 0, x, y)
On the server side:
data = conn.recv(BUFFER_SIZE)
mid = unpack('h', data)[0]
if not data: break
if mid == 0:
    sequnce = 'hhh'
    x = unpack(sequnce, data)[1]
    y = unpack(sequnce, data)[2]
It looks like your subsequent decoding is going to vary based on the message ID?
If so, you will likely want to use unpack_from which allows you to pull only the first member from the data (as written now, your initial unpack call will generate an exception because the buffer you're handing it is not the right size). You can then have code that varies the unpacking format string based on the message ID. That code could look something like this:
from struct import pack, unpack, unpack_from

while True:
    data = conn.recv(BUFFER_SIZE)
    # End of file, bail out of loop
    if not data: break
    mid = unpack_from('!h', data)[0]
    if mid == 0:
        # Message ID 0
        types = '!hhh'
        _, x, y = unpack(types, data)
        # Process message type 0
        ...
    elif mid == 1:
        types = '!hIIq'
        _, v, w, z = unpack(types, data)
        # Process message type 1
        ...
    elif mid == 2:
        ...
Note that we're unpacking the message ID again in each case along with the ID-specific parameters. You could avoid that if you like by using the optional offset argument to unpack_from:
x, y = unpack_from('!hh', data, offset=2)
One other note of explanation: If you are sending messages between two different machines, you should consider the "endianness" (byte ordering). Not all machines are "little-endian" like x86. Accordingly it's conventional to send integers and other structured numerics in a certain defined byte order - traditionally that has been "network byte order" (which is big-endian) but either is okay as long as you're consistent. You can easily do that by prepending each format string with '!' or '<' as shown above (you'll need to do that for every format string on both sides).
Finally, the above code probably works fine for a simple "toy" application but as your program increases in scope and complexity, you should be aware that there is no guarantee that your single recv call actually receives all the bytes that were sent and no other bytes (such as bytes from a subsequently sent buffer). In other words, it's often necessary to add a buffering layer, or otherwise ensure that you have received and are operating on exactly the number of bytes you intended.
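A minimal sketch of such a helper (recv_exactly is a name made up here, not part of the socket API):

def recv_exactly(conn, nbytes):
    # keep calling recv() until exactly nbytes have arrived (or the peer closes)
    chunks = []
    remaining = nbytes
    while remaining:
        chunk = conn.recv(remaining)
        if not chunk:                       # connection closed early
            raise ConnectionError("peer closed before %d bytes arrived" % nbytes)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)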
Could you unpack the whole data into a tuple and then check its elements in a loop? What is the reason to unpack it three times? I guess you could unpack it once and then work with that result - check its length first, and if it is not empty, check the first element; if it equals the special one, continue parsing. Did you try it like that?
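For what that suggestion is worth, a sketch (it assumes the buffer holds exactly one message and that every message shares the same fixed '!hhh' layout, which may not hold for other message IDs):

from struct import calcsize, unpack

fmt = '!hhh'
if len(data) == calcsize(fmt):          # only if the buffer is exactly one message
    fields = unpack(fmt, data)          # unpack everything in one call
    if fields[0] == 0:                  # first element is the message ID
        _, x, y = fields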
So my program is a Steganography program: it inserts an image into another image, and I'm trying to encrypt the data before inserting it into the cover image. However, most encryption modules expect strings and I'm trying to pass integers.
I've tried converting to string and then encrypting, but the encrypted output is full of special characters and letters, so converting back to an integer for insertion is impossible.
Anyone know if I can somehow encrypt an integer? It doesn't have to be very secure.
I'm trying to add the encryption in here:
for i in range(0,3):
    #verify we have reached the end of our hidden file
    if count >= len(Stringbits):
        #convert the bits to their rgb value and append them
        for rgbValue in pixelList:
            pixelnumbers1 = int(''.join(str(b) for b in rgbValue), 2)
            #print pixelnumbers1
            rgb_Array.append(pixelnumbers1)
        pixels[x, y] = (rgb_Array[0], rgb_Array[1], rgb_Array[2])
        print "Completed"
        return imageObject.save(output)
I've been trying to encrypt pixelnumbers1 then add it in. But pixels[x, y] requires an integer.
Below is the rest of the code, in case it helps:
def write(mainimage, secret, output):
    #string contains the header, data and length in binary
    Stringbits = dcimage.createString(secret)

    imageObject = Image.open(mainimage).convert('RGB')
    imageWidth, imageHeight = imageObject.size
    pixels = imageObject.load()

    rgbDecimal_Array = []
    rgb_Array = []
    count = 0

    #loop through each pixel
    for x in range(imageWidth):
        for y in range(imageHeight):
            r,g,b = pixels[x,y]

            #convert each pixel into an 8 bit representation
            redPixel = list(bin(r)[2:].zfill(8))
            greenPixel = list(bin(g)[2:].zfill(8))
            bluePixel = list(bin(b)[2:].zfill(8))
            pixelList = [redPixel, greenPixel, bluePixel]

            #for each of rgb
            for i in range(0,3):
                #verify we have reached the end of our hidden file
                if count >= len(Stringbits):
                    #convert the bits to their rgb value and append them
                    for rgbValue in pixelList:
                        pixelnumbers1 = int(''.join(str(b) for b in rgbValue), 2)
                        #print pixelnumbers1
                        rgb_Array.append(pixelnumbers1)
                    pixels[x, y] = (rgb_Array[0], rgb_Array[1], rgb_Array[2])
                    print "Completed"
                    return imageObject.save(output)
                #If we haven't reached the end of the file, store a bit
                else:
                    pixelList[i][7] = Stringbits[count]
                    count += 1

            pixels[x, y] = dcimage.getPixel(pixelList)
You have a fundamental misunderstanding of how computers see any type of data.
You read the byte stream of a file, which looks like a string to you, but each character is actually a byte, a value from 0 to 255. It just happens that some of them are represented by conventional string characters. Try print(bytes(range(256))) to see them all. Most standard encryption functions take a byte array in and spit a byte array out. It just happens that you get more of the bytes that don't have a "simple" representation. But they are no less bytes than what you initially fed in.
Your dcimage.py has the following:
#get the file data in binary
fileData = bytearray(open(secret, 'rb').read())#opens the binary file in read or write mode
for bits in fileData:
    binDataString += bin(bits)[2:].zfill(8)#convert the file data to binary
There is nothing stopping you from doing this:
fileData = open(secret, 'rb').read() # a bytes object by default
encryptedData = myEncryptionFunction(fileData) # also a bytes object
for bits in encryptedData:
    # ...
VERY IMPORTANT: You add a null byte at the end of your message so your extracting sequence knows when to stop. If you compress, or encrypt, a string (or byte array), it is likely a null byte will be part of that stream, which will break your extraction sequence. In that case you want to use a header that tells your program ahead of time how many bits to extract.
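A sketch of such a header (assuming a 4-byte big-endian length field; the field width is an arbitrary choice, and encryptedData is the byte string from the snippet above):

import struct

header = struct.pack('>I', len(encryptedData) * 8)   # number of payload bits that follow
stream = header + encryptedData                      # embed this, bit by bit
# when extracting: read the first 32 bits to learn the length, then read exactly that many more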
By the way, bytes are already in an integer form.
>>> some_byte = b'G'
>>> some_byte[0]
71
You're better off using bitwise operations for steganography. At the moment you take bytes and, instead of using bitwise operations between them and your pixels, you turn both into binary strings, slice and stitch them, and then turn them back into integers.
def bytes_to_bits(stream):
    for byte in stream:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 0x01

secret_bits = tuple(bytes_to_bits(encoded_data))

# simplified for one colour plane
for x in range(image_width):
    for y in range(image_height):
        # (pixel AND 254) OR bit - the first part zeroes out the lsb
        pixels[x,y] = (pixels[x,y] & 0xfe) | secret_bits[count]
        count += 1

# -------------------------------------
# to extract the bit from a stego pixel
bit = pixel & 0x01
Integers can be encrypted by adding each digit to a random integer stream in the range 0 to 9, subtracting 10 when the sum > 9. Modulo should be avoided because of ambiguities.
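A sketch of that scheme (a toy illustration, not code from the answer above: it assumes both sides derive the same digit stream from a shared seed, and it works on digit strings so leading zeros survive the round trip):

import random

def encrypt_digits(digits, seed):
    rng = random.Random(seed)                    # shared keystream of digits 0-9
    out = []
    for d in digits:
        s = int(d) + rng.randint(0, 9)
        out.append(str(s - 10 if s > 9 else s))  # keep each digit in 0-9
    return ''.join(out)

def decrypt_digits(digits, seed):
    rng = random.Random(seed)                    # same seed -> same keystream
    out = []
    for d in digits:
        s = int(d) - rng.randint(0, 9)
        out.append(str(s + 10 if s < 0 else s))
    return ''.join(out)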
I've got a folder full of very large files that need to be byte flipped by a power of 4. So essentially, I need to read the files as a binary, adjust the sequence of bits, and then write a new binary file with the bits adjusted.
In essence, what I'm trying to do is read a hex string hexString that looks like this:
"00112233AABBCCDD"
And write a file that looks like this:
"33221100DDCCBBAA"
(i.e. every two characters is a byte, and I need to reverse the byte order within each group of 4 bytes)
I am very new to python and coding in general, and the way I am currently accomplishing this task is extremely inefficient. My code currently looks like this:
import binascii

with open(myFile, 'rb') as f:
    content = f.read()
hexString = str(binascii.hexlify(content))

flippedBytes = ""
inc = 0
while inc < len(hexString):
    flippedBytes += hexString[inc + 6:inc + 8]
    flippedBytes += hexString[inc + 4:inc + 6]
    flippedBytes += hexString[inc + 2:inc + 4]
    flippedBytes += hexString[inc:inc + 2]
    inc += 8
..... write the flippedBytes to file, etc
The code I pasted above accurately accomplishes what I need (note: my actual code has a few extra "hexString.replace()" lines to remove unnecessary hex characters - but I've left those out to make the above easier to read). My ultimate problem is that it takes EXTREMELY long to run my code with larger files. Some of the files I need to flip are almost 2 GB in size, and the code was going to take almost half a day to complete one single file. I've got dozens of files I need to run this on, so that timeframe simply isn't practical.
Is there a more efficient way to flip the HEX values in a file by a power of 4?
.... for what it's worth, there is a tool called WinHEX that can do this manually, and only takes a minute max to flip the whole file.... I was just hoping to automate this with python so we didn't have to manually use WinHEX each time
You want to convert your 4-byte integers from little-endian to big-endian, or vice-versa. You can use the struct module for that:
import struct
with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    while True:
        d = infile.read(4)
        if not d:
            break
        le = struct.unpack('<I', d)
        be = struct.pack('>I', *le)
        of.write(be)
Here is a little struct awesomeness to get you started:
>>> import struct
>>> s = b'\x00\x11\x22\x33\xAA\xBB\xCC\xDD'
>>> a, b = struct.unpack('<II', s)
>>> s = struct.pack('>II', a, b)
>>> ''.join([format(x, '02x') for x in s])
'33221100ddccbbaa'
To do this at full speed for a large input, use struct.iter_unpack
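A sketch of that approach, reading the file in big blocks (the 1 MiB block size and the outFile name are arbitrary choices here, and the file length is assumed to be a multiple of 4 bytes):

import struct

with open(myFile, 'rb') as infile, open(outFile, 'wb') as outfile:
    while True:
        block = infile.read(1024 * 1024)                 # read a large, 4-byte-aligned block
        if not block:
            break
        words = [w for (w,) in struct.iter_unpack('<I', block)]
        outfile.write(struct.pack('>%dI' % len(words), *words))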
I want to build a small formatter in python giving me back the numeric
values embedded in lines of hex strings.
It is a central part of my formatter and should be reasonably fast: it has to format more than 100 lines/sec (each line about ~100 chars).
The code below should give an example where I'm currently blocked.
'data_string_in_orig' shows the given input format. It has to be
byte swapped for each word. The swap from 'data_string_in_orig' to
'data_string_in_swapped' is needed. In the end I need the structure
access as shown. The expected result is within the comment.
Thanks in advance
Wolfgang R
#!/usr/bin/python
import binascii
import struct
## 'uint32 double'
data_string_in_orig = 'b62e000052e366667a66408d'
data_string_in_swapped = '2eb60000e3526666667a8d40'
print data_string_in_orig
packed_data = binascii.unhexlify(data_string_in_swapped)
s = struct.Struct('<Id')
unpacked_data = s.unpack_from(packed_data, 0)
print 'Unpacked Values:', unpacked_data
## Unpacked Values: (46638, 943.29999999943209)
exit(0)
array.arrays have a byteswap method:
import binascii
import struct
import array
x = binascii.unhexlify('b62e000052e366667a66408d')
y = array.array('h', x)
y.byteswap()
s = struct.Struct('<Id')
print(s.unpack_from(y))
# (46638, 943.2999999994321)
The h in array.array('h', x) was chosen because it tells array.array to regard the data in x as an array of 2-byte shorts. The important thing is that each item be regarded as being 2-bytes long. H, which signifies 2-byte unsigned short, works just as well.
This should do exactly what unutbu's version does, but might be slightly easier to follow for some...
from binascii import unhexlify
from struct import pack, unpack
orig = unhexlify('b62e000052e366667a66408d')
swapped = pack('<6h', *unpack('>6h', orig))
print unpack('<Id', swapped)
# (46638, 943.2999999994321)
Basically, unpack 6 shorts big-endian, repack as 6 shorts little-endian.
Again, same thing that unutbu's code does, and you should use his.
edit Just realized I get to use my favorite Python idiom for this... Don't do this either:
orig = 'b62e000052e366667a66408d'
swap = ''.join(sum([(c,d,a,b) for a,b,c,d in zip(*[iter(orig)]*4)], ()))
# '2eb60000e3526666667a8d40'
The swap from 'data_string_in_orig' to 'data_string_in_swapped' may also be done with comprehensions without using any imports:
>>> d = 'b62e000052e366667a66408d'
>>> "".join([m[2:4]+m[0:2] for m in [d[i:i+4] for i in range(0,len(d),4)]])
'2eb60000e3526666667a8d40'
The comprehension works for swapping byte order in hex strings representing 16-bit words. Modifying it for a different word-length is trivial. We can make a general hex digit order swap function also:
def swap_order(d, wsz=4, gsz=2):
    return "".join(
        ["".join([m[i:i+gsz] for i in range(wsz-gsz, -gsz, -gsz)])
         for m in [d[i:i+wsz] for i in range(0, len(d), wsz)]]
    )
The input params are:
d : the input hex string
wsz: the word-size in nibbles (e.g for 16-bit words wsz=4, for 32-bit words wsz=8)
gsz: the number of nibbles which stay together (e.g for reordering bytes gsz=2, for reordering 16-bit words gsz = 4)
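For example, with the hex string from the question (the 32-bit call is just to illustrate the wsz parameter):

>>> d = 'b62e000052e366667a66408d'
>>> swap_order(d)          # default: swap bytes within 16-bit words
'2eb60000e3526666667a8d40'
>>> swap_order(d, wsz=8)   # swap bytes within 32-bit words
'00002eb66666e3528d40667a'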
import binascii, array
from tkinter import filedialog

infile = filedialog.askopenfilename()
with open(infile, 'rb') as infile_:
    infile_read = infile_.read()

x = infile_read
y = array.array('l', x)   # note: 'l' is platform-dependent in size; 'I' always gives 4-byte items
y.byteswap()
swapped = binascii.hexlify(y)
This is a 32-bit swap I achieved with code very much the same as "unutbu's" answer, just a little bit easier to understand. And technically binascii is not needed for the swap; only array.byteswap is needed.
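If the goal is to write the swapped binary back out rather than hex text, the array can be dumped directly (a small addition beyond the original answer; outfile is a placeholder path):

with open(outfile, 'wb') as out_:
    out_.write(y.tobytes())   # y.tofile(out_) would work as well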