I want to create an MD5 hash of a ZipFile, not of one of the files inside it. However, ZipFile objects aren't easily convertible to streams.
from hashlib import md5
from zipfile import ZipFile
zipped = ZipFile(r'/Foo/Bar/Filename.zip')
hasher = md5()
hasher.update(zipped)
return hasher.hexdigest()
The above code generates the error: TypeError: must be convertible to a buffer, not ZipFile.
Is there a straightforward way to turn a ZipFile into a stream?
There are no security issues here; I just need a quick and easy way to determine if I've seen a file before. hash(zipped) works fine, but I'd like something a little more robust if possible.
Just open the zip file as a regular file. The following code works on my machine.
from hashlib import md5
m = md5()
with open("/Foo/Bar/Filename.zip", "rb") as f:
    data = f.read()  # read the file in chunks and call update() on each chunk if the file is large
    m.update(data)
print m.hexdigest()
This function should return the MD5 hash of any file, provided its path (requires the pycrypto module):
from Crypto.Hash import MD5

def get_MD5(file_path):
    chunk_size = 8192
    h = MD5.new()
    with open(file_path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if len(chunk):
                h.update(chunk)
            else:
                break
    return h.hexdigest()

print get_MD5('pics.zip')  # example
output:
6a690fa3e5b34e30be0e7f4216544365
Info on pycrypto
I have created a simple checksum script that computes the checksum of a file called tecu.a2l and compares it to a few .md5 files, ensuring that they all have the exact same checksum whenever the script is run.
To make things easier to understand:
Let's say I have tecu.a2l with the checksum 1x2x3x. The .md5 files (if generated correctly) should have the same checksum (1x2x3x). If one of the .md5 files has a different checksum than the latest tecu.a2l, the script should give an error.
Hopefully the code might fill in the blanks if you did not quite understand my description.
import hashlib
import dst_creator_constants as CONST
import Tkinter

path_a2l = 'C:<path>\tecu.a2l'
md5 = hashlib.md5()
blocks = 65565

with open(path_a2l, 'rb') as a2l:
    readA2L = a2l.read(blocks)

generatedMD5 = md5.hexdigest()
print "stop1"

ihx_md5_files = CONST.PATH_DELIVERABLES_DST
for file in ihx_md5_files:
    print "stop2"
    if file.endswith('.md5'):
        print "stop3"
        readMD5 = file.read()
        if compare_checksums:
            print "Yes"
            # Add successful TkInter msg here
        else:
            print "No"
            # Add error msg here

def compare_checksums(generatedMD5, readMD5):
    if generatedMD5 == readMD5:
        return True
    else:
        return False
When I run this script, nothing happens: no messages, nothing. If I type python checksum.py into cmd, it returns no message. So I put in some print statements to see what the issue could be. The issue is that stop3 is never shown in the command prompt, which means the problem has something to do with the if file.endswith('.md5'): statement.
I have no idea why it's the culprit, as I have used file.endswith() in a previous related script where it worked, so I am turning to you.
You never update the hash object with the file's contents; the data just sits in your readA2L variable. Also, your file may be larger than the 65565-byte buffer you allow it. Try updating your hasher as in the function below and let us know what the result is.
import hashlib as h
from os.path import isfile

hasher = h.md5()
block_size = 65536

def get_hexdigest(file_path, hasher, block_size):
    if isfile(file_path):
        with open(file_path, 'rb') as f:
            buf = f.read(block_size)
            while len(buf) > 0:
                # Update the hasher until the entire file has been read
                hasher.update(buf)
                buf = f.read(block_size)
        digest = hasher.hexdigest()
    else:
        return None
    return digest
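Usage would then look something like this (the path variable is the one from your script; the call itself is just illustrative):

file_md5 = get_hexdigest(path_a2l, hasher, block_size)
print(file_md5)  # None if the path does not point to a file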
I am writing a simple tool that allows me to quickly check MD5 hash values of downloaded ISO files. Here is my algorithm:
import sys
import hashlib

def main():
    filename = sys.argv[1]  # Takes the ISO 'file' as an argument in the command line
    testFile = open(filename, "r")  # Opens and reads the ISO 'file'

    # Use hashlib here to find MD5 hash of the ISO 'file'. This is where I'm having problems
    hashedMd5 = hashlib.md5(testFile).hexdigest()

    realMd5 = input("Enter the valid MD5 hash: ")  # Prompt the user for the valid MD5 hash
    if (realMd5 == hashedMd5):  # Check if valid
        print("GOOD!")
    else:
        print("BAD!!")

main()
My problem is with the line where I try to take the MD5 hash of the file. I'm getting TypeError: object supporting the buffer API required. Could anyone shed some light on how to make this work?
The object created by hashlib.md5 doesn't take a file object. You need to feed it data a piece at a time, and then request the hash digest.
import hashlib

testFile = open(filename, "rb")
hash = hashlib.md5()
while True:
    piece = testFile.read(1024)
    if piece:
        hash.update(piece)
    else:  # we're at end of file
        hex_hash = hash.hexdigest()
        break

print hex_hash  # will produce what you're looking for
You need to read the file:
import sys
import hashlib

def main():
    filename = sys.argv[1]  # Takes the ISO 'file' as an argument in the command line
    testFile = open(filename, "rb")  # Opens and reads the ISO 'file'

    m = hashlib.md5()
    while True:
        data = testFile.read(4*1024*1024)
        if not data:
            break
        m.update(data)
    hashedMd5 = m.hexdigest()

    realMd5 = input("Enter the valid MD5 hash: ")  # Prompt the user for the valid MD5 hash
    if (realMd5 == hashedMd5):  # Check if valid
        print("GOOD!")
    else:
        print("BAD!!")

main()
And you probably need to open the file in binary ("rb") and read the blocks of data in chunks. An ISO file is likely too large to fit in memory.
I've had a look around for the answer to this, but I only seem to be able to find software that does it for you. Does anybody know how to go about doing this in Python?
I wrote a piece of python code that verifies the hashes of downloaded files against what's in a .torrent file. Assuming you want to check a download for corruption you may find this useful.
You need the bencode package to use this. Bencode is the serialization format used in .torrent files. It can marshal lists, dictionaries, strings and numbers somewhat like JSON.
The code takes the hashes contained in the info['pieces'] string:
torrent_file = open(sys.argv[1], "rb")
metainfo = bencode.bdecode(torrent_file.read())
info = metainfo['info']
pieces = StringIO.StringIO(info['pieces'])
That string contains a succession of 20-byte hashes (one for each piece). These hashes are then compared against the hashes of the corresponding pieces of the on-disk file(s).
The only complicated part of this code is handling multi-file torrents, because a single torrent piece can span more than one file (internally, BitTorrent treats multi-file downloads as a single contiguous file). I'm using the generator function pieces_generator() to abstract that away.
You may want to read the BitTorrent spec to understand this in more detail.
Full code below:
import sys, os, hashlib, StringIO, bencode

def pieces_generator(info):
    """Yield pieces from download file(s)."""
    piece_length = info['piece length']
    if 'files' in info:  # yield pieces from a multi-file torrent
        piece = ""
        for file_info in info['files']:
            path = os.sep.join([info['name']] + file_info['path'])
            print path
            sfile = open(path.decode('UTF-8'), "rb")
            while True:
                piece += sfile.read(piece_length - len(piece))
                if len(piece) != piece_length:
                    sfile.close()
                    break
                yield piece
                piece = ""
        if piece != "":
            yield piece
    else:  # yield pieces from a single file torrent
        path = info['name']
        print path
        sfile = open(path.decode('UTF-8'), "rb")
        while True:
            piece = sfile.read(piece_length)
            if not piece:
                sfile.close()
                return
            yield piece

def corruption_failure():
    """Display error message and exit"""
    print("download corrupted")
    exit(1)

def main():
    # Open torrent file
    torrent_file = open(sys.argv[1], "rb")
    metainfo = bencode.bdecode(torrent_file.read())
    info = metainfo['info']
    pieces = StringIO.StringIO(info['pieces'])
    # Iterate through pieces
    for piece in pieces_generator(info):
        # Compare piece hash with expected hash
        piece_hash = hashlib.sha1(piece).digest()
        if piece_hash != pieces.read(20):
            corruption_failure()
    # ensure we've read all pieces
    if pieces.read():
        corruption_failure()

if __name__ == "__main__":
    main()
Here is how I've extracted the hash (info hash) value from a torrent file:
#!/usr/bin/python
import sys, os, hashlib, StringIO
import bencode

def main():
    # Open torrent file
    torrent_file = open(sys.argv[1], "rb")
    metainfo = bencode.bdecode(torrent_file.read())
    info = metainfo['info']
    print hashlib.sha1(bencode.bencode(info)).hexdigest()

if __name__ == "__main__":
    main()
It is the same as running command:
transmissioncli -i test.torrent 2>/dev/null | grep "^hash:" | awk '{print $2}'
Hope it helps :)
According to this, you should be able to find the md5sums of files by searching for the part of the data that looks like:
d[...]6:md5sum32:[hash is here][...]e
(SHA is not part of the spec)
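A rough sketch of that idea, using the same bencode package as above: the optional md5sum key is only present if the torrent creator included it, and the script below is illustrative rather than tested against any particular torrent.

import sys
import bencode

with open(sys.argv[1], "rb") as f:
    info = bencode.bdecode(f.read())['info']

if 'files' in info:  # multi-file torrent: each file dict may carry an optional md5sum
    for file_info in info['files']:
        if 'md5sum' in file_info:
            print('/'.join(file_info['path']) + ' ' + file_info['md5sum'])
elif 'md5sum' in info:  # single-file torrent
    print(info['name'] + ' ' + info['md5sum'])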
I have used hashlib (which replaces md5 in Python 2.6/3.0), and it worked fine if I opened a file and put its content in the hashlib.md5() function.
The problem is with very big files whose sizes can exceed the available RAM.
How can I get the MD5 hash of a file without loading the whole file into memory?
You need to read the file in chunks of suitable size:
import hashlib

def md5_for_file(f, block_size=2**20):
    md5 = hashlib.md5()
    while True:
        data = f.read(block_size)
        if not data:
            break
        md5.update(data)
    return md5.digest()
Note: Make sure you open your file in binary mode ('rb'), otherwise you will get the wrong result.
So, to do the whole lot in one function, use something like:
import os
import hashlib

def generate_file_md5(rootdir, filename, blocksize=2**20):
    m = hashlib.md5()
    with open(os.path.join(rootdir, filename), "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            m.update(buf)
    return m.hexdigest()
The update above was based on the comments provided by Frerich Raabe. I tested this and found it to be correct on my Python 2.7.2 Windows installation.
I cross-checked the results using the jacksum tool.
jacksum -a md5 <filename>
Break the file into 8192-byte chunks (or some other multiple of 64 bytes) and feed them to MD5 consecutively using update().
This takes advantage of the fact that MD5 processes input in 64-byte blocks (8192 is 64×128). Since you're not reading the entire file into memory, this won't use much more than 8192 bytes of memory.
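A minimal sketch of that approach (the function name and the 8192-byte chunk size are just illustrative):

import hashlib

def md5_in_chunks(path, chunk_size=8192):  # 8192 = 64 * 128, a multiple of MD5's 64-byte block
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty bytes means end of file
                break
            h.update(chunk)
    return h.hexdigest()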
In Python 3.8+ you can do
import hashlib

with open("your_filename.txt", "rb") as f:
    file_hash = hashlib.md5()
    while chunk := f.read(8192):
        file_hash.update(chunk)

print(file_hash.digest())
print(file_hash.hexdigest())  # to get a printable str instead of bytes
Python < 3.8
import hashlib

def checksum(filename, hash_factory=hashlib.md5, chunk_num_blocks=128):
    h = hash_factory()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_num_blocks*h.block_size), b''):
            h.update(chunk)
    return h.digest()
Python 3.8 and above
import hashlib

def checksum(filename, hash_factory=hashlib.md5, chunk_num_blocks=128):
    h = hash_factory()
    with open(filename, 'rb') as f:
        while chunk := f.read(chunk_num_blocks*h.block_size):
            h.update(chunk)
    return h.digest()
Original post
If you want a more Pythonic (no while True) way of reading the file, check this code:
import hashlib

def checksum_md5(filename):
    md5 = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            md5.update(chunk)
    return md5.digest()
Note that the iter() function needs an empty byte string for the returned iterator to halt at EOF, since read() returns b'' (not just '').
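A tiny demonstration of that sentinel behaviour, using an in-memory stream instead of a real file:

from io import BytesIO

buf = BytesIO(b"abcdef")
chunks = list(iter(lambda: buf.read(2), b''))
print(chunks)  # [b'ab', b'cd', b'ef'] -- iteration stops once read() returns b''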
Here's my version of Piotr Czapla's method:
import hashlib

def md5sum(filename):
    md5 = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(128 * md5.block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()
Using multiple comment/answers for this question, here is my solution:
import hashlib

def md5_for_file(path, block_size=256*128, hr=False):
    '''
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    Here I have blocks of 4096 octets (default NTFS).
    '''
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            md5.update(chunk)
    if hr:
        return md5.hexdigest()
    return md5.digest()
This is Pythonic
This is a function
It avoids implicit values: always prefer explicit ones.
It allows (very important) performance optimizations
A Python 2/3 portable solution
To calculate a checksum (md5, sha1, etc.), you must open the file in binary mode, because you'll hash byte values:
To be Python 2.7 and Python 3 portable, you ought to use the io package, like this:
import hashlib
import io

def md5sum(src):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        content = fd.read()
        md5.update(content)
    return md5
If your files are big, you may prefer to read the file by chunks to avoid storing the whole file content in memory:
def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5
The trick here is to use the iter() function with a sentinel (the empty byte string b'').
The iterator created in this case will call o [the lambda function] with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned.
If your files are really big, you may also need to display progress information. You can do that by calling a callback function which prints or logs the amount of calculated bytes:
def md5sum(src, callback, length=io.DEFAULT_BUFFER_SIZE):
    calculated = 0
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
            calculated += len(chunk)
            callback(calculated)
    return md5
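For example, a throw-away progress callback could just print the running byte count (the file name here is purely illustrative):

def print_progress(num_bytes):
    print("hashed %d bytes so far" % num_bytes)

digest = md5sum("big_download.iso", print_progress).hexdigest()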
A remix of Bastien Semene's code that takes Hawkwing's comment about a generic hashing function into consideration:
import hashlib

def hash_for_file(path, algorithm=hashlib.algorithms[0], block_size=256*128, human_readable=True):
    """
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    Here I have blocks of 4096 octets (default NTFS).

    Linux ext4 block size:
    sudo tune2fs -l /dev/sda5 | grep -i 'block size'
    > Block size: 4096

    Input:
        path: a path
        algorithm: an algorithm in hashlib.algorithms
                   ATM: ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
        block_size: a multiple of 128 corresponding to the block size of your filesystem
        human_readable: switch between digest() or hexdigest() output, default hexdigest()
    Output:
        hash
    """
    if algorithm not in hashlib.algorithms:
        raise NameError('The algorithm "{algorithm}" you specified is '
                        'not a member of "hashlib.algorithms"'.format(algorithm=algorithm))

    hash_algo = hashlib.new(algorithm)  # According to the hashlib documentation, using new()
                                        # will be slower than calling a named
                                        # constructor, e.g. hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            hash_algo.update(chunk)
    if human_readable:
        file_hash = hash_algo.hexdigest()
    else:
        file_hash = hash_algo.digest()
    return file_hash
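Usage might then look like this (the file name is illustrative; 'sha256' is one of the algorithms listed in the docstring):

print(hash_for_file('pics.zip', algorithm='sha256'))                          # hex digest
print(hash_for_file('pics.zip', algorithm='sha256', human_readable=False))    # raw bytes digest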
You can't get its md5 without reading the full content. But you can use the update function to read the file's content block by block.
m.update(a); m.update(b) is equivalent to m.update(a+b).
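A quick way to convince yourself of that equivalence:

from hashlib import md5

m1 = md5()
m1.update(b"hello ")
m1.update(b"world")
assert m1.hexdigest() == md5(b"hello world").hexdigest()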
I think the following code is more Pythonic:
from hashlib import md5

def get_md5(fname):
    m = md5()
    with open(fname, 'rb') as fp:
        for chunk in fp:  # iterating a binary file yields chunks split at newline bytes
            m.update(chunk)
    return m.hexdigest()
I don't like loops. Based on Nathan Feger's answer:
import functools
import hashlib

md5 = hashlib.md5()
with open(filename, 'rb') as f:
    functools.reduce(lambda _, c: md5.update(c), iter(lambda: f.read(md5.block_size * 128), b''), None)
md5.hexdigest()
Implementation of Yuval Adam's answer for Django:
import hashlib
from django.db import models

class MyModel(models.Model):
    file = models.FileField()  # Any field based on django.core.files.File

    def get_hash(self):
        hash = hashlib.md5()
        for chunk in self.file.chunks(chunk_size=8192):
            hash.update(chunk)
        return hash.hexdigest()
I'm not sure that there isn't a bit too much fussing around here. I recently had problems with md5 and files stored as blobs in MySQL, so I experimented with various file sizes and the straightforward Python approach, viz:
FileHash = hashlib.md5(FileData).hexdigest()
I couldn't detect any noticeable performance difference with file sizes ranging from 2 KB to 20 MB, and therefore saw no need to 'chunk' the hashing. Anyway, if Linux has to go to disk, it will probably do it at least as well as the average programmer's ability to keep it from doing so. As it happened, the problem was nothing to do with md5. If you're using MySQL, don't forget the md5() and sha1() functions already there.
import hashlib

# Hash each line of a text file (e.g. a word list) and print the MD5 of every line.
opened = open('/home/parrot/pass.txt', 'r')
lines = opened.readlines()
for i in lines:
    strip1 = i.strip('\n')
    hash_object = hashlib.md5(strip1.encode())
    hash2 = hash_object.hexdigest()
    print hash2