Hoping somebody can provide guidance on how to compute the pairwise Hamming distance of a bunch of hashes and then cluster them. I don't care so much about performance: from looking at what I am doing and what I want to do, it's going to be slow no matter what, and it's not something that will be run over and over.
So... in a nutshell, I had mistakenly erased thousands of photos off a drive and had no backups (I know... bad practice). Using various tools I was able to recover a very high percentage of them from the drive, but was left with hundreds of thousands of photos. Due to the techniques used for recovering some of the photos (such as file carving), some of the images are corrupt to various degrees, others are identical copies, and yet others are essentially identical visually but differ byte for byte.
What I am looking at doing to help the situation is the following:
check each image and identify whether the image file is structurally corrupt (done)
generate perceptual hashes (fingerprints) for each image so that images can be compared for similarity and clustered (the fingerprinting part is done)
calculate the pairwise distance of the fingerprints
cluster the pairwise distances so that similar images can be viewed together to aid manual cleanup
In the script attached you will notice a couple of places where I calculate hashes; I will explain them so as not to cause confusion...
for images that are supported by PIL I generate three hashes: the 1st for the original image, the 2nd rotated 90 degrees, and the 3rd rotated 180 degrees. This was done so that when the pairwise calculations are done I can account for images that just vary in orientation (a short sketch of this follows the list).
for raw images not supported by PIL I instead favour hashes generated from the extracted embedded preview image. I did this instead of using the raw image because, in the case of a corrupt raw image file, there was a high probability that the preview image was intact due to its smaller size, and thus it would be better for identifying whether the image is similar to others.
the other place hashes are generated is during a last-ditch effort to identify corrupt raw images. I compare hashes of the extracted/converted raw image to those of the extracted embedded preview image, and if the similarity does not meet a defined threshold it is assumed that there is probably corruption of the raw file as a whole.
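A short sketch of that rotation hashing, using PIL and imagehash as in the script below ('photo.jpg' is just a hypothetical example file):
from PIL import Image
import imagehash

im = Image.open('photo.jpg')  # hypothetical example file
# hash the original plus its 90- and 180-degree rotations
hashes = [imagehash.dhash(im.rotate(90 * x, expand=True), 32) for x in range(3)]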
What I need guidance on is how to accomplish the following:
take the three hashes I have for each image and calculate the pairwise Hamming distances
for each image comparison, keep only the Hamming distance that is most similar
feed the results into scipy hierarchical clustering so that I can group similar images
I am just learning Python, so that is part of my challenge... From what I have gathered from Google, I think I can do this by first getting the pairwise distances using scipy.spatial.distance.pdist, then processing this to keep the most similar distance for each image comparison, and then feeding that to a scipy clustering function. But I cannot figure out how to organize this and provide things in the proper format, etc. Can anyone provide some guidance on this?
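From the examples I have found, I imagine the workflow would look something like the sketch below (hypothetical names; assuming hashes is a list where each entry holds the three imagehash objects for one image), but this is the part I can't get right:
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def best_distance(hs1, hs2):
    # imagehash objects subtract to give the number of differing bits;
    # keep only the most similar (smallest) distance across all pairings
    return min(h1 - h2 for h1 in hs1 for h2 in hs2)

n = len(hashes)
dists = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dists[i, j] = dists[j, i] = best_distance(hashes[i], hashes[j])

Z = linkage(squareform(dists), method='single')    # condensed vector in, linkage matrix out
groups = fcluster(Z, t=100, criterion='distance')  # threshold in bits; needs tuning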
Here is my current script for reference, in case anyone else finds it interesting. I will need to alter it to store some sort of dictionary of hashes, or maybe use some sort of on-disk storage.
from PIL import Image
from PIL import ImageFile
import os, sys, imagehash, pyexiv2, rawpy, re
from tempfile import NamedTemporaryFile
from subprocess import check_call, call
# allow PIL to load truncated images (so that perceptual hashes can be created for truncated/damaged images still)
ImageFile.LOAD_TRUNCATED_IMAGES = True
# image files this script will handle
# PIL supported image formats
stdimageext = ('.jpg','.jpeg', '.bmp', '.png', '.gif', '.tif', '.tiff')
# libraw/ufraw supported formats
rawimageext = ('.nef', '.dng', '.tif', '.tiff')
devnull = open(os.devnull, 'w')
corruptRegex = re.compile(r'_\[.+\]\..{3,4}$')
for root, dirs, files in os.walk(sys.argv[1]):
for filename in files:
ext = os.path.splitext(filename.lower())[1]
filepath = os.path.join(root, filename)
if ext in (stdimageext + rawimageext):
hashes = [None] * 4
print(filename)
# reset corrupt string
corrupt_str = None
if ext in (stdimageext):
metadata = pyexiv2.ImageMetadata(filepath)
metadata.read()
rotate = 0
try:
im = Image.open(filepath)
except:
pass
else:
for x in range(3):
hashes[x] = imagehash.dhash(im.rotate(90 * x), 32)  # original plus 90- and 180-degree rotations
# use jpeginfo against all jpg images as it's pretty accurate
if ext in ('.jpg','.jpeg'):
rc = 0
rc = call(["jpeginfo", "--check", filepath], stdout=devnull, stderr=devnull)
if rc == 1:
corrupt_str = 'JpegInfo'
if corrupt_str is None:
try:
im = Image.open(filepath)
im.verify()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Verify'
else:
try:
im = Image.open(filepath)
im.load()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Load'
# raw image processing
else:
# extract largest embedded preview image first
metadata_orig = pyexiv2.ImageMetadata(filepath)
metadata_orig.read()
if len(metadata_orig.previews) > 0:
preview = metadata_orig.previews[-1]
# save preview to temp file
temp_preview = NamedTemporaryFile()
preview.write_to_file(temp_preview.name)
os.rename(temp_preview.name + preview.extension, temp_preview.name)
rotate = 0
try:
im = Image.open(temp_preview.name)
except:
pass
else:
for x in range(4):
hashes[x] = imagehash.dhash(im.rotate(90 * x), 32)  # original plus 90/180/270-degree rotations
# close temp file
temp_preview.close()
# try to load raw using libraw via rawpy first,
# generally if libraw can't load it then ufraw extraction would also fail
try:
with rawpy.imread(filepath) as im:
pass  # just testing whether libraw can open the file
except:
e = sys.exc_info()[0]
corrupt_str = 'Libraw_Load'
else:
# as a final last ditch effort compare perceptual hashes of extracted
# raw and embedded preview to detect possible internal corruption
if len(metadata_orig.previews) > 0:
# extract and convert raw to jpeg image using ufraw
temp_raw = NamedTemporaryFile(suffix='.jpg')
try:
check_call(['ufraw-batch', '--wb=camera', '--rotate=camera', '--out-type=jpg', '--compression=95', '--noexif', '--lensfun=none', '--output=' + temp_raw.name, '--overwrite', '--silent', filepath],stdout=devnull, stderr=devnull)
except:
e = sys.exc_info()[0]
corrupt_str = 'Ufraw-conv'
else:
rhash = imagehash.dhash(Image.open(temp_raw.name),32)
# compare preview with raw image and compute the most similar hamming distance (best)
hamdiff = .0
for h in range(4):
# calculate hamming distance to compare similarity
hamdiff = max((256 - sum(bool(ord(ch1) - ord(ch2)) for ch1, ch2 in zip(str(hashes[h]), str(rhash))))/256,hamdiff)
if hamdiff < .7: # raw file is probably corrupt
corrupt_str = 'hash' + str(round(hamdiff*100,2))
# close temp files
temp_raw.close()
print(hamdiff)
print(rhash)
print(hashes[0])
print(hashes[1])
print(hashes[2])
print(hashes[3])
# prefix file if corruption was detected ensuring that existing files already prefixed are re prefixed
mo = corruptRegex.search(filename)
if corrupt_str is not None:
if mo is not None:
os.rename(filepath,os.path.join(root, re.sub(corruptRegex, '_[' + corrupt_str + ']', filename) + ext))
else:
os.rename(filepath,os.path.join(root, os.path.splitext(filename)[0] + '_[' + corrupt_str + ']' + ext))
else:
if mo is not None:
os.rename(filepath,os.path.join(root, re.sub(corruptRegex, '', filename) + ext))
EDITED
Just want to provide an update with what I came up with in the end; it seems to work quite nicely for my intended purpose and maybe it will prove useful for other users in a similar situation. The script can still use some polishing, but otherwise all the meat is there. As I am green with respect to using Python, if anyone sees something that could be greatly improved, please let me know.
The script does the following:
attempts to detect image corruption in terms of file structure using various methods. For raw image formats (NEF, DNG, TIF) I sometimes found that a corrupt image could still load fine, so I decided to hash both the preview image and an extracted .jpg of the raw image and compare the hashes; if they were not similar enough, I assume the image is corrupted in some form.
creates perceptual hashes for each image that could be loaded. Three are created for the base file (original, rotated 90, rotated 180). In addition, for raw images three additional hashes are created for the extracted preview image; this was done so that in cases where the raw image is corrupted we would still have hashes based on the full image (assuming the preview is fine).
images that are identified as corrupt are renamed with a suffix that indicates they are corrupt and what determined it
pairwise Hamming distances are computed by comparing hashes across all file pairs and are stored in a numpy array
the square form of the pairwise distances is fed to fastcluster for clustering
the output from fastcluster is used to generate a dendrogram to visualize clusters of similar images
I save the numpy array to disk so that I can later rerun the fastcluster/dendrogram part without recomputing the hashes for each file, which is slow. This is something I still have to alter the script to allow (see the sketch just below)...
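In the meantime, reloading the saved array and rerunning just the clustering should only take something like this (a sketch, assuming the numpyarray.npy written at the end of the script below and a matching file list):
import numpy as np
import fastcluster
from scipy.spatial.distance import squareform

a = np.load('numpyarray.npy')  # pairwise distance matrix saved earlier
X = squareform(a)              # condensed form expected by fastcluster
linkage = fastcluster.single(X)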
from PIL import Image
from PIL import ImageFile
import os, sys, imagehash, pyexiv2, rawpy, re
from tempfile import NamedTemporaryFile
from subprocess import check_call, call
import numpy as np
from scipy.cluster.hierarchy import dendrogram
from scipy.spatial.distance import squareform
import fastcluster
import matplotlib.pyplot as plt
# allow PIL to load truncated images (so that perceptual hashes can be created for truncated/damaged images still)
ImageFile.LOAD_TRUNCATED_IMAGES = True
# image files this script will handle
# PIL supported image formats
stdimageext = ('.jpg','.jpeg', '.bmp', '.png', '.gif', '.tif', '.tiff')
# libraw/ufraw supported formats
rawimageext = ('.nef', '.dng', '.tif', '.tiff')
devnull = open(os.devnull, 'w')
corruptRegex = re.compile(r'_\[.+\]\..{3,4}$')
hashes = []
filelist = []
for root, _, files in os.walk(sys.argv[1]):
for filename in files:
ext = os.path.splitext(filename.lower())[1]
relpath = os.path.relpath(root, sys.argv[1])
filepath = os.path.join(root, filename)
if ext in (stdimageext + rawimageext):
hashes_tmp = []
rhash = []
# reset corrupt string
corrupt_str = None
if ext in (stdimageext):
try:
im=Image.open(filepath)
for x in range(3):
hashes_tmp.append(str(imagehash.dhash(im.rotate(90 * x, expand=1),32)))
except:
pass
# use jpeginfo against all jpg images as it's pretty accurate
if ext in ('.jpg','.jpeg'):
rc = 0
rc = call(["jpeginfo", "--check", filepath], stdout=devnull, stderr=devnull)
if rc == 1:
corrupt_str = 'JpegInfo'
if corrupt_str is None:
try:
im = Image.open(filepath)
im.verify()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Verify'
else:
try:
im = Image.open(filepath)
im.load()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Load'
# raw image processing
if ext in (rawimageext):
# extract largest embedded preview image first
metadata_orig = pyexiv2.ImageMetadata(filepath)
metadata_orig.read()
if len(metadata_orig.previews) > 0:
preview = metadata_orig.previews[-1]
# save preview to temp file
temp_preview = NamedTemporaryFile()
preview.write_to_file(temp_preview.name)
os.rename(temp_preview.name + preview.extension, temp_preview.name)
try:
im = Image.open(temp_preview.name)
for x in range(3):
hashes_tmp.append(str(imagehash.dhash(im.rotate(90 * x,expand=1),32)))
except:
pass
# try to load raw using libraw via rawpy first,
# generally if libraw can't load it then ufraw extraction would also fail
try:
im = rawpy.imread(filepath)
except:
e = sys.exc_info()[0]
corrupt_str = 'Libraw_Load'
else:
# as a final last ditch effort compare perceptual hashes of extracted
# raw and embedded preview to detect possible internal corruption
# extract and convert raw to jpeg image using ufraw
temp_raw = NamedTemporaryFile(suffix='.jpg')
try:
check_call(['ufraw-batch', '--wb=camera', '--rotate=camera', '--out-type=jpg', '--compression=95', '--noexif', '--lensfun=none', '--output=' + temp_raw.name, '--overwrite', '--silent', filepath],stdout=devnull, stderr=devnull)
except:
e = sys.exc_info()[0]
corrupt_str = 'Ufraw-conv'
else:
try:
im = Image.open(temp_raw.name)
for x in range(3):
rhash.append(str(imagehash.dhash(im.rotate(90 * x,expand=1),32)))
except:
pass
# compare preview with raw image and compute the most similar hamming distance (best)
if len(hashes_tmp) > 0 and len(rhash) > 0:
hamdiff = 1
for rh in rhash:
# calculate hamming distance to compare similarity
hamdiff = min(hamdiff,(sum(bool(ord(ch1) - ord(ch2)) for ch1, ch2 in zip(hashes_tmp[0], rh))/len(hashes_tmp[0])))
if hamdiff > .3: # raw file is probably corrupt
corrupt_str = 'hash' + str(round(hamdiff*100,2))
hashes_tmp = hashes_tmp + rhash
# prefix file if corruption was detected ensuring that existing files already prefixed are re prefixed
mo = corruptRegex.search(filename)
newfilename = None
if corrupt_str is not None:
if mo is not None:
newfilename = re.sub(corruptRegex, '_[' + corrupt_str + ']', filename) + ext
else:
newfilename = os.path.splitext(filename)[0] + '_[' + corrupt_str + ']' + ext
else:
if mo is not None:
newfilename = re.sub(corruptRegex, '', filename) + ext
if newfilename is not None:
os.rename(filepath,os.path.join(root, newfilename))
if len(hashes_tmp) > 0:
hashes.append(hashes_tmp)
if newfilename is not None:
filelist.append(os.path.join(relpath, newfilename))
else:
filelist.append(os.path.join(relpath, filename))
print(len(filelist))
print(len(hashes))
a = np.empty(shape=(len(filelist),len(filelist)))
for hash_idx1, hash in enumerate(hashes):
a[hash_idx1,hash_idx1] = 0
hash_idx2 = hash_idx1 + 1
while hash_idx2 < len(hashes):
ham_dist = 1
for h1 in hash:
for h2 in hashes[hash_idx2]:
ham_dist = min(ham_dist, (sum(bool(ord(ch1) - ord(ch2)) for ch1, ch2 in zip(h1, h2)))/len(h1))
a[hash_idx1,hash_idx2] = ham_dist
a[hash_idx2,hash_idx1] = ham_dist
hash_idx2 = hash_idx2 + 1
print(a)
X = squareform(a)
print(X)
linkage = fastcluster.single(X)
clustdict = {i:[i] for i in range(len(linkage)+1)}
fig = plt.figure(figsize=(25,25))
plt.title('test title')
plt.xlabel('perceptual hash hamming distance')
plt.axvline(x=.15,c='red',linestyle='--')
dg = dendrogram(linkage, labels=filelist, orientation='right', show_leaf_counts=True)
ax = fig.gca()
ax.set_xlim(-.01,ax.get_xlim()[1])
plt.show()
plt.savefig('foo1.pdf', bbox_inches='tight', dpi=100)
with open('numpyarray.npy','wb') as f:
np.save(f,a)
It took a while... but I figured things out eventually and got a script that does a pretty good job of identifying whether an image is corrupt and then uses perceptual hashes to try to group similar images together.
from PIL import Image, ImageFile
import os, sys, imagehash, pyexiv2, rawpy, re
from tempfile import NamedTemporaryFile
from subprocess import Popen, PIPE
import shlex
import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster
from scipy.spatial.distance import squareform
import fastcluster
#import matplotlib.pyplot as plt
import math
import string
from wand.image import Image as wImage
import wand.exceptions
from io import BytesIO
from datetime import datetime
#import fd_table_status
def redirect_stdout():
print("Redirecting stdout and stderr")
sys.stdout.flush() # <--- important when redirecting to files
sys.stderr.flush()
newstdout = os.dup(1)
newstderr = os.dup(2)
devnull = os.open(os.devnull, os.O_WRONLY)
devnull2 = os.open(os.devnull, os.O_WRONLY)
os.dup2(devnull, 1)
os.dup2(devnull2,2)
os.close(devnull)
os.close(devnull2)
sys.stdout = os.fdopen(newstdout, 'w')
sys.stderr = os.fdopen(newstderr, 'w')
redirect_stdout()
def ct(linkage_matrix,flist,score):
cluster_id = []
for fidx, file_ in enumerate(flist):
link_ = np.where(linkage_matrix[:,:2] == fidx)[0]
if len(link_) == 1:
link = link_[0]
if linkage_matrix[link][2] <= score:
fcluster_idx = str(link).zfill(len(str(len(linkage_matrix))))
while True:
match = np.where(linkage_matrix[:,:2] == link+1+len(linkage_matrix))[0]
if len(match) == 1:
link = match[0]
link_d = linkage_matrix[link]
if link_d[2] <= score:
fcluster_idx = str(match[0]).zfill(len(str(len(linkage_matrix)))) + fcluster_idx
else:
break
else:
break
else:
fcluster_idx = None
cluster_id.append(fcluster_idx)
return cluster_id
def get_exitcode_stdout_stderr(cmd):
"""
Execute the external command and get its exitcode, stdout and stderr.
"""
args = shlex.split(cmd)
proc = Popen(args, stdout=PIPE, stderr=PIPE, close_fds=True)
out, err = proc.communicate()
exitcode = proc.returncode
del proc
return exitcode, out, err
if os.path.isdir(sys.argv[1]):
start_time = datetime.now()
# allow PIL to load truncated images (so that perceptual hashes can be created for truncated/damaged images still)
ImageFile.LOAD_TRUNCATED_IMAGES = True
# image files this script will handle
# PIL supported image formats
stdimageext = ('.jpg','.jpeg', '.bmp', '.png', '.gif', '.tif', '.tiff')
# libraw/ufraw supported formats
rawimageext = ('.nef', '.dng', '.tif', '.tiff')
corruptRegex = re.compile(r'_\[.+\]\..{3,4}$')
groupRegex = re.compile(r'^\[\d+\]_')
ufrawRegex = re.compile(r'Corrupt data near|Unexpected end of file|has the wrong dimensions!|Cannot open file|Cannot decode file|requests a nonexistent image!')
for subdirs,dirs,files in os.walk(sys.argv[1]):
files.clear()
dirs.clear()
for root,_,files in os.walk(subdirs):
print('\n******** Processing files in ' + root)
hashes = []
w_hash = []
w_hash_idx = []
filelist = []
files_ = []
cnt = 0
for f in files:
#cnt = cnt + 1
#if cnt < 10:
files_.append(f)
continue
cnt = 0
for f_idx, fname in enumerate(files_):
e=None
ext = os.path.splitext(fname.lower())[1]
filepath = os.path.join(root, fname)
imformat = ''
hashes_tmp = []
# reset corrupt string
corrupt_str = None
if ext in (stdimageext + rawimageext):
print(str(int(round(((f_idx+1)/len(files_))*100))) + '%' + ' : ' + fname + '....', end='', flush=True)
try:
with wImage(filename=filepath) as im:
imformat = '.' + im.format.lower()
ext = imformat if imformat != '' else ext
with im.convert('jpeg') as converted:
jpeg_bin = converted.make_blob()
with Image.open(BytesIO(jpeg_bin)) as im2:
hash_image = []
for x in range(3):
print('.',end='',flush=True)
hash_i = str(imagehash.dhash(im2.rotate(90 * x, expand=1),32))
if ''.join(set(hash_i)) != '0':
hash_image.append(hash_i)
if hash_image:
hash_image.append(1)
hashes_tmp.append(hash_image)
except:
e = sys.exc_info()[0]
errcode = str([k for k, v in wand.exceptions.TYPE_MAP.items() if v == e][0]).zfill(3)
if int(errcode[-2:]) in (15,25,30,35,40,50,55):
corrupt_str = 'magick'
finally:
try:
im.close()
except:
pass
try:
im2.close()
except:
pass
if ext in (stdimageext):
try:
with Image.open(filepath) as im:
hash_image = []
for x in range(3):
print('.',end='',flush=True)
hash_i = str(imagehash.dhash(im.rotate(90 * x, expand=1),32))
if ''.join(set(hash_i)) != '0':
hash_image.append(hash_i)
if hash_image:
hash_image.append(2)
hashes_tmp.append(hash_image)
except:
pass
finally:
try:
im.close()
except:
pass
# use jpeginfo against all jpg images as it's pretty accurate
if ext in ('.jpg','.jpeg'):
#rc = 0
print('.',end='',flush=True)
cmd = 'jpeginfo --check "' + filepath + '"'
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
#rc = call(["jpeginfo", "--check", filepath], stdout=DEVNULL, stderr=DEVNULL, close_fds=True)
if exitcode == 1:
corrupt_str = 'JpegInfo' if corrupt_str is None else corrupt_str
#del rc
if corrupt_str is None:
try:
with Image.open(filepath) as im:
print('.',end='',flush=True)
im.verify()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Verify' if corrupt_str is None else corrupt_str
else:
try:
with Image.open(filepath) as im:
print('.',end='',flush=True)
temp = im.copy()
im.load()
except:
e = sys.exc_info()[0]
corrupt_str = 'PIL_Load' if corrupt_str is None else corrupt_str
finally:
try:
temp.close()
except:
pass
try:
im.close()
except:
pass
finally:
try:
im.close()
except:
pass
try:
temp.close()
except:
pass
# raw image processing
if ext in (rawimageext):
print('.',end='',flush=True)
# try to load raw using libraw via rawpy first,
# generally if libraw can't load it then ufraw extraction would also fail
if corrupt_str is None:
try:
with rawpy.imread(filepath) as raw:
rgb = raw.postprocess(use_camera_wb=True)
temp_raw = NamedTemporaryFile(suffix='.jpg')
Image.fromarray(rgb).save(temp_raw.name)
with Image.open(temp_raw.name) as im:
hash_image = []
for x in range(3):
print('.',end='',flush=True)
hash_i = str(imagehash.dhash(im.rotate(90 * x, expand=1),32))
if ''.join(set(hash_i)) != '0':
hash_image.append(hash_i)
if hash_image:
hash_image.append(3)
hashes_tmp.append(hash_image)
except(rawpy.LibRawFatalError):
e = sys.exc_info()[1]
corrupt_str = 'Libraw_FE'
except(rawpy.LibRawNonFatalError):
e = sys.exc_info()[1]
corrupt_str = 'Libraw_NFE'
except:
#print(sys.exc_info())
corrupt_str = 'Libraw'
finally:
try:
im.close()
except:
pass
try:
temp_raw.close()
except:
pass
try:
raw.close()
except:
pass
if corrupt_str is None:
# as a final last ditch effort compare perceptual hashes of extracted
# raw and embedded preview to detect possible internal corruption
# extract and convert raw to jpeg image using ufraw
temp_raw = NamedTemporaryFile(suffix='.jpg')
#rc = 0
cmd = 'ufraw-batch --wb=camera --rotate=camera --out-type=jpg --compression=95 --noexif --lensfun=none --auto-crop --output=' + temp_raw.name + ' --overwrite "' + filepath + '"'
print('.',end='',flush=True)
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
if exitcode == 1 or ufrawRegex.search(str(err)) is not None:
corrupt_str = 'Ufraw' if corrupt_str is None else corrupt_str
tmpfilesize = os.stat(temp_raw.name).st_size
if tmpfilesize > 0:
try:
with Image.open(temp_raw.name) as im:
hash_image = []
for x in range(3):
print('.',end='',flush=True)
hash_i = str(imagehash.dhash(im.rotate(90 * x, expand=1),32))
if ''.join(set(hash_i)) != '0':
hash_image.append(hash_i)
if hash_image:
hash_image.append(4)
hashes_tmp.append(hash_image)
except:
pass
finally:
try:
im.close()
except:
pass
try:
temp_raw.close()
except:
pass
# attempt to extract preview images
imfile = filepath
try:
with pyexiv2.ImageMetadata(imfile) as metadata_orig:
metadata_orig.read()
#for i,p in enumerate(metadata_orig.previews):
if metadata_orig.previews:
preview = metadata_orig.previews[-1]
# save preview to temp file
temp_preview = NamedTemporaryFile()
preview.write_to_file(temp_preview.name)
os.rename(temp_preview.name + preview.extension, temp_preview.name)
try:
with Image.open(temp_preview.name) as im:
hash_image = []
for x in range(3):
print('.',end='',flush=True)
hash_i = str(imagehash.dhash(im.rotate(90 * x, expand=1),32))
if ''.join(set(hash_i)) != '0':
hash_image.append(hash_i)
if hash_image:
hash_image.append(5)
hashes_tmp.append(hash_image)
except:
pass
finally:
try:
temp_preview.close()
except:
pass
try:
im.close()
except:
pass
except:
pass
finally:
try:
metadata_orig.close()
except:
pass
# compare hashes for all images that were found or extracted and find most dissimilar hamming distance (worst)
if len(hashes_tmp) > 1:
#print('checking_hashes')
print('.',end='',flush=True)
scores = []
for h_idx, hash in enumerate(hashes_tmp):
i = h_idx + 1
while i < len(hashes_tmp):
ham_dist = 1
for h1 in hash[:-1]:
for h2 in hashes_tmp[i][:-1]:
ham_dist = min(ham_dist, (sum(bool(ord(ch1) - ord(ch2)) for ch1, ch2 in zip(h1, h2)))/len(h1))
if (hash[-1] == 5 and hashes_tmp[i][-1] != 5) or (hash[-1] != 5 and hashes_tmp[i][-1] == 5):
scores.append([ham_dist,hash[-1],hashes_tmp[i][-1]])
i = i + 1
if scores:
worst = sorted(scores, key = lambda x: x[0])[-1]
if worst[0] > 0.3:
worst1 = str(worst[1])
worst2 = str(worst[2])
corrupt_str = 'hash' + str(round(worst[0]*100,2)) + '_' + worst1 + '-' + worst2 if corrupt_str is None else corrupt_str
# prefix file if corruption was detected ensuring that existing files already prefixed are re prefixed
mo = corruptRegex.search(fname)
newfilename = None
if corrupt_str is not None:
print('Corrupt: ' + corrupt_str)
if mo is not None:
newfilename = re.sub(corruptRegex, '_[' + corrupt_str + ']', fname) + ext
else:
newfilename = os.path.splitext(fname)[0] + '_[' + corrupt_str + ']' + ext
else:
print('OK!')
if mo is not None:
newfilename = re.sub(corruptRegex, '', fname) + ext
# remove group index from name if present, this will be assigned in the next step if needed
newfilename = newfilename if newfilename is not None else fname
mo = groupRegex.search(newfilename)
if mo is not None:
newfilename = re.sub(groupRegex, '', newfilename)
if hashes_tmp:
# set() deduplicates the flattened list of hashes
hashes.append(set([item for sublist in hashes_tmp for item in sublist[:-1]]))
filelist.append([root,fname,newfilename, len(hashes_tmp)])
print('******** Grouping similar images... ************')
if len(hashes) > 1:
scores = []
for h_idx, hash in enumerate(hashes):
i = h_idx + 1
while i < len(hashes):
ham_dist = 1
for h1 in hash:
for h2 in hashes[i]:
ham_dist = min(ham_dist, (sum(bool(ord(ch1) - ord(ch2)) for ch1, ch2 in zip(h1, h2)))/len(h1))
scores.append(ham_dist)
i = i + 1
X = np.array(scores)
linkage = fastcluster.single(X)
w_hash_idx = [el_idx for el_idx, el in enumerate(filelist) if el[3] > 0]
w_hash = [filelist[i] for i in w_hash_idx]
test=ct(linkage,[el[2] for el in w_hash],.2)
for i, prfx in enumerate(test):
curfilename = w_hash[i][2]
mo = groupRegex.search(curfilename)
newfilename = None
if prfx is not None:
if mo is not None:
newfilename = re.sub(groupRegex, '[' + prfx + ']_', curfilename)
else:
newfilename = '[' + prfx + ']_' + curfilename
else:
if mo is not None:
newfilename = re.sub(groupRegex, '', curfilename)
# if newfilename is not None:
filelist[w_hash_idx[i]][2] = newfilename if newfilename is not None else curfilename
#fig = plt.figure(figsize=(25,25))
#plt.title(root)
#plt.xlabel('perceptual hash hamming distance')
#plt.axvline(x=.15,c='red',linestyle='--')
#dg = dendrogram(linkage, labels=[el[2] for el in w_hash], orientation='right', show_leaf_counts=True)
#ax = fig.gca()
#ax.set_xlim(-.02,ax.get_xlim()[1])
#plt.show
#plt.savefig(os.path.join(root,'dendrogram.pdf'), bbox_inches='tight', dpi=100)
w_hash.clear()
w_hash_idx.clear()
print('******** Renaming file if applicable... ************')
for fr in filelist:
if fr[1] != fr[2]:
#print(fr[1] + ' -- ' + fr[2])
path = fr[0]
os.rename(os.path.join(path,fr[1]),os.path.join(path,fr[2]))
filelist.clear()
duration = datetime.now() - start_time
days = divmod(duration.total_seconds(), 86400) # Get days (without [0]!)
hours = divmod(days[1], 3600) # Use remainder of days to calc hours
minutes = divmod(hours[1], 60) # Use remainder of hours to calc minutes
seconds = divmod(minutes[1], 1) # Use remainder of minutes to calc seconds
print("Time to complete: %d days, %d:%d:%d" % (days[0], hours[0], minutes[0], seconds[0]))
I'm having some trouble creating a face recognition system with OpenCV and Python. I was trying to use the documentation given by Philipp Wagner, and I have the following code:
import os
import sys
import cv2
import numpy as np
def normalize(X, low, high, dtype=None):
"""Normalizes a given array in X to a value between low and high."""
X = np.asarray(X)
minX, maxX = np.min(X), np.max(X)
# normalize to [0...1].
X = X - float(minX)
X = X / float((maxX - minX))
# scale to [low...high].
X = X * (high-low)
X = X + low
if dtype is None:
return np.asarray(X)
return np.asarray(X, dtype=dtype)
def read_images(path, sz=None):
"""Reads the images in a given folder, resizes images on the fly if size is given.
Args:
path: Path to a folder with subfolders representing the subjects (persons).
sz: A tuple with the size to which images are resized (if given).
Returns:
A list [X,y]
X: The images, which is a Python list of numpy arrays.
y: The corresponding labels (the unique number of the subject, person) in a Python list.
"""
c = 0
X,y = [], []
for dirname, dirnames, filenames in os.walk(path):
for subdirname in dirnames:
subject_path = os.path.join(dirname, subdirname)
for filename in os.listdir(subject_path):
try:
im = cv2.imread(os.path.join(subject_path, filename), cv2.IMREAD_GRAYSCALE)
# resize to given size (if given)
if (sz is not None):
im = cv2.resize(im, sz)
X.append(np.asarray(im, dtype=np.uint8))
y.append(c)
except IOError, (errno, strerror):
print "I/O error({0}): {1}".format(errno, strerror)
except:
print "Unexpected error:", sys.exc_info()[0]
raise
c = c+1
return [X,y]
if __name__ == "__main__":
out_dir = None
if len(sys.argv) < 2:
print "USAGE: facerec_demo.py </path/to/images> [</path/to/store/images/at>]"
sys.exit()
[X,y] = read_images(sys.argv[1])
y = np.asarray(y, dtype=np.int32)
# If an out_dir is given, set it:
if len(sys.argv) == 3:
out_dir = sys.argv[2]
model = cv2.face.createEigenFaceRecognizer()
model.train(np.asarray(X), np.asarray(y))
model.save('individual.xml')
[p_label, p_confidence] = model.predict(np.asarray(X[0]))
# Print it:
print "Predicted label = %d (confidence=%.2f)" % (p_label, p_confidence)
print model.getParams()
# Now let's get some data:
mean = model.getMat("mean")
eigenvectors = model.getMat("eigenvectors")
# We'll save the mean, by first normalizing it:
mean_norm = normalize(mean, 0, 255, dtype=np.uint8)
mean_resized = mean_norm.reshape(X[0].shape)
if out_dir is None:
cv2.imshow("mean", mean_resized)
else:
cv2.imwrite("%s/mean.png" % (out_dir), mean_resized)
for i in xrange(min(len(X), 16)):
eigenvector_i = eigenvectors[:,i].reshape(X[0].shape)
eigenvector_i_norm = normalize(eigenvector_i, 0, 255, dtype=np.uint8)
if out_dir is None:
cv2.imshow("%s/eigenface_%d" % (out_dir,i), eigenvector_i_norm)
else:
cv2.imwrite("%s/eigenface_%d.png" % (out_dir,i), eigenvector_i_norm)
if out_dir is None:
cv2.waitKey(0)
But I keep getting the following error:
print model.getParams()
AttributeError: 'cv2.face_BasicFaceRecognizer' object has no attribute 'getParams'
Any idea why I can't get any parameters? I thought that maybe it is because of the incorporation of the cv2.face submodule, and that therefore there might be some alternative to model.getParams() as well as getMat(), but I'm just guessing...
Thanks in advance.
Maybe it's too late, but this is what I did.
First, to see the list of methods that your cv2.face model supports:
model = cv2.face.createEigenFaceRecognizer()
help(model)
And as you'll notice, some of the old accessors have changed and are no longer used: model.getMat("mean") is now simply mean = model.getMean().
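For example, a quick sketch of the renamed accessors (based on what help(model) lists for the cv2.face recognizers in OpenCV 3.x; the exact names may vary between builds):
import cv2

model = cv2.face.createEigenFaceRecognizer()
# train the model as in the question, then:
mean = model.getMean()                  # replaces model.getMat("mean")
eigenvectors = model.getEigenVectors()  # replaces model.getMat("eigenvectors")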
I hope it helps you :)
I've constructed a Python script and it works well on OS X/Linux, but I'm having problems on Windows (see title). It's using the Pillow module, and the error originates in the module PIL\Image.py on line 2274.
My code:
# -*- coding: utf-8 -*-
import os
import sys
import urllib2
from PIL import Image, ImageFile
from PyPDF2 import PdfFileReader, PdfFileWriter, PdfFileMerger
from bs4 import BeautifulSoup
ImageFile.LOAD_TRUNCATED_IMAGES = True
def parser():
try:
return sys.argv[1].lower()
except IndexError:
print 'no argument specified'
the_url = 'http://www.oldgames.sk'
base_url = the_url + '/mags/'
# Add magazines + relative URLs here
magazines = {
'score': 'score/',
'level': 'level/',
'amiga': 'amiga-magazin/',
'bit': 'bit/',
'commodore': 'commodore-amater/',
'CGW': 'cgw/',
'excalibur': 'excalibur/',
'hrac': 'hrac-cz/',
'joystick': 'joystick-sk/',
'pocitac-aktivne': 'pocitac-aktivne/',
'pocitacove-hry': 'pocitacove-hry/',
'riki': 'riki/',
'zzap64': 'zzap64/'}
issue_links = []
download_list = {}
def parse_args(arg):
if arg == '--list':
items = [i for i in magazines.keys()]
for item in items:
print item
sys.exit()
elif arg in magazines:
print "Scraping %s magazine..." % arg.capitalize()
return base_url + magazines[arg]
else:
return sys.exit('invalid magazine name')
def extract_links_to_issue(url):
soup = BeautifulSoup(urllib2.urlopen(url))
for div in soup.findAll('div','mImage'):
issue_links.append(the_url + div.a['href'])
print 'Scraped %d links' % len(issue_links)
def issue_renamer(issue_name):
char1 = '\\'
char2 = '/'
replacement = '-'
if char1 in issue_name:
issue_name = issue_name.replace(char1, replacement)
print 'inv. char (%s): renaming to %s' % (char1, issue_name)
elif char2 in issue_name:
issue_name = issue_name.replace(char2, replacement)
print 'inv. char (%s): renaming to %s' % (char2, issue_name)
return issue_name
def extract_links_to_images(issue_links):
for index, link in enumerate(issue_links):
print 'Scraping issue #%d: %s' % (index + 1, link)
issue_soup = BeautifulSoup(urllib2.urlopen(link))
image_list = []
for image in issue_soup.findAll('div', 'mags_thumb_article'):
issue_name = issue_renamer(issue_soup.findAll('h1','top')[0].text)
image_list.append(the_url + image.a['href'])
download_list[issue_name] = image_list
def clean_up(list_of_files, list_of_pdfs):
num = len(list_of_files) + len(list_of_pdfs)
for file in list_of_files:
os.remove(file)
for pdf in list_of_pdfs:
os.remove(pdf)
print 'Cleaned up %d files' % num
def convert_images(list_of_files, issue):
list_of_pdfs = []
for index, file in enumerate(list_of_files):
im = Image.open(file)
outfile = file + '.pdf'
im.save(outfile, 'PDF')
list_of_pdfs.append(outfile)
print 'converting ...' + str((index + 1)) + '/' + str(len(list_of_files))
final_pdf = PdfFileMerger()
for pdf in list_of_pdfs:
final_pdf.append(open(pdf, 'rb'))
issue_name = issue + '.pdf'
final_pdf.write(open(issue_name, 'wb'))
final_pdf.close()
print '--- PDF completed ---'
clean_up(list_of_files, list_of_pdfs)
def download_images(download_list):
for issues,image_list in download_list.items():
print 'Preparing %s ...' % issues
list_of_files = []
for image in image_list:
image_name = os.path.split(image)[1]
list_of_files.append(image_name)
f = open(image_name, 'w')
f.write(urllib2.urlopen(image).read())
print 'Downloading image: %s' % image
f.close()
convert_images(list_of_files, issues)
arg = parser()
extract_links_to_issue(parse_args(arg))
extract_links_to_images(issue_links)
download_images(download_list)
I'd like to fix this, can anyone help me?
You are copying images into a file opened in text mode:
f = open(image_name, 'w')
f.write(urllib2.urlopen(image).read())
On Windows this means that any 0A (newline) bytes are translated to 0D 0A byte sequences (carriage return, newline), as that is the Windows line separator.
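You can see the translation for yourself with a tiny test (demo.bin is a hypothetical file; the translation only happens on Windows):
with open('demo.bin', 'w') as f:  # text mode, as in the question
    f.write('\x89PNG\n')
print repr(open('demo.bin', 'rb').read())  # prints '\x89PNG\r\n' on Windows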
Open your files in binary mode:
f = open(image_name, 'wb')
f.write(urllib2.urlopen(image).read())
I'd switch to using the file as a context manager (with the with statement) so you don't have to manually close it, and using shutil.copyfileobj() to stream the data straight to disk (in blocks) rather than read the whole image into memory in one go:
import shutil
# ...
with open(image_name, 'wb') as f:
shutil.copyfileobj(urllib2.urlopen(image), f)
I have a seemingly impossible conundrum and hope that you guys can help point me in the right direction. I have been coming back to and leaving this project for weeks now, and I think it is about time that I solve it, with your help hopefully.
I am making a script which is supposed to read a bunch of .xls Excel files from a directory structure, parse their contents and load them into a MySQL database. Now, in the main function, a list of (Croatian) file names gets passed to xlrd, and that is where the problem lies.
The environment is an up-to-date FreeBSD 9.1.
I get the following error when executing the script:
mars:~/20130829> python megascript.py
Python version: 2.7.5
Filesystem encoding is: UTF-8
Removing error.log if it exists...
It doesn't.
Done!
Connecting to database...
Done!
MySQL database version: 5.6.13
Loading pilots...
Done!
Loading technicians...
Done!
Loading aircraft registrations...
Done!
Loading file list...
Done!
Processing files...
/2006/1_siječanj.xls
Traceback (most recent call last):
File "megascript.py", line 540, in <module>
main()
File "megascript.py", line 491, in main
data = readxlsfile(files, 'UPIS', piloti, tehnicari, helikopteri)
File "megascript.py", line 129, in readxlsfile
workbook = open_workbook(f)
File "/usr/local/lib/python2.7/site-packages/xlrd-0.9.2-py2.7.egg/xlrd/__init__.py", line 394, in open_workbook
f = open(filename, "rb")
IOError: [Errno 2] No such file or directory: u'/2006/1_sije\u010danj.xls'
I have included the complete output to make the code flow easier to follow.
I suppose the problem is xlrd not accepting a UTF-8 file list. I'm not sure how to get around that without messing around with the xlrd code, though. Any ideas?
Here goes the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os, sys, getopt, codecs, csv, MySQLdb, platform
from mmap import mmap,ACCESS_READ
from xlrd import open_workbook, xldate_as_tuple
# Define constants
NALET_OUT = ''
PUTNICI_OUT = ''
DB_HOST = 'localhost'
DB_USER = 'user'
DB_PASS = 'pass'
DB_DATABASE = 'eth'
START_DIR = u'./'
ERROR_FILE = START_DIR + 'mega_error.log'
# Functions
def isNumber(s):
# Check if a string could be a number
try:
float(s)
return True
except ValueError:
return False
def getMonth(f):
# Extract the month from a file name in the format "1_sijecanj.xls"
temp = os.path.basename(f)
temp = temp.split('_')
mjesec = int(temp[0])
return mjesec
def getYear(f):
# Extract the year from the path
f = f.split('/')
godina = f[-2]
return godina
def databaseVersion(cur):
# Print Mysql database version
try:
cur.execute("SELECT VERSION()")
result = cur.fetchone()
except MySQLdb.Error, e:
try:
print "MySQL Error [%d]: %s]" % (e.args[0], e.args[1])
except IndexError:
print "MySQL Error: %s" % (e.args[0], e.args[1])
print "MySQL database version: %s" % result
def getQuery(cur, sql_query):
# Perform passed query on passed database
try:
cur.execute(sql_query)
result = cur.fetchall()
except MySQLdb.Error, e:
try:
print "MySQL Error [%d]: %s]" % (e.args[0], e.args[1])
except IndexError:
print "MySQL Error: %s" % (e.args[0], e.args[1])
return result
def getFiles():
files = []
# Find subdirectories
for i in [x[0] for x in os.walk(START_DIR)]:
if (i != '.' and isNumber(os.path.basename(i))):
# Find files in subdirectories
for j in [y[2] for y in os.walk(i)]:
# For every file in file list
for y in j:
fn, fe = os.path.splitext(y)
is_mj = fn.split("_")
if(fe == '.xls' and y.find('_') and isNumber(is_mj[0])):
mj = fn.split('_')
files.append(i.lstrip('.') + "/" + y)
# Sort the list chronologically
files.sort(key=lambda x: getMonth(x))
files.sort(key=lambda x: getYear(x))
return files
def errhandle(f, datum, var, vrijednost, ispravka = "NULL"):
# Get error information, print it on screen and write to error.log
f = unicode(str(f), 'utf-8')
datum = unicode(str(datum), 'utf-8')
var = unicode(str(var), 'utf-8')
try:
vrijednost = unicode(str(vrijednost.decode('utf-8')), 'utf-8')
except UnicodeEncodeError:
vrijednost = vrijednost
ispravka = unicode(str(ispravka), 'utf-8')
err_f = codecs.open(ERROR_FILE, 'a+', 'utf-8')
line = f + ": " + datum + " " + var + "='" + vrijednost\
+ "' Ispravka='" + ispravka + "'"
#print "%s" % line
err_f.write(line)
err_f.close()
def readxlsfile(files, sheet, piloti, tehnicari, helikopteri):
# Read xls file and return a list of rows
data = []
nalet = []
putn = []
id_index = 0
# For every file in list
for f in files:
print "%s" % f
temp = f.split('/')
godina = str(temp[-2])
temp = os.path.basename(f).split('_')
mjesec = str(temp[0])
workbook = open_workbook(f)
sheet = workbook.sheet_by_name('UPIS')
# For every row that doesn't contain '' or 'POSADA' or 'dan' etc...
for ri in range(sheet.nrows):
if sheet.cell(ri,1).value!=''\
and sheet.cell(ri,2).value!='POSADA'\
and sheet.cell(ri,1).value!='dan'\
and (sheet.cell(ri,2).value!=''):
temp = sheet.cell(ri, 1).value
temp = temp.split('.')
dan = temp[0]
# Date
datum = "'" + godina + "-" + mjesec + "-" + dan + "'"
# Captain
kapetan = ''
kapi=''
if sheet.cell(ri, 2).value == "":
kapetan = "NULL"
else:
kapetan = sheet.cell(ri, 2).value
if kapetan[-1:] == " ":
errhandle(f, datum, 'kapetan', kapetan, kapetan[-1:])
kapetan = kapetan[:-1]
if(kapetan):
try:
kapi = [x[0] for x in piloti if x[2].lower() == kapetan]
kapi = kapi[0]
except ValueError:
errhandle(f, datum, 'kapetan', kapetan, '')
kapetan = ''
except IndexError:
errhandle(f, datum, 'kapetan', kapetan, '')
kapi = 'NULL'
else:
kapi="NULL"
# Copilot
kopilot = ''
kopi = ''
if sheet.cell(ri, 3).value == "":
kopi = "NULL"
else:
kopilot = sheet.cell(ri, 3).value
if kopilot[-1:] == " ":
errhandle(f, datum,'kopilot', kopilot,\
kopilot[:-1])
if(kopilot):
try:
kopi = [x[0] for x in piloti if x[2].lower() == kopilot]
kopi = kopi[0]
except ValueError:
errhandle(f, datum,'kopilot', kopilot, '')
except IndexError:
errhandle(f, datum, 'kopilot', kopilot, '')
kopi = 'NULL'
else:
kopi="NULL"
# Technician 1
teh1 = ''
t1i = ''
if sheet.cell(ri, 4).value=='':
t1i = 'NULL'
else:
teh1 = sheet.cell(ri, 4).value
if teh1[-1:] == " ":
errhandle(f, datum,'teh1', teh1, teh1[:-1])
teh1 = 'NULL'
if(teh1):
try:
t1i = [x[0] for x in tehnicari if x[2].lower() == teh1]
t1i = t1i[0]
except ValueError:
errhandle(f, datum,'teh1', teh1, '')
except IndexError:
errhandle(f, datum, 'teh1', teh1, '')
t1i = 'NULL'
else:
t1i="NULL"
# Technician 2
teh2=''
t2i=''
if sheet.cell(ri, 5).value=='':
t2i = "NULL"
else:
teh2 = sheet.cell(ri, 5).value
if teh2[-1:] == " ":
errhandle(f, datum,'teh2', teh2, teh2[-1:])
teh2 = ''
if(teh2):
try:
t2i = [x[0] for x in tehnicari if x[2].lower() == teh2]
t2i = t2i[0]
except ValueError:
errhandle(f, datum,'teh2', teh2, 'NULL')
t2i = 'NULL'
except IndexError:
errhandle(f, datum,'teh2', teh2, 'NULL')
t2i = 'NULL'
else:
t2i="NULL"
# Registration mark
oznaka = ''
heli = ''
if sheet.cell(ri, 6).value=="":
oznaka = errhandle(f, datum, "helikopter", oznaka, "")
else:
oznaka = str(int(sheet.cell(ri, 6).value))
try:
heli = [x[0] for x in helikopteri if x[0] == oznaka]
except ValueError:
errhandle(f, datum, 'helikopter', oznaka, '')
except IndexError:
errhandle(f, datum, 'helikopter', oznaka, '')
heli = ''
# Conditions
uvjeti = sheet.cell(ri, 9).value
# Number of flights
letova_dan = 0
letova_noc = 0
letova_ifr = 0
letova_sim = 0
if sheet.cell(ri, 7).value == "":
errhandle(f, datum, 'letova', '', '')  # cell is empty; letova is not set yet
else:
letova = str(int(sheet.cell(ri, 7).value))
if uvjeti=="vfr":
letova_dan = letova
elif uvjeti=="ifr":
letova_ifr = letova
elif uvjeti=="sim":
letova_sim = letova
else:
letova_noc = letova
#Block time
bt_dan = "'00:00:00'"
bt_noc = "'00:00:00'"
bt_ifr = "'00:00:00'"
bt_sim = "'00:00:00'"
try:
bt_tpl = xldate_as_tuple(sheet.cell(ri, 8).value, workbook.datemode)
bt_m = bt_tpl[4]
bt_h = bt_tpl[3]
bt = "'" + str(bt_h).zfill(2)+":"+str(bt_m)+":00'"
except (ValueError, IndexError):
errhandle(f, datum, 'bt', sheet.cell(ri,8).value, '')
if uvjeti[:3]=="vfr":
bt_dan = bt
elif uvjeti[:3]=="ifr":
bt_ifr = bt
elif uvjeti[:3]=="sim":
bt_sim = bt
elif uvjeti[:2] == "no":
bt_noc = bt
else:
errhandle(f, datum, 'uvjeti', uvjeti, '')
# Flight type
vrsta = "'" + sheet.cell(ri, 10).value + "'"
# Exercise
vjezba = 'NULL';
try:
vjezba = sheet.cell(ri, 11).value
if vjezba == '':
# Too many results
#errhandle(f, datum, 'vjezba', vjezba, '')
vjezba = 'NULL'
if vjezba == "?":
errhandle(f, datum, 'vjezba', str(vjezba), '')
vjezba = 'NULL'
if str(vjezba) == 'i':
errhandle(f, datum, 'vjezba', str(vjezba), '')
vjezba = 'NULL'
if str(vjezba)[-1:] == 'i':
errhandle(f, datum, 'vjezba', str(vjezba),\
str(vjezba).rstrip('i'))
vjezba = str(vjezba).rstrip('i')
if str(vjezba).find(' i ') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split(' i ')[0])
vjezba = str(vjezba).split(' i ')
vjezba = vjezba[0]
if str(vjezba)[-1:] == 'm':
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).rstrip('m'))
vjezba = str(vjezba).rstrip('m')
if str(vjezba).find(';') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split(';')[0])
temp = str(vjezba).split(';')
vjezba = temp[0]
if str(vjezba).find('/') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split('/')[0])
temp = str(vjezba).split('/')
vjezba = temp[0]
if str(vjezba).find('-') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split('-')[0])
temp = str(vjezba).split('-')
vjezba = temp[0]
if str(vjezba).find(',') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split(',')[0])
temp = str(vjezba).split(',')
vjezba = temp[0]
if str(vjezba).find('_') != -1:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split('_')[0])
temp = str(vjezba).split('_')
vjezba = temp[0]
if str(vjezba) == 'bo':
errhandle(f, datum, 'vjezba', str(vjezba), '')
vjezba = 'NULL'
if str(vjezba).find(' ') != -1:
if str(vjezba) == 'pp 300':
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split(' ')[1])
temp = str(vjezba).split(' ')
vjezba = temp[1]
else:
errhandle(f, datum, 'vjezba', str(vjezba), str(vjezba).split(' ')[0])
temp = str(vjezba).split(' ')
vjezba = temp[0]
if str(vjezba) == 'pp':
errhandle(f, datum, 'vjezba', str(vjezba), '')
vjezba = ''
except UnicodeEncodeError:
errhandle(f, datum, 'Unicode error! vjezba', vjezba, '')
if vjezba != 'NULL':
vjezba = int(float(vjezba))
# Altitude landings
# Passengers
vp1 = str(sheet.cell(ri, 12).value)
bp1 = str(sheet.cell(ri, 13).value)
vp2 = str(sheet.cell(ri, 14).value)
bp2 = str(sheet.cell(ri, 15).value)
# Cargo
teret = ''
teret = str(sheet.cell(ri, 16).value)
if teret == '':
teret = 0
# Baja
baja = ''
if sheet.cell(ri, 17).value == '':
baja = 0
else:
baja = int(sheet.cell(ri, 17).value) / 2 # divided by 2 to get tonnes
# CSV column order
id_index = id_index + 1
row = [id_index, datum, kapi, kopi, t1i, t2i, oznaka,\
letova, letova_dan, letova_noc, letova_ifr,\
letova_sim, bt, bt_dan, bt_noc, bt_ifr,\
bt_sim, vrsta, vjezba, teret, baja]
row = [str(i) for i in row]
nalet.append(row)
putn = []
if bp1 != '':
put = [id_index, vp1, bp1]
putn.append(put)
if bp2 != '':
put = [id_index, vp2, bp2]
putn.append(put)
data.append(nalet)
data.append(putn)
return data
def main():
# Python version
print "\nPython version: %s \n" % platform.python_version()
# Print filesystem encoding
print "Filesstem encoding is: %s" % sys.getfilesystemencoding()
# Remove error file if exists
print "Removing error.log if it exists..."
try:
os.remove(ERROR_FILE)
print "It did."
except OSError:
print "It doesn't."
pass
print "Done!"
# Connect to database
print "Connecting to database..."
db = MySQLdb.connect(DB_HOST, DB_USER, DB_PASS, DB_DATABASE,\
use_unicode=True, charset='utf8')
cur=db.cursor()
print "Done!"
# Database version
databaseVersion(cur)
# Load pilots, technicians and helicopters from the db
print "Loading pilots..."
sql_query = "SELECT eth_osobnici.id, eth_osobnici.ime,\
eth_osobnici.prezime FROM eth_osobnici RIGHT JOIN \
eth_letacka_osposobljenja ON eth_osobnici.id=\
eth_letacka_osposobljenja.id_osobnik WHERE \
eth_letacka_osposobljenja.vrsta_osposobljenja='kapetan' \
OR eth_letacka_osposobljenja.vrsta_osposobljenja='kopilot'"
#piloti = []
#piloti = getQuery(cur, sql_query)
piloti=[]
temp = []
temp = getQuery(cur, sql_query)
for row in temp:
piloti.append(row)
print "Done!"
print "Loading tehnicians..."
sql_query = "SELECT eth_osobnici.id, eth_osobnici.ime,\
eth_osobnici.prezime FROM eth_osobnici RIGHT JOIN \
eth_letacka_osposobljenja ON eth_osobnici.id=\
eth_letacka_osposobljenja.id_osobnik WHERE \
eth_letacka_osposobljenja.vrsta_osposobljenja='tehničar 1' \
OR eth_letacka_osposobljenja.vrsta_osposobljenja='tehničar 2'"
tehnicari=[]
temp = []
temp = getQuery(cur, sql_query)
for row in temp:
tehnicari.append(row)
print "Done!"
print "Loading aircraft registrations..."
sql_query = "SELECT id FROM eth_helikopteri"
helikopteri=[]
temp = []
temp = getQuery(cur, sql_query)
for row in temp:
helikopteri.append(row)
print "Done!"
# Get file names to process
print "Loading file list..."
files = getFiles()
print "Done!"
# Process all files from array
print "Processing files..."
data = readxlsfile(files, 'UPIS', piloti, tehnicari, helikopteri)
print "Done!"
# Enter new information in database
result = 0
print "Reseting database..."
sql_query = "DELETE FROM eth_nalet"
cur.execute(sql_query)
db.commit()
sql_query = "ALTER TABLE eth_nalet AUTO_INCREMENT=0"
cur.execute(sql_query)
db.commit()
print "Done!"
print "Loading data in 'eth_nalet'..."
for row in data[0]:
sql_query = """INSERT INTO eth_nalet (id, datum, kapetan,
kopilot, teh1, teh2, registracija, letova_uk, letova_dan,
letova_noc, letova_ifr, letova_sim, block_time, block_time_dan,
block_time_noc, block_time_ifr, block_time_sim, vrsta_leta,
vjezba, teret, baja) VALUES (%s)""" % (", ".join(row))
cur.execute(sql_query)
db.commit()
print "Done!"
print "Loading data in 'eth_putnici'..."
for row in data[1]:
sql_query = """INSERT INTO eth_putnici (id_leta,
vrsta_putnika, broj_putnika) VALUES (%s)""" % (", ".join(row))
cur.execute(sql_query)
db.commit()
print "Done!"
# Close the database connection
print "Closing database connection..."
if cur:
cur.close()
if db:
db.close()
print "Database closed!"
if __name__ == '__main__':
main()
The comments in the code were originally written in Croatian; it was an old project of mine and I tend to write comments in English now. If something needs explanation, please fire away.
The funny thing is that if I print the file list to the screen, the names display just fine. But when they get passed to xlrd they don't seem to be in the right format.
Respectfully,
me
I finally managed to find the error! It wasn't due to an encoding error after all. It was a logic error.
In the function getFiles() I stripped the leading "." from the file list, and didn't strip "./" as I ought to. So, naturally, the file names were "/2006/1_siječanj.xls" instead of "2006/1_siječanj.xls" as they should have been. It was an IOError and not a UnicodeEncodeError, and the result of my oversight was that the script tried to open an absolute path instead of a relative path. A sketch of the fix follows.
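For reference, one way to express the fix in getFiles() is to build the relative path with os.path.join instead of stripping characters:
# before: files.append(i.lstrip('.') + "/" + y)   -> '/2006/1_sijecanj.xls'
files.append(os.path.join(i, y))  # -> './2006/1_sijecanj.xls'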
Well, this was embarrassing. Thank you guys; I hope this post helps someone else pay more attention to the error types Python throws at us.
It looks like xlrd isn't converting the Unicode type to a local encoded type before trying to open the file. Python has guessed that the filesystem name encoding is UTF-8 and has correctly converted the č to the correct Unicode point.
There's two ways to fix this:
Try encoding the Unicode filename before asking xlrd to open it with:
workbook = open_workbook(f.encode(sys.getfilesystemencoding()))
Use raw 8bit filenames and don't convert filenames to Unicode
START_DIR = './'
IMHO, option 2 is probably safer, in case filenames haven't actually been written with UTF-8 names.
UPD
Note, os.walk returns Unicode strings when the given path is a Unicode string. A normal string path will return binary strings. This is the same behaviour as os.listdir (http://docs.python.org/2/library/os.html#os.listdir).
Example:
$ ls
€.txt
$ python
>>> import os
>>> os.listdir(".")
['\xe2\x82\xac.txt']
>>> os.listdir(u".")
[u'\u20ac.txt']
(e2 82 ac is the UTF-8 encoding of €)
Remember: In Unix, unlike Windows, filenames do not contain encoding hints. Filenames are simply 8bit strings. You need to know what encoding they were created with if you want to convert them to a different encoding.
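For example, converting a raw 8-bit name known to be UTF-8 into a Unicode string and back:
raw_name = '\xe2\x82\xac.txt'               # 8-bit bytes as stored on disk
unicode_name = raw_name.decode('utf-8')     # u'\u20ac.txt'
bytes_again = unicode_name.encode('utf-8')  # back to the on-disk form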
I am trying to grab the ffprobe values from a video file into a variable that I can compare against others or move into a database. The question I have: is there a better way of doing it than below?
I don't like the multiple if/elif/line.startswith statements, and I am not sure that split is the best way of getting the ffprobe values.
#!/usr/bin/python
import os, sys, subprocess, shlex, re, fnmatch
from subprocess import call
videoDrop_dir="/mnt/VoigtKampff/Temp/_Jonatha/test_drop"
for r,d,f in os.walk(videoDrop_dir):
for files in f:
print "Files: %s" % files
if files.startswith(('._', '.')):
print "This file: %s is not valid" % files
elif files.endswith(('.mov', '.mpg', '.mp4', '.wmv', '.mxf')):
fpath = os.path.join(r, files)
def probe_file(fpath):
cmnd = ['ffprobe', '-show_format', '-show_streams', '-pretty', '-loglevel', 'quiet', fpath]
p = subprocess.Popen(cmnd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print files
out, err = p.communicate()
print "===============================OUTPUT START: %s ===============================" % files
print out
for line in out.split('\n'):
line = line.strip()
if line.startswith('codec_name='):
s = line
codec_name = s.split('codec_name=', 1)
print "Codec is: %s" % codec_name[1]
codec_1 = codec_name[1]
elif line.startswith('codec_type='):
s = line
codec_type = s.split('codec_type=', 1)
print "Codec type is: %s" % codec_type[1]
codec_type1 = codec_type[1]
elif line.startswith('codec_long_name='):
s = line
codec_long_name = s.split('codec_long_name=', 1)
print "Codec long name: %s" % codec_long_name[1]
codec_long_name = codec_long_name[1]
elif line.startswith('format_long_name='):
s = line
format_long_name = s.split('format_long_name=', 1)
print "Format long name: %s" % format_long_name[1]
format_long_name = format_long_name[1]
elif line.startswith('width='):
s = line
width = s.split('width=', 1)
print "Video pixel width is: %s" % width[1]
p_width = width[1]
elif line.startswith('height='):
s = line
height = s.split('height=', 1)
print "Video pixel height is: %s" % height[1]
p_height = height[1]
elif line.startswith('bit_rate='):
s = line
bit_rate = s.split('bit_rate=', 1)
print "Bit rate is: %s" % bit_rate[1]
bit_rate1 = bit_rate[1]
elif line.startswith('display_aspect_ratio='):
s = line
display_aspect_ratio = s.split('display_aspect_ratio=', 1)
print "Display aspect ratio: %s" % display_aspect_ratio[1]
display_aspect_ratio1 = display_aspect_ratio[1]
elif line.startswith('avg_frame_rate='):
s = line
avg_frame_rate = s.split('avg_frame_rate=', 1)
print "Average Frame Rate: %s" % avg_frame_rate[1]
avg_frame_rate1 = avg_frame_rate[1]
print "===============================OUTPUT FINISH: %s ===============================" % files
if err:
print "===============================ERROR: %s ===============================" % files
print err
probe_file(fpath)
else:
if not files.endswith(('.mov', '.mpg', '.mp4', '.wmv', '.mxf')):
print "This file: %s is not a valid video file" % files
This is a bit late, but hopefully it helps others searching for a similar answer.
import json, subprocess
# grab info about the video file; v is assumed to hold its path
ffprobe_cmd = '/home/ubuntu/bin/ffprobe -v quiet -print_format json -show_format -show_streams -i ' + v + ' 2>&1'
# print ffprobe_cmd
s = subprocess.Popen(ffprobe_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
ffprobe_out, err = s.communicate()
ffprobe_dict = json.loads(ffprobe_out)
From here, I re-use a common method, search_dict, which can be used like:
search_dict(ffprobe_dict, 'height')
def search_dict(my_dict, field):
"""Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided.
"""
fields_found = []
for key, value in my_dict.iteritems():
if key == field:
fields_found.append(value)
elif isinstance(value, dict):
results = search_dict(value, field)
for result in results:
fields_found.append(result)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
more_results = search_dict(item, field)
for another_result in more_results:
fields_found.append(another_result)
return fields_found
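For example, pulling the resolution out of the ffprobe_dict built above:
heights = search_dict(ffprobe_dict, 'height')
widths = search_dict(ffprobe_dict, 'width')
print 'resolution: %sx%s' % (widths[0], heights[0])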
You should ask this question on https://codereview.stackexchange.com/