How do I skip values with no content? - python

How do I store the values with content into strings?
I know there has to be a much cleaner and more efficient way of doing this, but I am currently struggling to find it. I would appreciate a fresh set of eyes on this, since I must be missing something; I have spent an outlandish amount of time on it.
My objective is:
Check if sheet.values has content -> if so, store as a string
Check if sheet.values has content -> if not, skip or create no string
The complication is that sheet.values can contain an undetermined amount of content that needs to be identified: it may be filled in up to index [9] in one instance but only up to [6] in another, so the code needs to account for this.
The sheet.values entries also have to end up as strings, because I pass them to makedirs() later in the code (that part gets a bit temperamental too, so it also needs work if you can help).
I know a for loop should be able to help me, but I just have not found the right one yet.
import os
import pandas as pd
from openpyxl import load_workbook
from pandas.core.indexes.base import Index
os.chdir("C:\\Users\\NAME\\desktop")
workbook = pd.ExcelFile('Example.xlsx')
sheet = workbook.parse('Sheet1')
print (sheet.values[0])
os.getcwd()
path = os.getcwd()
for input in sheet.values:
    if any(sheet.values):
        if input == None:
            break
        else:
            if any(sheet.values):
                sheet.values == input
                set
str1 = '1'.join(sheet.values[0])
str2 = '2'.join(sheet.values[1])
str3 = '3'.join(sheet.values[2])
str4 = '4'.join(sheet.values[3])
str5 = '5'.join(sheet.values[4])
str6 = '6'.join(sheet.values[5])
str7 = '7'.join(sheet.values[6])
str8 = '8'.join(sheet.values[7])
str9 = '9'.join(sheet.values[8])
str10 = '10'.join(sheet.values[9])
str11 = '11'.join(sheet.values[10])
str12 = '12'.join(sheet.values[11])
str13 = '13'.join(sheet.values[12])
str14 = '14'.join(sheet.values[13])
str15 = '15'.join(sheet.values[14])
str16 = '16'.join(sheet.values[15])
str17 = '17'.join(sheet.values[16])
str18 = '18'.join(sheet.values[17])
str19 = '19'.join(sheet.values[18])
str20 = '20'.join(sheet.values[19])
str21 = '21'.join(sheet.values[20])
########################ONE################################################
try:
    if not os.path.exists(str1):
        os.makedirs(str1)
except OSError:
    print ("Creation of the directory %s failed" % str1)
else:
    print ("Successfully created the directory %s " % str1)
########################TWO################################################
try:
    if not os.path.exists(str2):
        os.makedirs(str2)
except OSError:
    print ("Creation of the directory %s failed" % str2)
else:
    print ("Successfully created the directory %s " % str2)
########################THREE################################################
try:
    if not os.path.exists(str3):
        os.makedirs(str3)
except OSError:
    print ("Creation of the directory %s failed" % str3)
else:
    print ("Successfully created the directory %s " % str3)
########################FOUR################################################
try:
    if not os.path.exists(str4):
        os.makedirs(str4)
except OSError:
    print ("Creation of the directory %s failed" % str4)
else:
    print ("Successfully created the directory %s " % str4)
Note: the makedirs() blocks continue like this down to the full set of strings.
The Excel document shows the following: (screenshot omitted)
This script results in: "index 9 is out of bounds for axis 0 with size 9"
This is truthfully expected, as sheet.values only contains that many rows.
Can anyone help me? I know it is messy.
Updated Code
import os
import pandas as pd
from openpyxl import load_workbook
from pandas.core.indexes.base import Index
os.chdir("C:\\Users\\NAME\\desktop")
workbook = pd.ExcelFile('Example.xlsx')
sheet = workbook.parse('Sheet1')
print (sheet.values[0])
os.getcwd()
path = os.getcwd()
print ("The current working Directory is %s" % path)
for col in sheet.values:
    for row in range(len(col)):
        dir_name = str(row + 1) + col[row]
        try:
            os.makedirs(dir_name, exist_ok=True)
        except OSError:
            print ("Creation of the directory %s failed" % dir_name)
        else:
            print ("Successfully created the directory %s " % dir_name)

It seems like you're trying to read the first column of a CSV and create directories based on the values.
import csv
import os

with open(mypath + file) as file_name:
    file_read = csv.reader(file_name)
    file = list(file_read)

for col in file:
    for row in range(len(col)):
        dir_name = str(row + 1) + col[row]
        try:
            # https://docs.python.org/3/library/os.html#os.makedirs
            os.makedirs(dir_name, exist_ok=True)
        except OSError:
            print ("Creation of the directory %s failed" % dir_name)
        else:
            print ("Successfully created the directory %s " % dir_name)
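Since the question already uses pandas, another way to skip the cells with no content is to drop the NaN values before looping; this handles a column filled to index [9] in one file and only [6] in another without counting anything. A minimal sketch, assuming a single column; the column name "names" and the sample data are my own stand-ins, not the poster's spreadsheet:

```python
import pandas as pd

# Hypothetical data standing in for sheet.values; None marks cells with no content.
sheet = pd.DataFrame({"names": ["alpha", None, "beta", "gamma", None]})

# dropna() skips rows with no content; astype(str) guarantees strings
# for the later makedirs() call, however many cells are filled in.
dir_names = sheet["names"].dropna().astype(str).tolist()

for i, name in enumerate(dir_names, start=1):
    dir_name = str(i) + name  # mirrors the str(row + 1) + col[row] pattern
    print(dir_name)
```

The resulting dir_names list is exactly the set of non-empty cells, so no index ever runs out of bounds.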

Related

How to convert a set of osm files to shape files using ogr2ogr in python

I strongly believe this question has already been asked, but I can't find the answer, so I am placing it before you. I am having a problem running a script to convert OSM files to SHP files. The script reads all the OSM files but only creates one SHP file, for the first OSM file, instead of converting all of them. I am providing the code I used below. Please kindly help me resolve this.
from xml.dom import minidom
import os, sys
import xml.etree.ElementTree as ET

### path to gdal-data: C:\Program Files (x86)\PostgreSQL\9.4\gdal-data
path = r"C:\Users\Administrator\Desktop\CHECKING\T2"
systemOutput = 'Shp'

print ("\n#### Execute python NY_osm2shapes")
print ("#### MONITORING CITIES")
print ("#### Conversor osm to shapes")
print ("#### OSM Path: " + path)
print "#### "
"""
Modify
Win: C:/Program Files/GDAL/gdal-data/osmconfig.ini
Linux: /usr/share/gdal/1.11/osmconfig.ini
report_all_ways=yes #activate lines without tag
attributes=landuse, plots #inside [lines]
attributes=landuse, plots #inside [multipolygons]
"""

### Check if path from argv
try:
    if len(sys.argv) >= 2:
        print("#### Path from argv: ", sys.argv[1])
        path = sys.argv[1]
    else:
        print "#### Path set to", path
        sys.exit()
except:
    pass

#### Ogr config
print "\n#### Process: osm to shapes"
ogrOutputType = ''    #-f "Esri Shapefile"'
ogrProjection = ''    # -t_srs EPSG:4326' #+ epsg
ogrProjectionA = ''   #-a_srs EPSG:3827'
ogrProjectionIn = ''  #-s_srs EPSG:3827' #-t_srs EPSG:4326
ogrConfigType = ' --config OSM_USE_CUSTOM_INDEXING NO'
ogr2ogr = 'ogr2ogr %s %s %s %s %s %s -overwrite %s %s %s %s layer %s'

### Process
for l in os.walk(path):
    archivos = l[2]
    ruta = l[0]

for a in archivos:
    if a.endswith(".osm"):
        osmFile = os.path.join(ruta, a)
        folder = os.path.join(ruta, systemOutput)
        shapeFile = a[:-4]
        ogrFileOutput = " -nln " + shapeFile
        print "Archivo Shape: ", shapeFile,
        layerType = shapeFile[-1]
        if layerType == "0":
            print "\t TIPO 0: Circles"
            ogrSelectLayer = "lines"
            ogrLcoType = ' -lco SHPT=ARC'
            ogrSelect = ' -select ID_string'
        elif layerType == "1":
            print "\t TIPO 1: Blocks"
            ogrSelectLayer = "lines"
            ogrLcoType = ' -lco SHPT=ARC'
            ogrSelect = ' -select Land_use'
        elif layerType == "2":
            print "\t TIPO 2: Plots"
            ogrSelectLayer = "lines"
            ogrLcoType = ' -lco SHPT=ARC'
            ogrSelect = ' -select Plot'
        elif layerType == "3":
            print "\t TIPO 3: Medians"
            ogrSelectLayer = "lines"
            ogrLcoType = ' -lco SHPT=ARC'
            ogrSelect = ' -select ID_string'
        else:
            print "ELSE ERROR*"
        systemOutput = ogr2ogr % (ogrOutputType, folder, osmFile, ogrProjectionA, ogrProjectionIn, ogrProjection, ogrFileOutput, ogrLcoType, ogrConfigType, ogrSelect, ogrSelectLayer)
        #print ("Fichero: ", osmFile, shapeFile, layerType, ogrSelectLayer)
        os.system(systemOutput)

print "End process"
The way you used os.walk leaves archivos holding only the files found in the last ruta of the tree traversal. That is probably (at least part of) your problem, or it may become so in the future.
You have to use os.walk differently:
import os, re

ext_regx = r'\.osm$'
archivos = []
for ruta, dirs, archs in os.walk(path):
    for arch in archs:
        if re.search(ext_regx, arch):
            archivos.append(os.path.join(ruta, arch))

for osmFile in archivos:
    print(osmFile)
    ...
Now if the code inside the for loop does not do what you mean to, that is another issue.
I suggest you:
Add print( systemOutput ) to check that each command executed is what you intend it to be.
Check that the files and dirs referred to in that command are correct.
PS: each item in archivos will already contain the dir part, so you have to split the folder part, instead of joining.
PS2: you might need to use double backslashes for dirs. Also, bear in mind os.sep.
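As a variant of the loop above, os.path.splitext avoids the regex, and os.path.split separates the directory part back out instead of joining it again. The file names and the "Shp" output folder below are invented for illustration:

```python
import os

# Hypothetical list, standing in for what os.walk would collect.
archivos = [
    os.path.join("T2", "zone1", "block_1.osm"),
    os.path.join("T2", "zone2", "plot_2.osm"),
    os.path.join("T2", "zone2", "readme.txt"),
]

# Keep only the .osm files, comparing the extension exactly.
osm_files = [f for f in archivos if os.path.splitext(f)[1] == ".osm"]

for osmFile in osm_files:
    # Each item already contains the directory, so split it off
    # rather than joining it on again.
    ruta, nombre = os.path.split(osmFile)
    folder = os.path.join(ruta, "Shp")  # output folder next to the source
    print(nombre, "->", folder)
```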

Python tarfile gzipped file bigger than sum of source files

I have a Python routine which archives file recordings into a GZipped tarball. The output file appears to be far larger than the source files, and I cannot work out why. As an example of the scale of the issue, 6GB of call recordings are generating an archive of 10GB.
There appear to be no errors in the script and the output .gz file is readable and appears OK apart from the huge size.
Excerpt from my script as follows:
# construct tar filename and open file
client_fileid = client_id + "_" + dt.datetime.now().strftime("%Y%m%d_%H%M%S")
tarname = tar_path + "/" + client_fileid + ".tar.gz"
print "Opening tar file %s " % (tarname), "\n"
try:
    tar = tarfile.open(tarname, "w:gz")
except:
    print "Error opening tar file: %s" % sys.exc_info()[0]

sql = """SELECT number, er.id, e.id, flow, filename, filesize, unread, er.cr_date, callerid,
         length, callid, info, party FROM extension_recording er, extension e, client c
         WHERE er.extension_id = e.id AND e.client_id = c.id AND c.parent_client_id = %s
         AND DATE(er.cr_date) BETWEEN '%s' AND '%s'""" % (client_id, start_date, end_date)
rows = cur.execute(sql)
recordings = cur.fetchall()
if rows == 0: sys.exit("No recordings for selected date range - exiting")

for recording in recordings:  # loop through recordings cursor
    try:
        ext_len = len(str(recording[0]))
        # add preceding zeroes if the ext no starts with 0 or 00
        if ext_len == 2: extension_no = "0" + str(recording[0])
        elif ext_len == 1: extension_no = "00" + str(recording[0])
        else: extension_no = str(recording[0])
        filename = recording[4]
        extended_no = client_id + "*%s" % (extension_no)
        sourcedir = recording_path + "/" + extended_no
        tardir = extended_no + "/" + filename
        complete_name = sourcedir + "/" + filename
        tar.add(complete_name, arcname=tardir)  # add to tar archive
    except:
        print "Error '%s' writing to tar file %s" % (sys.exc_info()[1], csvfullfilename)

list is coming as blank in python

I am trying to use "dir_path" and "prefix_pattern" from a config file.
I get correct results in the vdir2 and vprefix2 variables, but the list local_file_list is still empty.
result
vdir2 is"/home/ab_meta/abfiles/"
vprefix2 is "rp_pck."
[]
code
def get_files(self):
    try:
        print "vdir2 is" + os.environ['dir_path']
        print "vprefix2 is " + os.environ['prefix_pattern']
        local_file_list = filter(os.path.isfile,
                                 glob.glob(os.environ['dir_path'] + os.environ['prefix_pattern'] + "*"))
        print local_file_list
        local_file_list.sort(key=lambda s: os.path.getmtime(os.path.join(os.environ['dir_path'], s)))
    except Exception, e:
        print e
        self.m_logger.error("Exception: Process threw an exception " + str(e))
        log.sendlog("error", 50)
        sys.exit(1)
    return local_file_list
I have tried another way, given below, but again the list comes back empty.
2nd option:
def get_config(self):
    try:
        v_dir_path = os.environ['dir_path']
        v_mail_prefix = os.environ['mail_prefix']
        self.m_dir_path = v_dir_path
        self.m_prefix_pattern = v_prefix_pattern
        self.m_mail_prefix = v_mail_prefix
    except KeyError, key:
        self.m_logger.error("ERROR: Unable to retrieve the key " + str(key))
    except Exception, e:
        print e
        self.m_logger.error("Error: job_prefix Unable to get variables " + str(e))
        sys.exit(1)

def get_files(self):
    try:
        local_file_list = filter(os.path.isfile,
                                 glob.glob(self.m_dir_path + self.m_prefix_pattern + "*"))
        local_file_list.sort(key=lambda s: os.path.getmtime(os.path.join(os.environ['dir_path'], s)))
    except Exception, e:
        print e
Thanks
Sandy
Outside of this program, wherever you set the environment variables, you are setting them incorrectly: they have quote characters in them.
Set your environment variables to hold the path data, but no quotes.
Assign the environment variable and then pass the path you are interested in into the function; accessing global state from within a function can make it hard to follow and debug.
Use os.walk to get the list of files: it yields a tuple of the root dir, a list of dirs, and a list of files. To me that is cleaner than filtering with os.path.isfile.
Use a list comprehension to filter the list of files returned by os.walk.
I'm presuming the print statements are for debugging, so I left them out.
vdir2 = os.environ['dir_path']
vprefix2 = os.environ['prefix_pattern']

def get_files(vpath):
    for root, dirs, files in os.walk(vpath):
        local_file_list = [f for f in files if f.startswith(vprefix2)]
        # join the root back on so getmtime sees a valid path
        local_file_list.sort(key=lambda x: os.path.getmtime(os.path.join(root, x)))
    return local_file_list
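For comparison, the original glob approach also works once the environment variables contain no quote characters. A self-contained sketch; the temporary directory and the rp_pck. file names are invented for illustration:

```python
import glob
import os
import tempfile

# Hypothetical stand-ins for the config values -- note: no quote characters.
tmp = tempfile.mkdtemp()
for name in ("rp_pck.001", "rp_pck.002", "other.txt"):
    with open(os.path.join(tmp, name), "w") as f:
        f.write("x")

dir_path = tmp + os.sep
prefix_pattern = "rp_pck."

# glob already returns full paths, so getmtime can use them directly.
local_file_list = [p for p in glob.glob(dir_path + prefix_pattern + "*")
                   if os.path.isfile(p)]
local_file_list.sort(key=os.path.getmtime)
print([os.path.basename(p) for p in local_file_list])
```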

Python - Is this code lacking List Comprehensions and Generators [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
This is my first question, and I apologize if it's a bit long on the code-example side.
As part of a job application I was asked to write a Bit Torrent file parser that exposed some of the fields. I did the code, and was told my code was "not quite at the level that we require from a team lead". Ouch!
That's fine; it's been years since I coded, and list comprehensions and generators did not exist back in the day (I started with COBOL, but have coded in C, C++, etc.). To me the code below is very clean. Sometimes there is no need for more complex structures, syntax, or patterns: "keep it simple".
Could I ask some Python gurus to critique this code, please? I believe it is useful for others to see where the code could be improved. There were more comments, etc. (bencode.py is from http://wiki.theory.org/Decoding_bencoded_data_with_python ).
The areas I can think of:
in the display_* methods, using list comprehensions to avoid the string of "if"s
better list comprehension / generator usage
bad use of globals
stdin/stdout/piping? This was a simple assignment, so I thought it was not necessary.
I was personally proud of this code, so would like to know where I need to improve. Thanks.
#!/usr/bin/env python2
"""Bit Torrent Parsing

Parses a Bit Torrent file.

A basic parser for Bit Torrent files. Visit
http://wiki.theory.org/BitTorrentSpecification for the BitTorrent
specification.
"""
__author__ = "...."
__version__ = "$Revision: 1.0 $"
__date__ = "$Date: 2012/10/26 11:08:46 $"
__copyright__ = "Enjoy & Distribute"
__license__ = "Python"

import bencode
import argparse
from argparse import RawTextHelpFormatter
import binascii
import time
import os
import pprint

torrent_files = 0
torrent_pieces = 0


def display_root(filename, root):
    """prints main (root) information on torrent"""
    global torrent_files
    global torrent_pieces
    print
    print "Bit Torrent Metafile Structure root nodes:"
    print "------------------------------------------"
    print "Torrent filename: ", filename
    print "  Info: %d file(s), %d pieces, ~%d kb/pc" % (
        torrent_files,
        torrent_pieces,
        root['info']['piece length'] / 1024)
    if 'private' in root['info']:
        if root['info']['private'] == 1:
            print "  Publish presence: Private"
    print "  Announce: ", root['announce']
    if 'announce-list' in root:
        print "  Announce List: "
        for i in root['announce-list']:
            print "    ", i[0]
    if 'creation date' in root:
        print "  Creation Date: ", time.ctime(root['creation date'])
    if 'comment' in root:
        print "  Comment: ", root['comment']
    if 'created-by' in root:
        print "  Created-By: ", root['created-by']
    print "  Encoding: ", root['encoding']
    print


def display_torrent_file(info):
    """prints file information (single or multifile)"""
    global torrent_files
    global torrent_pieces
    if 'files' in info:
        # multipart file mode
        # directory, followed by filenames
        print "Files:"
        max_files = args.maxfiles
        display = max_files if (max_files < torrent_files) else torrent_files
        print "  %d File %d shown: " % (torrent_files, display)
        print "  Directory: ", info['name']
        print "  Filenames:"
        i = 0
        for files in info['files']:
            if i < max_files:
                prefix = ''
                if len(files['path']) > 1:
                    prefix = './'
                filename = prefix + '/'.join(files['path'])
                if args.filehash:
                    if 'md5sum' in files:
                        md5hash = binascii.hexlify(files['md5sum'])
                    else:
                        md5hash = 'n/a'
                    print '    %s [hash: %s]' % (filename, md5hash)
                else:
                    print '    %s ' % filename
                i += 1
            else:
                break
    else:
        # single file mode
        print "Filename: ", info['name']
    print


def display_pieces(pieceDict):
    """prints SHA1 hash for pieces, limited by arg pieces"""
    global torrent_files
    global torrent_pieces
    # global pieceDict
    # limit since a torrent file can have 1,000's of pieces
    max_pieces = args.pieces if args.pieces else 10
    print "Pieces:"
    print "  Torrent contains %s pieces, %d shown." % (
        torrent_pieces, max_pieces)
    print "  piece : sha1"
    i = 0
    while i < max_pieces and i < torrent_pieces:
        # print SHA1 hash in readable hex format
        print '  %5d : %s' % (i + 1, binascii.hexlify(pieceDict[i]))
        i += 1


def parse_pieces(root):
    """create dictionary [ piece-num, hash ] from info's pieces

    Returns the pieces dictionary. key is the piece number, value is the
    SHA1 hash value (20-bytes)

    Keyword arguments:
    root -- a Bit Torrent Metafile root dictionary
    """
    global torrent_pieces
    pieceDict = {}
    i = 0
    while i < torrent_pieces:
        pieceDict[i] = root['info']['pieces'][(20 * i):(20 * i) + 20]
        i += 1
    return pieceDict


def parse_root_str(root_str):
    """create dictionary [ piece-num, hash ] from info's pieces

    Returns the complete Bit Torrent Metafile Structure dictionary with
    relevant Bit Torrent Metafile nodes and their values.

    Keyword arguments:
    root_str -- a UTF-8 encoded string with root-level nodes (e.g., info)
    """
    global torrent_files
    global torrent_pieces
    try:
        torrent_root = bencode.decode(root_str)
    except StandardError:
        print 'Error in torrent file, likely missing separators like ":"'
    if 'files' in torrent_root['info']:
        torrent_files = len(torrent_root['info']['files'])
    else:
        torrent_files = 1
    torrent_pieces = len(torrent_root['info']['pieces']) / 20
    torrent_piece = parse_pieces(torrent_root)
    return torrent_root, torrent_piece


def readfile(filename):
    """read file and return file's data"""
    global torrent_files
    global torrent_pieces
    if os.path.exists(filename):
        with open(filename, mode='rb') as f:
            filedata = f.read()
    else:
        print "Error: filename: '%s' does not exist." % filename
        raise IOError("Filename not found.")
    return filedata


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        formatter_class=RawTextHelpFormatter,
        description=
            "A basic parser for Bit Torrent files. Visit "
            "http://wiki.theory.org/BitTorrentSpecification for "
            "the BitTorrent specification.",
        epilog=
            "The keys for the Bit Torrent MetaInfo File Structure "
            "are info, announce, announce-list, creation date, comment, "
            "created by and encoding. \n"
            "The Info Dictionary (info) is dependant on whether the torrent "
            "file is a single or multiple file. The keys common to both "
            "are piece length, pieces and private.\nFor single files, the "
            "additional keys are name, length and md5sum. For multiple files "
            "the keys are, name and files. files is also a dictionary with "
            "keys length, md5sum and path.\n\n"
            "Examples:\n"
            "torrentparse.py --string 'l4:dir14:dir28:file.exte'\n"
            "torrentparse.py --filename foo.torrent\n"
            "torrentparse.py -f foo.torrent -f bar.torrent "
            "--maxfiles 2 --filehash --pieces 2 -v")
    filegroup = parser.add_argument_group('Input File or String')
    filegroup.add_argument("-f", "--filename",
                           help="name of torrent file to parse",
                           action='append')
    filegroup.add_argument("-fh", "--filehash",
                           help="display file's MD5 hash",
                           action="store_true")
    filegroup.add_argument("-maxf", "--maxfiles",
                           help="display X filenames (default=20)",
                           metavar='X',
                           type=int, default=20)
    piecegroup = parser.add_argument_group('Torrent Pieces')
    piecegroup.add_argument("-p", "--pieces",
                            help="display X piece's SHA1 hash (default=10)",
                            metavar='X',
                            type=int)
    parser.add_argument("-s", "--string",
                        help="string for bencoded dictionary item")
    parser.add_argument("-v", "--verbose",
                        help="Display MetaInfo file to stdout",
                        action="store_true")
    args = parser.parse_args()

    if args.string:
        print
        text = bencode.decode(args.string)
        print text
    else:
        for fn in args.filename:
            try:
                filedata = readfile(fn)
                torrent_root, torrent_piece = parse_root_str(filedata)
            except IOError:
                print "Please enter a valid filename"
                raise
            if torrent_root:
                display_root(fn, torrent_root)
                display_torrent_file(torrent_root['info'])
                if args.pieces:
                    display_pieces(torrent_piece)
            verbose = True if args.verbose else False
            if verbose:
                print
                print "Verbose Mode: \nPrinting root and info dictionaries"
                # remove pieces as its long. display it afterwards
                pieceless_root = torrent_root
                del pieceless_root['info']['pieces']
                pp = pprint.PrettyPrinter(indent=4)
                pp.pprint(pieceless_root)
                print
                print "Print info's piece information: "
                pp.pprint(torrent_piece)
                print
    print "\n"
The following snippet:
i = 0
while i < torrent_pieces:
    pieceDict[i] = root['info']['pieces'][(20*i):(20*i)+20]
    i += 1
should be replaced by:
for i in range(torrent_pieces):
    pieceDict[i] = root['info']['pieces'][(20*i):(20*i)+20]
That might be the kind of thing they want to see. In general, Python code shouldn't need explicit index variable manipulation in for loops very much.
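Going one step further, the explicit counter disappears entirely with a dict comprehension. The sample pieces byte string below is made up for illustration:

```python
# Hypothetical stand-in for root['info']['pieces']: three fake 20-byte hashes.
pieces = b"A" * 20 + b"B" * 20 + b"C" * 20
torrent_pieces = len(pieces) // 20

# piece number -> 20-byte SHA1 slice, with no index bookkeeping at all.
pieceDict = {i: pieces[20 * i:20 * i + 20] for i in range(torrent_pieces)}
```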
The first thing I notice is that you've got a lot of global variables. That's no good; your code is no longer threadsafe, for one problem. (I see now that you noted that in your question, but that is something that should be changed.)
This looks a little odd:
i = 0
for files in info['files']:
    if i < max_files:
        # ...
    else:
        break
Instead, you could just do this:
for file in info['files'][:max_files]:
    # ...
I also notice that you parse the file just enough to output all of the data pretty-printed. You might want to put it into appropriate structures. For example, have Torrent, Piece, and File classes.
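A minimal sketch of what such classes might look like; the field names here are my guesses for illustration, not a prescribed design:

```python
class File(object):
    """One file entry from the info dict."""
    def __init__(self, path, length, md5sum=None):
        self.path = path
        self.length = length
        self.md5sum = md5sum

class Piece(object):
    """One piece: its index and 20-byte SHA1 hash."""
    def __init__(self, index, sha1):
        self.index = index
        self.sha1 = sha1

class Torrent(object):
    """Parsed metafile: announce URL plus files and pieces."""
    def __init__(self, announce, files, pieces):
        self.announce = announce
        self.files = files
        self.pieces = pieces

# Toy instance showing how display code could walk the structure.
t = Torrent("http://tracker.example/announce",
            [File("dir1/file.ext", 1024)],
            [Piece(0, b"\x00" * 20)])
print(t.files[0].path)
```

With structures like these, the display_* functions take objects instead of reading globals.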

Can't get Python picture renaming program to work right

This program is supposed to be run from the command line like this:
python Filename Folder_photos_are_in New_Prefix
It should just rename the files, but it wasn't working, so I had it print out each function's result separately as it runs. It seems to work all right until the sortByMTime function, at which point all of the files except the last one disappear from my list.
Here is the code:
import sys
import os
import random


def filterByExtension(root, allfiles, extensions):
    files = []
    for f in allfiles:
        hasExt = f.rfind('.')
        if(hasExt > 0):
            ext = f[hasExt+1::].lower()
            if(ext in extensions):
                f2 = os.path.join(root, f)
                if(os.path.isfile(f2)):
                    files.append(f)
                else:
                    print "Matching irregular file " + f
    return files


def sortByMTime(root, matching):
    photos = []
    for f in matching:
        path = os.path.join(root, f)
        mtime = os.path.getmtime(path)
        photos.append((mtime, f))
    photos.sort()
    return photos


def assignNames(prefix, inorder):
    kount = str(len(inorder))
    digits = len(kount)
    template = '%%0%dd' % digits
    newnames = {}
    kount = 0
    for i in inorder:
        kount += 1
        s = template % kount
        newnames[i[1]] = prefix + s + '.' + i[1].split('.')[1]
    return newnames
    print newnames


def makeTempName(allfiles):
    r = random.randrange(0, 1000000000)
    name = "__temp%i__" % r
    while name in allfiles:
        r += 1
        name = "__temp%i__" % r
    return name


def makeScript(inorder, newnames, tempname):
    chain = []
    inthechain = {}
    script = []
    for i in inorder:
        if i not in newnames:
            continue
        if newnames[i] == id:
            del newnames[i]
            continue
        if newnames[i] not in newnames:
            target = newnames[i]
            script.append( (i,target) )
            del newnames[i]
            continue
        else:
            link = i
            while True:
                target = newnames[i]
                chain.append( (link, target) )
                inthechain[link] = True
                link = target
                if link not in newnames:
                    break
            chain.reverse()
            for (a, b) in chain:
                print "This is in the chain: "
                print chain
                script.append(a,b)
                del newnames[a]
    return script


def doRenames(root, script):
    for (old, new) in script:
        print "%s -> %s" % (old, new)
        fulloldpath = os.path.join(root, old)
        fullnewpath = os.path.join(root, new)
        if os.path.exists(fullnewpath):
            print "File already exists"
            os.exit(1)
        else:
            os.rename(fulloldpath, fullnewpath)


def main():
    usrdir = []
    allfiles = []
    path = []
    prefix = ''
    args = sys.argv
    args.pop(0)  # remove first thing from list
    if len(args) == 2:  # Directory and Prefix are provided
        print "Directory: ", args[0]
        print "Prefix: ", args[1]
        usrdir = args[0]
        path = os.path.abspath(usrdir)
        prefix = os.path.basename(path)
    if len(args) == 1:  # Only directory is provided
        args.append(args[0])  # Makes the directory name the prefix as well
        print "Directory: ", args[0]
        print "Prefix: ", args[1]
        usrdir = args[0]
        path = os.path.abspath(usrdir)
        prefix = os.path.basename(path)
    if len(args) == 0 or len(args) > 2:  # ends the program: wrong number of arguments
        print "INVALID Number of Arguments:"
        print "Usage: python bulkrename.py <directory> [<prefix>]"
        exit(1)
    allfiles = os.listdir(usrdir)
    print "Printout of allfiles"
    print allfiles
    print
    root = os.path.abspath(args[0])
    print "root: ", root
    print
    extensions = ['jpeg', 'jpg', 'png', 'gif']
    print "What Extensions should be able to be used: "
    print extensions
    print
    matching = filterByExtension(root, allfiles, extensions)
    print "What comes out of filterByExtension"
    print matching
    print
    inorder = sortByMTime(path, matching)
    print "What comes out of sortByMTime"
    print inorder
    print
    newnames = assignNames(prefix, inorder)
    print "What comes out of assignNames"
    print newnames
    print
    tempname = makeTempName(allfiles)
    print "What comes out of makeTempName"
    print tempname
    print
    script = makeScript(inorder, newnames, tempname)
    print "What comes out of makeScript"
    print script
    print
    doRenames(path, script)
    print "What comes out of doRenames"
    print doRenames
    print


main()
and here is the output from the terminal:
virus-haven:CS1410 chrislebaron$ python bulkrenamer.py bulkrename test
Directory: bulkrename
Prefix: test
Printout of allfiles
['.DS_Store', '20120902Snow_Canyon02.JPG', '20120902Snow_Canyon03.JPG', '20120902Snow_Canyon05.JPG', '20120902Snow_Canyon06.JPG', '20120902Snow_Canyon08.JPG', '20120902Snow_Canyon09.JPG', '20120902Snow_Canyon11.JPG', '20120902Snow_Canyon12.JPG', 'airplane.png', 'BackNoText.jpg', 'blah', 'FrontNoText.jpg', 'glitchbusters.jpg', 'IMG_7663.JPG', 'IMG_7664.JPG', 'Pomegranates.jpg', 'rccar.png']
root: /Users/chrislebaron/Documents/School/CS1410/bulkrename
What Extensions should be able to be used:
['jpeg', 'jpg', 'png', 'gif']
What comes out of filterByExtension
['20120902Snow_Canyon02.JPG', '20120902Snow_Canyon03.JPG', '20120902Snow_Canyon05.JPG', '20120902Snow_Canyon06.JPG', '20120902Snow_Canyon08.JPG', '20120902Snow_Canyon09.JPG', '20120902Snow_Canyon11.JPG', '20120902Snow_Canyon12.JPG', 'airplane.png', 'BackNoText.jpg', 'FrontNoText.jpg', 'glitchbusters.jpg', 'IMG_7663.JPG', 'IMG_7664.JPG', 'Pomegranates.jpg', 'rccar.png']
What comes out of sortByMTime
[(1322960835.0, 'rccar.png')]
What comes out of assignNames
{'rccar.png': 'bulkrename1.png'}
What comes out of makeTempName
__temp55210675__
What comes out of makeScript
[]
What comes out of doRenames
<function doRenames at 0x100dede60>
virus-haven:CS1410 chrislebaron$
You've goofed your indentation, mixing spaces and tabs. Use python -tt to verify.
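To see what python -tt would complain about, a quick scan for lines whose leading whitespace mixes both characters can help. This checker is my own sketch, not part of the original program:

```python
def mixed_indent_lines(source):
    """Return 1-based line numbers whose indentation mixes tabs and spaces."""
    bad = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Leading whitespace is everything before the stripped remainder.
        indent = line[:len(line) - len(line.lstrip())]
        if " " in indent and "\t" in indent:
            bad.append(lineno)
    return bad

# The third line indents with a tab followed by a space -- exactly the
# kind of mix that silently changes which block a statement belongs to.
sample = "def f():\n    x = 1\n\t y = 2\n    return x\n"
print(mixed_indent_lines(sample))  # → [3]
```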
