How can I change a part of a filename with Python?

I am brand new to Python (and coding in general) and I've been unable to find a solution to my specific problem online. I am currently creating a tool which will allow a user to save a file to a network location. The file will have a version number. What I would like to do is have the script automatically bump the version before it saves. I have the rest of the script done, but it is the auto-versioning that I am having issues with. Here's what I have so far:
import re
import os

def main():
    wip_folder = "L:/xxx/xxx/xxx/scenes/wip/"
    _file_list = os.listdir('%s' % wip_folder)
    if os.path.exists('%s' % wip_path):
        for file in _file_list:
            versionPattern = re.compile('_v\d{3}')
            curVersions = versionPattern.findall('%s' % wip_folder)
            curVersions.sort()
            nextVersion = '_v%03d' % (int(curVersions[-1][2:]) + 1)
            return nextVersion
    else:
        nextVersion = '_v001'
    name = xxx_xxx_xx
    name += '%s' % nextVersion
    name += '_xxx_wip'
I should probably point out that main() is going to be called by a QPushButton in another module. Also, wip_path will most likely have several versions of a single file in it, so if there are 10 versions of this file in wip_path, this save should be v011. I apologize if this question makes no sense. Any help would be appreciated. Thank you!
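For reference, here is a minimal sketch of the kind of bump the question describes, assuming the _vNNN suffix convention and the placeholder wip folder path from the question (not tested against the asker's actual setup):

import os
import re

def next_version(wip_folder):
    """Return the next _vNNN suffix based on files already in wip_folder."""
    pattern = re.compile(r'_v(\d{3})')
    versions = []
    if os.path.isdir(wip_folder):
        for name in os.listdir(wip_folder):
            match = pattern.search(name)
            if match:
                versions.append(int(match.group(1)))
    # no existing versions -> _v001; otherwise bump the highest one found
    return '_v%03d' % (max(versions) + 1 if versions else 1)

print(next_version('L:/xxx/xxx/xxx/scenes/wip/'))  # e.g. '_v011' if _v010 exists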

You do not need to use re at all; programming is about simplification, and you are overcomplicating it! I chose to return the name, but anything in this function can be changed to whatever you need it to do. Just read the comments, and good luck!
import os

def getVersioning(path):
    # passing path
    count = 1  # init count
    versionStr = 'wip_path_v'
    try:  # in case path doesn't exist
        for i in os.listdir(path):  # loop through files in the passed dir
            if versionStr in i:  # check if the file contains the default versioning string, e.g. wip_path_v (used in case other files are in the same dir)
                count += 1  # increment count
    except:
        os.mkdir(path)  # make the dir if one does not exist, for future use
    newName = versionStr + str(count)  # new versioned file name
    return newName  # return new versioned name

print getVersioning('tstFold')
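If the three-digit _v001 style from the question is wanted, the count could be zero-padded when the name is built; a small hypothetical variation:

versionStr = 'wip_path_v'   # same prefix as in the answer above
count = 7                   # whatever count the loop produced
newName = versionStr + '%03d' % count
print(newName)              # -> 'wip_path_v007'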

Related

Python - pygithub If file exists then update else create

I'm using the PyGithub library to update files, and I'm facing an issue with it. Generally, we have two options: we can create a new file, or, if the file exists, we can update it.
Doc Ref: https://pygithub.readthedocs.io/en/latest/examples/Repository.html#create-a-new-file-in-the-repository
But the problem is, I want to create the file if it doesn't exist, and if it exists, use the update option.
Is this possible with PyGithub?
I followed The Otterlord's suggestion and I have achieved this. Sharing my code here; it may be helpful to someone.
from github import Github

g = Github("username", "password")
repo = g.get_user().get_repo(GITHUB_REPO)

all_files = []
contents = repo.get_contents("")
while contents:
    file_content = contents.pop(0)
    if file_content.type == "dir":
        contents.extend(repo.get_contents(file_content.path))
    else:
        file = file_content
        all_files.append(str(file).replace('ContentFile(path="','').replace('")',''))

with open('/tmp/file.txt', 'r') as file:
    content = file.read()

# Upload to github
git_prefix = 'folder1/'
git_file = git_prefix + 'file.txt'
if git_file in all_files:
    contents = repo.get_contents(git_file)
    repo.update_file(contents.path, "committing files", content, contents.sha, branch="master")
    print(git_file + ' UPDATED')
else:
    repo.create_file(git_file, "committing files", content, branch="master")
    print(git_file + ' CREATED')
If you are interested in knowing whether a single path exists in your repo, you can use something like this and then branch your logic out from the result.
import github
from github.Repository import Repository

def does_object_exists_in_branch(repo: Repository, branch: str, object_path: str) -> bool:
    try:
        repo.get_contents(object_path, branch)
        return True
    except github.UnknownObjectException:
        return False
But the method in the accepted answer is more efficient if you want to check whether multiple files exist under a given path; I just wanted to provide this as an alternative.
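For completeness, a rough, untested sketch of how such a check might be combined with PyGithub's create/update calls (the token, repository, path, and branch names are placeholders):

import github
from github import Github

g = Github("access_token")                  # placeholder credentials
repo = g.get_repo("someuser/somerepo")      # placeholder repository
path, branch = "folder1/file.txt", "master"
new_content = "hello world\n"

try:
    # if the path exists, update it using the sha of the current contents
    existing = repo.get_contents(path, ref=branch)
    repo.update_file(existing.path, "update file", new_content, existing.sha, branch=branch)
    print(path + " UPDATED")
except github.UnknownObjectException:
    # 404 from get_contents means the path does not exist yet
    repo.create_file(path, "create file", new_content, branch=branch)
    print(path + " CREATED")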

Watch logs in a folder in real time with Python

I'm trying to make a custom log watcher for a log folder using Python. The objective is simple: find a regex in the logs and write a line to a text file whenever it matches.
The problem is that the script must run constantly against a folder which could contain multiple log files of unknown names, not a single one, and it should detect the creation of new log files inside the folder on the fly.
I made a kind of tail -f in Python (copying part of the code) which constantly reads a specific log file and writes a line to a txt file when the regex is found, but I don't know how I could do it with a folder instead of a single log file, or how the script can detect the creation of new log files inside the folder and read them on the fly.
#!/usr/bin/env python
import time, os, re
from datetime import datetime

# Regex used to match relevant loglines
error_regex = re.compile(r"ERROR:")
start_regex = re.compile(r"INFO: Service started:")

# Output file, where the matched loglines will be copied to
output_filename = os.path.normpath("log/script-log.txt")

# Function that will work as tail -f for python
def follow(thefile):
    thefile.seek(0,2)
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

logfile = open("log/service.log")
loglines = follow(logfile)
counter = 0
for line in loglines:
    if (error_regex.search(line)):
        counter += 1
        sttime = datetime.now().strftime('%Y%m%d_%H:%M:%S - ')
        out_file = open(output_filename, "a")
        out_file.write(sttime + line)
        out_file.close()
    if (start_regex.search(line)):
        sttime = datetime.now().strftime('%Y%m%d_%H:%M:%S - ')
        out_file = open(output_filename, "a")
        out_file.write(sttime + "SERVICE STARTED\n" + sttime + "Number of errors detected during the startup = {}\n".format(counter))
        counter = 0
        out_file.close()
You can use watchgod for this purpose. This may work better as a comment; I'm not sure it deserves to be an answer.
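To make that concrete, here is a rough, untested sketch of how watchgod might be wired up for this case; the folder name and output file are placeholders, and each file's read offset is tracked so only newly appended lines are scanned:

import re
from watchgod import watch, Change

error_regex = re.compile(r"ERROR:")        # same pattern idea as the question
output_filename = "log/script-log.txt"     # placeholder output file
offsets = {}                               # how far into each file we have read

def scan_new_lines(path):
    # read only the lines appended since the last visit to this file
    with open(path) as f:
        f.seek(offsets.get(path, 0))
        for line in f:
            if error_regex.search(line):
                with open(output_filename, "a") as out:
                    out.write(line)
        offsets[path] = f.tell()

# watch() blocks and yields a set of (Change, path) tuples per batch of events,
# so newly created log files show up as Change.added and get picked up on the fly
for changes in watch("log/"):
    for change, path in changes:
        if change in (Change.added, Change.modified):
            scan_new_lines(path)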

Maya Python - Embed zip file into maya file?

This was a suggestion from another Stack Overflow thread that I'm finally getting back to. It was part of a discussion about how to embed tools into a Maya file.
You can write the whole thing as a python package, zip it, then stuff the binary contents of the zip file into a fileInfo. When you need the code, look for it in the user's $MAYA_APP_DIR; if there's no zip, write the contents of the fileInfo to disk as a zip and then insert the zip into sys.path
Source discussions were:
python copy scripts into script
and Maya Python Create and Use Zipped Package?
So far the programming is going okay, but I think I hit a snag. When I attempt this:
with open("..directory/myZip.zip","rb") as file:
cmds.fileInfo("myZip", file.read())
..and then I..
print cmds.fileInfo("myZip",q=1)
I get
[u'PK\003\004\024']
which is a bad translation of the first line of gibberish when reading the zip file as a text document.
How can I embed my zip file into my maya file as binary?
====================
Update:
Maya doesn't like having a direct read of the zip file written straight into the file. I found various methods of turning it into an acceptable string that could be written, but decoding it back to a file didn't appear to work. I see now that Theodox's suggestion was to write it to binary and put that in the fileInfo node.
How can one encode, store and then decode to write to file later?
If I were to convert to binary using for instance:
' '.join(format(ord(x), 'b') for x in line)
What code would I need to turn that back into the original utf-8 zip information?
You can find related code here:
http://tech-artists.org/forum/showthread.php?4161-Maya-API-Singleton-Nodes&highlight=mayapersist
The relevant bit is:
import base64
encoded = base64.b64encode(value)
decoded = base64.b64decode(encoded)
Basically it's the same idea, except using the base64 module instead of binascii. Any method of converting an arbitrary character stream to an ASCII-safe representation will work fine, as long as you use a reversible method: the potential problem you need to watch out for is a character in the data block that looks to Maya like a close quote; an open quote in the fileInfo will be messy in an MA file.
This example uses YAML to do arbitrary key-value pairs, but that part is irrelevant to storing the binary stuff. I've used this technique for fairly large data (up to 640k if I recall), but I don't know if it has an upper limit in terms of what you can stash in Maya.
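A minimal sketch of that round trip (untested; the zip path and the "myZip" key are placeholders), along the lines of what the rest of this thread does with fileInfo:

import base64
import maya.cmds as cmds

# store: read the zip as raw bytes and keep an ASCII-safe copy in the scene
with open("myZip.zip", "rb") as f:
    cmds.fileInfo("myZip", base64.b64encode(f.read()))

# ...later, restore: query the stored string and decode it back to a zip on disk
stored = cmds.fileInfo("myZip", q=1)[0]
with open("restored.zip", "wb") as f:
    f.write(base64.b64decode(stored))

Note that this thread is written for Python 2; in Python 3, base64.b64encode returns bytes, so it would need a .decode('ascii') before being handed to fileInfo.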
Found the answer, thanks to a great script on Stack Overflow. I had to encode to 'string_escape', which is something I found while trying to figure out the whole character situation. Anyway: you open the zip, encode to 'string_escape', write it to fileInfo, and then, before fetching it to write back to a zip, you decode it again.
Convert binary to ASCII and vice versa
import maya.cmds as cmds
import binascii

def text_to_bits(text, encoding='utf-8', errors='surrogatepass'):
    bits = bin(int(binascii.hexlify(text.encode(encoding, errors)), 16))[2:]
    return bits.zfill(8 * ((len(bits) + 7) // 8))

def text_from_bits(bits, encoding='utf-8', errors='surrogatepass'):
    n = int(bits, 2)
    return int2bytes(n).decode(encoding, errors)

def int2bytes(i):
    hex_string = '%x' % i
    n = len(hex_string)
    return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
And then you can
with open("..\maya\scripts/test.zip","rb") as thing:
texty = text_to_bits(thing.read().encode('string_escape'))
cmds.fileInfo("binaryZip",texty)
...later
with open("..\maya\scripts/test_2.zip","wb") as thing:
texty = cmds.fileInfo("binaryZip",q=1)
thing.write( text_from_bits( texty ).decode('string_escape') )
and this appears to work.. so far..
Figured I'd post the final product for anyone wanting to take this approach. I tried to do some corruption checking, so that a bad zip doesn't get passed between machines; that's what all the hash checking is for.
def writeTimeFull(tl):
    import TimeFull
    #reload(TimeFull)
    with open(TimeFull.__file__.replace(".pyc",".py"),"r") as file:
        cmds.scriptNode( tl.scriptConnection[1][0], e=1, bs=file.read() )
    cmds.expression("spark_timeliner_activator",
                    e=1,s='if (Spark_Timeliner.ShowTimeliner == 1)\n'
                          '{\n'
                          '\tsetAttr Spark_Timeliner.ShowTimeliner 0;\n'
                          '\tpython \"Timeliner.InitTimeliner()\";\n'
                          '}',
                    o="Spark_Timeliner",ae=1,uc=all)

def checkHash(zipPath,hash1,hash2,hash3):
    check = False
    hashes = [hash1,hash2,hash3]
    for ii, hash in enumerate(hashes):
        if hash == hashes[(ii+1)%3]:
            hashes[(ii+2)%3] = hashes[ii]
            check = True
    if check:
        if md5(zipPath) == hashes[0]:
            return [zipPath,hashes[0],hashes[1],hashes[2]]
    else:
        cmds.warning("Hash checks and/or zip are corrupted. Attain toolbox_fix.zip, put it in scripts folder and restart.")
    return []

#this writes the zip file to the local users space
def saveOutZip(filename):
    if os.path.isfile(filename):
        if not os.path.isfile(filename.replace('_pkg','_'+__version__)):
            os.rename(filename,filename.replace('_pkg','_'+__version__))
    with open(filename,"w") as zipFile:
        zipInfo = cmds.fileInfo("zipInfo1",q=1)[0]
        zipHash_1 = cmds.fileInfo("zipHash1",q=1)[0]
        zipHash_2 = cmds.fileInfo("zipHash2",q=1)[0]
        zipHash_3 = cmds.fileInfo("zipHash3",q=1)[0]
        zipFile.write( base64.b64decode(zipInfo) )
        if checkHash(filename,zipHash_1,zipHash_2,zipHash_3):
            cmds.fileInfo("zipInfo2",zipInfo)
            return filename
    with open(filename,"w") as zipFile:
        zipInfo = cmds.fileInfo("zipInfo2",q=1)[0]
        zipHash_1 = cmds.fileInfo("zipHash1",q=1)[0]
        zipHash_2 = cmds.fileInfo("zipHash2",q=1)[0]
        zipHash_3 = cmds.fileInfo("zipHash3",q=1)[0]
        zipFile.write( base64.b64decode(zipInfo) )
        if checkHash(filename,zipHash_1,zipHash_2,zipHash_3):
            cmds.fileInfo("zipInfo1",zipInfo)
            return filename
    return False

#this writes the local zip to this file
def loadInZip(filename):
    zipResults = []
    for ii in range(0,10):
        with open(filename,"r") as theRead:
            zipResults.append([base64.b64encode(theRead.read())]+checkHash(filename,md5(filename),md5(filename),md5(filename)))
        if ii>0 and zipResults[ii]==zipResults[ii-1]:
            cmds.fileInfo("zipInfo1",zipResults[ii][0])
            cmds.fileInfo("zipInfo2",zipResults[ii-1][0])
            cmds.fileInfo("zipHash1",zipResults[ii][2])
            cmds.fileInfo("zipHash2",zipResults[ii][3])
            cmds.fileInfo("zipHash3",zipResults[ii][4])
            return True

#file check
#http://stackoverflow.com/questions/3431825/generating-a-md5-checksum-of-a-file
def md5(fname):
    import hashlib
    hash = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash.update(chunk)
    return hash.hexdigest()

filename = path+'/toolbox_pkg.zip'
zipPaths = [path+'/toolbox_update.zip',
            path+'/toolbox_fix.zip',
            path+'/toolbox_'+__version__+'.zip',
            filename]
zipPaths_exist = [os.path.isfile(zipPath) for zipPath in zipPaths ]

if any(zipPaths_exist[:2]):
    if zipPaths_exist[0]:
        cmds.warning('Timeliner update present. Forcing file to update version')
        if zipPaths_exist[2]:
            os.remove(zipPaths[3])
        elif os.path.isfile(zipPaths[3]):
            os.rename(zipPaths[3], zipPaths[2])
        os.rename(zipPaths[0],zipPaths[3])
        if zipPaths_exist[1]:
            os.remove(zipPaths[1])
    else:
        cmds.warning('Timeliner fix present. Replacing file to the fix version.')
        if os.path.isfile(zipPaths[3]):
            os.remove(zipPaths[3])
        os.rename(zipPaths[1],zipPaths[3])
    loadInZip(filename)

if not cmds.fileInfo("zipInfo1",q=1) and not cmds.fileInfo("zipInfo2",q=1):
    loadInZip(filename)
if not os.path.isfile(filename):
    saveOutZip(filename)

sys.path.append(filename)
import Timeliner
Timeliner.InitTimeliner(theVers=__version__)

if not any(zipPaths[:2]):
    if __version__ > Timeliner.__version__:
        cmds.warning('Saving out newer version of timeliner to local machine. Restart Maya to access latest version.')
        saveOutZip(filename)
    elif __version__ < Timeliner.__version__:
        cmds.warning('Timeliner on machine is newer than file version. Saving machine version over timeliner in file.')
        loadInZip(filename)
        __version__ = Timeliner.__version__

if __name__ != "__main__":
    tl = getTimeliner()
    writeTimeFull(tl)

sublime text 3 auto-complete plugin not working

I'm trying to write a plugin that gets all the classes in the current folder to do an auto-complete injection.
The following code is in my Python file:
class FolderPathAutoComplete(sublime_plugin.EventListener):
    def on_query_completions(self, view, prefix, locations):
        folders = view.window().folders()
        results = get_path_classes(folders)
        all_text = ""
        for result in results:
            all_text += result + "\n"
        #sublime.error_message(all_text)
        return results

def get_path_classes(folders):
    classesList = []
    for folder in folders:
        for root, dirs, files in os.walk(folder):
            for filename in files:
                filepath = root +"/"+filename
                if filepath.endswith(".java"):
                    filepath = filepath.replace(".java","")
                    filepath = filepath[filepath.rfind("/"):]
                    filepath = filepath[1:]
                    classesList.append(filepath)
    return classesList
But somehow, when I work in a folder containing a class named "LandingController.java" and try to get the result, the auto-complete is not working at all.
However, as you may have noticed, I did an error_message output of all the contents I got, and there is an actual list of class names found.
Can anyone help me solve this? Thank you!
It turns out that the actual format Sublime Text accepts is: [(word, word), ...]
But thanks to MattDMo, who pointed out the documentation, since the official documentation says nothing about the auto-complete part.
For a better understanding of the auto-complete injection API, you could follow Zinggi's plugin DictionaryAutoComplete, and this is the GitHub link.
So, for a standard solution:
class FolderPathAutoComplete(sublime_plugin.EventListener):
    def on_query_completions(self, view, prefix, locations):
        suggestlist = self.get_autocomplete_list(prefix)
        return suggestlist

    def get_autocomplete_list(self, word):
        global classesList
        autocomplete_list = []
        uniqueautocomplete = set()
        # filter relevant items:
        for w in classesList:
            try:
                if word.lower() in w.lower():
                    actual_class = parse_class_path_only_name(w)
                    if actual_class not in uniqueautocomplete:
                        uniqueautocomplete.add(actual_class)
                        autocomplete_list.append((actual_class, actual_class))
            except UnicodeDecodeError:
                print(actual_class)
                # autocomplete_list.append((w, w))
                continue
        return autocomplete_list
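As an aside, a minimal, hypothetical listener showing the [(trigger, contents), ...] shape Sublime Text 3 expects; the trigger may also carry a "\t" description shown in the popup:

import sublime_plugin

class ExampleCompletions(sublime_plugin.EventListener):
    def on_query_completions(self, view, prefix, locations):
        # each entry is (trigger shown in the popup, text that gets inserted)
        return [
            ("LandingController\tclass", "LandingController"),
            ("LoginController\tclass", "LoginController"),
        ]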

python unzip files and provide update to wxPython status bar

I have several large zip files that contain a dir structure which I must maintain. Currently, to unzip them I am using:
zip = zipfile.ZipFile(self.fileName)
zip.extractall(self.destination)
zip.close()
The problem is that this process can take upwards of 3-5 minutes and I have no feedback that it is still working. What I would like to do is output the name of the file currently being unzipped to the status bar of my GUI. What I have in mind is something like:
zip = zipfile.ZipFile(self.fileName)
zipNameList = zip.namelist()
for item in zipNameList:
    self.SetStatusText("Unzipping " + str(item))
    zip.extract(item)
zip.close()
The problem with this is that it does not create the correct dir structure. I am not sure this is even the best way to go about it.
I was also looking into using wx.ProgressDialog but could not come up with a way to have it show the progress of zip.extractall(fileName).
I got it to an acceptable solution, though I think I would prefer to thread it eventually.
def unzipItem(self, fileName, destination):
    print "--unzipItem--"
    zip = zipfile.ZipFile(fileName)
    nameList = zip.namelist()

    #get the amount of files in the archive to properly size the progress bar
    fileCount = 0
    for item in nameList:
        fileCount += 1

    #Build progress dialog
    dlg = wx.ProgressDialog("Unzipping files",
                            "An informative message",
                            fileCount,
                            parent = self,
                            )
    keepGoing = True
    count = 0
    for item in nameList:
        count += 1
        dir,file = os.path.split(item)
        print "unzip " + file
        #update status bar
        self.SetStatusText("Unzipping " + str(item))
        #update progress dialog
        (keepGoing, skip) = dlg.Update(count, file)
        zip.extract(item,destination)
    zip.close()
You can use infolist instead of namelist. From the docs:
The objects are in the same order as their entries in the actual ZIP
file on disk if an existing archive was opened.
Also, consider this note:
The open(), read() and extract() methods can take a filename or a ZipInfo object. You will appreciate this when trying to read a ZIP file that contains members with duplicate names.
So you can write something like this:
with ZipFile(zip_file_name) as myzipfile:
    members = myzipfile.infolist()
    for i, member in enumerate(members):
        myzipfile.extract(member, destination_path)
        self.SetStatusText("Unzipping " + str(i))
        self.mysignal.emit(i)  # use this to update inside a thread
You can put this on a thread and then update through a signal; the SetStatusText method should be called inside the corresponding slot.
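Since the question is about wxPython specifically, here is a rough, untested sketch of that idea using a plain worker thread and wx.CallAfter to push updates back to the GUI thread; the frame is assumed to already have a status bar, and the function name is a placeholder:

import threading
import zipfile
import wx

def extract_in_background(frame, file_name, destination):
    """Extract in a worker thread; update the frame's status bar safely."""
    def worker():
        with zipfile.ZipFile(file_name) as archive:
            members = archive.infolist()
            for i, member in enumerate(members, 1):
                archive.extract(member, destination)
                # never touch widgets from the worker thread directly
                wx.CallAfter(frame.SetStatusText,
                             "Unzipping %d/%d: %s" % (i, len(members), member.filename))
        wx.CallAfter(frame.SetStatusText, "Done unzipping " + file_name)

    thread = threading.Thread(target=worker)
    thread.daemon = True
    thread.start()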
