Thread in Thread - python

My company works in visual effects, and we have set up internal shot playback via a browser for our clients. For that we need to upload the video file to an FTP server.
I want to convert an image sequence to mp4 and upload this file directly after the rendering finishes.
For that I use:
one command to convert
one command to get an md5 hash
one to upload the file
I already achieved that on my local computer, where I just chained os.system('command') calls.
After noticing that the program freezes for a long time with longer image sequences, I changed the script to spawn a thread that runs the os.system chain.
But on the render farm server this script does not actually work.
The render farm server runs Python 2.5.
Here are some code examples:
import os
import threading
import subprocess as subp
import xml.etree.ElementTree as ET

class CopraUpload(threading.Thread):

    # initializing Thread
    # via super constructor
    def __init__(self):
        threading.Thread.__init__(self)

    # return the size of the
    # created mp4 file
    #
    # @return: the file size in byte
    def _getFileSize(self):
        self.fileLoc = str(self.outputfileName + '.mp4')
        self.fileLoc = os.path.normpath(self.fileLoc)
        return str(os.path.getsize(self.fileLoc))

    # creates a random id for organising
    # the server upload used as flag
    #
    # @return: a hash
    def _getHash(self):
        # (method body not included in the excerpt)
        pass
    # integrates the "missing" data for the xml file
    # generated post render from the mp4 file
    def _setPreviewDataToXML(self):
        self.xmlFile = str(self.outputfileName + '_copraUpload.xml')
        self.xmlFile = os.path.normpath(self.xmlFile)
        ett = ET.parse(self.xmlFile)
        root = ett.getroot()
        for child in root.getiterator('preview_size'):
            child.text = self._getFileSize()
        for child in root.getiterator('preview_md5hash'):
            child.text = self._getHash()
        ett.write(self.xmlFile)

    # creates a connection to an ftp server
    # and copies the mp4 file and the xml file
    # to the server
    def _uploadToCopra(self):
        os.system(self.uploadCommand)
        #process = Popen(self.uploadCommand)
    # the main function of the program
    # called via start() from a Thread object
    def run(self):
        # the command which will be sent to the command shell
        # for further adjustments see ffmpeg help with ffmpeg.exe -h
        FinalCommand = self.ffmpegLocation + " -r " + self.framerate + " -i " + self.inputPath + " -an -strict experimental -s hd720 -vcodec libx264 -preset slow -profile:v baseline -level 31 -refs 1 -maxrate 6M -bufsize 10M -vb 6M -threads 0 -g 8 -r " + self.framerate + " " + self.outputfileName + ".mp4 -y"
        FinalCommandList = FinalCommand.split(" ")

        # calling the program
        print "Start ffmpeg conversion"
        outInfo = os.path.normpath("C:\\Users\\sarender\\Desktop\\stdoutFFMPEG.txt")
        outError = os.path.normpath("C:\\Users\\sarender\\Desktop\\stderrFFMPEG.txt")
        stdoutFile = open(outInfo, "w")
        stderrFile = open(outError, "w")
        handle = subp.Popen(FinalCommandList, stdout=stdoutFile, stderr=stderrFile)
        handle.communicate()
        stdoutFile.close()
        stderrFile.close()
        print "Conversion from ffmpeg done"

        # fill the xml file with the missing data
        # - preview file size
        # - preview md5hash
        self._setPreviewDataToXML()
        self._uploadToCopra()
        print "---------------------------------->FINISHED------------------------------------------------------>"
    # Creates a callable Thread for the Copra upload.
    # start() calls the run method, which starts the uploading.

And the main entry point:

if "$(RenderSet.writenode)" == "PREVIEW":
    print "---------------------------------->Initializing Script------------------------------------------------------>"
    process = CopraUpload()
    process.start()
What happens:
The script starts after the rendering, ffmpeg converts the image sequence and creates an mp4, but the script stops after that. It never prints "Conversion from ffmpeg done"; it just stops.
It should create the thread, convert with ffmpeg, and wait until that finishes. Afterwards it should write some data into an xml file and upload both files to the server.
Am I missing something? Is subprocess within a thread not the way to go? I need a thread because I must not block the render management server.

My guess is that the command fails and throws an exception in handle.communicate(). Make sure you catch all exceptions and log them somewhere, because threads have no way to pass on exceptions.
Also, you shouldn't use FinalCommand.split(" "): if the file names contain spaces, this will fail in odd (and unpredictable) ways. Build a list instead and pass that list to subprocess:
FinalCommand = [
    self.ffmpegLocation,
    "-r", self.framerate,
    "-i", self.inputPath,
    "-an",
    "-strict", "experimental",
    ...
]
Much more readable as well.
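To surface those hidden failures, one option is to wrap the subprocess call and log every exception. Here is a minimal sketch (the helper name run_logged and the log file path are mine, not from the question, and it sticks to Python 2.5 syntax since that is what the render farm runs):

import logging
import subprocess

logging.basicConfig(filename="copra_upload.log", level=logging.DEBUG)

def run_logged(command, out_path, err_path):
    # command is a list such as [ffmpeg_path, "-r", framerate, ...]
    out = open(out_path, "w")
    err = open(err_path, "w")
    try:
        handle = subprocess.Popen(command, stdout=out, stderr=err)
        handle.communicate()
        if handle.returncode != 0:
            logging.error("command exited with code %d", handle.returncode)
    except Exception:
        # an uncaught exception would otherwise kill the thread silently
        logging.exception("command failed: %r", command)
    finally:
        out.close()
        err.close()

Called from CopraUpload.run(), anything that goes wrong around ffmpeg ends up in the log file instead of vanishing with the thread.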

Related

Pexpect not waiting for whole output - Ubuntu

I have a simple script to SSH into a network switch, run commands, and save the output to a file. It works fine for output that is displayed instantly, but when I run "show iproute" it does not capture any output. The reason is that when I run the same command on the switch directly, it thinks for 5-6 seconds, shows a bunch of lines, thinks again, shows a couple more lines, and then ends. My script is not waiting properly for the whole command to execute, and I am having trouble fixing that:
str_prompt = ' # '
command = "sh iproute"
device_name = "switch1.test.com"

# Spawn SSH session
ssh_command = 'ssh {}@{}'.format(username, device_name)
session = pexpect.spawn(ssh_command, timeout=5)

# Send the password
session.sendline(password)

# Expect the switch prompt (successful login)
expect_index = session.expect([pexpect.TIMEOUT, str_prompt])

# Success
if expect_index == 1:
    # Disable clipaging so that all the output is shown (not in pages) | same as "term len 0" on Cisco
    session.sendline('disable clip')

    # Expect the switch prompt if the command is successful
    expect_index = session.expect([pexpect.TIMEOUT, str_prompt])

    # Send the "show iproute" command
    session.sendline(command)

    # < This is where it needs to wait >
    #session.expect(pexpect.EOF) - tried this and wait(), but that broke the script
    #session.wait()

    # Expect the switch prompt if the command is successful
    session.expect(str_prompt)

    # Save output of "sh iproute" to a variable
    output = session.before

    # Save results to a file
    fp = open(device_name + '-route.txt', "w")
    fp.write(output)
    fp.close()
Here is a sample output. The output does contain "#", but not " # ".
#oa 10.10.10.0/24 10.0.0.1 4 UG-D---um--f- V-BB1 99d:0h:14m:49s
#oa 10.10.20.0/24 10.0.0.1 4 UG-D---um--f- V-BB2 99d:0h:14m:49s
#oa 10.10.30.0/24 10.0.0.1 4 UG-D---um--f- V-BB3 99d:0h:14m:49s
#oa 10.10.40.0/24 10.0.0.1 4 UG-D---um--f- V-BB4 99d:0h:14m:49s
and many more lines ...
Any help will be appreciated. Thanks
Edit:
I added sleep(60) and that seems to do the trick, but I do not want to use it, as I am sending multiple commands and some are super fast. I do not want to wait one minute for each command; the script would take forever to run.
So you need to associate a timeout with each command.
The way I do it today is to have an XML file which my code parses; the XML tag has one attribute for the command, another for the timeout, another for the end prompt of the command, and so on.
My code reads the command and its timeout and sets the respective variables accordingly before sending the command:
session.sendline(command)  # command is read from an xml file
session.expect(end_prompt, timeout=int(tmout))  # end_prompt and tmout for the command are read from the same file
For your case, if you don't want to parse a file to get the commands and their related parameters, you can keep them as a dictionary in your script and use that:
command_details_dict = {"cmd_details": [
    {'cmd': 'pwd',
     'timeout': 5,
    },
    {'cmd': 'iproute',
     'timeout': 60,
    }
]}
Inside the dictionary, cmd_details is a list of dictionaries so that you can maintain the order of your commands while iterating. Each command is a dictionary with its relevant details, and you can add more keys to a command dictionary, such as prompts, unique identifiers, etc.; a short iteration sketch follows below.
But if you have the time, I would suggest using a config file instead.
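For completeness, the iteration mentioned above could look like this minimal sketch (it reuses session and str_prompt from the question; only the per-command timeout is new):

for details in command_details_dict["cmd_details"]:
    session.sendline(details["cmd"])
    # each command waits up to its own timeout instead of one global value
    session.expect(str_prompt, timeout=details["timeout"])
    output = session.before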

cisco OS upload with Python Script

I have developed the following Python script to help me upload NX-OS images to Cisco Nexus switches.
The script runs just fine with small files. I tried files under 100M and it works fine. However, I also have NX-OS images of about 600M. At some point, while the script is running and the TFTP upload is in progress, the upload stops when the file on the Cisco flash disk reaches size 205987840. The program freezes, and when I type show users in the Cisco console I can see that the user used for the upload has already been disconnected.
I am thinking that maybe it is related to the SSH session timing out? Or maybe something is wrong in my script? I am new to Python.
I am posting only the relevant parts of the script:
def ssh_connect_no_shell(command):
    global output
    ssh_no_shell = paramiko.SSHClient()
    ssh_no_shell.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_no_shell.connect(device, port=22, username=myuser, password=mypass)
    ssh_no_shell.exec_command('terminal length 0\n')
    stdin, stdout, stder = ssh_no_shell.exec_command(command)
    output = stdout.readlines()
    ssh_no_shell.close()

def upload_file():
    cmd_1 = "copy tftp:" + "//" + tftp_server + "/" + image + " " + "bootflash:" + " vrf " + my_vrf
    ssh_connect_no_shell(cmd_1)
    print '\n##### Device Output Start #####'
    print '\n'.join(output)
    print '\n##### Device Output End #####'

def main():
    print 'Program starting...\n'
    time.sleep(1)
    variables1()
    check_if_file_present()
    check_if_enough_space()
    upload_file()
    check_file_md5sum()
    are_you_sure(perform_upgrade)
    perform_upgrade_and_reboot()

if __name__ == '__main__':
    clear_screen()
    main()
My experience is:
don't use TFTP
...it's incredibly slow for large files
...it doesn't work well with some firewalls
...it depends on the server implementation whether large files are handled well
=> I'd guess your script would run just fine with a different TFTP server software...
Rather than troubleshooting TFTP, I'd suggest you
go with SCP
...it requires an open SSH port on your Nexus device
...if SSH is possible through your firewall, SCP is too; no extra rule required
+++ you can "push" the images from your laptop to your device without having to log in to the device
For example, use "putty scp" => pscp.exe:
//pscp  # Windows client
cd d:\DOWNLOADS
start pscp n7000-s1-kickstart.6.2.12.bin admin@10.10.10.11:bootflash:
start pscp n7000-s1-dk9.6.2.12.bin admin@10.10.10.11:bootflash:
This copies the nxos image and the kickstart image to a device, in parallel!
...easy to loop over several devices to add more parallel transfers
Btw, some "IOS"-based devices require additional flags:
pscp -2 -scp ...
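If you keep the paramiko-based approach from the question and the disconnect really is an idle-session timeout, SSH keepalives are worth a try. A minimal sketch reusing the question's variable names (the 30-second interval is an assumption):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(device, port=22, username=myuser, password=mypass)
# send a keepalive packet every 30 seconds so the session is not
# dropped while the long copy runs
ssh.get_transport().set_keepalive(30)
stdin, stdout, stderr = ssh.exec_command(cmd_1)
output = stdout.readlines()
ssh.close()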

Make python script processing large number of files faster

I have written a python script which takes a directory as input and lists all files in that directory; it then decompresses each of these files and does some extra processing on it. The code is very straightforward: it uses the list of files from os.listdir(directory), and for each file in the list it decompresses it and then executes a bunch of different system calls on it. My question is: is there any way to make the loop executions parallel, or to make the code run faster by leveraging the CPU cores, and what might that be? Below is some demo code to depict what I am aiming to optimize:
files = os.listdir(directory)
for file in files:
    os.system("tar -xvf %s" % file)
    os.system("Some other sys call")
    os.system("One more sys call")
EDIT: The sys calls are the only way possible, since I am using certain custom-made CLI utilities that expect decompressed files as input; hence the decompression.
Note that os.system() is synchronous, i.e. Python waits for the task to complete before going to the next line.
Here is a simplification of what I do on Windows 7 and Python 2.6.6.
You should be able to easily modify this for your needs:
1. create and run a process for each task I want to run in parallel
2. after they are all started, wait for them to complete
import win32api, win32con, win32process, win32event

def CreateMyProcess2(cmd):
    '''create a process with no window that runs a task, with or without arguments'''
    si = win32process.STARTUPINFO()
    info = win32process.CreateProcess(
        None,                                # AppName
        cmd,                                 # Command line
        None,                                # Process Security
        None,                                # Thread Security
        0,                                   # inherit Handles?
        win32process.NORMAL_PRIORITY_CLASS,
        None,                                # New environment
        None,                                # Current directory
        si)                                  # startup info
    # info is the tuple (hProcess, hThread, processId, threadId)
    return info[0]

if __name__ == '__main__':
    handles = []

    cmd = 'cmd /c "dir/w"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)

    cmd = 'cmd /c "path"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)

    rc = win32event.WaitForMultipleObjects(
        handles,  # sequence of objects (here: handles) to wait for
        1,        # wait for them all (use 0 to wait for just one)
        15000)    # timeout in milliseconds
    print rc  # rc = 0 if all tasks have completed before the timeout
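A portable alternative is multiprocessing.Pool (in the standard library since Python 2.6), which parallelizes the original loop directly. A sketch, where "other_tool" is a placeholder for the custom CLI utilities mentioned in the question:

import multiprocessing
import os
import subprocess

def process_file(path):
    # each worker decompresses one archive, then runs the follow-up tools
    subprocess.call(["tar", "-xvf", path])
    subprocess.call(["other_tool", path])  # placeholder for a custom utility

if __name__ == '__main__':
    directory = "."
    files = [os.path.join(directory, f) for f in os.listdir(directory)]
    pool = multiprocessing.Pool()  # defaults to one worker per CPU core
    pool.map(process_file, files)  # blocks until every file is processed
    pool.close()
    pool.join()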

Where does the Xilinx TCL shell emit the results?

I'm trying to develop a Python-based wrapper around the Xilinx ISE TCL shell xtclsh.exe. If it works, I'll add support for other shells like PlanAhead or Vivado.
So what's the big picture? I have a list of VHDL source files which form an IP core. I would like to open an existing ISE project, search for missing VHDL files, and add them if necessary. Because IP cores have overlapping file dependencies, it's possible that a project already contains some of the files, so I'm only looking for missing ones.
The example uses Python 3.x and subprocess with pipes. xtclsh.exe is launched and commands are sent line by line to the shell. The output is monitored for results. To simplify the example, I redirected STDERR to STDOUT. A dummy output POC_BOUNDARY is inserted into the command stream to indicate completed commands.
The attached example code can be tested by setting up an example ISE project with some VHDL source files.
My problem is that INFO, WARNING and ERROR messages are displayed, but the results from the TCL commands cannot be read by the script.
Manually executing search *.vhdl -type file in xtclsh.exe results in:
% search *.vhdl -type file
D:/git/PoC/src/common/config.vhdl
D:/git/PoC/src/common/utils.vhdl
D:/git/PoC/src/common/vectors.vhdl
Executing the script results in:
....
press ENTER for the next step
sending 'search *.vhdl -type file'
stdoutLine='POC_BOUNDARY
'
output consumed until boundary string
....
Questions:
Where does xtclsh write to?
How can I read the results from TCL commands?
Btw: The prompt sign % is also not visible to my script.
Python code to reproduce the behavior:
import subprocess

class XilinxTCLShellProcess(object):
    # executable = "sortnet_BitonicSort_tb.exe"
    executable = r"C:\Xilinx\14.7\ISE_DS\ISE\bin\nt64\xtclsh.exe"

    boundarString = "POC_BOUNDARY"
    boundarCommand = bytearray("puts {0}\n".format(boundarString), "ascii")

    def create(self, arguments):
        sysargs = []
        sysargs.append(self.executable)
        self.proc = subprocess.Popen(sysargs, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        self.sendBoundardCommand()

        while True:
            stdoutLine = self.proc.stdout.readline().decode()
            if self.boundarString in stdoutLine:
                break
        print("found boundary string")

    def terminate(self):
        self.proc.terminate()

    def sendBoundardCommand(self):
        self.proc.stdin.write(self.boundarCommand)
        self.proc.stdin.flush()

    def sendCommand(self, line):
        command = bytearray("{0}\n".format(line), "ascii")
        self.proc.stdin.write(command)
        self.sendBoundardCommand()

    def sendLine(self, line):
        self.sendCommand(line)
        while True:
            stdoutLine = self.proc.stdout.readline().decode()
            print("stdoutLine='{0}'".format(stdoutLine))
            if stdoutLine == "":
                print("reached EOF in stdout")
                break
            elif "vhdl" in stdoutLine:
                print("found a file name")
            elif self.boundarString in stdoutLine:
                print("output consumed until boundary string")
                break

def main():
    print("creating 'XilinxTCLShellProcess' instance")
    xtcl = XilinxTCLShellProcess()

    print("launching process")
    arguments = []
    xtcl.create(arguments)

    i = 1
    while True:
        print("press ENTER for the next step")
        from msvcrt import getch
        from time import sleep
        sleep(0.1)  # 0.1 seconds
        key = ord(getch())
        if key == 27:  # ESC
            print("aborting")
            print("sending 'exit'")
            xtcl.sendLine("exit")
            break
        elif key == 13:  # ENTER
            if i == 1:
                #print("sending 'project new test.xise'")
                #xtcl.sendLine("project new test.xise")
                print("sending 'project open PoCTest.xise'")
                xtcl.sendLine("project open PoCTest.xise")
                i += 1
            elif i == 2:
                print("sending 'lib_vhdl get PoC files'")
                xtcl.sendLine("lib_vhdl get PoC files")
                i += 1
            elif i == 3:
                print("sending 'search *.vhdl -type file'")
                xtcl.sendLine("search *.vhdl -type file")
                i += 1
            elif i == 4:
                print("sending 'xfile add ../../src/common/strings.vhdl -lib_vhdl PoC -view ALL'")
                xtcl.sendLine("xfile add ../../src/common/strings.vhdl -lib_vhdl PoC -view ALL")
                i += 16
            elif i == 20:
                print("sending 'project close'")
                xtcl.sendLine("project close")
                i += 1
            elif i == 21:
                print("sending 'exit'")
                xtcl.sendCommand("exit")
                break

    print("exit main()")
    xtcl.terminate()
    print("the end!")

# entry point
if __name__ == "__main__":
    main()
I have tried several approaches on Linux, but it seems that xtclsh detects whether standard input is connected to a pipe or a (pseudo) terminal. If it is connected to a pipe, xtclsh suppresses any output which would normally be written to standard output (prompt output, command results). I think the same applies to Windows.
Messages (whether informative, warning or error) which are printed on standard error still go there, even if the input is connected to a pipe.
To get the results printed on standard output, you can use the puts TCL command, which always prints on standard output. That is, puts [command] takes the result of command and always prints it to standard output.
Example: Let's assume we have a test.xise project with two files: the top-level entity in test.vhd and the testbench in test_tb.vhd. And we want to list all files in the project using this TCL script (commands.tcl):
puts [project open test]
puts "-----------------------------------------------------------------------"
puts [search *.vhd]
exit
Then the call xtclsh < commands.tcl 2> error.log prints this on standard output:
test
-----------------------------------------------------------------------
/home/zabel/tmp/test/test.vhd
/home/zabel/tmp/test/test_tb.vhd
And this is printed on standard error (into file error.log):
INFO:HDLCompiler:1061 - Parsing VHDL file "/home/zabel/tmp/test/test.vhd" into
library work
INFO:ProjectMgmt - Parsing design hierarchy completed successfully.
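Driving this from Python then amounts to feeding the script file to xtclsh and keeping the two streams separate. A minimal sketch of the same call (file names match the example above; xtclsh is assumed to be on the PATH):

import subprocess

# Python equivalent of: xtclsh < commands.tcl 2> error.log
with open("commands.tcl", "rb") as script, open("error.log", "wb") as errlog:
    proc = subprocess.Popen(["xtclsh"], stdin=script,
                            stdout=subprocess.PIPE, stderr=errlog)
    stdout, _ = proc.communicate()

# the puts-wrapped command results arrive on standard output
print(stdout.decode())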

Django Subprocess live/unbuffered reporting of stdout from batch scripts

Basically, the back story is that I've built a selection of Python scripts for a client to process importing and exporting batch jobs between their operations database and their ecommerce site database. This works fine. These scripts write to stdout to update the user on the status of the batch script.
I'm trying to produce a framework for these scripts to be run via a Django view and to post the stdout to the webpage, to show the user the progress of these batch processes.
The plan was to:
- call the batch script as a subprocess and save stdout and stderr to a file
- return a redirect to a display page that reloads every 2 seconds and displays, line by line, the contents of the file that stdout is being written to
However, the problem is that the stdout/stderr file is not actually written to until the entire batch script has finished running or errors out.
I've tried a number of things, but none seem to work.
Here's the current view code:
def long_running(app, filename):
    """where app is ['command', 'arg1', 'arg2'] and filename is the file used for output"""
    # where to write the result (something like /tmp/some-unique-id)
    fullname = temppath + filename
    f = file(fullname, "a+")
    # launch the script which outputs something slowly
    subprocess.Popen(app, stdout=f, stderr=f)  # .communicate()
    # once the script is done, close the output
    f.close()

def attributeexport(request):
    filename = "%d_attribute" % (int(time.time()))  # set the filename to the current time stamp plus an identifier
    app = ['python', '/home/windsor/django/applications/attribute_exports.py']
    # break thread for processing.
    threading.Thread(target=long_running, args=(app, filename)).start()
    return HttpResponseRedirect('/scripts/dynamic/' + filename + '/')

def dynamic(request, viewfile):
    fileobj = open(temppath + viewfile, 'r')
    results = []
    for line in fileobj:
        results.append(line)
        if '~END' in line:
            # if the process has completed
            return render_to_response('scripts/static.html', {'displaylist': results, 'filename': viewfile})
    return render_to_response('scripts/dynamic.html', {'displaylist': results, 'filename': viewfile})
It helps if you use the following (the -u switch makes Python run unbuffered, so output reaches the file as it is produced instead of only when the process exits):
['python', '-u', 'path/to/python/script.py']
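With the -u flag in place, long_running can also wait for the child and append the '~END' marker that the dynamic view looks for. A sketch under the question's assumptions (temppath as defined elsewhere in the original code):

import subprocess

def long_running(app, filename):
    # app is a ['command', 'arg', ...] list; filename receives the live output
    fullname = temppath + filename  # temppath as in the question's views
    f = open(fullname, "a+")
    proc = subprocess.Popen(app, stdout=f, stderr=f)
    proc.wait()        # blocks only this worker thread, not the request
    f.write('~END\n')  # marker that tells the dynamic view the job finished
    f.close()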
