how to execute 7zip commands from python script - python

I am trying to get a basic idea of how os.system can be used to execute 7zip commands. For now I don't want to complicate things with Popen or subprocess. I have installed 7zip and copied the 7zip.exe into my users folder. I just want to extract my test file install.zip. However, using the code below causes the shell to appear briefly before exiting, and no unzip occurs. Could you please tell me why?
def main():
    try:
        os.system(r"C:\Users\Oulton\ 7z e C:\Users\Oulton\install.zip")
    except:
        time.sleep(3)
        traceback.print_exc

if __name__ == "__main__":
    main()
Many Thanks

There are several problems with the following line:
os.system("C:\Users\Oulton\ 7z e C:\Users\Oulton\install.zip ")
Since your string contains backslashes, you should use a raw string:
os.system(r"C:\Users\Oulton\7z e C:\Users\Oulton\install.zip")
(note the r before the first double quote.)
I've also removed the extraneous spaces. The first one (before the 7z) was definitely problematic.
Also note that the traceback.print_exc does not call the function. You need to add parentheses: traceback.print_exc().
Finally, in new code the subprocess module is recommended in preference to os.system().
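As a minimal sketch of that recommendation: a small helper around subprocess.run that passes arguments as a list, which sidesteps the quoting problems os.system has with spaces in Windows paths. The helper name run_tool and the commented-out 7-Zip paths are illustrative assumptions, not verified paths.

```python
import subprocess

def run_tool(exe, *args):
    """Run an external tool and return (exit_code, combined_output).

    Arguments are passed as a list, so no shell quoting is involved.
    stderr is folded into stdout so errors are visible in one place.
    """
    result = subprocess.run([exe, *args],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            text=True)
    return result.returncode, result.stdout

# Hypothetical 7-Zip call -- adjust the paths to your installation:
# code, out = run_tool(r"C:\Users\Oulton\7z.exe", "e",
#                      r"C:\Users\Oulton\install.zip", "-y")
```

subprocess.run needs Python 3.5+ (and text= needs 3.7; use universal_newlines=True on older versions).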

Can be done using the subprocess module:
import subprocess

beforezip = r"D:\kr\file"   # full location
afterzip = "filename.zip"
zip_cmd = "7z a \"%s\" \"%s\"" % (afterzip, beforezip)
retV = subprocess.Popen(zip_cmd, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
outData = retV.stdout.readlines()

Related

How to insert os.system output into a text file? [duplicate]

I want to get the stdout in a variable after running the os.system call.
Lets take this line as an example:
batcmd="dir"
result = os.system(batcmd)
result will only contain the exit code of the command (0 under Windows for the above example), not its output.
How can I get the stdout for the above command without using redirection in the executed command?
If all you need is the stdout output, then take a look at subprocess.check_output():
import subprocess
batcmd="dir"
result = subprocess.check_output(batcmd, shell=True)
Because you were using os.system(), you'd have to set shell=True to get the same behaviour. You do want to heed the security concerns about passing untrusted arguments to your shell.
If you need to capture stderr as well, simply add stderr=subprocess.STDOUT to the call:
result = subprocess.check_output(batcmd, stderr=subprocess.STDOUT, shell=True)
to redirect the error output to the default output stream.
If you know that the output is text, add text=True to decode the returned bytes value with the platform default encoding; use encoding="..." instead if that codec is not correct for the data you receive.
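For instance, a sketch of the pattern above; sys.executable stands in for the "dir" example so the command is portable, and the command string itself is only illustrative:

```python
import subprocess
import sys

# A shell command whose output we want; stderr is folded into stdout.
cmd = '"%s" -c "print(42)"' % sys.executable
result = subprocess.check_output(cmd, stderr=subprocess.STDOUT,
                                 shell=True, text=True)
print(result.strip())
```

Note that check_output raises CalledProcessError if the command exits non-zero.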
These answers didn't work for me. I had to use the following:
import subprocess
p = subprocess.Popen(["pwd"], stdout=subprocess.PIPE)
out = p.stdout.read()
print out
Or as a function (using shell=True was required for me on Python 2.6.7 and check_output was not added until 2.7, making it unusable here):
def system_call(command):
    p = subprocess.Popen([command], stdout=subprocess.PIPE, shell=True)
    return p.stdout.read()
import subprocess

string = "echo Hello world"
result = subprocess.getoutput(string)
print("result::: ", result)
I had to use os.system, since subprocess was giving me a memory error for larger tasks. Reference for this problem here. So, in order to get the output of the os.system command I used this workaround:
import os
batcmd = 'dir'
result_code = os.system(batcmd + ' > output.txt')
if os.path.exists('output.txt'):
    fp = open('output.txt', "r")
    output = fp.read()
    fp.close()
    os.remove('output.txt')
    print(output)
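The same redirect trick can be made a little safer with a temporary file, so an existing output.txt is never clobbered. A sketch; the helper name system_output is my own, not from the original answer:

```python
import os
import tempfile

def system_output(cmd):
    """Capture stdout of an os.system-style call via a temporary file."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    os.close(fd)  # the command writes to the file itself; we only need the name
    try:
        code = os.system('%s > "%s"' % (cmd, path))
        with open(path) as fp:
            return code, fp.read()
    finally:
        os.remove(path)
```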
I would like to expand on the Windows solution. Using IDLE with Python 2.7.5, when I run this code from the file Expts.py:
import subprocess
r = subprocess.check_output('cmd.exe dir',shell=False)
print r
...in the Python Shell, I ONLY get the output corresponding to "cmd.exe"; the "dir" part is ignored. HOWEVER, when I add a switch such as /K or /C ...
import subprocess
r = subprocess.check_output('cmd.exe /K dir',shell=False)
print r
...then in the Python Shell, I get all that I expect including the directory listing. Woohoo !
Now, if I try any of those same things in DOS Python command window, without the switch, or with the /K switch, it appears to make the window hang because it is running a subprocess cmd.exe and it awaiting further input - type 'exit' then hit [enter] to release. But with the /K switch it works perfectly and returns you to the python prompt. Allrightee then.
Went a step further...I thought this was cool...When I instead do this in Expts.py:
import subprocess
r = subprocess.call("cmd.exe dir",shell=False)
print r
...a new DOS window pops open and remains there displaying only the results of "cmd.exe" not of "dir". When I add the /C switch, the DOS window opens and closes very fast before I can see anything (as expected, because /C terminates when done). When I instead add the /K switch, the DOS window pops open and remain, AND I get all the output I expect including the directory listing.
If I try the same thing (subprocess.call instead of subprocess.check_output) from a DOS Python command window; all output is within the same window, there are no popup windows. Without the switch, again the "dir" part is ignored, AND the prompt changes from the python prompt to the DOS prompt (since a cmd.exe subprocess is running in python; again type 'exit' and you will revert to the python prompt). Adding the /K switch prints out the directory listing and changes the prompt from python to DOS since /K does not terminate the subprocess. Changing the switch to /C gives us all the output expected AND returns to the python prompt since the subprocess terminates in accordance with /C.
Sorry for the long-winded response, but I am frustrated on this board with the many terse 'answers' which at best don't work (seems because they are not tested - like Eduard F's response above mine which is missing the switch) or worse, are so terse that they don't help much at all (e.g., 'try subprocess instead of os.system' ... yeah, OK, now what ??). In contrast, I have provided solutions which I tested, and showed how there are subtle differences between them. Took a lot of time but...
Hope this helps.
commands also works.
import commands
batcmd = "dir"
result = commands.getoutput(batcmd)
print result
It works on Linux with Python 2.7 (the commands module was removed in Python 3; use subprocess.getoutput there instead).

Controlling a python script from another script

I am trying to learn how to write a script control.py, that runs another script test.py in a loop for a certain number of times, in each run, reads its output and halts it if some predefined output is printed (e.g. the text 'stop now'), and the loop continues its iteration (once test.py has finished, either on its own, or by force). So something along the lines:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now': #stop the current test.py process and continue with next iteration
        #output here is supposed to contain what test.py prints
The problem with the above is that it does not check the output of test.py as it is running; instead it waits until the test.py process has finished on its own, right?
Basically I am trying to learn how I can use a python script to control another one as it is running (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random

perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess

command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
You can use the subprocess module or also os.popen:
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen (for instance).
With popen you can read the connected file object and check whether "stop now" is there.
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can only check its return code at the end of execution.
With subprocess.call you can run it in a new terminal, or if you want to call ONLY test.py multiple times, then you can put your script in a def main() and run main as much as you want till the "stop now" is generated.
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call --> os.system('test.py *args >> /tmp/mickey.txt') and then check the file at each round.
As said, popen returns a file object that you can access.
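A minimal os.popen sketch along those lines, using echo as a stand-in for test.py's output:

```python
import os

# os.popen returns a file-like object connected to the command's stdout,
# so the parent can scan it for a sentinel such as "stop now".
with os.popen("echo stop now") as pipe:
    output = pipe.read()
found = "stop now" in output
print(found)
```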
What you are hinting at in your comment to Marc Cabos' answer is Threading
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However if you want to run the test.py concurrently and access variables as they are changed then you need to consider threading.
Yes, you can use Python to control another program using stdin/stdout, but when consuming another process's output there is often a buffering problem; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on if the program is started from a terminal or not.
If you are the author of both programs then probably is better using another interprocess channel where the flushing is explicitly controlled by the code, like sockets.
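When the child is itself a Python script, one common mitigation for the buffering problem is to start it with the -u flag so its stdout is unbuffered. A sketch, with an inline stand-in child:

```python
import subprocess
import sys

# A tiny stand-in child that prints three lines with pauses between them.
child = "import time\nfor i in range(3):\n    print(i)\n    time.sleep(0.1)\n"

# -u disables the child's output buffering, so lines arrive as they are printed.
p = subprocess.Popen([sys.executable, "-u", "-c", child],
                     stdout=subprocess.PIPE, text=True)
lines = [line.strip() for line in p.stdout]
p.wait()
print(lines)
```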
You can use the "subprocess" library for that.
import subprocess

command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline()
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            #Do whatever you want
            rc = p.poll() #Exit Code
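In Python 3 the same loop needs text=True (otherwise readline() returns bytes and never equals 'stop now') and a strip() before comparing. A sketch with an inline stand-in for test.py:

```python
import subprocess
import sys

# Inline stand-in for test.py: a few lines, then the sentinel.
child = "print('working')\nprint('stop now')\nprint('never reached by the parent')"

p = subprocess.Popen([sys.executable, "-c", child],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                     text=True)  # text=True: readline() returns str, not bytes
seen = []
while True:
    output = p.stdout.readline()
    if output == "" and p.poll() is not None:
        break
    if output.strip() == "stop now":  # strip() drops the trailing newline
        p.kill()
        break
    seen.append(output.strip())
p.wait()
print(seen)
```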

Using os.system and subprocess.Popen

I have been trying to write a python script to control my virtual machine remotely. For example, I want to run a program on my VM and check whether it is running. I tried os.system and it seems to work, but for better control I would like to use subprocess; unfortunately I had no success. I have provided my code for both os.system and subprocess as follows:
os.system("vmrun -gu root -gp mohsen77 runProgramInGuest F:\Client\Ubuntu10464bit.vmx usr/bin/tcpreplay --intf1=eth4 /home/srastega/output.pcap")
print('PID is:' + str(os.getpid()))
args = shlex.split("vmrun -gu root -gp mohsen77 runProgramInGuest F:\Client\Ubuntu10464bit.vmx /bin/ls -l")
tcpreplay = subprocess.Popen(args, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=True)
out, err = tcpreplay.communicate()
print(out)
print('PID is:' + str(tcpreplay.pid))
For the second piece of code I am receiving an error:
Error: Cannot open VM: F:\ClientUbuntu10464bit.vmx, The virtual machine cannot be found
The backslashes in the double-quoted string may have special meaning, which could cause them to disappear. One likely culprit is shlex.split(), which in its default POSIX mode treats a backslash as an escape character and strips it. Try doubling the backslashes, passing posix=False to shlex.split, or changing to forward slashes.
(Lifted to an answer from the discussion in the comments.)
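A quick way to see what shlex.split does to backslashes, and why posix=False preserves them (the vmrun path here is only illustrative):

```python
import shlex

cmd = r"vmrun F:\Client\Ubuntu.vmx"
print(shlex.split(cmd))               # POSIX mode eats the backslashes
print(shlex.split(cmd, posix=False))  # backslashes survive
```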

running TCL through python by subprocess, but not giving any output

I am trying to run my tcl script through python subprocess as follow:
import subprocess
>>> subprocess.Popen(["tclsh", "tcltest.tcl"])
<subprocess.Popen object at 0x0000000001DD4DD8>
>>> subprocess.Popen(["tclsh", "tcltest.tcl"], shell=True )
<subprocess.Popen object at 0x0000000002B34550>
I don't know if it is working or not, since I don't see anything from it!
My tcl script also has some packages from my company that cause errors when I use Tkinter, Tk, and eval:
import Tkinter
import socket

def TCLRun():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('127.0.0.1', 5006))
    root = Tkinter.Tk()
    ## root.tk.eval('package require Azimuth-Sdk')
    tcl_script ="""
    ##package require Company-Sdk
    ## set players [ace_azplayer_list_players]
    set players 2
    puts $players
    ## if { $players != "" } {
    ##     foreach player $players {
    ##         set cmd ace_azplayer_remove_player
    ##         if { [catch { [ $cmd $player ] } err] } {
    ##             puts " $cmd $player - $err"
    ##             sleep 1
    ##         }
    ##     }
    ## } """
    # call the Tkinter tcl interpreter
    root.tk.call('eval', tcl_script)
    root.mainloop()
gives me this error
import TCLCall
>>> TCLCall.TCLRun()
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    TCLCall.TCLRun()
  File "C:\Users\XXX\Desktop\PKT\TCLCall.py", line 24, in TCLRun
    root.tk.call('eval', tcl_script)
TclError: can not find channel named "stdout"
That's why I switched to subprocess; at least it doesn't give me an error!
Any idea how to run my tcl script, with its internally required package, through python?
Thanks
To get the output from using subprocess.Popen, you can try the following:
import subprocess
p = subprocess.Popen(
    "tclsh tcltest.tcl",
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print stdout
print stderr
It's entirely possible that the script you're running with subprocess.Popen is also generating an error, but isn't displaying since you aren't explicitly looking for it.
Edit:
To prevent some information from being lost in the comments below:
You probably have several potential errors here, or things you can try.
Either your tcl script by itself isn't able to import teapot properly, or some sort of interaction between the tcl script and the python script isn't working properly, or subprocess.Popen isn't correctly finding the teapot package from your path.
I would try debugging your programs in that order. First confirm that your tcl script works without using python or subprocess.Popen and just directly run it from the command line (for example, C:\Users\blah tclsh tcltest.tcl)
Then, after you've made sure your script work, bring in Python. From what I'm seeing, you don't have any problem with the python stuff, but with either your tcl script or some issue with your path.
The whole point of subprocess.Popen is redirection of standard channels, so you can handle output programmatically instead of seeing it on your own standard output. Did you try handling it? How?
Maybe you don't need redirection at all: then os.system("tclsh tcltest.tcl") should be enough. Or maybe subprocess.Popen has other advantages for you -- then figure out how to disable redirection, or how to redirect child's stdout to your own stdout.
I think I might have a solution for you. Or at least a different method to try. This example opens the TCL shell as a process. Then you pump commands to it as though you are on command line. Since you said your command line worked, I would think that this would too. This works for me on Windows with Python 3.7.6 and TCL 8.5.
There needs to be a little trickery. I've found solutions that require threads and all sorts of other overhead to get this job done but they just fell flat. What I came up with is simple and synchronous.
stdout.readline() will block. So if your TCL command doesn't throw anything back, you are dead in the water.
SO, you force something to come back by appending an innocuous TCL puts command that communicates back to the Python script that the job is done.
The other trick is that you need to "puts []" the command to force the output back into stdout.
If you need to source more TCL files, then add those in before you make your final call to what you are trying to run. Whatever you need to do on the cli, you do through this process.
My code has this all wrapped in a class with methods. I brought them all together here in-line for the example. Leave the errorInfo code in while debugging and remove as you see fit.
Note: You can also use TCL "info body " to read a script into a string variable and then run each line through the process. In essence, if in a Python debugging session, stepping through TCL. Sorry, this does not work too well. The workaround for comments does not work for opening curly braces.
Hope this helps someone out there.
EDIT: Using multi-line string handles comment lines.
import subprocess

print("------")
shell_path = r"<YOUR PATH TO THE TCL INTERPRETER SHELL>"
tcl_shell = subprocess.Popen(shell_path,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE,
                             universal_newlines=True,
                             bufsize=0)
#
# Use ONE of the "command"s below. I have them here in series for examples.
#
# Simple
command = r"set temp 5"
# Multiline with error --> This will give an error of course to see the error extraction in action
command = r"""
set temp 5
set temp2 [expr temp + 5]
"""
# Multiline working
command = r"""
set temp 5
set temp2 [expr $temp + 5]
"""
# More output
command = r"""
puts "Starting process"
set temp 5
puts $temp
set temp2 [expr $temp + 5]
"""
# Comments handled
command = r"# comment!"

# Be sure to leave the newline to handle comments
tcl_shell.stdin.write(f"""puts [{command}
] \nputs \"tcl_shell_cmd_complete\"\n""")
# NOTE: tcl_shell.stdin.flush() does not seem to be needed. Consider if needed.
result = ""
line = tcl_shell.stdout.readline()
while line != "tcl_shell_cmd_complete\n":
    result += line
    line = tcl_shell.stdout.readline()
print(f"tcl_shell sent:\n{command}")
print(f"tcl_shell result:\n{result}".rstrip())

command_error_check = "puts $errorInfo"
tcl_shell.stdin.write(f"{command_error_check} \nputs \"tcl_shell_cmd_complete\"\n")
resultErr = ""
line = tcl_shell.stdout.readline()
while line != "tcl_shell_cmd_complete\n":
    resultErr += line
    line = tcl_shell.stdout.readline()
print(f"tcl_shell error info:\n{resultErr}")

tcl_shell.stdin.close()
tcl_shell.terminate()
tcl_shell.wait(timeout=0.5)
print("------")

Can we execute multiple commands in the same command prompt launched by python

There is an Excel file which contains the paths of multiple scenarios. I am using os.system(command) in a for loop. In each iteration, a path is read from the Excel file and the scenario for that path is executed.
My problem is that every time os.system() is used, a CMD window opens, executes one command and closes. In the next iteration the second path is read and executed, and its CMD window closes again, so CMD windows pop up again and again. The system is busy during that period and cannot be used for other tasks. I want to execute all the commands (scenarios) in one CMD window so that I can minimize it and use the system for other tasks.
In each iteration, there are two main steps:
os.chdir(PATH)
os.system(path of exe + " " + name of config file that is present at PATH)
Can it be done using subprocess? If yes, please give me an example of how it can be implemented.
If you want to use the subprocess module, try something like this:
from subprocess import call
import os.path

def call_scenario(path, config_file):
    retcode = call(["path/of/exe", os.path.join(path, config_file)])
    if retcode != 0:
        print "Something bad happened : %s" % retcode
When using subprocess.call, the shell=False parameter avoids launching a cmd window to do the work.
It can be done - here is a quick example using multiprocessing (Python 2.6 or newer).
The example below uses a Unix command ("ls") and Unix paths ("/usr", etc.), but just replace those with the commands and paths you need.
from multiprocessing import Process
import os

paths = ["/tmp", "/usr", "/usr/include"]

def exec_(path):
    p = Process()
    p.run = lambda: os.system("ls %s" % path)
    p.start()

for path in paths:
    exec_(path)
Another option, if you need sophisticated control over what is running, return codes, etc., is to make use of the Fabric project.
Although it is aimed at executing multiple commands on different hosts using ssh, I think it could be used for different paths on the same host.
URL for Fabric:
http://docs.fabfile.org/en/1.3.3/index.html
To run c:\path\to\exe with the config.ini from each path simultaneously, changing the current directory to the corresponding path before each is executed:
from subprocess import Popen
processes = [Popen([r"c:\path\to\exe", "config.ini"], cwd=path) for path in paths]
for p in processes: p.wait()
If you don't want to run all commands in parallel then use subprocess.call() with the same arguments as for subprocess.Popen().
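A sequential sketch of that: subprocess.call accepts the same arguments (including cwd) and waits for each command to finish before starting the next; sys.executable stands in for the real exe path here.

```python
import subprocess
import sys

paths = ["."]  # hypothetical scenario directories
codes = [subprocess.call([sys.executable, "-c", "print('scenario done')"],
                         cwd=path)
         for path in paths]
print(codes)  # one exit code per scenario
```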
