Python subprocess wildcard is selecting only first file - python

I want to delete all the below files:
20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_04
20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_05
20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_08
20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_09
20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_10
In Linux I simply do:
rm 20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_*
But when I do the same from a Python script, it deletes only the first file matching the pattern, not all of them:
temp = subprocess.Popen('rm 20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_*',
                        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Can anyone tell me why it's not working, and what I should do instead?
The complete Python function is:
def remove(filename):
    try:
        cmd = 'rm ' + filename
        print(cmd)
        temp = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        stdout, stderr = temp.communicate()
        if stderr:
            print('Error while running rm command.')
        print("Result of running rm command: ", stdout)
    except CalledProcessError as e:
        pass

Since you're in Python, why not remove them directly from Python rather than calling a shell command?
import glob
import os

for filename in glob.glob(pattern):
    os.remove(filename)
Documentation:
os.remove()
glob.glob()
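Applied to the question's remove() helper, a shell-free version might look like this (a sketch keeping the original function name, with the glob pattern passed in as the argument):

```python
import glob
import os

def remove(pattern):
    """Delete every file matching the glob pattern, with no shell involved."""
    # glob.glob() does the wildcard expansion that the shell would normally do,
    # so each match can be removed directly.
    for filename in glob.glob(pattern):
        os.remove(filename)

# e.g. remove('20200922_051424_00011_v4wzh_db508ed0-b8b9-488b-a796-773d1fb4045c_*')
```

This sidesteps the quoting and expansion pitfalls of building an rm command string entirely.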

Related

FileNotFound error when executing subprocess.run()

I would like to run a command in Python using subprocess.run.
I would like to switch the working directory JUST for the execution of this command.
Also, I need to record the output and the return code.
Here is the code I have:
import subprocess
result = subprocess.run("echo \"blah\"", cwd=directory, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
but this only raises
FileNotFoundError: [Errno 2] No such file or directory: 'echo "Running ls -la" && ls -la'
I also tried using the following arguments:
subprocess.run(["echo", "\"blah\""], cwd=directory, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Like Jean-François Fabre said, the solution is to add "shell=True" to the call
import subprocess
result = subprocess.run("echo \"blah\"", cwd=directory, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
shell=True tells subprocess to pass the string to the shell, which parses and runs it as a command line.
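For completeness, the list form from the second attempt also works without shell=True once the extra escaped quotes are dropped (a sketch, with "." standing in for the directory variable; the inner quotes were only needed when a shell was parsing the string):

```python
import subprocess

# With a list, the first element is the program and the rest are its
# arguments; no shell is involved, so no shell quoting is needed.
result = subprocess.run(["echo", "blah"], cwd=".",
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout)  # b'blah\n'
```

With the original `["echo", "\"blah\""]`, the quotes would have been printed literally as part of the output.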

Python: subprocess.call and variants fail for a particular application from executed .py but not from python in CLI

I have a strange issue here - I have an application that I'm attempting to launch from Python, but all attempts to launch it from within a .py script fail without any discernible output. I'm testing from within the VSCode debugger. Here are some additional oddities:
When I swap notepad.exe into the .py instead of my target application's path, notepad launches ok.
When I run the script line by line from the CLI (start by launching python, then type out the next 4-5 lines of Python), the script works as expected.
Examples:
#This works in the .py, and from the CLI
import subprocess
cmd = ['C:\\Windows\\system32\\notepad.exe', 'C:\\temp\\myfiles\\test_24.xml']
pipe = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
pipe.wait()
print(pipe)
#This fails in the .py, but works ok when pasted in line by line from the CLI
import subprocess
cmd = ['C:\\temp\\temp_app\\target_application.exe', 'C:\\temp\\myfiles\\test_24.xml']
pipe = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
pipe.wait()
print(pipe)
The result is no output when running the .py
I've tried several other variants, including the following:
import subprocess
tup = 'C:\\temp\\temp_app\\target_application.exe C:\temp\test\test_24.xml'
proc = subprocess.Popen(tup)
proc.wait()
(stdout, stderr) = proc.communicate()
print(stdout)
if proc.returncode != 0:
    print("The error is: " + str(stderr))
else:
    print("Executed: " + str(tup))
Result:
None
The error is: None
1.082381010055542
Now this method indicates there is an error, because we return something other than 0 and print "The error is: None", and this is because stderr is "None". So, is it throwing an error without giving an error?
stdout is also reporting "None".
So, let's try check_call and see what happens:
print("Trying check_call")
try:
    subprocess.check_call('C:\\temp\\temp_app\\target_application.exe C:\\temp\\test\\test_24.xml', shell=True)
except subprocess.CalledProcessError as error:
    print(error)
Results:
Trying check_call
Command 'C:\temp\temp_app\target_application.exe C:\temp\test\test_24.xml' returned non-zero exit status 1.
I've additionally tried subprocess.run, although it is missing the wait procedure I was hoping to use.
import subprocess
tup = 'C:\\temp\\temp_app\\target_application.exe C:\temp\test\test_24.xml'
proc = subprocess.run(tup, check=True)
proc.wait()
(stdout, stderr) = proc.communicate()
print(stdout)
if proc.returncode != 0:
    print("The error is: " + str(stderr))
else:
    print("Executed: " + str(tup))
What reasons might be worth chasing, or what other ways of trying to catch an error might work here? I don't know how to interpret an exit status of 1 as an error result.
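One way to surface whatever the application writes on failure is to capture both streams with subprocess.run (a sketch; since target_application.exe isn't available here, a small Python one-liner stands in for a program that fails with output on stderr):

```python
import subprocess
import sys

# Stand-in for target_application.exe: a program that writes to stderr
# and exits with status 1, so we can see how the streams come back.
cmd = [sys.executable, "-c",
       "import sys; sys.stderr.write('boom\\n'); sys.exit(1)"]

# capture_output=True collects both stdout and stderr; text=True decodes
# them to str. run() waits for the process, so no separate wait() is needed.
proc = subprocess.run(cmd, capture_output=True, text=True)
print("return code:", proc.returncode)
print("stderr:", proc.stderr)
```

If the target application prints its complaint to stderr, it shows up here instead of being silently swallowed by an unread pipe.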

Unix Popen.communicate not able to gzip large file

I need to gzip files larger than 10 GB using Python on top of shell commands, and hence decided to use subprocess Popen.
Here is my code:
outputdir = '/mnt/json/output/'
inp_cmd = 'gzip -r ' + outputdir
pipe = Popen(["bash"], stdout=PIPE, stdin=PIPE, stderr=PIPE)
cmd = bytes(inp_cmd.encode('utf8'))
stdout_data, stderr_data = pipe.communicate(input=cmd)
It is not gzip-ing the files within output directory.
Any way out?
One fix is to use subprocess.call() instead of Popen() plus communicate().
call() runs the command and waits until it has finished completely, while with a bare Popen() one has to explicitly call wait() (or communicate()) for the execution to finish.
Have you tried it like this:
import subprocess

output_dir = "/mnt/json/output/"
cmd = "gzip -r {}".format(output_dir)
proc = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    stdin=subprocess.PIPE,
    shell=True,
)
out, err = proc.communicate()

Using ls in Python subprocess.Popen function

proc = subprocess.Popen(['ls', '-v', self.localDbPath + 'labris.urls.*'], stdout=subprocess.PIPE)
while True:
    line = proc.stdout.readline()
    if line != '':
        print line
    else:
        break
When using the above code I get the error saying:
ls: /var/lib/labrisDB/labris.urls.*: No such file or directory
But when I do the same from the shell I get no errors:
ls -v /var/lib/labrisDB/labris.urls.*
Also this doesn't give any error either:
proc = subprocess.Popen(['ls', '-v', self.localDbPath], stdout=subprocess.PIPE)
while True:
    line = proc.stdout.readline()
    if line != '':
        print line
    else:
        break
Why is the first code failing? What am I missing?
You get the error because subprocess does not expand * the way bash does.
Change your code like this:
from glob import glob
proc = subprocess.Popen(['ls', '-v'] + glob(self.localDbPath+'labris.urls.*'), stdout=subprocess.PIPE)
Here is more information about glob expansion in python and solutions: Shell expansion in Python subprocess
Globbing is done by the shell. So when you're running ls * in a terminal, your shell is actually calling ls file1 file2 file3 ....
If you want to do something similar, you should have a look at the glob module, or just run your command through a shell:
proc = subprocess.Popen('ls -v ' + self.localDbPath + 'labris.urls.*',
                        shell=True,
                        stdout=subprocess.PIPE)
(If you choose the latter, be sure to read the security warnings!)

How to execute shell command get the output and pwd after the command in Python

How can I execute a shell command (which can be as complicated as a normal command on the bash command line), get the output of that command, and get the pwd after execution?
I used function like this:
import subprocess as sub

def execv(command, path):
    p = sub.Popen(['/bin/bash', '-c', command],
                  stdout=sub.PIPE, stderr=sub.STDOUT, cwd=path)
    return p.stdout.read()[:-1]
I check whether the user ran a cd command, but that does not work when the user changes directory via a symlink or some other unusual way.
And I need a dictionary that holds {'cwd': '<NEW PATH>', 'result': '<COMMAND OUTPUT>'}
If you use subprocess.Popen, you get a Popen object whose communicate() method gives you the command output, and whose pid attribute gives you the process id. I'd be really surprised if you can't find a way to get the current working directory of a process by its pid...
e.g.: http://www.cyberciti.biz/tips/linux-report-current-working-directory-of-process.html
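On Linux that lookup can be done through procfs once you have the pid (a sketch; assumes /proc is available, and uses sleep as a stand-in child process):

```python
import os
import subprocess

# Start a child process with a known working directory.
proc = subprocess.Popen(["sleep", "1"], cwd="/tmp")

# /proc/<pid>/cwd is a symlink to the process's current working directory.
cwd = os.readlink("/proc/%d/cwd" % proc.pid)
proc.wait()
print(cwd)  # /tmp
```

Note this only reflects the directory while the process is still alive; once it exits, the /proc entry is gone, which is why the answers below bake pwd into the command itself.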
I redirect the stdout of the pwd command to stderr. If stdout is empty and stderr is not a path, then stderr holds the error output of the command.
import os
import subprocess as sub

def execv(command, path):
    command = 'cd %s && %s && pwd 1>&2' % (path, command)
    proc = sub.Popen(['/bin/bash', '-c', command],
                     stdout=sub.PIPE, stderr=sub.PIPE)
    stderr = proc.stderr.read()[:-1]
    stdout = proc.stdout.read()[:-1]
    if stdout == '' and not os.path.exists(stderr):
        raise Exception(stderr)
    return {
        "cwd": stderr,
        "stdout": stdout
    }
UPDATE: here is a better implementation (it takes the last line of output for pwd and doesn't use stderr):
def execv(command, path):
    command = 'cd %s && %s 2>&1;pwd' % (path, command)
    proc = sub.Popen(['/bin/bash', '-c', command],
                     env={'TERM': 'linux'},
                     stdout=sub.PIPE)
    stdout = proc.stdout.read()
    if len(stdout) > 1 and stdout[-1] == '\n':
        stdout = stdout[:-1]
    lines = stdout.split('\n')
    cwd = lines[-1]
    stdout = '\n'.join(lines[:-1])
    return {
        "cwd": cwd,
        "stdout": man_to_ansi(stdout)
    }
To get output of an arbitrary shell command with its final cwd (assuming there is no newline in the cwd):
from subprocess import check_output

def command_output_and_cwd(command, path):
    lines = check_output(command + "; pwd", shell=True, cwd=path).splitlines()
    return dict(cwd=lines[-1], stdout=b"\n".join(lines[:-1]))
