I am trying to escape the following command so I can grab the version of an iDevice attached via USB:
system_profiler SPUSBDataType | sed -n -e 's/ */ /g' -e '/iPad/,/Version/p' -e '/iPhone/,/Version/p' | grep 'iPad\|iPhone\|Version' | awk 'NR%2{printf $0;next;}1'
I want to run it via Popen, but I always get an issue with the iPad\|iPhone\|Version part. My code is the following, in an attempt to escape the single quotes:
cmd1 = Popen([r'system_profiler', 'SPUSBDataType'], stdout=subprocess.PIPE)
cmd2 = Popen([r'sed','-n','-e','\'s/ */ /g\'','-e','\'/iPad/,/Version/p\'', '-e', '\'/iPhone/,/Version/p\''], stdin=cmd1.stdout, stdout=subprocess.PIPE)
cmd3 = Popen([r'grep', '\'iPad\|iPhone\|Version\''], stdin=cmd2.stdout, stdout=subprocess.PIPE)
cmd4 = Popen([r'awk', '\'NR%2{printf $0;next;}1\''], stdin=cmd3.stdout, stdout=subprocess.PIPE)
cmd1.stdout.close()
ver = cmd4.communicate()[0]
Use a raw string literal, or double the backslashes; backslash sequences have meaning in Python string literal syntax too, so it is easy to end up with a different backslash count in the resulting value than you intended. You don't need those single quotes either: when you type the pipeline in a terminal, the shell removes them before grep, sed and awk ever see their arguments, but Popen passes them through literally:
cmd3 = Popen([r'grep', r"iPad\|iPhone\|Version"], stdin=cmd2.stdout, stdout=subprocess.PIPE)
It'd be much easier to apply the string filtering and replacements in Python code, in my opinion.
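For example, here is a rough sketch of doing the whole extraction in Python instead of sed/grep/awk (the exact layout of the system_profiler text, e.g. an "iPhone:" heading followed later by a "Version:" line, is an assumption based on typical output):

import re
import subprocess

# One call, no shell, so there is nothing to escape.
out = subprocess.check_output(['system_profiler', 'SPUSBDataType'],
                              universal_newlines=True)

devices = []
current = None
for line in out.splitlines():
    line = line.strip()
    m = re.match(r'(iPad|iPhone|iPod)\b', line)
    if m:
        current = m.group(1)   # remember which device section we are in
    elif current and line.startswith('Version:'):
        devices.append((current, line.split(':', 1)[1].strip()))
        current = None

print(devices)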
I played around with grep and managed to extract what I needed from system_profiler. However, Martijn's answer is more suitable if you cannot grep for the necessary string.
prof = Popen(['system_profiler', 'SPUSBDataType'], stdout=subprocess.PIPE)
grep1 = Popen(['grep','-e','iPhone','-e','iPad','-e','iPod', '-A', '4'], stdin=prof.stdout, stdout=subprocess.PIPE)
grep2 = Popen(['grep', 'Version'], stdin=grep1.stdout, stdout=subprocess.PIPE)
prof.stdout.close() # Allow prof to receive a SIGPIPE if grep1 exits.
stdoutver = grep2.communicate()[0]
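As a small follow-up, this is one way to reduce stdoutver to plain version strings (a sketch, assuming Python 3 where communicate() returns bytes and the matched lines look like "Version: x.y"):

versions = [line.split(':', 1)[1].strip()
            for line in stdoutver.decode('utf-8').splitlines()
            if ':' in line]
print(versions)  # one version string per attached device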
Related
I've been trying to pass a command to ffmpeg that only works with literal double quotes on the command line around the "concat:file1|file2" argument.
I can't, however, make this work from Python with subprocess.Popen(). Does anyone have an idea how to pass quotes into subprocess.Popen?
Here is the code:
command = "ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4"
output, error = subprocess.Popen(command, universal_newlines=True,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE).communicate()
When I do this, ffmpeg won't accept the concat segment any way other than with quotes around it. Is there a way to successfully pass this line to subprocess.Popen?
I'd suggest using the list form of invocation rather than the quoted string version:
command = ["ffmpeg", "-i", "concat:1.ts|2.ts", "-vcodec", "copy",
"-acodec", "copy", "temp.mp4"]
output,error = subprocess.Popen(
command, universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
This more accurately represents the exact set of parameters that are going to be passed to the end process and eliminates the need to mess around with shell quoting.
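If you already have the command as one shell-style string, shlex.split can do the quote-aware splitting into a list for you (a sketch; shlex follows POSIX quoting rules, so treat it with care on Windows):

import shlex

command = shlex.split('ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4')
print(command)
# ['ffmpeg', '-i', 'concat:1.ts|2.ts', '-vcodec', 'copy', '-acodec', 'copy', 'temp.mp4']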
That said, if you absolutely want to use the plain string version, just use different quotes (and shell=True):
command = 'ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4'
output, error = subprocess.Popen(
    command, universal_newlines=True, shell=True,
    stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
Either use single quotes 'around the "whole pattern"' to automatically escape the doubles or explicitly "escape the \"double quotes\"". Your problem has nothing to do with Popen as such.
Just for the record, I had a problem particularly with a list-based command passed to Popen that would not preserve proper double quotes around a glob pattern (i.e. what was suggested in the accepted answer) under Windows. Joining the list into a string with ' '.join(cmd) before passing it to Popen solved the problem.
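Roughly, that workaround looks like this (note that a plain ' '.join adds no quoting of its own, so it only helps when the list elements already contain the quotes the Windows program expects; subprocess.list2cmdline is an alternative that applies the MS C runtime quoting rules):

import subprocess

cmd = ['ffmpeg', '-i', '"concat:1.ts|2.ts"', '-vcodec', 'copy',
       '-acodec', 'copy', 'temp.mp4']

# On Windows, Popen also accepts a single command string, and the join
# preserves the double quotes exactly as written.
p = subprocess.Popen(' '.join(cmd))
p.wait()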
This works with Python 2.7.3. The way to pipe stderr to stdout has changed since older versions of Python:
Put this in a file called test.py:
#!/usr/bin/python
import subprocess
command = 'php -r "echo gethostname();"'
p = subprocess.Popen(command, universal_newlines=True, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
text = p.stdout.read()
retcode = p.wait()
print text
Invoke it:
python test.py
It prints my hostname, which is apollo:
apollo
Read up on the manual for subprocess: http://docs.python.org/2/library/subprocess.html
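For anyone on Python 3, a roughly equivalent version using subprocess.run would look like this (a sketch, assuming php is on the PATH):

import subprocess

command = 'php -r "echo gethostname();"'
result = subprocess.run(command, shell=True, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(result.stdout)
print(result.returncode)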
I have been working with a similar issue, running a relatively complex command over ssh. It also had multiple double quotes and single quotes, because I was piping the command through Python, ssh, PowerShell, etc.
If you can instead just convert the command into a shell script, and run the
shell script through subprocess.call/Popen/run, these issues will go away.
So depending on whether you are on Windows, Linux, or Mac, put the following in a shell script (script.sh, or script.bat on Windows):
ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4
Then you can run
import subprocess; subprocess.call("./script.sh", shell=True)
Without having to worry about single quotes, etc.
This line of code in your question isn't valid Python syntax:
command = "ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4"
If you had a Python file with just this line in it, you would get a syntax error. A string literal surrounded with double quotes can't contain double quotes unless they are escaped with a backslash. So you could fix that line by replacing it with:
command = "ffmpeg -i \"concat:1.ts|2.ts\" -vcodec copy -acodec copy temp.mp4"
Another way to fix this line is to use single quotes for the string literal in Python, that way Python is not confused when the string itself contains a double quote:
command = 'ffmpeg -i "concat:1.ts|2.ts" -vcodec copy -acodec copy temp.mp4'
Once you have fixed the syntax error, you can then tackle the issue with using subprocess, as explained in this answer. I also wrote this answer to explain a helpful mental model for subprocess in general.
I was also struggling with a string argument containing spaces while not wanting to use shell=True.
The solution was to use double quotes for the inside strings.
import subprocess

args = ['salt', '-G', 'environment:DEV', 'grains.setvals',
        '{"man_version": "man-dev-2.3"}']
try:
    p = subprocess.Popen(args,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    (stdout, stderr) = p.communicate()
except (subprocess.CalledProcessError, OSError) as err:
    exit(1)
if p.returncode != 0:
    print("Failure in returncode of command:")
For anybody suffering from this pain: joining the params into a single string and running it through the shell also works, even when the params are enclosed in quotation marks.
params = ["ls", "-la"]
subprocess.check_output(" ".join(params), shell=True)
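If any of the params might themselves contain spaces or quotes, it is safer to quote each one before joining; a minimal sketch using shlex.quote (Python 3.3+), with a made-up directory name just for illustration:

import shlex
import subprocess

params = ["ls", "-la", "some dir with spaces"]   # hypothetical argument with spaces
cmd = " ".join(shlex.quote(p) for p in params)
output = subprocess.check_output(cmd, shell=True)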
I'm trying to run the following command in a Python script:
sudo sed -i 's/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/g' /etc/openvpn/US-East.ovpn
The command above runs fine in a terminal.
My Python script looks like this:
import subprocess
subprocess.Popen(["sudo", "sed", "-i", "'s/auth-user-pass/auth-user-pass", "\/etc\/openvpn\/credentials/g'", "/etc/openvpn/US-East.ovpn"], stdout=subprocess.PIPE, text=True)
But I get the following error when I run the script
sed: -e expression #1, char 1: unknown command: `''
I thought some characters (like the single quote) might need escaping, but everything I've tried so far hasn't worked.
I'm quite lost; can anyone help?
Since your command line is:
sudo sed -i 's/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/g' /etc/openvpn/US-East.ovpn
You need to remove the single quotes (the shell does that for you) and you need to keep the whole of the single-quoted argument as one argument, not splitting it at spaces (the shell doesn't split at spaces inside a quoted string):
subprocess.Popen(["sudo", "sed", "-i", "s/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/g", "/etc/openvpn/US-East.ovpn"], stdout=subprocess.PIPE, text=True)
The sed command does not expect to see the single quotes.
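For completeness, a minimal sketch of the same call with the sed expression written as a raw string (so the \/ sequences reach sed untouched) and the result checked; text=True requires Python 3.7+:

import subprocess

p = subprocess.Popen(
    ["sudo", "sed", "-i",
     r"s/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/g",
     "/etc/openvpn/US-East.ovpn"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
out, err = p.communicate()
if p.returncode != 0:
    print("sed failed:", err)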
Can you test this with the sed expression kept as a single list element, without the comma splitting it into two arguments?
import subprocess
subprocess.Popen(["sudo", "sed", "-i", "'s/auth-user-pass/auth-user-pass \/etc\/openvpn\/credentials/g'", "/etc/openvpn/US-East.ovpn"], stdout=subprocess.PIPE, text=True)
I have a shell command which parses certain content and gives the required output. I need to implement this in Python, but the shell command has a newline character "\n" which does not behave the same when the command is run from Python.
Of the many lines in the output log, the required line looks like - configurationFile=/app/log/conf/the_jvm_name.4021.logback.xml
I would only need the_jvm_name from the above. The syntax will always be the same. The shell command works fine.
Shell Command -
ps -ef | grep 12345 | tr " " "\n" | grep logback.configurationFile | awk -F"/" '{print $NF}'| cut -d. -f1
Python (escaped all the required double quotes) -
import subprocess
pid_arr = "12345"
sh_command = "ps -ef | grep "+pid_arr+" | tr \" \" \"\n\" | grep configurationFile | awk -F \"/\" '{print $NF}' | cut -d. -f1"
outpt = subprocess.Popen(sh_command , shell=True,stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip()
With Python, I'm not getting the desired output; it just prints configurationFile as it appears in the command.
What am I missing here? Is there a better way of getting these details?
You can achieve what you want using a regex substitution in Python:
import re
import subprocess

output = subprocess.check_output(["ps", "-ef"], universal_newlines=True)
for line in output.splitlines():
    if re.search("12345", line):
        # Keep only the part of the path after the last '/' and before the first '.'
        output = re.sub(r".*configurationFile=.*/([^.]+).*", r"\1", line)
This captures the part after the last / in the configuration file path, up to the next ..
You could make it slightly more robust by checking only the second column (the PID) for 12345, either by splitting each line on white space:
cols = re.split(r"\s+", line)
if len(cols) > 1 and cols[1] == "12345":
or by using a better regex, like:
if re.match(r"\S+\s+12345\s", line):
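Putting those pieces together, a rough end-to-end version might look like this (the PID and the configurationFile= key are taken from the question):

import re
import subprocess

pid = "12345"
output = subprocess.check_output(["ps", "-ef"], universal_newlines=True)

jvm_name = None
for line in output.splitlines():
    # Only accept the PID in the second column of ps -ef output.
    if re.match(r"\S+\s+" + re.escape(pid) + r"\s", line):
        m = re.search(r"configurationFile=.*/([^.]+)", line)
        if m:
            jvm_name = m.group(1)
            break

print(jvm_name)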
Note that you could also shorten your pipe considerably by just doing something like:
ps -ef | sed -nE '/12345/ { s/.*configurationFile=.*\/([^.]*).*/\1/; p }'
Your shell command works, but it has to deal with too many lines of output and too many fields per line. An easier solution is to tell the ps command to just give you 1 line and on that line, just one field that you care about. For example, on my system:
ps -o cmd h 979
will output:
/usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
The -o cmd flag outputs only the CMD column, while the h parameter tells ps to omit the header. Finally, the 979 is the process ID, which tells ps to output information just for that process.
This output is not exactly what you have in your problem, but it is similar enough. Once we have limited the output, we eliminate the need for other commands such as grep, awk, and so on. At this point, we can use a regular expression to extract what we want:
from __future__ import print_function
import re
import subprocess
pid = '979'
command = ['ps', '-o', 'cmd', 'h', pid]
output = subprocess.check_output(command)
pattern = re.compile(r"""
    config-file=    # Literal string search
    .+\/            # Everything up to the last forward slash
    ([^.]+)         # Non-dot chars, this is what we want
    """, re.VERBOSE)
matched = pattern.search(output)
if matched:
    print(matched.group(1))
Notes
For the regular expression, I am using the verbose form, which allows me to use comments to annotate my pattern. I like this style, as regular expressions can be difficult to read.
On your system, please adjust the "configuration-file" part to work with your output.
How do I remove the b prefix from the output of this Python 3 script?
import subprocess
get_data=subprocess.check_output(["df -k | awk '{print $6}'"],shell=True)
data_arr=get_data.split()
data_arr.pop(0)
data_arr.pop(0)
for i in data_arr[:]:
    print(str(i))
Output
b'/dev/shm'
b'/run'
b'/sys/fs/cgroup'
b'/'
b'/tmp'
b'/test'
b'/boot'
b'/home'
b'/var'
b'/mnt/install'
b'/mnt/snapshot'
b'/mnt/share'
b'/mnt/storage'
b'/mnt/linux'
b'/mnt/download'
b'/run/user/1001'
The b prefix indicates that the output of check_output is a bytes object rather than a str. The best way to remove it is to convert the output to a string before you do any further work on it:
byte_data=subprocess.check_output(["df -k | awk '{print $6}'"],shell=True)
str_data = byte_data.decode('utf-8')
data_arr=str_data.split()
...
The decode method will take care of any unicode you may have in the string. If your default encoding (or the one used by awk I suppose) is not UTF-8, substitute the correct one in the example above.
Possibly an even better way to get around this issue is to tell check_output to open stdout as a text stream. The easiest way is to add a universal_newlines=True argument, which will use the default encoding for your current locale:
str_data = subprocess.check_output(["df -k | awk '{print $6}'"], shell=True, universal_newlines=True)
Alternatively, you can specify an explicit encoding:
str_data = subprocess.check_output(["df -k | awk '{print $6}'"], shell=True, encoding='utf-8')
In both of these cases, you do not need to decode because the output will already be str rather than bytes.
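Another option is to skip the shell and awk entirely and do the column split in Python, which avoids the bytes-vs-str question from the start; a sketch (it mimics awk's $6, so mount points containing spaces would still confuse it):

import subprocess

out = subprocess.check_output(["df", "-k"], universal_newlines=True)

mounts = []
for line in out.splitlines()[1:]:        # skip the header row
    fields = line.split()
    if len(fields) >= 6:
        mounts.append(fields[5])         # the same column awk's $6 selects
print(mounts)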
From my SO question:
import subprocess

read_key = ["binary", "arg1", "arg2", "arg3"]
proc = subprocess.Popen(read_key, shell=False, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, encoding='utf-8')
output = proc.communicate()[0]
print(output)
MY_EXPECTED_OUTPUT_STRING
Try this:
a = str(yourvalue, 'utf-8')
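For example, a minimal sketch of using that in context, assuming yourvalue is the bytes object returned by check_output:

import subprocess

yourvalue = subprocess.check_output(["hostname"])
a = str(yourvalue, 'utf-8').strip()
print(a)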