subprocess.getoutput() returns limited-length string - python

I use subprocess.getoutput() to get the output of a shell command running inside a Python script, in my case the top Linux command:
output = subprocess.getoutput("top -bc -n 1 | grep some_process_name")
Unfortunately, the output string of the function is limited to 80 characters. If the output is longer, I just get the first 80 characters.
Is there any other way to get the full output of long shell commands?

You could pipe the output to a text file and then read the text file back. (The 80-character limit almost certainly comes from top itself, which clips each line to the assumed terminal width in batch mode, not from subprocess.getoutput(), which does not truncate.)
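A minimal sketch of that approach (the file path is illustrative, and the COLUMNS override widens top's batch output only on procps versions that honor it):
import subprocess

# Send the command's output to a file instead of capturing it directly;
# COLUMNS=512 asks top for wider lines in batch mode (version-dependent).
subprocess.run(
    "COLUMNS=512 top -bc -n 1 | grep some_process_name > /tmp/top_output.txt",
    shell=True)
# Read the file back into a Python string.
with open("/tmp/top_output.txt") as f:
    output = f.read()
print(output)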

Related

Calling python script from bash script and getting its return value

I have a bash script doing some tasks, but I need to manipulate a string obtained from configuration (for simplicity, in this test it's hardcoded). This manipulation can be done easily in Python but is not simple in bash, so I've written a Python script doing these tasks and returning a string (or ideally an array of strings).
I'm calling this Python script in my bash script. Both scripts are in the same directory, and this directory is added to the PATH environment variable. I'm testing it on Ubuntu 22.04.
My python script below:
#!/usr/bin/python
def Get(input: str) -> str:
    # Doing tasks - arr is an output array
    return ' '.join(arr)  # or ideally return arr
My bash script used to call the above python script
#!/bin/bash
ARR=("$(python -c "from test import Get; Get('val1, val2,val3')")")
echo $ARR
for ELEMENT in "${ARR[@]}"; do
    echo "$ELEMENT"
done
When I added a print in the Python script for test purposes I got proper results, so the Python script works correctly. But in the bash script I simply got an empty line. I've also tried something like this: ARR=("$(python -c "from test import Get; RES=Get('val1, val2,val3')")") and then iterated over RES, and got the same response.
It seems like the bash script cannot handle the data returned by Python.
How can I rewrite these scripts to properly get the Python script's response in bash?
Is it possible to get the whole array or only the string?
How can I rewrite these scripts to properly get the Python script's response in bash?
Serialize the data on the Python side and deserialize it on the bash side. Decide on a proper protocol between the processes that preserves any characters.
The best option looks to be newline- or zero-separated strings as the protocol. Output delimiter-separated elements from Python (serialize) and read them with readarray on the bash side (deserialize).
$ tmp=$(python -c 'arr=[1,2,3]; print(*arr)')
$ readarray -t array <<<"$tmp"
$ declare -p array
declare -a array=([0]="1" [1]="2" [2]="3")
Or with a zero-separated stream. Note that Bash can't store zero bytes in variables, so we use redirection with process substitution:
$ readarray -d '' -t array < <(python -c 'arr=[1,2,3]; print(*arr, sep="\0", end="")')
$ declare -p array
declare -a array=([0]="1" [1]="2" [2]="3")
I've solved my problem by outputting a string with elements separated by spaces.
I've also rewritten the Python code to be a script rather than a function.
import sys

if len(sys.argv) > 1:
    input = sys.argv[1]
    # Doing tasks - arr is an output array
    for element in arr:
        print(element)
ARRAY=$(python script.py 'val1, val2,val3')
for ELEMENT in $ARRAY; do
    echo "$ELEMENT"
done

dbus-monitor output reading

I am trying to trigger a Python script or shell script whenever a desktop notification arrives, using dbus-monitor.
I am using the command this way:
dbus-monitor "interface='org.freedesktop.Notifications'" | grep --line-buffered "string" | xargs -I '{}' python3 ./test.py {}
After that, I send a desktop notification from another terminal using:
notify-send "hello" "world"
The output for the above notification is:
string "notify-send"
string ""
string "hello"
string "world "
string "urgency"
string "notify-send"
string ""
string "hello"
string "world "
string "urgency"
But since the output of this command is 10 lines, the Python script gets called for every line.
My expectation is to call the Python script once per notification, with all the output passed in a single line as a parameter to the Python script.
It is wise to take advantage of systemd's integration with D-Bus.
Using systemd integration, the programmer has better control over the D-Bus integration, and can also take advantage of systemd's logging and monitoring mechanisms.
There is a good article here about systemd D-Bus with Python.
There is also a closely related answer to your question in this answer.
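As a related direction, you can also subscribe to the notifications directly from Python instead of parsing dbus-monitor output, so each Notify() call arrives as a single callback. A minimal sketch, assuming the dbus-python and PyGObject packages are installed (on newer D-Bus daemons the eavesdrop rule may require monitor privileges); on_notification is an illustrative name:
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def on_notification(bus, message):
    # Each Notify() call arrives as one message whose arguments include
    # app_name, replaces_id, app_icon, summary and body.
    if message.get_member() == "Notify":
        print(message.get_args_list())

DBusGMainLoop(set_as_default=True)
session = dbus.SessionBus()
session.add_match_string(
    "interface='org.freedesktop.Notifications',member='Notify',eavesdrop=true")
session.add_message_filter(on_notification)
GLib.MainLoop().run()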

Why is sys.argv giving me an index out of range?

I have a text file with content like this:
honda motor co of japan doesn't expect output at its car manufacturing plant in thailand
When I run wc -l textfile.txt, I receive 0.
The problem is that I am running a Python script that needs to count the number of lines in this text file and run accordingly. I have tried two ways of computing the number of lines, but they both keep giving me 0 and my code refuses to run.
Python code:
import sys

# Way 1
with open(sys.argv[1]) as myfile:
    row = sum(1 for line in myfile)
print(row)

# Way 2
row = run("cat %s | wc -l" % sys.argv[1]).split()[0]
I receive an error that says:
with open(sys.argv[1]) as myfile
IndexError: list index out of range
I am calling this script, passing the file, from PHP:
exec('python testthis.py $file 2>&1', $output);
I suspect that sys.argv[1] is giving me an error.
There's nothing wrong with the first example of your Python code (way 1).
The problem is the PHP calling code; the string being passed to exec() uses single quotes which prevents the expansion of the $file variable into the command string. The resulting call therefore passes the literal string $file as the argument to exec(), which in turn runs the command in a shell. That shell treats $file as a shell variable and tries to expand it, but it is not defined, and so it expands to an empty string. The resulting call is:
python testthis.py 2>&1
to which Python raises IndexError: list index out of range because it is missing an argument.
To fix use double quotes around the command when calling exec() in PHP:
$file = 'test.txt';
exec("python testthis.py $file 2>&1", $output);
Now $file can be expanded into the string as required.
This does assume that you actually want to expand a PHP variable into the string. Because exec() runs the command in a shell, it is also possible to have the variable defined in the shell's environment, and it will be expanded by the shell into the final command. To do this you would use single quotes around the command passed to exec().
Note that for this file the Python code of "way 1" will report a line count of 1, whereas wc -l reports 0, because the file has no trailing newline and wc -l counts newline characters.
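A quick way to see that difference from Python (t.txt is an illustrative name):
# Write a file with content but no trailing newline; iterating it in Python
# still yields one line, while `wc -l t.txt` prints 0.
with open('t.txt', 'w') as f:
    f.write('one line without a trailing newline')
print(sum(1 for line in open('t.txt')))  # prints 1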

Storing value from a parsed ping

I'm working on some code that performs a ping operation from Python and extracts only the latency by using awk. This is currently what I have:
from os import system
l = system("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'")
print l
The system() call works fine, but I get the output in the terminal rather than the value being stored into l. Basically, an example output I'd get from this particular block of code would be
90.3
0
Why does this happen, and how would I go about actually storing that value into l? This is part of a larger thing I'm working on, so preferably I'd like to keep it in native python.
Use subprocess.check_output if you want to store the output in a variable:
from subprocess import check_output
l = check_output("ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'", shell=True)
print l
Related: Extra zero after executing a python script
os.system() returns the return code of the called command, not the command's output to stdout. That is why you see 90.3 (the command writing to the terminal) followed by 0 (the return code that was stored in l).
For detail on how to properly get the command's output (including pre-Python 2.7), see this: Running shell command from Python and capturing the output
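On Python 3.7+, a sketch of the same capture using subprocess.run (sitename is the question's placeholder):
from subprocess import run

# capture_output collects stdout; text decodes the bytes to str
result = run(
    "ping -c 1 sitename | awk -F = 'FNR==2 {print substr($4,1,length($4)-3)}'",
    shell=True, capture_output=True, text=True)
latency = result.stdout.strip()
print(latency)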
BTW, I would use the ping package: https://pypi.python.org/pypi/ping
It looks promising.
Here is how I store the output in a variable (in bash):
test=$(ping -c 1 google.com | awk -F"=| " 'NR==2 {print $11}')
echo "$test"
34.9

Avoid subprocess.Popen auto escaping my backslashes in grep

I'm trying to write an svn pre-commit hook in Python. Part of this involves checking the diff file to see if there are any actual file changes (as opposed to just property changes).
I have a working grep command which I can execute fine in the shell:
grep "^\(Added: \|Modified: \|Deleted: \)" diff filename | grep -v 'svn:'
However, when I put it through subprocess.Popen it escapes all my backslashes, which knackers the regexp.
Executing command: ['grep', '"^\\(Added: \\|Modified: \\|Deleted: \\)"', ...]
How do I avoid this?
NB: I'm aware that I can pipe results between subprocesses and I can do the two greps that way. I need help getting the first one working first though :/
NB2: I also tried using filterdiff --clean instead and couldn't get it to work. Searching for Added, Modified or Deleted lines, removing those with 'svn:' in them, and checking I had some results seemed to work, though.
Python code:
command = ['grep', '"^\(Added: \|Modified: \|Deleted: \)"', filename]
sys.stdout.write('Executing command: %s\n' % (command))
p = subprocess.Popen(command,
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     shell=True)
data = p.stdout.read()
if len(data) == 0:
    sys.stdout.write("Diff does not contain any file modifications.\n")
    exit(0)
You need to consider what you want grep to see in its command line arguments.
The first argument needs to be the literal string "^\(Added: \|Modified: \|Deleted: \)", so that means that it shouldn't include the double quotes but should include the backslashes.
The way to express this kind of string is to use Python raw strings:
command = ['grep', r'^\(Added: \|Modified: \|Deleted: \)', filename]
A good way to check what you're actually running is to replace grep by echo so you can at least see what you're passing to the command.
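For the chained-grep variant mentioned in the question's first note, a sketch along these lines should work (the filename value is illustrative; the hook would supply the real diff file):
import subprocess
import sys

filename = 'changes.diff'  # illustrative name

# First grep: keep only Added/Modified/Deleted lines from the diff.
p1 = subprocess.Popen(
    ['grep', r'^\(Added: \|Modified: \|Deleted: \)', filename],
    stdout=subprocess.PIPE)
# Second grep: drop property changes such as 'svn:' lines.
p2 = subprocess.Popen(['grep', '-v', 'svn:'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits first
data = p2.communicate()[0]
if not data:
    sys.stdout.write("Diff does not contain any file modifications.\n")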
