I have a bash script which I run on my .csv file, and then I run a Python script on the output of the bash script. I would like to combine everything into a single script, but the bash script is quite complex and I couldn't find a way to use it from Python.
grep "$(grep -E "tcp|udp" results.csv | grep -E "Critical|High|Medium" | awk -F "\"*,\"*" '{print $8}')" results.csv | sort -t',' -k4,4 -k8,8 | awk -F "\"*,\"*" '{print $5,"port",$7"/"$6,$8}' | sed '/tcp\|udp/!d' | awk '!a[$0]++' | sed '/,port,\/,/d' > out
I tried this both as a string and as a parameterized command with subprocess, but there seem to be too many special characters for everything to work.
Is there a simpler way to run this command in Python?
P.S. I know there are multiple questions & answers regarding this same topic, but none of them worked for me.
Try escaping all the double quotes with \, then let us know if it works:
os.system(" grep \"$(grep -E \"tcp|udp\" results.csv | grep -E \"Critical|High|Medium\" | awk -F \"\\\"*,\\\"*\" '{print $8}')\" results.csv | sort -t',' -k4,4 -k8,8 | awk -F \"\\\"*,\\\"*\" '{print $5,\"port\",$7\"/\"$6,$8}' | sed '/tcp\|udp/!d' | awk '!a[$0]++' | sed '/,port,\/,/d' > out ")
The whole command can then be passed as a single string of the form "your_command_with \"escaped\" double quotes".
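If the escaping gets unwieldy, a raw triple-quoted string lets you paste the pipeline essentially verbatim, since the inner quotes and the sed backslashes no longer need escaping. This is only a sketch of the same command, run through subprocess instead of os.system:

import subprocess

# The pipeline from the question, pasted unchanged inside a raw triple-quoted string.
cmd = r"""grep "$(grep -E "tcp|udp" results.csv | grep -E "Critical|High|Medium" | awk -F "\"*,\"*" '{print $8}')" results.csv | sort -t',' -k4,4 -k8,8 | awk -F "\"*,\"*" '{print $5,"port",$7"/"$6,$8}' | sed '/tcp\|udp/!d' | awk '!a[$0]++' | sed '/,port,\/,/d' > out"""

subprocess.run(cmd, shell=True)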
Have a nice day
I am running this Python function on SUSE Linux to grep the IP of a node from /etc/hosts:
def mm_node():
    import os
    node_name = os.system("`cat /etc/hosts | egrep -i mm | grep om | awk '{print $1}'`")
    return node_name

mm_node()
As a result, it is showing this weird output
sh: 192.168.10.10: command not found
instead of
192.168.10.10
If I run the shell command
(cat /etc/hosts | egrep -i mm | grep om | awk '{print $1}')
directly at the Linux command prompt, it gives the output as
192.168.10.10
Backquotes tell the shell to capture the output and use it on the command line, typically as argument to a command, as in grep `whoami` /etc/passwd. In your case the command line consists only of a backquoted pipeline, so the shell interprets the output of the pipeline as the command to execute. That is why it complains that the IP address is "not found".
If your intention is to capture the output of the pipeline to use in your Python code, you should use the subprocess module, which is the modern alternative to os.system that allows easy capturing of the output. For example:
import subprocess

def mm_node():
    output = subprocess.run(
        "cat /etc/hosts | egrep -i mm | grep om | awk '{print $1}'",
        shell=True,
        capture_output=True
    ).stdout
    return output.strip()

print(mm_node())
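Note that with capture_output=True the .stdout value is bytes; if you would rather get a plain str back, you can also pass text=True (available since Python 3.7):

import subprocess

output = subprocess.run(
    "cat /etc/hosts | egrep -i mm | grep om | awk '{print $1}'",
    shell=True,
    capture_output=True,
    text=True,   # decode stdout from bytes to str
).stdout.strip()
print(output)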
For comparison, just removing the backquotes makes the "command not found" error go away, but os.system only returns the command's exit status, not its output, so it still will not give you the IP address:

def mm_node():
    import os
    node_name = os.system("cat /etc/hosts | egrep -i mm | grep om | awk '{print $1}'")
    return node_name

mm_node()
We want to run the following shell command from a Python script (we use Python 2.7):
echo hadoop-hdfs-namenode - 2.6.4.0-91| grep hadoop-hdfs-namenode | awk '{print $NF}' | awk '{printf "%.1f\n", $NF}'
This prints 2.6.
So I created the following Python script to get that result:
import os
os.system("echo hadoop-hdfs-namenode - 2.6.4.0-91| grep hadoop-hdfs-namenode | awk '{print $NF}' | awk '{printf "%.1f\n", $NF}' ")
but when we run it, we get:
os.system("echo hadoop-hdfs-namenode - 2.6.4.0-91| grep hadoop-hdfs-namenode | awk '{print $NF}' | awk '{printf "%.1f\n", $NF}' ")
^
SyntaxError: invalid syntax
Is it possible to run this complicated shell pipeline from Python in order to get the expected result, 2.6? And how do I fix my syntax?
escape " and \n : os.system("echo hadoop-hdfs-namenode - 2.6.4.0-91| grep hadoop-hdfs-namenode | awk '{print $NF}' | awk '{printf \" %.1f\\n \", $NF}' ") .
As a side note, os.system executes the command (a string) in a subshell and returns the command's exit code. If you need the output, take a look at the subprocess module: https://docs.python.org/3/library/subprocess.html
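If you want to use the 2.6 value inside the script rather than just print it, something along these lines should work on both Python 2.7 and 3 (a sketch; the pipeline is copied from the question):

import subprocess

cmd = ("echo hadoop-hdfs-namenode - 2.6.4.0-91 | grep hadoop-hdfs-namenode"
       " | awk '{print $NF}' | awk '{printf \"%.1f\\n\", $NF}'")
# check_output runs the pipeline in a shell and returns its stdout
version = subprocess.check_output(cmd, shell=True).strip()
print(version)  # 2.6 (bytes on Python 3, str on Python 2.7)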
I have the following command, but it does not work. Can anyone help me figure out the issue?
cur_usage=os.popen("""df -k \tmp |tail -1 | awk '{{print $4"\n"$5}}'| grep '%'|tr -d '%'""").read()
print(cur_usage)
There are a couple of things you need to do here:
If you're on a Unix-like OS, you should change \tmp to /tmp
You need to either change \n to \\n or mark the string as a raw string.
One of either of the following should work for you:
curr_usage = os.popen("""df -k /tmp |tail -1 | awk '{{print $4"\\n"$5}}'| grep '%'|tr -d '%'""").read().strip()
or
curr_usage = os.popen(r"""df -k /tmp |tail -1 | awk '{{print $4"\n"$5}}'| grep '%'|tr -d '%'""").read().strip()
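Alternatively, if all you need is the usage percentage of the filesystem holding /tmp, you can compute it without a shell at all. This is only a sketch, and the figure can differ slightly from df's Use% column because df accounts for reserved blocks:

import shutil

usage = shutil.disk_usage("/tmp")   # named tuple: total, used, free (in bytes)
percent_used = round(100 * usage.used / usage.total)
print(percent_used)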
I have Raspbian as the Linux distro running on my RPi. I've set up a small socket server using Twisted, and it receives certain commands from an iOS app. These commands are strings. I start a process when I receive "st", and I want to kill it when I get "sp". This is the way I tried:
I imported os, used os.system("...") to start the process, and os.system("...") to kill it. Let's say the service is named xyz.
This is the exact way I tried to kill it:
os.system('ps axf | grep xyz | grep -v grep | awk '{print "kill " $1 }' | sh')
But I got a syntax error. The command runs perfectly when I try it in a terminal on its own. Is this the wrong way to do it in a Python script? How do I fix it?
You will need to escape the quotes in your string:
os.system('ps axf | grep xyz | grep -v grep | awk \'{print "kill " $1 }\' | sh')
Or use a triple quote:
os.system('''ps axf | grep xyz | grep -v grep | awk '{print "kill " $1 }' | sh''')
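If pkill is available (it ships with the procps tools on Debian-based systems such as Raspbian), the same job can be done without the ps/grep/awk chain; this swaps in a different tool, so treat it as a sketch:

import os

# pkill -f matches against the full command line, like the grep above
os.system("pkill -f xyz")        # sends SIGTERM by default
# os.system("pkill -9 -f xyz")   # SIGKILL, if it refuses to exit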
Alternatively, open the process with Popen(...).pid and then use os.kill()
from subprocess import Popen
import os, signal

my_pid = Popen('/home/rolf/test1.sh').pid
os.kill(int(my_pid), signal.SIGKILL)
Remember to include a shebang in your script (#!/bin/sh)
Edit:
On second thoughts, perhaps
os.kill(int(my_pid), signal.SIGTERM)
is probably a better way to end the process; it at least gives the process a chance to shut down gracefully.
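A Popen object also has terminate() and kill() methods of its own, so you can hold on to the object instead of the pid. A minimal sketch, using the same script path as above:

from subprocess import Popen

proc = Popen('/home/rolf/test1.sh')
# ... later, when the "sp" command arrives:
proc.terminate()   # SIGTERM, gives the script a chance to clean up
proc.wait()        # reap the child so it does not linger as a zombie
# proc.kill()      # SIGKILL, if it will not stop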
I have a benchmark thread running and it takes a couple of hours to run.
The script that initiates the benchmark thread is written in Python.
It prints out some random "foo" and I want to grep it for further use.
So, I wrote a shell script that does this.
#!/bin/bash
id = `taskset -c 0 python <path>/run-apps.py <thread> | grep "pid" | awk '{print $2}'`
echo $id
Since the thread takes a very long time, the shell script seems unable to move on to the next line until the execution is over, so I cannot print the id as soon as the thread starts.
Do you see any problem? How can I rectify this?
This statement
echo $id
cannot run until the previous statement
id=`taskset -c 0 python <path>/run-apps.py <thread> | grep "pid" | awk '{print $2}'`
completes. If you don't need $id, get rid of it and simply run
taskset -c 0 python <path>/run-apps.py <thread> | grep "pid" | awk '{print $2}'
to see the output as it is generated (but you may need to disable buffering, as pointed out by Martijn). If you do need $id, you can use the tee command
to store a copy of the output and print it to standard error at the same time:
id=$(taskset -c 0 python <path>/run-apps.py <thread> |\
grep "pid" | awk '{print $2}' | tee /dev/stderr) # Or some other file descriptor that goes to your terminal
A third option is to use a temporary file.
taskset -c 0 python <path>/run-apps.py <thread> | grep "pid" | awk '{print $2}' > tmpfile &
tail --pid $! -f tmpfile # Watch tmpfile until the backgrounded job completes
do-other-job --reading-from tmpfile
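If the shell wrapper exists only to pick up that pid line, yet another option is to launch the benchmark from Python and read its output line by line, acting on the pid as soon as it appears. This is a sketch: the placeholders and the "pid ..." line format are taken from the question, and run-apps.py may still need unbuffered output (e.g. python -u) for its lines to arrive promptly through the pipe:

import subprocess

# Launch the benchmark pinned to CPU 0, mirroring the taskset invocation above.
proc = subprocess.Popen(
    ["taskset", "-c", "0", "python", "<path>/run-apps.py", "<thread>"],
    stdout=subprocess.PIPE,
    text=True,
)

benchmark_pid = None
for line in proc.stdout:                  # lines arrive as the benchmark prints them
    if benchmark_pid is None and "pid" in line:
        benchmark_pid = line.split()[1]
        print(benchmark_pid)              # available long before the run finishes
proc.wait()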