I'm trying a Python script downloaded from a blog that sends fake echo replies in response to a ping from another machine.
The problem is that when I run the script, it gives me this error:
File "/usr/lib/python2.7/dist-packages/nfqueue.py", line 96, in create_queue
    def create_queue(self, *args): return _nfqueue.queue_create_queue(self, *args)
RuntimeError: error during nfq_create_queue()
This is the part where it binds the queue:
import socket
import nfqueue

q = nfqueue.queue()
q.open()
q.bind(socket.AF_INET)
q.set_callback(cb)
q.create_queue(0)
try:
    q.try_run()
except KeyboardInterrupt:
    print "Exiting..."
q.unbind(socket.AF_INET)
q.close()
The error is raised on the q.create_queue(0) line, but I don't know what to do!
This error can occur when a previous run of your script is still alive and holding the queue.
Assuming your script file is pyscriptname.py, run the following command to check whether another instance of your script is already running:
ps aux | grep "pyscriptname.py" | grep -v grep | wc -l
If the returned value is greater than 0, you can solve the issue by killing the old instance:
kill -9 `ps aux | grep "pyscriptname.py" | grep -v grep | awk '{print $2}'`
Then run your script again:
python pyscriptname.py
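The same check can be done from Python with subprocess; here is a rough sketch (the script name pyscriptname.py is just the placeholder from above):

```python
import subprocess

def other_instances(name):
    # Count processes whose command line mentions the script name.
    # grep -v grep also filters out the sh -c wrapper running this pipeline,
    # since its command line contains the word "grep" as well.
    out = subprocess.check_output(
        'ps aux | grep "%s" | grep -v grep | wc -l' % name, shell=True)
    return int(out.strip())

print(other_instances("pyscriptname.py"))
```

A result greater than 0 means an instance of the script is running and may be holding the netfilter queue.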
Also note that nfqueue needs root privileges, so run the script as root or under sudo.
I would like to run the following command using python subprocess.
docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue": []'
If I run it using subprocess.call(), it works, but I am not able to check the return value:
s1="docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep \'cex4queue\": []\'"
p1 = subprocess.call(s1,shell=True)
The same command with subprocess.run is not working.
I want to check whether that string is present in the output or not. How can I check?
I would recommend the use of subprocess.Popen. Note that because the command contains a pipe and $(pwd), it must go through a shell (shell=True); splitting it into a list with .split() would pass | and the grep as literal arguments to docker. Also, the keyword argument is stderr, not stderror:
import subprocess as sb

cmd = ("docker run --rm -it -v $(pwd):/grep11-cli/config "
       "ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list "
       "| grep 'cex4queue\": []'")
process = sb.Popen(cmd, shell=True, stdout=sb.PIPE, stderr=sb.PIPE)
output, errors = process.communicate()
print('The output is: {}\n\nThe errors were: {}'.format(output, errors))
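To answer the "is the string present" part directly: grep exits with status 0 when the pattern is found and 1 when it is not, so the presence of the string can be read straight off the return code. A minimal sketch of that check (using echo in place of the docker command, which I can't assume is available here):

```python
import subprocess

def string_present(pipeline):
    # grep exits 0 on a match and non-zero otherwise, so the pipeline's
    # return code tells us whether the string was present.
    return subprocess.run(pipeline, shell=True,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE).returncode == 0

print(string_present("echo 'cex4queue\": []' | grep 'cex4queue'"))  # True
print(string_present("echo 'no match here' | grep 'cex4queue'"))    # False
```

With the real command, pass the whole docker-plus-grep pipeline as the string.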
I'm trying to run the following;
def conn(ad_group):
result = Popen(["sudo -S /opt/quest/bin/vastool", "-u host/ attrs 'AD_GROUP_NAME' | grep member"], stdout=PIPE)
return result.stdout
on a RedHat machine in a python script but I'm getting FileNotFoundError: [Errno 2] No such file or directory: 'sudo -S /opt/quest/bin/vastool'
I can run the command(sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member) at the command line without a problem.
I'm sure I've messed up something in the function, but I need another set of eyes.
Thank you
You need to make the entire command a single string, and use the shell=True option because you're using a shell pipeline.
result = Popen("sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member", stdout=PIPE, shell=True)
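The same shell=True pattern also covers reading the output. Here is a sketch with a stand-in pipeline (printf piped to grep), since the vastool command only exists on the asker's machine:

```python
from subprocess import Popen, PIPE

# A string containing a pipeline must be handed to a shell in one piece;
# the printf stands in for "vastool ... | grep member".
proc = Popen("printf 'owner: a\\nmember: b\\n' | grep member",
             stdout=PIPE, shell=True)
matches = proc.communicate()[0].decode()
print(matches)  # member: b
```

The caller then gets the grep matches as a string instead of a raw file object.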
I have Raspbian as the linux distro running on my RPI. I've setup a small socket server using twisted and it receives certain commands from an iOS app. These commands are strings. I started a process when I received "st" and now I want to kill it when i get "sp". This is the way I tried:
import os
os.system("...")  # to start the process
os.system("...")  # to kill the process
Lets say the service is named xyz.
This is the exact way I tried to kill it:
os.system('ps axf | grep xyz | grep -v grep | awk '{print "kill " $1 }' | sh')
But I got a syntax error. That line runs perfectly when I try it in a terminal separately. Is this the wrong way to do it in a Python script? How do I fix it?
You will need to escape the quotes in your string:
os.system('ps axf | grep xyz | grep -v grep | awk \'{print "kill " $1 }\' | sh')
Or use a triple quote:
os.system('''ps axf | grep xyz | grep -v grep | awk '{print "kill " $1 }' | sh''')
Alternatively, open the process with Popen(...).pid and then use os.kill():
from subprocess import Popen
import os, signal

my_pid = Popen(['/home/rolf/test1.sh']).pid
os.kill(my_pid, signal.SIGKILL)
Remember to include a shebang in your script (#!/bin/sh)
Edit:
On second thought, perhaps
os.kill(my_pid, signal.SIGTERM)
is a better way to end the process; it at least gives the process the chance to close down gracefully.
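As a self-contained illustration of the Popen(...).pid plus os.kill() approach (using sleep as a stand-in for test1.sh):

```python
import os
import signal
import subprocess

# Start a long-running child, then ask it to terminate with SIGTERM.
child = subprocess.Popen(["sleep", "60"])
os.kill(child.pid, signal.SIGTERM)
child.wait()
# A child killed by a signal reports the negative signal number.
print(child.returncode)  # -15
```

Unlike SIGKILL, SIGTERM can be caught by the child, so a well-behaved script gets a chance to clean up before exiting.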
I am trying to understand the behavior I have with the SIGINT signal with a script launched in two differents ways.
Here is a simple python script :
import time
while True:
time.sleep(10000)
If I launch the script in the background, check the pid and ppid (notice that the ppid is the same as my terminal's shell) and kill it with SIGINT, it works:
user#host [~] > python script.py &
[1] 19077
user#host [~] > ps axo pid,ppid,command | grep script
19077 1055 python script.py
19093 1055 grep script
user#host [~] > kill -INT 19077
Traceback (most recent call last):
File "script.py", line 10, in <module>
time.sleep(10000)
KeyboardInterrupt
[1] + exit 1 python script.py
Now if I launch it through a Makefile:
user#host [~] > cat Makefile
all:
python script.py &
user#host [~] > make
python script.py &
user#host [~] > ps axo pid,ppid,command | grep script
19118 1 python script.py
19122 1055 grep script
user#host [~] > kill -INT 19118
user#host [~] > ps axo pid,ppid,command | grep script
19118 1 python script.py
19128 1055 grep script
Notice that now its ppid is 1 (init, which seems logical) and it does not get killed, as if the process never receives the signal. I changed my script to handle the signal myself:
import time, signal, sys
def signal_handler(signal, frame):
print 'Killed !'
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
while True:
time.sleep(10000)
Now the process gets killed by the handler I wrote:
user#host [~] > make
python script.py &
user#host [~] > ps axo pid,ppid,command | grep script
19148 1 python script.py
19152 1055 grep script
user#host [~] > kill -INT 19148
Killed !
user#host [~] > ps axo pid,ppid,command | grep script
19158 1055 grep script
So my question is: why does the process not get killed by SIGINT when its ppid is 1, i.e. when it is launched from a Makefile? I cannot understand the behavior. I know the clean way would be to kill it with SIGTERM, since it is almost a daemon, but I want to understand this anyway.
In Python the SIGINT signal is translated into a KeyboardInterrupt exception; I tried to catch it without any success.
I wrote the same script in bash and the behavior is exactly the same.
Any ideas ?
Signal dispositions (ignored vs. default) are inherited across fork and exec and, as you demonstrate yourself, can be redefined. What happens here is that make runs its recipe through a non-interactive /bin/sh, and a POSIX shell without job control starts background jobs with SIGINT set to ignore, precisely so that a Ctrl-C at the terminal does not kill them (no terminal, no Ctrl-C). Python, in turn, only installs its KeyboardInterrupt handler for SIGINT if the signal is not already being ignored at startup, so the signal is simply dropped. Installing your own handler with signal.signal() overrides the inherited "ignore" disposition, which is why your second version works.
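You can observe the inherited disposition from inside the script itself. Run directly from an interactive shell, the check below reports Python's default KeyboardInterrupt handler; launched in the background from a make recipe, it typically reports SIG_IGN instead:

```python
import signal

# Inspect how SIGINT is currently set for this process. Under a normal
# interactive launch this is signal.default_int_handler; when the shell
# started us with SIGINT ignored, it is signal.SIG_IGN.
disposition = signal.getsignal(signal.SIGINT)
print(disposition)
```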
I want to kill the Python interpreter: the intention is that all the Python files running at this moment will stop (without any information about those files).
Obviously the processes should be closed.
Any idea is fine: deleting files from Python or destroying the interpreter is OK :D (I am working in a virtual machine).
I need to do it from the terminal, because I write C code and use Linux commands.
Hoping for help
pkill -9 python
should kill any running python process.
There's a rather crude way of doing this, but be careful, because first, it relies on Python interpreter processes identifying themselves as python, and second, it has the concomitant effect of also killing any other processes identified by that name.
In short, you can kill all python interpreters by typing this into your shell (make sure you read the caveats above!):
ps aux | grep python | grep -v "grep python" | awk '{print $2}' | xargs kill -9
To break this down, this is how it works. The first part, ps aux | grep python | grep -v "grep python", lists all processes identifying themselves as python, with the grep -v making sure that the grep command you just ran isn't also included in the output. Next, awk picks out the second column of the output, which holds the process IDs. Finally, each of these processes is (rather unceremoniously) killed with kill -9.
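The same search can be done from Python itself by reading /proc directly (Linux only), which avoids the grep-excluding-grep dance. This sketch only lists the PIDs, leaving the actual kill, and its risks, to you:

```python
import os

# Collect PIDs of all other processes whose command line mentions "python".
pids = []
for entry in os.listdir("/proc"):
    if not entry.isdigit() or int(entry) == os.getpid():
        continue  # not a process directory, or it's this process itself
    try:
        with open("/proc/%s/cmdline" % entry, "rb") as f:
            cmdline = f.read().replace(b"\x00", b" ")
    except OSError:
        continue  # process exited while we were scanning
    if b"python" in cmdline:
        pids.append(int(entry))
print(pids)
# each of these could then be passed to os.kill(pid, signal.SIGKILL)
```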
pkill with script path
pkill -9 -f path/to/my_script.py
is a short and selective method that is more likely to only kill the interpreter running a given script.
See also: https://unix.stackexchange.com/questions/31107/linux-kill-process-based-on-arguments
You can try the killall command:
killall python
pgrep -f <your process name> | xargs kill -9
This will kill the processes running your service. In my case it is
pgrep -f python | xargs kill -9
pgrep -f youAppFile.py | xargs kill -9
pgrep -f matches against the full command line, so it returns only the PIDs of processes running that specific file, and only those get killed.
If you want to list the process names and kill them with the kill command, I recommend this one-liner to kill all running python3 processes and free the memory they hold:
ps auxww | grep 'python3' | awk '{print $2}' | xargs kill -9
To stop a Python script on Ubuntu 20.04.2 when Ctrl + C does not work, try pressing
Ctrl + D
instead (it sends end-of-file, which makes an interactive interpreter exit).
I have seen the pkill command as the top answer. While that is all great, I still try to tread carefully (since I might be risking my machine by killing processes) and follow the approach below:
First list all the python processes using:
$ ps -ef | grep python
Just to have a look at what root user processes were running beforehand and to cross-check later, if they are still running (after I'm done! :D)
then using pgrep as :
$ pgrep -u <username> python -d ' ' #this gets me all the python processes running for user username
# eg output:
11265 11457 11722 11723 11724 11725
And finally, I kill these processes with the kill command, after cross-checking against the output of ps -ef | ...:
kill -9 PID1 PID2 PID3 ...
# example
kill -9 11265 11457 11722 11723 11724 11725
Also, we can cross-check the root PIDs by using:
pgrep -u root python -d ' '
and verifying with the output from ps -ef| ...