Kill process via Python gives Permission denied error

I'm using a Python script to control tcpdump. I can start a tcpdump process just fine from my script. However, when I want to kill the tcpdump process via Python:
import subprocess
pid = 9669 # pid of the tcpdump process
subprocess.call(["sudo", "kill", "-9", f"{pid}"])
I receive this error message:
kill: (9669): Permission denied
However, when I open a shell and enter sudo kill -9 9669, it kills the process just fine. The system is configured so that neither sudo tcpdump nor sudo kill prompts for a password. To my understanding, the subprocess.call command and the terminal command should be identical, yet one works and the other doesn't. What am I doing wrong?

PyCharm had been installed via snap, which puts it behind AppArmor confinement. I removed the snap version, downloaded the PyCharm tar file, and did the installation myself. Now my script for stopping tcpdump works as intended from within PyCharm.
Honestly, I find it a bit baffling to put a developer tool into a sandbox so restrictive that some things simply won't work.
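One rough way to confirm that the snap's AppArmor confinement is what blocks the call is to look for AppArmor denial messages in the kernel log right after the failing kill. A minimal sketch (reading the kernel log may itself require root):
import subprocess

# Scan the kernel log for AppArmor denials; run this right after the failing kill.
dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in dmesg.splitlines():
    if 'apparmor="DENIED"' in line:
        print(line)
If lines mentioning the PyCharm snap profile show up, the sandbox is the culprit.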

Related

bash script that is run from Python reaches sudo timeout

This is a long bash script (400+ lines) that is originally invoked from a Django app like so:
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approx. the same location.
SSHing to the machine that runs the Django app and running sudo ./bash_script.sh asks for a password and then runs all the way through.
I can't see the message it prints when it hangs - I couldn't make it redirect to the log file. I assume it's a sudo password prompt.
Tried -
sudo -v in the script - didn't help.
SSHing to the machine and manually extending the sudo timeout in /etc/sudoers - didn't help, I think because the Django app is already running and uses the previous timeout.
splitting the script in two and running one part in a separate thread, like so:
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    with open(log_path, 'w') as log:
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log') # I know it's deprecated, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought of editing /etc/sudoers from inside the Python code and then re-logging in via su - user. There are snippets showing how to pass the password using pty, but I don't understand the mechanics of it and could not get it to work.
I also noted that ps aux | grep bash_script.sh shows the script being run twice - as
/bin/bash bash_script.sh
and as
sh -c bash_script.sh.
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the Django app has different, more limited permissions than the script itself, and the script inherits those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it directly from bash, what it has when you run it via Django, and then figure out what the difference is.
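A quick way to see the difference is to log the effective user and whether passwordless sudo is available from both contexts. A minimal sketch (the log file name diag_log.txt is just an example):
import getpass
import subprocess

def log_privileges(log_path="diag_log.txt"):
    # Record who we are and whether sudo works without prompting for a password.
    with open(log_path, "a") as log:
        log.write("user: %s\n" % getpass.getuser())
        log.write("id: %s\n" % subprocess.check_output(["id"]).decode().strip())
        # "sudo -n true" fails immediately instead of prompting if a password would be needed
        passwordless = subprocess.call(["sudo", "-n", "true"]) == 0
        log.write("passwordless sudo: %s\n" % passwordless)

log_privileges()
Call this once from an interactive shell and once from the Django code path (e.g. right before the os.system call) and compare the two outputs.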

How to keep AWS Lightsail virtual server running [duplicate]

I have set up Flask on my Raspberry Pi and I am using it for the sole purpose of serving an XML file, which I created with a Python script, to pass data to an iPad app (iRule).
My RPI is set up as headless and my access is with Windows 10 using PuTTY, WinSCP and TightVNC Viewer.
I run the server by opening a terminal window and entering the following command:
sudo python app1c.py
This sets up the server and I can access my xml file quite well. However, when I turn off the Windows machine and the PuTTY session, the Flask server shuts down!
How can I set it up so that the Flask server continues even when the Windows machine is turned off?
I read in the Flask documentation:
While lightweight and easy to use, Flask’s built-in server is not suitable for production as it doesn’t scale well and by default serves only one request at a time.
Then they go on to give examples of how to deploy your Flask application to a WSGI server! Is this necessary given the simple application I am dealing with?
Use:
$ sudo nohup python app1c.py > log.txt 2>&1 &
nohup lets a command, process, or shell script keep running in the background after you log out of the shell.
> log.txt: forwards the output to this file.
2>&1: redirects stderr to stdout.
The final & runs the command in the background of the current shell.
Install the Node package forever from https://www.npmjs.com/package/forever
Then use
forever start -c python your_script.py
to start your script in the background. Later you can use
forever stop your_script.py
to stop the script
You have multiple options:
Easy: detach the process with &, for example:
$ sudo python app1c.py &
Medium: install tmux with apt-get install tmux,
launch tmux, start your app as before, and detach with Ctrl+B followed by D.
More complex:
run your Flask script with a WSGI server (uwsgi or gunicorn, typically behind nginx).
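A minimal sketch of that last route, assuming the Flask application object in app1c.py is named app (names and paths here are examples, adjust to your setup):
# app1c.py - sketch; assumes the Flask app object is called "app"
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/data.xml")
def data():
    # serve the generated xml file (the file name here is an example)
    return send_file("data.xml", mimetype="application/xml")

if __name__ == "__main__":
    # the built-in development server, only for testing
    app.run(host="0.0.0.0", port=5000)
With the module importable like this, a WSGI server can run it persistently, e.g. gunicorn --bind 0.0.0.0:8000 app1c:app, itself started under nohup, tmux, or a process manager as described in the other answers.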
Use PM2 for things like this. I also use it for a Node.js app and a Python app on a single server:
pm2 start app.py --interpreter python3
Use:
$ sudo python app1c.py >> log.txt 2>&1 &
">> log.txt" appends all stdout to the log.txt file (you can check the application logs in it).
"2>&1" sends stderr to the same place, so error output also ends up in log.txt.
The "&" at the end makes the command run in the background.
You get the process id immediately after executing this command, and you can use it to monitor or verify the process:
$ sudo ps -ef | grep <process-id>
Hope it helps!
You can always use nohup to run any scripts as background process.
nohup python script.py
This will run your script in the background and append its output to a nohup.out file located in the directory where script.py is stored.
Make sure you close the terminal rather than pressing Ctrl+C; that lets it keep running in the background even after you log out.
To stop it, SSH into the Pi again, run ps -ef | grep nohup, and then kill -9 XXXXX,
where XXXXX is the pid you get from the ps command.
I've always found a detached screen process to be best for use cases such as these.
Run:
screen -m -d sudo python app1c.py
I was trying to run my Flask app for testing in my GitHub CI, and the step that started the app hung forever because it never released the command line.
The best solution I found was a combination of two other answers here:
nohup python script.py &

Ways to run python script in the background on ubuntu VPS and save the logs

So, I have a Python script which outputs some data to the terminal from time to time. I'm trying to keep it running on an Ubuntu VPS even after I close the SSH connection, and still keep the logs somewhere.
I'm saving the logs by using:
python3 my_script.py >>file.txt
and it works perfectly. However, when I try to run the process using
nohup python3 my_script.py >>file.txt &
so that it runs in the background after the SSH connection is closed, it seems to save only the first log output by my_script.py. I've also tried running it from crontab, but the result is similar - only the first log is saved.
Any tips? What am I doing wrong?
I could not quite understand what you mean by "the first log" - maybe the first line of the logs?
To run something in the background after the SSH connection is closed, I prefer Linux screen, a terminal multiplexer that keeps your command running in a detached session. With it, you can view the output in the foreground at any time, or leave the process running in the background.
Usage (short)
screen is not installed by default on some distributions. Install it on Ubuntu:
$ sudo apt-get install screen
Run your script in the foreground:
$ screen python3 my_script.py
You'll see it running. Now detach from this screen: press Ctrl-A followed by Ctrl-D. You'll be back in the shell where you ran the previous screen command. If you need to switch back to the running session, use the screen -r command.
This tool supports multiple parallel sessions, too.
Something weird
I've tried redirecting stdout or stderr to a file with the > or >> symbols, without success. I'm not an expert on this either, and you may need to check the manual page. However, in my own Python scripts I tend to write directly to a file, keeping only some essential output lines on the console.
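A minimal sketch of that approach (the file name and format are just examples); writing and flushing each line yourself also sidesteps the output buffering that can make a redirected, backgrounded script appear to log only its first lines:
import datetime

def log(message, path="file.txt"):
    # Append a timestamped line; leaving the "with" block closes the file,
    # flushing the line to disk immediately.
    with open(path, "a") as f:
        f.write("%s %s\n" % (datetime.datetime.now().isoformat(), message))

log("script started")
Alternatively, keeping the original redirection but starting the script with python3 -u (unbuffered output), i.e. nohup python3 -u my_script.py >> file.txt &, often makes every line appear in the file rather than just the first burst.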

Error when opening interactive ssh shell from python

I am trying to open an interactive ssh shell through fabric.
Requirements:
Use Fabric's hosts in the connection string to the remote
Open fully interactive shell in current terminal
Works on osx and ubuntu
No need for data transfer between Fabric/Python and the remote, so the Fabric task can end in the background.
So far:
fabfile.py:
def test_ssh():
    from subprocess import Popen
    Popen('ssh user@1.2.3.4 -i "bla.pem"', shell=True)
In terminal:
localprompt$ fab test_ssh
localprompt$ tcsetattr: Input/output error
[remote ubuntu welcome here]
remoteprompt$ |
Then if I try to input a command on the remote prompt it is executed locally and I drop back to the local prompt.
Does anyone know a solution?
Note: I am aware of Fabric's open_shell, but it does not work for me since stdout lags behind, rendering it unusable.
A slight modification does the trick:
def test_ssh():
    from subprocess import call
    call('ssh user@1.2.3.4 -i "bla.pem"', shell=True)
As the answer to this question suggests, the error comes from ssh being unable to attach to the stdin/stdout of a process running in the background.
With call, the Fabric task does not end in the background, but I am fine with that as long as it is not interfering with my stdin/stdout.

File disappears when not ssh'd

I have a Tornado server set up on my VPS running Ubuntu 12.04. When I am SSH'd into the server, or connected via VNC, the site loads static/template files just fine. But when I exit SSH or terminate VNC, Python throws an error that the file it was looking for does not exist:
[Errno 2] No such file or directory
When I start the server I just run the python command to launch it as a background process, and once it's running successfully I exit out.
I have the server running at www.calapp.manangandhi.com
Edit: As per the answer below, I was able to figure out a way to make it work. Here is the link about daemonizing a Tornado application; other approaches are suggested in the thread as well: https://groups.google.com/forum/?fromgroups=#!topic/python-tornado/4cxKEFsS0RE
Are you trying to say that you run the server from within the SSH shell? If so, your problem is most likely that when the shell shuts down, the software gets a HUP and disconnects despite being in the background. You need the software to daemonize and detach completely from the controlling terminal. If you're using a toolkit, look up "starting as daemon", or launch your software from within DJB's supervise or another system-wide launcher.
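For illustration, a minimal sketch of the classic double-fork daemonization in Python (the working-directory handling is an assumption; a real deployment is usually better served by supervise, an init script, or a similar launcher):
import os
import sys

def daemonize(workdir):
    # Detach the current process from the controlling terminal.
    if os.fork() > 0:          # first fork: let the parent exit
        sys.exit(0)
    os.setsid()                # start a new session, drop the controlling terminal
    if os.fork() > 0:          # second fork: the daemon can never reacquire a terminal
        sys.exit(0)
    os.chdir(workdir)          # keep relative static/template paths resolvable
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):       # point stdin/stdout/stderr away from the terminal
        os.dup2(devnull, fd)

# usage sketch: daemonize before starting the Tornado IOLoop
daemonize(os.path.dirname(os.path.abspath(__file__)))
Note that chdir-ing to the application's own directory (rather than /) matters if the server opens static/template files via relative paths, since those resolve against the current working directory once the process is detached.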
