Windows python script to run a server and continue

I am working on a Python script that launches a server, perhaps in the background or in a separate process, and then does some further processing. Once the rest of the processing is over, the script should kill the launched server.
For example:
server_cmd = 'launch_server.exe -source ' + inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running in an infinite loop listening for requests. Is there a good way to send this process to the background so that it doesn't block waiting for command-line input?
I am using Python 2.7.8

It's odd that your script does not continue after launching the server command. In the subprocess module, Popen starts another child process while the parent process (your script) moves on.
However, there is already a bug in your code: cmd_pid is an int object and does not have a terminate method. You should keep the subprocess.Popen object and call terminate on it.

Making a small change resolved the problem:
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
server_proc.terminate()
Thanks Xu for the correction on terminate.
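For reference, a minimal sketch of the corrected flow (launch_server.exe and inputfile are from the question; the final wait() is an addition to reap the child process):
import subprocess

server_cmd = 'launch_server.exe -source ' + inputfile  # inputfile as in the question
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)

# ... continue doing some processing while the server runs ...

server_proc.terminate()  # once the processing is done, stop the server
server_proc.wait()       # reap the child so no zombie process is left behind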

Related

subprocess.run() get pid of the process

I want to run a program from inside my Python script and get the PID of the process I launched. I tried:
p = subprocess.Popen(nameofmyprocess)
pid = p.pid
But the problem is that it doesn't wait for my called program to finish. When I looked at the documentation, I concluded I should use subprocess.run() instead; however, its return value doesn't have a .pid attribute like Popen does.
Is there another alternative?
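A minimal sketch of one such alternative (not necessarily what was suggested; nameofmyprocess is from the question): Popen exposes .pid immediately, and an explicit wait() gives the same blocking behaviour as subprocess.run():
import subprocess

p = subprocess.Popen(nameofmyprocess)  # nameofmyprocess as in the question
pid = p.pid                            # available immediately
p.wait()                               # blocks until the program finishes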
Edit:
I should have mentioned this in the original question. The code of my program includes a server part where it opens a socket and listens on it. I have a client side which connects to that socket and sends a message. My end goal is to write a script that runs my program and gets its PID to pass to some functions I wrote that monitor my program: they will show me the memory usage of my program, the socket it has opened, its file descriptors... basically information about things my program does.
As suggested, I used subprocess.Popen.poll, so now my code looks like this:
p = subprocess.Popen(nameofmyprocess)
pid = p.pid
print(pid)
while p.poll() is None:
    time.sleep(20)
    myFunc(pid)
    myFunc2(pid)
However, when I run this script and run my client, it can't connect to my server program. It says "Connection failed". I'm pretty sure the program is running though because the PID I print is displayed when I use the command ps aux on another terminal.

Script does not continue after calling subprocess

I have a Python script running, and from it I call a subprocess to run a game server. The thing is, I need the script to continue while the server is running, but as soon as the subprocess starts and the server is running, the script is paused until I shut down the game server.
How can I allow the script to continue after the server has been initialized?
command = f'f:&cd {server_path}&{exec_prop}'
process = subprocess.Popen(command , stdout=subprocess.PIPE , shell=True)
process.communicate()
communicate blocks until the process terminates. Since this is not what you want, remove that line. It does not seem like you are using it to actually communicate in your example anyway.
On a side note, if you can avoid shell=True, that is usually considered better practice. The stdout redirection seems unnecessary as well, since you are not communicating with the process.
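A minimal sketch of the suggested fix (server_path and exec_prop are from the question; using cwd in place of the shell's cd is an assumption about what the original command was doing):
import subprocess

# launch the game server from its own directory; no shell, no pipe
process = subprocess.Popen(exec_prop, cwd=server_path)

# ... the script continues here while the server runs ...

process.terminate()  # stop the server when the script is done with it
process.wait()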

How to run a python process in the background continuously

I'm trying to build a todo manager in Python, and I want to continuously run a process in the background that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on StackOverflow and on other sites but none of them really helped.
So, what I want to achieve is to start a background process once the user enters a task and keep it running until the task's time comes. At the same time there might be other threads running for other tasks as well, each ending at its own end time.
So far, I've tried this:
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this the thread is continuously running, but it runs in the foreground and only exits when the execution is done.
If I add t.daemon = True to the above code, the main thread exits immediately after start(), and it looks like the daemon thread is killed along with it.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In this case, it is a matter of how you execute the script.
Try these things.
If you are using a Windows computer you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using Linux (or maybe OS X) you may want to use nohup in the terminal:
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on this subject, but generally, if you launch a script from a terminal, that script becomes a child process of the terminal. So if you exit that terminal, it will also terminate any child processes. The only way to get around that is to detach the process from the terminal with something like nohup.
Now, if you add the #!/usr/bin/env python shebang line, your OS could possibly run the script without a terminal window when you double-click it. YMMV (again, this depends on how your OS works).
The first thing you need to do is prevent your script from exiting by adding a while loop in the main thread:
import time
from threading import Thread
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()

# keep the main thread alive so the background thread can keep running
# (note: no t.join() here, since joining would block before the loop is reached)
while True:
    time.sleep(1.0)
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.
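For illustration, a minimal sketch of what the bg_runner worker itself might look like (bg_runner, task, and lock_file are names from the question; lookup_due_time and the printed reminder are hypothetical stand-ins for the real schedule lookup and popup):
import time
from datetime import datetime

def bg_runner(task, lock_file):
    # lock_file is unused in this sketch
    due_time = lookup_due_time(task)  # hypothetical helper returning a datetime
    while datetime.now() < due_time:
        time.sleep(1.0)
    print("Reminder: %s" % task)  # stand-in for a real popup alert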

Python subprocess -- close Django server and Docker container with Ctrl-C, return to terminal

I'm trying to figure out how to properly close out my script that's supposed to start up a Django server running in a docker container (boot2docker, on Mac OS X). Here's the pertinent code block:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        subprocess.call('./startdockerdjango.sh', shell=True)
    except KeyboardInterrupt:
        return
Where startdockerdjango.sh takes care of setting the environment variables that docker needs and starts the server up. The script overall is supposed to know whether to do first-time setup and initialization or simply start the container and server; catching the CalledProcessError means that first-time setup was already done and that the container and server can just be started up.
The startup works fine, but when a user presses Ctrl-C to stop the server, the server stops normally but then apparently the process that started the server is still going. If I press return, then I can go back to the normal terminal command prompt. If I do any sort of shell command, like ls, then it will be carried out and then I can return to the terminal.
I want to change the code so that, if a user presses Ctrl-C, the server and the container that the server is running in will stop normally and then, afterward, the process will stop and the whole script will exit. How can this be done? I don't want to just kill or terminate the process upon KeyboardInterrupt, since then the server and container won't be able to stop normally but will be killed off abruptly.
UPDATE:
I recently tried the following according to Padraic Cunningham's comment:
from signal import SIGTERM  # needed for send_signal below

try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        startproc = subprocess.Popen('./startdockerdjango.sh')
    except KeyboardInterrupt:
        startproc.send_signal(SIGTERM)
        startproc.wait()
        return
This was my attempt to send a SIGTERM to the server so it could shut down gracefully and then use wait() to wait for the process (startproc) to complete. This, however, results in the container and server ending abruptly, which is exactly what I was trying to prevent. The same thing happens if I try SIGINT instead. What, if anything, am I doing wrong in this second approach? I still want the same overall thing as before: a single Ctrl-C should end the container and server, then exit the script.
You might want to create the process using Popen. It will give you a little more control over how you manage the child process.
env = {"MY_ENV_VAR": "some value"}
proc = subprocess.Popen("./dockerdjango.sh", env=env)
try:
    proc.wait()
except KeyboardInterrupt:
    # On Linux, terminate() sends SIGTERM, which gives the process a chance
    # to clean up, or even to ignore the signal entirely.
    proc.terminate()
    # Use proc.send_signal(...) with the signal module to send other signals,
    # or proc.kill() to kill the process immediately.
If you set the environment variables in Python, it will also result in fewer child processes that need to be killed.
In the end, it wasn't worth the effort to have the script know whether to do first-time initialization or container-and-server startup. Instead, the script will just try first-time setup and then tell the user to run docker-compose up after successful setup. This is a better solution for my specific situation than trying to figure out how to have Ctrl-C properly shut down the server and then exit the script.
To reset the Django server subprocess, execute in your terminal:
$ sudo lsof -i tcp:8080
$ sudo lsof -i tcp:8080|awk '{print $2}'|cut -d/ -f 1|xargs kill

Python execute remote command and don't wait for return

I'm working on testing a corosync cluster. I'm trying to fail the interface that has the floating IP, to ensure the resource migrates over to another node, using Python.
Now the dilemma is that my command does execute on the remote machine, but my test code hangs forever waiting for a reply it will never get: the node gets rebooted because of the injected failure.
ssh = SSHClient(self.get_ms_ip(ms),
self.get_ms_user(ms),
self.get_ms_password(ms))
ssh.connect()
self.logger.info("Failing FIP eth now on %s" % ms)
ssh.exec_command(cmd, timeout=1)
# Code never reaches this comment.
In Python, how can I send the command and just continue without waiting for any return? I've tried wrapping my ssh.exec_command in subprocess.Popen, as suggested in Run Process and Don't Wait, but that didn't yield anything different.
You don't want a subprocess, you want a thread. Spawn a thread that runs the exec_command call and you'll be able to continue with your code.
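A minimal sketch of that approach (the ssh object and cmd are from the question):
import threading

def fail_fip():
    # may never return: the node reboots because of the injected failure
    ssh.exec_command(cmd, timeout=1)

t = threading.Thread(target=fail_fip)
t.daemon = True  # do not let this thread keep the test process alive
t.start()
# the test continues here immediately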
Did you try nohup?
ssh.exec_command('nohup %s &' % cmd, timeout=1)
Python doesn't handle threads nicely; you can't manually exit a thread. I ended up having to make a worker method that creates the SSH connection and runs exec_command, run as a separate multiprocessing.Process.
This way I was able to clean up properly after a test before the next test ran (as part of Python's unit test framework).
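A sketch of that worker approach (SSHClient, the get_ms_* accessors, ms, and cmd are names from the question):
import multiprocessing

def ssh_worker(ip, user, password, cmd):
    ssh = SSHClient(ip, user, password)
    ssh.connect()
    ssh.exec_command(cmd, timeout=1)  # may hang; the whole process can be terminated

proc = multiprocessing.Process(target=ssh_worker,
                               args=(self.get_ms_ip(ms), self.get_ms_user(ms),
                                     self.get_ms_password(ms), cmd))
proc.start()
# ... run the rest of the test ...
proc.terminate()  # clean up before the next test runs
proc.join()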
