I have a Python script (I called it monitor.py) that checks whether another Python application (called test.py) is running; if it is, nothing happens; if it is not, monitor.py starts test.py.
I am using the subprocess module in monitor.py, but if I start test.py and then close monitor.py, test.py also closes. Is there any way to avoid this? Is the subprocess module the correct one for this?
I have a script [...] that checks if another [...] is running
I'm not sure if it's any help in your case, but I just wanted to say that if you're working with Windows, you can program a real service in Python.
Doing that from scratch takes some effort, but some good people out there provide examples that you can easily adapt, like this one.
(In this example, look for the line f = open('test.dat', 'w+') and put your code there.)
It'll behave like any other Windows service, so you can make it start when your PC boots, for example.
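For illustration, a bare-bones skeleton of such a service (a sketch assuming the pywin32 package; the service name here is made up):

import win32event
import win32service
import win32serviceutil

class MonitorService(win32serviceutil.ServiceFramework):
    _svc_name_ = "MonitorService"          # hypothetical service name
    _svc_display_name_ = "monitor.py service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # your code goes here; this loop wakes up every 5 seconds
        while win32event.WaitForSingleObject(self.stop_event, 5000) != win32event.WAIT_OBJECT_0:
            pass  # e.g. check whether test.py is running and start it if not

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(MonitorService)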
So far I don't think this is actually possible, but basically what I am trying to do is have one Python program call another and run it, the way you would use import.
But then I need to be able to go from the second file back to the beginning of the first.
Doing this with import doesn't work, because the first program never closes and is still running, so importing it again only returns to where it left off when it ran the second file.
Without understanding a bit more about what you want to do, I would suggest looking into the threading or multiprocessing libraries. These allow you to run multiple instances of a program or function.
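A minimal sketch of the multiprocessing approach (the worker() function is a hypothetical stand-in for the second program):

import multiprocessing

def worker():
    print("second program running")  # stand-in for the second program's work

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()  # runs worker() in a separate process
    p.join()   # wait for it to finish...
    print("back at the beginning of the first program")  # ...then carry on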
This is vague and I'm not quite sure what you're trying to do, but you can also explore the subprocess module for Python. It lets you spawn new processes just as if you were starting them from the command line, and your program can also talk to the child processes via stdin and stdout.
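A small sketch of that (the script name other.py is made up):

import subprocess

# spawn the second script and wire up its stdin/stdout
proc = subprocess.Popen(["python", "other.py"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(input=b"hello\n")  # send input, wait for it to exit
print(out.decode())  # whatever other.py printed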
If you don't want to import any modules:

exec(open("file.py").read())  # runs the file's code in the current namespace

Otherwise:

import os
os.system('python file.py')  # runs in a shell and blocks until it finishes

Or:

import subprocess
subprocess.call(['python', 'file.py'])  # no shell needed; also blocks until done
I need to run a Python script in a terminal, several times. This script requires me to import some libraries, so every time I call the script in the terminal, the libraries are loaded again, which results in a loss of time. Is there any way I can import the libraries once and for all at the beginning?
(If I try the "naive" way, first calling a script just to import the libraries and then running my code, it doesn't work.)
EDIT: I need to run the script in a terminal because it is actually made to serve another program developed in Java. The Java code calls the Python script in the terminal, reads its result and processes it, then calls it again.
One solution is to leave the Python script always running and use a pipe to communicate between processes, like the code below, taken from this answer.
import os, time

pipe_path = "/tmp/mypipe"
if not os.path.exists(pipe_path):
    os.mkfifo(pipe_path)
# Open the fifo. We need to open in non-blocking mode or it will stall until
# someone opens it for writing
pipe_fd = os.open(pipe_path, os.O_RDONLY | os.O_NONBLOCK)
with os.fdopen(pipe_fd) as pipe:
    while True:
        message = pipe.read()
        if message:
            print("Received: '%s'" % message)
        print("Doing other stuff")
        time.sleep(0.5)
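For completeness, the writer side (in your case the Java program, sketched here in Python for illustration) just opens the fifo and writes to it; the path matches the reader above:

# writer side: send one request to the long-running script
with open("/tmp/mypipe", "w") as pipe:
    pipe.write("do one unit of work\n")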
The libraries will be unloaded once the script finishes, so the best way to handle this is to write the script so it can iterate however many times you need, rather than running the whole script multiple times. I would probably use input() (or raw_input() if you're running Python 2) to read in requests, or use a library like click to create a command-line interface for it.
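A minimal sketch of that loop idea (handle() is a hypothetical stand-in for the real work):

# the expensive imports happen only once, when the script starts
# import numpy, scipy, ...  # whatever the slow libraries are

def handle(request):
    return request.upper()  # stand-in for the real computation

while True:
    try:
        request = input()  # one request per line from the caller
    except EOFError:
        break  # the caller closed stdin; exit cleanly
    print(handle(request), flush=True)  # flush so the caller sees the reply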
I want to execute a test run via bash and abort it if the test takes too much time. So far, I have found some good solutions here, but since the kill command does not work properly (when I use it correctly, it says it is not used correctly), I decided to solve this problem using Python. This is the execution call I want to monitor:
EXE="C:/program.exe"
FILE="file.tpt"
HOME_DIR="C:/Home"
"$EXE" -vm-Xmx4096M --run build "$HOME_DIR/test/$FILE" "Auslieferung (ML) Execute"
(The opened *.exe starts a test run which includes some Simulink simulation runs; sometimes there are Simulink errors, in which case the tests take too long and I want to restart the entire process.)
First, I came up with the idea of calling a shell script containing these lines within a subprocess from Python:
import subprocess
import time
process = subprocess.Popen('subprocess.sh', shell=True)
time.sleep(10)
process.terminate()
But when I use this, *.terminate() or *.kill() does not close the program I started with the subprocess call.
That's why I am now trying to implement the entire call in Python. I have the following so far:
import subprocess
file = "somePath/file.tpt"
p = subprocess.Popen(["C:/program.exe", file])
Now I need to know how to implement the second argument of the bash call, "Auslieferung (ML) Execute". This argument starts an internal test run named "Auslieferung (ML) Execute". Any ideas? Or is it better to choose one of the other ways? Or can I get the "kill" option for bash somewhere, somehow?
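One possible approach, as a sketch: each extra bash argument simply becomes another element of the Popen argument list, and Python 3.3+ can time out the wait (on Windows, taskkill /T kills the whole process tree, which plain terminate() does not when a shell or launcher sits in between; the timeout value here is made up):

import subprocess

p = subprocess.Popen(["C:/program.exe", "-vm-Xmx4096M", "--run", "build",
                      "C:/Home/test/file.tpt", "Auslieferung (ML) Execute"])
try:
    p.wait(timeout=600)  # give the test run 10 minutes
except subprocess.TimeoutExpired:
    # kill the entire process tree, not just the direct child (Windows-only)
    subprocess.call(["taskkill", "/F", "/T", "/PID", str(p.pid)])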
I have a playgame.cmd file I would like to execute from within my Python code.
It is a genetic algorithm that runs the game (input is an individual), waits for the game to run with that individual, then parses data from the game log to output the fitness of that individual.
Inside the .cmd file (which shouldn't matter, I don't think):
python tools/playgame.py "python MyBot.py" "python tools/sample_bots/python/HunterBot.py"
--map_file tools/maps/example/tutorial1.map --log_dir game_logs --turns 60 --scenario
--food none --player_seed 7 --verbose -e
(This is for the ants AI challenge if you were wondering)
These are all just details, though. My question is the one in the title:
How do I start the script midline in python, wait for the script to finish, then resume the python execution? The script file is in the same folder as the python AntEvolver.py file.
If you want to launch a .cmd file from within a Python script, which then launches two more copies of Python inside the .cmd, I think you need to slow down, take a step back, and think about how to get all of this running within one Python interpreter. But the direct answer to your question is to use os.system() (or the subprocess module, which is also mentioned here):
http://docs.python.org/library/os.html#os.system
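A minimal sketch: os.system() blocks until the command finishes, which gives exactly the "run, wait, resume" behaviour you describe:

import os

# ... prepare the individual where playgame.cmd expects to find it ...
status = os.system("playgame.cmd")  # blocks until the game has finished
# ... parse game_logs here and compute the fitness ...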
A small snippet:
import subprocess
# do your stuff with sys.argv
subprocess.Popen("python MyBot.py", shell=True).communicate()
# script executed and finished, you can continue...
I'm using Python 2.6 on linux.
I have a run.py script which starts up multiple services in the background and generates kill.py to kill those processes.
Inside kill.py, is it safe to unlink itself when it's done its job?
import os
# kill services
os.unlink(__file__)
# is it safe to do something here?
I'm new to Python. My concern was that since Python is a scripting language, the whole script might not be in memory. After it's unlinked, there would be no further code to interpret.
I tried this small test.
import os
import time
time.sleep(10) # sleep 1
os.unlink(__file__)
time.sleep(10) # sleep 2
I ran stat kill.py when this file was being run and the number of links was always 1, so I guess the Python interpreter doesn't hold a link to the file.
As a higher level question, what's the usual way of creating a list of processes to be killed later easily?
Don't have your scripts write new scripts if you can avoid it – just write out a list of the PIDs, and then loop through them.
It's not very clear what you're trying to do, but creating and deleting scripts sounds like too much fragile magic.
To answer the question:
Python compiles all of the source and closes the file before executing it, so this is safe.
In general, unlinking an opened file is safe on Linux. (But not everywhere: on Windows you can't delete a file that is in use.)
Note that when you import a module, Python 2 compiles it into a .pyc bytecode file and interprets that. If you remove the .py file, Python will still use the .pyc, and vice versa.
Just don't call reload!
There's no need for Python to hold locks on the files since they are compiled and loaded at import time. Indeed, the ability to swap files out while a program is running is often very useful.
IIRC(!): On *nix, an unlink only removes the name in the filesystem; the inode is removed when the last file handle is closed. Therefore this should not cause any problems, unless Python tries to reopen the file.
As a higher level question, what's the usual way of creating a list of processes to be killed later easily?
I would put the PIDs in a list and iterate over that with os.kill. I don't see why you're creating and executing a new script for this.
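A minimal sketch of that approach (the PID values here are hypothetical; run.py would record the real ones, e.g. in a file):

import os
import signal

pids = [1234, 1235]  # PIDs recorded by run.py when it started the services
for pid in pids:
    try:
        os.kill(pid, signal.SIGTERM)  # ask each service to shut down
    except OSError:
        pass  # the process has already exited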
Python reads in a whole source file and compiles it before executing it, so you don't have to worry about deleting or changing your running script file.