In a program I am writing in Python I need to completely restart the program if a variable becomes true. After looking around for a while I found this command:
while True:
    if reboot == True:
        os.execv(sys.argv[0], sys.argv)
When executed it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv but didn't find anything relevant, so my question is whether anyone knows what I did wrong, or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it had been opened for the first time, with all variables unassigned and no threads running).
There are multiple ways to achieve this, each with its advantages and disadvantages. Start by modifying the program to exit whenever the flag turns True; then choose one of the options below.
Wrap it using a bash script.
The script should handle exits and restart your program. A really basic version could be:
#!/bin/bash
while :
do
    python program.py
    sleep 1
done
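For the first step (making the program exit when the flag turns True), a minimal sketch could look like the following, where do_work is a hypothetical stand-in for your program's real work:
import sys

def do_work():
    # hypothetical placeholder for the program's actual work
    pass

reboot = False
while True:
    do_work()
    if reboot:
        sys.exit(0)  # the wrapper notices the exit and restarts the script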
Start the program as a sub-process of another program.
Start by wrapping your program's code in a function. Then your __main__ could look like this:
from multiprocessing import Process

def program():
    ### Here is the code of your program
    ...

if __name__ == '__main__':
    while True:
        process = Process(target=program)
        process.start()
        process.join()
        print("Restarting...")
This code is relatively basic, and it requires error handling to be implemented.
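A minimal sketch of that error handling, assuming the policy that a non-zero exit code means a real crash and should stop the restart loop:
from multiprocessing import Process

def program():
    ...  # your program's code

if __name__ == '__main__':
    while True:
        process = Process(target=program)
        process.start()
        process.join()
        if process.exitcode != 0:
            break  # assumed policy: don't restart a crashed program forever
        print("Restarting...")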
Use a process manager
There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar.
IMHO the third option (a process manager) is the safest approach. The other approaches have edge cases that you would need to handle yourself.
This has worked for me. Add the shebang at the top of your code and call os.execv() as shown below:
#!/usr/bin/env python3
import os
import sys
if __name__ == '__main__':
    while True:
        reboot = input('Enter:')
        if reboot == '1':
            sys.stdout.flush()
            os.execv(sys.executable, [sys.executable, __file__] + sys.argv[1:])
        else:
            print('OLD')
I got the same "Exec format error", and I believe it is basically the same error you get when you simply type a Python script name at the command prompt and expect it to execute. On Linux it won't work because a path is required, and the execv method is encountering the same error.
You could add the path of your Python interpreter, and that error goes away, except that the name of your script then becomes a parameter and must be added to the argv list. To avoid that, make your script independently executable by adding "#!/usr/bin/python3" to the top of the script AND chmod 755.
This works for me:
#!/usr/bin/python3
# this script is called foo.py
import os
import sys
import time
if len(sys.argv) >= 2:
    Arg1 = int(sys.argv[1])
else:
    sys.argv.append(None)
    Arg1 = 1

print(f"Arg1: {Arg1}")
sys.argv[1] = str(Arg1 + 1)
time.sleep(3)
os.execv("./foo.py", sys.argv)
Output:
Arg1: 1
Arg1: 2
Arg1: 3
.
.
.
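If you would rather not rely on the shebang and chmod, the other way around the error is to exec the interpreter itself instead of the script. A minimal sketch, assuming sys.executable points at the Python you want:
import os
import sys

def restart():
    # Re-run the current script under the same interpreter.
    # execv replaces this process, so nothing after this call runs.
    os.execv(sys.executable, [sys.executable] + sys.argv)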
Thank you in advance for the time you'll spend reading this question. I am learning Python and I searched a lot before asking here, so please forgive the newbie question.
I created this script in Python 3, using the subprocess module, to search for another Python script's PID while knowing only the beginning of that script's name, and to terminate it nicely.
Basically I run Python clocks on my LCD screen through a Raspberry Pi and I2C, and I terminate the script, clear the LCD and turn it off. This "off" script's code is provided below.
The issue is that when I run it from the directory it sits in with a:
python3 off.py
It works perfectly: it gets, parses and terminates the PID, then turns off the LCD display.
Ideally I want to trigger it through telegram-cli, because I did it in bash and it worked nicely; I find it to be a nice feature. In Python it fails.
So I tested and it appears that when I try to launch it from another directory like this:
python3 ~/code/off.py
The grep subprocess returns more than the one PID it normally returns when launched from the script's own directory. For instance (with python3 -v):
kill: failed to parse argument: '25977
26044'
The second PID is from a subprocess created by the script; I can't figure out what it is, as it terminates when the script ends, but it defeats the script's initial purpose.
Any help in understanding what is happening here would be really appreciated.
I have come this far, as shown below, from two ugly lines of bash mixed with a call to a dummy four-line Python script, so I really feel I am getting close to a proper way of achieving my first real Python script.
I tried to decompose the script line by line in the interpreter and could not reproduce the error; everything behaves as expected. I only get this double-PID result when running the script from an outside location.
Thank you in advance for any helpful insight on how to understand what is happening!
#!/usr/bin/env python3
import subprocess
import I2C_LCD_driver
import string

# Defining variables for searched strings and string encoding
searched_process_name = 'lcd_'
cut_grep_out_of_results = 'grep'
result_string_encoding = 'utf-8'

mylcd = I2C_LCD_driver.lcd()
LCD_NOBACKLIGHT = 0x00
run = True

def kill_script():
    # Listing processes and getting the searched process
    ps_process = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep_process = subprocess.Popen(["grep", "-i", searched_process_name], stdin=ps_process.stdout, stdout=subprocess.PIPE)
    # The .stdout.close() lines below allow the previous process to receive a SIGPIPE if the next process exits.
    ps_process.stdout.close()
    # Cleaning the result until only the PID number is returned in a string
    grep_cutout = subprocess.Popen(["grep", "-v", cut_grep_out_of_results], stdin=grep_process.stdout, stdout=subprocess.PIPE)
    grep_process.stdout.close()
    awk = subprocess.Popen(["cut", "-c", "10-14"], stdin=grep_cutout.stdout, stdout=subprocess.PIPE)
    grep_cutout.stdout.close()
    output = awk.communicate()[0]
    clean_output = output.decode(result_string_encoding)
    clean_output_no_new_line = clean_output.rstrip()
    clean_output_no_quote = clean_output_no_new_line.replace("'", '')
    PID = clean_output_no_quote
    # Terminating the LCD script process
    subprocess.Popen(["kill", "-9", PID])

while run:
    kill_script()
    # Cleaning and shutting off LCD screen
    mylcd.lcd_clear()
    mylcd.lcd_device.write_cmd(LCD_NOBACKLIGHT)
    break
I found out the reason for this weird behavior. An error on my end:
I forgot that I had given some directories names containing the character string I was running grep -i against, which provoked the double result when running the script from outside its directory using its full path.
Turns out the script runs pretty well using subprocess.
So in the end, I renamed the scripts I wanted to terminate with disp_ rather than lcd_, and added shell=False to my subprocess calls to make sure there was no risk of unwantedly sending the output to bash while running the script.
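As a side note, the whole ps / grep / cut pipeline can often be replaced by a single pgrep call, which also never matches its own process. A minimal sketch, assuming Python 3.7+ (for capture_output) and scripts renamed to start with disp_:
import subprocess

# pgrep -f matches against the full command line and prints one PID per line
result = subprocess.run(["pgrep", "-f", "disp_"], capture_output=True, text=True)
for pid in result.stdout.split():
    subprocess.run(["kill", pid])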
I'm trying to port a shell script to the much more readable python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in python? I'd like these processes not to die when the python scripts complete. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
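A minimal sketch of that spawn-without-blocking pattern, using sleep as a stand-in for a long-running command:
import subprocess
import time

proc = subprocess.Popen(["sleep", "5"])  # returns immediately; the child runs on its own
while proc.poll() is None:  # poll() returns None while the child is still alive
    print("still running...")
    time.sleep(1)
print("done, exit code:", proc.returncode)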
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(Note that os.P_DETACH is Windows-only; on other platforms you can use the more portable os.P_NOWAIT flag.)
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Both capture output and run on background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to a log file and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking new processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## os.execv uses args[0] as the program to run and args as its arguments
    os.execv(args[0], args)
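Wrapped up as a complete function, a minimal sketch could look like this on Unix (the spawn name and the sleep command are only illustrations):
import os
import sys

def spawn(args):
    # args is a list such as ['/bin/sleep', '30']; args[0] is the program path
    try:
        pid = os.fork()
    except OSError:
        sys.exit(1)
    if pid == 0:
        # child: replace this process image with the target program
        os.execv(args[0], args)
    # parent: keep going; call os.waitpid(pid, 0) if you need to wait for the child
    return pid

spawn(['/bin/sleep', '30'])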
You can use
import os

pid = os.fork()
if pid == 0:
    # child process: continue with the other code here
    ...
This will run the forked child Python process in the background while the parent continues.
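If you need the child to survive the parent's terminal like a real daemon, the classic Unix recipe is the double fork. A minimal sketch (the sleep command is only an example of a long-running program):
import os
import sys

def daemonize():
    if os.fork() > 0:
        sys.exit(0)  # first parent exits
    os.setsid()  # start a new session, detaching from the controlling terminal
    if os.fork() > 0:
        sys.exit(0)  # second parent exits, so the child can never reacquire a TTY
    os.execvp("sleep", ["sleep", "60"])  # example long-running command

daemonize()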
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console window on Windows, so in theory the script should not appear and should work like a background process.
I'm using Python 2.7 with Glade 3.15 to create a GUI that allows click-button execution of a variety of existing bash/cshell scripts maintained by my work team. I'm fairly new to Python, but have managed to get the basic application structure up-and-running. However, certain bash scripts I'm calling will step through multiple user prompts and take input to determine end behavior. The problem I am encountering is when I call a bash script as a python subprocess, the bash script appears to take a null input over-and-over, thus causing the prompts to loop endlessly.
For example:
A bash script that prompts:
"Please enter your 4 digit document number:"
** accept user input in terminal **
"You entered ----, is that correct?
1.) Yes
2.) No "
When called from Python, the terminal will press through the prompts, sending an empty response. Since the bash script loops until an affirmative response is received, the result is a terminal endlessly printing:
"You entered ----, is that correct?
1.) Yes
2.) No "
I've tried extensively to find answers, here and elsewhere, regarding this issue, but have not found/developed a solution yet.
My basic Python, relative to this problem, is as follows (although I have tried a wide variety of different approaches):
import subprocess
from subprocess import Popen, PIPE
...
# Definition for subprocess calls
def subprocess_cmd(self, command):
    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    process.wait()
    (output, err) = process.communicate()
    print output
...
# Script-Call Button
def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("scriptname_is_here")
I just want to call a subprocess from my Python button-click event that kicks off the bash script in the terminal and waits for keyboard input to walk through the prompts, as it would if it were run directly from the terminal. Sorry so long - wanted to be thorough and explicit. Thanks in advance for any help.
*****UPDATE****
If I call the subprocess from another standalone python file with the .wait() method, the interaction works as desired. But, when I call the subprocess as a result of the GUI button_click event, with the same arguments and methods, the looping anomaly happens. I think this has to do with my button click event and subprocess_cmd 'function' being defined in my mainDialog class, but I don't know how to separate them while retaining my connection to GUI.
Here is more context for my code
#!/usr/bin/python
# Library Imports
from gi.repository import Gtk
from os import system
import subprocess
from subprocess import Popen, PIPE
import sys
import time

try:
    import math
except:
    print "Math Library Missing"
    sys.exit(1)

class mainDialog:

    # Build the 'form load' parameters
    def __init__(self):
        self.gladefile = "test.glade"
        self.builder = Gtk.Builder()
        self.builder.add_from_file(self.gladefile)
        self.builder.connect_signals(self)
        self.winMain = self.builder.get_object("winMain")
        self.winCptArg = self.builder.get_object("winCptArg")
        self.winMsbHelp = self.builder.get_object("winMsbHelp")
        self.winCptHelp = self.builder.get_object("winCptHelp")
        self.winAiHelp = self.builder.get_object("winAiHelp")
        self.winMain.move(2625, 400)
        self.winMain.show()

    # Definition for subprocess calls
    def subprocess_cmd(self, command):
        process = subprocess.Popen(command)
        process.wait()

    ...

    # Script-Call Button
    def on_btnScript_clicked(self, object, data=None):
        self.subprocess_cmd("scriptname_is_here")

if __name__ == "__main__":
    main = mainDialog()
    Gtk.main()
Just use os.system:
from os import system
...
# Definition for subprocess calls
def subprocess_cmd(self, command):
    process = system(str(command))
...
# Script-Call Button
def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("echo scriptname_is_here")
The syntax is os.system("executable option parameter").
For example,
os.system("ls -al /home")
Well, if anyone is interested: to achieve what I was intending, I simply left stdin and stdout alone and applied the .wait() method to the subprocess definition -- but this only works when called from a standalone Python script; I haven't been able to retain the functionality when connected to the GUI button-click event.
def subprocess_cmd(self, command):
    process = subprocess.Popen(command).wait()
...
def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("filepath/scriptname_is_here")
stdin and stdout can be left at their defaults, and standard terminal interaction can be achieved, as long as the subprocess call is followed by the wait() method.
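One workaround sketch that keeps the interactive prompts usable from a GUI click is to give the bash script its own terminal window; this assumes an X environment with xterm installed, and the script path is a placeholder:
import subprocess

# xterm -e runs the command in a fresh terminal window,
# so the bash script gets a real TTY for its prompts
subprocess.Popen(["xterm", "-e", "filepath/scriptname_is_here"])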
I am trying to learn how to write a script control.py that runs another script test.py in a loop for a certain number of times, reads its output in each run, and halts it if some predefined output is printed (e.g. the text 'stop now'), after which the loop continues its iteration (once test.py has finished, either on its own or by force). So something along these lines:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now':
        # stop the current test.py process and continue with the next iteration;
        # output here is supposed to contain what test.py prints
        ...
The problem with the above is that it does not check the output of test.py as it is running; instead, it waits until the test.py process has finished on its own, right?
Basically, I am trying to learn how I can use a Python script to control another one as it is running (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random

perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess

command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
You can use the subprocess module, or also os.popen:
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen.
With popen you can read from the connected file object and check whether "stop now" is there.
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can only check the return code at the end of execution.
With subprocess.call you can run it in a new terminal; or, if you want to call ONLY test.py multiple times, you can put your script in a def main() and run main as often as you want until "stop now" is generated.
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call --> os.system("test.py *args >> /tmp/mickey.txt") --> then you can check the file on each round.
As said, popen returns a file-like object that you can read from.
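A minimal sketch of the popen idea, assuming test.py is the script from the question and sits in the current directory:
import os

pipe = os.popen("python test.py someargument")  # read the child's stdout
for line in pipe:
    if "stop now" in line:
        print("found it, stop reading")
        break
pipe.close()
Note that os.popen gives you no handle to kill the child early, which is one reason to prefer subprocess.Popen here.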
What you are hinting at in your comment to Marc Cabos' answer is threading.
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However if you want to run the test.py concurrently and access variables as they are changed then you need to consider threading.
Yes, you can use Python to control another program via its stdin/stdout, but when consuming another process's output there is often a problem of buffering; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on whether the program is started from a terminal or not.
If you are the author of both programs, it is probably better to use another interprocess channel where flushing is explicitly controlled by the code, such as sockets.
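If the child is a Python script you control, one common workaround for the buffering problem is to start it with the -u flag, which disables the child interpreter's output buffering. A sketch, with test.py standing in for the script from the question:
import subprocess

# -u makes every print reach the pipe immediately instead of sitting in a buffer
p = subprocess.Popen(["python", "-u", "test.py", "someargument"],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)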
You can use the "subprocess" library for that.
import subprocess

command = ["python", "test.py", "someargument"]
n = 10  # assumed number of runs, as in the question
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()  # strip the trailing newline before comparing
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            #Do whatever you want
            rc = p.poll() #Exit Code