I'm learning multiprocessing in Python 2.7. I tried the following code in both Windows 7 and Fedora 20.
Code Sample
import multiprocessing
import time

def worker():
    name = multiprocessing.current_process().name
    print name, 'Starting'
    time.sleep(10)
    print name, 'Exiting'

if __name__ == '__main__':
    worker_1 = multiprocessing.Process(target=worker)
    worker_2 = multiprocessing.Process(target=worker)
    worker_1.start()
    worker_2.start()
On Windows 7, in the Task Manager, I am able to see 3 python processes running.
On Fedora 20, with the command top | grep python, I can see only one python process running.
Is it that in Linux, multiprocessing is not allowed by the operating system?
If a multiprocessing program runs like a normal program anyway, why should one prefer multiprocessing over threading?
The problem is not with Python but with your use of the top command. By default, top shows only as many processes as fit on a single screen. When your worker processes are sleeping, they use few resources and fall below that cutoff. As a result, they do not appear in the grep output.
You can either use the ps aux command to verify that the workers were created, or use the -b option of top, which is intended for redirecting top's output:
top -b | grep python
From man top:
-b : Batch mode operation
     Starts top in 'Batch mode', which could be useful for sending
     output from top to other programs or to a file. In this mode, top
     will not accept input and runs until the iterations limit you've
     set with the '-n' command-line option or until killed.
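You can also verify the workers from inside the parent process itself; a minimal sketch, assuming it is added to the __main__ block of the example above right after the start() calls:

# ask multiprocessing which child processes are still alive
print multiprocessing.active_children()
# e.g. [<Process(Process-1, started)>, <Process(Process-2, started)>]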
Related
In a program I am writing in Python, I need to completely restart the program if a variable becomes true. After looking around for a while, I found this command:
while True:
    if reboot == True:
        os.execv(sys.argv[0], sys.argv)
When executed, it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv but didn't find anything relevant, so my question is whether anyone knows what I did wrong, or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it had been opened for the first time, with all variables unassigned and no threads running).
There are multiple ways to achieve this. Start by modifying the program to exit whenever the flag turns True. Then there are several options, each one with its advantages and disadvantages.
Wrap it using a bash script.
The script should handle exits and restart your program. A really basic version could be:
#!/bin/bash
while :
do
    python program.py
    sleep 1
done
Start the program as a sub-process of another program.
Start by wrapping your program's code in a function. Then your __main__ could look like this:
from multiprocessing import Process

def program():
    ### Here is the code of your program
    ...

if __name__ == '__main__':
    while True:
        process = Process(target=program)
        process.start()
        process.join()
        print("Restarting...")
This code is relatively basic; error handling still needs to be implemented (see the sketch below).
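For instance, a minimal sketch of what that error handling could look like, reusing program() and the Process import from the snippet above and checking the child's exit code before restarting:

import time

if __name__ == '__main__':
    while True:
        process = Process(target=program)
        process.start()
        process.join()
        if process.exitcode != 0:
            print("Program exited with code", process.exitcode)
        time.sleep(1)  # avoid a tight restart loop if the program keeps crashing
        print("Restarting...")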
Use a process manager
There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar.
IMHO the third option (a process manager) is the safest approach. The other options have edge cases that you will have to handle in your own code.
This has worked for me. Add the shebang at the top of your code and call os.execv() as shown below:
#!/usr/bin/env python3
import os
import sys

if __name__ == '__main__':
    while True:
        reboot = input('Enter:')
        if reboot == '1':
            sys.stdout.flush()
            # re-exec this script, passing along the original arguments
            os.execv(sys.executable, [sys.executable, __file__] + sys.argv[1:])
        else:
            print('OLD')
I got the same "Exec format error", and I believe it is basically the same error you get when you simply type a Python script name at the command prompt and expect it to execute. On Linux that won't work because a path is required, and the execv method runs into the same problem.
You could add the path of your Python interpreter, and that error goes away, except that the name of your script then becomes a parameter and must be added to the argv list. To avoid that, make your script independently executable by adding "#!/usr/bin/python3" to the top of the script AND running chmod 755 on it.
This works for me:
#!/usr/bin/python3
# this script is called foo.py
import os
import sys
import time

if len(sys.argv) >= 2:
    Arg1 = int(sys.argv[1])
else:
    sys.argv.append(None)
    Arg1 = 1

print(f"Arg1: {Arg1}")
sys.argv[1] = str(Arg1 + 1)
time.sleep(3)
os.execv("./foo.py", sys.argv)
Output:
Arg1: 1
Arg1: 2
Arg1: 3
.
.
.
I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output=subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the more portable os.P_NOWAIT flag; os.P_DETACH is Windows-only).
See the documentation here.
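A minimal sketch of the more portable variant, assuming a POSIX system where sleep is on the PATH (the command is just a stand-in for your own):

import os

# P_NOWAIT returns the child's PID immediately instead of waiting for it;
# spawnlp searches the PATH for the executable
pid = os.spawnlp(os.P_NOWAIT, 'sleep', 'sleep', '30')
print('spawned pid %d' % pid)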
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass the DETACHED_PROCESS process-creation flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Capture the output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, while saving their stdout both to a log file and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import subprocess
import sys
import threading

def output_reader(proc, file):
    # copy the child's stdout byte by byte to our stdout and to the log file
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (by opening an interactive session and issuing help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## args[0] is both the program to run and argv[0] of the new process
    os.execv(args[0], args)
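Putting this together, a minimal hedged sketch; /bin/sleep is just a stand-in for your real long-running command:

import os
import sys

def spawn_detached(args):
    pid = os.fork()
    if pid == 0:
        os.setsid()              # new session, so the child survives the parent
        os.execv(args[0], args)  # never returns on success
    return pid                   # parent: PID of the detached child

if __name__ == '__main__':
    child = spawn_detached(['/bin/sleep', '60'])
    print('started pid %d' % child)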
You can use

import os

pid = os.fork()
if pid == 0:
    # child process: continue with the other code here
    ...

This forks a child process that keeps running in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the script should not show a window and should work like a background process.
I have a Python script that runs all day long, checking the time every 60 seconds so it can start/end tasks (other Python scripts) at specific times of the day.
This script runs almost entirely OK. Tasks start at the right time and open in a new cmd window, so the main script can keep running and sampling the time. The only problem is that it just won't kill the tasks.
import os
import time
import signal
import subprocess
import ctypes

freq = 60  # sampling frequency in seconds

while True:
    print 'Sampling time...'
    now = int(time.time())

    # initialize the task.. lets say 8:30am
    if time.strftime("%H:%M", time.localtime(now)) == '08:30':
        # The following method is used so python opens another cmd window
        # and keeps the original script running and sampling time
        pro = subprocess.Popen(["start", "cmd", "/k", "python python-task.py"], shell=True)

    # kill process attempts.. lets say 11:40am
    if time.strftime("%H:%M", time.localtime(now)) == '11:40':
        pro.kill()       # not working - nothing happens
        pro.terminate()  # not working - nothing happens
        os.kill(pro.pid, signal.SIGINT)  # not working - windows error 5 access denied

        # Kill the process using ctypes - not working - nothing happens
        ctypes.windll.kernel32.TerminateProcess(int(pro._handle), -1)

        # Kill process using windows taskkill - nothing happens
        os.popen('TASKKILL /PID ' + str(pro.pid) + ' /F')

    time.sleep(freq)
Important note: the task script python-task.py runs indefinitely. That's exactly why I need to be able to "force" kill it at a certain time while it is still running.
Any clues? What am I doing wrong? How can I kill it?
You're killing the shell that spawns your sub-process, not your sub-process.
Edit: From the documentation:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
Warning
Passing shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
So, instead of passing a single string, pass each argument separately in the list, and eschew using the shell. You probably want to use the same executable for the child as for the parent, so it's usually something like:
pro = subprocess.Popen([sys.executable, "python-task.py"])
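A hedged end-to-end sketch of why this fixes the kill problem: without shell=True, the Popen handle refers to python-task.py itself (the task script from the question), so terminate() reaches the right process:

import subprocess
import sys
import time

# start the task directly, with no intermediate cmd.exe shell
pro = subprocess.Popen([sys.executable, "python-task.py"])
time.sleep(5)
pro.terminate()   # TerminateProcess on Windows, SIGTERM on POSIX
pro.wait()        # reap the child and collect its exit code
print("exit code:", pro.returncode)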
I am running several processes on a cluster.
I start every process separately using the screen command.
It allows me to disconnect from the cluster and, when connected again, view my processes.
Starting all the screens one by one is a painful job.
I am wondering if I could do it with a Python script.
The script would open a new shell, create the screen, run the process, and disconnect.
It would also write info about all the started processes to a text file, like the process id, starting commands, etc.
Secondly, I would like to stop the processes: I would like to put the pids in a file and just run a command which will kill all the mentioned processes.
for example
the sample input file looks like:
process_name command
123 python batch_training.py
I would like to start a screen with the name given in process_name, and the command would be executed in the corresponding session.
Thanks
Can you give the screen object a command in Python? Something like:
from os import system
command = 'screen ' + '/dev/ttyUSB0 38400'
result = system(command)
result('ATZ')???
If you want to run your command without attaching to the screen session, you should also use the -dmS options with screen. So if you want to do that with Python, your code could look like this:
import subprocess
subprocess.call(["screen", "-dmS", "screen_name_1", "top"])
subprocess.call(["screen", "-dmS", "screen_name_2", "top"])
subprocess.call(["screen", "-r"])
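Building on that, a hedged sketch that reads the input file described in the question (the filename processes.txt is hypothetical) and starts one detached, named screen session per line:

import subprocess

# processes.txt is a hypothetical name for the sample input file above
with open('processes.txt') as f:
    next(f)  # skip the "process_name command" header line
    for line in f:
        name, _, command = line.strip().partition(' ')
        # screen -dmS <name> <command...> starts a detached named session
        subprocess.call(['screen', '-dmS', name] + command.split())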
Try to use os.system() with standard Linux commands, e.g.:
os.system("screen nano")
In general, in Python, you can run system commands by running
from os import system
and then
system('whatever_shell_command')
So in your case you would type:
from os import system
system('screen')
Unfortunately (this is only sort of related), you can't pass the command and its argument as separate parameters, so
system('shell_command ', argument)
will not work. If you want to do that, you will need to concatenate the two strings:
full_command = 'shell_command ' + argument
system(full_command)
I want to get screenshots of a webpage in Python. For this I am using http://github.com/AdamN/python-webkit2png/.
newArgs = ["xvfb-run", "--server-args=-screen 0, 640x480x24", sys.argv[0]]
for i in range(1, len(sys.argv)):
    if sys.argv[i] not in ["-x", "--xvfb"]:
        newArgs.append(sys.argv[i])
logging.debug("Executing %s" % " ".join(newArgs))
os.execvp(newArgs[0], newArgs)
Basically this calls xvfb-run with the correct args. But man xvfb-run says:
Note that the demo X clients used in the above examples will not exit on their own, so they will have to be killed before xvfb-run will exit.
So that means that this script will hang if this whole thing is in a loop (to get multiple screenshots), unless the X server is killed. How can I do that?
The documentation for os.execvp states:
These functions all execute a new program, replacing the current process; they do not return. [..]
So after calling os.execvp, no other statement in the program will be executed. You may want to use subprocess.Popen instead:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as: [...]
Using subprocess.Popen, the code to run xlogo in the virtual framebuffer X server becomes:
import subprocess
xvfb_args = ['xvfb-run', '--server-args=-screen 0, 640x480x24', 'xlogo']
process = subprocess.Popen(xvfb_args)
Now the problem is that xvfb-run launches Xvfb in a background process. Calling process.kill() will not kill Xvfb (at least not on my machine...). I have been fiddling around with this a bit, and so far the only thing that works for me is:
import os
import signal
import subprocess
SERVER_NUM = 99  # 99 is the default used by xvfb-run; you can leave this out
xvfb_args = ['xvfb-run', '--server-num=%d' % SERVER_NUM,
             '--server-args=-screen 0, 640x480x24', 'xlogo']
subprocess.Popen(xvfb_args)

# ... do whatever you want to do here ...

pid = int(open('/tmp/.X%s-lock' % SERVER_NUM).read().strip())
os.kill(pid, signal.SIGINT)
So this code reads the process ID of Xvfb from /tmp/.X99-lock and sends the process an interrupt. It works, but does yield an error message every now and then (I suppose you can ignore it, though). Hopefully somebody else can provide a more elegant solution. Cheers.