Execute and wait for results: C program and Python script - python

My task today is to create a Python script (say A.py) which can do the following things:
Start a C program (say CProg), passing some params
Start another Python script (say B.py), passing other params
Join/wait until B.py has finished
Send a SIGINT to CProg
Iterate (this won't be a problem at all, I think :P)
Since I'm pretty new to developing Python scripts and my mind is quite full of C/C++ thread/join/execve/..., I'd like to ask whether there's a proper way to accomplish this. I've read some related topics on SO (some talk about PIPEs or execl) but I'm not sure what to use yet.
Thanks in advance

Use the subprocess module.
import os
import signal
import subprocess
import sys

params = [...]
for param in params:
    # Start CProg in the background
    proc = subprocess.Popen(['/path/to/CProg', param.., param..])
    # Run B.py and wait until it finishes
    subprocess.call([sys.executable, 'B.py', param.., param...])
    # Interrupt CProg, then reap it
    os.kill(proc.pid, signal.SIGINT)
    proc.wait()
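If CProg might take a moment to shut down cleanly, a hedged variant of the last two lines (the 10-second timeout is an arbitrary choice, not from the original answer):
proc.send_signal(signal.SIGINT)   # same effect as os.kill(proc.pid, signal.SIGINT) on POSIX
try:
    proc.wait(timeout=10)         # wait up to 10 s for a clean exit
except subprocess.TimeoutExpired:
    proc.kill()                   # escalate if CProg ignores the signal
    proc.wait()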

Related

Python subprocess shell scripts still runs in background

I am running two Python scripts using subprocess, and one of them keeps running.
import subprocess
subprocess.run("python3 script_with_loop.py & python3 script_with_io.py", shell=True)
script_with_loop still runs in the background.
What is the way to kill both scripts if one of them dies?
So, you're basically not using Python here; you're using your shell.
a & b runs a in the background, disowns it, and runs b. Since you're using the shell, if you wanted to terminate the background task, you'd have to use shell commands to do that.
Of course, since you're using python, there is a better way.
with subprocess.Popen(["somecommand"]) as proc:
    try:
        subprocess.run(["othercommand"])
    finally:
        proc.terminate()
Looking at your code though - python3 script_with_loop.py and python3 script_with_io.py - my guess is you'd be better off using the asyncio module because it basically does what the names of those two files are describing.
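A minimal asyncio sketch of that idea, assuming the two file names from the question: start both scripts, wait for either one to exit, then terminate the survivor.
import asyncio

async def main():
    # Start both scripts as child processes.
    loop_proc = await asyncio.create_subprocess_exec("python3", "script_with_loop.py")
    io_proc = await asyncio.create_subprocess_exec("python3", "script_with_io.py")
    # Wait until the first one exits...
    await asyncio.wait(
        [asyncio.create_task(loop_proc.wait()),
         asyncio.create_task(io_proc.wait())],
        return_when=asyncio.FIRST_COMPLETED,
    )
    # ...then terminate whichever is still running.
    for proc in (loop_proc, io_proc):
        if proc.returncode is None:
            proc.terminate()
            await proc.wait()

asyncio.run(main())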
You could also use threading for this sort of thing. Try this:
import os
import threading

def script_with_loop():
    try:
        ...  # script_with_loop.py code goes here
    except Exception:
        os._exit(1)

def script_with_io():
    try:
        ...  # script_with_io.py code goes here
    except Exception:
        os._exit(1)

t1 = threading.Thread(target=script_with_loop, daemon=True)
t2 = threading.Thread(target=script_with_io, daemon=True)
t1.start()
t2.start()
# Keep the main thread alive: daemon threads die with it,
# and os._exit() tears everything down if either worker fails.
t1.join()
t2.join()

Python subprocess always waits for program [duplicate]

I'm trying to port a shell script to a much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find an easy way to do it.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned by Popen will block until it completes, so don't do that if you want the process to run in the background:
import subprocess
proc = subprocess.Popen(["sleep", "30"])
proc.communicate()  # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
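A quick way to check on a background process without blocking is Popen.poll(); a small sketch of that:
import subprocess

proc = subprocess.Popen(["sleep", "30"])
# ... do other work here ...
if proc.poll() is None:   # poll() returns None while the child is still running
    print("still running in the background")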
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the more portable os.P_NOWAIT flag; os.P_DETACH is Windows-only).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Win32 API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
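On recent Python (3.7+), the same flag is exposed as subprocess.DETACHED_PROCESS, so the manual definition shouldn't be needed:
import subprocess, sys

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=subprocess.DETACHED_PROCESS).pid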
Use subprocess.Popen() with the close_fds=True parameter, which closes the file descriptors inherited from the Python process, so the spawned subprocess is detached from it and can continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child: simulate some work, then speak up on macOS
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    # spawn a detached copy of this script as the child
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process can block.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to a log file and to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, eventually containing:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (open an interactive session and issue help(os)). The relevant functions are fork and any of the exec ones. To give you an idea on how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## optionally use os.putenv(..) to set environment variables
    ## os.execv uses args[0] as the program to run and passes the whole list as its argv
    os.execv(args[0], args)
You can use
import os

pid = os.fork()
if pid == 0:
    ...  # continue with the child's code here
This forks the Python process; the child (where pid == 0) keeps running in the background.
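If the goal is a fully detached daemon, the classic Unix recipe is the double fork; a hedged sketch of that standard technique (not from the original answer):
import os, sys

if os.fork() > 0:
    sys.exit(0)    # parent returns to the shell immediately
os.setsid()        # child starts a new session, detaching from the controlling terminal
if os.fork() > 0:
    os._exit(0)    # first child exits; the grandchild can never reacquire a controlling tty
# ... long-running daemon code goes here ...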
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console, so in theory the script should not show a window and should behave like a background process.

Python Subprocess stdin

I am using 2 Python scripts:
Script 1 uses subprocess.Popen() to execute Script 2 as a separate process. While this process executes (it takes some time), Script 1 is doing other stuff.
Question 1) Is subprocess.Popen the best way to solve this?
Question 2) Is there any way to pass variables (only int/float values) from Script 1 to Script 2 BESIDES using communicate()? How do I make use of these variables in Script 2 (i.e. how do I address them)?
Thanks for any help!
If both are Python, why bother going through the shell at all?
You could just use Python's threading. If you do go through a subprocess, you can pass the arguments like this:
process = subprocess.Popen(["python", "script2.py", "args"])
process.communicate()
or at least something along those lines (this may be slightly off; I've never launched a Python script with Popen myself). How I'd do it is by using the Python threading module (so this will not work with the Popen approach described above; if you want a specific answer using that, I'd like to see a bit of what those script files look like). Anyway:
script1.py:
from threading import Thread
from sources.script2 import prnt
import time
# 'Script1 function'
def prnt1():
    for i in range(5000):
        print('script1: %s' % i)
        time.sleep(0.5)

# Starting 'script2'
t = Thread(target=prnt, args=(100, 4500))
t.start()
prnt1()
script2.py:
import time
def prnt(start, stop):
    for i in range(start, stop):
        print('script2: %s' % i)
        time.sleep(0.5)
But as said above, if we don't know anything about the scripts you want to run, it's hard to give advice. Does your script have a function defined in it as an entry point, or does it only run standalone? etc.
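To address Question 2 directly: values passed on the command line arrive in Script 2 as strings in sys.argv, so Script 2 converts them back itself. A small sketch (the names start and factor are made up for illustration):
# script2.py
import sys

start = int(sys.argv[1])      # command-line arguments are strings; convert back to int
factor = float(sys.argv[2])   # ... and back to float
print(start * factor)
Called from Script 1 with, e.g., subprocess.Popen([sys.executable, "script2.py", "100", "0.5"]).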

Python linux shells

In my program I want to access multiple Linux shells, each from a different process.
Currently I am using subprocess. I don't have a Linux machine to test this on at the moment, so can you tell me if this works?
Does subprocess work within a single terminal? If so, is there an alternative?
This is something like what I am developing:
import multiprocessing
import subprocess
def doSomething(filepath):
    subprocess.call("somecommands")
    subprocess.call("somecommands")

if __name__ == "__main__":
    while True:
        processList = []
        for i in range(numberOfThreads):
            process = multiprocessing.Process(target=doSomething, args=[files])
            process.start()
            processList.append(process)
        for process in processList:
            process.join()
You should use the Popen feature of the subprocess module; that way, I don't think you will need multiprocessing anymore, since it doesn't look like you're doing anything serious with shared data.
Now your code should look like:
import subprocess as s_p
s_p.Popen(['command', 'arg1', 'arg2'])  # pass the command and its arguments as a list
print('Process started in a separate process')
I believe this will do your job!
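If you literally need several shell instances, each Popen below starts its own shell process, no multiprocessing required; a hedged sketch (bash and the echo commands are placeholders):
import subprocess

procs = [subprocess.Popen(["bash", "-c", cmd]) for cmd in ("echo one", "echo two")]
for p in procs:
    p.wait()   # reap each shell once its command finishes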

Need to find which program called the python script

I am using a build system (waf) which is a wrapper around Python. There are some programs (Perl scripts, exes, etc.) calling the Python build system. When I execute the build scripts from cmd.exe, I need to find out which program called them. My OS is Windows 7. I tried getting the parent PID in a Python module and it returns "cmd" as the PPID and "python.exe" as the PID, so that approach did not help me find what I am looking for.
I believe I should be looking at some stack traces at the OS level, but I am not able to find out how to do it. Please help me with the approach I should take or a possible code snippet. I just need to know the name of the script or program that called the system, e.g. caller.perl, callload.exe
Thank you
Though I am not sure why it would be needed, this is a fun problem in itself, so here are a few tips: once you have the parent PID, loop through the processes and get the name, e.g.
using WMI:
import wmi

c = wmi.WMI()
for process in c.Win32_Process():
    if process.ProcessId == ppid:
        print(process.ProcessId, process.Name)
I think you can do the same thing using the win32 API, e.g.:
import win32api, win32con, win32process

processes = win32process.EnumProcesses()
for pid in processes:
    if pid == ppid:
        handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, False, pid)
        exe = win32process.GetModuleFileNameEx(handle, 0)
This will work for simple cases where progA directly executes progB, but if there is a long chain of child processes in between, it may not be a good solution. The best way for the generic case would be for the calling program to announce its own identity by passing it as an argument, e.g.
progB --calledfrom progA
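A simpler cross-platform option nowadays is the third-party psutil package (assuming installing it is acceptable; pip install psutil):
import os
import psutil

# Walk one level up the process tree and print the parent's name.
parent = psutil.Process(os.getpid()).parent()
if parent is not None:
    print(parent.pid, parent.name())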
Modify the Python script to take an argument stating which file called it, then log that to a log file. All scripts calling it will have to identify themselves to the Python script via the argument vector.
For example:
foo.pl calls yourfile.py as:
yourfile.py /path/to/foo.pl
yourfile.py:
import logging
import sys

logging.basicConfig(filename='callers.log', level=logging.INFO)  # log file name is arbitrary

def main(argv):
    logging.info('called from %s', argv[1])

if __name__ == '__main__':
    main(sys.argv)
I was able to use Process Explorer to see the chain of processes and retrieve the name by just traversing up to the parent. Thanks to all who replied.
