Python running synchronously? Running one executable at a time

Trying to use Python to control numerous compiled executables, but running into timing issues! I need to be able to run two executables simultaneously, and also be able to 'wait' until an executable has finished before starting another one. Also, some of them require superuser. Here is what I have so far:
import os
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = "~/Desktop/folder/"
commandA = filename+executable1
commandB = filename+executable2
commandC = filename+executable3
os.system('echo %s | sudo %s; %s' % (sudoPassword, commandA, commandB))
os.system('echo %s | sudo %s' % (sudoPassword, commandC))
print ('DONESIES')
Assuming that os.system() waits for the executable to finish prior to moving to the next line, this should run EXEC1 and EXEC2 simultaneously, and after they finish run EXEC3...
But it doesn't. Actually, it even prints 'DONESIES' in the shell before commandB even finishes...
Please help!

Your script will still execute all 3 commands sequentially. In shell scripts, the semicolon is just a way to put more than one command on one line; it doesn't do anything special, it just runs them one after the other. (A trailing & is what makes the shell run a command in the background.)
If you want to run external programs in parallel from a Python program, use the subprocess module: https://docs.python.org/2/library/subprocess.html
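For instance, a minimal sketch of the parallel pattern, using /bin/sleep as a stand-in for your executables:
import subprocess

# Popen returns as soon as the child is started, so both run at once
p1 = subprocess.Popen(["/bin/sleep", "2"])
p2 = subprocess.Popen(["/bin/sleep", "2"])

# wait() blocks until the corresponding process has exited
p1.wait()
p2.wait()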

Use subprocess.Popen to run multiple commands in the background. If you just want the programs' stdout/stderr to go to the screen (or be discarded entirely), it's pretty straightforward. If you want to process the output of the commands, that gets more complicated: you'd likely start a thread per command (a sketch of that follows the example below).
But here is the case that matches your example:
import os
import subprocess as subp
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = os.path.expanduser("~/Desktop/folder/")
commandA = os.path.join(filename, executable1)
commandB = os.path.join(filename, executable2)
commandC = os.path.join(filename, executable3)
def sudo_cmd(cmd, password):
    # sudo -S makes sudo read the password from stdin instead of the terminal
    p = subp.Popen(['sudo', '-S'] + cmd, stdin=subp.PIPE)
    p.stdin.write(password + '\n')  # on Python 3, write bytes: (password + '\n').encode()
    p.stdin.close()
    return p
# run A and B in parallel
exec_A = sudo_cmd([commandA], sudoPassword)
exec_B = sudo_cmd([commandB], sudoPassword)
# wait for A before starting C
exec_A.wait()
exec_C = sudo_cmd([commandC], sudoPassword)
# wait for the stragglers
exec_B.wait()
exec_C.wait()
print ('DONESIES')
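And a hedged Python 3 sketch of the thread-per-command idea mentioned above, in case you do need to capture output: one reader thread per process keeps either stdout pipe from filling up and blocking its child. The /bin/echo commands are placeholders.
import subprocess as subp
import threading

def pump(name, proc):
    # read this process's stdout line by line until the pipe closes
    for line in proc.stdout:
        print(name, line.decode().rstrip())

procs = {
    "A": subp.Popen(["/bin/echo", "hello"], stdout=subp.PIPE),
    "B": subp.Popen(["/bin/echo", "world"], stdout=subp.PIPE),
}
threads = [threading.Thread(target=pump, args=(n, p)) for n, p in procs.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
for p in procs.values():
    p.wait()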

Related

How to terminate one Python script when many Python scripts are running?

Hello guys, I have 3 Python scripts running at the same time. I want to terminate (kill) one of them from another Python file. That is, if we run many Python scripts at the same time, how can we terminate or kill one or two of them? Is it possible with the os or subprocess modules? I tried to use them, but they kill all the Python scripts by killing python.exe.
FirstSc.py
UserName = input("Enter your username = ")
if UserName == "Alex":
    # Terminate or kill the Python file at C:\MyScripts\FileTests\SecondSc.py
SecondSc.py
while True:
    print("Second app is running ...")
ThirdSc.py
while True:
    print("Third app is running ...")
Thanks guys, I got good answers. Now, if we have a batch file like SecBatch.bat instead of SecondSc.py, how do we do this? That is, we have these files and run FirstSc.py and SecBatch.bat at the same time:
FirstSc.py, at D:\MyFiles\FirstSc.py:
UserName = input("Enter your username = ")
if UserName == "Alex":
    # 1) How to print SecBatch.bat's contents, i.e. print:
    #      CALL C:\MyProject\Scripts\activate.bat
    #      python C:\pyFiles\ThirdSc.py
    # 2) Terminate or kill SecBatch.bat
    # 3) Terminate or kill ThirdSc.py
SecBatch.bat, at C:\MyWinFiles\SecBatch.bat, which activates a Python virtual environment and then runs a Python script at C:\pyFiles\ThirdSc.py:
CALL C:\MyProject\Scripts\activate.bat
python C:\pyFiles\ThirdSc.py
ThirdSc.py, at C:\pyFiles\ThirdSc.py:
from time import sleep
while True:
    print("Third app is running ...")
    sleep(2)
I would store the PID of each script in a standard location. Assuming you are running on Linux, I would put them in /var/run/. Then you can use os.kill(pid, 9) to do what you want. Some example helper funcs would be:
import os
import sys

def store_pid():
    pid = os.getpid()
    # Get the name of the script
    # Example: /home/me/test.py => test
    script_name = os.path.basename(sys.argv[0]).replace(".py", "")
    # Write to /var/run/test.pid
    with open(f"/var/run/{script_name}.pid", "w") as f:
        f.write(str(pid))

def kill_by_script_name(name):
    # Check the pid file is there
    pid_file = f"/var/run/{name}.pid"
    if not os.path.exists(pid_file):
        print("Warning: cannot find PID file")
        return
    with open(pid_file) as f:
        # The following might throw ValueError if the pid file
        # contains something other than digits
        pid = int(f.read().strip())
    os.kill(pid, 9)
Later in FirstSc:
if UserName == "Alex":
    kill_by_script_name("SecondSc")
    kill_by_script_name("ThirdSc")
NOTE: The code is not tested :) but it should point you in the right direction (at least toward one common way of solving this problem).
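For completeness, each target script would record its own PID at startup. A hedged sketch for SecondSc.py, assuming the helpers above live in an importable module (pid_utils is a hypothetical name):
from time import sleep
from pid_utils import store_pid  # hypothetical module holding the helpers above

store_pid()  # writes /var/run/SecondSc.pid
while True:
    print("Second app is running ...")
    sleep(2)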
You may be able to terminate a Python process by the name of the script file using system commands such as taskkill (or pkill on Linux systems). However, a better way to accomplish this would be (if possible) to have FirstSc.py or whatever script that's doing the killing launch the other scripts using subprocess.Popen(). Then you can call terminate() on it to end the process:
import subprocess
# Launch the two scripts
# You may have to change the Python executable name
second_script = subprocess.Popen(["python", "SecondSc.py"])
third_script = subprocess.Popen(["python", "ThirdSc.py"])
UserName = input("Enter your username = ")
if UserName == "Alex":
    second_script.terminate()
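If launching the scripts yourself is not an option, here is a hedged sketch of the kill-by-name route mentioned above. pkill is a Linux/macOS tool; on Windows you would reach for taskkill, which has its own filter syntax:
import subprocess

# -f matches the full command line, so this targets the script's name rather
# than the python.exe image name (which would kill every Python process)
subprocess.run(["pkill", "-f", "SecondSc.py"])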

Pexpect in Windows Batch File > Cygwin > Python > SSH

I have a Linux box that runs Cisco IOS, and I sometimes need to SSH into it to reboot it. I've written a batch file that calls Cygwin; Cygwin then runs Python with the script below.
Batch File:
cd c:\cygwin64\bin
bash --login -i -c "python3 /home/Owner/uccxtesting.py"
Python Script
import pexpect
import time
import sys
server_ip = "10.0.81.104"
server_user = "administrator"
server_pass = "secretpassword"
sshuccx1 = pexpect.spawn('ssh %s@%s' % (server_user, server_ip))
sshuccx1.logfile_read = sys.stdout.buffer
sshuccx1.timeout = 180
sshuccx1.expect('.*password:')
sshuccx1.sendline(server_pass)
sshuccx1.expect('.admin:')
sshuccx1.sendline('utils system restart')
sshuccx1.expect('Enter (yes/no)?')
sshuccx1.sendline('yes')
time.sleep(30)
When I run this, it stops at the Enter (yes/no) prompt. I've seen plenty of examples of pexpect with expect, but there is some whitespace out beside the question mark, and I just don't know how to tell Python to expect it.
There may be a bug:
utils system restart prompts for force restart (https://bst.cisco.com/bugsearch/bug/CSCvw22828)
Replace time.sleep(30) with the following code to answer a possible force restart prompt. If it works, you can get rid of the try...except and print commands that I added for debugging:
try:
    index = -1
    while index != 0:
        # expect_exact returns the index of the string that matched
        index = sshuccx1.expect_exact(['succeeded', 'force'], timeout=300)
        if index == 1:
            print('Forcing restart...')
            sshuccx1.sendline('yes')
    print('Operation succeeded')
    print(str(sshuccx1.before))
except pexpect.ExceptionPexpect:
    e_type, e_value, _ = sys.exc_info()
    print('Error: ' + pexpect.ExceptionPexpect(e_type).get_trace())
    print(e_type, e_value)
Also, change sshuccx1.expect('Enter (yes/no)?') to sshuccx1.expect_exact('Enter (yes/no)?'). The expect method tries to match a regex pattern, and it gets caught on the parentheses, which regex treats as a group (see https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.expect_exact).
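A quick way to see the regex problem in isolation, with plain re and the same prompt text:
import re

prompt = "Enter (yes/no)?"
# unescaped parentheses form a capture group, so this pattern looks for
# "Enter yes/no?" and never matches the prompt's literal parentheses
print(re.search(r"Enter (yes/no)\?", prompt))    # -> None
# escaping the metacharacters matches literally, like expect_exact does
print(re.search(r"Enter \(yes/no\)\?", prompt))  # -> a match object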

Run multiple bash lines in Python and separately check their status and output

I am trying to execute several lines of bash in Python 3 and check the status of each line separately.
I first tried to use getstatusoutput from subprocess, but each line is run in a separate process that does not communicate with the others. (For the sake of simplicity, the given MWE consists of setting a variable, but what I intend to do in my actual code is more complex than that, and I know about os.environ for this very specific example.)
from subprocess import getstatusoutput as cmd
stat, out = cmd("export TEST=1")
stat, out = cmd("echo $TEST")
will therefore return:
>>> print((stat, out))
(0, '')
I then tried the following:
cmdline = """export TEST=1
echo $TEST"""
stat, out = cmd(cmdline)
That works, but it forces me to parse the output, especially if I want to check the status of the first command (if the echo works, the status returned by cmd is 0 whatever happened before), which is not very robust.
I saw some things using Popen (still from subprocess) but was unable to use it efficiently.
Any help would be appreciated!
To me, it looks like you are trying to share an environment variable between two processes, which is not possible: each child gets a copy of the parent's environment, and changes never propagate back.
It looks like this:
Process 1: python main.py                # TEST = ""
  |-- Process 2 --> "export TEST=1"      # changes Process 2's own TEST to '1'
  |-- Process 3 --> "echo $TEST"         # prints Process 3's TEST (inherited from Process 1)
You can use os.environ to change the current environment first (the Process 1 variable); any process forked later on inherits it. Something like this:
import os
import subprocess

os.environ['TEST'] = '1'
# check_call returns the exit status (0 on success); the child's
# output still goes straight to this process's stdout
out = subprocess.check_call('echo $TEST', shell=True)
I ended up doing the following:
create a launch command wrapping subprocess.Popen to launch my bash commands, which in addition allows me either to retrieve the current environment or to pass a custom environment
create a get_env function to parse the return of the previous command and get a dict of the environment
The launch wrapper:
import os
import subprocess as sp

def launch(cmd_, env=os.environ, get_env=False):
    # append printenv so the child's final environment shows up in its output
    if get_env:
        cmd_ += " && printenv"
    load = sp.Popen(cmd_, shell=True, stdout=sp.PIPE, stderr=sp.PIPE, env=env)
    out = load.communicate()
    err = load.returncode
    return err, out
Retrieve the environment
def get_env(out, encoding='utf-8'):
    lout = str(out[0], encoding).split('\n')
    new_env = {}
    for line in lout:
        # skip lines that are not KEY=VALUE pairs
        if len(line.split('=')) <= 1:
            pass
        else:
            k = line.split("=")[0]
            v = "=".join(line.split("=")[1:])
            new_env[k] = v
    return new_env
(This is a simple version, it may be more complicated if you have things like functions in your environment — it happens.)
Results:
I can use it as follows:
err, out = launch("export TEST=1", get_env=True)
if not err: new_env = get_env(out)
err, out = launch("echo $TEST", env=new_env)
and therefore:
>>> print(str(out[0], encoding='utf-8'))
1
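An alternative sketch, for comparison: build the child environment up front and pass it straight to Popen, instead of parsing printenv output afterwards. dict(os.environ) takes a copy, so the parent environment is left untouched:
import os
import subprocess as sp

env = dict(os.environ)
env["TEST"] = "1"
p = sp.Popen("echo $TEST", shell=True, stdout=sp.PIPE, env=env)
out, _ = p.communicate()
print(out.decode().strip())  # prints: 1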

Automatically restarting Python Script on Exception

I have a really complicated Python script going on; sometimes it just gets an error, and the only way to debug this is restarting it, because everything else would make no sense and the error would come back in no time (I already tried a lot of things, so please don't concentrate on that).
I want a .bat script (I'm on Windows, unfortunately) that restarts my Python script whenever it ends.
Another Python script is also fine.
How can I do that?
Thanks in advance
set env=python.exe
tasklist /FI "IMAGENAME eq python.exe" 2>NUL | find /I /N "python.exe">NUL
if "%ERRORLEVEL%" NEQ "0" (
    start python script.py
)
Another way, from Python, is to execute Python:
import subprocess
from subprocess import call

def processExists(processname):
    tlcall = 'TASKLIST', '/FI', 'imagename eq %s' % processname
    # shell=True hides the shell window, stdout to PIPE enables
    # communicate() to get the tasklist command result
    tlproc = subprocess.Popen(tlcall, shell=True, stdout=subprocess.PIPE)
    # trimming it to the actual lines with information
    tlout = tlproc.communicate()[0].decode().strip().split('\r\n')
    # if TASKLIST returns a single line without the process name, it's not running
    if len(tlout) > 1 and processname in tlout[-1]:
        print('process "%s" is running!' % processname)
        return True
    else:
        print(tlout[0])
        print('process "%s" is NOT running!' % processname)
        return False
if not processExists('python.exe'):
    call(["python", "your_file.py"])
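Note that neither snippet above restarts the script when it exits; it only starts it if it is not already running. A minimal watchdog loop for the restart-on-exit part (your_file.py is a placeholder) could look like this:
import subprocess

while True:
    # call() blocks until the script exits, then the loop relaunches it
    ret = subprocess.call(["python", "your_file.py"])
    print("script exited with code %s, restarting..." % ret)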

Script execution stops at os.execlpe()

I am a bit of a newbie at Python, but I was testing some things I learned on Ubuntu.
Basically, this script is supposed to set your TCP/IP config, then restart the networking daemon and display the changes.
This is the whole script:
#!/usr/bin/env python
import commands
import os
import sys
euid = os.geteuid()
if euid != 0:
    print "Script not started as root. Running sudo.."
    args = ['sudo', sys.executable] + sys.argv + [os.environ]
    # the next line replaces the currently-running process with sudo
    os.execlpe('sudo', *args)
print 'Running. Your euid is', euid
print "IP"
IP = raw_input(">>")
print "Gateway"
PE = raw_input(">>")
ifconfig = commands.getoutput("ifconfig")
interfaz = ifconfig[0:5]
ArchivoInterfaces = open("/etc/network/interfaces", "w")
ArchivoInterfaces.write("#auto lo\n#iface lo inet loopback\nauto %s\niface %sinet static\naddress %s\ngateway %s\nnetmask 255.255.255.0"%(interfaz, interfaz, IP, PE))
ArchivoInterfaces.close()
ArchivoResolv = open("/etc/resolv.conf", "w")
ArchivoResolv.write("# Generated by NetworkManager\ndomain localdomain\nsearch localdomain\nnameserver 8.8.8.8\nnameserver 8.8.4.4")
ArchivoResolv.close()
os.execlpe('/etc/init.d/networking', "test","restart", os.environ)
print "Todo esta correcto, su IP ahora es %s" %(IP)
fin = raw_input("write d and press enter to show the changes, or press enter to exit.")
if fin == "d":
    ArchivoResolv = open("/etc/resolv.conf")
    ArchivoInterfaces = open("/etc/network/interfaces")
    ifconfig2 = commands.getoutput("ifconfig")
    print "FILE resolv.conf\n"+ArchivoResolv.read()+"\n\n"+"FILE interfaces\n"+ArchivoInterfaces.read()+"\n\n"+"OUTPUT OF \"ifconfig\"\n"+ifconfig2
fin = raw_input("Press ENTER to exit.")
Unfortunately, it keeps stopping on this line - and I'm not sure why:
os.execlpe('/etc/init.d/networking', "test","restart", os.environ)
After reaching this spot, the script runs the restart, and then just exits.
I would love to get it to run the last part of the script so I can see what changed, but I'm unable. Any ideas?
Because all of the exec family of functions work by replacing the current process with the one you execute.
If you just want to run an external command, use the spawn functions instead. (In this case, os.spawnlpe is very nearly a drop-in replacement.)
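A minimal sketch of that substitution, reusing the exact arguments from the script above; os.P_WAIT makes the call block until the command finishes, after which the rest of the script carries on:
import os

# near drop-in replacement for os.execlpe: spawnlpe takes a mode argument
# first and returns once the command exits instead of replacing this process
os.spawnlpe(os.P_WAIT, '/etc/init.d/networking', "test", "restart", os.environ)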
os.execlpe (and the similar os.exec* functions) replace the current process:
These functions all execute a new program, replacing the current process; they do not return.
