I am a bit of a newbie with Python, but was testing some things I learned on Ubuntu.
Basically, this script is supposed to set your TCP/IP config, then restart the networking daemon and display the changes.
This is the whole script:
#!/usr/bin/env python
import commands
import os
import sys
euid = os.geteuid()
if euid != 0:
print "Script not started as root. Running sudo.."
args = ['sudo', sys.executable] + sys.argv + [os.environ]
# the next line replaces the currently-running process with the sudo
os.execlpe('sudo', *args)
print 'Running. Your euid is', euid
print "IP"
IP = raw_input(">>")
print "Gateway"
PE = raw_input(">>")
ifconfig = commands.getoutput("ifconfig")
interfaz = ifconfig[0:5]
ArchivoInterfaces = open("/etc/network/interfaces", "w")
ArchivoInterfaces.write("#auto lo\n#iface lo inet loopback\nauto %s\niface %s inet static\naddress %s\ngateway %s\nnetmask 255.255.255.0" % (interfaz, interfaz, IP, PE))
ArchivoInterfaces.close()
ArchivoResolv = open("/etc/resolv.conf", "w")
ArchivoResolv.write("# Generated by NetworkManager\ndomain localdomain\nsearch localdomain\nnameserver 8.8.8.8\nnameserver 8.8.4.4")
ArchivoResolv.close()
os.execlpe('/etc/init.d/networking', "test","restart", os.environ)
print "Todo esta correcto, su IP ahora es %s" %(IP)
fin = raw_input("write d and press enter to show the changes, or press enter to exit.")
if fin == "d":
    ArchivoResolv = open("/etc/resolv.conf")
    ArchivoInterfaces = open("/etc/network/interfaces")
    ifconfig2 = commands.getoutput("ifconfig")
    print "FILE resolv.conf\n"+ArchivoResolv.read()+"\n\n"+"FILE interfaces\n"+ArchivoInterfaces.read()+"\n\n"+"OUTPUT OF \"ifconfig\"\n"+ifconfig2
fin = raw_input("Press ENTER to exit.")
Unfortunately, it keeps stopping on this line - and I'm not sure why:
os.execlpe('/etc/init.d/networking', "test","restart", os.environ)
After reaching this spot, the script runs the restart, and then just exits.
I would love to get it to run the last part of the script so I can see what changed, but I'm unable. Any ideas?
All of the exec family of functions work by replacing the current process with the one you execute, so nothing after that call ever runs.
If you just want to run an external command, use the spawn functions instead. (In this case, os.spawnlpe is very nearly a drop-in replacement.)
os.execlpe (and the similar os.exec* functions) replace the current process:
These functions all execute a new program, replacing the current process; they do not return.
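For example, here is a minimal sketch of that swap, assuming the rest of the script stays as-is (note the extra mode argument; "test" is kept as the argv[0] the original passed):
# os.P_WAIT blocks until the child exits and then returns its status,
# so the rest of the script keeps running afterwards.
os.spawnlpe(os.P_WAIT, '/etc/init.d/networking', 'test', 'restart', os.environ)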
Hello guys, I have three Python scripts running at the same time, and I want to terminate (kill) one of them from another Python file. In other words, if we run many Python scripts at the same time, how can we terminate one or two of them? Is it possible with the os or subprocess modules? I tried them, but they kill all of the scripts, because killing python.exe kills everything.
FirstSc.py
UserName = input("Enter your username = ")
if UserName == "Alex":
    # Terminate or kill the Python file at C:\MyScripts\FileTests\SecondSc.py
SecondSc.py
while True:
    print("Second app is running ...")
ThirdSc.py
while True:
    print("Third app is running ...")
Thanks guys, I got good answers. Now, what if we have a batch file like SecBatch.bat instead of SecondSc.py? That is, we have these and run FirstSc.py and SecBatch.bat at the same time:
FirstSc.py in this directory D:\MyFiles\FirstSc.py
UserName = input("Enter your username = ")
if UserName == "Alex":
    # 1) How to print SecBatch.bat's contents, i.e. print:
    #      CALL C:\MyProject\Scripts\activate.bat
    #      python C:\pyFiles\ThirdSc.py
    # 2) Terminate or kill SecBatch.bat
    # 3) Terminate or kill ThirdSc.py
SecBatch.bat, at C:\MyWinFiles\SecBatch.bat, which activates a Python virtual environment and then runs a Python script at C:\pyFiles\ThirdSc.py:
CALL C:\MyProject\Scripts\activate.bat
python C:\pyFiles\ThirdSc.py
ThirdSc.py in this directory C:\pyFiles\ThirdSc.py
from time import sleep
while True:
    print("Third app is running ...")
    sleep(2)
I would store the PID of each script in a standard location. Assuming you are running on Linux, I would put them in /var/run/. Then you can use os.kill(pid, 9) to do what you want. Some example helper funcs would be:
import os
import sys

def store_pid():
    pid = os.getpid()
    # Get the name of the script
    # Example: /home/me/test.py => test
    script_name = os.path.basename(sys.argv[0]).replace(".py", "")
    # write to /var/run/test.pid
    with open(f"/var/run/{script_name}.pid", "w") as f:
        f.write(str(pid))

def kill_by_script_name(name):
    # Check the pid file is there (same /var/run/ location used above)
    pid_file = f"/var/run/{name}.pid"
    if not os.path.exists(pid_file):
        print("Warning: cannot find PID file")
        return
    with open(pid_file) as f:
        # The following might throw ValueError if the pid file
        # contains non-numeric characters
        pid = int(f.read().strip())
    os.kill(pid, 9)
Later in FirstSc:
if UserName == "Alex":
    kill_by_script_name("SecondSc")
    kill_by_script_name("ThirdSc")
NOTE: The code is not tested :) but it should point you in the right direction (at least toward one common way to solve this problem).
You may be able to terminate a Python process by the name of its script file using system commands such as taskkill (or pkill on Linux systems). However, a better way to accomplish this would be (if possible) to have FirstSc.py, or whatever script is doing the killing, launch the other scripts using subprocess.Popen(). Then you can call terminate() on each one to end the process:
import subprocess
# Launch the two scripts
# You may have to change the Python executable name
second_script = subprocess.Popen(["python", "SecondSc.py"])
third_script = subprocess.Popen(["python", "ThirdSc.py"])
UserName = input("Enter your username = ")
if UserName == "Alex":
    second_script.terminate()
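For completeness, here is a minimal sketch of the name-based approach mentioned above (Linux only, which is an assumption about your setup; pkill -f matches against the full command line, so the script name works as a pattern, but it will hit every process whose command line contains that string):
import subprocess
# Kill any process whose command line contains "SecondSc.py"
subprocess.run(["pkill", "-f", "SecondSc.py"])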
I have a Linux box running Cisco IOS, and I sometimes need to SSH into it to reboot it. I've written a batch file that calls Cygwin; Cygwin then calls Python with my script.
Batch File:
cd c:\cygwin64\bin
bash --login -i -c "python3 /home/Owner/uccxtesting.py"
Python Script
import pexpect
import time
import sys
server_ip = "10.0.81.104"
server_user = "administrator"
server_pass = "secretpassword"
sshuccx1 = pexpect.spawn('ssh %s@%s' % (server_user, server_ip))
sshuccx1.logfile_read = sys.stdout.buffer
sshuccx1.timeout = 180
sshuccx1.expect('.*password:')
sshuccx1.sendline(server_pass)
sshuccx1.expect('.admin:')
sshuccx1.sendline('utils system restart')
sshuccx1.expect('Enter (yes/no)?')
sshuccx1.sendline('yes')
time.sleep(30)
When I run this, it stops at the Enter (yes/no) prompt. I've seen plenty of examples of pexpect with expect, but there is some whitespace out beside the question mark, and I just don't know how to tell python to expect it.
There may be a bug:
utils system restart prompts for force restart (https://bst.cisco.com/bugsearch/bug/CSCvw22828)
Replace time.sleep(30) with the following code to answer a possible force restart prompt. If it works, you can get rid of the try...except and print commands that I added for debugging:
try:
    index = -1
    while index != 0:
        index = sshuccx1.expect_exact(['succeeded', 'force'], timeout=300)
        if index == 1:
            print('Forcing restart...')
            sshuccx1.sendline('yes')
    print('Operation succeeded')
    print(str(sshuccx1.before))
except pexpect.ExceptionPexpect:
    e_type, e_value, _ = sys.exc_info()
    print('Error: ' + pexpect.ExceptionPexpect(e_type).get_trace())
    print(e_type, e_value)
Also, change sshuccx1.expect('Enter (yes/no)?') to sshuccx1.expect_exact('Enter (yes/no)?'). The expect method tries to match a regex pattern, and it may get caught on the parentheses (see https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.expect_exact).
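In other words, either of the following (untested sketches) should match the literal prompt:
# Match the literal string, with no regex interpretation:
sshuccx1.expect_exact('Enter (yes/no)?')
# Or keep expect() and escape the regex metacharacters:
sshuccx1.expect(r'Enter \(yes/no\)\?')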
I have a small git_cloner script that clones my company's projects correctly. In all my scripts, I use a func that hasn't given me problems yet:
def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()
At the end of this individual script, I use:
call_sp('cd {}'.format(branch_path))
This line does not change the directory of the terminal I ran my script in to branch_path. In fact, even worse, it annoyingly asks me for my password! When I remove the cd line above, my script no longer demands a password before completing. I wonder:
How are these python scripts actually running? Since the cd command had no permanent effect, I assume the script spawns its own private subprocess, separate from what the terminal is doing, and then kills itself when the script finishes?
Based on how #1 works, how do I force my scripts to change the terminal's directory permanently, to save me time?
Why would merely running a change directory ask me for my password?
The full script is below, thank you,
Cody
#!/usr/bin/env python
import subprocess
import sys
import time
from os.path import expanduser

home_path = expanduser('~')
project_path = home_path + '/projects'
d = {'cwd': ''}

#calling from script:
# ./git_cloner.py projectname branchname
# to make a new branch say ./git_cloner.py project branchname
#interactive:
# just run ./git_cloner.py

if len(sys.argv) == 3:
    project = sys.argv[1]
    branch = sys.argv[2]
if len(sys.argv) < 3:
    while True:
        project = raw_input('Enter a project name (i.e., mainworkproject):\n')
        if not project:
            continue
        break
    while True:
        branch = raw_input('Enter a branch name (i.e., dev):\n')
        if not branch:
            continue
        break

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

print "making new branch \"%s\" in project \"%s\"" % (branch, project)
this_project_path = '%s/%s' % (project_path, project)
branch_path = '%s/%s' % (this_project_path, branch)
d['cwd'] = project_path
call_sp('mkdir %s' % branch, **d)
d['cwd'] = branch_path
git_string = 'git clone ssh://git@git/home/git/repos/{}.git {}'.format(project, d['cwd'])
#see what you're doing to maybe need to cancel
print '\n'
print "{}\n\n".format(git_string)
call_sp(git_string)
time.sleep(30)
call_sp('git checkout dev', **d)
time.sleep(2)
call_sp('git checkout -b {}'.format(branch), **d)
time.sleep(5)
#...then I make some symlinks, which work
call_sp('cp {}/dev/settings.py {}/settings.py'.format(project_path, branch_path))
print 'dont forget "git push -u origin {}"'.format(branch)
call_sp('cd {}'.format(branch_path))
You cannot use Popen to change the current directory of the running script. Popen will create a new process with its own environment. If you do a cd within that, it will change the directory for that new process, which will then immediately exit.
If you want to change the directory for the script you could use os.chdir(path), then all subsequent commands in the script will be run from that new path.
Child processes cannot alter the environment of their parents though, so you can't have a process you create change the environment of the caller.
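A minimal sketch of that approach, reusing branch_path and call_sp from the script above:
import os
os.chdir(branch_path)   # changes this Python process's working directory
call_sp('git status')   # commands run after this inherit the new cwd
The terminal that launched the script still keeps its own directory, for the reason just given.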
Trying to use python to control numerous compiled executables, but running into timing issues! I need to be able to run two executables simultaneously, and also be able to 'wait' until an executable has finished before starting another one. Also, some of them require superuser. Here is what I have so far:
import os
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = "~/Desktop/folder/"
commandA = filename+executable1
commandB = filename+executable2
commandC = filename+executable3
os.system('echo %s | sudo %s; %s' % (sudoPassword, commandA, commandB))
os.system('echo %s | sudo %s' % (sudoPassword, commandC))
print ('DONESIES')
Assuming that os.system() waits for the executable to finish prior to moving to the next line, this should run EXEC1 and EXEC2 simultaneously, and after they finish run EXEC3...
But it doesn't. Actually, it even prints 'DONESIES' in the shell before commandB even finishes...
Please help!
Your script will still execute all 3 commands sequentially. In shell scripts, the semicolon is just a way to put more than one command on one line. It doesn't do anything special; it just runs them one after the other.
If you want to run external programs in parallel from a Python program, use the subprocess module: https://docs.python.org/2/library/subprocess.html
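A minimal sketch of the idea, reusing the commandA/commandB/commandC names from the question (and ignoring sudo, which the next answer deals with):
import subprocess
# Start EXEC1 and EXEC2 without waiting, then block until both finish
p1 = subprocess.Popen([commandA])
p2 = subprocess.Popen([commandB])
p1.wait()
p2.wait()
# Only now run EXEC3
subprocess.call([commandC])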
Use subprocess.Popen to run multiple commands in the background. If you just want the programs' stdout/err to go to the screen (or get dumped completely), it's pretty straightforward. If you want to process the output of the commands... that gets more complicated. You'd likely start a thread per command.
But here is the case that matches your example:
import os
import subprocess as subp
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = os.path.expanduser("~/Desktop/folder/")
commandA = os.path.join(filename, executable1)
commandB = os.path.join(filename, executable2)
commandC = os.path.join(filename, executable3)
def sudo_cmd(cmd, password):
    p = subp.Popen(['sudo', '-S'] + cmd, stdin=subp.PIPE)
    p.stdin.write(password + '\n')
    p.stdin.close()
    return p
# run A and B in parallel
exec_A = sudo_cmd([commandA], sudoPassword)
exec_B = sudo_cmd([commandB], sudoPassword)
# wait for A before starting C
exec_A.wait()
exec_C = sudo_cmd([commandC], sudoPassword)
# wait for the stragglers
exec_B.wait()
exec_C.wait()
print ('DONESIES')
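And if you do want to process the output, a minimal thread-per-command sketch (building on the imports and names above; run_and_capture is a hypothetical helper, not part of the original) might look like:
import threading
def run_and_capture(cmd, results, key):
    # Capture stdout so it can be examined once the command finishes
    p = subp.Popen(cmd, stdout=subp.PIPE)
    out, _ = p.communicate()
    results[key] = out
results = {}
t_a = threading.Thread(target=run_and_capture, args=([commandA], results, 'A'))
t_b = threading.Thread(target=run_and_capture, args=([commandB], results, 'B'))
t_a.start(); t_b.start()
t_a.join(); t_b.join()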
I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).
For example, I would like to view who is logged onto remote machines. By hand, I'd ssh in and run who, but how would I get this info into a script for manipulation? Something like,
import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
Here's a simple, cheap solution to get you started
from subprocess import *
p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
print p.stdout.readlines()
returns (eg)
['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
and for cpuinfo:
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect - which hasn't been updated in ages, but I digress...
The pexpect module can help you interface with ssh. More or less, here is what your example would look like.
child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.sendline('who')
child.expect(r'\$ ')      # wait for the shell prompt
output = child.before     # everything the command printed before the prompt
If your needs outgrow a simple "ssh remote-host.example.org who", then there is an awesome python library called RPyC. It has a so-called "classic" mode which allows you to execute Python code over the network almost transparently, with several lines of code. A very useful tool for trusted environments.
Here's an example from Wikipedia:
import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")

# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")
If you're interested, there's a good tutorial on their wiki.
But, of course, if you're perfectly fine with ssh calls using Popen, or just don't want to run a separate "RPyC" daemon, then this is definitely overkill.
This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.
Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running deploy-control httpd configtest will check the apache configuration on all the remote servers.
#!/usr/local/bin/python
import subprocess
import sys

# The user@host: for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
    ]

###################################################################################################
# Global Variables

Arg = None

# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])

ResultList = []
###################################################################################################

for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)

    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))

    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()

    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################

# Exit nonzero if any remote command failed, zero if everything looked OK
if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(1)
else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(0)
Fabric is a simple way to automate simple tasks like this. The version I'm currently using allows you to wrap up commands like so:
run('whoami', fail='ignore')
you can specify config options (config.fab_user, config.fab_password) for each machine you need (if you want to automate username/password handling).
More info on Fabric here:
http://www.nongnu.org/fab/
There is a new version which is more Pythonic - I'm not sure whether that is going to be better for you in this case... it works fine for me at present...