Run a sudo command from CherryPy - python

I'm trying to create a small application with Python & CherryPy. I need to interface with Docker: list images, instantiate images, etc. The background is probably not important; I just need to run some external commands (using subprocess) and process the outcome on the server side. Problem: you need to be root to run these commands. How do you do that from a web server?
My code below works fine when I run the 'ls' command, but fails with the 'sudo docker images' command:
subprocess.CalledProcessError: Command 'sudo docker images' returned non-zero exit status 1.
That command works fine when I run it in a terminal and give the root password. So I need a way to elevate privileges in the server. Sorry if I state this incorrectly, feel free to educate me. I'm an old Linux user but not an IT person. I researched a bit how to do this and got nowhere...
Thanks for your help
Kind regards,
Nicolas
import subprocess
import cherrypy

def externalCmd(cmd):
    return subprocess.check_output(cmd, shell=True).decode('utf-8')

class Webpages(object):
    def index(self):
        #self.images = externalCmd("sudo docker images")
        self.images = externalCmd("ls")
        return ''' Images ''' + self.images
    index.exposed = True

# run web server
cherrypy.engine.exit()
cherrypy.quickstart(Webpages(), config="webserver.conf")
The webserver.conf file contains the following:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 8080
server.thread_pool = 5
tools.sessions.on = True
tools.encode.encoding = "Utf-8"
[/annexes]
tools.staticdir.on = True
tools.staticdir.dir = "images"
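Two common ways to make this work (assumptions of mine, not from the thread): add the web server's user to the docker group so no sudo is needed at all (docker group membership is effectively root-equivalent, so treat it as such), or whitelist the exact command in /etc/sudoers with NOPASSWD and call sudo with -n so it fails fast instead of hanging on a password prompt. A minimal sketch of the second approach:
import subprocess

def externalCmd(cmd_args):
    # Assumes a sudoers entry such as:
    #   www-data ALL=(root) NOPASSWD: /usr/bin/docker images
    # ('www-data' is a placeholder for whatever user runs CherryPy).
    # 'sudo -n' never prompts; it fails immediately if a password
    # would be required, so the web server cannot hang.
    result = subprocess.run(["sudo", "-n"] + cmd_args,
                            capture_output=True, text=True)
    if result.returncode != 0:
        # stderr says why sudo/docker refused (e.g. missing sudoers rule)
        raise RuntimeError(result.stderr)
    return result.stdout

images = externalCmd(["docker", "images"])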

Related

Python: communication between an app and terminal which is asking for Admins password

I’m trying to accomplish the following, but with no luck; any suggestions?
⇒ My app will run a command that requires the admin's password. If you run the command in Terminal, the command will stop and ask for the password; however, if you run it within Python, it just skips that step and ends with an error (as the admin's password was never entered). I have used the elevate module to launch the app as root; however, the command I'm using doesn't allow running as root:
Do not run this script with root privileges. Do not use 'sudo'
Any suggestions on how to communicate between the CLI and Python when the CLI is waiting for the admin's password instead of just skipping it?
Thank you all
My code:
import os
os.system('killall Box')
os.system('/Library/Application\ Support/Box/uninstall_box_drive')
Result:
/usr/local/bin/python3.10 /Users/user/PycharmProjects/pythonProjectBoxUnsaved/venv/test.py
No matching processes belonging to you were found
Unload failed: 5: Input/output error
Try running `launchctl bootout` as root for richer errors.
0
/usr/bin/fileproviderctl
File provider com.box.desktop.boxfileprovider/Box not found. Available providers:
- iCloud Drive (hidden)
com.apple.CloudDocs.MobileDocumentsFileProvider
~/L{5}y/M{14}s
fileproviderctl: can't find domain for com.box.desktop.boxfileprovider/Box: (null)
No matching processes belonging to you were found
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
* * * * * *
Process finished with exit code 0
Error when trying to run the code
Error when using 'elevate' module
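The sudo error in the output above suggests one workaround (a sketch of mine, not from the thread): -S makes sudo read the password from standard input, so Python can supply it without a terminal. Here some_command stands in for whatever actually needs root:
import subprocess

# 'sudo -S' reads the password from stdin instead of a TTY, which is
# exactly what the error message above suggests. 'some_command' is a
# placeholder; the password must be newline-terminated.
proc = subprocess.run(["sudo", "-S", "some_command"],
                      input="my_password\n",
                      capture_output=True, text=True)
print(proc.returncode, proc.stderr)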
I have also tried pexpect; it did not work either, code below:
import os
import pexpect
os.system('killall Box')
child = pexpect.spawn('/Library/Application\ Support/Box/uninstall_box_drive')
print(child.after)
child.expect_exact('Password:')
print(child.after)
child.sendline('my_password')
print(child.after)
result below:
None
b'Password:'
b'Password:'
Solved it.
Had to include Bash path in the spawn arguments:
cmd = "/Library/Application\ Support/Box/uninstall_box_drive"
child = pexpect.spawn("/bin/bash", ["-c", cmd])

How to set time and memory limit?

Hello,
I'm working on an online judge project, and I'm using a Docker container to run user code.
When a user submits code, it runs in a Docker container and the output is returned to the user.
Below is how I handle the user code by running it in a Docker container.
# assumed imports: from json import loads; from subprocess import getoutput;
# from django.http import JsonResponse
data = loads(request.body.decode("utf-8"))
# writing user code and custom input to file
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment the 3 lines below if the image is not available locally
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code onto the running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container : ", getoutput("docker rm --force cpp_compiler"))
return JsonResponse(result, safe=False)  # safe=False lets JsonResponse serialize a plain string
Now I want to set time and memory limits on the user's code: when it takes more than the expected time or memory, it should fail with a TLE or out-of-memory error.
I haven't found the right way to implement this.
I'm new to this field; any help will be appreciated.
Thanks.
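One way to add both limits (a sketch under assumptions of mine, not from the thread) is to let Docker enforce them: --memory caps RAM, --cpus caps CPU, and the coreutils timeout command inside the container enforces a wall-clock limit. The limits below are illustrative:
from subprocess import getoutput

# Cap memory at 64 MB and CPU at one core; kill the program after
# 2 seconds of wall-clock time with coreutils 'timeout'.
containerID = getoutput(
    "docker run --name cpp_compiler --memory=64m --cpus=1 -d -it cpp_test:1"
)
result = getoutput(
    'docker exec cpp_compiler sh -c '
    '"g++ -o Test1 prog1.cpp && timeout 2 ./Test1 < input.txt; echo exit=$?"'
)
# exit=124 -> time limit exceeded (timeout's convention);
# exit=137 -> killed by SIGKILL, typically the out-of-memory case.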

python docker sdk how to run multiple commands in containers.run

I am using Python 3 with the Docker SDK, and I call
containers.run to create a container and run my code.
When I use the command argument with a single command as a string, it works fine.
See the code:
client = docker.from_env()
container = client.containers.run(image=image, command="echo 1")
When I try to use a list of commands (which is fine according to the docs)
client = docker.from_env()
container = client.containers.run(image=image, command=["echo 1", "echo 2"])
I am getting this error
OCI runtime create failed: container_linux.go:345: starting container
process caused "exec: \"echo 1\": executable file not found in $PATH
The same happens when using a single string such as
"echo 1; echo 2"
I am using Ubuntu 19 with Docker
Docker version 18.09.9, build 1752eb3
It used to work just fine with a list of commands; is there anything wrong with the new version of Docker, or am I missing something here?
The list form of command is handed to the container as a single argv array (executable plus arguments), not as several shell commands, which is why Docker tries to find an executable literally named "echo 1". One workaround is to keep a shell container running and exec each command into it:
client = docker.from_env()
# detach=True returns a Container object and keeps it running;
# tty=True keeps /bin/sh alive waiting for input.
container = client.containers.run(image=image, command='/bin/sh',
                                  detach=True, tty=True)
result = container.exec_run('echo 1')
result = container.exec_run('echo 2')
container.stop()
container.remove()
try this:
container = client.containers.run(image="alpine:latest", command=["/bin/sh", "-c", 'echo 1 && echo 2'])

Use a different port for the app in docker

I have a Python app which creates containers for the project and the project database using Docker. By default it uses port 80, and to create multiple instances of the app I can explicitly provide a port number:
# port 80 is already used, so try another port
$ bin/butler.py setup --port=82
However, it can happen that the port provided via --port is already used by another instance of the same app. So it would be better to know which ports the app has already claimed and avoid them.
How do I find out which ports the app has used so far? I would like to do that from Python.
You can always use the subprocess module: run ps -elf | grep bin/butler.py, for example, and parse the output with a regex or simple string manipulation to extract the ports passed via --port.
psutil might be the package you need. You can use net_connections and grab the listening ports from there:
[conn.laddr.port for conn in psutil.net_connections() if conn.status == 'LISTEN']
[8000, 80, 22, 1298]
I wrote a solution where you can get all the ports used by Docker from Python:
def cmd_ports_info(self, args=None):
    # requires: import re, subprocess; 'colored' comes from termcolor
    cmd = "docker ps --format '{{.Ports}}'"
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        cp = cp.stdout.decode("utf-8").strip()
        lines = str(cp).splitlines()
        ports = []
        for line in lines:
            items = line.split(",")
            for item in items:
                # match the host port: a number not followed by '->'
                port = re.findall(r'\d+(?!.*->)', item)
                ports.extend(port)
        # create a unique list of ports utilized
        ports = list(set(ports))
        print(colored(f"List of ports utilized till now {ports}\n"
                      "Please, use another port to start the project",
                      'green', attrs=['reverse', 'blink']))
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
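The same information is available without shelling out to docker ps, via the ports attribute the Docker SDK exposes on each container (a sketch, assuming the docker package is installed and the daemon is reachable):
import docker

client = docker.from_env()
used_ports = set()
for container in client.containers.list():
    # container.ports maps container ports to host bindings, e.g.
    # {'80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '8080'}]}
    for bindings in container.ports.values():
        for binding in bindings or []:   # unpublished ports map to None
            used_ports.add(int(binding['HostPort']))
print(sorted(used_ports))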

Interface with remote computers using Python

I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).
For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and who, but how would I get this info into a script for manipulation? Something like,
import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
Here's a simple, cheap solution to get you started
from subprocess import *
p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
print p.stdout.readlines()
returns (eg)
['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
and for cpuinfo:
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
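The snippets above use Python 2; a Python 3 version of the same idea (my sketch, with servername as a placeholder) could be:
import subprocess

# Python 3 equivalent of the Popen/who example above.
result = subprocess.run(["ssh", "servername", "who"],
                        capture_output=True, text=True, check=True)
print(result.stdout.splitlines())

# and the same pattern for cpuinfo:
cpuinfo = subprocess.run(["ssh", "servername", "cat", "/proc/cpuinfo"],
                         capture_output=True, text=True).stdout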
I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect, which hasn't been updated in ages, but I digress...
The pexpect module can help you interface with ssh. More or less, here is what your example would look like:
child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.expect(r'\$ ')   # wait for a shell prompt before sending a command
child.sendline('who')
child.expect(r'\$ ')   # sendline only returns a byte count, so the
output = child.before  # output is read via expect() and 'before'
If your needs outgrow a simple "ssh remote-host.example.org who", there is an awesome Python library called RPyC. It has a so-called "classic" mode which allows you to execute Python code over the network almost transparently, in several lines of code. A very useful tool for trusted environments.
Here's an example from Wikipedia:
import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")

# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")
If you're interested, there's a good tutorial on their wiki.
But, of course, if you're perfectly fine with ssh calls using Popen, or just don't want to run a separate RPyC daemon, then this is definitely overkill.
This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.
Also, keep in mind that you should run ssh-agent so the ssh calls don't prompt for passwords. But all in all, it works really well. Running deploy-control httpd configtest will check the Apache configuration on all the remote servers.
#!/usr/local/bin/python
import subprocess
import sys

# The user@host for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
    ]

###################################################################################################
# Global Variables
Arg = None

# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])

ResultList = []
###################################################################################################

for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)

    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))

    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))

    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()  # assumed to be defined elsewhere in the script

    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################
if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(1)  # non-zero exit status signals failure

else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(0)
Fabric is a simple way to automate simple tasks like this. The version I'm currently using allows you to wrap up commands like so:
run('whoami', fail='ignore')
You can specify config options (config.fab_user, config.fab_password) for each machine you need, if you want to automate username/password handling.
More info on Fabric here:
http://www.nongnu.org/fab/
There is a new version which is more Pythonic; I'm not sure whether that will be better for you in this case... it works fine for me at present.
