I am using Python 3 with the Docker SDK, calling containers.run to create a container and run my code. When I pass the command argument a single command as a string, it works fine. See the code:
import docker

client = docker.from_env()
container = client.containers.run(image=image, command="echo 1")
When I try to use a list of commands (which should be fine according to the docs):
client = docker.from_env()
container = client.containers.run(image=image, command=["echo 1", "echo 2"])
I get this error:
OCI runtime create failed: container_linux.go:345: starting container
process caused "exec: \"echo 1\": executable file not found in $PATH
The same happens when I use a single string such as:
"echo 1; echo 2"
I am using Ubuntu 19 with Docker version 18.09.9, build 1752eb3.
It used to work just fine with a list of commands. Is there anything wrong with the new version of Docker, or am I missing something here?
You can keep the container alive and run the commands one at a time:
import docker

client = docker.from_env()
# detach + tty keep the shell (and thus the container) running,
# so exec_run can be called on it afterwards
container = client.containers.run(image=image, command='/bin/sh',
                                  detach=True, tty=True)
result = container.exec_run('echo 1')
result = container.exec_run('echo 2')
container.stop()
container.remove()
Try this. The list form of command is a single argv vector (each element is one argument of one command), not a list of separate commands, so "echo 1" is looked up as an executable literally named "echo 1", which explains the error. To run several commands, invoke a shell explicitly:
container = client.containers.run(image="alpine:latest", command=["/bin/sh", "-c", "echo 1 && echo 2"])
Hello,
I'm working on an online judge project and I'm using a Docker container to run user code.
When a user submits code, it runs in a Docker container and the output is returned to the user.
Below is how I handle the user code by running it in a Docker container.
from json import loads
from subprocess import getoutput
from django.http import JsonResponse

data = loads(request.body.decode("utf-8"))
# write the user code and custom input to files
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment the 3 lines below if the image is not installed locally
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code on running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container : ",getoutput("docker rm --force cpp_compiler"))
return JsonResponse(result, safe=False)  # result is a string, so safe=False is required
Now, I want to set time and memory limits on the user's code: when the code takes more than the expected time or memory, it should throw a TLE or out-of-memory error.
I can't figure out the right way to implement this.
I'm new to this field; any help will be appreciated.
Thanks.
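One possible direction, sketched with limits and exit-code handling that are my assumptions, not part of the question: pass --memory and --cpus to docker run to cap resources, and wrap the user program in coreutils timeout for the time limit. Checking the exit status needs subprocess.run rather than getoutput, since getoutput discards it:

import subprocess

# cap the container at 256 MB RAM and one CPU (illustrative limits)
subprocess.run("docker run --name cpp_compiler --memory=256m --cpus=1 -d -it cpp_test:1",
               shell=True, check=True)

# enforce a 5-second time limit with coreutils `timeout`
proc = subprocess.run(
    'docker exec cpp_compiler sh -c "g++ -o Test1 prog1.cpp && timeout 5 ./Test1 < input.txt"',
    shell=True, capture_output=True, text=True)
if proc.returncode == 124:    # timeout's exit status when the limit is hit
    result = "Time Limit Exceeded"
elif proc.returncode == 137:  # 128 + SIGKILL, e.g. killed by the OOM killer
    result = "Out Of Memory / Killed"
else:
    result = proc.stdout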
I have a Python app that uses Docker to create containers for the project and its database. By default it uses port 80, and to create multiple instances of the app I can explicitly provide a port number:
# port 80 is already used, so, try another port
$ bin/butler.py setup --port=82
However, it can happen that the port provided via --port is already used by another instance of the same app. So it would be better to know which ports are already in use by the app and avoid them.
How do I find out which ports the app has used so far? I would like to do that from within Python.
You can always use the subprocess module: run ps -elf | grep bin/butler.py, for example, and parse the output with a regex or simple string manipulation to extract the ports in use.
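A minimal sketch of that idea (the --port= pattern is an assumption based on the question's command line):

import re
import subprocess

# list running butler.py processes; grep exits non-zero when nothing
# matches, so check=True is deliberately not used here
proc = subprocess.run("ps -elf | grep 'bin/butler.py' | grep -v grep",
                      shell=True, capture_output=True, text=True)
# collect the ports passed via --port=
ports = re.findall(r'--port=(\d+)', proc.stdout)
print(ports)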
psutil might be the package you need. You can use net_connections and grab the listening ports from there:
import psutil

[conn.laddr.port for conn in psutil.net_connections() if conn.status == 'LISTEN']
[8000,80,22,1298]
I wrote a solution that gets all the ports used by Docker from Python:
import re
import subprocess

from termcolor import colored

def cmd_ports_info(self, args=None):
    cmd = "docker ps --format '{{.Ports}}'"
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        cp = cp.stdout.decode("utf-8").strip()
        lines = str(cp).splitlines()
        ports = []
        for line in lines:
            items = line.split(",")
            for item in items:
                # pull the port numbers out of each mapping
                # (digits not followed by '->')
                port = re.findall(r'\d+(?!.*->)', item)
                ports.extend(port)
        # create a unique list of ports utilized
        ports = list(set(ports))
        print(colored(f"List of ports utilized till now {ports}\n"
                      "Please, use another port to start the project",
                      'green', attrs=['reverse', 'blink']))
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
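A shorter variant, sketched with the Docker SDK instead of parsing docker ps output (assumes the docker package, docker-py, is installed):

import docker

client = docker.from_env()
host_ports = set()
for container in client.containers.list():
    # container.ports maps e.g. '3306/tcp' to [{'HostIp': ..., 'HostPort': ...}],
    # or to None when a port is exposed but not published
    for mappings in container.ports.values():
        for mapping in mappings or []:
            host_ports.add(int(mapping['HostPort']))
print(sorted(host_ports))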
I'm trying to create a small application with Python and CherryPy. I need to interface with Docker: list images, instantiate images, etc. The background is probably not important; I just need to run some external commands (using subprocess) and process the outcome on the server side. Problem: you need to be root to run these commands. How do you do that from a web server?
My code below works fine when I run the 'ls' command, but fails with the 'sudo docker images' command:
subprocess.CalledProcessError: Command 'sudo docker images' returned non-zero exit status 1.
That command works fine when I run it in a terminal and give the root password, so I need a way to elevate privileges in the server. Sorry if I state this incorrectly, feel free to educate me; I'm an old Linux user but not an IT person. I researched how to do this and got nowhere.
Thanks for your help.
Kind regards,
Nicolas
import subprocess
import cherrypy

def externalCmd(cmd):
    return subprocess.check_output(cmd, shell=True).decode('utf-8')

class Webpages(object):
    def index(self):
        #self.images = externalCmd("sudo docker images")
        self.images = externalCmd("ls")
        return ''' Images ''' + self.images
    index.exposed = True

# run web server
cherrypy.engine.exit()
cherrypy.quickstart(Webpages(), config="webserver.conf")
The webserver.conf file contains the following:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 8080
server.thread_pool = 5
tools.sessions.on = True
tools.encode.encoding = "Utf-8"
[/annexes]
tools.staticdir.on = True
tools.staticdir.dir = "images"
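A common way around this, offered as an assumption rather than a confirmed fix for this setup: sudo fails with exit status 1 here because it cannot ask for a password from a non-interactive server process. If the user running CherryPy is added to the docker group (sudo usermod -aG docker <user>, then log in again), the Docker socket is usable without sudo, and the Docker SDK can replace the shell call:

import docker

def list_images():
    # talks to /var/run/docker.sock; works without sudo only if the
    # server's user is in the docker group (an assumption, see above)
    client = docker.from_env()
    return [tag for image in client.images.list() for tag in image.tags]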
I'm trying to execute 4 commands in a container (it runs a MySQL database). If I run them from another terminal they work, but if I create the container and immediately execute the commands, they don't work. I have this code, which creates the container but doesn't execute commands 1, 2, 3 and 4:
import docker
from docker.types import Mount
from threading import Thread
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
container = client.containers.run(
    "python_base_image:v02",
    detach=True,
    name='201802750001M04',
    ports={'3306/tcp': None, '80/tcp': None},
    mounts=[Mount("/var/lib/mysql", "201802750001M04_backup_db", type='volume')]
)
command1 = "sed -i '/bind/s/^/#/g' /etc/mysql/my.cnf"
command2 = ("mysql --user=root --password=temprootpass "
            "--execute=\"GRANT ALL PRIVILEGES ON *.* TO 'macripco'@'172.17.0.1' "
            "IDENTIFIED BY '12345';\"")
command3 = ("mysql --user=root --password=temprootpass "
            "--execute=\"GRANT ALL PRIVILEGES ON *.* TO 'macripco'@'localhost' "
            "IDENTIFIED BY '12345';\"")
command4 = "sudo /etc/init.d/mysql restart"
a = container.exec_run(command1, detach=False, stream=True, stderr=True,
                       stdout=True)
b = container.exec_run(command2, detach=False, stream=True, stderr=True,
                       stdout=True)
c = container.exec_run(command3, detach=False, stream=True, stderr=True,
                       stdout=True)
d = container.exec_run(command4, detach=False, stream=True, stderr=True,
                       stdout=True)
But if I execute the commands later (from another terminal), once the container has been created, they work. I need to create the container and execute the commands together.
Thanks.
It turned out to be a timing problem. It was resolved by adding time.sleep(10) between the two steps, after creating the container and before calling exec_run, so MySQL has time to start.
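A slightly more robust version of the same fix, as a sketch: poll until MySQL actually answers instead of sleeping a fixed 10 seconds (this assumes mysqladmin is present in the image):

import time

# wait up to 30 seconds for mysqld to accept connections
for _ in range(30):
    exit_code, _ = container.exec_run("mysqladmin ping --silent")
    if exit_code == 0:  # mysqld is up
        break
    time.sleep(1)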
I need to get the container's name from within the running container, in Python.
I could easily get the container ID from inside the container in Python with:
import subprocess

bashCommand = """head -1 /proc/self/cgroup | cut -d/ -f3"""
output = subprocess.check_output(['bash', '-c', bashCommand])
print(output)
Now I need the container name.
Just set the name at runtime, and also pass it into the container as an environment variable, since the name itself is not exposed inside the container automatically:
docker run --name MYCOOLCONTAINER -e NAME=MYCOOLCONTAINER alpine:latest
Then:
bashCommandName = "echo $NAME"
output = subprocess.check_output(['bash', '-c', bashCommandName])
print(output)
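An alternative sketch, under the assumption that the Docker socket is mounted into the container (docker run -v /var/run/docker.sock:/var/run/docker.sock ...): resolve the name from the container ID you already have:

import subprocess
import docker

# same ID lookup as in the question
container_id = subprocess.check_output(
    ['bash', '-c', 'head -1 /proc/self/cgroup | cut -d/ -f3']
).decode().strip()

# requires the mounted Docker socket (an assumption, see above)
client = docker.from_env()
print(client.containers.get(container_id).name)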