Hello,
I'm working on an online judge project, and I'm using a Docker container to run user code. So, when a user submits code, it runs in a Docker container and the output is returned to the user. Below is the code showing how I handle the user's code by running it in a Docker container.
from json import loads  # imports this snippet relies on, assuming a Django view
from subprocess import getoutput
from django.http import JsonResponse

data = loads(request.body.decode("utf-8"))
# writing user code and custom input to files
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment below 3 lines if below image is not installed in local
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code on running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container : ",getoutput("docker rm --force cpp_compiler"))
return JsonResponse({"output": result})  # JsonResponse needs a dict unless safe=False is passed
Now, I want to set time and memory limits on the user's code, so that when the code takes more than the expected time or memory, it throws a TLE or out-of-memory error.
I can't figure out the correct way to implement this.
I'm new to this field; any help will be appreciated.
Thanks.
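One possible approach (a sketch only; the limit values are placeholders): cap memory with docker run --memory, cap wall-clock time with the coreutils timeout wrapper inside the container, and map the exit status to a verdict.

from subprocess import getoutput, getstatusoutput

# --memory caps RAM (the kernel OOM killer terminates the process beyond it),
# --cpus limits CPU share; both values here are examples
containerID = getoutput("docker run --name cpp_compiler --memory=256m --cpus=1 -d -it cpp_test:1")
# `timeout 2` kills the program after 2 seconds of wall-clock time
status, result = getstatusoutput('docker exec cpp_compiler sh -c "g++ -o Test1 prog1.cpp && timeout 2 ./Test1 < input.txt"')
if status == 124:        # coreutils timeout's "timed out" convention
    result = "Time Limit Exceeded"
elif status == 137:      # 128 + SIGKILL, typical of an OOM kill
    result = "Memory Limit Exceeded"

docker exec propagates the inner command's exit code, and on Python 3.8+ getstatusoutput returns that exit code directly, so no output parsing is needed.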
Related
I've been using the following code for a while now; however, recently (in the last month) it's been failing:
# Make sure WireGuard starts at boot
systemd_cmd = 'ssh %s %s@%s "systemctl enable wg-quick@wg0"' % (SSH_OPTS, SSH_USER, get_gw_ret['pub_ipv4_addr'])
p = subprocess.Popen(start_wg_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
systemd_out = p.communicate()[1]
systemd_out_message = str(systemd_out)[2:-1]
A successful run of systemctl enable wg-quick@wg0 is the goal. The expected output is something like:
Created symlink /etc/systemd/system/multi-user.target.wants/wg-quick@wg0.service → /lib/systemd/system/wg-quick@.service.
However these days it returns:
wg-quick: `wg0' already exists\n
I've narrowed it down to the above Python code, since executing this command directly in a bash shell always succeeds. The complete command looks like:
ssh -o StrictHostKeyChecking=no root@10.0.0.8 "systemctl enable wg-quick@wg0"
Any ideas what might be happening?
Someone pointed out my typo:
- p = subprocess.Popen(start_wg_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
+ p = subprocess.Popen(systemd_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
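With the typo fixed, the block would look like this (a sketch; decoding stderr directly is also tidier than slicing the bytes repr with str(...)[2:-1]):

p = subprocess.Popen(systemd_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
systemd_out = p.communicate()[1]  # stderr
systemd_out_message = systemd_out.decode("utf-8").strip()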
Thanks for reading!
I have a Python script which counts the words in a given file and saves the output to a result.txt file after execution. I want my Docker container to do this as the container starts and display the output on the console. Below are my Dockerfile and Python file.
FROM python:3
RUN mkdir /home/data
RUN mkdir /home/output
RUN touch /home/output/result.txt
WORKDIR /home/code
COPY word_counter.py ./
CMD ["python", "word_counter.py"]
ENTRYPOINT cat ../output/result.txt
import glob
import os
from collections import OrderedDict
import socket
from pathlib import Path

dir_path = os.path.dirname(os.path.realpath(__file__))
# print(type(dir_path))
parent_path = Path(dir_path).parent
data_path = str(parent_path) + "/data"
# print(data_path)
os.chdir(data_path)
myFiles = glob.glob('*.txt')

output = open("../output/result.txt", "w")
output.write("files in home/data are : ")
output.write('\n')
for x in myFiles:
    output.write(x)
    output.write('\n')
output.close()

total_words = 0
for x in myFiles:
    file = open(x, "r")
    data = file.read()
    words = data.split()
    total_words = total_words + len(words)
    file.close()

output = open("../output/result.txt", "a")
output.write("Total number of words in both the files : " + str(total_words))
output.write('\n')
output.close()

frequency = {}
for x in myFiles:
    if x == "IF.txt":
        curr_file = x
        document_text = open(curr_file, 'r')
        text_string = document_text.read()
        words = text_string.split()
        for word in words:
            count = frequency.get(word, 0)
            frequency[word] = count + 1

frequency_list_desc_order = sorted(frequency, key=frequency.get, reverse=True)

output = open("../output/result.txt", "a")
output.write("Top 3 words in IF.txt are :")
output.write('\n')
ip_addr = socket.gethostbyname(socket.gethostname())
for word in frequency_list_desc_order[:3]:
    line = word + " : " + str(frequency[word])
    output.write(line)
    output.write('\n')
output.write("ip address of the machine : " + ip_addr + "\n")
output.close()
I am mapping a local directory, which has two text files IF.txt and Limerick1.txt, from the host machine to the directory /home/data inside the container, and the Python code inside the container reads the files and saves the output to result.txt in /home/output inside the container.
I want my container to print the output in "result.txt" to the console when I start the container using the docker run command.
Issue: Docker does not execute the following statement when starting a container using docker run:
CMD ["python", "word_counter.py"]
command to run the container:
docker run -it -v /Users/xyz/Desktop/project/docker:/home/data proj2docker bash
But when I run the same command, python word_counter.py, from within the container, it executes perfectly fine.
Can someone help me with this?
You have an ENTRYPOINT in your Dockerfile. This entrypoint will run and take the CMD as additional argument(s).
The final command that you run when starting the container looks like this:
cat ../output/result.txt python word_counter.py
This is likely not what you want. I suggest removing that entrypoint, or fixing it according to your needs.
If you want to print that file and still execute that command, you can do something like the following.
CMD ["python", "word_counter.py"]
ENTRYPOINT ["/bin/sh", "-c", "cat ../output/result.txt; exec \"$0\" \"$@\""]
It will run some command(s) as the entrypoint, in this case printing the contents of that file, and after that execute the CMD. With sh -c, the words of the CMD arrive as the positional parameters of the script: the first as $0 and the rest as $@, which is standard POSIX shell behaviour, the same way a shell script accesses the arguments passed to it. The benefit of using exec here is that python runs with process ID 1, which is useful when you want to send signals into the container to the python process, for example kill.
Lastly, when you start the container with the command you show
docker run -it -v /Users/xyz/Desktop/project/docker:/home/data proj2docker bash
You are overriding the CMD in the Dockerfile, so in that case it is expected that it doesn't run python, even if your entrypoint didn't have the issue mentioned above.
If you want to always run the python program, you need to make it part of the entrypoint. The problem then is that the entrypoint runs to completion first, and only afterwards your command, in this case bash.
You could run it in the background, if that's what you want. Note that there is no default CMD here, but the exec \"$0\" \"$@\" still lets you pass an arbitrary command such as bash while python is running in the background.
ENTRYPOINT ["/bin/sh", "-c", "cat ../output/result.txt; python word_counter.py & exec \"$0\" \"$@\""]
If you do a lot of work in the entrypoint, it is probably cleaner to move it to a dedicated script and run that script as the entrypoint; you can still call exec "$@" at the end of your shell script (in a standalone script the whole CMD arrives as "$@").
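For example, a minimal entrypoint.sh along those lines (a sketch; the paths assume the Dockerfile above):

#!/bin/sh
# print the previous results, start the counter in the background,
# then hand control to whatever command was passed as CMD
cat /home/output/result.txt
python /home/code/word_counter.py &
exec "$@"

with the Dockerfile changed to use it:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]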
According to your comment, you want to run python first and then cat the file. You could drop the entrypoint and do it just with the command.
CMD ["/bin/sh", "-c", "python word_counter.py && cat ../output/result.txt"]
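With that CMD in place, start the container without appending bash, since appending it would override the CMD again, as noted above:

docker run -it -v /Users/xyz/Desktop/project/docker:/home/data proj2docker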
I am using Python 3 with the Docker SDK and using
containers.run in order to create a container and run my code.
When I use the command argument with a single command as a string, it works fine;
see the code:
client = docker.from_env()
container = client.containers.run(image=image, command="echo 1")
When I try to use a list of commands (which is fine according to the docs):
client = docker.from_env()
container = client.containers.run(image=image, command=["echo 1", "echo 2"])
I am getting this error:
OCI runtime create failed: container_linux.go:345: starting container
process caused "exec: \"echo 1\": executable file not found in $PATH
The same happens when using a single string, such as:
"echo 1; echo 2"
I am using Ubuntu 19 with Docker:
Docker version 18.09.9, build 1752eb3
It used to work just fine with a list of commands. Is there anything wrong with the new version of Docker, or am I missing something here?
You can use this. Note that containers.run only returns a Container object when detach=True, and /bin/sh needs a TTY to stay alive for the exec_run calls:
client = docker.from_env()
container = client.containers.run(image=image, command='/bin/sh', tty=True, detach=True)
result = container.exec_run('echo 1')
result = container.exec_run('echo 2')
container.stop()
container.remove()
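exec_run returns an ExecResult tuple, so each result can be inspected like this (a sketch):

result = container.exec_run('echo 1')
print(result.exit_code)        # 0 on success
print(result.output.decode())  # '1\n'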
Try this. The list you pass to command is a single argv vector, not a list of separate commands, so wrap the shell line in /bin/sh -c:
container = client.containers.run(image="alpine:latest", command=["/bin/sh", "-c", 'echo 1 && echo 2'])
I have a script that builds images, pushes them to Docker Hub, and checks for errors.
But my main goal is to run a specific command like node server.js and then start running the script's commands,
and I want it all together in the same script file.
For now, what I am doing is opening two terminals: from the first terminal I run node server.js to start the app,
and from the second terminal I run the script.
What I want is to have the node server.js command inside the script run in the background and let the script continue at the same time.
For now this is my script, and when it reaches the command os.system(start_node), it stops running the other commands.
So my question is: how can I run this command and let the script continue in a single terminal, instead of running node server.js in one terminal and, in a second, the script without os.system(start_node)?
#!/usr/bin/env python3
# Before running this script, the app 'node server.js' needs to be started
import os
import sys
from subprocess import getoutput

os.chdir("/opt/new-test-app")

start_node = 'node server.js'
npm_test = 'npm test'
npm_output = ' 8 passing'
image = 'docker build -t test/new-test-app-new:latest .'
test = 'curl -o /dev/null -s -w "%{http_code}\n" http://localhost:8081'
docker_login = 'cat /cred/cred.txt | docker login --username test --password-stdin'
docker_push = 'docker push alexkocloud/new-test-app-new:latest'

os.system(start_node)
# capture the npm output so the expected ' 8 passing' line can be checked
if npm_output not in getoutput(npm_test):
    print("npm test not successfully passed")
    sys.exit()
else:
    print('npm test successfully passed with "8 passing"')
if os.system(test) == 0:
    print('HTTP Status Code 200 OK')
else:
    print('ERROR CODE')
    sys.exit()
os.system(image)
os.system(docker_login)
os.system(docker_push)
sys.exit(0)
OK, what I did is just add this line
nohup node server.js > output.log &
Inside my variable:
start_node = 'nohup node server.js > output.log &'
and it's working as I wanted.
If anyone has a better solution, I would love to see it.
Thanks.
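One such alternative (a sketch, untested here): subprocess.Popen avoids nohup and gives the script a handle to stop the server when it's done.

import subprocess

# start the server in the background; the script keeps running
log = open("output.log", "w")
server = subprocess.Popen(["node", "server.js"], stdout=log, stderr=subprocess.STDOUT)
# ... run the build/test steps ...
server.terminate()  # stop the server explicitly when finished
log.close()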
I'm trying to create a small application with Python & CherryPy. I need to interface with Docker: list images, instantiate images, etc. The background is probably not important; I just need to run some external commands (using subprocess) and process the outcome on the server side. Problem: you need to be root to run these commands. How do you do that from a web server?
My code below works fine when I run the ls command, but fails with the sudo docker images command:
subprocess.CalledProcessError: Command 'sudo docker images' returned non-zero exit status 1.
That command works fine when I run it in a terminal and give the root password, so I need a way to elevate privileges in the server. Sorry if I state this incorrectly; feel free to educate me. I'm an old Linux user but not an IT person. I researched a bit how to do this and got nowhere...
Thanks for your help
Kind regards,
Nicolas
import subprocess
import cherrypy

def externalCmd(cmd):
    return subprocess.check_output(cmd, shell=True).decode('utf-8')

class Webpages(object):
    def index(self):
        #self.images = externalCmd("sudo docker images")
        self.images = externalCmd("ls")
        return ''' Images ''' + self.images
    index.exposed = True

# run web server
cherrypy.engine.exit()
cherrypy.quickstart(Webpages(), config="webserver.conf")
The webserver.conf file contains the following:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 8080
server.thread_pool = 5
tools.sessions.on = True
tools.encode.encoding = "Utf-8"
[/annexes]
tools.staticdir.on = True
tools.staticdir.dir = "images"
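For reference, the usual way around the password prompt is to let the account running the web server use Docker without sudo, typically by adding it to the docker group (a sketch; note that membership in the docker group is effectively root-equivalent, so it has security implications):

# run once as root, then log the service account out and back in
usermod -aG docker <username>

After that, the handler can drop sudo entirely, e.g. externalCmd("docker images").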