Getting the Docker container name from inside a Docker container - python

I need to get the container's name from within the running container, in Python.
I can easily get the container ID from inside the container with:
import subprocess
bashCommand = """head -1 /proc/self/cgroup | cut -d/ -f3"""
output = subprocess.check_output(['bash', '-c', bashCommand])
print(output)
Now I need the container name.

The container name is not exposed inside the container by default, so set the name at runtime and also pass it in as an environment variable:
docker run --name MYCOOLCONTAINER -e NAME=MYCOOLCONTAINER alpine:latest
Then:
bashCommandName = """echo $NAME"""
output = subprocess.check_output(['bash', '-c', bashCommandName])
print(output)
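As an aside, the container-ID extraction from the question can be done in pure Python without shelling out to head and cut (a sketch; the sample cgroup line below is illustrative, not a real ID):

```python
def container_id_from_cgroup(cgroup_text):
    """Equivalent of: head -1 /proc/self/cgroup | cut -d/ -f3"""
    first_line = cgroup_text.splitlines()[0]
    parts = first_line.split("/")
    # cut's field 3 is index 2 here; empty if the line has no container path
    return parts[2] if len(parts) > 2 else ""

# Inside a container you would read the real file:
# with open("/proc/self/cgroup") as f:
#     print(container_id_from_cgroup(f.read()))
sample = "12:cpuset:/docker/0123456789abcdef"
print(container_id_from_cgroup(sample))  # -> 0123456789abcdef
```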

Related

Docker CMD command does not execute when starting the container but works when run from within the container

I have a Python script which counts the words in given files and saves the output to a "result.txt" file after execution. I want my Docker container to do this as the container starts and display the output to the console. Below are my Dockerfile and Python file:
FROM python:3
RUN mkdir /home/data
RUN mkdir /home/output
RUN touch /home/output/result.txt
WORKDIR /home/code
COPY word_counter.py ./
CMD ["python", "word_counter.py"]
ENTRYPOINT cat ../output/result.txt
import glob
import os
from collections import OrderedDict
import socket
from pathlib import Path

dir_path = os.path.dirname(os.path.realpath(__file__))
# print(type(dir_path))
parent_path = Path(dir_path).parent
data_path = str(parent_path) + "/data"
# print(data_path)
os.chdir(data_path)
myFiles = glob.glob('*.txt')

output = open("../output/result.txt", "w")
output.write("files in home/data are : ")
output.write('\n')
for x in myFiles:
    output.write(x)
    output.write('\n')
output.close()

total_words = 0
for x in myFiles:
    file = open(x, "r")
    data = file.read()
    words = data.split()
    total_words = total_words + len(words)
    file.close()

output = open("../output/result.txt", "a")
output.write("Total number of words in both the files : " + str(total_words))
output.write('\n')
output.close()

frequency = {}
for x in myFiles:
    if x == "IF.txt":
        curr_file = x
        document_text = open(curr_file, 'r')
        text_string = document_text.read()
        words = text_string.split()
        for word in words:
            count = frequency.get(word, 0)
            frequency[word] = count + 1

frequency_list_desc_order = sorted(frequency, key=frequency.get, reverse=True)
output = open("../output/result.txt", "a")
output.write("Top 3 words in IF.txt are :")
output.write('\n')
ip_addr = socket.gethostbyname(socket.gethostname())
for word in frequency_list_desc_order[:3]:
    line = word + " : " + str(frequency[word])
    output.write(line)
    output.write('\n')
output.write("ip address of the machine : " + ip_addr + "\n")
output.close()
I am mapping a local directory, which has two text files IF.txt and Limerick1.txt, from the host machine to the directory "/home/data" inside the container. The Python code inside the container reads the files and saves the output to result.txt in "/home/output" inside the container.
I want my container to print the contents of "result.txt" to the console when I start the container using the docker run command.
Issue: Docker does not execute the following statement when starting a container using docker run:
CMD ["python", "word_counter.py"]
Command to run the container:
docker run -it -v /Users/xyz/Desktop/project/docker:/home/data proj2docker bash
But when I run the same command "python word_counter.py" from within the container, it executes perfectly fine.
Can someone help me with this?
You have an entrypoint in your Dockerfile. This entrypoint will run and take the CMD as additional argument(s).
The final command that you run when starting the container therefore looks like this:
cat ../output/result.txt python word_counter.py
This is likely not what you want. I suggest removing that entrypoint, or fixing it according to your needs.
If you want to print that file and still execute that command, you can do something like the below:
CMD ["python", "word_counter.py"]
ENTRYPOINT ["/bin/sh", "-c", "cat ../output/result.txt; exec \"$0\" \"$@\""]
It will run some command(s) as entrypoint, in this case printing the output of that file, and after that execute the CMD, whose words sh -c receives as $0 and the following positional arguments. This is standard POSIX shell behaviour, the same way any shell script accesses the arguments passed to it. The benefit of using exec here is that it replaces the shell, so python runs with process ID 1, which is useful when you want to send signals into the container to the python process, for example kill.
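The PID-preserving effect of exec can be demonstrated from Python itself (a sketch: it spawns a shell that execs a Python one-liner and checks that the child keeps the shell's PID, just as CMD keeps PID 1 in the entrypoint above):

```python
import subprocess
import sys

# The shell gets a PID from the OS; exec then replaces the shell image with
# python *without* changing that PID.
p = subprocess.Popen(
    ["/bin/sh", "-c",
     f"exec {sys.executable} -c 'import os; print(os.getpid())'"],
    stdout=subprocess.PIPE, text=True)
out, _ = p.communicate()
print(int(out.strip()) == p.pid)  # True: python inherited the shell's PID
```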
Lastly, when you start the container with the command you show
docker run -it -v /Users/xyz/Desktop/project/docker:/home/data proj2docker bash
You are overriding the CMD in the Dockerfile. So in that case, it is expected that it doesn't run python, even if your entrypoint didn't have the issue mentioned above.
If you want to always run the python program, then you need to make it part of the entrypoint. The problem then is that the entrypoint runs to completion first, and only afterwards your command, in this case bash.
You could run the python program in the background, if that's what you want. Note that there is no default CMD in the line below, but the exec \"$0\" \"$@\" still lets you pass an arbitrary command such as bash, which then runs while python runs in the background:
ENTRYPOINT ["/bin/sh", "-c", "cat ../output/result.txt; python word_counter.py & exec \"$0\" \"$@\""]
If you do a lot of work in the entrypoint, it is cleaner to move it to a dedicated script and run that script as the entrypoint; you can still call exec "$@" at the end of your shell script.
According to your comment, you want to run python first and then cat the file. You could drop the entrypoint and do it just with the command:
CMD ["/bin/sh", "-c", "python word_counter.py && cat ../output/result.txt"]

How to set time and memory limits?

Hello,
I'm working on an online judge project and I'm using a Docker container for running user code.
So, when a user submits code, that code runs in a Docker container and the output is then returned to the user.
Below is the code showing how I handle the user code by running it in a Docker container:
data = loads(request.body.decode("utf-8"))
# writing user code and custom input to file
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment below 3 lines if below image is not installed in local
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code on running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container : ",getoutput("docker rm --force cpp_compiler"))
return JsonResponse(result)
Now, I want to set time and memory limits on the user's code: when the code takes more than the expected time or memory, it should report a TLE (time limit exceeded) or out-of-memory error.
I can't work out the correct way to implement this.
I'm new in this field; any help will be appreciated.
Thanks.
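One common approach is to let Docker itself enforce the memory cap (--memory is a real docker run flag) and enforce the time limit from Python via subprocess's timeout. Below is a minimal sketch, not a full judge: the image name cpp_test:1 is taken from the question, and treating exit code 137 (128 + SIGKILL) as an OOM kill is a heuristic:

```python
import subprocess

def build_judge_cmd(user_cmd, memory_limit="256m"):
    """Build the docker invocation; the container is killed past the memory cap."""
    return ["docker", "run", "--rm",
            "--memory", memory_limit,   # hard memory limit for the container
            "--network", "none",        # isolate untrusted code from the network
            "cpp_test:1",
            "sh", "-c", user_cmd]

def judge(user_cmd, time_limit_s=2.0):
    """Return output, or "TLE" on timeout, or "MLE" on an apparent OOM kill."""
    try:
        done = subprocess.run(build_judge_cmd(user_cmd), capture_output=True,
                              text=True, timeout=time_limit_s)
    except subprocess.TimeoutExpired:
        return "TLE"
    if done.returncode == 137:          # 128 + SIGKILL, typical of the OOM killer
        return "MLE"
    return done.stdout
```

Something like judge("g++ -o Test1 prog1.cpp && ./Test1 < input.txt") would then replace the docker exec line above. Note that timeout measures wall-clock time; for CPU-time limits you would need ulimit or cgroup settings inside the container.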

python docker sdk how to run multiple commands in containers.run

I am using Python 3 with the Docker SDK, and using
containers.run in order to create a container and run my code.
When I use the command argument with one command as a string, it works fine;
see code:
client = docker.from_env()
container = client.containers.run(image=image, command="echo 1")
When I try to use a list of commands (which is fine according to the docs):
client = docker.from_env()
container = client.containers.run(image=image, command=["echo 1", "echo 2"])
I am getting this error
OCI runtime create failed: container_linux.go:345: starting container
process caused "exec: \"echo 1\": executable file not found in $PATH
The same happens when using one string, such as
"echo 1; echo 2"
I am using Ubuntu 19 with Docker
Docker version 18.09.9, build 1752eb3
It used to work just fine with a list of commands. Is there anything wrong with the new version of Docker, or am I missing something here?
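The error comes from exec semantics: when command is a list, Docker passes each element as one argv entry, so "echo 1" (space included) is looked up as a single executable name, which doesn't exist. The same behaviour can be reproduced with plain subprocess (a sketch, no Docker needed):

```python
import subprocess

# List elements are argv entries, not separate shell commands:
try:
    subprocess.run(["echo 1", "echo 2"])      # looks for a binary named "echo 1"
except FileNotFoundError as err:
    print("fails like Docker does:", err)

# Wrapping in a shell turns the string into a script, so chaining works:
done = subprocess.run(["/bin/sh", "-c", "echo 1 && echo 2"],
                      capture_output=True, text=True)
print(done.stdout)  # 1 and 2, each on its own line
```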
You can use this (note detach=True and tty=True, so the shell stays alive and run returns a Container object you can exec into):
client = docker.from_env()
container = client.containers.run(image=image, command='/bin/sh', detach=True, tty=True)
result = container.exec_run('echo 1')
result = container.exec_run('echo 2')
container.stop()
container.remove()
try this:
container = client.containers.run(image="alpine:latest", command=["/bin/sh", "-c", 'echo 1 && echo 2'])

Use a different port for the app in docker

I have a Python app which creates containers for the project and the project database using Docker. By default, it uses port 80, and if we would like to create multiple instances of the app, I can explicitly provide the port number:
# port 80 is already used, so, try another port
$ bin/butler.py setup --port=82
However, it also happens that the port provided (using --port) is already used by another instance of the same app. So, it would be better to know which ports are already being used for the app and avoid them.
How do I know which ports the app has used till now? I would like to do this from inside Python.
You can always use the subprocess module: run ps -elf | grep bin/butler.py, for example, and parse the output with a regex or simple string manipulation, then extract the used ports.
psutil might be the package you need. You can use net_connections and grab the listening ports from there:
>>> [conn.laddr.port for conn in psutil.net_connections() if conn.status == 'LISTEN']
[8000, 80, 22, 1298]
I wrote a solution where you can get all the ports used by Docker from Python code (it assumes import subprocess, import re and from termcolor import colored at the top of the file):

def cmd_ports_info(self, args=None):
    cmd = "docker ps --format '{{.Ports}}'"
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        cp = cp.stdout.decode("utf-8").strip()
        lines = str(cp).splitlines()
        ports = []
        for line in lines:
            items = line.split(",")
            for item in items:
                port = re.findall(r'\d+(?!.*->)', item)
                ports.extend(port)
        # create a unique list of ports utilized
        ports = list(set(ports))
        print(colored(f"List of ports utilized till now {ports}\n"
                      "Please, use another port to start the project",
                      'green', attrs=['reverse', 'blink']))
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
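An alternative to listing used ports and picking one manually: ask the OS for a free port by binding to port 0 (a sketch using only the standard library; the butler.py invocation in the comment is the question's own script):

```python
import socket

def find_free_port():
    """Bind to port 0 so the OS assigns an unused ephemeral port, and return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

port = find_free_port()
print(port)
# then e.g.: bin/butler.py setup --port=<port>
```

There is a small race window between closing the probe socket and the app binding the port, but for coordinating a handful of local instances this is usually acceptable.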

I try to execute commands in a container with exec_run but it doesn't work

I am trying to execute 4 commands in a container (it has a MySQL database). If I run them in another terminal they work, but if I create the container and then execute the commands right away, it does not work. I have this code:
This code creates the container but doesn't execute commands 1, 2, 3 and 4:
import docker
from docker.types import Mount
from threading import Thread

client = docker.DockerClient(base_url='unix://var/run/docker.sock')
container = client.containers.run(
    "python_base_image:v02",
    detach=True,
    name='201802750001M04',
    ports={'3306/tcp': None, '80/tcp': None},
    mounts=[Mount("/var/lib/mysql", "201802750001M04_backup_db", type='volume')]
)

command1 = "sed -i '/bind/s/^/#/g' /etc/mysql/my.cnf"
command2 = ("mysql --user=root --password=temprootpass "
            "--execute=\"GRANT ALL PRIVILEGES ON *.* TO 'macripco'@'172.17.0.1' "
            "IDENTIFIED BY '12345';\"")
command3 = ("mysql --user=root --password=temprootpass "
            "--execute=\"GRANT ALL PRIVILEGES ON *.* TO 'macripco'@'localhost' "
            "IDENTIFIED BY '12345';\"")
command4 = "sudo /etc/init.d/mysql restart"

a = container.exec_run(command1, detach=False, stream=True, stderr=True,
                       stdout=True)
b = container.exec_run(command2, detach=False, stream=True, stderr=True,
                       stdout=True)
c = container.exec_run(command3, detach=False, stream=True, stderr=True,
                       stdout=True)
d = container.exec_run(command4, detach=False, stream=True, stderr=True,
                       stdout=True)
But if I execute the commands later (in another terminal), once the container has been created, it works. I need to create the container and execute the commands together.
Thanks.
It was a timing problem: it was resolved by adding time.sleep(10) between the two steps, after creating the container and before calling exec_run.
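A fixed sleep works but is fragile: ten seconds may be too short on a slow machine and wastes time on a fast one. A more robust pattern is to poll until the service answers. Below is a minimal generic sketch; the mysqladmin ping line in the comment is one plausible readiness check for this setup, not something from the question:

```python
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# e.g., after containers.run and before the exec_run calls:
# ready = wait_for(lambda: container.exec_run("mysqladmin ping")[0] == 0)
```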
