I'm running a Python script in a Docker container using docker-compose on an Ubuntu 20.04 server, and I'm looking for a way to automatically delete old docker-compose logs. I added the following logging configuration, but the script hangs after about a week of running:
logging:
  driver: "json-file"
  options:
    max-size: "200k"
    max-file: "10"
It seems to me that this is happening because the space for logs is running out. Is that possible, or is there another reason? And if this is the cause, how can I solve it? Thanks
You can do this with an external script.
First, find the container's log file location with this command:
docker container inspect --format='{{.LogPath}}' [CONTAINER ID/NAME]
And then, truncate the log file with
truncate -s 0 /path/to/logfile
You can put that in a bash script and have it run automatically with cron.
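If you'd rather keep everything in Python instead, here is a minimal sketch of the same idea (the container name my-app is a placeholder, and the script generally needs to run as root, since Docker's log files live under /var/lib/docker):

import os
import subprocess

# Ask Docker where the container's json-file log lives
# ("my-app" is a placeholder container name)
log_path = subprocess.check_output(
    ["docker", "container", "inspect", "--format", "{{.LogPath}}", "my-app"],
    text=True,
).strip()

# Empty the file in place, the same effect as `truncate -s 0`
os.truncate(log_path, 0)

A crontab entry such as 0 3 * * * /usr/bin/python3 /opt/truncate-docker-log.py would then run it nightly.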
I'm looking for a way to get the OS version (such as Ubuntu 20.04.1 LTS) from a container that runs on a Kubernetes server. I mean, I need the OS of the server on which Kubernetes is running with a number of pods (and containers).
I saw there is a library called "kubernetes", but I didn't find any relevant info on this specific subject.
Is there a way to get this info with Python?
Many thanks for the help!
If you need to get the OS version of a running container, you should read
https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/
As described there, you can get access to your running pod with this command:
kubectl exec --stdin --tty <pod_name> -- /bin/bash
Then just type "cat /etc/os-release" and you will see the OS info your pod is running on. In most cases containers run on Unix-like systems, and that file will show you the current pod's OS.
You can also install Python or anything else inside your pod, but I do not recommend doing it. Containers ship with the minimum needed to make your app work. For a quick check it is OK, but after that, just deploy a new container.
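If you only need that one file, you can also skip the interactive shell and run the command non-interactively:
kubectl exec <pod_name> -- cat /etc/os-release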
You can also read the OS image of the node the pod is running on via kubectl. In the command below, replace <PODNAME> with your pod name.
kubectl get node $(kubectl get pod <PODNAME> -o jsonpath='{.spec.nodeName}') -o jsonpath='{.status.nodeInfo.osImage}'
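Since the question asks for Python: the official kubernetes client library can do the same lookup. A minimal sketch, with my-pod and default as placeholder pod name and namespace:

from kubernetes import client, config

# Load credentials from ~/.kube/config; use config.load_incluster_config()
# instead if this code runs inside the cluster
config.load_kube_config()
v1 = client.CoreV1Api()

# Find the node the pod is scheduled on, then read that node's OS image
pod = v1.read_namespaced_pod(name="my-pod", namespace="default")
node = v1.read_node(name=pod.spec.node_name)
print(node.status.node_info.os_image)  # e.g. "Ubuntu 20.04.1 LTS"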
I am executing my Robot Framework Selenium tests on a remote machine (it's a Docker container, but I need it to work with Podman too... so I guess Docker-specific commands wouldn't help me), and on this remote machine there is an automatic process running in the background, which produces terminal logs.
I can read this output when I execute docker logs <container_id> in my terminal, but I need to get it using Python and extract some info from these logs to show in the Robot Framework test log file.
Any ideas how to do it?
I have found several ways to execute a command on the remote machine and get its output, but here I am not executing any command; I just need to read what is being produced automatically.
Thank you for your advice
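One possible approach, sketched under the assumption that the docker or podman CLI is reachable from where the tests run: podman logs accepts the same arguments as docker logs, so you can shell out to whichever engine is in use.

import subprocess

def read_container_logs(container_id, engine="docker"):
    # `podman logs` takes the same form, so the engine is swappable
    result = subprocess.run(
        [engine, "logs", container_id],
        capture_output=True,
        text=True,
        check=True,
    )
    # Container output can land on either stream; combine both
    return result.stdout + result.stderr

print(read_container_logs("<container_id>"))

If the engine runs on a remote machine, the same call can be made over SSH, or (for Docker) by pointing the DOCKER_HOST environment variable at the remote daemon.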
I have made a Flask API for a spaCy NER model and deployed it on Docker. In the code I have used Python's logging module to write the outputs to a file, info.log.
The question is: how do I access the log file in the container after running it?
Since I had to look for a long time, I picked up bits of answers from different places and am compiling it here for anyone who is stuck.
After running the container, go to the terminal and run the following commands.
(I used PyCharm, and the terminal started inside the directory where my code and Dockerfile were stored)
docker ps
(this shows the containers running currently)
docker exec -it <container-name> bash
(now you have entered the container)
ls -lsa
(this will show all the files in the container, including the log file)
cat info.log
Now, you can see the log file contents on the terminal.
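Alternatively, if you just want the file on your host rather than a shell inside the container, docker cp can copy it out (the path /app/info.log is an assumption; use wherever your code actually writes the file):
docker cp <container-name>:/app/info.log .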
I am trying to run my Streamlit app via Docker. Since I eventually want to run my code on a Linux system, I am first checking whether it runs on my Windows system.
So I started my container and ran a command which gave me two URLs, but neither of the URLs is working.
This is the terminal result
This is the result in the browser
Do I have to specify a port number? And if yes, how do I find my local system's port?
Thanks in advance
I think you are using the wrong port number. Try adding -p 8501:8501 to your docker command and then go to localhost:8501 in your browser.
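For example (the image name my-streamlit-app is a placeholder; 8501 is Streamlit's default port):
docker run -p 8501:8501 my-streamlit-app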
First check if you are able to ping the internal IP of the Docker container from the host machine (see below for how to find that IP).
Then publish multiple ports using the following arguments:
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2> <image>
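For the ping check, you can look up the container's internal IP with docker inspect (this format string works for containers on the default bridge network):
docker inspect --format '{{.NetworkSettings.IPAddress}}' <container-name>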
I have a back-end API server created with Python & Flask. I used MongoDB as my database. I rebuild and rerun docker-compose every time I update my source code. Because of this, I always take a backup of my database before stopping and restarting the containers.
From the beginning I have been using this command to take a backup into my default folder:
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
This command worked well previously. After restarting docker-compose to bring my back-end up with the updated code, I restore the database with this command:
sudo docker-compose exec -T db mongorestore --archive --gzip < backup.gz
But from today, when I try to take a backup from the server while Docker is still running (as usual), the server freezes, as in the image below.
I am using an Amazon EC2 server with Ubuntu 20.04.
First, stop redirecting the output of the command. If you don't know whether it is working, you should be looking at all available information, which includes the output.
Then verify you can connect to your deployment using the mongo shell and run commands.
If that succeeds, look at the server log and verify there is a record of the connection from mongodump.
If that works, try dumping other collections.
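For the connectivity check, something like the following should work through the same compose service (mongo assumes the older shell binary; newer MongoDB images ship mongosh instead):
sudo docker-compose exec -T db mongo --eval "db.adminCommand('ping')"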
After digging for three days, I found that the main cause was Apache.
I had recently installed Apache to host my frontend as well. While Apache was running, the server would not let me dump the MongoDB backup. Somehow Apache was conflicting with Docker.
My solution:
1. Stop apache service
sudo service apache2 stop
2. Then take the MongoDB backup
sudo docker-compose exec -T db mongodump --archive --gzip --db SuperAdminDB > backup.gz
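3. Once the backup is done, start Apache again (presumably you want your frontend back up)
sudo service apache2 start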