As in the title: a process inside my Docker container stalls and eventually gets killed.
My Python (Flask) project runs a bash script, one step of which runs an R script that pulls data from InfluxDB and then processes it. When the project fetches data for a short period, e.g. 1-5 days, there is no problem. The trouble starts with bigger time frames, like a couple of weeks: everything slows down so badly that it takes ages to generate anything (I checked the logs), and eventually the process gets killed. It's fine when the R script pulls about 25 MB of data, but 70 MB is not as easy. Could it be that Flask + bash + R use too much memory at once, or something like that? The problem does not appear when the same thing is invoked outside Docker.
Dockerfile:
FROM ubuntu
# Install system packages for the flask app and the R toolchain
RUN apt-get clean && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install -y \
python3 \
python3-pip \
r-base \
r-base-dev \
r-cran-rgl \
mutt \
git \
texlive-fonts-recommended
# Install Python requirements for the flask app
# (the file must be copied into the image before pip can read it)
COPY requirements.txt ./
RUN pip3 install -r ./requirements.txt
flask app snippet:
@app.route('/send', methods=['POST'])
def send():
    path = os.path.dirname(os.path.realpath(__file__))
    script = path + '/generate_pdf.sh'
    address = str(request.form['email'])
    start_date = convert_date(str(request.form['start_date']))
    end_date = convert_date(str(request.form['end_date']))
    command = [script, start_date, end_date, address]
    subprocess.run(command)
    return json.dumps({
        'status': 'OK',
        'message': 'The action is completed'
    })
generate_pdf.sh:
#!/bin/bash
start_date="$1"
end_date="$2"
address="$3"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
report_name="\"$DIR/my_document.pdf\""
R -e "rmarkdown::render('$DIR/generate_document.Rmd', output_file = $report_name)" --args "$start_date" "$end_date"
report_name="$DIR/my_document.pdf"
echo | mutt -s "Generated document" -a "$report_name" -- "$address"
rm "$report_name"
R script snippet:
where.clause <- paste0("time >= '",
                       start.date,
                       "' AND time <= '",
                       as.character(as.Date(end.date) + days(1)),
                       "'")
con <- influxdbr::influx_connection(host = "localhost",
                                    port = 8086,
                                    user = "root",
                                    pass = "root")
select.query <- paste0('id, name, surname, car, employment_status')
rows <- influx_select(con, db = 'my_db', select.query, from = 'workers',
                      where = where.clause)
rows <- as.data.frame(rows, stringsAsFactors = FALSE)
if (is.data.frame(rows) && nrow(rows) == 0) {
  cat('No data could be obtained from the database.', sep = '\n')
  knitr::knit_exit()
}
Here are the logs I got while executing the app, which was supposed to pull about 74 MB of data.
....
label: unnamed-chunk-4 (with options)
List of 3
$ echo : logi FALSE
$ message: logi FALSE
$ warning: logi FALSE
Success: (204) No Content
/app/generate_pdf.sh: line 8: 58 Killed
....
The application works perfectly outside Docker.
When the rows <- influx_select command is invoked, the data arrives in raw form; before it gets cast to a data frame it already weighs a lot: 24 MB, 70 MB and more.
I manually ran the script inside Docker and the R script got a bit further:
....
label: unnamed-chunk-8 (with options)
List of 4
$ echo : logi FALSE
$ message : logi FALSE
$ fig.align : chr "left"
$ fig.height: num 7
Quitting from lines 72-76 (generate_document.Rmd)
Error in system(paste(which, shQuote(names[i])), intern = TRUE, ignore.stderr = TRUE) :
cannot popen '/usr/bin/which 'pdfcrop' 2>/dev/null', probable reason 'Cannot allocate memory'
...
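A quick way to test the memory theory, assuming the container and image names below stand in for the real ones: ask Docker whether the OOM killer fired, watch usage live, and rerun with a higher cap.
# Prints true if the kernel OOM killer terminated the container.
docker inspect --format '{{.State.OOMKilled}}' my_container
# Live memory usage while the report is generating.
docker stats my_container
# Rerun with an explicit higher memory limit (4 GB here; adjust as needed).
docker run -m 4g --memory-swap 4g my_image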
Hello,
I'm working on an online judge project and I'm using a Docker container to run user code.
When a user submits code, it runs in a Docker container and the output is returned to the user.
Below is the code showing how I handle the user code by running it in the container.
data = loads(request.body.decode("utf-8"))
# writing user code and custom input to file
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment the 3 lines below if the image is not installed locally
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code on running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container : ",getoutput("docker rm --force cpp_compiler"))
return JsonResponse(result)
Now I want to set time and memory limits on the user's code: when the code takes more than the expected time or memory, it should throw a TLE or out-of-memory error.
I can't figure out the correct way to implement this.
I'm new in this field; any help will be appreciated.
Thanks.
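A sketch of one common approach, assuming Docker's cgroup flags plus coreutils timeout are acceptable (the 256 MB / 1 CPU / 5-second values are placeholders):
# Start the container with hard memory and CPU caps.
docker run --name cpp_compiler --memory=256m --memory-swap=256m --cpus=1 -d -it cpp_test:1
# Compile, then run the user's program under a 5-second wall-clock limit.
docker exec cpp_compiler sh -c "g++ -o Test1 prog1.cpp && timeout 5 ./Test1 < input.txt"
timeout exits with status 124 when the time limit is hit, which can be reported as TLE; a kill by the memory limit usually surfaces as status 137 (SIGKILL), which can be reported as out of memory.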
Hi guys, I would like to ask for some help with my bash script.
I am running 2 Python scripts inside my bash script. It works when I run it manually, but when I use cron, only the commands in the .sh file run, not the .py scripts.
Please note that I have already installed the necessary utils and packages for python3.
This is the script:
#!/usr/bin/env bash
# list.tmp path directory
fileLoc="/home/ec2-user/PushNotification/Incoming34days/list34days.tmp"
# URL to POST request
refLink='http link'
# Title of Push Notification
title='34th day: Grace Period is about to end'
# curl type
type='Notification'
# curl action_type
actionType='NotificationActivity'
# Get the current date and time
now=$(date '+%b %d %Y %H:%M:%S')
# Message to the user
body="Subscribe to the Philippine Mobile Number plan now to continue receiving calls and texts and sending text messages to the Philippines."
# Logs location
logsLoc="/home/ec2-user/PushNotification/Incoming34days/logs.tmp"
# current number
currentNumLoc="/home/ec2-user/PushNotification/Incoming34days/currentNum.tmp"
echo "[$now] Sending notifications to mobile numbers advising today is the last day of grace period..." > $logsLoc
# Python file to SELECT all id who has 34 days counter
python3 select34days.py
# psql -d $database -t -c "SELECT id FROM svn WHERE current_date - expiry_date::DATE = 4"
# psql must be set up using .pgpass for PostgreSQL authentication; please indicate the
# database name and the query list directory. Deleting the last line from list.txt.
# This is to read the text file list.txt line by line
while IFS='' read -r list;
# for list in `cat list.txt`;
do
# curl POST request
response=$(curl --location --request POST $refLink \
--header 'Authorization: Basic YXBwdm5vdXNlcjphcHB2bm9wYXNz' \
--header 'Content-Type: application/json' \
--data-raw '{
"title":"'"$title"'",
"body":"'"$body"'",
"min" :[{"mobileNumber" : "'"$list"'"}],
"type" : "'"$type"'",
"action_type" : "'"$actionType"'"}')
# Echo mobile number
echo "[$now] Mobile Number: $list" >> $logsLoc
# Echo response from curl
echo "Response: '$response'"
echo "[$now] Result: '$response'" >> $logsLoc
# Update the current number of the list
echo $list > $currentNumLoc
echo "[$now] Updating $list status into EXPIRED" >> $logsLoc
# Updating status into EXPIRED
python3 updateQuery34.py
done < "$fileLoc"
# end of script
The select34days.py and updateQuery34.py scripts are not running.
I have a logs.tmp file to check this situation, and it only shows the commands inside my .sh file.
My crontab contains:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/bin:/usr/bin
MAILTO=root
Your PATH looks wrong:
PATH=/sbin:/bin:/usr/bin:/usr/bin
This includes /usr/bin twice which isn't necessary, but hints that something else should have been there.
Depending on how you've installed it, python might be in /usr/bin/ or /usr/local/bin or even somewhere in /opt.
At the commandline you can find python's directory using:
dirname $(which python3)
This directory needs to be added to your path in your crontab.
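For example, if dirname $(which python3) prints /usr/local/bin, the crontab header would become (substitute whatever directory your system reports):
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/bin:/usr/local/bin
MAILTO=root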
Just declare the full path together with the script name (e.g. /etc/test.sh) every time you run bash scripts from a cron job, since cron doesn't know where the specific script lives on the server.
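A sketch of what that looks like here, with a made-up schedule and the interpreter also called by absolute path:
# crontab entry: full path to the shell and to the script
0 8 * * * /bin/bash /home/ec2-user/PushNotification/Incoming34days/script.sh
# and inside the script, call python3 by absolute path too:
/usr/bin/python3 /home/ec2-user/PushNotification/Incoming34days/select34days.py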
I can source a bash script (without a shebang) easily as a bash command in a terminal, but when I try to do the same via Python with
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell = True)
or
sourcevars = [". /etc/openvpn/easy-rsa/vars"]
runSourcevars = subprocess.Popen(sourcevars, shell = True)
I receive:
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
What's the matter, and how do I do it correctly? I've read some topics here, e.g. here, but could not solve my problem using the given advice. Please explain with examples.
UPDATED:
# os.chdir = ('/etc/openvpn/easy-rsa')
initvars = "cd /etc/openvpn/easy-rsa && . ./vars && ./easy-rsa ..."
# initvars = "cd /etc/openvpn/easy-rsa && . ./vars"
# initvars = [". /etc/openvpn/easy-rsa/vars"]
cleanall = ["/etc/openvpn/easy-rsa/clean-all"]
# buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
# buildkey = ["printf '\n\n\n\n\n\n\n\n\n\nyes\n ' | /etc/openvpn/easy-rsa/build-key AAAAAA"]
# buildca = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
runInitvars = subprocess.Popen(initvars, shell = True)
# runInitvars = subprocess.Popen(initvars,stdout=subprocess.PIPE, shell = True, executable="/bin/bash")
runCleanall = subprocess.Popen(cleanall , shell=True)
# runBuildca = subprocess.Popen(buildca , shell=True)
# runBuildca.communicate()
# runBuildKey = subprocess.Popen(buildkey, shell=True )
UPDATE 2
buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
runcommands = subprocess.Popen(initvars+cleanall+buildca, shell = True)
There's absolutely nothing wrong with this in and of itself:
# What you're already doing -- this is actually fine!
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell=True)
# ...*however*, it won't have any effect at all on this:
runOther = subprocess.Popen('./easy-rsa build-key yadda yadda', shell=True)
However, if you subsequently try to run a second subprocess.Popen(..., shell=True) command, you'll see that it doesn't have any of the variables set by sourcing that configuration.
This is entirely normal and expected behavior: The entire point of using source is to modify the state of the active shell; each time you create a new Popen object with shell=True, it's starting a new shell -- their state isn't carried over.
Thus, combine into a single call:
prefix = "cd /etc/openvpn/easy-rsa && . ./vars && "
cmd = "/etc/openvpn/easy-rsa/clean-all"
runCmd = subprocess.Popen(prefix + cmd, shell=True)
...such that you're using the results of sourcing the script in the same shell invocation as that in which you actually source the script.
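One way to package that idea is a small wrapper script, sketched here from the clean-all and build-ca attempts above (the printf answers are assumptions about the prompts):
#!/bin/bash
# Everything runs in one shell, so the variables exported by vars are still
# set when clean-all and build-ca execute.
cd /etc/openvpn/easy-rsa || exit 1
. ./vars
./clean-all
printf '\n\n\n\n\n\n\n\n\n' | ./build-ca
A single Popen call launching this one script then sees the sourced state end to end.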
Alternately (and this is what I'd do), require your Python script to be invoked by a shell which already has the necessary variables in its environment. Thus:
# ask your users to do this
set -a; . ./vars; ./yourPythonScript
...and you can error out if people don't do so very easily:
import os, sys
if not 'EASY_RSA' in os.environ:
    print >>sys.stderr, "ERROR: Source vars before running this script"
    sys.exit(1)
I am calling a tcsh script from my Python program. The tcsh script takes 10-12 minutes to complete, but when I call it from Python, Python interrupts the script before it executes completely. Here is the code snippet:
import subprocess
import os
os.chdir(dir_path_forCD)
subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
print(a + 1)
Can someone point out how I can call the nii_mdir_sdcme script from Python without interrupting (killing) it before it has executed completely?
The complete script is as follows:
#!/usr/bin/python
import subprocess
import os
import dicom
import time

dire = '.'
directories = subprocess.check_output(
    ['find', '/Users/sdb99/Desktop/dicom', '-maxdepth', '1', '-type', 'd',
     '-mmin', '-660', '-type', 'd', '-mmin', '+5']
).splitlines()
number_of_directories = len(directories)
b_new = '.'

for n in range(1, number_of_directories):
    dire_str = directories[n]
    dire_str = str(dire_str)  # [2:-1]
    print(dire_str)
    for dirpath, dirnames, filenames in os.walk(dire_str, topdown=True):
        a = 1
        for filename in filenames:
            print(dirpath)
            if filename[-4:] == '.dcm':
                firstfilename = os.path.join(dirpath, filename)
                dir_path_forCD = dirpath
                dcm_info = dicom.read_file(firstfilename, force=True)
                if dcm_info[0x0019, 0x109c].value == 'epiRTme':
                    os.chdir(dir_path_forCD)
                    subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a),
                                    shell=True)
                    print(a + 1)
            break
        break
    break
tcsh script: nii_mdir_sdcme
#!/bin/tcsh
if ($#argv < 2) then
    echo "Usage: nii_mdir_sdcme start_dir# end_dir#"
    exit
else
    set start = $argv[1]
    set end = $argv[2]
    if ( ! -d ./medata ) then
        sudo mkdir ./medata
    endif
    sudo chown sdcme ./medata
    sudo chgrp users ./medata
    set i = $start
    while ( $i <= $end )
        echo " "
        if ( $i < 10 ) then
            echo "Entering 000$i..."
            cd 000$i
            sudo chmod 777 .
            niidicom_sdcme run0$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        else
            echo "Entering 00$i..."
            cd 00$i
            sudo chmod 777 .
            niidicom_sdcme run$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        endif
        cd ..
        @ i++
    end
endif
The problem was with the counter a that I was passing to the tcsh script.
It turns out the problem was never Python interrupting the tcsh script: with shell=True, subprocess lets the tcsh script run without interruption.
I'm asking for help with showing notifications using python-crontab, because everything I've tried does not work. The display is not initialised when the script is launched by cron; when I start it manually, it works.
The codes I've tried:
#!/usr/bin/env python
# coding: utf8
import subprocess
import os
#os.environ.setdefault("XAUTHORITY", "/home/guillaume" + "/.Xauthority")
#os.environ.setdefault('DISPLAY', ':0.0') # do not work
#os.environ['DISPLAY'] = ':0.0' # do not work
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
# more code, which is working (using VLC)
cmd3 = "cvlc rtp://232.0.2.183:8200 --sout file/mkv:/path/save/file.mkv" # to record the TV stream
with open("/path/debug_cvlc.log", 'w') as out:
proc = subprocess.Popen(cmd3, stderr=out, shell=True, preexec_fn=os.setsid)
pid = proc.pid # to get the pid
with open("/path/pid.log", "w") as f:
f.write(str(pid)) # to write the pid in a file
# I'm using the pid to stop the download with another cron task, and to display another notify message.
# Download and stop work very well, and zenity too. But not notify-send.
Thanks
Edit: here are the environment variables I have for this cron's script:
{'LANG': 'fr_FR.UTF-8', 'SHELL': '/bin/sh', 'PWD': '/home/guillaume', 'LOGNAME': 'guillaume', 'PATH': '/usr/bin:/bin', 'HOME': '/home/guillaume', 'DISPLAY': ':0.0'}
Edit2: I'm calling my script from cron like this:
45 9 30 6 * export DISPLAY=:0.0 && python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
I should add that I have two screens, so I think DISPLAY=:0.0 is the right way to address this notification.
But I don't see it.
Edit3: It appears that I have a problem with notify-send specifically, because it works using zenity:
subprocess.call("zenity --warning --timeout 5 --text='this test is working'", shell=True)
I have notify-send version 0.7.3, and I note that notify-send works from the terminal.
Edit4: Next try with python-notify.
import pynotify
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST")
n.show()
The log file shows this (in French):
Traceback (most recent call last):
File "/home/path/script.py", line 22, in <module>
n.show()
gio.Error: Impossible de se connecter : Connexion refusée
#Translating: Unable to connect : Connection refused
So I have a problem with D-Bus? What is this?
Solution: get the DBUS_SESSION_BUS_ADDRESS before creating the cron entry:
cron = CronTab()
dbus = os.getenv("DBUS_SESSION_BUS_ADDRESS") # get the dbus
# creating cron
cmd_start = "export DBUS_SESSION_BUS_ADDRESS=" + str(dbus) + " && export DISPLAY=:0.0 && cd /path && python /path/script.py > path/debug_cron.log 2>&1"
job = cron.new(cmd_start)
job.day.on(self.day_on) # and all the lines to set the cron schedule, hours etc.
cron.write() # write the cron's file
Finally, the cron line looks like this:
20 15 1 7 * export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-M0JCXXbuhC && export DISPLAY=:0.0 && python script.py
Then the notification is displayed. Problem resolved!! :)
You are calling the cron job like
45 9 30 6 * DISPLAY=:0.0 python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
which is incorrect, since you are not exporting the DISPLAY variable, and the subsequent command does not run.
Try this instead
45 9 30 6 * export DISPLAY=:0.0 && cd /home/path/ && python script.py >> debug_cron.log 2>&1
Also, you are setting the DISPLAY variable within your script as well, so try whether the cron job works without exporting it on the job line
45 9 30 6 * cd /home/path/ && python script.py >> debug_cron.log 2>&1
EDIT
While debugging, run the cron job every minute. The following worked for me:
Cron entry
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
script.py
#!/usr/bin/env python
import subprocess
import os
os.environ.setdefault('DISPLAY', ':0.0')
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
EDIT 2
Using pynotify, script.py becomes
#!/usr/bin/env python
import pynotify
import os
os.environ.setdefault('DISPLAY', ':0.0')
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST123")
n.show()
and cron entry becomes
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
EDIT 3
One environment variable DBUS_SESSION_BUS_ADDRESS is missing from the cron environment.
It can be set in this and this fashion
crontab is considered an external host -- it doesn't have permission to write to your display.
Workaround: allow anyone to write to your display. Type this in your shell when you're logged in:
xhost +
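Note that xhost + turns off access control for every host. A narrower grant, which should work on most X servers, is the server-interpreted form that allows only your local user:
xhost +si:localuser:$(whoami)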