How to run a specific command while running the rest of the script in parallel - python

I have a script that builds images, pushes them to Docker Hub, and checks for errors.
But my main purpose is to first run a specific command, "node server.js", and then start running the script's commands.
And I want it all together in the same script file.
For now, what I am doing is opening two terminals: from the first terminal I run 'node server.js' to start the app, and from the second terminal I run the script.
What I want is to configure the 'node server.js' command inside the script so that it runs in the background and lets the script continue at the same time.
For now, this is my script, and when it reaches os.system(start_node), it blocks and never moves on to the other commands.
So my question is: how do I run this command and let the script continue, all in one terminal, instead of running node server.js in one terminal and the script in a second?
#!/usr/bin/env python3
# Before running this script you need to start the app with 'node server.js'
import os
import subprocess
import sys

os.chdir("/opt/new-test-app")

start_node = 'node server.js'
npm_test = 'npm test'
npm_output = ' 8 passing'
image = 'docker build -t test/new-test-app-new:latest .'
test = 'curl -o /dev/null -s -w "%{http_code}\n" http://localhost:8081'
docker_login = 'cat /cred/cred.txt | docker login --username test --password-stdin'
docker_push = 'docker push alexkocloud/new-test-app-new:latest'

os.system(start_node)  # blocks here: node never exits, so nothing below runs

# os.system() only returns an exit status, so capture the output of
# 'npm test' and look for the expected ' 8 passing' string instead
npm_result = subprocess.run(npm_test, shell=True, capture_output=True, text=True)
if npm_output not in npm_result.stdout:
    print("npm test not successfully passed")
    sys.exit(1)
else:
    print('npm test successfully passed with "8 passing"')

if os.system(test) == 0:
    print('HTTP Status Code 200 OK')
else:
    print('ERROR CODE')
    sys.exit(1)

os.system(image)
os.system(docker_login)
os.system(docker_push)
sys.exit(0)

OK, what I did is just change the command inside my variable to this:
nohup node server.js > output.log &
so it becomes:
start_node = 'nohup node server.js > output.log &'
and it's working the way I wanted.
If anyone has a better solution, I would love to see it.
Thanks.
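
A minimal sketch of one alternative, assuming the same script as above: subprocess.Popen starts node in the background and returns immediately, so the script keeps going, and you keep a handle on the child process to stop the server once the build is done.

import subprocess

# start the server in the background; Popen returns immediately
with open("output.log", "w") as logfile:
    node_proc = subprocess.Popen(
        ["node", "server.js"],
        stdout=logfile,
        stderr=subprocess.STDOUT,
    )

# ... npm test, curl check, docker build/login/push go here ...

# stop the server once the script is finished
node_proc.terminate()
node_proc.wait()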


How to set time and memory limit?

Hello,
I'm working on an online judge project and I'm using a Docker container to run user code.
When a user submits code, it runs in a Docker container and the output is returned to the user.
Below is the code showing how I handle the user's code by running it in a Docker container.
# module-level imports assumed by this view
from json import loads
from subprocess import getoutput
from django.http import JsonResponse

data = loads(request.body.decode("utf-8"))
# writing user code and custom input to file
write_to_file(data['code'], "main.cpp")
write_to_file(data['code_input'], "input.txt")
# Uncomment the 3 lines below if the image is not installed locally
# print("building docker image")
# p = getoutput("docker build . -t cpp_test:1")
# print(p)
containerID = getoutput("docker run --name cpp_compiler -d -it cpp_test:1")
# uploading user code onto the running container
upload_code = getoutput("docker cp main.cpp cpp_compiler:/usr/src/cpp_test/prog1.cpp")
upload_input = getoutput("docker cp input.txt cpp_compiler:/usr/src/cpp_test/input.txt")
result = getoutput('docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt" ')
print("Deleting the running container: ", getoutput("docker rm --force cpp_compiler"))
return JsonResponse(result)
Now I want to set time and memory limits on the user's code: when the code takes more than the expected time or memory, it should throw a TLE or out-of-memory error.
I can't figure out the correct way to implement this.
I'm new to this field; any help will be appreciated.
Thanks.
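
A minimal sketch of one way to do this, assuming the same image and paths as above: cap memory with docker run's --memory/--memory-swap flags (the kernel kills the process when it exceeds the cap) and cap wall-clock time with the coreutils timeout command inside the container, which exits with status 124 when the limit is hit.

from subprocess import getoutput

# hard 256 MB memory cap for the whole container (swap capped to the same
# value so the process cannot spill over into swap)
containerID = getoutput(
    "docker run --name cpp_compiler --memory=256m --memory-swap=256m -d -it cpp_test:1"
)

# 2-second wall-clock limit on the user's program only; report TLE on 124
result = getoutput(
    'docker exec cpp_compiler sh -c '
    '"g++ -o Test1 prog1.cpp && timeout 2s ./Test1 < input.txt; '
    'test $? -eq 124 && echo TLE"'
)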

subprocess.run simple scenario fails

I am trying to use Python's subprocess.run function to execute the following command:
pdftoppm -jpeg -f 1 -scale-to 200 data/andromeda.pdf and-page
pdftoppm is part of the poppler utilities and generates images from PDF files.
The file data/andromeda.pdf exists. The data folder is at the same level as the Python script and/or where I run the command from.
The command basically generates a JPEG file from page 1 (-f 1), 200px wide (-scale-to), from the given file, named in the and-page-1.jpeg format (and-page is the so-called ppmroot).
Long story short: from the command line it works as expected, i.e. if I call the above command manually from a zsh or bash shell, it generates the thumbnail as expected. However, if I run it from the Python subprocess module, it fails and returns error code 99!
The following is the Python code (the file name is sc_02_thumbnails.py):
import subprocess
import sys


def main(filename, ppmroot):
    cmd = [
        'pdftoppm',
        '-f 1',
        '-scale-to 200',
        '-jpeg',
        filename,
        ppmroot
    ]
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    if result.returncode:
        print("Failed to generate thumbnail. Return code: {}. stderr: {}".format(
            result.returncode,
            result.stderr
        ))
        print("Used cmd: {}".format(' '.join(cmd)))
        sys.exit(1)
    else:
        print("Success!")


if __name__ == "__main__":
    if len(sys.argv) > 2:
        filename = sys.argv[1]
        ppmroot = sys.argv[2]
    else:
        print("Usage: {} <pdffile> <ppmroot>".format(sys.argv[0]))
        sys.exit(1)
    main(filename, ppmroot)
And here is the repo, which includes the data/andromeda.pdf file as well.
I call my script (from zsh) as:
$ chmod +x ./sc_02_thumbnails.py
$ ./sc_02_thumbnails.py data/andromeda.pdf and-page
and ... thumbnail generation fails!
I have tried executing the Python script from both zsh and bash shells :(
What am I doing wrong?
The quoting is wrong: each flag and its value must be separate list elements, so you should have '-f', '1', etc.
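A minimal sketch of the corrected list: with shell=False (the default), every element is passed to pdftoppm as one literal argument, so '-f 1' arrives as a single malformed token that pdftoppm rejects.

cmd = [
    'pdftoppm',
    '-f', '1',            # flag and its value as separate elements
    '-scale-to', '200',
    '-jpeg',
    filename,
    ppmroot,
]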

How can I make my custom shell work with ssh?

I'm making a custom shell in Python for a very limited user on a server, who logs in via ssh with public key authentication. They need to be able to run ls, find -type d, and cat in specific directories with certain limitations. This works fine if you run something like ssh user@server -i keyfile, because you see the interactive prompt and can run those commands. However, something like ssh user@server -i keyfile "ls /var/log" doesn't: ssh simply hangs, with no response. Using the -v switch I've found that the connection is succeeding, so the problem is in my shell. I'm also fairly certain that the script isn't even being started, since a print sys.argv at the beginning of the program does nothing. Here's the code:
#!/usr/bin/env python
import subprocess
import re
import os

with open(os.devnull, 'w') as devnull:
    proc = lambda x: subprocess.Popen(x, stdout=subprocess.PIPE, stderr=devnull)
    while True:
        try:
            s = raw_input('> ')
        except:
            break
        try:
            cmd = re.split(r'\s+', s)
            if len(cmd) != 2:
                print 'Not permitted.'
                continue
            if cmd[0].lower() == 'l':
                # Snip: verify directory
                cmd = proc(['ls', cmd[1]])
                print cmd.stdout.read()
            elif cmd[0].lower() == 'r':
                # Snip: verify directory
                cmd = proc(['cat', cmd[1]])
                print cmd.stdout.read()
            elif cmd[0].lower() == 'll':
                # Snip: verify directory
                cmd = proc(['find', cmd[1], '-type', 'd'])
                print cmd.stdout.read()
            else:
                print 'Not permitted.'
        except OSError:
            print 'Unknown error.'
And here's the relevant line from ~/.ssh/authorized_keys:
command="/path/to/shell $SSH_ORIGINAL_COMMAND" ssh-rsa [base-64-encoded-key] user#host
How can I make the shell run the given command when it is passed on the command line, so it can be used in scripts, without starting an interactive shell?
The problem with ssh not responding is related to the fact that ssh user@host cmd does not open a terminal for the command being run. Try calling ssh user@host -t cmd.
However, even if you pass the -t option, you'd still have another problem with your script: it only works interactively and totally ignores the $SSH_ORIGINAL_COMMAND being passed. A naive solution would be to check sys.argv, and if it is bigger than 1, don't loop forever; instead, execute only whatever command it contains.
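A minimal sketch of that idea, assuming the authorized_keys line above passes $SSH_ORIGINAL_COMMAND as arguments to this script; handle() is a hypothetical stand-in for the 'l' / 'r' / 'll' dispatch logic of the loop:

#!/usr/bin/env python
import sys

def handle(line):
    # hypothetical placeholder for the verify-and-dispatch logic above
    print 'would handle: %s' % line

if len(sys.argv) > 1:
    # non-interactive: ssh put $SSH_ORIGINAL_COMMAND on our argv
    handle(' '.join(sys.argv[1:]))
else:
    # interactive session: fall back to the prompt loop
    while True:
        try:
            handle(raw_input('> '))
        except EOFError:
            break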

python-notify module & cron: gio.Error

I'm asking for help with showing notifications from a script scheduled with python-crontab, because everything I've tried does not work. The display is not initialised when the script is launched by cron; when I start it manually, it works.
The code I've tried:
#!/usr/bin/env python
# coding: utf8
import subprocess
import os

#os.environ.setdefault("XAUTHORITY", "/home/guillaume" + "/.Xauthority")
#os.environ.setdefault('DISPLAY', ':0.0') # does not work
#os.environ['DISPLAY'] = ':0.0' # does not work
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)

# more code, which is working (using VLC)
cmd3 = "cvlc rtp://232.0.2.183:8200 --sout file/mkv:/path/save/file.mkv" # to download the TV stream
with open("/path/debug_cvlc.log", 'w') as out:
    proc = subprocess.Popen(cmd3, stderr=out, shell=True, preexec_fn=os.setsid)
pid = proc.pid # to get the pid
with open("/path/pid.log", "w") as f:
    f.write(str(pid)) # to write the pid to a file
# I'm using the pid to stop the download with another cron task, and to display another notify message.
# Download and stop are working very well, and zenity too. But not notify-send.
Thanks
Edit: here are the environment variables I have for this cron script:
{'LANG': 'fr_FR.UTF-8', 'SHELL': '/bin/sh', 'PWD': '/home/guillaume', 'LOGNAME': 'guillaume', 'PATH': '/usr/bin:/bin', 'HOME': '/home/guillaume', 'DISPLAY': ':0.0'}
Edit2: I'm calling my script in cron like this:
45 9 30 6 * export DISPLAY=:0.0 && python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
To be precise, I have two screens, so I think DISPLAY=:0.0 is the way to display this notification.
But I don't see it.
Edit3: It appears that I have a problem with notify-send, because it works using zenity:
subprocess.call("zenity --warning --timeout 5 --text='this test is working'", shell=True)
I have notify-send version 0.7.3, and I note that notify-send does work from the terminal.
Edit4: Next try with python-notify.
import pynotify
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST")
n.show()
The log file shows this (in French):
Traceback (most recent call last):
File "/home/path/script.py", line 22, in <module>
n.show()
gio.Error: Impossible de se connecter : Connexion refusée
#Translating: Unable to connect : Connection refused
So I have a problem with D-Bus? What is this?
Solution: get the DBUS_SESSION_BUS_ADDRESS before creating the cron entry:

from crontab import CronTab
import os

cron = CronTab()
dbus = os.getenv("DBUS_SESSION_BUS_ADDRESS") # get the dbus address
# creating the cron entry
cmd_start = "export DBUS_SESSION_BUS_ADDRESS=" + str(dbus) + " && export DISPLAY=:0.0 && cd /path && python /path/script.py > path/debug_cron.log 2>&1"
job = cron.new(cmd_start)
job.day.on(self.day_on) # and all the lines to set the schedule, hours etc.
cron.write() # write the crontab

Finally, the cron line looks like this:
20 15 1 7 * export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-M0JCXXbuhC && export DISPLAY=:0.0 && python script.py
Then the notification is displayed. Problem resolved!! :)
You are calling the cron job like:
45 9 30 6 * DISPLAY=:0.0 python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
which is incorrect, since you are not exporting the DISPLAY variable, and the subsequent command does not run.
Try this instead:
45 9 30 6 * export DISPLAY=:0.0 && cd /home/path/ && python script.py >> debug_cron.log 2>&1
Also, you are setting the DISPLAY variable within your cron job as well, so check whether the cron job works without exporting it on the job line:
45 9 30 6 * cd /home/path/ && python script.py >> debug_cron.log 2>&1
EDIT
While debugging, run the cron job every minute. Following worked for me:
Cron entry
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
script.py
#!/usr/bin/env python
import subprocess
import os
os.environ.setdefault('DISPLAY', ':0.0')
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
EDIT 2
Using pynotify, script.py becomes
#!/usr/bin/env python
import pynotify
import os
os.environ.setdefault('DISPLAY', ':0.0')
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST123")
n.show()
and cron entry becomes
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
EDIT 3
One environment variable, DBUS_SESSION_BUS_ADDRESS, is missing from the cron environment.
It can be set in this and this fashion
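For instance, a minimal sketch of recovering it in Python, under the assumption that a desktop session process such as gnome-session is running for the user (the session process name is a guess and varies by desktop): read the variable out of /proc/<pid>/environ and export it before calling notify-send.

import os
import subprocess

# assumption: gnome-session is the user's running desktop session process
pid = subprocess.check_output(
    ['pgrep', '-u', os.environ['LOGNAME'], 'gnome-session']
).split()[0]

# /proc/<pid>/environ holds NUL-separated KEY=VALUE pairs
with open('/proc/%s/environ' % pid) as f:
    env = dict(item.split('=', 1) for item in f.read().split('\0') if '=' in item)

os.environ['DBUS_SESSION_BUS_ADDRESS'] = env['DBUS_SESSION_BUS_ADDRESS']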
crontab is considered an external host -- it doesn't have permission to write to your display.
Workaround: allow anyone to write to your display. Type this in your shell when you're logged in:
xhost +

Called bash script doesn't start up GNU screen session

I have a problem with a backup script which is supposed to call a bash start/stop script that manages a "daemon" (via GNU screen). At the moment my Python backup script is called via cron. Within launch.sh the given parameter is examined: if "stop" is given, the script echoes "Stopping..." and runs the GNU screen command to shut down the session; the same goes for "start". If the script is called via subprocess.call(..., shell=True) in Python, the string is shown but the screen session remains untouched. If it is called directly in bash, everything works fine.
#!/usr/bin/env python
'''
Created on 27.07.2013
BackUp Script v0.2
@author: Nerade
'''
import time
import os
from datetime import date
from subprocess import check_output
import subprocess

script_dir = '/home/minecraft/automated_backup'
#folders = ['/home/minecraft/staff']
folders = ['/home/minecraft/bspack2', '/home/minecraft/staff']
# log = 0
backup_date = date.today()
backup_dir = '/home/minecraft/automated_backup/' + backup_date.isoformat()


def main():
    global log
    init_log()
    init_dirs()
    for folder in folders:
        token = folder.split("/")
        stopCmd = folder + '/launch.sh stop'
        log.write("Stopping server %s...\n" % (token[3]))
        subprocess.call(stopCmd, shell=True)
        #print stopCmd
        while screen_present(token[3]):
            time.sleep(0.5)
        log.write("Server %s successfully stopped!\n" % (token[3]))
        specificPath = backup_dir + '/' + token[3]
        os.makedirs(specificPath)
        os.system("cp /home/minecraft/%s/server.log %s/server.log" % (token[3], specificPath))
        backup(folder, specificPath + '/' + backup_date.isoformat() + '.tar.gz')
    dumpDatabase(backup_dir)
    for folder in folders:
        token = folder.split("/")
        startCmd = folder + '/launch.sh start'
        log.write("Starting server %s...\n" % (token[3]))
        subprocess.call(startCmd, shell=True)
        time.sleep(1)
        log.write(str(screen_present(token[3])))  # write() needs a string, not a bool
        #print startCmd


def dumpDatabase(target):
    global log
    log.write("Dumping Database...\n")
    cmd = "mysqldump -uroot -p<password> -A --quick --result-file=%s/%s.sql" % (backup_dir, backup_date.isoformat())
    os.system(cmd)
    #print cmd


def backup(source, target):
    global log
    log.write("Starting backup of folder %s to %s\n" % (source, target))
    cmd = 'tar cfvz %s --exclude-from=%s/backup.conf %s' % (target, source, source)
    os.system(cmd)
    #print cmd


def screen_present(name):
    var = check_output(["screen -ls; true"], shell=True)
    if "." + name + "\t(" in var:
        return True
    else:
        return False


def init_log():
    global log
    log = open("%s/backup.log" % script_dir, 'a')
    log.write(
        "Starting script at %s\n" % time.strftime("%m/%d/%Y %H:%M:%S")
    )


def init_dirs():
    global backup_dir, log
    log.write("Checking and creating directories...\n")
    if not os.path.isdir(backup_dir):
        os.makedirs(backup_dir)


if __name__ == '__main__':
    main()
And the launch.sh:
#!/bin/sh
if [ $# -eq 0 ] || [ "$1" = "start" ]; then
    echo "Starting Server bspack2"
    screen -S bspack2 -m -d java -Xmx5G -Xms4G -jar mcpc-plus-legacy-1.4.7-R1.1.jar nogui
fi
if [ "$1" = "stop" ]; then
    screen -S bspack2 -X stuff 'stop\015'
    echo "Stopping Server bspack2"
fi
What's my problem here?
I'm sure by now you've solved this problem, but looking through your question I'd bet the answer is remarkably simple: mcpc-plus-legacy-1.4.7-R1.1.jar isn't found by java, which fails, and subsequently screen terminates.
In launch.sh, screen will execute in the same directory as the calling script. In this case your Python script, when run by cron, will have the running user's home directory as its working directory (so root crontabs will run in /root/, for instance, and a user crontab in /home/username/).
The simple solution is just to add the following:
cd /home/minecraft/bspack2
as the second line of your launch.sh script, just after #!/bin/sh.
In the future, I'd recommend leveraging the -L parameter when interacting with screen. This turns on autologging: by default, a file "screenlog.0" is generated in the current directory when screen terminates, showing you a log of the activity during the screen session. This lets you debug screen problems with ease, and it also encourages keeping track of the current directory while working with shell scripts, which makes finding the screen log output simple.
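
For completeness, a minimal sketch of the same fix applied on the Python side instead, assuming the backup script above: pass cwd to subprocess.call so launch.sh (and therefore screen and java) runs with the server folder as its working directory no matter where cron starts the backup.

import subprocess

folder = '/home/minecraft/bspack2'  # one entry from the folders list above

# run launch.sh from the server directory so the relative .jar path
# resolves even when cron starts this script in the home directory
subprocess.call(folder + '/launch.sh start', shell=True, cwd=folder)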
