I wonder if anyone has any insights into this. I have a bash script that should put my ssh key onto a remote machine. Adapted from here, the script reads:
#!/bin/sh
REMOTEHOST=user@remote
KEY="$HOME/.ssh/id_rsa.pub"
KEYCODE=$(cat "$KEY")
ssh -q $REMOTEHOST "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "$KEYCODE" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"
This works. The equivalent python script should be
#!/usr/bin/python
import os
os.system('ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "%(KEYCODE)s" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"' %
{'REMOTEHOST':'user@remote',
'KEYCODE':open(os.path.join(os.environ['HOME'],
'.ssh/id_rsa.pub'),'r').read()})
But in this case, I get:
sh: line 1: >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys: No
such file or directory
What am I doing wrong? I tried escaping the inner-most quotes, but I get the same error message... Thank you in advance for your responses.
You have a serious question -- in that os.system isn't behaving the way you expect it to -- but also, you should seriously rethink the approach as a whole.
You're launching a Python interpreter -- but then, via os.system, telling that Python interpreter to launch a shell! os.system shouldn't be used at all in modern Python (subprocess is a complete replacement)... but using any Python call which starts a shell instance is exceptionally silly in this kind of use case.
Now, in terms of the actual, immediate problem -- look at how your quotation marks are nesting. You'll see that the quote you're starting before mkdir is being closed in the echo, allowing your command to be split in a spot you don't intend.
The following fixes this immediate issue, but is still awful and evil (starts a subshell unnecessarily, doesn't properly check output status, and should be converted to use subprocess.Popen()):
os.system('''ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo '%(KEYCODE)s' >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"''' % {
'REMOTEHOST':'user@remote',
'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read()
})
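For completeness, here is a minimal sketch of the subprocess-based approach the answer recommends. The host name and key path in the usage comment are placeholders; the remote command string is the same one used above.

```python
import subprocess

def build_remote_command(keycode):
    # The same remote command as above, with the key substituted
    # inside single quotes on the remote side.
    return (
        "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; "
        "echo '%s' >> ~/.ssh/authorized_keys; "
        "chmod 644 ~/.ssh/authorized_keys" % keycode.strip()
    )

def push_key(remotehost, key_path):
    with open(key_path) as f:
        keycode = f.read()
    # The argument list goes to ssh directly -- no local shell is
    # involved, so the local quoting problem disappears entirely.
    return subprocess.call(
        ["ssh", "-q", remotehost, build_remote_command(keycode)]
    )

# Usage (hypothetical host):
#   status = push_key("user@remote", "/home/you/.ssh/id_rsa.pub")
```

Checking the returned status also gives you the error checking that os.system was silently ignoring.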
For one GitLab CI runner, I have a jar file which needs to run continuously on the Git Linux box. But since this application runs continuously, the python script on the next line never gets executed. How do I run the jar application and then execute the python script, one after the other?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the code java -jar app.jar.
Since this application runs continuously, the fourth line, python3 Report.py, never gets executed.
How do I make both of these run simultaneously, without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
- Start the java process in the background.
- Run your python code, remember its return status, and always return true.
- Kill the background process after running the python script.
- Check the status code of the python script.
Perhaps this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
I take a conservative approach here:
- I remember the process id of the bash script, so that I can kill it later.
- I ensure that the line running the python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the python script was 0 or not, so that GitLab CI can properly report whether the runner executed successfully.
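Should you ever drive the same choreography from Python instead of the CI shell, it can be sketched with subprocess; the sleep and echo commands below are stand-ins for bash scan.sh and python3 Report.py.

```python
import subprocess

scanner = subprocess.Popen(["sleep", "30"])       # bash scan.sh &
report = subprocess.run(["echo", "report done"])  # python3 Report.py
report_status = report.returncode                 # echo $? > report_status.txt
scanner.kill()                                    # kill $scanpid
scanner.wait()                                    # reap the background child
# Fail the job only on the report's status, as the final CI line does.
assert report_status == 0
```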
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
Even if you expect the unzip command to fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l if unzip failed.
I also added many echo statements for better debugging and error analysis; you might remove them in your final solution.
To keep both scripts running at the same time, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh &, or add & to the end of the line calling the jar file within scan.sh.
I am executing a python script from a bash script as follows:
python -O myscript.pyo &
After launching the python script, I need to press "enter" manually to get my prompt back.
Is there a way to avoid this manual intervention?
Thanks in advance!
pipe a blank input to it:
echo "" | python -O myscript.pyo
you might want to create a bash alias to save keystrokes: alias run_myscript="echo '' | python -O myscript.pyo"
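If the launcher is itself a Python script rather than a bash alias, the same blank-input trick can be sketched with subprocess; python3 -c "input()" stands in for myscript.pyo here.

```python
import subprocess

# Feed one blank line on stdin so the child never waits for a keypress.
result = subprocess.run(
    ["python3", "-c", "input()"],
    input="\n",   # what `echo "" |` provides on the shell side
    text=True,
)
```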
Placing wait after the line that runs the process in the background seems to work.
Source:
http://www.iitk.ac.in/LDP/LDP/abs/html/abs-guide.html#WAITHANG
Example given:
#!/bin/bash
# test.sh
ls -l &
echo "Done."
wait
Many thanks
I am trying to setup a crontab to run 6 python data scrapers. I am tired of having to restart them manually when one of them fails. When running the following:
> ps -ef | grep python
ubuntu 31537 1 0 13:09 ? 00:00:03 python /home/ubuntu/scrapers/datascraper1.py
etc... I get a list of the datascrapers 1-6 all in the same folder.
I edited my crontab like this:
sudo crontab -e
# m h dom mon dow command
* * * * * pgrep -f /home/ubuntu/scrapers/datascraper1.py || python /home/ubuntu/scrapers/datascraper1.py > test.out
Then I hit Ctrl+X to exit and hit yes to save as /tmp/crontab.M6sSxL/crontab.
However, it does not restart or even start datascraper1.py, whether I kill the process manually or it fails on its own. Next, I tried reloading cron, but it still didn't work:
sudo cron reload
Finally I tried removing nohup from the cron statement and that also did not work.
How can I check if a cron.allow or cron.deny file exists?
Also, do I need to add a username before pgrep? I am also not sure what the "> test.out" is doing at the end of the cron statement.
After running
grep CRON /var/log/syslog
to check to see if cron ran at all, I get this output:
ubuntu#ip-172-31-29-12:~$ grep CRON /var/log/syslog
Jan 5 07:01:01 ip-172-31-29-12 CRON[31101]: (root) CMD (pgrep -f datascraper1.py ||
python /home/ubuntu/scrapers/datascraper1.py > test.out)
Jan 5 07:01:01 ip-172-31-29-12 CRON[31100]: (CRON) info (No MTA installed, discarding output)
Jan 5 07:17:01 ip-172-31-29-12 CRON[31115]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 5 08:01:01 ip-172-31-29-12 CRON[31140]: (root) CMD (pgrep -f datascraper1.py || python /home/ubuntu/scrapers/datascraper1.py > test.out)
Since there is evidence of Cron executing the command, there must be something wrong with this command (note: I added the path to python):
pgrep -f datascraper1.py || /usr/bin/python /home/ubuntu/scrapers/datascraper1.py > test.out
Which is supposed to check whether datascraper1.py is running and, if not, restart it.
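That intended check can be written out as a small function for clarity; the pattern and command in the usage comment mirror the crontab line, and the names here are illustrative.

```python
import subprocess

def ensure_running(pattern, argv):
    """Start argv unless pgrep -f already finds a matching process."""
    found = subprocess.run(["pgrep", "-f", pattern]).returncode == 0
    if found:
        return None                    # pgrep succeeded: || short-circuits
    return subprocess.Popen(argv)      # pgrep failed: start the scraper

# Usage, mirroring the crontab line:
#   ensure_running("datascraper1.py",
#                  ["python", "/home/ubuntu/scrapers/datascraper1.py"])
```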
Since Cron is literally executing this statement:
(root) CMD (pgrep -f datascraper1.py || python /home/ubuntu/scrapers/datascraper1.py > test.out)
aka
root pgrep -f datascraper1.py
Running the above root command gives me:
The program 'root' is currently not installed. You can install it by typing:
sudo apt-get install root-system-bin
Is there a problem with Cron running commands from root?
Thanks for your help.
First of all, you need to see if cron is working at all.
Add this to your cron file (and ideally remove the python entry for now, to have a clean state):
* * * * * echo `date` >>/home/your_username/hello_cron
This will append the date to the file "hello_cron" every minute. Try this, and if it works, i.e. you see output every minute, write here and we can troubleshoot further.
You can also look in your system logs to see if cron has run your command, like so:
grep CRON /var/log/syslog
Btw, the > test.out part redirects the output of the python program to the file test.out. I am not sure why you need the nohup part - this would let the python programs keep running even after you log out - is this what you want?
EDIT: After troubleshooting cron:
The message about no MTA installed means that cron is trying to send you an e-mail with the output of the job, but cannot because you don't have a mail program installed.
Maybe this will fix it:
sudo apt-get install postfix
The line invoking the python program in cron is producing some output (an error) so it's in your best interests to see what happens. Look at this tutorial to see how to set your email address: http://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/
Just in case the tutorial becomes unavailable:
MAILTO=youremail@example.com
You need python's home (the directory where the python binary lives) on the PATH when the job runs. When you run python yourself, your shell finds it via your $PATH, but cron jobs run with a minimal PATH that usually does not include it. So python home needs to be globally exported for the user that owns the cron job (e.g. put it in an rc script in /etc/rc.d/), or you need to append the python home to PATH at the start of the cron job. So,
export PATH=$PATH:<path to python>
Or, write the cron entry as
/usr/bin/python /home/ubuntu/etc/etc
to call it directly. It might not be /usr/bin, run the command
'which python'
to find out.
The 'No MTA' message means your command produced output on STDERR, which would normally get mailed to the user, but can't because you have no Mail Transfer Agent set up (like mailx or mutt), so no mail for the user can get delivered from cron, and the output is discarded. If you'd like STDERR to go into the file as well, then at the end, instead of
"command" > test.out
write
"command" > test.out 2>&1
to redirect STDOUT into test.out first, and then send STDERR to the same place. (Note that the order matters: "command" 2>&1 > test.out points STDERR at the old STDOUT before test.out is attached, which is exactly why STDERR was still escaping.)
I'm trying to set up my /etc/rc.local to automatically start a process on reboot as another user. For some reason, the .bashrc for this user does not seem to be getting sourced.
Here's the command I added to /etc/rc.local :
sudo su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
this didn't work, so I also tried this:
sudo su -l batchuser -c ". /home/batchuser/.profile; /home/batchuser/app/run_prod.sh &"
run_prod.sh just starts a python script. The python script fails because it references modules on a python path which gets set in the .bashrc.
EDIT: it works when I do this
sudo su -l batchuser -c "export PYTHONPATH=/my/python/path; /home/batchuser/app/run_prod.sh &"
Why does this work and not the statement above? How come the .bashrc is not getting initialized?
I have run into this same problem. I can't fully explain the behavior (though note that su -c runs a non-interactive shell, and bash only sources .bashrc for interactive shells), but I ended up doing this type of thing:
sudo PYTHONPATH=$PYTHONPATH the_command
or more specifically for your case,
sudo PYTHONPATH=$PYTHONPATH su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
Does that work for you? If it does, you may find it doesn't return immediately like you expect it to. You may need to move the & outside the quotes so it applies to the sudo command.
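The underlying fix can also be sketched directly in Python: since non-interactive shells never read .bashrc, pass the variable to the child explicitly instead of relying on a startup file (the path is the placeholder from the question).

```python
import os
import subprocess

# Build the child's environment by hand instead of hoping .bashrc runs.
env = dict(os.environ, PYTHONPATH="/my/python/path")

result = subprocess.run(
    ["python3", "-c", "import os; print(os.environ['PYTHONPATH'])"],
    env=env, capture_output=True, text=True,
)
# result.stdout now shows the path the child actually saw.
```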
I have a python script I'd like to start on startup on an Ubuntu EC2 instance, but I'm running into trouble.
The script runs in a loop and takes care of exiting when it's ready, so I shouldn't need to start or stop it after it's running.
I've read and tried a lot of approaches with varying degrees of success, and honestly I'm confused about what the best approach is. I've tried putting a shell script that starts the python script in /etc/init.d, making it executable, and running update-rc.d to get it to run, but it has failed at every stage.
Here's the content of the script I've tried:
#!/bin/bash
cd ~/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
I then did
sudo chmod +x /etc/init.d/script_name
sudo update-rc.d script_name defaults
This doesn't seem to run on startup and I can't see why; if I run the command manually, it works as expected.
I also tried adding a line to rc.local to start the script, but that doesn't seem to work either.
Can anybody share what they have found is the simplest way to run a python script in the background, with arguments, on startup of an EC2 instance?
UPDATE: ----------------------
I've since moved this code to a file called /home/ubuntu/bin/watch_folder_start
#!/bin/bash
cd /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
and changed my rc.local file to this:
nohup /home/ubuntu/bin/watch_folder_start &
exit 0
This works when I manually run rc.local, but it won't fire on startup. I did chmod +x rc.local, but that didn't change anything.
Your /etc/init.d/script_name is missing the plumbing that update-rc.d and so on use, and won't properly handle stop, start, and other init-variety commands, so...
For initial experimentation, take advantage of the /etc/init.d/rc.local script (which should be linked to by default from /etc/rc2.d/S99rc.local). This gets you out of having to worry about the init.d conventions; just add things to /etc/rc.local before the exit 0 at its end.
Additionally, that ~ isn't going to be defined, so you'll need to use a full pathname; furthermore, the script will run as root. In any of these, you'll need to replace "whoeveryouare" with something more useful. Also be warned that you may need to prefix the python command with a su command and some arguments to get the process to run under the user id you need.
You might try (in /etc/rc.local):
( if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
while : ; do
# This loop should respawn watchfolder18.py if it dies, but
# ideally one should fix watchfolder18.py and remove this loop.
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
else
echo warning: could not find watchfolder 1>&2
fi
) &
You could also put all that in a script and just call it from /etc/rc.local.
The first pass is roughly what you had, but if we assume that watchfolder18.py will arrange to avoid dying we can cut it down to:
( cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' \
&& exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/ ) &
These aren't all that pretty, but it should let you get your daemon sorted out so you can debug it and so on, then come back to making a proper /etc/init.d or /etc/init script later. Something like this might work in /etc/init/watchfolder.conf, but I'm not yet facile enough to claim this is anything other than a rough stab at it:
# watchfolder - spawner for watchfolder18.py
description "watchfolder program"
start on runlevel [2345]
stop on runlevel [!2345]
script
if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
fi
end script
I found that the best solution in the end was to use 'upstart' and create a file in /etc/init called myfile.conf that contained the following:
description "watch folder service"
author "Jonathan Topf"
start on startup
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
script
HOST=`hostname`
chdir /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
exec /usr/bin/python ./watchfolder.py -t ./appleseed.cli -u $HOST ../../data/ >> /home/ubuntu/bin/ec2_server.log 2>&1
echo "watch_folder started"
end script
More info on using the upstart system here
http://upstart.ubuntu.com/
https://help.ubuntu.com/community/UbuntuBootupHowto
http://blog.joshsoftware.com/2012/02/14/upstart-scripts-in-ubuntu/