Python SSH script to deal with a secondary shell

I want to write a python script to ssh into a server and print some output. But I have some problem here when coding, here is what I basically want to achieve:
[zz@bts01 ~]$ cd /opt/cdma-msc/
[zz@bts01 cdma-msc]$ ./sccli
SoftCore for CDMA CLI (c) Quortus 2010
RAN> show system
System Configuration
Software version: V1.31
System name: RAN
System location:
Shutdown code:
Emergency call dest:
Current date/time: Tue Feb 27 14:27:41 2018
System uptime: 20h 33m
Auto-provisioning: Enabled
RAN> exit
Bye.
[zz@bts01 cdma-msc]$
As shown above, I only want the show system output.
The problem is that I am using the Python paramiko SSH package, and it does not seem to recognize the second shell after I execute the ./sccli command.
What can I do to allow the Python SSH script to interact with the second shell (the 'RAN>' prompt above)?
Thanks!!

There is a Python library built for running SSH commands from Python. It is pretty powerful and robust: you can run commands across many machines in parallel, it has error handling, and it can execute as root, etc.
Check it out: https://github.com/dwighthubbard/sshmap
Learning how to use this library is very valuable if you have to run SSH commands from Python frequently.
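For the specific problem in the question, driving the nested RAN> CLI, paramiko's invoke_shell() gives you an interactive channel you can write commands to and read output from. Below is a minimal sketch, assuming the hostname, credentials, and prompt strings from the transcript above; adjust them to your environment.
import time
import paramiko

def read_until(channel, marker, timeout=10):
    # Collect output until the expected prompt appears or the timeout expires.
    buf = ''
    deadline = time.time() + timeout
    while marker not in buf and time.time() < deadline:
        if channel.recv_ready():
            buf += channel.recv(4096).decode('utf-8', errors='replace')
        else:
            time.sleep(0.1)
    return buf

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('bts01', username='zz', password='...')  # assumed host and credentials

shell = client.invoke_shell()
read_until(shell, '$')                      # wait for the login shell prompt
shell.send('cd /opt/cdma-msc && ./sccli\n')
read_until(shell, 'RAN>')                   # wait for the secondary CLI prompt
shell.send('show system\n')
print(read_until(shell, 'RAN>'))            # everything up to the next RAN> prompt
shell.send('exit\n')
client.close()
The key point is that invoke_shell() keeps one PTY open, so ./sccli and the commands typed at its RAN> prompt all run in the same session, unlike separate exec_command() calls.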

Related

Mac ssh to Windows PowerShell: trying to get it to open an app on the PC. Nothing works, including commands that work when entered directly on the PC

I'm trying to have a Python script open an app and then continue on. I have the following setup:
Windows PowerShell (PS) running on PC.
Mac Terminal running bash. ssh... into the PC.
In both windows I get the same "Windows PowerShell, Copyright (C) Microsoft..." etc. info and the same PS C:\... prompt. So it seems I'm in the right place.
I've tried various things. In all cases, anything I do directly on the PC that successfully opens the app gives the same feedback when done at the same prompt in the ssh session on the Mac, but doesn't actually open the app.
Things I've tried:
py .\open_mt5.py at the PS prompt, where that script is:
import subprocess
result = subprocess.Popen(['C:\\Program Files\\MetaTrader 5\\terminal64.exe'])
print(result)
py to start python REPL, then enter the above lines one by one. Notably, in both setups, the python REPL prompt becomes available immediately (as expected) and the output is the same, but again, the app doesn't open on the PC.
Just execute the app from the PS prompt: C:\'Program Files'\'MetaTrader 5'\terminal64.exe
...and a few other things.
I think this should work, but it doesn't. What am I missing?
Or, speaking to the end goal specifically: how can I have my python script, run via ssh, open that app on the PC (and of course not have that block the script from continuing)?
Thanks!
Couple of other details if it helps:
It seems the hiccup is somewhere in the ssh, since that's the difference in all cases. But it also seems to be a Windows thing, because if I ssh into another Mac and enter commands to open Mac apps, they open on the remote Mac (as expected).
Everything else I've ever tried via ssh from the Mac (to a Mac or a PC) works the same as it does typing directly into the same prompt on the remote machine, including running an entire python based REST API server on the PC. So I'm pretty sure everything is configured more or less right.
The app in question needs to be running for my REST API to work, as it includes a service that provides some of the functionality. My end goal is to have the Python script open the app at the start of everything it's doing to set up and run the REST API server, and do so without blocking the rest of the script, of course.
New to python. Self-teaching. Desire to learn it (basics at least). New to Windows command line (cmd, PowerShell). Self-teaching. No desire to learn any more than required for specific project. (Mac/unix user normally). Hoping people can reply at that level. 😊
Mac: macOS Monterey 12.4. PC: Windows 10 Home 21H2, python 3.10.5, and the default Win 10 OpenSSH Server.
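One thing worth trying, as a sketch rather than a verified fix for this exact OpenSSH setup, is detaching the child process from the SSH session so that Popen returns immediately and the process can outlive the session; note that whether the window appears on the PC's visible desktop still depends on how Windows isolates the SSH login session.
import subprocess

# DETACHED_PROCESS + CREATE_NEW_PROCESS_GROUP (Windows-only flags) let the child
# run independently of the ssh session; Popen returns immediately, so the rest
# of the script keeps going.
flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
subprocess.Popen(
    ['C:\\Program Files\\MetaTrader 5\\terminal64.exe'],
    creationflags=flags,
    close_fds=True,
)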

Bash script to run python script in python environment runs on ubuntu command line but not in crontab [duplicate]

I have set up a cronjob for the root user in an Ubuntu environment as follows, by typing crontab -e:
34 11 * * * sh /srv/www/live/CronJobs/daily.sh
0 08 * * 2 sh /srv/www/live/CronJobs/weekly.sh
0 08 1 * * sh /srv/www/live/CronJobs/monthly.sh
But the cronjob does not run. I have tried checking if the cronjob is running using pgrep cron and that gives process id 3033. The shell script calls a python file and is used to send an email. Running the python file is ok. There's no error in it but the cron doesn't run. The daily.sh file has the following code in it.
python /srv/www/live/CronJobs/daily.py
python /srv/www/live/CronJobs/notification_email.py
python /srv/www/live/CronJobs/log_kpi.py
WTF?! My cronjob doesn't run?!
Here's a checklist guide to debug not running cronjobs:
Is the Cron daemon running?
Run ps ax | grep cron and look for cron.
Debian: service cron start or service cron restart
Is cron working?
* * * * * /bin/echo "cron works" >> /tmp/file
Syntax correct? See below.
You obviously need to have write access to the file you are redirecting the output to. A unique file name in /tmp which does not currently exist should always be writable.
Probably also add 2>&1 to include standard error as well as standard output, or separately output standard error to another file with 2>>/tmp/errors
Is the command working standalone?
Check if the script has an error, by doing a dry run on the CLI
When testing your command, test as the user whose crontab you are editing, which might not be your login or root
Can cron run your job?
Check /var/log/cron.log or /var/log/messages for errors.
Ubuntu: grep CRON /var/log/syslog
Redhat: /var/log/cron
Check permissions
Set executable flag on the command: chmod +x /var/www/app/cron/do-stuff.php
If you redirect the output of your command to a file, verify you have permission to write to that file/directory
Check paths
Check the shebang / hashbang line
Do not rely on environment variables like PATH, as their value will likely not be the same under cron as under an interactive session. See How to get CRON to call in the correct PATHs
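If you want to see exactly what environment cron gives your job, a small sketch like the one below (the output path is an assumption) can be scheduled for a minute and then compared against the output of env in your interactive shell.
import os

# Dump the environment the job actually receives from cron, one VAR=value per line.
with open('/tmp/cron_env.txt', 'w') as f:
    for key, value in sorted(os.environ.items()):
        f.write('{}={}\n'.format(key, value))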
Don't suppress output while debugging
Commonly used is this suppression: 30 1 * * * command > /dev/null 2>&1
Re-enable the standard output or standard error message output by removing >/dev/null 2>&1 altogether; or perhaps redirect to a file in a location where you have write access: >>cron.out 2>&1 will append standard output and standard error to cron.out in the invoking user's home directory.
If you don't redirect output from a cron job, the daemon will try to send you any output or error messages by email. Check your inbox (maybe simply more $MAIL if you don't have a mail client). If mail is not available, maybe check for a file named dead.letter in your home directory, or system log entries saying that the output was discarded. Especially in the latter case, probably edit the job to add redirection to a file, then wait for the job to run, and examine the log file for error messages or other useful feedback.
If you are trying to figure out why something failed, the error messages will be visible in this file. Read it and understand it.
Still not working? Yikes!
Raise the cron debug level
Debian
in /etc/default/cron
set EXTRA_OPTS="-L 2"
service cron restart
tail -f /var/log/syslog to see the scripts executed
Ubuntu
in /etc/rsyslog.d/50-default.conf
add or uncomment the line cron.* /var/log/cron.log
reload logger sudo /etc/init.d/rsyslog restart
re-run cron
open /var/log/cron.log and look for detailed error output
Reminder: deactivate log level, when you are done with debugging
Run cron and check log files again
Cronjob Syntax
# Minute Hour Day of Month Month Day of Week User Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0 2 * * * root /usr/bin/find
This syntax, with a User field, is only valid in the system crontab (/etc/crontab and files in /etc/cron.d/), which is edited as root. A regular user's crontab doesn't have the User field (regular users aren't allowed to run code as any other user):
# Minute Hour Day of Month Month Day of Week Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0 2 * * * /usr/bin/find
Crontab Commands
crontab -l
Lists all the user's cron tasks.
crontab -e, for a specific user: crontab -e -u agentsmith
Starts edit session of your crontab file.
When you exit the editor, the modified crontab is installed automatically.
crontab -r
Removes your crontab file from the cron spool.
Another reason crontab will fail: Special handling of the % character.
From the manual page:
The entire command portion of the line, up to a newline or a
"%" character, will be executed by /bin/sh or by the shell specified
in the SHELL variable of the cronfile. A "%" character in the
command, unless escaped with a backslash (\), will be changed into
newline characters, and all data after the first % will be sent to
the command as standard input.
In my particular case, I was using date --date="7 days ago" "+%Y-%m-%d" to produce parameters to my script, and it was failing silently. I finally found out what was going on when I checked syslog and saw my command was truncated at the % symbol. You need to escape it like this:
date --date="7 days ago" "+\%Y-\%m-\%d"
See here for more details:
http://www.ducea.com/2008/11/12/using-the-character-in-crontab-entries/
Finally, I found the solution:
Never use relative paths in Python scripts that are executed via crontab.
I did something like this instead:
import os
import sys
import time, datetime

# Absolute paths, so the imports work no matter which directory cron starts in.
CLASS_PATH = '/srv/www/live/mainapp/classes'
SETTINGS_PATH = '/srv/www/live/foodtrade'
sys.path.insert(0, CLASS_PATH)
sys.path.insert(1, SETTINGS_PATH)
import other_py_files
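A related sketch: instead of hard-coding every directory, you can resolve paths relative to the script file itself, so the job behaves the same regardless of cron's working directory (the directory layout below is hypothetical).
import os
import sys

# Everything is located relative to this file, not to the current working directory.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(BASE_DIR, 'classes'))    # hypothetical package dir
DATA_FILE = os.path.join(BASE_DIR, 'data', 'input.csv')  # hypothetical data file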
Never suppress the crontab output; instead, use a mail server and check the mail for the user. That gives clearer insight into what is going on.
I want to add 2 points that I learned:
Cron config files put in /etc/cron.d/ should not contain a dot (.) in their name. Otherwise, they won't be read by cron.
If the user running your command is not in /etc/shadow, it won't be allowed to schedule cron jobs.
Refs:
http://manpages.ubuntu.com/manpages/xenial/en/man8/cron.8.html
https://help.ubuntu.com/community/CronHowto
To add another point, a file in /etc/cron.d must end with a newline (an empty line at the end). This is likely related to the response by Luciano, which specifies that:
The entire command portion of the line, up to a newline or a "%"
character, will be executed
I found useful debugging information on an Ubuntu 16.04 server by running:
systemctl status cron.service
In my case I was kindly informed I had left a comment '#' off of a remark line:
Aug 18 19:12:01 is-feb19 cron[14307]: Error: bad minute; while reading /etc/crontab
Aug 18 19:12:01 is-feb19 cron[14307]: (*system*) ERROR (Syntax error, this crontab file will be ignored)
It might also be a timezone problem.
Cron uses the local time.
Run the command timedatectl to see the machine time and make sure that your crontab is in this same timezone.
https://askubuntu.com/a/536489/1043751
I had a similar problem to the link below.
similar to my problem
my original post
My Issue
My issue was that cron / crontab wouldn't execute my bash script; that bash script executed a Python script.
original bash file
#!/bin/bash
python /home/frosty/code/test_scripts/test.py
python file (test.py)
from datetime import datetime

def main():
    dt_now = datetime.now()
    string_now = dt_now.strftime('%Y-%m-%d %H:%M:%S.%f')
    with open('./text_file.txt', 'a') as f:
        f.write(f'wrote at {string_now}\n')
    return None

if __name__ == '__main__':
    main()
the error I was getting
File "/home/frosty/code/test_scripts/test.py", line 7
string_to_write = f'wrote at {string_now}\n'
^
SyntaxError: invalid syntax
This error didn't make sense, because the code executed without error when I ran the bash file or the Python file directly.
Note: ensure that in the crontab -e file you don't suppress the output. I sent the output to a file by adding >>/path/to/cron/output/file.log 2>&1 after the command. Below is my crontab -e entry:
*/5 * * * * /home/frosty/code/test_scripts/echo_message_sh >>/home/frosty/code/test_scripts/cron_out.log 2>&1
The issue
Cron was using the wrong Python interpreter, probably Python 2, judging from the syntax error.
How I solved the problem
I changed my bash file to the following
#!/bin/bash
conda_shell=/home/frosty/anaconda3/etc/profile.d/conda.sh
conda_env=base
source ${conda_shell}
conda activate ${conda_env}
python /home/frosty/code/test_scripts/test.py
And I changed my python file to the following
from datetime import datetime

def main():
    dt_now = datetime.now()
    string_now = dt_now.strftime('%Y-%m-%d %H:%M:%S.%f')
    string_file = '/home/frosty/code/test_scripts/text_file.txt'
    string_to_write = 'wrote at {}\n'.format(string_now)
    with open(string_file, 'a') as f:
        f.write(string_to_write)
    return None

if __name__ == '__main__':
    main()
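To confirm which interpreter cron is actually using, a quick sketch like the following (the log path is an assumption) can be dropped at the top of the cron-run script:
import sys

# Append the interpreter path and version each time cron runs the script.
with open('/tmp/cron_python_check.log', 'a') as f:
    f.write('{} {}\n'.format(sys.executable, sys.version.replace('\n', ' ')))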
No MTA installed, discarding output
I had a similar problem with a PHP file executed as a CRON job.
When I manually execute the file it works, but not from the crontab.
I got the output message: "No MTA installed, discarding output"
Postfix is the default Mail Transfer Agent (MTA) in Ubuntu and can be installed using
sudo apt-get install postfix
But this same message can also be output when you add a log file as below and the cron daemon does not have proper write permission to /path/to/logfile.log:
/path/to/php -f /path/to/script.php >> /path/to/logfile.log
The permission issue can occur if you create the cron log file manually with a command like touch while logged in as a different user, and then add cron jobs to the crontab of another user (or group) like www-data using sudo crontab -u www-data -e. The cron daemon then tries to write to the log file and fails, then tries to send the output as an email using Ubuntu's MTA, and when no MTA is found, outputs "No MTA installed, discarding output".
To prevent this:
Create the file with proper permissions, or
Avoid creating the cron log file manually; add the redirection in the crontab and let the log file be created automatically when the cron job runs.
I've found another reason for user's crontab not running: the hostname is not present on the hosts file:
user@ubuntu:~$ cat /etc/hostname
ubuntu
Now the hosts file:
user@ubuntu:~$ cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
This is on a Ubuntu 14.04.3 LTS, the way to fix it is adding the hostname to the hosts file so it resembles something like this:
user@ubuntu:~$ cat /etc/hosts
127.0.0.1 ubuntu localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
For me, the solution was that the file cron was trying to run was in an encrypted directory, more specifically a user directory under /home/. Although the crontab was configured as root, because the script being run existed in an encrypted user directory in /home/, cron could only read this directory when the user was actually logged in. To see if the directory is encrypted, check if this directory exists:
/home/.ecryptfs/<yourusername>
If so, then you have an encrypted home directory.
The fix for me was to move the script into a non-encrypted directory, and everything worked fine.
As this is becoming a canonical for troubleshooting cron issues, allow me to add one specific but rather complex issue: If you are attempting to run a GUI program from cron, you are probably Doing It Wrong.
A common symptom is receiving error messages about DISPLAY being unset, or the cron job's process being unable to access the display.
In brief, this means that the program you are trying to run is attempting to render something on an X11 (or Wayland etc) display, and failing, because cron is not attached to a graphical environment, or in fact any kind of input/output facility at all, beyond being able to read and write files, and send email if the system is configured to allow that.
For the purposes of "I'm unable to run my graphical cron job", let's just point out in broad strokes three common scenarios for this problem.
Probably identify the case you are trying to implement, and search for related questions about that particular scenario to learn more, and find actual solutions with actual code.
If you are trying to develop an interactive program which communicates with a user, you want to rethink your approach. A common, but nontrivial, arrangement is to split the program in two: A back-end service which can run from cron, but which does not have any user-visible interactive facilities, and a front-end client which the user runs from their GUI when they want to communicate with the back-end service.
Probably your user client should simply be added to the user(s)' GUI startup script if it needs to be, or they want to, run automatically when they log in.
I suppose the back-end service could be started from cron, but if it requires a GUI to be useful, maybe start it from the X11 server's startup scripts instead; and if not, probably run it from a regular startup script (systemd these days, or /etc/rc.local or a similar system startup directory more traditionally).1
If you are trying to run a GUI program without interacting with a real user 2, you may be able to set up a "headless" X11 server 3 and run a cron job which starts up that server, runs your job, and quits.
Probably your job should simply run a suitable X11 server from cron (separate from any interactive X11 server which manages the actual physical display(s) and attached graphics card(s) and keyboard(s) available to the system), and pass it a configuration which runs the client(s) you want to run once it's up and running. (See also the next point for some practical considerations.)
You are running a computer for the sole purpose of displaying a specific application in a GUI, and you want to start that application when the computer is booted.
Probably your startup scripts should simply run the GUI (X11 or whatever) and hook into its startup script to also run the client program once the GUI is up and running. In other words, you don't need cron here; just configure the startup scripts to run the desktop GUI, and configure the desktop GUI to run your application as part of the (presumably automatic, guest?) login sequence.4
There are ways to run X11 programs on the system's primary display (DISPLAY=:0.0) but doing that from a cron job is often problematic, as that display is usually reserved for actual interactive use by the first user who logs in and starts a graphical desktop. On a single-user system, you might be able to live with the side effects if that user is also you, but this tends to have inconvenient consequences and scale very poorly.
An additional complication is deciding which user to run the cron job as. A shared system resource like a back-end service can and probably should be run by root (though ideally have a dedicated system account which it switches into once it has acquired access to any privileged resources it needs) but anything involving a GUI should definitely not be run as root at any point.
A related, but distinct problem is to interact in any meaningful way with the user. If you can identify the user's active session (to the extent that this is even well-defined in the first place), how do you grab their attention without interfering with whatever else they are in the middle of? But more fundamentally, how do you even find them? If they are not logged in at all, what do you do then? If they are, how do you determine that they are active and available? If they are logged in more than once, which terminal are they using, and is it safe to interrupt that session? Similarly, if they are logged in to the GUI, they might miss a window you spring up on the local console, if they are actually logged in remotely via VNC or a remote X11 server.
As a further aside: On dedicated servers (web hosting services, supercomputing clusters, etc) you might even be breaking the terms of service of the hosting company or institution if you install an interactive graphical desktop you can connect to from the outside world, or even at all.
1
The #reboot hook in cron is a convenience for regular users who don't have any other facility for running something when the system comes up, but it's just inconvenient and obscure to hide something there if you are root anyway and have complete control over the system. Use the system facilities to launch system services.
2
A common use case is running a web browser which needs to run a full GUI client, but which is being controlled programmatically and which doesn't really need to display anything anywhere, for example to scrape sites which use Javascript and thus require a full graphical browser to render the information you want to extract.
Another is poorly designed scientific or office software which was not written for batch use, and thus requires a GUI even when you just want to run a batch job and then immediately quit without any actual need to display anything anywhere.
(In the latter case, probably review the documentation to check if there isn't a --batch or --noninteractive or --headless or --script or --eval option or similar to run the tool without the GUI, or perhaps a separate utility for noninteractive use.)
3
Xvfb is the de facto standard solution; it runs a "virtual framebuffer" where the computer can spit out pixels as if to a display, but which isn't actually connected to any display hardware.
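As a practical illustration, a cron job could invoke a small Python wrapper like the sketch below (the script path is hypothetical, and xvfb-run must be installed) rather than managing the virtual server by hand.
import subprocess

# xvfb-run -a allocates a free virtual display, runs the command against it,
# and tears the Xvfb server down again when the command exits.
subprocess.check_call(['xvfb-run', '-a', 'python', '/path/to/gui_batch_job.py'])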
4
There are several options here.
The absolutely simplest is to set up the system to automatically log in a specific user at startup without a password prompt, and configure that user's desktop environment (Gnome or KDE or XFCE or what have you) to run your script from its "Startup Items" or "Login Actions" or "Autostart" or whatever the facility might be called. If you need more control over the environment, maybe run bare X11 without a desktop environment or window manager at all, and just run your script instead. Or in some cases, maybe replace the X11 login manager ("greeter") with something custom built.
The X11 stack is quite modular, and there are several hooks in various layers where you could run a script either as part of a standard startup process, or one which completely replaces a standard layer. These things tend to differ somewhat between distros and implementations, and over time, so this answer is necessarily vague and incomplete around these matters. Again, probably try to find an existing question about how to do things for your specific platform (Ubuntu, Raspbian, Gnome, KDE, what?) and scenario. For simple scenarios, perhaps see Ubuntu - run bash script on startup with visible terminal
I experienced the same problem where crons were not running.
We fixed it by changing permissions and ownership:
the cron files were made root-owned, matching the user we had specified in the crontab, and
the cron job files were given 644 permissions.
There is already a lot of answers, but none of them helped me so I'll add mine here in case it's useful for somebody else.
In my situation, my cronjobs were working fine until a power outage cut the power to my Raspberry Pi and cron got corrupted. I think it was running a long Python script exactly when the outage happened. Nothing in the main answer above worked for me. The solution, however, was quite simple. I just had to force reinstallation of cron with:
sudo apt-get --reinstall install cron
It worked right away after this.
Copying my answer for a duplicated question here.
cron may not know where to find the Python interpreter because it doesn't share your user account's environment variables.
There are 3 solutions to this:
If Python is at /usr/bin/python, you can change the cron job to use an absolute path: /usr/bin/python /srv/www/live/CronJobs/daily.py
Alternatively you can also add a PATH value to the crontab with PATH=/usr/bin.
Another solution would be to specify an interpreter in the script file, make it executable, and call the script itself in your crontab:
a. Put shebang at the top of your python file: #!/usr/bin/python.
b. Set it to executable: $ chmod +x /srv/www/live/CronJobs/daily.py
c. Put it in crontab: /srv/www/live/CronJobs/daily.py
Adjust the path to the Python interpreter if it's different on your system.
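As a sketch of option 3, the script itself would start like this (the interpreter path is assumed to be /usr/bin/python, as above):
#!/usr/bin/python
# daily.py - with this shebang and the executable bit set (step b),
# cron can run the file directly as written in step c.
import sys
sys.stdout.write('running under {}\n'.format(sys.executable))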
Reference
CRON uses a different TIMEZONE
A very common issue is that cron's time settings may be different from yours. In particular, the timezone may not be the same:
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
You can run:
* * * * * echo $(date) >> /tmp/test.txt
This should generate a file like:
# cat test.txt
Sun 03 Apr 2022 09:02:01 AM UTC
Sun 03 Apr 2022 09:03:01 AM UTC
Sun 03 Apr 2022 09:04:01 AM UTC
Sun 03 Apr 2022 09:05:01 AM UTC
Sun 03 Apr 2022 09:06:01 AM UTC
If you are using a TZ other than UTC, you can try:
timedatectl set-timezone America/Sao_Paulo
replacing America/Sao_Paulo according to your settings.
I'm not sure if it is actually necessary, but you can run:
sudo systemctl restart cron.service
After that, cron works as I expected:
# cat test.txt
Sun 03 Apr 2022 09:02:01 AM UTC
Sun 03 Apr 2022 09:03:01 AM UTC
Sun 03 Apr 2022 09:04:01 AM UTC
Sun 03 Apr 2022 09:05:01 AM UTC
Sun 03 Apr 2022 09:06:01 AM UTC
Sun 03 Apr 2022 09:07:01 AM UTC
Sun 03 Apr 2022 09:08:01 AM UTC
Sun 03 Apr 2022 09:09:01 AM UTC
Sun 03 Apr 2022 09:10:01 AM UTC
Sun 03 Apr 2022 06:11:01 AM -03
Sun 03 Apr 2022 06:12:01 AM -03
Sun 03 Apr 2022 06:13:01 AM -03
Sun 03 Apr 2022 06:14:01 AM -03
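The same check can be done from inside a Python job; here is a small sketch (the log path is an assumption) that records both local and UTC time on every run:
import time
from datetime import datetime, timezone

# Log what "now" looks like to the cron-run interpreter, locally and in UTC.
with open('/tmp/cron_tz_check.log', 'a') as f:
    f.write('local={} utc={} tzname={}\n'.format(
        datetime.now().isoformat(),
        datetime.now(timezone.utc).isoformat(),
        time.tzname,
    ))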
Try
service cron start
or
systemctl start cron
In my case I was trying to run cron locally.
I checked status:
service cron status
It showed me:
* cron is not running
Then I simply started the service:
service cron start
Sometimes the command that cron needs to run is in a directory where cron has no access, typically on systems where users' home directories' permissions are 700 and the command is in that directory.
Although an answer has been accepted for this question, I would like to add what worked for me:
It's a good idea to quote the URL; if it contains a query string, it may not work without everything being quoted.
Don't forget to put a URL that contains "?", "=", "#", or "%" in quotes.
Example.
https://paystack.com/indexphp?docs/api/#transaction-charge-authorization&date=today
should be in a quote like so
"https://paystack.com/indexphp?docs/api/#transaction-charge-authorization&date=today"

Remote shutdown not working in Python

I'm trying to write up a small script using Python 2.7.7 that would ping an IP address and determine whether that PC is turned on or off, and change the power state of that system accordingly. I'm relying heavily on the Python modules subprocess and wakeonlan. I am not having any issues pinging or using WOL, but the shutdown functionality is behaving in a very strange way.
Using the command shutdown -s -t 0 /m \\XXX.XXX.X.X from the command prompt works fine, as well as the following from the Python interactive shell in cmd:
import subprocess
ip = 'XXX.XXX.X.X' # use for example
subprocess.call('shutdown -s -t 0 /m \\\\%s' % ip)
But running the same command from a Python script returns this error:
XXX.XXX.X.X: The entered computer name is not valid or remote shutdown is not supported on the target computer. Check the name and then try again or contact your system administrator.(53)
Are there any background behaviors that I'm not thinking about? Perhaps something to do with the subprocess module? Thanks in advance!
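One thing worth checking, as a sketch rather than a confirmed fix for error 53, is passing the arguments as a list so there is no extra round of backslash and shell interpretation between the interactive shell and the script:
import subprocess

ip = 'XXX.XXX.X.X'  # placeholder, as in the question
# With a list there is no shell parsing, so the \\host argument reaches
# shutdown.exe exactly as written.
subprocess.call(['shutdown', '-s', '-t', '0', '/m', r'\\%s' % ip])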

Error when opening interactive ssh shell from python

I am trying to open an interactive ssh shell through fabric.
Requirements:
Use fabric's hosts in the connection string to the remote
Open a fully interactive shell in the current terminal
Works on OS X and Ubuntu
No need for data transfer between fabric/Python and the remote, so the fabric task can end in the background.
So far:
fabfile.py:
def test_ssh():
    from subprocess import Popen
    Popen('ssh user@1.2.3.4 -i "bla.pem"', shell=True)
In terminal:
localprompt$ fab test_ssh
localprompt$ tcsetattr: Input/output error
[remote ubuntu welcome here]
remoteprompt$ |
Then if I try to input a command on the remote prompt it is executed locally and I drop back to the local prompt.
Does anyone know a solution?
Note: I am aware of fabric's open_shell, but this does not work for me since the stdout lags behind, rendering it unusable.
A slight modification does the trick:
def test_ssh():
    from subprocess import call
    call('ssh user@1.2.3.4 -i "bla.pem"', shell=True)
As the answer to this question suggests, the error comes from the inability of ssh to connect to the stdin/stdout of a process running in the background.
With call the fabric task does not end in the background, but I am fine with that as long as it is not interfering with my stdin/stdout.

Execute remote python script via SSH

I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command I need to disconnect the ssh session and keep the processes running in the background for as long as they are required.
I have used Paramiko and PySSH in the past, so I have no problems using them again. The only thing I need to know is how to disconnect an ssh session in Python (since normally the local script would wait for each remote machine to complete processing before moving on).
This might work, or something similar:
ssh user@remote.host nohup python scriptname.py &
Basically, have a look at the nohup command.
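Since the question mentions Paramiko, a minimal sketch along those lines (the hostname, user, and script path are assumptions) would start the remote script detached and then drop the connection:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote.host', username='user')  # assumed host and user

# nohup + & + redirected output lets the process survive after the session closes.
client.exec_command('nohup python scriptname.py > /dev/null 2>&1 &')
client.close()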
On Linux machines, you can run the script with 'at'.
echo "python scriptname.py" ¦ at now
If you are going to perform repetitive tasks on many hosts, like for example deploying software and running setup scripts, you should consider using something like Fabric
Fabric is a Python (2.5 or higher) library and command-line tool for
streamlining the use of SSH for application deployment or systems
administration tasks.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
Typical use involves creating a Python module containing one or more
functions, then executing them via the fab command-line tool.
You can even use tmux in this scenario.
As per the tmux documentation:
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more
From a tmux session, you can run a script, quit the terminal, log in again and check back, as tmux keeps the session alive until the server restarts.
How to configure tmux on a cloud server
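For instance, here is a sketch (the session name and script path are assumptions) that starts the remote script in a detached tmux session from the local Python side, so it keeps running after you disconnect:
import subprocess

# "new-session -d" creates the session detached; "-s job" names it so you can
# later reattach on the remote machine with: tmux attach -t job
subprocess.check_call([
    'ssh', 'user@remote.host',
    'tmux new-session -d -s job "python scriptname.py"',
])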
