Modules appear to not be installed when running python through ssh - python

I am connecting to a remote server through
ssh user@server.com
and run
python script.py
in the appropriate directory. However, I get the error
ImportError: No module named numpy
even though I know the module is installed and the script runs with no problems when I am physically logged in to that server.
None of the answers I was able to find worked (for example this, and this). Do you have any ideas as to how I can run the script using ssh?
The remote server has Python 2.6.6 installed, and
which python
returns
/usr/bin/python
The remote server runs CentOS.

See a similar problem described here: Why does an SSH remote command get fewer environment variables then when run manually?
Compare your environment variables in the local (physical) mode to the remote mode by running env in both cases. Move the missing variables from your local profile to /etc/profile. Then log out from the ssh session and connect again.
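If you want to see exactly what differs from Python's point of view, a small diagnostic script like the one below can help; run it once from an interactive login and once via ssh user@server.com python check_env.py and compare the output (check_env.py is just a made-up name for this sketch):
# check_env.py -- compare interpreter and module search path between sessions
import sys
print(sys.executable)      # which interpreter actually runs
print(sys.version)
for p in sys.path:         # the directories searched for modules such as numpy
    print(p)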
Another approach: if you don't want to change anything, then after ssh switch to your user via su - <your user>. This may look weird because you are already logged in as this user. The difference is that after su all your environment variables will be set as in a local (physical) session. Advantage: it is quick. Disadvantage: you will have to do it each time you want to run your Python script, so the first approach of configuring /etc/profile may be better in the long run.

Related

Using the ssh agent inside a python script

I'm pulling and pushing to a github repository with a python script. For the github repository, I need to use a ssh key.
If I do this manually before running the script:
eval $(ssh-agent -s)
ssh-add ~/.ssh/myprivkey
everything works fine and the script works. But after a while the key apparently expires, and I have to run those two commands again.
The thing is, if I do that inside the Python script with os.system(cmd), it doesn't work; it only works if I do it manually.
I know this must be a messy way to use the ssh agent, but I honestly don't know how it works, and I just want the script to work, that's all
The script runs once an hour, just in case
While the normal approach would be to run your Python script in a shell where the ssh agent is already running, you can also consider an alternative approach with sshify.py
# This utility will execute the given command (by default, your shell)
# in a subshell, with an ssh-agent process running and your
# private key added to it. When the subshell exits, the ssh-agent
# process is killed.
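If you would rather keep everything inside the Python script, note why os.system(cmd) does not help: each os.system call runs in its own subshell, so the variables exported by eval $(ssh-agent -s) never reach the Python process or the commands it runs later. A minimal in-process sketch (using the key path from the question; treat it as an illustration, not a hardened solution):
import os
import re
import subprocess

# Start the agent and capture the shell snippet it prints
# (SSH_AUTH_SOCK=...; SSH_AGENT_PID=...; ...).
output = subprocess.check_output(['ssh-agent', '-s']).decode()
for name, value in re.findall(r'(SSH_AUTH_SOCK|SSH_AGENT_PID)=([^;]+);', output):
    os.environ[name] = value   # make the agent visible to this process and its children

# Add the key; git/ssh started from this script will now inherit SSH_AUTH_SOCK
# and use the agent (ssh-add will prompt if the key has a passphrase).
subprocess.check_call(['ssh-add', os.path.expanduser('~/.ssh/myprivkey')])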
Consider defining the ssh key path against a host of github.com in your ssh config file as outlined here: https://stackoverflow.com/a/65791491/14648336
On Linux, create a file called config in ~/.ssh/ and put in something similar to the linked answer:
Host github.com
HostName github.com
User your_user_name
IdentityFile ~/.ssh/your_ssh_priv_key_file_name
This would remove the need to start an agent each time, and also avoid the need for custom environment variables if you are using GitPython (you mention using Python), as referenced in some other SO answers.
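If you do drive git from Python with GitPython, a sketch of one option is to pass the key-specific ssh command only for the calls that need it; GIT_SSH_COMMAND is a standard git environment variable, and the repository path below is only a placeholder:
import os
from git import Repo   # GitPython

repo = Repo('/path/to/your/clone')   # placeholder path
ssh_cmd = 'ssh -i ' + os.path.expanduser('~/.ssh/your_ssh_priv_key_file_name')
# Use the key-specific ssh command only for these operations.
with repo.git.custom_environment(GIT_SSH_COMMAND=ssh_cmd):
    repo.remotes.origin.pull()
    repo.remotes.origin.push()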

PyCharm always "uploading PyCharm helpers" to the same remote Python interpreter when it starts

When I start PyCharm with a remote Python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine IP is the same and it already contains the previously uploaded helpers. Is this behaviour correct?
This is a well-known problem that can be a major obstacle to productivity, especially if you use disposable instances in your workflow. It leads to a forced coffee break of 20 minutes every time you want to connect to a remote system. Unacceptable.
Seems like PyCharm creates a build.txt file in the remote helper folder that just has the current PyCharm build number as its contents, for instance:
PY-171.4694.38
So it's possible to upload the helpers manually by using rsync on /Applications/PyCharm.app/Contents/helpers/ and finally manually creating a build.txt file with your current build number. After that, PyCharm should not attempt to re-upload them.
Example:
$ rsync -avz /Applications/PyCharm.app/Contents/helpers/ cluster:/home/xapple/.pycharm_helpers/
$ echo "PY-171.4694.38" > /home/xapple/.pycharm_helpers/build.txt
$ python /home/xapple/.pycharm_helpers/pydev/setup_cython.py build_ext --inplace
In my case, several projects are deployed to the remote server by PyCharm. All of them get stuck when one of the projects goes wrong on the remote server. Solution: keep only the one you need to work on and restart PyCharm via "Invalidate Caches".
Note that -- at least as late as version 2018.3.x -- PyCharm also appears to require re-uploading of the helpers when the local network connection changes, for some reason.
What I've observed in my case is that if, while PyCharm remains running, I relocate my laptop and connect to a different LAN, the next remote debugging session I initiate will trigger the lengthy helper upload. It turns out that the contents of the helpers directory actually uploaded in this case are exactly identical to the contents already present in that directory on the remote system (I compared them), so this upload is entirely superfluous, but PyCharm isn't able to detect this.
As there's no way I know of in PyCharm to bypass or cancel the automatic helpers upload, the only recourse is to completely exit PyCharm (close all open project windows) after each change of network connection and restart the IDE. In my experience, this lets the helper check succeed during the "checking remote helpers" phase without actually uploading all the helpers again. Of course, this is a major annoyance if you have multiple projects open, but it's faster than waiting the (tens of) minutes for the agonizingly slow helpers upload to complete.
All of what other responders describe for the course of action to take when changing PyCharm versions is true. It is sufficient to use rsync, ftp, scp, or whatever to transfer the contents of the new local helpers directory (on Linux, a subdirectory of where the app is installed) to the remote system (on Linux, ~/.pycharm_helpers, where ~ is the home directory of the user name used for the remote debugging session), and update the remote build.txt in the helpers directory with the new PyCharm version.
This problem came back again 6 years later with PyCharm 2022.3.2.
The directory /Applications/PyCharm.app/Contents/helpers/ doesn't exist anymore, so the previous trick doesn't work.
What solved it this time is simply to:
Quit PyCharm.
Delete the ~/.pycharm_helpers directory on the remote server.
Relaunch PyCharm and let it do its thing.
According to the docs,
PyCharm checks remote helpers version on every remote run, so if you update your PyCharm version, the new helpers will be uploaded automatically and you don't need to recreate remote interpreter.
A fast solution (less than 3 seconds between me and DigitalOcean), inspired by xApple's excellent answer:
on remote server:
export SOURCE=<your ip>
export PORT=9000
export HELPERS=$HOME/.pycharm_helpers
# PyCharm Help -> About
export BUILD=PY-172.4343.24 # 2017/10/11
cd $HELPERS
rm -fr *
# my OS - ubuntu, change firewall rules to yours if you're not so lucky
sudo ufw allow from $SOURCE proto tcp to any port $PORT
netcat -l -v -p $PORT | tar xz # here you wait for the connection
# after finish
sudo ufw delete allow from $SOURCE proto tcp to any port $PORT
echo -n $BUILD > build.txt
python $HELPERS/pydev/setup_cython.py build_ext --inplace
on your workstation:
export TARGET=<remote server ip>
export PORT=9000
export HELPERS=<path to helpers> # for me it's $HOME/opt/pycharm-2016.3/helpers
cd $HELPERS
tar cfz - . | netcat -v $TARGET $PORT
Turning off the firewall addressed the problem in my case (macOS - Mojave). Note that this is not a general solution as it was not tested in any other environments/OS.

How to remote debug in PyCharm

The issue I'm facing right now:
I deploy Python code on a remote host via SSH
the scripts are passed some arguments and must be run by a specific user
the PyCharm run/debug configuration that I create connects through SSH via a different user (can't connect with the user that actually runs the scripts)
I want to remote debug this code via PyCharm... I managed to do all the configuration; I just get permission errors.
Are there any ways on how I can run/debug the scripts as a specific user (like sudo su - user)?
I've read about specifying some Python Interpreter options in PyCharm's remote/debug configuration, but didn't manage to get a working solution.
If you want an easy and more flexible way to get into the PyCharm debugger, rather than necessarily having a one-click "play" button in PyCharm, you can use the debug server functionality. I've used this in situations where running some Python code isn't as simple as running python ....
See the Remote debug with a Python Debug Server docs for more details, but here's a rough summary of how it works:
Upload & install the remote debugging helper egg on your server (on OS X, these are found under /Applications/PyCharm.app/Contents/debug-eggs)
Setup remote debug server run configuration: click on the drop-down run configuration menu, select Edit configurations..., hit the + button, choose Python remote debug.
The details entered here (somewhat confusingly) tell the remote server running the Python script how to connect to your laptop's PyCharm instance.
set Local host name to your laptop's IP address
set port to any free port that you can use on your laptop (e.g. 8888)
Now follow the remaining instructions in that dialog box: copy-paste the import and pydevd.settrace(...) statements into your code, specifically where you want your code to "hit a breakpoint". This is basically the PyCharm equivalent of import pdb; pdb.set_trace(). Make sure the changed code is sync'ed to your server.
Hit the bug button (next to play; this starts the PyCharm debug server), and run your Python script just like you'd normally do, under whatever user, environment etc. When the breakpoint is hit, PyCharm should drop into debug mode.
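For reference, the snippet PyCharm asks you to paste looks roughly like the following; the address and port must match what you entered in the run configuration, and 192.168.1.10 / 8888 here are only placeholders:
import pydevd
# Connects back to the PyCharm debug server on your laptop and suspends
# execution at this point, like a breakpoint.
pydevd.settrace('192.168.1.10', port=8888, stdoutToServer=True, stderrToServer=True)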
I have this (finally) working with ssh RemoteForward open, like so:
ssh -R 5678:localhost:5678 user@<remotehost>
Then start the script in this ssh session. The Python script must connect to localhost:5678, and of course your local PyCharm debugger must be listening on 5678 (or whatever port you choose).

How to Use Python to execute initial commands over the terminal and then continue use

I have looked into using pxssh, subprocess, and paramiko, but have found no success. What I am ultimately trying to do is not only use SSH from a Python script to access a server and execute commands and finish there, but also have it open an instance of the terminal after executing all the commands, for continued use.
Currently the server has modules that clients have to manually activate using commands after they have established an SSH connection.
For example:
module python
This command would give the user access to python.
Following this the user would then be able to use python and all its commands through the ssh connection in the terminal.
The issue I have with the methods listed earlier for executing these commands is that they do not give me an instance of the terminal. They successfully execute the commands, but since these commands have to be executed every time a new SSH connection is established, that's worthless unless I can essentially get hold of the terminal session in which the Python script executed the commands and loaded all the modules.
Does any one have a solution to this? I've scoured the web for hours to no success.
This is a very difficult issue to explain so if anything is unclear please ask me and I will try my best to rephrase things. I am very new to all this.
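One direction that may be worth trying, sketched here under the assumption that pexpect's pxssh (which you mention) is acceptable: run the setup commands non-interactively, then hand the live session over to the user with interact():
from pexpect import pxssh

s = pxssh.pxssh()
s.login('server.example.com', 'username', 'password')   # placeholder credentials
s.sendline('module python')   # the initial setup command from the question
s.prompt()                    # wait for the command to finish and the prompt to return
# Hand control of this ssh session to the user for continued interactive use.
s.interact()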

Is it possible to use remote vagrant based python interpreter when coding Visual Studio + PTVS

In our company we are using Vagrant VMs so that everyone has the same environment. Is it possible to configure Visual Studio + PTVS (Python Tools for Visual Studio) to use a Vagrant-based Python interpreter, via ssh for example?
There's no special support for remote interpreters in PTVS, like what PyCharm has. It's probably possible to hack something based on the existing constraints, but it would be some work...
To register an interpreter that can actually run, it would have to have a local (well, CreateProcess'able - so e.g. SMB shares are okay) binary that accepts the same command line options as python.exe. It might be possible to use ssh directly by adding the corresponding command line options to project settings. Otherwise, a proxy binary that just turns around and invokes the remote process would definitely work.
Running under debugger is much trickier. For that to work, the invoked Python binary would also have to be able to load the PTVS debugging bits (which is a bunch of .py files in PTVS install directory), and to connect to VS over TCP to establish a debugger connection. I don't see how this could be done without writing significant amounts of code to correctly proxy everything.
Attaching to a remotely running process using ptvsd, on the other hand, would be trivial.
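For that attach route, the script running in the VM only needs a couple of lines of ptvsd; the exact signature varies by ptvsd version (older releases bundled with PTVS take a secret argument instead of an address tuple), so treat this as a sketch:
import ptvsd
# Listen for a Visual Studio debugger to attach over TCP on port 5678.
ptvsd.enable_attach(address=('0.0.0.0', 5678))
ptvsd.wait_for_attach()   # optionally block until the debugger attaches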
For code editing experience, you'd need a local copy (or a share etc) of the standard library for that interpreter, so that it can be analyzed by the type inference engine.
