Get vim on a remote server without system administration permissions - python

I have been given a profile (with a /home directory) on a remote Linux server to work on projects that need powerful computing resources. I'd like to use Vim to edit code (mostly Python) on the remote server, as it runs in a shell and doesn't require a slow GUI exchange. Currently, the Debian distribution on the remote server has a barebones vi installed and no Vim. Is there a way to install Vim (perhaps in my home directory?) without superuser permissions?

You should be able to install Vim locally, for example by downloading a prebuilt binary, or by compiling from source:
git clone https://github.com/vim/vim.git
cd vim/src
make
From there, simply add the directory containing the compiled binary (vim/src in this case) to your PATH.
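If you'd rather have a proper install under your home directory, a common approach is to configure a prefix there before building (the $HOME/.local prefix below is just a convention, not something the answer specifies; any writable directory works):
cd vim
./configure --prefix=$HOME/.local
make && make install
export PATH="$HOME/.local/bin:$PATH"   # add this line to ~/.bashrc to make it permanent
This also installs Vim's runtime files, so syntax highlighting and :help work, and the install survives deleting the source tree.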

Related

Mounting a virtual environment via SSHFS on local machine using its Python3 file not working

So I have mounted a part of a development server which holds a virtual environment that is used for development testing. The reason for this is to get access to the installed packages, such as Django-rest-framework and Django itself, without having to set them up locally (and to be sure I use the same versions as the development server). I know that it's perhaps better to use Docker for this, but that's not the case right now.
The way I've done it is by installing SSHFS via an external brew tap (as it's no longer supported in brew core) - via this link https://github.com/gromgit/homebrew-fuse
After that I've run this command in the terminal to mount, via SSH, the specific part of the development server that holds the virtual environment:
sshfs -o ssh_command='ssh -i /Users/myusername/.ssh/id_rsa' myusername@servername:/home/myusername/projectname/env/bin ~/mnt/projectname
It works fine and I have it mounted on my local disk in ~/mnt/projectname.
Now I go into VSCode, open the folder, and select the file called "python3" as my interpreter (which I should, right?). However, this file is just an alias, only 16 bytes in size. I suspect something is wrong here, but I'm not sure how to fix it. Can someone take a look and give some input? I'll attach a screenshot of the mounted directory.
[Screenshot of the virtualenv directory mounted on the local machine]
The solution to the problem was to use the VSCode extension Remote - SSH and run VSCode directly on the remote machine, which gives it access to the virtual environment there.
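For context, the 16-byte "python3" file is almost certainly a symlink: a symlink's size is the length of its target string, and a target like "/usr/bin/python3" is exactly 16 characters. That target exists on the server, not on the local machine, so the mounted interpreter can't run locally. You can check with (mount path taken from the question):
$ ls -l ~/mnt/projectname/python3
# likely prints something like: python3 -> /usr/bin/python3 (a path that only resolves on the server)
That is why running VSCode on the remote side fixes it: the link resolves there.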

Starting TRAC server with multiple independent projects

I'm running a TRAC server (tracd service) with 3 independent projects configured. Each project has its own password file in order to keep the user management independent. TRAC is started as a Windows service as described on https://trac.edgewall.org/wiki/0.11/TracStandalone
It seems that starting the TRAC server does not work if the string length of the 'AppParameters' value in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\tracd\Parameters is too long. The maximum length seems to be around 260 characters.
The TRAC server can be started successfully using the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth=',C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth=',C:\Trac\Balances\conf\.htpasswd,mt.com' --auth=',C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
The TRAC server does not start with the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
If I add a fourth project it is not possible to start the TRAC server anymore because the string is too long. Is this problem known? Is there a workaround?
You can shorten your command by using the -e option to specify the Trac environment parent directory rather than explicitly listing each environment path.
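For example, since all three environments live under C:\Trac, the failing command above shrinks by replacing the three trailing environment paths with their common parent via -e (tracd's --env-parent-dir option; the --auth options stay per project):
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' -e C:\Trac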
A more extensive solution:
You could run the service with nssm.
Install nssm and put it on your path. I installed it using the Chocolatey package manager: choco install -y nssm.
Create a batch file, run_tracd.bat:
C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
Run nssm install tracd (this opens the nssm GUI, where you select run_tracd.bat as the application).
Run nssm start tracd
You don't have to do it exactly like this. You could avoid the bat file and enter the parameters in the nssm GUI instead. I'm no Windows expert, but I like having the bat file because it's easier to edit. However, there may be security concerns that I'm unaware of, and it may be more robust to put the parameters in the nssm GUI (then you don't have to worry about accidental deletion of the bat file).
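If you prefer to skip the GUI entirely, nssm also accepts the program and its arguments directly on the install command line, which stores them in the service configuration so no bat file is needed; a sketch using this answer's paths:
nssm install tracd C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
nssm start tracd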

PyCharm always "Uploading PyCharm helpers" to the same remote Python interpreter when it starts

When I start PyCharm with a remote Python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine IP is the same and it already contains the previously uploaded helpers. Is this behaviour correct?
This is a well-known problem that can be a major obstacle to productivity, especially if you use disposable instances in your workflow. It leads to a forced coffee break of 20 minutes every time you want to connect to a remote system. Unacceptable.
It seems that PyCharm creates a build.txt file in the remote helpers folder that just has the current PyCharm build number as its contents, for instance:
PY-171.4694.38
So it's possible to upload the helpers manually by rsyncing /Applications/PyCharm.app/Contents/helpers/ to the remote machine and then manually creating a build.txt file there with your current build number. After that, PyCharm should not attempt to re-upload them.
Example:
# on the local machine:
$ rsync -avz /Applications/PyCharm.app/Contents/helpers/ cluster:/home/xapple/.pycharm_helpers/
# then on the remote machine:
$ echo "PY-171.4694.38" > /home/xapple/.pycharm_helpers/build.txt
$ python /home/xapple/.pycharm_helpers/pydev/setup_cython.py build_ext --inplace
In my case, several projects are deployed to the remote server by PyCharm, and all of them get stuck when one of the projects goes wrong on the remote server. The solution: keep only the project you need to work on and restart PyCharm with "Invalidate Caches".
Note that, at least as late as version 2018.3.x, PyCharm also appears to require re-uploading the helpers when the local network connection changes, for some reason.
What I've observed in my case is that if, while PyCharm remains running, I relocate my laptop and connect to a different LAN, the next remote debugging session I initiate will trigger the lengthy helper upload. It turns out that the contents of the helpers directory actually uploaded in this case are exactly identical to the contents already present in that directory on the remote system (I compared them), so this upload is entirely superfluous, but PyCharm isn't able to detect this.
As there's no way I know of in PyCharm to bypass or cancel the automatic helpers upload, the only recourse is to completely exit PyCharm (close all open project windows) after each change of network connection and restart the IDE. In my experience, this causes the helper upload to succeed in the "checking remote helpers" phase, before actually re-uploading all the helpers. Of course, this is a major annoyance if you have multiple projects open, but it's faster than waiting the (tens of) minutes for the agonizingly slow helpers upload to complete.
What other responders describe as the course of action to take when changing PyCharm versions is true: it is sufficient to use rsync, ftp, scp, or whatever to transfer the contents of the new local helpers directory (on Linux, a subdirectory of where the app is installed) to the remote system (on Linux, ~/.pycharm_helpers, where ~ is the home directory of the user used for the remote debugging session), and to update the remote build.txt in the helpers directory with the new PyCharm version.
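A concrete sketch of that procedure (the install path, host name, and build number below are placeholders; take the real build number from Help -> About):
$ rsync -avz /opt/pycharm-2018.3/helpers/ remotehost:~/.pycharm_helpers/
$ ssh remotehost 'echo -n "PY-183.1234.56" > ~/.pycharm_helpers/build.txt'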
This problem came back again 6 years later with PyCharm 2022.3.2.
The directory /Applications/PyCharm.app/Contents/helpers/ doesn't exist anymore, so the previous trick doesn't work.
What solved it this time is simply to:
Quit PyCharm.
Delete the ~/.pycharm_helpers directory on the remote server (a one-liner, shown below).
Relaunch PyCharm and let it do its thing.
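For step 2, the deletion can be done without leaving the terminal (hostname is a placeholder):
$ ssh remotehost 'rm -rf ~/.pycharm_helpers'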
According to the docs,
PyCharm checks remote helpers version on every remote run, so if you update your PyCharm version, the new helpers will be uploaded automatically and you don't need to recreate remote interpreter.
A fast solution (less than 3 seconds between me and DigitalOcean), inspired by xApple's excellent answer.
on remote server:
export SOURCE=<your ip>
export PORT=9000
export HELPERS=$HOME/.pycharm_helpers
# PyCharm Help -> About
export BUILD=PY-172.4343.24 # 2017/10/11
cd $HELPERS
rm -fr *
# my OS - ubuntu, change firewall rules to yours if you're not so lucky
sudo ufw allow from $SOURCE proto tcp to any port $PORT
netcat -l -v -p $PORT | tar xz # wait here for the connection
# after the transfer finishes
sudo ufw delete allow from $SOURCE proto tcp to any port $PORT
echo -n $BUILD > build.txt
python $HELPERS/pydev/setup_cython.py build_ext --inplace
on your workstation:
export TARGET=<remote server ip>
export PORT=9000
export HELPERS=<path to helpers> # for me it's $HOME/opt/pycharm-2016.3/helpers
cd $HELPERS
tar cfz - . | netcat -v $TARGET $PORT
Turning off the firewall addressed the problem in my case (macOS Mojave). Note that this is not a general solution, as it was not tested in any other environments/OSes.

PyDev: Running code from a local machine on a remote machine

Would you please let me know how I would run my code from a local machine on a remote server?
I have the source code and data on my local machine, but I would like to run the code on a remote server.
One solution would be:
Install python on the remote machine
Package your code into a Python package using distutils (see http://wiki.python.org/moin/Distutils/Tutorial). Basically the process ends when you run the command python setup.py sdist in the root dir of your project and get a tar.gz file in the dist/ subfolder.
Copy your package to the remote server using scp; for example, if it is an Amazon machine:
scp -i myPemFile.pem local-python-package.tar.gz remote_user_name@remote_ip:remote_folder
Run sudo pip install local-python-package.tar.gz on the remote server
Now you can either SSH to the remote machine and run your code, or use some remote enabler such as Fabric to start commands on the remote server (it works for any shell command, including Python scripts)
Alternatively, you can skip the package building in step 2: if you have a simple script, just scp the script itself to the remote machine and run it remotely with python myscript.py, as sketched below.
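A minimal version of that lightweight path (user, host, and script name are placeholders):
$ scp myscript.py user@remotehost:~/
$ ssh user@remotehost 'python ~/myscript.py'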
Hope this helps
I would recommend setting up a git repository on the remote server and connecting your local source to it (you can read about how to do this here: http://git-scm.com/book).
Then you can use, e.g., Eclipse EGit, and after you change your local code you can push it to the remote location.
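A minimal sketch of that setup (user, host, and repository path are placeholders):
# on the remote server, create a bare repository to push to
$ ssh user@remotehost 'git init --bare ~/project.git'
# on the local machine, connect it and push
$ git remote add server user@remotehost:project.git
$ git push server master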

Using shared libraries on a PiCloud environment server

Linux newbie question: I have a personal PiCloud environment and can install my own Python extensions. But I would like to use a pre-compiled C shared library (mylib.so), i.e., place it in /usr/lib. Is that possible? If I have to build it on the PiCloud environment setup server, how do I upload the source?
It's possible that you could simply copy mylib.so to your environment's /usr/lib. But it's preferred that you compile mylib.so on the setup server, to ensure that all the dependencies are available on the server and that the correct architecture (AMD64) is used.
Here are the steps:
Create an environment, and put it in modification mode.
You will need to copy your files to the setup server for the environment. If you're on Linux, it'll be easiest to use scp. If you're on Windows, you'll need to use something like Tunnelier. On either OS, you'll need to click on the key icon and download the SSH identity file used to authenticate with the setup server when copying files.
$ scp -i picloud_rsa mylib.tar.gz picloud@setup-server-hostname.com:~/
Once the files are on the server, you can either SSH into the setup server, or use the web browser console (new feature!). From there, run your compile scripts. You can copy your .so file to /usr/lib. Don't forget to use "sudo".
$ sudo cp mylib.so /usr/lib
You should run whatever program depends on mylib.so on the setup server to ensure it's working properly. If you're going to run a test, you'll need to run "ldconfig" so that your shared library is in the library cache.
$ sudo ldconfig
$ ./run_your_program
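Once ldconfig has run, a quick way to confirm that Python can actually load the library (library name taken from the question):
$ python -c "import ctypes; ctypes.CDLL('mylib.so')"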
