Linux newbie question: I have a personal PiCloud environment and can install my own Python extensions. But I would like to use a pre-compiled C shared library (mylib.so), i.e., place it in /usr/lib. Is that possible? If I have to build it on the PiCloud environment server, how do I upload the source?
It's possible that simply copying mylib.so into your environment's /usr/lib will work. But it's preferable to compile mylib.so on the setup server, to ensure that all of its dependencies are available there and that the correct architecture (AMD64) is used.
Here are the steps:
Create an environment, and put it in modification mode.
You will need to copy your files to the setup server for the environment. If you're on Linux, it's easiest to use scp; if you're on Windows, you'll need something like Tunnelier. On either OS, click the key icon to download the SSH identity file you'll need to authenticate with the setup server when copying files.
$ scp -i picloud_rsa mylib.tar.gz picloud@setup-server-hostname.com:~/
Once the files are on the server, you can either SSH into the setup server or use the web browser console (new feature!). From there, run your compile scripts, then copy your .so file to /usr/lib. Don't forget to use "sudo":
$ sudo cp mylib.so /usr/lib
You should run whatever program depends on mylib.so on the setup server to make sure it works. Before testing, run "ldconfig" so that your shared library is registered in the loader cache:
$ sudo ldconfig
$ ./run_your_program
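If the program that depends on mylib.so is Python, a quick way to confirm the library now resolves is to load it with ctypes. This is a minimal sketch; my_function is a hypothetical export, so substitute a symbol your library actually provides:

# check_mylib.py -- minimal sketch; "my_function" is a hypothetical export
import ctypes

lib = ctypes.CDLL("mylib.so")  # resolved via the cache ldconfig just rebuilt
print("loaded:", lib)          # raises OSError if the library can't be found
# lib.my_function.restype = ctypes.c_int
# print(lib.my_function())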
I have mounted part of a development server that holds a virtual environment used for development testing. The reason is to get access to the installed packages, such as Django REST framework and Django itself, without setting them up locally (and to be sure I'm using the same versions as the development server). I know it would perhaps be better to use Docker for this, but that's not an option right now.
I installed SSHFS via an external Homebrew tap (it's no longer supported in homebrew-core): https://github.com/gromgit/homebrew-fuse
After that I ran this command in the terminal to mount, over SSH, the specific part of the development server that holds the virtual environment:
sshfs -o ssh_command='ssh -i /Users/myusername/.ssh/id_rsa' myusername@servername:/home/myusername/projectname/env/bin ~/mnt/projectname
It works fine and I have it mounted on my local disk at ~/mnt/projectname.
Now I go into VSCode, open the folder, and select the file called "python3" as my interpreter (which I should, right?). However, this file is just an alias, 16 bytes in size. I suspect something is wrong here, but I'm not sure how to fix it. Can someone take a look and give some input? I'll attach a screenshot of the mounted directory.
[Screenshot of the virtualenv directory mounted on the local machine]
The solution to the problem was to use the VSCode extension Remote - SSH and run VSCode directly on the remote machine, which gives access to the virtual environment from there.
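Once VSCode is running on the remote host, a quick sanity check is to run a few lines with the selected interpreter; this is a sketch that assumes Django is installed in the env, as described in the question:

# run with the interpreter selected in VSCode to confirm it is the
# server's virtualenv rather than a dangling local alias
import sys
import django

print(sys.executable)        # should point into .../projectname/env/bin
print(django.get_version())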
I have been given a profile (with /home directory) on a remote Linux server to work on projects that need powerful computing resources. I'd like to use Vim to edit code (mostly python) on the remote server as it can be run through a shell and doesn't require a slow GUI exchange. Currently, the Debian distribution on the remote server has a barebones vi installed and no Vim. Is there a way to install a Vim (perhaps in my home directory?) without superuser permissions?
You should be able to install Vim locally, either downloaded as a prebuilt binary or compiled from source:
git clone https://github.com/vim/vim.git
cd vim
./configure --prefix=$HOME/.local
make && make install
From there, simply add $HOME/.local/bin to your PATH so the new vim is picked up instead of the system vi.
I want to manage virtual machines (any flavor) using Python scripts. For example: create a VM, start and stop it, and access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a piece of software called libvirt. Do any of you think this would help me?
Any insights on how to do this? Thanks for your help.
For AWS, use boto.
For GCE, use the Google API Python Client Library.
For OpenStack, use python-openstackclient and import its methods directly.
For VMware, look at pyVmomi, the Python SDK for the vSphere API.
For Opsware, abandon all hope: their API is undocumented, has about 12 years of accumulated abandoned methods to dig through, and an equally insane data model backing it.
For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C library (see the sketch after this list).
I could go on.
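To illustrate the libvirt route: a minimal sketch, assuming the libvirt-python bindings are installed and a VM named "kali" (a placeholder) is already defined. The vbox:///session URI targets VirtualBox; qemu:///system would target KVM:

# minimal libvirt sketch -- the domain name "kali" is a placeholder
import libvirt

conn = libvirt.open("vbox:///session")
dom = conn.lookupByName("kali")   # raises libvirtError if not defined
if not dom.isActive():
    dom.create()                  # start the VM
print(dom.info())                 # state, max memory, vCPUs, CPU time
dom.shutdown()                    # graceful stop; the guest must cooperate
conn.close()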
Follow the directions here to install Docker: https://docs.docker.com/windows/ (it includes Oracle VirtualBox, if you don't already have it).
# grab the image
docker pull kalilinux/kali-linux-docker
# run a specific command
docker run kalilinux/kali-linux-docker <some_command>
# open an interactive terminal in the container
docker run -t -i kalilinux/kali-linux-docker /bin/bash
If you want to mount a local volume, you can use the `-v <host-src>:<container-dest>` switch in your run command; note that it goes before the image name.
# mount the local training/webapp directory (absolute path required) at /webapp in the Kali image
docker run -v /path/to/training/webapp:/webapp kalilinux/kali-linux-docker <some_command>
Note that these are run from the regular Windows prompt; to drive them from Python you would need to wrap them in subprocess calls (see the sketch below) ...
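For example, a minimal sketch of wrapping the docker CLI in subprocess; here uname -a stands in for <some_command>:

# minimal sketch: drive the docker CLI from Python
import subprocess

out = subprocess.check_output(
    ["docker", "run", "--rm", "kalilinux/kali-linux-docker", "uname", "-a"]
)
print(out.decode())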
I have written a python script on my local laptop which uses several third party packages. I now want to run my script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies to ensure that it will run?
I have already tried to package the script as an exe, but failed to do so.
It's not clear what kind of third-party packages you have, but for those installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it is well worth reading about them before deploying your script.
Just turn your computer into a server. Set up your router for port forwarding so that your server's contents are served when the router's IP is entered. You can of course purchase a DNS domain to give that IP a human-readable URL.
Would you please let me know how I can run code from my local machine on a remote server?
I have the source code and data on my local machine, but I would like to run the code on a remote server.
One solution would be:
1. Install Python on the remote machine.
2. Package your code into a Python package using distutils (see http://wiki.python.org/moin/Distutils/Tutorial). Basically the process ends when you run python setup.py sdist in the root dir of your project and get a tar.gz file in the dist/ subfolder (a minimal setup.py sketch follows these steps).
3. Copy your package to the remote server using scp. For example, if it is an Amazon machine:
scp -i myPemFile.pem local-python-package.tar.gz remote_user_name@remote_ip:remote_folder
4. Run sudo pip install local-python-package.tar.gz on the remote server.
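For step 2, a minimal setup.py sketch; the name myscript is a placeholder for your own module:

# setup.py -- minimal sketch; "myscript" is a placeholder
from distutils.core import setup

setup(
    name="myscript",
    version="0.1",
    py_modules=["myscript"],  # assumes your code lives in myscript.py
)

Running python setup.py sdist next to this file then produces dist/myscript-0.1.tar.gz.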
Now you can either SSH to the remote machine and run your code, or use a remote enabler such as fabric to start commands on the remote server (it works for any shell command, including python scripts; see the sketch below).
Alternatively, if you have a simple script, you can skip the package building in step 2: just scp the script itself to the remote machine and run it there with python myscript.py.
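To illustrate the fabric option, a minimal sketch using Fabric 2's Connection API; the host and paths are placeholders:

# minimal Fabric 2 sketch -- host and paths are placeholders
from fabric import Connection

c = Connection("remote_user_name@remote_ip")
c.put("myscript.py", remote="/home/remote_user_name/")  # upload the script
result = c.run("python /home/remote_user_name/myscript.py")
print(result.stdout)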
Hope this helps
I would recommend setting up a git repository on the remote server and connecting your local source to it (you can read about how to do that here: http://git-scm.com/book).
Then you can use, e.g., Eclipse EGit, and after you change your local code you can push it to the remote location.