How do I deploy a Python application to an external server?

I have written a Python script on my local laptop which uses several third-party packages. I now want to run this script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies so that it is guaranteed to run?
I have already tried to package the script as an exe, but failed to do so.

It's not clear what kind of third-party packages you have, but for those that were installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it is well worth reading up on virtual environments before deploying your script.
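A minimal sketch of that workflow on the server, with the install path and the cron schedule as assumptions, could be:
$ python -m venv /opt/myapp/venv                         # create an isolated environment (use virtualenv on Python 2)
$ /opt/myapp/venv/bin/pip install -r requirements.txt    # install the frozen dependencies
# cron entry that runs the script with the venv's interpreter every hour:
0 * * * * /opt/myapp/venv/bin/python /opt/myapp/myscript.py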

Just turn your computer into a server. Set up port forwarding on your router so that your server's contents are served when the router's public IP is entered. You can of course purchase a domain name and point its DNS at that IP to give it a human-readable URL.

Related

Get vim on a remote server without system administration permissions

I have been given a profile (with /home directory) on a remote Linux server to work on projects that need powerful computing resources. I'd like to use Vim to edit code (mostly python) on the remote server as it can be run through a shell and doesn't require a slow GUI exchange. Currently, the Debian distribution on the remote server has a barebones vi installed and no Vim. Is there a way to install a Vim (perhaps in my home directory?) without superuser permissions?
You should be able to install Vim locally, for example by downloading a prebuilt binary, or by compiling it from source with
git clone https://github.com/vim/vim.git
cd vim/src
make
From there, simply add the directory containing the compiled binary to your PATH.
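For example, assuming you cloned into your home directory and the build left the binary at ~/vim/src/vim:
$ export PATH="$HOME/vim/src:$PATH"                         # current shell only
$ echo 'export PATH="$HOME/vim/src:$PATH"' >> ~/.bashrc     # persist it for future logins
$ vim --version | head -n 1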

Starting TRAC server with multiple independent projects

I'm running a TRAC server (tracd service) with 3 independent projects configured. Each project has its own password file in order to keep user management independent. TRAC is started as a Windows service as described at https://trac.edgewall.org/wiki/0.11/TracStandalone
It seems that starting the TRAC server does not work if the string stored in the 'AppParameters' value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\tracd\Parameters is too long. The maximum length seems to be around 260 characters.
The TRAC server can be started successfully using the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth=',C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth=',C:\Trac\Balances\conf\.htpasswd,mt.com' --auth=',C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
The TRAC server does not start with the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
If I add a fourth project it is not possible to start the TRAC server anymore because the string is too long. Is this problem known? Is there a workaround?
You can also shorten your command by using the -e option to specify the Trac environment parent directory rather than explicitly listing each environment path, as in the sketch below.
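For example, assuming all three environments sit directly under C:\Trac, the longer command from above can be shortened to:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' -e C:\Trac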
A more extensive solution:
You could run the service with nssm.
Install nssm and put it on your path. I installed it using the Chocolatey package manager: choco install -y nssm.
Create a batch file, run_tracd.bat:
C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
Run nssm install tracd, which opens the nssm GUI where you point the service at the batch file.
Run nssm start tracd.
You don't have to do it exactly like this. You could skip the bat file and enter the parameters in the nssm GUI instead. I'm not a Windows expert, but I like having the bat file because it's easier to edit. However, there may be security concerns that I'm unaware of, and putting the parameters in the nssm GUI may be more robust (you don't have to worry about accidental deletion of the bat file).
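If you prefer to skip the GUI entirely, the same service can also be set up from the command line; this is a sketch using the paths from the batch file above:
nssm install tracd C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
nssm start tracd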

No module named 'flask' while using Vagrant

I am trying to set up Vagrant on my machine (Ubuntu 15.10, 64-bit), and I followed the steps given below.
I am getting a "No module named 'flask'" error when I run app.py.
Am I missing something here? It's mentioned that all packages from requirements.txt will be installed automatically, but I am not able to make it work.
Steps are as follows:
Getting started
Install Vagrant
Clone this repo as your project name:
git clone git@github.com:paste/fvang.git NEW-PROJECT-NAME
Configure project name and host name in ansible/roles/common/vars/main.yml:
project_name: "fvang"
host_name: "fvang.local"
Modify your local /etc/hosts:
192.168.33.11 fvang.local
Build your Vagrant VM:
vagrant up
Log into the VM via SSH:
vagrant ssh
Start Flask development server:
cd ~/fvang
python app/app.py
I am the author of the FVANG repo, but I don't have the rep to join your chat. I posted a response on the github issue, see here:
https://github.com/paste/fvang/issues/2
I think the Ansible provisioning script failed to complete due to changes in Ansible 2.0 (otherwise Flask would have been installed from requirements.txt). You can check which version of Ansible was installed by running ansible --version. I will be upgrading the scripts to 2.0 shortly.
Edit --
I just updated the repo to work with Ansible 2.0 and simplified a few things. Everything should work as expected now, give it a shot. You'll probably want to just vagrant destroy and vagrant up again.
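The rebuild cycle is just the following (the -f flag simply skips destroy's confirmation prompt):
$ vagrant destroy -f
$ vagrant up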
A Vagrant machine is as bare as a freshly installed operating system; you need to install each and every piece of software you need. Try this:
sudo pip install Flask
After installation, if you want to reach the app from your host, uncomment the private-network IP line in the Vagrantfile before accessing the VM; it generally turns out to be 192.168.33.10, with the app on port 5000.
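Assuming that IP and port, a quick check from the host would be:
$ curl http://192.168.33.10:5000/
# if this times out, make sure the Flask dev server binds to 0.0.0.0 rather than 127.0.0.1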

What ports does pip use?

This is hopefully a quick one to answer. I'm trying to provision a box on AWS with Puppet, and one of the steps involves a pip install from a requirements file, something like this:
/usr/local/venv/ostcms/bin/pip install -r /vagrant/requirements.txt
The step fails because it can't find any of the packages in the requirements file, but when I open up the AWS box's security group to allow "All Traffic", the pip step works.
I'm trying to find the port that pip uses so I can open just that port, HTTP, and SSH on the box and live happily ever after.
pip talks to PyPI over HTTPS, so make sure outbound port 443 is open in your AWS security group (port 3128 only comes into play if your traffic is routed through a Squid-style proxy). Otherwise pip will be blocked when attempting to talk to PyPI (or anywhere else it cares to download from).
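If your security group restricts outbound traffic, something along these lines (the group ID is a placeholder) opens HTTPS egress:
$ aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0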

Using shared libraries on a PiCloud environment server

Linux newbie question: I have a personal PiCloud environment and can install my own Python extensions. But I would like to use a pre-compiled C shared library (mylib.so), i.e., place it in /usr/lib. Is that possible? If I have to build it on the PiCloud environment server, how do I upload the source?
It's possible that you could simply copy mylib.so to your environment's /usr/lib. But, it's preferred that you compile mylib.so on the setup server to ensure that all the dependencies are available on the server, and that the correct architecture is used (AMD64).
Here are the steps:
Create an environment, and put it in modification mode.
You will need to copy your files to the setup server for the environment. If you're on Linux, it'll be easiest to use scp. If you're on Windows, you'll need to use something like Tunnelier. On either OS, you'll need to click on the key icon and download the SSH identity file used to authenticate with the setup server when copying files.
$ scp -i picloud_rsa mylib.tar.gz picloud@setup-server-hostname.com:~/
Once the files are on the server, you can either SSH into the setup server, or use the web browser console (new feature!). From there, run your compile scripts. You can copy your .so file to /usr/lib. Don't forget to use "sudo".
$ sudo cp mylib.so /usr/lib
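If mylib ships as a source tarball, the whole compile-and-install step might look roughly like this (the archive layout and a plain make build are assumptions):
$ tar xzf mylib.tar.gz
$ cd mylib
$ make
$ sudo cp mylib.so /usr/lib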
You should run whatever program depends on mylib.so on the setup server to ensure it's working properly. If you're going to run a test, you'll need to run "ldconfig" so that your shared library is in the library cache.
$ sudo ldconfig
$ ./run_your_program
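To confirm that the loader can now find the library, you can also check the cache:
$ ldconfig -p | grep mylib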
