Download a resource onto multiple Apache Servers? [Linux OS]

I am running backbone server operations for my main server. The main server basically serves an iframe that portals to the backbone.
I run the backbone at home on four old Apple TVs that I installed Linux on, plus one Raspberry Pi that runs the API. The API rotates loading the resource from Apple TV 1 to the next (2), to the next (3), to the next (4), and back to 1.
The problem I face is that it's a pain to copy a file four times across the servers, especially when a new page is generated automatically and needs to be pushed to all four servers.
Is there a way I can allow, say, an exception in Apache, or run SSH? And preferably a way to send these commands via Python?
Sorry, I'm much more of a CSS and HTML developer than a server admin.
Thanks in advance, cheers.

Syncing files can be done in a number of ways.
You can either copy the files around every time something changes to keep everything updated, or you can use a network filesystem to keep everything in one place.
If you decide to copy the files around, I would take a look at rsync.
If you must have Python, you can do some cool things with Fabric.
If you decide to go the network-filesystem route, you should take a look at NFS.
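For example, a small Python wrapper around rsync could push a new file to all four backends over SSH. This is just a sketch; the hostnames, user, and paths are illustrative assumptions:

    # Push one file to every backend with rsync over SSH.
    # Hostnames, user, and paths are illustrative assumptions.
    import subprocess

    SERVERS = ["appletv1", "appletv2", "appletv3", "appletv4"]

    def push(local_path, remote_path, user="pi"):
        for host in SERVERS:
            subprocess.run(
                ["rsync", "-az", local_path, f"{user}@{host}:{remote_path}"],
                check=True,  # raise immediately if any copy fails
            )

    push("newpage.html", "/var/www/html/newpage.html")

You could call something like this from whatever script generates your new pages, so every page lands on all four Apple TVs automatically.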

Related

How to develop python remotely on 2 different machines using PyCharm

I am currently working on a server API. Said API is used in a mobile app that is being developed, and thus I've set up a dev server on a VPS. The API server uses python/django/django-rest-framework.
I have two offices that I work at: one has my desktop workstation, and at the other I use a laptop. I commonly move from one office to the other during the day.
I have followed the many tutorials that are out there for setting up PyCharm to use a remote interpreter and to sync changes to a remote server. However, this seems to be causing a bit of an issue for me: I have three different copies of my code that may or may not be in sync. Specifically, the server will always have its code synced with the last machine to have updated files. For example:
1. Make a change to coolStuff.py using my desktop. Save the changes. This pushes the saved changes to the server via PyCharm.
2. Get called to the other office. Drive over, open up my laptop. Go to edit coolStuff.py without realizing that I am now working on an old copy, potentially overwriting the changes I made in step 1.
To get around this, I've been committing all of my changes after step 1, then pushing them up to the repo. I then need to pull the changes to the laptop. However, there are times when the push or pull is forgotten, and issues arise.
Is there a better workflow, or tools within PyCharm, that can automate a two-way sync between a dev machine and the server? Alternatively, would it be bad practice to use cloud storage/sync to keep everything synced (e.g. storing my code directory on something like Dropbox or OneDrive)? I figure there has got to be a better way.

How can a desktop application installed on multiple PCs be synced?

We have developed a desktop application using Python. The scenario is:
An internet connection is not available.
The desktop application would be installed in multiple PCs
PC1 would enter data into a particular section (other PCs, like PC2 and PC3, can't input their respective sections until PC1 has entered its own).
When PC1 enters data in its section, we want it to be reflected on every other PC that has the desktop application installed.
How can we sync data within multiple desktop applications where there is no internet connectivity?
If we can achieve this over a local network (LAN), how can we do it?
The most common solution would be to have one machine act as the server. Each other machine would pass its data back to the server machine over the local network. The other client machines would be notified by the server that changes had been made. The server would control and store all the data.
You could either write a separate server application to run on a completely different machine, or you could have one of your client machines act as the 'host' and take on the server role itself.
Alternatively, you could work on a peer-to-peer solution, where each machine has a separate but synchronised copy of the data. That sounds harder to get right, to me.
One of the answers at Python Library/Framework for writing P2P applications recommends the Twisted framework for network applications.
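To make the server approach concrete, here is a minimal sketch of the broadcast pattern using Twisted. The port number and the raw-bytes framing are illustrative assumptions; a real application would need proper message framing and persistence:

    # Server sketch: store the latest update and rebroadcast it to all
    # other connected clients. Port and framing are illustrative.
    from twisted.internet import protocol, reactor

    class SyncProtocol(protocol.Protocol):
        def connectionMade(self):
            self.factory.clients.append(self)

        def connectionLost(self, reason):
            self.factory.clients.remove(self)

        def dataReceived(self, data):
            # A client (e.g. PC1) sent an update: remember it and
            # push it to every other connected client.
            self.factory.latest = data
            for client in self.factory.clients:
                if client is not self:
                    client.transport.write(data)

    class SyncFactory(protocol.Factory):
        protocol = SyncProtocol

        def __init__(self):
            self.clients = []
            self.latest = b""

    reactor.listenTCP(9000, SyncFactory())
    reactor.run()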

System requirement for an embedded Python web server

I'm working on an embedded device that runs Linux on ARM7 with 64 MB RAM and 64 MB storage (12 MB free). The device should be configured via the web, so it needs to run an embedded web server. Currently it uses Lighttpd and Lua, but I'm thinking about replacing Lua (or maybe even Lighttpd) with Python. The server will only occasionally be accessed by one or two users to change internal settings of the C program running on Linux, so the server load isn't really a lot. I also need it to be open-source software. Web.py seems small enough, but I still need to compile Python, which I haven't done before. So I'm wondering: what are the system requirements of Python? Lua does quite well on small embedded systems, but I don't like its C-binding syntax.
However, I couldn't find updated information about system requirements for embedding Python in such settings. This page from Michael Lauer seems to be old.
Any ideas? Suggestions? Hints? Links?
I'm working on this device using OpenWRT + Python:
http://wiki.openwrt.org/oldwiki/OpenWrtDocs/Hardware/Meraki/Mini
The first Python run is very slow, but that's because it is byte-compiling all the .pyc files; after that it works well.
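For a sense of scale, a complete web.py application is only a few lines. This is a minimal sketch; the URL mapping and response text are illustrative:

    # A complete (if trivial) web.py application, to show the
    # framework's footprint. Handler and response are illustrative.
    import web

    urls = ("/", "Index")

    class Index:
        def GET(self):
            return "device configuration page"

    if __name__ == "__main__":
        app = web.application(urls, globals())
        app.run()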

Hosting a non-trivial python program on the web?

I'm a complete novice in this area, so please excuse my ignorance.
I have three questions:
1. What's the best (fastest, easiest, headache-free) way of hosting a Python program online? I'm currently looking at Google App Engine and web frameworks for Python, but all the options are a bit overwhelming.
2. Which GUI/viz libraries will transfer to a web-app environment without problems? I'm willing to sacrifice some performance for the sake of simplicity. (Google App Engine can't do C libraries, so this is causing a dilemma.)
3. Where can I learn more about running a program locally vs. having a program continuously run on a server and take requests from multiple users?
Currently I have a working Python program that only uses standard Python libraries. It currently uses around 2.7 GB of RAM, but as I increase my dataset, I predict it will use closer to 6 GB. I can run it on my personal machine and everything is just peachy. I'd like to continue developing the front end on my home machine and implement the web app later.
Here is a relevant, previous post of mine.
Depending on your knowledge of server administration, you should consider a dedicated server. I was running some custom Python modules with NumPy, SciPy, pandas, etc. on some data on a shared server with GoDaddy. One program I wrote took 120 seconds to complete. Recently we switched to a dedicated server and it now takes 2 seconds. The shared environment used CGI to run Python; I installed mod_python on the dedicated server.
Using a dedicated server gives you COMPLETE control (including root access), which allows the compilation and/or installation of anything. It is a bit pricey, but if you're making money with your stuff it might be worth it.
Another option would be to use something like http://www.dyndns.com/ where you can host a domain on your own machine.
So with that said, perhaps some answers:
1. It depends on your requirements. ~4 GB of RAM might require a dedicated server. What you are asking is not necessarily an easy task, so don't be afraid to get your hands dirty.
2. Not sure what you mean here.
3. A server is just a computer that responds to requests. On the dedicated server (which I keep mentioning) you are operating in a Unix (or Windows) environment just like you would locally. You use software (e.g. the Apache web server) to serve client requests. My vote is mod_python.
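For what it's worth, a mod_python handler is quite small. A minimal sketch (you would still wire it up with Apache's PythonHandler directive; the response text is illustrative):

    # Minimal mod_python handler; Apache maps requests to handler()
    # via the PythonHandler directive. Response text is illustrative.
    from mod_python import apache

    def handler(req):
        req.content_type = "text/plain"
        req.write("Hello from mod_python")
        return apache.OK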
It's a greater headache to set up than a dedicated server, but going with an Amazon EC2 instance should be much closer to your needs:
http://aws.amazon.com/ec2/#instance
Their extra-large instance should be more than enough for what you need to do, and since you only turn the instance on when you need it, you avoid the massive bill that comes with a dedicated server of the same size.
There are some nice JavaScript-based visualization toolkits out there, so you can design your application to return raw (JSON) data and render it on the client.
Two worth mentioning are d3.js (http://mbostock.github.com/d3/) and the JavaScript InfoVis Toolkit (http://thejit.org/).
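The server side of that pattern can be tiny. Here is a minimal sketch using only the Python standard library; the endpoint path and payload are illustrative assumptions:

    # Serve raw JSON for a client-side toolkit (d3.js, InfoVis) to
    # render. Endpoint path and payload are illustrative.
    import json
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ.get("PATH_INFO") == "/data":
            payload = json.dumps({"points": [1, 2, 3]}).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json")])
            return [payload]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    make_server("", 8000, app).serve_forever()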

What are some successful methods for deploying a Django application on the desktop?

I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application.
My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution.
I did this a couple of years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped by py2app for Mac and py2exe for Windows. Both a browser front-end and an Air front-end hit it. It worked pretty well for the most part, but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips:
IIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C "asm" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions.
You'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application.
In your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-)
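The stub itself can be very short: binding to port 0 asks the OS for any unused port. A sketch (the config filename is an illustrative assumption):

    # Ask the OS for a free port and record it where settings.py and
    # the front-end can read it. Config filename is illustrative.
    import json
    import socket

    def find_free_port():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))  # port 0 means "any free port"
        port = s.getsockname()[1]
        s.close()
        return port

    with open("app_config.json", "w") as f:
        json.dump({"port": find_free_port()}, f)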
If your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging.
If you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications.
You'll probably want to stick with SQLite for the database, in which case you'll want to use semaphores to handle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser, users can have multiple windows open and hit the app at the same time. If you're using a custom front-end (native, Air, etc.) then you can control how many instances are running at a given time, so it won't be as much of an issue.
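A sketch of serializing SQLite access behind a single lock (the database filename is an illustrative assumption; a threading.Lock covers one daemon process, which is the usual case here):

    # Serialize access to the shared SQLite file within the daemon.
    # Database path is an illustrative assumption.
    import sqlite3
    import threading

    _db_lock = threading.Lock()

    def execute(sql, params=()):
        with _db_lock:  # one request touches the database at a time
            conn = sqlite3.connect("app.db")
            try:
                cur = conn.execute(sql, params)
                conn.commit()
                return cur.fetchall()
            finally:
                conn.close()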
You'll also want some sort of access to local system logging facilities since the app will be running in the background; make sure you trap all your exceptions and route them into the syslog. A big hassle was debugging Windows service startup issues; it would have been impossible without system logging.
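On Unix this can be as simple as pointing the logging module at syslog. A sketch (the logger name and entry point are illustrative; on Windows you would target the event log instead):

    # Route unhandled exceptions into the local syslog.
    # Logger name is illustrative; run_app() is a hypothetical entry point.
    import logging
    import logging.handlers

    log = logging.getLogger("mydjangoapp")
    log.setLevel(logging.INFO)
    log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    try:
        run_app()  # hypothetical entry point for the daemon
    except Exception:
        log.exception("unhandled exception at startup")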
Be careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup.
Test actual deployment especially across a variety of firewalls. Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests.
That's all I can think of. Hope it helps.
If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific.
I would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand.
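For reference, the .desktop entry really is just a short text file. A plausible minimal example (the name, paths, and icon are illustrative assumptions):

    [Desktop Entry]
    Type=Application
    Name=MyDjangoApp
    Exec=/usr/bin/mydjangoapp
    Icon=mydjangoapp
    Categories=Utility;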
And of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).
