We have developed a desktop application using Python. The scenario is:
An internet connection is not available.
The desktop application would be installed on multiple PCs.
PC1 would enter data into a particular section (the other PCs, like PC2, PC3, etc., can't fill in their respective sections until PC1 has entered its section).
When PC1 enters data in its section, we want it to be reflected on every other PC that has the desktop application installed.
How can we sync data within multiple desktop applications where there is no internet connectivity?
If we can achieve this using a local LAN, how can we do it?
The most common solution would be to have one machine act as the server. Each other machine would pass its data back to the server machine over the local network. The other client machines would be notified by the server that changes had been made. The server would control and store all the data.
You could either write a separate server application to run on a completely different machine, or you could have one of your client machines act as the 'host' and take on the server role itself.
Alternatively, you could work on a peer-to-peer solution, where each machine has a separate but synchronised copy of the data. That sounds harder to get right, to me.
One of the answers at "Python Library/Framework for writing P2P applications" recommends the Twisted framework for network applications.
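To make the client-server idea concrete, here is a very rough sketch of a broadcast server using only the standard library. The port, the newline-delimited JSON message format, and the lack of persistence are assumptions for illustration; a real application would also need storage, authentication, and error handling.

    import json
    import socket
    import threading

    HOST, PORT = "0.0.0.0", 5000      # listen on the LAN; the port is arbitrary
    clients = []                      # sockets of every connected desktop app
    lock = threading.Lock()

    def handle(conn):
        # Receive newline-delimited JSON updates and broadcast them to all other PCs.
        with conn, conn.makefile("r", encoding="utf-8") as reader:
            for line in reader:
                update = json.loads(line)              # e.g. {"section": "PC1", "data": {}}
                payload = (json.dumps(update) + "\n").encode("utf-8")
                with lock:
                    for other in clients:
                        if other is not conn:
                            other.sendall(payload)     # notify every other PC
        with lock:
            clients.remove(conn)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, _addr = server.accept()
        with lock:
            clients.append(conn)
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

Each desktop application would then open a connection with socket.create_connection((server_ip, 5000)), send its section's data as one JSON line, and read incoming lines to refresh its own view.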
There are a number of computers that I can access via SSH.
I was wondering if I can keep a Python code base in a single location but execute part or all of it on each of these computers independently.
I could copy my code to each of these PCs and then execute it on each one through SSH, but then it would be hard to make a change to the code, since I would have to make it in every copy.
I was also wondering if I can do something like this similar to a cluster, since each of these PCs has a number of CPUs, though that is probably not possible, or at least not easy.
Two options quickly pop to mind:
Use sshfs to mount the remote code location on the local PC and run it from there.
Use something like git to store your configured code, and set up each of the PCs to pull the app (?) from the remote code repo. This way, you update / configure the code once and pull the update to each PC.
For example:
We use the second method. We have seven RasPi servers running various (isolated) tasks. One of the servers is a NAS which has a Git repo on it, where we store our configured code, and we use git pull or git clone commands (via SSH) to pull the app to the local server. It works really well for us. Maybe an idea to help you ... ?
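If you want to drive that from one place, a small Python script can run the pull on every machine over SSH. The hostnames and repo path below are placeholders, and it assumes key-based SSH access and an existing clone on each PC.

    import subprocess

    HOSTS = ["pc1.local", "pc2.local", "pc3.local"]   # hypothetical machine names
    REPO_PATH = "/home/pi/app"                        # hypothetical clone location

    for host in HOSTS:
        # Run `git pull` inside the clone on each remote machine over SSH.
        result = subprocess.run(
            ["ssh", host, f"git -C {REPO_PATH} pull"],
            capture_output=True, text=True,
        )
        print(host, result.stdout.strip() or result.stderr.strip())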
I am currently working on a server API. Said API is used in a mobile app that is being developed, and thus I've set up a dev server on a VPS. The API server uses python/django/django-rest-framework.
I have two offices that I work at: one has my desktop workstation, and at the other I use a laptop. I commonly move from one office to the other during the day.
I have followed the many tutorials that are out there for setting up PyCharm to use a remote interpreter and to sync changes to a remote server. However, this seems to be causing a bit of an issue for me: I have three different copies of my code that may or may not be in sync. Specifically, the server will always have its code synced with the last machine to have updated files. For example:
Make a change to coolStuff.py using my desktop. Save the changes. This pushes the saved changes to the server via PyCharm.
Get called to the other office. Drive over, open up my laptop. Go to edit coolStuff.py without realizing that I am now working on an old copy, potentially overwriting the changes I made in step 1.
To get around this, I've been committing all of my changes after step 1, then pushing them up to the repo. I then need to pull the changes to the laptop. However, there are times that the push or pull is forgotten, and issues arise.
Is there a better workflow, or are there tools within PyCharm that can automate a two-way sync between a dev machine and the server? Alternatively, would it be bad practice to use cloud storage/sync to keep everything synced up (e.g. store my code directory on something like Dropbox or OneDrive)? I figure there has got to be a better way.
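One small way to reduce the forgotten-push part of that workflow is a Git post-commit hook that pushes automatically after every commit, so the other machine only ever needs to pull. This is just a sketch, assuming the remote is called origin; the hook file must be made executable.

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/post-commit script: push the current branch after
    # every commit so only the pull can be forgotten (chmod +x for Git to run it).
    import subprocess

    subprocess.run(["git", "push", "origin", "HEAD"], check=False)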
I have an application running on a Linux server, and I need to create a local backup of its data.
However, new data is added to the application every hour, and I want to keep my local backup in sync with the server's data.
I want to write a script (shell or Python) that can automatically download newly added data from the Linux server to my local backup machine. But I am a newbie in the Linux environment and don't know how to write a shell script to achieve this.
What is the best way of achieving this, and what would the script look like?
rsync -r fits your use case, and it's a single-line command.
rsync -r source destination
or the options you need according to your specific case.
So you don't need a Python script for this, but you can still write one and have it run the command above.
Moreover, if you want the Python script to do this automatically, you can look at the event scheduler module (sched).
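As a rough sketch of combining the two, the standard-library sched module can re-run rsync on an interval; the paths below are placeholders, and the interval matches the hourly updates mentioned in the question.

    import sched
    import subprocess
    import time

    INTERVAL = 60 * 60                           # new data arrives every hour
    SOURCE = "user@server:/path/to/app/data/"    # hypothetical remote path
    DEST = "/home/me/backup/"                    # hypothetical local path

    scheduler = sched.scheduler(time.time, time.sleep)

    def sync():
        subprocess.run(["rsync", "-r", SOURCE, DEST])
        scheduler.enter(INTERVAL, 1, sync)       # schedule the next run

    scheduler.enter(0, 1, sync)
    scheduler.run()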
This depends on where and how your data is stored on the Linux server, but you could write a network application which pushes the data to a client and the client saves the data on the local machine. You can use sockets for that.
If the data is available via an HTTP server and you know how to write RESTful APIs, you could use that as well and run a task on your local machine every hour which calls the REST API and handles its (JSON) data. Keep in mind that you need to secure the API if the server is reachable from the internet and not only from the same LAN.
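A minimal client for that idea, using only the standard library; the URL and the output file are made-up placeholders, and it assumes the API returns JSON.

    import json
    import urllib.request

    URL = "http://192.168.1.10:8000/api/data?since=last_hour"   # hypothetical endpoint

    with urllib.request.urlopen(URL) as response:
        data = json.load(response)               # assumes a JSON response body

    with open("backup.json", "a", encoding="utf-8") as f:
        f.write(json.dumps(data) + "\n")         # append this hour's snapshot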
You could also write a small application which downloads the files every hour from the server over FTP (if you want to backup files stored on the system). You will need to know the exact path of the file(s) to do this though.
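For the FTP variant, the standard-library ftplib is enough for a first version; the host, credentials, and remote path below are placeholders.

    from ftplib import FTP

    with FTP("192.168.1.10") as ftp:                 # hypothetical server address
        ftp.login("backupuser", "secret")            # hypothetical credentials
        with open("data_backup.csv", "wb") as f:
            # The exact remote path must be known, as noted above.
            ftp.retrbinary("RETR /var/app/data.csv", f.write)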
All the solutions above are meant for Python programming. Using a shell script is possible, but a little more complicated. I would use Python for this kind of task, as you have a lot of network-related libraries available (FTP, sockets, HTTP clients, simple HTTP servers, WSGI libraries, etc.).
A very naive question...
I need to have a central Python program offering services to other Python applications (all running on the same machine). The easiest thing for me would be for the other applications to directly call functions provided by the server. I know rpyc (Remote Python Call) and the rpyc.Service class could do the job (as in this tutorial), but I do not want anything "remote" (all clients have to be local, running on the same machine as the server), so I was wondering if there is a better way to do this than using rpyc?
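For reference, the rpyc approach mentioned in the question can be kept effectively local by binding the server to 127.0.0.1, so only processes on the same machine can connect. This is only a sketch; the service name, method, and port number are made up.

    # server.py -- a minimal rpyc service restricted to local clients.
    import rpyc
    from rpyc.utils.server import ThreadedServer

    class CentralService(rpyc.Service):
        def exposed_add(self, a, b):       # the "exposed_" prefix makes the method callable by clients
            return a + b

    if __name__ == "__main__":
        # Binding to 127.0.0.1 means only processes on this machine can connect.
        ThreadedServer(CentralService, hostname="127.0.0.1", port=18861).start()

A client on the same machine would then do conn = rpyc.connect("127.0.0.1", 18861) and call conn.root.add(2, 3).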
I'm a complete novice in this area, so please excuse my ignorance.
I have three questions:
What's the best (fastest, easiest, headache-free) way of hosting a Python program online?
I'm currently looking at Google App Engine and Web Frameworks for Python, but all the options are a bit overwhelming.
Which GUI/visualization libraries will transfer to a web app environment without problems?
I'm willing to sacrifice some performance for the sake of simplicity.
(Google App Engine can't do C libraries, so this is causing a dilemma.)
Where can I learn more about running a program locally vs. having a program continuously run on a server and taking requests from multiple users?
Currently I have a working Python program that only uses standard Python libraries. It currently uses around 2.7 GB of RAM, but as I increase my dataset, I'm predicting it will use closer to 6 GB. I can run it on my personal machine, and everything is just peachy. I'd like to continue developing the front end on my home machine and implement the web app later.
Here is a relevant, previous post of mine.
Depending on your knowledge of server administration, you should consider a dedicated server. I was running some custom Python modules with NumPy, SciPy, Pandas, etc. on some data on a shared server with GoDaddy. One program I wrote took 120 seconds to complete. Recently we switched to a dedicated server and it now takes 2 seconds. The shared environment used CGI to run Python, and I installed mod_python on the dedicated server.
Using a dedicated server gives you COMPLETE control (including root access) over the server, which allows the compilation and/or installation of anything. It is a bit pricey, but if you're making money with your stuff it might be worth it.
Another option would be to use something like http://www.dyndns.com/ where you can host a domain on your own machine.
So with that said, perhaps some answers:
It depends on your requirements. ~4 GB of RAM might require a dedicated server. What you are asking for is not necessarily an easy task, so don't be afraid to get your hands dirty.
Not sure what you mean here.
A server is just a computer that responds to requests. On the dedicated server (which I keep mentioning) you operate in a Unix (or Windows) environment just like you would locally. You use SOFTWARE (e.g. the Apache web server) to serve client requests. My vote is for mod_python.
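For a sense of what that looks like, the classic minimal mod_python handler is roughly the following; the module name and the Apache PythonHandler configuration are assumptions, and your actual application would replace the body.

    # myhandler.py -- mapped in Apache with something like "PythonHandler myhandler".
    from mod_python import apache

    def handler(req):
        req.content_type = "text/plain"
        req.write("Hello from the dedicated server!")
        return apache.OK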
Going with an Amazon EC2 instance is a bit more of a headache than a dedicated server, but it should be much closer to your needs.
http://aws.amazon.com/ec2/#instance
Their extra-large instance should be more than large enough for what you need to do, and you only turn the instance on when you need it, so you don't get the massive bill that comes with a dedicated server of the same size.
There are some nice JavaScript-based visualization toolkits out there, so you can model your application to return raw (JSON) data and render it on the client.
I can mention d3.js (http://mbostock.github.com/d3/) and the JavaScript InfoVis Toolkit (http://thejit.org/).
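As a sketch of the "return raw JSON and render on the client" idea, here is a minimal standard-library endpoint that a toolkit like d3.js could fetch from; the payload shape and port are made up for illustration.

    import json
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Hypothetical payload; a real app would compute this from the dataset.
        payload = json.dumps({"points": [{"x": 1, "y": 2}, {"x": 2, "y": 5}]}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(payload)))])
        return [payload]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()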