I am trying to develop a Python PyQt program that allows users to enter personal particulars and review them at a later time for processing purposes.
The program will be used by fewer than 5 people at the same time, so I am thinking of using an SQLite3 database, as I believe it should be able to cope with that amount of traffic.
The framework I have in mind is that the clients will each have their own copy of my Python PyQt program on their machine. Whenever they perform any operation that requires a data read/write, the program will connect to the server through the internet and read from/write to the sqlite.db file on the server.
Basically, the server will be nothing but remote data storage.
Currently, I am able to create the required GUI for data input using various widgets like QLineEdit, QComboBox, QTextEdit and so on.
But I have never done network programming before, so I have no idea how to implement a server that stores the SQLite data file for my software. So my questions are:
(1) If I have a PC with a 24/7 internet connection, how do I set it up so that it can act as a server that stores the data file for my software?
(2) In what way can/should my program communicate with that server through the internet?
Even if you can't give me an exact answer, I would appreciate any information I can look up and study.
Any constructive advice will be appreciated.
FYI: all the PCs will be running Windows XP SP3 32-bit.
There are different ways for a client to communicate with a server.
You can use:
- XMLRPC to create an object with methods that are called on the server side.
- HTTP and REST for the server, with the requests or urllib library on the client. For this you can use Flask, Bottle, Django or other frameworks to create a website that serves the content; a minimal sketch of the server side follows this list.
- Pyro to remotely access objects on the server. Useful if the clients should also communicate with each other.
- Your own protocol. You will learn a lot, and come to value the other options.
The list is not complete.
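For the REST option, a minimal sketch of what the server side could look like with Flask, assuming an SQLite file on the server as in the question (the route, table and file names are made up for the example):

    # server.py - a minimal REST endpoint in front of the SQLite file (sketch)
    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    DB_PATH = "people.db"  # hypothetical database file

    @app.route("/persons", methods=["POST"])
    def add_person():
        data = request.get_json()
        with sqlite3.connect(DB_PATH) as conn:  # the context manager commits for us
            conn.execute("INSERT INTO persons (name, address) VALUES (?, ?)",
                         (data["name"], data["address"]))
        return jsonify({"status": "ok"}), 201

    @app.route("/persons/<int:person_id>")
    def get_person(person_id):
        with sqlite3.connect(DB_PATH) as conn:
            row = conn.execute("SELECT name, address FROM persons WHERE id = ?",
                               (person_id,)).fetchone()
        if row is None:
            return jsonify({"error": "not found"}), 404
        return jsonify({"name": row[0], "address": row[1]})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

The PyQt client would then call it with, e.g., requests.post("http://yourserver:8080/persons", json={"name": "Alice", "address": "..."}). Note that with this design only the server process ever touches the SQLite file, which sidesteps SQLite's weak support for concurrent writers over a network.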
I suggest that you have a look at XMLRPC first, if that fits. For question (2) I can say that many APIs use such an HTTP interface (Twitter, GitHub, Facebook, Google), and it is easy for other people to use, too.
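To illustrate the XMLRPC option, a minimal sketch using only the standard library (the method and file names are made up; on Python 2, which Windows XP machines would likely run, the modules are called SimpleXMLRPCServer and xmlrpclib instead):

    # server.py - expose read/write methods over XMLRPC (sketch)
    import sqlite3
    from xmlrpc.server import SimpleXMLRPCServer

    DB_PATH = "people.db"  # hypothetical database file

    def add_person(name, address):
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute("INSERT INTO persons (name, address) VALUES (?, ?)",
                         (name, address))
        return True

    server = SimpleXMLRPCServer(("0.0.0.0", 8000))
    server.register_function(add_person)
    server.serve_forever()

The client side is just as short:

    # client side, e.g. inside a PyQt slot
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://yourserver:8000/")
    proxy.add_person("Alice", "12 Example Road")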
Security is important; I am not an expert. If you send a username and password in plain text, then use SSL to encrypt the connection. If you cannot get SSL to work with Python, you can use stunnel.
I have spent quite some time now researching server backends/APIs/frameworks. I need a solution where I can store user content (JSON & binary data).
The obvious choice would be a REST API. The only missing element is a push feature for when data on the server changes and clients should be notified instantly. With more research on this matter I discovered the classic approaches (Comet, push, server-sent events, Bayeux, BOSH, ...) as well as the "new" league, WebSockets. I would definitely prefer the WebSocket approach, or using TCP sockets directly. But this post is not about the pros/cons of these two technologies, so please refrain from getting sidetracked in the comments.
At the moment, the following projects exist which are very similar to my needs:
- Simperium (simperium.com): this looks very promising, but sadly the core/server is not open source, and who knows when, if ever, that will change
- Realtime.co (framework.realtime.co/storage): a hosted service, but the same principle
- Some frameworks for building such servers, e.g. Atmosphere (Java, no WAMP), CometD (Java, project page looks like it's stuck in the '90s), Autobahn (Python, WAMP)
My actual favorite is the Autobahn framework (autobahn.ws), especially using the WAMP protocol (a subprotocol of WebSocket), as it offers exactly what I need. So the idea would be to build a Python backend/server with Autobahn|Python (based on the Twisted framework) which manages all socket (WAMP) connections and includes a PostgreSQL database for data storage. WAMP libraries already exist for all the desired clients. The server would need to be able to do the typical REST API things (see the sketch after this list):
- Send, update and delete requested data (JSON/binary) between server and clients
- Synchronization & automatic conflict management
- Offline handling when the connection breaks, with an automatic restart once the connection is available again
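To make the idea concrete, here is a minimal sketch of such an Autobahn component, shown with the asyncio flavor (the procedure and topic URIs are made up for the example, the database calls are omitted, and a WAMP router such as Crossbar.io is assumed to be running):

    # backend.py - minimal WAMP storage component (sketch)
    from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

    class StorageBackend(ApplicationSession):

        async def onJoin(self, details):
            async def store(key, value):
                # persist to PostgreSQL here (omitted), then push the
                # change to every client subscribed to the topic
                self.publish("com.example.storage.changed", key, value)
                return True

            await self.register(store, "com.example.storage.store")

    if __name__ == "__main__":
        runner = ApplicationRunner("ws://127.0.0.1:8080/ws", "realm1")
        runner.run(StorageBackend)

A client then calls session.call("com.example.storage.store", key, value) and subscribes to "com.example.storage.changed" to receive the pushes; that call/publish split is exactly the REST-plus-push combination described above.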
So finally the questions:
- Have I missed an open source project which covers exactly my needs?
- If I would like to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough deep understanding. I know Autobahn already gives you a server, but it is not very close to my final needs. How do I build the server efficiently so that it can handle all the connected sockets? How do I handle it when a client needs a server push? Are there schemas, models or concepts for how such a server should look?
- Twisted is a very powerful Python framework, but it is not regarded as the most convenient for writing apps. But I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop the server components with another Python framework, would this compromise latency/performance much?
- Is such a server backend with a lot of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build on sockets, or is this something only bigger companies like Dropbox can do at the moment?
Thank you very much for your help & time!
So finally the questions:
Have I missed an open source project which covers exactly my needs?
No, you've covered the open source projects. Open source only gets you about halfway there, though. Implementing a global realtime network requires equal parts implementation and operations. You have to think about dropped messages, retries, what happens if a particular geography gets hot, how you scale your servers, etc. I would argue that an open source solution won't achieve what you want unless you're willing to invest significant resources into operations. I would recommend a service like PubNub: http://pubnub.com
If I would like to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough deep understanding. I know Autobahn already gives you a server, but it is not very close to my final needs. How do I build the server efficiently so that it can handle all the connected sockets? How do I handle it when a client needs a server push? Are there schemas, models or concepts for how such a server should look?
A good database to back a realtime framework would be Cassandra, because it supports high write volumes and handles time-series data well: http://cassandra.apache.org/.
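For a sense of what that looks like from Python, a minimal sketch with the DataStax driver (the node address, keyspace and table are made up for the example and assumed to exist or be creatable):

    # sketch: time-series writes with the DataStax Python driver
    import datetime
    import uuid

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])   # hypothetical node address
    session = cluster.connect("app")   # hypothetical keyspace

    # one partition per user, events clustered by time within it
    session.execute("""
        CREATE TABLE IF NOT EXISTS events (
            user_id uuid, ts timestamp, payload text,
            PRIMARY KEY (user_id, ts)
        )
    """)
    session.execute(
        "INSERT INTO events (user_id, ts, payload) VALUES (%s, %s, %s)",
        (uuid.uuid4(), datetime.datetime.utcnow(), '{"type": "chat"}'),
    )

The (user_id, ts) primary key is what makes the time-series access pattern cheap: all of a user's events live in one partition, sorted by timestamp.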
Twisted is a very powerful Python framework, but it is not regarded as the most convenient for writing apps. But I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop the server components with another Python framework, would this compromise latency/performance much?
I would not use Twisted. I would use gevent: http://www.gevent.org/. It's coroutine-based, so you don't get into callback hell. To support more connections, you just increase the greenlet pool listening on the socket.
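A minimal sketch of that pattern (the port and the echo logic are placeholders):

    # sketch: socket server with a bounded greenlet pool
    from gevent.pool import Pool
    from gevent.server import StreamServer

    def handle(sock, address):
        # gevent spawns one lightweight greenlet per connection;
        # this handler just echoes lines back
        f = sock.makefile(mode="rwb")
        for line in f:
            f.write(line)
            f.flush()
        sock.close()

    pool = Pool(10000)  # cap on concurrent connections
    server = StreamServer(("0.0.0.0", 6000), handle, spawn=pool)
    server.serve_forever()

Raising the Pool size is the knob mentioned above: each connection costs a greenlet rather than an OS thread, so tens of thousands of concurrent sockets are feasible in one process.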
Is such a server backend with a lot of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build on sockets, or is this something only bigger companies like Dropbox can do at the moment?
Again, I would not build this on your own. A service like PubNub (http://pubnub.com), which takes care of all the operational issues for you and has a clean API, would serve your needs with minimal cost. PubNub handles the protocol for you, so if you're on a mobile device that doesn't support WebSockets, it will use TCP, HTTP or whatever the best transport for that device is.
I have been looking into using ZODB as a persistence layer for a multiplayer video game. I quite like how seamlessly it integrates with arbitrary object-oriented data structures. However, I am stumbling over one issue, and I can't figure out whether ZODB can resolve it for me.
Apparently, one can use ClientStorage from ZEO to access remote data storage for persistence. While this is great on a trusted local network, one can't do this without proper authorization and authentication on an open network.
So I was wondering if there is any chance to realize the following concept with ZODB:
On the server side I would like to have a ZEO server running, plus a simulation of the game world that might operate as a fully authorized client on the ZEO server (or use the same file storage as the ZEO server).
On the client side I'd need very restricted read/write access to the ZEO server, so that a client can only view the information its user is supposed to know about (e.g. the surrounding area of their character) and can only modify information related to the actions that their character can perform.
These restrictions would have to be imposed by the server using some sort of fine-grained authorisation scheme. So I would need to be able to tell the server whether user A has permission to read/write object B.
Now, is there a way to do this in ZODB, or are there third-party solutions for this kind of problem? Or is there a way to extend ZEO in this direction?
No, ZEO was never designed for such use.
It is designed for scaling ZODB access across multiple processes instead, with authentication and authorisation left to the application on top of the data.
I would not use ZEO for anything beyond a local network anyway. Use a different protocol to handle communication between the game clients and the game server instead, keeping ZODB strictly server-side.
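A minimal sketch of that arrangement, where the game server is the only process touching the database and every remote request passes an application-level permission check first (the helper names and the permission model are assumptions for the example):

    # sketch: game server mediating all access to ZODB
    import transaction
    from ZODB import DB, FileStorage

    storage = FileStorage.FileStorage("world.fs")  # hypothetical data file
    db = DB(storage)
    conn = db.open()
    root = conn.root()

    def handle_move(user, obj_id, new_position):
        # the server, not ZEO, decides what this user may touch
        if not user.may_write(obj_id):  # hypothetical permission check
            raise PermissionError("not allowed")
        root["objects"][obj_id].position = new_position
        transaction.commit()

Whatever protocol the clients speak (WebSocket, plain TCP, ...) only ever reaches handlers like handle_move, so the authorisation logic lives in exactly one place.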
I'm interested in something based on Jabber, but I didn't find a free/open-source one, so I'm thinking of writing one.
I've installed a Jabber server and am now thinking about the ways in which I could write the client. I'm thinking of one of these two methods:
1) An Ajax call made to a Jabber script running on the web server that takes care of connecting to the Jabber server. But then I thought that, because of the dependencies involved in the Jabber client, it might end up consuming too much memory when a few clients connect.
2) The other method is to run a client as a daemon that takes care of all the heavy lifting. This way I need only one instance of the client, which sends a spoofed message (the sender's name set to whatever the user entered on the site). A simple script running on the web server talks to this daemon over some sort of API (XMLRPC or MessagePack, maybe?).
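A rough sketch of option 2 in Python, with the actual XMPP delivery left as a stub (the names here are made up; deliver() would hand messages to a persistent connection kept open with a library such as xmpppy):

    # sketch: relay daemon exposing an XMLRPC API to the web server
    from xmlrpc.server import SimpleXMLRPCServer

    def deliver(sender, recipient, text):
        # stub: pass the message to the long-lived XMPP connection here,
        # spoofing `sender` as the displayed from-name
        print(f"{sender} -> {recipient}: {text}")

    def send_message(sender, recipient, text):
        deliver(sender, recipient, text)
        return True

    server = SimpleXMLRPCServer(("127.0.0.1", 9000))
    server.register_function(send_message)
    server.serve_forever()  # web-server scripts call send_message(...)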
I think #2 is better but I'm not sure. Are there other ways I can implement this? I'm considering using Perl or Python for this.
Jabber is usually called XMPP nowadays, and there are dozens of clients and servers, something for every language. If you are using JavaScript (you mention Ajax), you probably want Strophe. Most servers are modular, so you only load the features you need (consider Tigase, ejabberd, or xmpppy). Writing your own is an even worse idea than it sounds.
BOSH
Install Prosody, because it is really easily installed and has BOSH support built in. You could skip this, but then you would need to find out how to use BOSH via ejabberd.
Use strophe.js to implement this (using BOSH). New browsers support cross-domain requests (CORS; read the proxy-less BOSH part). For old browsers you could use a proxy, or use Flash in the middle as a proxy.
Read Professional XMPP Programming with JavaScript and jQuery to learn Strophe. It even has chapters explaining how to create a chat.
Node.js
Or you could consider installing Node.js and creating your chat system using socket.io.
Here is what I would like to do, and I want to know how some people with experience in this field do this:
With three POST requests I get from the HTTP server:
- widgets and layout
- app logic (minimal)
- data
Or maybe it's better to combine the first two, or all three. I'm thinking of using PyQt. I think I can load .ui files, and I can parse JSON data. I just think it would be rather dangerous to pass code over a network to be executed on the client: if someone can hijack the connection, or change the app's settings so that it accesses a bogus server, that is nasty.
I want to do it this way because it keeps all the clients up to date. It's sort of like a web app, but simpler because of Qt. Essentially the "thin" app is just a minimal compiled Python file that loads data from a server.
How can I do this without introducing security issues on the client? Is HTTPS good enough? Is there a way to get PyQt to run in a sandbox of sorts?
PS. I'm not stuck on Qt or Python; I do like the concept, though. I don't really want to use Java, server or client side.
Your desire to send "app logic" from the server to the client without sending "code" is inherently self-contradictory, though you may not realize that yet. Even if the "logic" you're sending is in some simplified ad-hoc "language" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby executing that code. You may "sandbox" things to some extent, but in the end that's what you're doing.
To avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-).
Once you're using HTTPS, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, else bytecode is fine) and having the client save them to disk and import / reload them will be just fine. You'll basically be doing a variant of the classic "plugin architecture", where the plugins happen to be sent from the server instead of being found on disk in a given location.
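A minimal sketch of that plugin-over-HTTPS variant (the URL and module name are made up; on current Python versions urlopen validates the server certificate by default for https:// URLs):

    # sketch: fetch a plugin module over HTTPS, save it, import/reload it
    import importlib
    import pathlib
    import urllib.request

    PLUGIN_URL = "https://yourserver.example/plugins/app_logic.py"  # hypothetical

    def update_plugin():
        # cert validation happens inside urlopen for https:// URLs
        source = urllib.request.urlopen(PLUGIN_URL).read()
        pathlib.Path("plugins").mkdir(exist_ok=True)
        pathlib.Path("plugins/app_logic.py").write_bytes(source)

    update_plugin()
    import plugins.app_logic as app_logic    # first load
    app_logic = importlib.reload(app_logic)  # pick up later updates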
Use a web browser: it is a well-documented system that does everything you want. It is also relatively fast to create simple graphical applications in a browser. Examples for my reasoning:
The Sage math environment has built its graphical client as an application that runs in a browser, together with a local web server.
There is the Pyjamas project, which compiles Python to JavaScript. This is IMHO worth a try.
Edit:
You could try PyPy's sandboxed interpreter as a secure Python interpreter for the code that was transferred over the network.
And then there is the simplest solution: simply send Python modules over the network, but sign and/or encrypt them. This is the way all Linux distributions work. You store a cryptographic key on the local computer; the server signs/encrypts the code before it sends it, with the matching key. GPG should be able to do it.
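A sketch of that verify-before-import idea, shelling out to the gpg command line (the file names are placeholders; the server would ship app_logic.py together with a detached signature made with a key the client already trusts):

    # sketch: refuse to import a downloaded module unless its signature checks out
    import importlib
    import subprocess

    def verify_and_import(module_file, sig_file):
        # `gpg --verify sig file` exits non-zero when the signature is bad
        # or was not made by a key in the local keyring
        subprocess.run(["gpg", "--verify", sig_file, module_file], check=True)
        return importlib.import_module(module_file.removesuffix(".py"))

    app_logic = verify_and_import("app_logic.py", "app_logic.py.sig")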
I'm building a website where I hook people up so that they can anonymously vent to strangers. You either choose to be a listener or a talker, and then you get catapulted into a one-on-one chat room.
The reason for the app's construction is because you often can't vent to friends, because your deepest vulnerabilities can often be leveraged against you later on. (Like it or not, this is a part of human nature. Sad.)
I'm looking for some insight into how I should architect everything. I found this neat tutorial, http://giantflyingsaucer.com/blog/?p=875, which suggests using Python & Stackless + Flash. Someone else suggested I try using P2P sockets, but I don't even know where to begin looking for info on that.
Any other suggestions? I'd like to keep it simple. :^)
Unless you expect super high load, this is simple enough that it doesn't really matter what you use on the backend: just pick something you're comfortable with. PHP, Python, Ruby, even a bash script using CGI: your skill level with the language is likely to make more difference than the language features themselves.
I would use an XMPP server like ejabberd or OpenFire to power the backend. XMPP contains everything you need for creating chat/real-time applications. You can use a Flex/Flash Actionscript library like Actionscript 3 XIFF to communicate with the XMPP server.
Flash is user-unfriendly for UI (forms, etc.), and it is relatively easy to do what you want using HTML and JavaScript on the front end.
One possible approach to reading the messages would be to regularly make an Ajax request to the server for any new messages, then format each new message and insert it into the DOM.
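Server-side, the endpoint that the Ajax poll hits could look something like this minimal Flask sketch (the route, the in-memory list and the since parameter are all assumptions for the example):

    # sketch: endpoint the Ajax poll would hit for new messages
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MESSAGES = []  # in-memory stand-in for real storage

    @app.route("/messages")
    def messages():
        since = int(request.args.get("since", 0))
        # return everything newer than the client's last-seen index
        return jsonify(new=MESSAGES[since:], next=len(MESSAGES))

The page polls /messages?since=<next> every few seconds, remembers the returned next value, and appends each entry of new to the DOM.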
You will probably need to answer at least these questions before you continue, though:
1) Are you recreating IRC (everyone sees your posts), or is this a random one-to-one chat, like Chatroulette?
1a) Is this a way for a specific person to talk to another specific person, or is this more like Twitter?
2) What is your plan for scaling up if this idea takes off? Memcached should probably be a method of last resort ("a bandaid over a bullet hole"). What's your roadmap for eventually handling a large volume of messages?
3) Is there any way to ignore users? Talk to certain users? Hide your rants from users?
Hey Zach, I had to create a socket server for a Flash game I made. I built my server in C#, but I would use whatever language you're familiar with. If you let me know what you're most comfortable with, I could try to help find a good tutorial.
The one thing I spent many hours on was getting Flash to work from a website with a socket server. With the newer versions of Flash you need to send back a policy file; in my case this needed to be the first chunk of data sent back to the client when it connected to the socket server.
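The handshake itself is tiny in any language; here is a minimal Python sketch of the idea (the wide-open policy below is only an example, tighten it for production; Flash expects the policy XML followed by a NUL byte):

    # sketch: answer Flash's <policy-file-request/> before any game traffic
    import socket

    POLICY = (b'<?xml version="1.0"?>'
              b'<cross-domain-policy>'
              b'<allow-access-from domain="*" to-ports="*"/>'
              b'</cross-domain-policy>\0')

    srv = socket.socket()
    srv.bind(("0.0.0.0", 8430))  # hypothetical game port
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        request = client.recv(1024)
        if request.startswith(b"<policy-file-request/>"):
            client.sendall(POLICY)  # the XML plus the trailing NUL byte
            client.close()          # Flash then reconnects for the real session
        else:
            pass  # hand the connection over to the game protocol here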
I'm not sure what to tell you about structuring the back end; I'd need to know a little bit more about your programming experience. I had an array of all the user connections, and I placed them in different "rooms" so they could play each other. So just some simple arrays, plus understanding how to send messages to the clients, would help you here.
If you have any familiarity with C#, I would have no problem sending you the source code for my socket server.