I am going to write an HTTP (REST) client in Python. It will be a command-line tool with no GUI. There are no business-logic objects and no database; the tool just uses an API (via curl) to communicate with the server. Could you recommend some architectural patterns for this, other than Model-View-Controller?
Note: I am not asking about design patterns like Command or Strategy. I just want to know how to segregate and decouple abstraction layers.
I think using MVC is pointless given that there is no business logic; please correct me if I'm wrong. Please give me your suggestions!
Do you know any examples of CLI projects (in any language, not necessarily in Python) that are well maintained and with clean code?
Cheers
Since your app is not very complex, I see two layers here:
ServerClient: provides the API for remote calls and hides all the details. It knows how to access the HTTP server, handle auth, deal with errors, etc. It has methods like do_something_good() which anyone may call without caring whether the call is remote or not.
CommandLine: uses optparse (or argparse) to implement the CLI; it may support history etc. This layer uses ServerClient to access the remote service.
Neither layer knows anything about the other beyond the protocol (the list of known methods). This lets you swap HTTP REST for something else and the CLI will still work, or replace the CLI with batch files and the HTTP part will keep working.
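A minimal sketch of these two layers, assuming a hypothetical /something_good endpoint and using the requests library in place of curl for brevity:

import argparse
import requests

class ServerClient:
    """Hides all HTTP details: base URL, auth, error handling."""
    def __init__(self, base_url, token=None):
        self._base_url = base_url
        self._session = requests.Session()
        if token:
            self._session.headers["Authorization"] = "Bearer %s" % token

    def do_something_good(self, name):
        # Callers never see URLs, status codes or JSON decoding.
        resp = self._session.get("%s/something_good" % self._base_url,
                                 params={"name": name})
        resp.raise_for_status()
        return resp.json()

class CommandLine:
    """Knows only the client's method names, nothing about HTTP."""
    def __init__(self, client):
        self._client = client

    def run(self, argv=None):
        parser = argparse.ArgumentParser(prog="mytool")
        parser.add_argument("name")
        args = parser.parse_args(argv)
        print(self._client.do_something_good(args.name))

if __name__ == "__main__":
    CommandLine(ServerClient("https://api.example.com")).run()

Swapping the transport then only means writing another class with the same method names; CommandLine never changes.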
Related
Currently I am designing and programming a piece of software that controls a series of devices. The software is planned to have a REST interface through which you would be able to control the software (and the devices) remotely.
Now, a very basic abstraction of the architecture could look something like this:
As you can see, the system is composed of a master controller, which handles and monitors different modules that do not depend on each other. The Front End module is one example in the diagram; the others are general abstractions, but they could be anything (a Database module, a MessageBus module, etc.).
The actual REST interface implements data retrieval, data storage, and control commands.
My "problem" is that I can't decide how these "commands" should be propagated down the line.
Some cases of possible commands:
Command requesting to turn on/off, restart, or otherwise control a device handled by another module
Command requesting to restart/reload the software
Command to retrieve data from another module
Now I see a few possible ways of implementing the actual logic:
All received REST commands are dispatched through a message bus. In this case each request would receive a unique identifier which could then be used to retrieve the status of the request
All received REST commands make direct calls to other modules
Both of these have pros and cons:
The second method could very easily devolve into spaghetti code and would be hard to debug and extend, since there is a lot of multithreading across the different modules. But it is possibly the fastest way of handling a command and retrieving data, which matters since the project requires speed and responsiveness.
The first method lacks the advantages of the second, but it would help keep the code and architecture clean and free of dependencies between modules. Furthermore, a Console channel is also planned, which could in theory use the same mechanism.
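For what the first option could look like, here is a minimal in-process sketch: every command is pushed onto a bus with a unique ID that the REST layer can poll later. The bus, the handler, and the device name are all illustrative, not a real library:

import queue
import threading
import uuid

class MessageBus:
    def __init__(self):
        self._commands = queue.Queue()
        self._status = {}                 # request_id -> "pending" or result

    def dispatch(self, command):
        request_id = str(uuid.uuid4())
        self._status[request_id] = "pending"
        self._commands.put((request_id, command))
        return request_id                 # handed back to the REST caller

    def poll(self, request_id):
        return self._status.get(request_id, "unknown")

    def worker(self, handlers):
        # A module "subscribes" by running this loop in its own thread.
        while True:
            request_id, command = self._commands.get()
            result = handlers[command["action"]](command)
            self._status[request_id] = result

bus = MessageBus()
handlers = {"restart_device": lambda cmd: "restarted %s" % cmd["device"]}
threading.Thread(target=bus.worker, args=(handlers,), daemon=True).start()

rid = bus.dispatch({"action": "restart_device", "device": "pump-1"})
print(bus.poll(rid))   # "pending" at first, the result once processed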
There's another method that I thought of while brainstorming about the problem:
To force the REST channel to forward incoming requests to the actual FrontEnd module and then "wait" until it receives a response. The FrontEnd module would then have to call other modules directly for any information or actions requested.
This method, however, is not that "different" from method number 2.
Could anyone offer any advice? Perhaps ideas on the implementation or design decisions?
In case you are wondering, the software is being written in Python, but I don't think this is relevant to the question.
So basically we decided to ditch the RESTful way and went for an approach using sockets (websockets in particular).
The commands sent through the websockets are formatted as JSON and resemble REST in a way: a request contains a "URI", an "action" (get, put, post, etc.) and a "body".
A command comes in to the front-end control part of the system and is then pushed onto a message bus, where another part of the system has already subscribed to these commands. After it processes the data or executes a command, the result is returned through the message bus and dispatched to the client through the websocket.
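A sketch of what such a command and a trivial subscriber might look like; the field names and URI are illustrative of the idea rather than the project's actual schema:

import json

command = {
    "uri": "/devices/pump-1/power",
    "action": "put",                  # get, put, post, ...
    "body": {"state": "on"},
}

def handle(raw):
    # A subscriber on the bus decodes the command, acts on it, and
    # returns a reply that travels back to the websocket client.
    cmd = json.loads(raw)
    if cmd["action"] == "put" and cmd["uri"].endswith("/power"):
        return json.dumps({"status": 200, "body": cmd["body"]})
    return json.dumps({"status": 404, "body": None})

print(handle(json.dumps(command)))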
I'm writing a service (daemon) which provides web-unrelated features to my users.
I'd like to implement a minimal web server in that service such that a user can connect on, say, http://localhost:5000 and get an overview of the service's current status. I've read a lot on embedding vs. extending Python, and on how the latter seems to be recommended. However, I can't decide how to design this: my service's entry point has to be on the C++ side (it's a system daemon, and one might not want to compile it with the web-server feature).
I'd like to use something like Django to be able to handle the requests (routing, security, whatever) on the Python side, where many, many things already exist. I already have a minimal HTTP server in C++ (mongoose) that can process simple requests, but I'd really like to delegate the actual processing to Python, pretty much the way WSGI does.
What is a good approach here?
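One way to picture the Python side: a single entry-point function that the embedded interpreter exposes, which builds a WSGI environ from whatever mongoose already parsed and calls the app. This is only a sketch under that assumption; handle_request() and the trivial app are names invented here, and a real setup would plug in a Django/Flask application object instead:

import sys
from io import BytesIO

def wsgi_app(environ, start_response):
    # Stand-in for a real Django/Flask WSGI application object.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"service status: running\n"]

def handle_request(method, path, body=b""):
    """Entry point for the C++ daemon: build a WSGI environ, call the
    app, and hand back (status, headers, body) as plain Python objects."""
    environ = {
        "REQUEST_METHOD": method,
        "PATH_INFO": path,
        "SERVER_NAME": "localhost", "SERVER_PORT": "5000",
        "wsgi.version": (1, 0), "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(body), "wsgi.errors": sys.stderr,
        "wsgi.multithread": False, "wsgi.multiprocess": False,
        "wsgi.run_once": False,
    }
    captured = {}
    def start_response(status, headers):
        captured["status"], captured["headers"] = status, headers
    body_out = b"".join(wsgi_app(environ, start_response))
    return captured["status"], captured["headers"], body_out

print(handle_request("GET", "/status"))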
Disclaimer: I'm not very familiar with any of the things mentioned in the question title.
Would it be possible to use a browser control (like Webkit) as a frontend for a WSGI app (using a framework like Flask) without starting a local WSGI server?
Basically the requests and responses would be managed by a middle layer between the HTML UI and the WSGI backend. A certain URI scheme could mean "local", for instance "local://" or something similar, and would be routed to the embedded WSGI app with all the original headers etc.
You would lose any features a normal WSGI server provides unless you implemented them yourself or somehow embedded a server that is also usable via an API instead of real HTTP requests.
Now that I think of it, this is the only real requirement: a WSGI server that is callable via an API and not just via real HTTP requests.
I know the usefulness of this is questionable (and maybe it doesn't even make sense). My question is whether this is possible at all.
EDIT: Here's another way of putting it:
I want a single codebase to be both a web app and a desktop app, using an HTML frontend and a Python backend. I don't want to run a server on any port for the desktop app. What's the easiest way to achieve this?
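For what it's worth, calling a WSGI app through a Python API with no socket involved is essentially what Werkzeug's test client does; a minimal Flask sketch:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello from the embedded app"

client = app.test_client()      # wraps werkzeug.test.Client
response = client.get("/")      # no server, no port: a plain function call
print(response.status, response.data)

A desktop shell would then only need to translate its "local://" requests into such client calls.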
It is in theory possible to write your own WSGI container that implements a full API and adapts it to WSGI. flup might bring some inspiration.
I haven't seen exactly what you're asking for -- a way to call WSGI through an API without actually connecting over the network. However, it shouldn't be that hard.
On a side note, you might want to look at PySide; of particular interest to you may be the ability to bind Python callables to DOM events, so if you're just looking to trigger Python code, that's an even shorter route.
If you give some more detail on what you're hoping to achieve we might be able to dial it in for you.
Reviving this, since we're facing the same problem and are about to scale things up from a single view/widget to the whole app.
What I did was to simply set the base URL to something where I serve static content, and from a QRC file that's easy:
html = jinjatemplate.render(...)
self._mainFrame.setHtml(html.decode('utf-8'), Qt.QUrl('qrc:///Orsync/html/'))
For the communication, our HTML uses AJAX via jQuery for most things. You could wrap that in a layer that does either $.post(...) or api.post(...), like this:
self._mainFrame.addToJavaScriptWindowObject('api', self._webapi)
You'd need to decode the URL and create a request object yourself, but maybe that's not too hard to do? We currently use very few URLs (which are mapped directly to Python objects/functions), so it's easy to do the mapping ourselves.
Data that goes back is just sent using QWebFrame.evaluateJavaScript(...), either as a direct Qt call or as a bunch of code lines fetched using $.getScript(...) (which simply evaluates the code it receives).
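Expanding the addToJavaScriptWindowObject(...) line above into a self-contained sketch (PyQt4/QtWebKit here; the api.post(...) method and its URL mapping are illustrative):

import json
import sys
from PyQt4 import QtCore, QtGui, QtWebKit

class WebApi(QtCore.QObject):
    @QtCore.pyqtSlot(str, str, result=str)
    def post(self, url, body):
        # Map the "request" straight onto Python instead of real HTTP.
        if url == "/status":
            return json.dumps({"ok": True})
        return json.dumps({"error": "not found"})

app = QtGui.QApplication(sys.argv)
api = WebApi()
view = QtWebKit.QWebView()
frame = view.page().mainFrame()
# Re-expose the object on every page load, as the JS context is recreated.
frame.javaScriptWindowObjectCleared.connect(
    lambda: frame.addToJavaScriptWindowObject("api", api))
view.setHtml("<script>document.write(api.post('/status', ''))</script>")
view.show()
app.exec_()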
I'm currently rebuilding things a bit using CherryPy, which maps URLs to Python objects/functions straight off, so I'm hoping there's something to be gained by that.
Otherwise, I wish one could run QWebKit over named pipes or something similarly local rather than a TCP socket. :)
Here is what I would like to do, and I want to know how some people with experience in this field do this:
With three POST requests I get from the HTTP server:
widgets and layout
app logic (minimal)
data
Or maybe it's better to combine the first two, or all three. I'm thinking of using PyQt: I think I can load .ui files, and I can parse JSON data. I just think it would be rather dangerous to pass code over a network to be executed on the client. If someone can hijack the connection, or change the app's settings so that it accesses a bogus server, that is nasty.
I want to do it this way because it keeps all the clients up to date. It's sort of like a web app, but simpler because of Qt. Essentially the "thin" app is just a minimal compiled Python file that loads data from a server.
How can I do this without introducing security issues on the client? Is HTTPS good enough? Is there a way to get PyQt to run in a sandbox of sorts?
PS. I'm not stuck on Qt or python. I do like the concept though. I don't really want to use Java - server or client side.
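For the layout-fetching step alone, here is a sketch of what the thin client could do, assuming PyQt4 and a placeholder URL; uic.loadUi accepts file-like objects, so the .ui XML never needs to touch disk:

import io
import sys
import requests
from PyQt4 import QtGui, uic

app = QtGui.QApplication(sys.argv)

# verify=True (the default) makes requests validate the server's cert,
# which addresses the bogus-server worry raised above.
resp = requests.post("https://example.com/app/layout", verify=True)
resp.raise_for_status()

window = uic.loadUi(io.StringIO(resp.text))   # build widgets from the XML
window.show()
app.exec_()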
Your desire to send "app logic" from the server to the client without sending "code" is inherently self-contradictory, though you may not realize that yet. Even if the "logic" you're sending is in some simplified ad-hoc "language" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby executing that code. You may "sandbox" things to some extent, but in the end, that's what you're doing.
To avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-).
Once you're using HTTPS, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, otherwise bytecode is fine) and the client save them to disk and import/reload them will be just fine. You'll basically be doing a variant of the classic "plugin architecture", where the plugins happen to be sent from the server instead of being found on disk in a given location; a sketch follows.
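A minimal sketch of that fetch-save-import loop, assuming the server serves plain Python source at a known URL (the URL and plugin name here are made up):

import importlib
import pathlib
import sys

import requests

PLUGIN_DIR = pathlib.Path("plugins")
PLUGIN_DIR.mkdir(exist_ok=True)
sys.path.insert(0, str(PLUGIN_DIR))

def fetch_plugin(name):
    # verify=True (the default) validates the server's HTTPS cert,
    # which is the protection discussed above.
    resp = requests.get("https://example.com/plugins/%s.py" % name,
                        verify=True)
    resp.raise_for_status()
    (PLUGIN_DIR / ("%s.py" % name)).write_text(resp.text)

fetch_plugin("reports")
reports = importlib.import_module("reports")   # importlib.reload() on updates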
Use a web browser: it is a well-documented system that does everything you want, and it is relatively fast to create simple graphical applications in a browser. Two examples for my reasoning:
The Sage math environment has built their graphical client as an application that runs in a browser, together with a local web-server.
There is the Pyjamas project, which compiles Python to JavaScript. This is IMHO worth a try.
Edit:
You could try PyPy's sandboxed interpreter as a secure Python interpreter for the code that was transferred over the network.
And then there is the simplest solution: simply send Python modules over the network, but sign and/or encrypt them. This is the way Linux distributions work. You store a cryptographic token on the local computer, and the server signs/encrypts the code with the matching token before it sends it. GPG should be able to do it.
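A sketch of the verify-before-running side of that scheme, shelling out to the gpg command line with a detached signature (file names and key setup are illustrative, and the signing key must already be in the local keyring):

import subprocess

def verify_and_load(module_path, sig_path):
    # gpg exits non-zero if the detached signature does not match,
    # so check=True aborts before any untrusted code runs.
    subprocess.run(["gpg", "--verify", sig_path, module_path], check=True)
    namespace = {}
    with open(module_path) as f:
        exec(compile(f.read(), module_path, "exec"), namespace)
    return namespace

plugin = verify_and_load("plugin.py", "plugin.py.sig")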
We use a lot of Python to do much of our deployment, and it would be handy to connect to our TFS server to get information on iteration paths, tickets, etc. I can see the web service, but I'm unable to find any documentation. I was just wondering if anyone knew of anything?
The web services are not documented by Microsoft, as they are not an officially supported route to talk to TFS. The officially supported route is to use their .NET API.
For your sort of application, the course of action I usually recommend is to create your own web service shim that lives on the TFS server (or another server) and uses their API to talk to TFS, but presents the data in a nice way to your application.
Their object model simplifies the interactions a great deal (depending on what you want to do), so it actually means less code overall -- but better-tested and testable code -- and you can also work around things such as the NTLM auth used by the TFS web services.
So, this question is friggin' old, but let me take a whack at it (since it keeps coming up in my Google searches).
There's no officially supported API for on-premises TFS (the MSFT-hosted one has http://www.visualstudio.com/en-us/integrate/api/overview).
That said, you can always use Fiddler (http://www.telerik.com/fiddler) or something like it to inspect the calls that the TFS web client makes to the server, and do your magic to turn those into the Python scripts you want.
You'll need to run your Python scripts under a service account that has TFS privileges appropriate to what it is trying to do (read, update, configure... whatever).
Since it sounds like you are just trying to read from TFS, this might be a really easy way for you to get what you want, since an HTTP GET to
http://yourserver/tfs/yourcollection/yourproject/_workitems#id=yourworkitemid
will hand you back (halfway) sane HTML payloads.
If you want lists of iterations or teams or whatever, then your service account needs the appropriate admin privileges, and you can hit things like
http://yourserver/tfs/yourcollection/yourproject/_admin/_iterations
and use that response.
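A sketch of that scraping approach in Python, assuming the third-party requests-ntlm package for the NTLM auth mentioned above; the placeholder URL is the one from this answer:

import requests
from requests_ntlm import HttpNtlmAuth

session = requests.Session()
# Credentials of the service account with the appropriate TFS privs.
session.auth = HttpNtlmAuth("DOMAIN\\tfs_service", "password")

resp = session.get(
    "http://yourserver/tfs/yourcollection/yourproject/_admin/_iterations")
resp.raise_for_status()
print(resp.text)   # (halfway) sane HTML, ready for e.g. BeautifulSoup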