I have a Django app that provides a REST API using Django REST Framework. The API is used by clients as expected, but I also have another process (on the same node) that uses the Django ORM to read the app's database, which is SQLite.
Is it better architecture for that process to use the REST API for its (read-only) access to the app's data? Or is there a better, perhaps more efficient, way than making a ton of HTTP requests from the same node?
The problem with the ORM approach (besides its hacky nature) is that reads occasionally fail and must be retried. Also, I want to write to the app's db, which would probably cause more SQLite concurrency issues.
It depends on what your application is doing. If your REST application reads a piece of data from SQLite using the Django ORM and the other app then does a write, you can run into some interesting race conditions. To prevent that, it might make sense to have both applications as Django apps in a single Django project.
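Whichever direction you take, much of the "reads fail and must be retried" pain comes from SQLite's default rollback journal, where a writer blocks readers. Enabling WAL mode plus a busy timeout usually helps; here is a minimal stdlib sketch (the throwaway path and schema are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Illustrative only: a throwaway database file standing in for the app's db.
db_path = os.path.join(tempfile.mkdtemp(), "app.sqlite3")

writer = sqlite3.connect(db_path, timeout=10)  # wait up to 10 s on a locked db
# Switching to write-ahead logging means readers no longer block the writer.
mode = writer.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
writer.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
writer.execute("INSERT INTO items (name) VALUES (?)", ("widget",))
writer.commit()

# A second, independent connection (your other process) can read concurrently.
reader = sqlite3.connect(db_path, timeout=10)
rows = reader.execute("SELECT name FROM items").fetchall()
print(mode, rows)
```

Django's SQLite backend passes `OPTIONS` through to `sqlite3.connect`, so the same timeout can be set in the `DATABASES` setting rather than in raw connection code.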
Related
I'm trying to build a web app that provides an interface for running queries over data extracted from another public API server.
To complete the queries in real time, I would have to prefetch the data from the public API server.
So I think it is reasonable to separate the app that deals with query input (there is still some logic here, so JavaScript alone wouldn't be enough) from the app that runs in the background and builds the database that can answer the queries in real time.
What comes to mind, then, is: does this background app really have to be part of the Django project? It runs almost without interacting with any Django component, except for the database (which is also accessed by a Django app) and maybe some signals (when to start/stop collecting data, though this could probably also be decided internally).
What would be a good design choice for my situation?
I was tasked with building a framework to unify half a dozen Python web apps that share some common functionality.
The current setup consists of an Apache web server serving up a specific CGI script (one for each app) that talks to a simple XML-RPC service (again, a different service for each app), which in turn runs db queries, writes files to disk, runs external scripts, etc.
The idea was to do away with redundant code (such as six different RPC services creating and managing six different connections to the same db, executing very similar queries, etc.), as well as the whole XML-RPC thing. Also, to keep things in Python land and maintain a double layer of abstraction: the client-side app can't talk directly to the API that runs the queries and writes to disk; it must talk to an intermediate API (currently the CGI scripts) that in turn talks to the processes that perform all the querying / disk writing.
Since this is my first major Python undertaking, I am somewhat struggling to find the correct, DRY, modular, and Pythonic way of architecting this framework.
Ideally, I would like to construct a REST API that the code on the client side (currently implemented in React.js) can talk to directly.
This API should then be able to interface with another, internal REST API (currently this is handled by the XML-RPC processes) that will do all the heavy lifting: the aforementioned db querying (perhaps using an ORM), executing tasks, writing files, and the like.
However, I'm not sure how I should implement the inter-API communication. Should it be over HTTP (even if the APIs live on the same server), or some other protocol I'm not aware of?
Also, how should I go about consolidating the common bits of code?
Should there be a separate module / process that keeps a live connection to our db, with the disparate APIs connecting to it?
How can I implement an ORM with such a scheme? I don't want new developers querying the db directly; I would like them to talk to an explicit model instead.
I know this is a long and convoluted way of saying "I'm stuck", and for that I apologize and would happily provide more detail if asked. However, what I'm asking for is more a set of general guidelines:
How should different REST APIs (implemented using Flask + SQLAlchemy, for example) talk to each other, whether they live on the same server or on different ones?
How should I abstract away common tasks like connecting to a db into a separate process?
Finally, what are some good, common-sense rules for keeping this whole thing modular?
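On the inter-API question: one alternative to HTTP between services on the same box is to make the inner layer a plain importable package (a service layer) that every front end calls as ordinary functions. A toy sketch, with made-up names and an in-memory stand-in for the real db:

```python
# services/users.py -- a plain-Python service layer (all names illustrative).
# Every front-end API imports these functions instead of speaking HTTP to an
# internal service; this is also the "explicit model" new developers talk to.

_DB = {"users": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]}  # stand-in for a real db


def get_user(user_id):
    """Single, shared query path -- every caller goes through here."""
    for row in _DB["users"]:
        if row["id"] == user_id:
            return dict(row)
    raise KeyError(user_id)


# A Flask view (or any other front end) then becomes a thin wrapper:
# @app.route("/users/<int:user_id>")
# def user_view(user_id):
#     return jsonify(get_user(user_id))

result = get_user(2)
print(result)
```

This keeps one db-access path to consolidate, and swapping the dict for SQLAlchemy models later doesn't change any caller.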
I'm in the process of setting up a new web app and deciding whether to just do it with WSGI or go the full framework route with Django.
The app's foremost requirements:
1) The app has no UI whatsoever, and all of the data is exposed to clients via a REST API with JSON.
2) It will have data to persist, so MongoDB and probably Amazon SimpleDB will be used on the database side.
Is there a reason to use Django, or can I get a marginal speed improvement with WSGI only?
Previous server-side apps I've built used either Java/Struts or Groovy/Grails on the JVM. My understanding is that Django is an MVC framework similar to Rails and Grails.
I've also played around with Google App Engine, which uses WSGI as a thin layer above your code for managing and routing requests.
I suggest you consider something between those two extremes. Flask is lightweight, very easy to use, and connects to your web server via WSGI. You can use regular Python database connectors with it, and a few databases even have Flask-specific extension modules.
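A minimal sketch of what that looks like (the route and payload are invented for the example):

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/status")
def status():
    # A JSON-only endpoint: no templates, no UI, just data for clients.
    return jsonify({"ok": True})


# Exercise the route without starting a server, via Flask's test client;
# in production you would expose `app` through your WSGI server instead.
resp = app.test_client().get("/api/status")
print(resp.status_code, resp.get_json())
```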
I have worked with Django on a couple of projects and I like it a lot, but since you are going to use MongoDB and a lot of JSON, I suggest you use Node.js on the server side, with Express as the framework. You can see a brief tutorial here:
http://howtonode.org/express-mongodb
One of the advantages of this is that you will use only JavaScript throughout your project. I began working with this technology last month at a hackathon, and I can tell you that I'm very impressed by how fast and simple it is.
I've worked a bit with some Django "apps"; it's really easy, but setting up the apps can be a bit of a long process. Django has a lot of nice features that you won't be using, and I agree that you might be at one "extreme" here.
I'm writing a syndication client, the aim being to have a client for devices and a web site with the same functionality. I shall develop the website using Django (this is already decided); the client shall be written in Python with both a CLI and a PyQt4 GUI. I have been writing the client first, and it's fairly database-heavy, as everything is cached so it can be read while offline.
It struck me today that it would make sense to use Django models for my application, to reduce the duplication of effort between the client and the website. My question is how easy this is to separate, and how much of Django I will need in my client to use Django's models. AFAIK I should not need to run the server, but what else is needed? I also had the idea of generating the same HTML for my client as for the website, but showing it within Qt widgets rather than serving pages to a browser.
Has anyone tried this sort of thing before? I'm starting on this already, but it would be good to get a warning of potential dead ends or things that will create a maintenance nightmare...
Read up on standalone Django scripts and you'll be on your path to victory. Basically all you're really doing is referencing the Django settings.py (which Django expects) and then using models without web views or urls.
If all you're really interested in is using Django's ORM to manage your models and database interaction, you might want to consider using SQLAlchemy instead.
You'll still have to run the Django app as a web server, but you can restrict it to serve only to localhost or similar. And sure, you can use QtWebKit as the client.
I am working on a webapp that interacts with data via XML-RPC rather than through a direct connection to a database. I can execute SQL queries via XML-RPC methods.
I would like to interact with the data in an ORM framework fashion that has lazy/eager fetching, etc., although I can't seem to figure out how that would be possible with Python or even Django's libraries.
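There's no drop-in ORM for this transport, but the lazy-fetching part can be sketched with a proxy object that defers the query until an attribute is first read. `fetch_row` stands in for whatever XML-RPC method runs the SELECT; all names here are illustrative, not a real library API:

```python
# A rough sketch of lazy attribute loading over a row-fetching callable.
# In real use, fetch_row would wrap an xmlrpc.client.ServerProxy method
# that runs "SELECT * FROM <table> WHERE id = ?" on the remote side.

class LazyRecord:
    def __init__(self, table, pk, fetch_row):
        self._table, self._pk, self._fetch_row = table, pk, fetch_row
        self._data = None  # nothing loaded yet

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. column names.
        if self._data is None:                       # first access triggers the fetch
            self._data = self._fetch_row(self._table, self._pk)
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)


# Demo with an in-process stand-in for the XML-RPC server:
def fake_fetch(table, pk):
    return {"id": pk, "title": "hello"}


rec = LazyRecord("articles", 7, fake_fetch)  # no round trip yet
print(rec.title)                             # fetches on first attribute access
```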
Check out XML Models. It's REST rather than XML-RPC, but much of it is probably reusable.
You would have to write your own database backend. Take a look at existing backends for how to do this.