Python Module in Django

I am building my first Django web application and I need a bit of advice on code layout.
I need to integrate it with several other applications that are exposed through RESTful APIs, as well as with Django's internal data. I need to develop a component that pulls data from these various sources (including Django's DB), formats it consistently for output, and returns it for rendering by the template.
I am thinking of the best way to write this component. There are a couple of ways to proceed and I wanted to solicit some feedback from more experienced web developers on things I may have missed.
Way #1
Develop completely standalone objects for interacting with the other applications via their APIs. This module would have no dependency on Django and could be tested independently. Then, in Django, import the module in the views that need it and run object methods to get the required data.
If I need to access any of this functionality via a GET request (for example, from JavaScript), I can have a dedicated view that imports the module and returns JSON.
Way #2
Develop this entirely as Django view(s), exposed as a series of GET/POST endpoints that call each other to get the work done. This would be directly tied into the application.
Way #3
Start with Way #1, but instead of creating a view, package it as a django app. Develop unit tests on the app as well as the individual objects.
I think that Way #1 or #3 would be well encapsulated and controlled.
Way #2 would be more complicated, but would facilitate higher component re-use.
What is better from a performance standpoint? If I roll with Way #1 or #3, would an instance of the object be instantiated for each request?
If so, this approach may be a bit too heavy. If I proceed with it, can the objects be singletons?
Anyway, I hope this makes sense.
Thanks in advance.

Definitely go for "way #1". Keeping an independent layer for your service(s) API will help a lot later, when you have to enhance that layer for reliability or extend the API.
Stay away from singletons; they're just global variables under a new name. Use an appropriate life cycle for your interface objects. The obvious "instantiate for each request" is not the worst idea, and it's easier to optimize that later if needed (by caching and/or memoization) than it is to unroll global variables everywhere.
Keep in mind that web applications are supposed to run as several processes in a "shared nothing" design; the only shared resources must be external to the app: database, queue managers, cache storage.
Finally, try to avoid using this API directly from the view functions/CBVs. Either use it from your models, or write a layer conceptually similar to the way models are used from the views. There's no need for an ORM-like API, but keep any business process away from the views, which should be concerned only with presentation and high-level operations. Think "thick models, thin views", with your APIs as a new kind of model.
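As an illustration, here is a minimal sketch of what that layering could look like; the client class, endpoint, and view are all hypothetical, and the third-party requests library stands in for whatever HTTP client you prefer:

# api_client.py -- standalone, importable and testable without Django.
import requests

class InventoryClient:
    """Hypothetical wrapper around one remote REST service."""

    def __init__(self, base_url, timeout=5):
        self.base_url = base_url.rstrip('/')
        self.timeout = timeout

    def list_items(self):
        response = requests.get(f'{self.base_url}/items/', timeout=self.timeout)
        response.raise_for_status()  # surface HTTP errors to the caller
        return response.json()

# views.py -- the view stays thin: orchestrate, then render.
from django.http import JsonResponse

client = InventoryClient('https://api.example.com')  # one instance per process, not per request

def items_json(request):
    return JsonResponse({'items': client.list_items()})

Note the module-level client: each worker process builds one instance at import time rather than per request, which answers the instantiation worry above without resorting to a singleton.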

Related

How do I efficiently understand a framework with sparse documentation?

I have the problem that for a project I need to work with a Python framework that has poor documentation. I know what it does, since it is the back end of a running application. I also know that no framework is good if the documentation is bad, and that I should probably code it myself. But I have a time constraint. Therefore my question is: is there a cooking recipe for how to understand a poorly documented framework?
What I have tried until now is checking some functions and identifying the organizational units in the framework, but I am lacking a system to do it more effectively.
If I were you, with time constraints and bound to use a specific framework, I'd go about it in the following manner:
List down the use cases I desire to implement using the framework
Identify the APIs provided by the framework that helps me implement the use cases
Prototype the use cases based on the available documentation and reading
The prototyping is not implementing the entire use case, but identifying the building blocks around the case and implementing them. E.g., if my use case is to fetch the students along with their courses, and if I were using Hibernate to implement it, I would prototype the database access, validating how easily I am able to access the database using Hibernate, or how easily I am able to get the relational data by means of joins/aggregation, etc.
The prototyping will help me figure out the possible limitations/bugs in the framework. If the limitations are show-stoppers, I will implement the supporting APIs myself; or I can take the call to scrap the entire framework and write one myself, whichever makes more sense.
You may also use the Python debugging library pdb. After importing it with import pdb, you may set traces in the body of functions and classes with pdb.set_trace(). The program will then stop execution at that line and let you inspect the existing variables and step through what happens next.
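A tiny illustration of that workflow (the function and data are made up):

import pdb

def build_report(records):
    totals = {}
    for record in records:
        pdb.set_trace()  # execution stops here; inspect record and totals, step with 'n', continue with 'c'
        totals[record['name']] = record['value']
    return totals

build_report([{'name': 'a', 'value': 1}, {'name': 'b', 'value': 2}])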

Django calling REST API from models or views? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have to call external REST APIs from Django. The external data source schemas resemble my Django models. I'm supposed to keep the remote data and the local data in sync (maybe not relevant to the question).
Questions:
What is the most logical place from where to call external web services: from a model method or from a view?
Should I put the code that calls the remote API in external modules that will then be called by the views?
Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"?
Thanks
EDIT: for the people willing to close this question: I've broken the question down into three simple questions from the beginning, and I've received good answers so far, thanks.
I think where to call web services from is a matter of opinion. I would say don't pollute your models, because that means you probably need instances of those models to call these web services, which might not make any sense. Your other choice there is to make things @classmethods on the models, which is not very clean design, I would argue.
Calling from the view is probably more natural if accessing the view itself is what triggers the web service call. Is it? You said that you need to keep things in sync, which points to a possible need for background processing. At that point you can still use views, if your background processes issue HTTP requests, but that's often not the best design. If anything, you would probably want your own REST API for this, which necessitates separating the code from your average web site view.
My opinion is that these calls should be placed in modules and classes specifically encapsulating your remote calls and their processing. This makes things flexible (background jobs, signals, etc.) and also easier to unit test. You can trigger this code from the views or elsewhere, but the logic itself should be separate from both the views and the models, to decouple things nicely.
You should imagine that this logic should exist on its own if there was no Django around it, then build other pieces that connect that logic to Django (ex: syncing the models). In other words, keep things atomic.
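As a sketch of that separation, under assumed names (the endpoint, the Student model, and the module split are all invented for illustration; the requests library does the HTTP):

# students_api.py -- remote logic that would exist even without Django.
import requests

API_URL = 'https://example.com/api/students/'  # hypothetical endpoint

def fetch_remote_students():
    response = requests.get(API_URL, timeout=10)
    response.raise_for_status()
    return response.json()

# sync.py -- a separate, small piece glues that logic to the Django models.
from myapp.models import Student  # hypothetical model

def sync_students():
    for data in fetch_remote_students():
        Student.objects.update_or_create(
            remote_id=data['id'],
            defaults={'name': data['name']},
        )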
Yes, same reasons as above, especially flexibility. Is there any reason not to?
Yes, simply create the equivalent of an interface and have each class map to it. If the fields are the same and you are lazy, in Python you can just dump the fields you need as dicts into the constructor (using **kwargs) and be done with it, or rename the keys using some convention you can process. I usually build some sort of simple data-mapper class for this and process the Django or REST models in a list comprehension, but there's no need if things match up as I mentioned.
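A possible sketch of that **kwargs/data-mapper idea (the field names and the renaming convention are invented for illustration):

KEY_MAP = {'fullName': 'name', 'id': 'remote_id'}  # hypothetical renaming convention

def remap(record):
    # rename remote keys to the local convention; pass unknown keys through
    return {KEY_MAP.get(key, key): value for key, value in record.items()}

class LocalStudent:
    def __init__(self, name=None, remote_id=None, **extra):
        self.name = name
        self.remote_id = remote_id

# process the REST records in a list comprehension, as described
remote_records = [{'fullName': 'Ada', 'id': 7}]
students = [LocalStudent(**remap(record)) for record in remote_records]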
Another related option is to dump things into a common structure in a cache such as Redis or Memcached. It might be wise to update this info atomically if you are concerned with "freshness." But in general you should have a single source of authority that can tell you what is actually fresh. In sync situations, though, I think it's better to pick one or the other, to keep things predictable and clear.
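For example, with Django's cache framework (the configured backend could be Redis or Memcached; the key, timeout, and fetch callable below are assumptions):

from django.core.cache import cache

def get_students(fetch_remote):
    students = cache.get('students')  # None on a cache miss
    if students is None:
        students = fetch_remote()  # hypothetical callable hitting the REST API
        cache.set('students', students, timeout=300)  # treat the data as "fresh" for 5 minutes
    return students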
One last thing that might influence your design: by definition, keeping things in sync is a difficult process. Syncs tend to be very prone to failure, so you should have some sort of durable mechanism, such as a task queue or job system, for retries. Always assume when calling a remote REST API that calls can fail for crazy reasons such as network hiccups. Also keep in mind transactions and transactional behavior when syncing. Since these are important, it points again to the fact that if you put all this logic directly in a view, you will probably run into trouble reusing it in the background without abstracting things a bit anyway.
What is the most logical place from where to call external web services: from a model method or from a view?
Ideally your models should only talk to database and have no clue what's happening with your business logic.
Should I put the code that calls the remote API in external modules that will then be called by the views?
If you need to access them from multiple modules, then yes, placing them in a module makes sense. That way you can reuse them efficiently.
Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"?
Of course it's possible. You can decide how to fetch your data on each request. But the more efficient approach might be to avoid that logic entirely: just sync your local data with the remote data and show the local data in the views.

Consuming a RESTful API with Django

I'm building a Django application that needs to interact with a 3rd party RESTful API, making various GETs, PUTs, etc to that resource. What I'm looking for is a good way to represent that API within Django.
The most obvious, but perhaps less elegant, solution seems to be creating a model that has various methods mapping to web-service queries. On the other hand, it seems that using something like a custom DB backend would provide more flexibility and be better integrated into Django's ORM.
Caveat: This is the first real project I've done with Django, so it's possible I'm missing something obvious here.
The requests library makes it easy to write a REST API consumer. There is also a Python library called slumber, which is built on top of requests, for the explicit purpose of consuming REST APIs. How well that will work for you likely depends on how RESTful the API really is.
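For instance, the GETs and PUTs from the question might look like this with requests (the URL, resource path, and payload below are placeholders, not a real API):

import requests

BASE = 'https://api.example.com/v1'  # hypothetical service

item = requests.get(f'{BASE}/items/42', timeout=5).json()  # GET a resource as native types
item['name'] = 'renamed'
response = requests.put(f'{BASE}/items/42', json=item, timeout=5)  # PUT it back as JSON
response.raise_for_status()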
Make the REST calls using the built-in urllib (a bit clunky, but functional) and wrap the interface in a class, with a method for each remote call. Your class can then translate to and from native Python types. That's what I'd do, anyway!
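A rough sketch of that wrapper, using only Python 3's standard-library urllib.request (the endpoint paths and method names are hypothetical):

import json
import urllib.request

class RemoteService:
    def __init__(self, base_url):
        self.base_url = base_url.rstrip('/')

    def _call(self, path, data=None, method='GET'):
        body = json.dumps(data).encode('utf-8') if data is not None else None
        request = urllib.request.Request(
            f'{self.base_url}/{path}',
            data=body,
            method=method,
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode('utf-8'))  # back to native types

    def get_item(self, item_id):
        return self._call(f'items/{item_id}')

    def update_item(self, item_id, payload):
        return self._call(f'items/{item_id}', data=payload, method='PUT')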

Understanding Zope internals, from Django eyes

I am a newbie to Zope, and I previously worked with Django for about 2.5 years. So when I first jumped into Zope (v2) (only because my new company has been using it for 7 years), I faced these questions. Please help me understand them.
What is the "real" purpose of the zodb as such? I know what it does, but tell me one great thing that the zodb does which a framework like Django (which doesn't have the zodb) misses.
Update: Based on the answers, the zodb replaces the need for an ORM. You can store objects directly inside the db (the zodb itself).
It is said that one of Zope's killer features is the TTW (Through The Web, i.e. developing using the ZMI) philosophy. But I (and any developer) prefer file-system-based development (using version control, Eclipse, or any favorite tool outside Zope). So where is this TTW actually used?
This is the big one. What "EXTRA stuff" does Zope's Acquisition gain compared to Python/Django inheritance?
Is it really a good move to come to Zope from Django?
Any site like djangosnippets.org for Zope(v2)?
First things first: current zope2 versions include all of zope3, too. And if you look at modern zope2 applications like Plone, you'll see that it uses a lot of "zope 3" (now called the "zope tool kit", ZTK) under the hood.
The real purpose of the ZODB: it is one of the few object databases (as opposed to relational SQL databases) that sees real widespread use. You can "just" store all your python objects in there without needing to use an object-relational mapper. No "select * from xyz" under the hood. And adding a new attribute on a zodb object "just" persists that change. Luxurious! Especially handy when your data cannot be handily mapped to a strict relational database. If you can map it easily: just use such a database, I've used sqlalchemy a few times in zope projects.
TTW: we've come back from that. At least, the zope2 way of TTW indeed has all the drawbacks that you fear. No version control, no outside tools, etc. Plone is experimenting (google for "dexterity") with nice explicit zope 3 ways of doing TTW development that can still be mapped back to the filesystem.
TTW: the zodb makes it easy and cheap to store all sorts of config settings in the database, so you can typically adjust a lot of things through the browser. This doesn't really count as typical TTW development, though.
Acquisition: a handy trick, though it leads to huge namespace pollution. A double-edged sword. To improve debuggability and maintenance, we try to do without it in most cases. The acquisition happens inside the "object graph", so think "folder structure inside the zope site": a call to "contact_form" three folders down can still find the "contact_form" on the root of the site if it isn't found somewhere in between. Double-edged sword!
(And regular python object oriented inheritance happens all over the place of course).
Moving from django to zope: a really good idea for certain problems and nonsensical for others :-) Quite a lot of zope2/plone companies have actually done django projects for specific cases, typically those that have 99 percent of their content in a relatively straightforward SQL database. If you're more into content management, zope (and plone) is probably better.
Additional tip: don't focus only on zope2. Zope3's "component architecture" has lots of functionality for creating bigger applications (also non-web). Look at grok (http://grok.zope.org) for a friendly packaged zope, for instance. The pure component architecture is also usable inside django projects.
On the ZODB:
Another way to ask "What is the real purpose of the ZODB?" is to ask, "Why was the ZODB originally created?"
The answer to that is that the project was started very early on, around 1996. This was before the existence of MySQL or PostgreSQL, when the miniSQL database (free to use, but not free software) was still in common use, alongside big-money databases such as Oracle. Python provided the pickle module to serialize Python objects to disk, but serialization is lower level: it doesn't allow for features such as transactions, concurrent writes, and replication. This is what the ZODB provides.
It's still in use today in Zope because it works well. If you have no existing skillset in relational databases, it's easier to learn to use the ZODB than a relational database. It's also usable for simpler use-cases; for example, if you have a command-line script that needs to store some configuration information, using a relational database means having to run a database server just to store a little bit of configuration. You could use a config file, but the ZODB also works quite nicely because it's an embeddable database. That means the database runs in the same process as the rest of your Python code.
It's also worth noting that the API used to store objects inside containers is different between Zope 2 and Zope 3. In Zope 2, containers are stored as attributes:
root.mycontainer.myattr
In Zope 3, they use the same interface as Python standard dictionary type:
root['mycontainer'].myattr
This is another reason why it can be easier to learn to use the ZODB than the Django ORM, since Django has its own interface for its ORM which is distinct from Python's existing interfaces.
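A minimal sketch of that workflow, assuming the ZODB, transaction, and persistent packages (the Note class and file name are made up):

from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
import transaction
from persistent import Persistent

class Note(Persistent):
    def __init__(self, text):
        self.text = text  # plain attribute assignment is what gets persisted

db = DB(FileStorage('data.fs'))
root = db.open().root()
root['mycontainer'] = Note('hello')  # Zope 3 style: the dictionary interface
transaction.commit()                 # transactional semantics, unlike bare pickle
print(root['mycontainer'].text)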
Through-the-web (TTW):
Again, understanding the reason for TTW goes back to when Zope was developed. While it seems silly to break with well-known developer tools such as Subversion or Mercurial, Zope was developed in the late 90s, when the only free version-control system was CVS. Zope 2 had its own simple version-control capabilities, and they were about as good as CVS (which is to say, "limited and sucky"). UNIX workstations cost a lot more money back then and had far fewer resources, so system administrators were much more guarded and careful about how servers were managed. TTW gave people who might not normally be able to upload code to the server without sysadmin intervention a way to do that.
As for text editors, emacs and vi have had FTP modes, and Zope 2 can listen on an FTP port. This allowed you to develop so that code was stored in the ZODB (editable TTW), while it was common to edit that code using emacs or vi.
Today in Zope, TTW is more rarely used or promoted since it no longer makes sense to do this. Disk space is cheap, servers are (relatively) cheap, and there are lots of developer tools which expect to interact with the standard filesystem.
Acquisition:
It was a mistake. It was a very confusing feature that caused lots of unexpected things to happen. In theory there are some interesting ideas to acquisition, but in practice it's best tossed in the bin and has little practical use.
Moving from Django to Zope:
Work started on Zope 3 in 2001. This fixed a lot of the problems with Zope 2. It's a testament to the Zope community that Zope 2 is still actively and well maintained, but it's hardly state-of-the-art. Zope 2 is really only interesting to learn from a historical perspective.
Zope 3 ended up getting evolved in a few different directions, and so modern incarnations of Zope are best expressed in the form of Grok, BFG or Bobo.
Grok is closest to Zope 3, and as such is a pretty large framework; it can be rather overwhelming at times when delving through its code base. However, just like Django or any other full-stack framework, you don't need to use every part of Grok, and it can be quite easy to learn the basics and create web applications with it. Its convention-over-configuration is second to none, and its class-based views give it a much tighter, arguably cleaner code base than a Django web application. Its URL routing system is extremely flexible, but also arguably over-engineered.
BFG is a "pay for only what you eat" framework written by long time Zope developer Chris McDonough. As such, it's closer to Pylons in spirit, where only the parts deemed core or essential to a framework are included. It also plays very well with WSGI. It only uses a few core Zope packages.
Bobo is a "micro-framework": it's just a way to route URLs and serve up an app. It doesn't use any Zope packages, so it isn't strictly in the Zope family of web frameworks, but it was written by Zope's creator, Jim Fulton, who originally called the publishing part of Zope "Bobo". The original Bobo, written in the early 90s, mapped URLs to packages and modules, so if your source code was laid out as:
mypackage.mymodule.MyClass
You could have a URL such as:
/mypackage/mymodule/MyClass
Which was very inflexible, and was replaced with URL traversal in Zope 2, which is fairly complex. Bobo uses Routes, so it's a middle ground between dead-simple URL resolution and complex URL resolution - about the same complexity as Django's URL resolution machinery.
I answer without much experience of either, but I have had the chance to play with both, so I can give you my opinion on some of your questions.
1) What is the "real" purpose of zodb as such? Meaning I know what it does, but tell me one great thing that zodb does and a framework like django (which doesn't have zodb) misses.
Load distribution via ZEO, and search via ZCatalog. Django is very low-level from this point of view; to achieve the same, you would have to reinvent a lot of wheels, and triangular ones at that.
Something I learned quite soon is: don't mess with low level database issues. You will screw them up. It's a can of worms, Dune sized.
So why choose the django ORM? You should also consider YAGNI. django is easy and self-contained, its documentation is premium, and when (if) your site grows that much, you can switch to a better ORM (or to a pure OODB, in the case of ZODB) later on.
2) It is said one of zope's killer features is the TTW (Through the Web, or developing using the ZMI) philosophy. But I (and any developer) prefer file-system based development (using version control, using Eclipse, using any favorite tool outside Zope). Then where is this TTW actually used?
I cannot answer this question properly, but I would not say it's fundamentally bad to develop with such an approach. Of course it's a change of mindset, and I tend to prefer filesystem-based development as well.
4) Is it really a good move to work on Zope, from Django?
Zope 3 is very modular, so you are free to use many of its components from django. I would advise against it, though. You can, of course, but what I found most problematic is the lack of help. There are not many people using zope components and django at the same time. Sooner or later you will have a problem and google won't help. At that point, you will realize that if your life were a videogame, you would definitely be playing it on difficult (maybe extreme, if you have to put your nose into the zope code).
A very good reference on the ZODB is the ZODB/ZEO programmer's guide. The ZODB is not an ORM; it's a true object database. Python objects are persisted inside the database transparently, without any worries about how to transform them into a representation suitable for a database. Any pickleable Python object can be saved inside the ZODB. Relational databases are suitable for large amounts of flat data (like employee records), while the ZODB is best for hierarchical data (typically found in web applications).
I personally use Zope 3 for my applications, and I never did TTW-type work. The best part of using the ZODB was that I never had to worry about how I was going to save data, or how things would change when I upgraded my software from one version to the next. For example, if I add a new attribute to a Python class, all I have to do is provide a default value as a class attribute. It then becomes automatically available to all objects created with the previous version of the same class. Removing an attribute is a simple del operation on existing objects.
BTW, the ZODB can be used independently in any kind of Python application and isn't coupled to the ZOPE platform. I love the fact that I don't have to worry about the nitty-gritty of SQL while working on Python applications, thanks to the ZODB. And of course, if you need a database server so that you can run multiple copies of your application backed by the same store, ZEO comes to your rescue on top of the ZODB.
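Here is a small sketch of the upgrade trick just described (the Employee class is hypothetical): a class-level default fills in the new attribute for objects that were stored by an older version of the class.

from persistent import Persistent

class Employee(Persistent):
    department = 'unassigned'  # added in v2; instances stored by v1 fall back to this default

    def __init__(self, name):
        self.name = name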
Zope started with the idea of being an Object Publishing Environment. From that perspective, mapping the URL directly to the object hierarchy in the ZODB was great: the URLs simply reflect the hierarchy of objects. As far as figuring out URLs is concerned, there is always the Rotterdam debugging interface to help. For development work, I keep the development flags on in the Zope configuration and look at the contents of the ZODB through the Rotterdam interface; the Rotterdam skin provides a great way of introspecting the Python objects stored inside the ZODB, and figuring out the URLs becomes much more interactive. Moreover, for the major containers inside my ZODB, I register them as persistent utilities inside the site manager (Zope 3 sites and site managers). Anywhere in my code, whenever I need access to such containers, all I do is getUtility(IMyContainerType). I don't even have to remember the detailed locations of those containers inside the code base: they are registered once with the site manager and from then on available anywhere in the code base through getUtility() calls.
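In sketch form, the lookup side of that pattern with zope.component's convenience API (the answer registers persistent utilities in a local site manager; this simplified version uses the global registry, and the interface and container class are hypothetical):

from zope.interface import Interface, implementer
from zope.component import provideUtility, getUtility

class IMyContainerType(Interface):
    """Marker interface for the application's main container."""

@implementer(IMyContainerType)
class MyContainer:
    pass

provideUtility(MyContainer(), IMyContainerType)  # register once

# anywhere else in the code base, no path or location needed:
container = getUtility(IMyContainerType)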
The URLs also support namespaces. For example, using the ++skin++ namespace you can change the skin of your web application at any time; using the ++language++ namespace you can change the preferred language of your user interface; and using the ++attributes++ namespace you can access individual attributes of an object. URLs are simply much more powerful and much more customizable, and you can write traversal adapters and define your own namespaces to enhance their capabilities. To give an example: all pages that are directly accessible from the web interface are part of my default skin, while all pages invoked through background AJAX calls are under a different skin. This way, one can implement different authentication mechanisms in different skins: in the main skin, the user is redirected to a login page in case of authentication failure, while the AJAX pages simply return an HTTP error. This can be done centrally. Zope 3 objects have interfaces, and one view can be defined for multiple interfaces; wherever you have an object that supports the given interface, all associated views become automatically available and all such URLs are automatically valid. If you think about it, this is much more powerful than a single Python file or XML file where the URLs are hard-coded. I don't really know much about Django and J2EE, so I cannot say whether they have equivalent capabilities.
ZODB is an OO-style database that doesn't need a schema definition. You can simply create (nearly) any kind of object and persist it.
The TTW is sometimes annoying, but you can mount the Zope object tree using WebDAV. Then you can edit the templates and scripts using your favorite editor.
ZOPE is especially powerful for creating CMS-like systems, IMHO there it is still unmatched - you'd have to go through a lot to make it work equally well in Django.
And through TTW, non-developers such as designers actually have a good chance of developing e.g. templates and CSS without needing developer interaction.
+1 on Wheat's answer, above: "Zope 2 is really only interesting to learn from a historical perspective". I did Zope dev for a large site for a couple of years, 50% zope 2, 50% zope 3. Even then (this was 2 years ago) we were working to migrate everything off of zope 2. Unless you already have a lot invested in an existing Zope 2 project, there's no reason to use it; there's just not much of a future there. And if you do have a big existing zope 2 project, I'd suggest taking a look at a product called Five (a joke: 2 + 3 = 5) that aims to
allow you to integrate Zope 3 technologies into Zope 2. Among others, it allows you to use Zope 3 interfaces, ZCML-based configuration, adapters, browser pages (including skins, layers, and resources), automated add and edit forms based on schemas, object events, as well as Zope 3-style i18n message catalogs.
When all is said and done, Zope 3 is a very different framework from 2, and IMHO, a much better (albeit more complicated) one. TTW is optional, and not recommended for most cases. Implicit acquisition is gone.
Looks like people here have covered why you might want to use the ZODB, so I thought I'd mention one other thing about Zope 3 (or Zope 2 using Five) that's good. Zope has a very powerful system for wiring together different application components, called the Zope Component Architecture (ZCA). It allows you to write components that are more or less autonomous and reusable, and which can be plugged together in a standardized way. I mostly do Django development now, and I sometimes find myself missing the ZCA. In Django, the ability to write reusable components is limited and kind of ad hoc. But, like Reinout says, zope.component (like most zope packages, including the ZODB) works outside the zope framework and could be used in a Django project.
That said, the ZCA has its drawbacks, one of which is the tedious process of registering your components in XML files; it always felt a little Java-esque to me. One reason I really like Grok http://grok.zope.org/ is that it sits on top of zope.component and does much of that grunt work for you.
So bottom line: Zope 2 is mostly a dead end. If your employer is amenable to it, start looking at Zope 3, or at least Five. I think you'll find Zope 3 has a steep learning curve compared to Django, so it might be a good idea to come at it via Grok, which smooths out a lot of Zope 3's rougher edges. But, I think for a really large or complex web application with lots of moving parts, I'd go for Zope over Django (and I say this as someone who really likes Django a lot). For smaller projects, Django would probably be faster. Quantifying "large" and "small" in this context is hard though, and would probably require a couple of thousand more words. If you really are interested in Zope 3, the book by Philipp von Weitershausen is definitely the place to start.

Does Django development provide a truly flexible 3 layer architecture?

A few weeks ago I asked the question "Is a PHP, Python, PostgreSQL design suitable for a non-web business application?"
A lot of the answers recommended skipping the PHP piece and using Django to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for a non-web business application.
Based on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in Python with an SQLAlchemy/Elixir ORM tool, PostgreSQL for the database layer, and then still use Django or PHP for the presentation layer? Is this possible, or pure insanity?
Basically, I'd be looking at an architecture of Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL.
Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!
You seem to be saying that choosing Django would prevent you from using a more heterogenous solution later. This isn't the case. Django provides a number of interesting connections between the layers, and using Django for all the layers lets you take advantage of those connections. For example, using the Django ORM means that you get the great Django admin app almost for free.
You can choose to use a different ORM within Django, you just won't get the admin app (or generic views, for example) along with it. So a different ORM takes you a step backward from full Django top-to-bottom, but it isn't a step backward from other heterogenous solutions, because those solutions didn't give you intra-layer goodness like the admin app in the first place.
Django shouldn't be criticized for not providing a flexible architecture: it's as flexible as any other solution, you just forgo some of the Django benefits if you choose to swap out a layer.
If you choose to start with Django, you can use the Django ORM now, and then later, if you need to switch, you can change over to SQLAlchemy. That will be no more difficult than starting with SQLAlchemy now and later moving to some other ORM solution.
You haven't said why you anticipate needing to swap out layers. It will be a painful process no matter what, because there is necessarily much code that relies on the behavior of whichever toolset and library you're currently using.
Django will happily let you use whatever libraries you want for whatever you want to use them for -- you want a different ORM, use it, you want a different template engine, use it, and so on -- but is designed to provide a common default stack used by many interoperable applications. In other words, if you swap out an ORM or a template system, you'll lose compatibility with a lot of applications, but the ability to take advantage of a large base of applications typically outweighs this.
In broader terms, however, I'd advise you to spend a bit more time reading up on architectural patterns for web applications, since you seem to have some major conceptual confusion going on. One might just as easily say that, for example, Rails doesn't have a "view" layer since you could use different file systems as the storage location for the view code (in other words: being able to change where and how the data is stored by your model layer doesn't mean you don't have a model layer).
(and it goes without saying that it's also important to know why "strict" or "pure" MVC is an absolutely horrid fit for web applications; MVC in its pure form is useful for applications with many independent ways to initiate interaction, like a word processor with lots of toolbars and input panes, but its benefits quickly start to disappear when you move to the web and have only one way -- an HTTP request -- to interact with the application. This is why there are no "true" MVC web frameworks; they all borrow certain ideas about separation of concerns, but none of them implement the pattern strictly)
You seem to be confusing "separate layers" with "separate languages/technologies." There is no reason you can't separate your concerns appropriately within a single programming language, or within an appropriately modular framework (such as Django). Needlessly multiplying programming languages / frameworks is just needlessly multiplying complexity, which is likely to slow down your initial efforts so much that your project will never reach the point where it needs a technology switch.
You've effectively got a 3-layer architecture whether you use Django's ORM or SQLAlchemy, though you forgo some of Django's benefits if you choose the latter.
Based on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data.
Not really, Django has its own ORM, so it does separate data from view/controller.
Here's an entry from the official FAQ about MVC:
Where does the “controller” fit in, then? In Django’s case, it’s probably the framework itself: the machinery that sends a request to the appropriate view, according to the Django URL configuration.
If you’re hungry for acronyms, you might say that Django is a “MTV” framework – that is, “model”, “template”, and “view.” That breakdown makes much more sense.
At the end of the day, of course, it comes down to getting stuff done. And, regardless of how things are named, Django gets stuff done in a way that’s most logical to us.
There's change and there's change. Django utterly separates the domain model, business rules, and presentation. You can change (almost) anything with near-zero breakage. And by change I am talking about meaningful, end-user-focused business change.
The technology mix-n-match in this (and the previous) question isn't really very helpful.
Who -- specifically -- benefits from replacing Django templates with PHP? It's not more productive; it's more complex for no benefit. Who benefits from changing ORMs? The users won't and can't tell.
