I wrote a Flask web application for a system that our company uses. However, we have another web application running on Node.js. The "problem" is that my colleague writes everything in Node, while I write everything in Python.
We want to implement both applications on one webpage - for example:
My application will run on example.com/assistant
His application will run on example.com/app1 and example.com/app2
How can we do this? Can we somehow share the templates that I use with his and vice versa?
Thank you in advance!
V
Serving different apps from the same domain
You can use HAProxy to direct requests to a specific service based on ACL rules.
You could use a path_beg rule to direct any request whose path begins with a specific prefix to the corresponding backend. See the example below.
/etc/haproxy/haproxy.cfg
# only relevant part of the config file
# assumes all apps are on one machine
frontend http-in
    bind *:80
    acl py_app1 path_beg /assistant
    acl node_app1 path_beg /app1
    acl node_app2 path_beg /app2
    use_backend py_app1 if py_app1
    use_backend node_app1 if node_app1
    use_backend node_app2 if node_app2
    default_backend main_servers
backend py_app1
    server flask_app 127.0.0.1:5000
backend node_app1
    server nodejs1 127.0.0.1:4001
backend node_app2
    server nodejs2 127.0.0.1:4002
backend main_servers
    server other1 127.0.0.1:3000 # nginx, apache, or whatever
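One detail worth noting (my assumption, not something from the question): path_beg only routes the request, it does not strip the prefix, so the Flask app still receives paths like /assistant/.... A minimal sketch of handling that on the Flask side is to mount the routes under the same prefix, for example with a blueprint:

# Minimal sketch: mount the Flask routes under /assistant so the paths
# forwarded by HAProxy match. Port 5000 matches the backend above.
from flask import Blueprint, Flask

assistant = Blueprint('assistant', __name__, url_prefix='/assistant')

@assistant.route('/')
def index():
    return 'Hello from the Flask side'

app = Flask(__name__)
app.register_blueprint(assistant)

if __name__ == '__main__':
    app.run(port=5000)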
Sharing template code between apps
This would be harder, as you would both need to agree on some kind of template format that is language- and framework-agnostic, and probably logic-less.
Mustache claims to be a "framework-agnostic way to render logic-free views". I used it sparingly a few years ago, so it is the first that came to mind; however, you should do more research on this, as there may be a better fit.
Python implementation
JS implementation
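For example, the Flask side could render a shared Mustache file with pystache (just a sketch: the template path and the context data are made up):

# Sketch only: templates/user_card.mustache is a hypothetical shared file
# that the Node app would render with mustache.js using the same data keys.
import pystache

with open('templates/user_card.mustache') as f:
    template = f.read()  # e.g. '<div class="user">{{name}} ({{email}})</div>'

html = pystache.render(template, {'name': 'Alice', 'email': 'alice@example.com'})
print(html)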
The problem would be keeping the templates in sync with the apps without breaking the views. If a template changes, you would need to test every app that uses that template file. You would also likely block each other from updating your apps at different times, because if one of you changes a template file, you must come to a consensus, update all affected apps, and deploy them at the same time.
Related
In my small web site I feel the need to make some data widely available, to avoid hitting the database on every request. E.g. this could be the list of current users shown at the bottom of every page, or the time of the last ranking update.
The app is written in Python (Flask) and runs on nginx + uwsgi (this Docker image).
I wonder, do I get some small cache or shared memory for keeping such information "out of the box", or do I need to explicitly set up a dedicated cache? Or is something like this provided by nginx?
Alternatively, I can still use the database for this, since it has its own cache, I think.
Sorry if the question seems naive/silly - I come from the Java world (where things are a bit different, as we serve all requests with one fat instance of the Java application) - and I have some difficulty grasping what wsgi/uwsgi provides. Thanks in advance!
Firstly, nginx has a cache:
https://www.nginx.com/blog/nginx-caching-guide/
But for Flask caching you also have options:
https://pythonhosted.org/Flask-Cache/
http://flask.pocoo.org/docs/1.0/patterns/caching/
Did you have a look at the caching section of the Flask docs?
It literally says:
Flask itself does not provide caching for you, but Werkzeug, one of the libraries it is based on, has some very basic cache support
You create a cache object once and keep it around, similar to how Flask objects are created. If you are using the development server you can create a SimpleCache object, that one is a simple cache that keeps the item stored in the memory of the Python interpreter:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
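As a sketch of how that might be used (expensive_db_query is just a stand-in here, and keep in mind SimpleCache lives inside a single Python process, so each uwsgi worker gets its own copy):

# Sketch only; werkzeug.contrib.cache is the pre-1.0 Werkzeug location
# quoted in the docs above.
from werkzeug.contrib.cache import SimpleCache

cache = SimpleCache()

def expensive_db_query():
    # Stand-in for the real database call.
    return ['alice', 'bob']

def get_current_users():
    users = cache.get('current-users')
    if users is None:
        users = expensive_db_query()
        cache.set('current-users', users, timeout=60)  # keep for 60 seconds
    return users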
-- UPDATE --
Or you could solve it on the frontend side by storing data in the web browser's local storage.
If there's nothing in local storage you call the DB; otherwise you use the information from local storage rather than making a DB call.
Hope it helps.
From this page:
UPLOADS_DEFAULT_URL
If you have a server set up to serve from UPLOADS_DEFAULT_DEST, then set the server's base URL here. Continuing the example above, if /var/uploads is accessible from http://localhost:5001, then you would set this to http://localhost:5001/ and URLs for the photos set would start with http://localhost:5001/photos. Include the trailing slash.
However, you don't have to set any of the _URL settings - if you don't, then they will be served internally by Flask. They are just there so if you have heavy upload traffic, you can have a faster production server like Nginx or Lighttpd serve the uploads.
I do not understand how Flask uses UPLOADS_DEFAULT_URL. The text says that if you don't specify it, the uploads will be served internally by Flask. Questions:
On what url are they going to be served by flask if I don't specify the url?
If I do specify URL what flask is going to do with it? How is it going to use it?
To make my question easier to answer: I don't know exactly how Python interacts with a web server such as Apache or nginx. I do understand that in principle you want these web servers to front/proxy your Python app for scalability/load, but I don't know the exact details of how this is done. Maybe if I knew that, the information above would be more obvious to me.
From a practical perspective: I have someone else's Python/Flask app and not a lot of experience with Python. The parameter above needs to be specified in the config files. I got the app up and running, I did not specify this particular parameter, and the uploads are working fine. I'm wondering what else I could possibly have broken by not specifying the URL.
On what url are they going to be served by flask if I don't specify the url?
From the doc I understand that if you set it like this:
UPLOADS_DEFAULT_DEST = '/var/uploads/'
UPLOADS_DEFAULT_URL = 'http://localhost:5000/'
Then an upload set named photos will store its uploads in /var/uploads/photos. Let's say one of them is /var/uploads/photos/test.jpg. Flask will then serve the image at
http://localhost:5000/photos/test.jpg.
If I do specify URL what flask is going to do with it? How is it going to use it?
Whereas if you set
UPLOADED_PHOTOS_DEST = '/var/mypics/'
UPLOADED_PHOTOS_URL = 'http://localhost:5000/'
Then an upload set named photos will store its uploads in /var/mypics/. Let's say one of them is /var/mypics/test.jpg. Flask will then serve the image at
http://localhost:5000/test.jpg.
But we do not use this in production. In production, images and static files should be served by nginx or Apache.
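For reference, a minimal sketch of the setup being discussed (assuming the Flask-Uploads package; the paths and the photos set mirror the example above):

# Sketch only. If no _URL setting is given, Flask-Uploads serves the files
# itself; in the versions I have used the internal route looks like
# /_uploads/photos/<filename>.
from flask import Flask
from flask_uploads import IMAGES, UploadSet, configure_uploads

app = Flask(__name__)
app.config['UPLOADED_PHOTOS_DEST'] = '/var/uploads/photos'
# Uncomment to have a separate server (e.g. nginx) serve the files instead:
# app.config['UPLOADED_PHOTOS_URL'] = 'http://localhost:5001/photos/'

photos = UploadSet('photos', IMAGES)
configure_uploads(app, photos)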
This used to be done in the administrator console. Now, according to the docs, it's controlled by a setting in the application's configuration files.
I updated my app.yaml file to include these lines and redeployed it:
#
# Module Settings
# https://cloud.google.com/appengine/docs/python/modules/#Python_Configuration
#
module: default
instance_class: F2
However, I haven't noticed any improvement in my application's performance. Specifically, I have a (CPU-bound) script that was taking 4-5 seconds to run, and there has been no difference since the change.
So my question is: am I doing this correctly? And is there a way to confirm (for example, in the logs or elsewhere in the admin console) the level at which my application's servers are running?
I should note that I am testing this on an unbilled application. Although I couldn't find any information in the docs that indicated this feature was limited to billed applications, I know that some features are unavailable on unbilled apps.
The settings you have there look correct.
If you are using modules, and it looks like you are, you can confirm that the frontend instance class is what you set it to by viewing the "Versions" page in the old App Engine console at http://appengine.google.com/
If you aren't using modules you can view the instance type on the "Application Settings" page.
Unfortunately, there doesn't seem to be a way to check the frontend instance class using the new cloud console.
If you look under Instances in the application dashboard you can see which ones you currently have running.
I'd like to change the environment variable DJANGO_SETTINGS_MODULE (along with a few others) and then have ALL relevant modules, like django.conf, django.db, etc., reloaded to reflect the information from the new settings module. The new settings module will have a different database. I will be doing this in a middleware.
I was able to achieve this by reloading a few modules along with django.conf and django.db. All new SQL statements were fired against the new DB.
But this appears to be so hackish.
The main reason I want to do this is to have the same Apache child process serve requests for different Django applications (different settings, not different apps) without having to spawn a new Apache child process, which reloads the whole thing.
Is there a clean way of achieving what I want to do?
Thanks,
UPDATE (19-Sept-2014): I have accepted Daniel Roseman's answer, as that seems to be the reality in the context of the question asked. The router approach suggested by him was something I explored, but I couldn't use it because Django's transaction classes don't use the router. The router, I presume, exists for a different reason. The application code base I'm working on, which is pretty large, has tons of transaction.commit_manually calls for the default or a specific db alias. I was trying to get it to support multiple client databases without changing the application code.
However, I did manage to solve the main problem, which was to support multiple client DBs and other settings. I don't try to change the settings on the fly, nor do I use the router. Instead I have a single settings.py with all client DB information. I monkey patched the connection handler to return a different database connection for the 'default' alias (or another specific alias used by the code) based on certain env variables set in the middleware. So far this has worked fine. I will post an update if I run into any issues or if someone else points out a potential issue with the approach.
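The general idea looks roughly like this (a simplified sketch: the thread-local, the middleware class, and the host-to-alias mapping are illustrative placeholders, not my actual code):

import threading

from django.db.utils import ConnectionHandler

_local = threading.local()
_original_getitem = ConnectionHandler.__getitem__

def _client_aware_getitem(self, alias):
    # Redirect the 'default' alias to whatever alias was picked for the
    # current request/thread by the middleware below.
    if alias == 'default':
        alias = getattr(_local, 'db_alias', 'default')
    return _original_getitem(self, alias)

ConnectionHandler.__getitem__ = _client_aware_getitem

class ClientDBMiddleware(object):
    def process_request(self, request):
        # Illustrative mapping only; in my case this is driven by env
        # variables. 'client_a' must be an alias defined in settings.DATABASES.
        _local.db_alias = 'client_a' if request.get_host().startswith('a.') else 'default'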
No, there is no way to do this, and that's a very good thing, because it is a bad idea. There is no reason to use the same Apache process for different sites: instead you should have a separate virtual host for each of your sites and let Apache manage them.
I have a URL route in my web.py application that I want to run to catch all URLs that hit the server, but only after any static assets are served.
For example, if there is js/test.js in my static directory, the path http://a.com/js/test.js should return the file contents. But I also have my URL routing set up so that there is a regex that catches everything, like this:
urls = ('/.*', 'CatchAllHandler')
So this should run only if no static asset was discovered. A request for http://a.com/js/test.js should return the static file test.js, but a request for http://a.com/js/nope.js should route through the CatchAllHandler.
I've looked into writing my own StaticMiddleware for this, but it will only help if the order of web.py operations is changed. Currently the middleware is executed after the URL routes have been processed. I need the middleware to run first, and let the url routing clean up the requests that were not served static assets.
The one idea I have is to use the notfound() function as my catch-all handler, but that may not be the best approach.
The URL matching is a Python regex. You can test/play with Python regexes here.
That said, this should work for you:
('/(?!static)(.*)', 'CatchAllHandler')
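Building on that regex, here is a rough sketch of a CatchAllHandler that checks the static directory itself and only does its catch-all work when no file exists (the directory layout and the file serving are my assumptions, not web.py behaviour; real code should also reject '..' in paths):

import mimetypes
import os

import web

urls = ('/(?!static)(.*)', 'CatchAllHandler')

class CatchAllHandler:
    def GET(self, path):
        candidate = os.path.join('static', path)
        if os.path.isfile(candidate):
            # A matching static file exists, so serve it from disk.
            content_type = mimetypes.guess_type(candidate)[0] or 'application/octet-stream'
            web.header('Content-Type', content_type)
            return open(candidate, 'rb').read()
        return 'caught: /%s' % path

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()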
I haven't played with web.py's middleware, but to my understanding, WSGI middleware runs before web.py gets to see the request/response. I would think that, provided your WSGI middleware is properly configured, it would just work.
*pouts* That sucks. There is the hook stuff, which makes it really easy - I've done that before, and it will see all the stuff before .. docs are here: http://webpy.org/cookbook/application_processors
But I guess in regards to your other comment, 'wanting it to work regardless of URL': how would you know it's static content otherwise? I'm greatly confused. The EASIEST way, since in production you want some other web server running your web.py scripts, is to push all the static content into that web server. Then you can of course do whatever needs doing in the web server itself. This is exactly what happens with mod_wsgi and Apache, for instance (you change /static to point to a directory in the web server).
Perhaps if you shared an actual example of what you need done, I could help you more. Otherwise I've now given you three different ways to handle the problem (excluding WSGI middleware). How many more do you need? :P