I tried profiling my web application and one of the bottlenecks reported was the lack of gzip compression. I proceeded to install the gzip middleware in Django and got a bit of a boost but a new report shows that it is only gzipping the HTML files i.e. any content processed by Django. Is there a way I could kludge/hack/force/make the middleware gzip my CSS and my JS as well?
Could someone please answer my questions below. I've gotten a little lost with this.
I might have gotten it wrong, but people do gzip their CSS and JS, don't they?
Does Django not compress JS and CSS because of some browser compatibility issue?
Is compressing the same thing as minifying?
Thanks.
Your CSS and JS should not be going through Django on your production system. You need to configure Apache (or Nginx, or whatever) to serve these, and when you do so you'll be able to set up gzip compression there, rather than in Django.
And no, compressing and minifying are not the same thing. Gzip compression is done dynamically by the server when it serves the response, and the browser transparently unzips the file when it receives it. Minification is the process of removing comments and whitespace from the files, and sometimes concatenating multiple files into one (i.e. one CSS file and one JavaScript file instead of lots of each). This is done when you deploy your files to the server - by django-compress, as Ashok suggests, or by something external like the YUI Compressor - and the browser doesn't attempt to reconstruct the original file; that would be impossible, and unnecessary.
You should think about placing your django application behind an HTTP reverse proxy.
You can configure apache to act as a reverse proxy to your django application, although a number of people seem to prefer using nginx or lighttpd for this scenario.
An HTTP reverse proxy is basically a proxy set up directly in front of your web application. Browsers make requests to the reverse proxy, and the reverse proxy forwards them to the web application. The reverse proxy can also do a number of interesting things, like handle SSL, gzip-compress all responses, and serve static files.
Thanks everyone.
It seems that the GzipMiddleware in Django DOES compress CSS and JS.
I was using Google's Page Speed plugin for Firebug to profile my page, and it seems it was generating reports based on old, non-gzipped copies of the CSS and JS files in my local cache. These copies were there from before I enabled the gzip middleware. I flushed the cache, and the reports now show different results altogether.
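For reference, enabling it is just a settings change; a minimal sketch (the setting is MIDDLEWARE in current Django, MIDDLEWARE_CLASSES in older releases, and GZipMiddleware should sit near the top of the list):

# settings.py -- GZipMiddleware compresses the responses produced by everything below it
MIDDLEWARE = [
    'django.middleware.gzip.GZipMiddleware',
    # ... the rest of your middleware ...
]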
Follow Daniel Roseman's suggestion: "Your CSS and JS should not be going through Django on your production system."
If you do want to serve them through Django, you can compress the CSS and JS files using django-compressor or django-compress.
I'm switching to Pyramid from Apache/PHP/Smarty/Dreamweaver scheme.
I mean the situation of having a static site in Apache, with the menu built from a Dreamweaver template or other static tools. Then, if I wanted to put some dynamic content into the HTML, I would do the following:
Put Smarty templates in the HTML.
Create a PHP file behind the HTML with the same name; the PHP takes the HTML as its template.
Change the links from .html to .php.
And that was all.
This scheme is convenient because the site is viewable in a browser and editable in Dreamweaver.
How can I reproduce this scheme in Pyramid?
There are separate dirs for templates and static content, plus all these myapp:static modifiers in hrefs. Where should I look?
Thank you for your advice.
There is no Smarty port for Python, so you would have to start using another template syntax, such as Mako or Chameleon.
To do this, you would set up your view_config to respond to the URL and tell it to use the corresponding template.
If you want to do this, you would simply change your code. But this is not necessary: Pyramid will process your requests whether the URL contains .html, .php, .python, /, or whatever.
You could still edit the templates in Dreamweaver I guess.
Only really static pages would be linked using static_url. If it is HTML that you mean to turn into a template, it might be easiest to just start off with a template right away, without any dynamic content in it.
This is from the URL dispatch tutorial:
# in views.py
from pyramid.view import view_config

@view_config(route_name='view_page', renderer='templates/view.pt')
def view_page(request):
    return {}

# in __init__.py
config.add_route('view_page', 'mypage.html')
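As for the myapp:static hrefs the question mentions, those come from a static view; a minimal sketch, where 'myapp' stands in for your package name and the logo path is a placeholder:

# in __init__.py
config.add_static_view(name='static', path='myapp:static')

# in a template or view, a URL for an asset can then be built with
# request.static_url('myapp:static/images/logo.png')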
You can build a small web application which uses traversal to serve HTML documents from a directory. Here's more explanation about how traversal works.
Then you can programmatically render those documents as Chameleon templates, using PageTemplateFile for example. This would allow you to include, say, common header/footer/navigation into every page.
This would mean that every page in your site will be in fact dynamic, so that would incur a small performance penalty for every page regardless of whether it has dynamic content or not, but you should not be concerned with this unless you're building the next Facebook. :) However, this approach would allow you to have a plain html document corresponding to every page in your website which you'll be able to edit using Dreamweaver or any other editor.
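A rough sketch of that rendering step, assuming Chameleon is installed; the file path and the variable passed in are placeholders:

# render an HTML document from disk as a Chameleon template
from chameleon import PageTemplateFile

template = PageTemplateFile('pages/about.html')   # hypothetical document path
html = template(title='About us')                 # returns the rendered page as a string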
This is a somewhat different answer from the others, but here is a completely different flow.
Write all your pages in HTML. Everything! Then use something like AngularJS or Knockout to add dynamic content. Pyramid will serve the dynamic content requested via AJAX.
You can then map everything to your HTML templates and edit those templates wherever you want, since they are simply HTML files.
The downside is that making it all work together isn't that simple at first.
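A rough sketch of the Pyramid side of that flow: a view that only answers the AJAX calls with JSON (the route name and payload here are made up):

# in views.py
from pyramid.view import view_config

@view_config(route_name='api_items', renderer='json')
def api_items(request):
    # whatever dynamic content the page requests over AJAX
    return {'items': ['001', '002', '003']}

# in __init__.py
config.add_route('api_items', '/api/items')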
I would like to change the logo of a website based on which menu is currently activated/seen by the user browsing the website.
For instance I have www.urltowebsite.com/menu1 = Header Logo 1
And then I have www.urltowebsite.com/menu2 = Header Logo 2
And on top of this I want to add an else statement stating that: If any other menu is selected, use header logo 3.
How can I make this possible with Python? I can't seem to wrap my head around what to define where, and how to call up the different functions from the HTML website.
Oh, and I insist on doing this with Python, and preferably without any framework such as Django. But if need be I can install web.py.
EDIT:
Am I forced to go with PHP then? I would like to, once and for all, start utilizing Python in my web projects.
The website is made in simple HTML, as I said at first. The JavaScript functions are only used to serve the HTML menus through AJAX. Again, this does not matter much for what I am trying to do, as the menus have classes and I could use those in PHP to change my logo/header.
What I want to do is to use Python in this instance. Here is a code snippet from the site:
<div id="header">
<span class="title"><img src="http://www.url.com/subfolder/images/logo.png"/>
</span>
</div>
And some more relevant to this:
<div id="menu">
<ul>
<li>001</li>
<li>002</li>
<li>003</li>
<li>004</li>
<li>005</li>
<li>006</li>
<li>007</li>
<li>008</li>
</ul>
</div>
So can I use python here?
You're asking to do the wrong thing the wrong way.
In order to change the logo based on the URL in Python, you need Python to generate the page and to know what that URL is.
There are two ways to do that in Python:
Use an existing Web Framework
Write your own Web Framework
"Python" doesn't know or care what your URL is - the frameworks and support libraries (Django, Pyramid, Bottle, Flask, Tornado, Twisted, etc.) figure out what the URL is through an integration with an underlying web server (though some have their own web server built in). Similarly, PHP doesn't really know or care what the URL is - that information comes from an integration with Apache or FCGI/Nginx/etc. PHP just tends to ship with most/all of that integration done. It's also worth noting that PHP is not just a language but a web framework, whereas Python is just a language.
Most Python frameworks are written to the WSGI spec and have a "request" object that holds all the data you want (and many use the WebOb library for that).
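To make that concrete, here is a bare WSGI sketch of the idea: no framework, just a callable that inspects the request path and picks a logo (the paths and image names are invented for illustration):

from wsgiref.simple_server import make_server

LOGOS = {'/menu1': 'logo1.png', '/menu2': 'logo2.png'}   # hypothetical menu-to-logo map

def app(environ, start_response):
    # fall back to logo3.png for every other menu, as the question asks
    logo = LOGOS.get(environ.get('PATH_INFO', ''), 'logo3.png')
    body = '<div id="header"><span class="title"><img src="/images/%s"/></span></div>' % logo
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [body.encode('utf-8')]

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()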
If you plan on doing everything with static HTML files, then you have a few options:
have a single static directory. Use JavaScript to figure out the address bar location, and render the corresponding logo / write the headers and footers.
have a "template" directory of all your HTML. Use a Python script to build a static version of each website with the custom headers/footers, and configure your webserver to serve a different one for each domain.
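A toy sketch of that second option; the directory layout and the {{LOGO}} placeholder token are assumptions:

# build_static.py -- copy each template, swapping in a per-site logo
import os

SITES = {'site1': 'logo1.png', 'site2': 'logo2.png'}   # hypothetical site-to-logo map

for site, logo in SITES.items():
    os.makedirs(os.path.join('build', site), exist_ok=True)
    for name in os.listdir('templates'):
        with open(os.path.join('templates', name)) as src:
            html = src.read().replace('{{LOGO}}', logo)
        with open(os.path.join('build', site, name), 'w') as dst:
            dst.write(html)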
No, Python cannot run inside the HTML web page. If you're really serving plain HTML pages then you must use javascript to execute code in the browser once the page is loaded. However, since you mention using AJAX, it sounds like it's not really true that you're serving plain HTML but rather have some server side code. If so, that server side code is the place to put your HTML-construction logic. To know the best way to do that, you would have to describe what's happening on the server.
Although I haven't used it, I have heard that the pyhp project more or less provides PHP-like embedded functionality for Python.
Is there any module out there that could be used by my Django site to tell whether the client browser supports HTML5 and what features are supported?
Sadly, no. This is something that you'll need JavaScript on the client to do, specifically something like http://modernizr.com/
One way to do it would be to run Modernizr and send the results to the back end.
If you were really optimistic, you could build a list of User-Agents and decide based on that. But good luck keeping track of which features work in which version of Chrome and Firefox.
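A rough sketch of the "send the results to the back end" idea on the Django side; the view name, URL wiring, and the supports_canvas field are made up:

# views.py -- assumes the page runs Modernizr and POSTs e.g. supports_canvas=true
from django.http import JsonResponse

def save_features(request):
    request.session['supports_canvas'] = request.POST.get('supports_canvas') == 'true'
    return JsonResponse({'ok': True})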
I am just starting Python and I was wondering how I would go about programming web applications without the need of a framework. I am an experienced PHP developer, but I have an urge to try out Python, and I usually like to write from scratch without the restrictions of a framework like Flask or Django, to name a few.
WSGI is the Python standard for web server interfaces. If you want to create your own framework or operate without a framework, you should look into that. Specifically I have found Ian Bicking's DIY Framework article helpful.
As an aside, I tend to think frameworks are useful and personally use Django, like the way Pylons works, and have used Bottle in the past for prototyping—you may want to look at Bottle if you want a stay-out-of-your-way microframework.
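For a sense of scale, Bottle's canonical hello-world is only a few lines (the route path and port are arbitrary):

from bottle import route, run

@route('/hello')
def hello():
    return 'Hello World!'

run(host='localhost', port=8080)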
One of the lightest-weight options is mod_wsgi (strictly an Apache module that exposes the WSGI interface, rather than a framework). Anything less is going to be a huge amount of work: parsing HTTP requests to find the headers, URIs and methods, parsing GET or POST query/form data, handling file uploads, cookies, etc.
As it is, mod_wsgi will only handle the basics of request parsing and framing up results.
Sessions, cookies, using a template generator for your response pages will be a surprising amount of work.
Once you've started down that road, you may find that a little framework support goes a long way.
You will have to look into something like CGI or FastCGI, which provides an API to communicate to the webserver.
Google App Engine enables you to write simple apps, and even provides a local webserver where you can try things out.
People here love frameworks. One shortcoming I have noted is that Python lacks a handy-dandy built-in module for sessions, despite one being available in PHP and as CGI::Session in Perl.
You will end up doing:
import cgi                          # if you want to work with forms and such
import cgitb; cgitb.enable()        # to barf up errors to the web
print('Content-type: text/html\n')  # header plus the blank line that ends the headers, to start off any HTML
You will have to write session stuff on your own.
For a PHP programmer, I think mod_python is a good way to get started without any framework. It can be used directly as an Apache 2 module. You can have code tags (like <? ?> in PHP) and even conditional HTML output (HTML inside an if statement):
<%
if x == y:
# begin
%>
... some html ...
<%
# end
%>
(simplified example taken from onlamp.com's Python Server Pages tutorial)
You should try web.py; it provides a bare minimum of features and does not get in your way.
http://webpy.org/
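For reference, the classic web.py hello-world, mapping only the root URL:

import web

urls = ('/', 'index')

class index:
    def GET(self):
        return 'Hello, world!'

if __name__ == '__main__':
    web.application(urls, globals()).run()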
You can just make the entire thing yourself as a CGI script written in python. On an Apache server you go into the httpd.conf file and add these two lines at the bottom.
AddHandler cgi-script .py
ScriptInterpreterSource Registry-Strict
Then standard output is redirected to the client, i.e. the print(...) function sends text to the client. You then just read the .html, .css, and .js files stored on the server and print() each line, connect to your database on the back end, and set up your security/authorization protocols... Basically you will need to build the entire framework yourself, only it will be customized to fit your needs perfectly.
Probably a good idea to come up with some special character sequence to parse for when reading the files on the server, so that before printing you can insert any dynamic content, such as:
HTML
<div>
<p> <<& pythonData $>> </p>
</div>
Python
import re

values = {"pythonData": "whatever dynamic content you want"}  # substitution values

with open("something.html", "r") as htmlFile:
    for line in htmlFile:
        if "<<&" in line:
            # figure out which name the special symbol wraps in this line and
            # replace the whole marker with the corresponding dictionary value
            line = re.sub(r"<<&\s*(\w+)\s*\$>>",
                          lambda m: values.get(m.group(1), ""), line)
        print(line, end="")
Here is the documentation for the official library to work with Common Gateway Interface (CGI) in python: https://docs.python.org/3/library/cgi.html It includes an example that shows reading form data sent to the server into a python script.
Don't forget to tell your scripts where the Python interpreter is on the Apache server (it should be in /bin somewhere); in other words, point at Python with the shebang:
#!/bin/python3.10
Or wherever your server's python interpreter is located at.
I'm trying to upload a PDF file to a website using Hot Banana's content management system using a Python script. I've successfully logged into the site and can log out, but I can't seem to get file uploads to work.
The file upload is part of a large, complicated web form that submits the form data and PDF file through a POST. Using Firefox along with the Firebug and Tamper Data extensions, I took a peek at what the browser was sending in the POST and where it was going. I believe I mimicked the data the browser was sending in my code, but I'm still having trouble.
I'm importing cookielib to handle cookies, poster to encode the PDF, and urllib and urllib2 to build the request and send it to the URL.
Is it possible that registering the poster openers is clobbering the cookie processor openers? Am I doing this completely wrong?
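Roughly, the code follows this pattern (the URL, form fields, and file name below are placeholders, not the real ones):

import cookielib
import urllib2
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

# register_openers() installs poster's streaming handlers and returns the opener;
# the cookie processor is added to that same opener so cookies carry across requests
opener = register_openers()
opener.add_handler(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))

datagen, headers = multipart_encode({
    'someField': 'some value',
    'file': open('document.pdf', 'rb'),
})
request = urllib2.Request('https://www.example.com/upload.cfm', datagen, headers)
response = opener.open(request)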
Edit: What's a good way to debug the process? At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Edit 2: Chris Lively suggested I post the error I'm getting. The response from urllib2 doesn't generate an exception, but just returns:
<script>
    if (parent != window) {
        parent.document.location.reload();
    } else {
        parent.document.location = 'login.cfm';
    }
</script>
I'll keep at it.
A tool like Wireshark will give you a more complete trace, at a much lower level, than the Firefox plugins.
Often this can be something as simple as not setting the Content-Type correctly, or failing to include a Content-Length.
"What's a good way to debug [a web services] process?"
At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Correct. That's about all there is.
HTTP is a very simple protocol -- you make a request (POST, in this case) and the server responds. Not much else involved and not much more you can do while debugging.
What else would you like? Seriously. What kind of debugger are you imagining might exist for this kind of stateless protocol?
You might be better off instrumenting the server to see why this is failing, rather than trying to debug this on the client side.