I'm trying to analyse the different ways Python can connect to a web server: CGI, FastCGI, mod_python, WSGI, and the Django framework...
So, how many ways are there to parse GET/POST parameters in Python? Do they differ from one connection method to another?
I saw this question. CGI, Django, etc. are considered there. But what about FastCGI? Does it work the same way as CGI? And mod_python, WSGI?
I'm interested in the most usable, 'popular' ways of getting request parameters.
That can mean parsing a particular parameter (name=value) or parsing the entire query string.
Where can I read about the different APIs?
There are so many connection methods and frameworks that I get confused :(
Thanks a lot! Nikolai.
urlparse.parse_qs() (or parse_qsl()) is used to parse query strings into parameters. But you likely won't have to do it yourself, since the framework (if you're using one) will handle that for you.
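For example, a bare WSGI application parsing its own query string might look roughly like the minimal sketch below (in Python 2 parse_qs lives in urlparse, in Python 3 in urllib.parse):

    # Minimal WSGI sketch: parse the query string by hand with parse_qs.
    # For a POST body you would read environ['wsgi.input'] and run it through
    # parse_qs the same way (for application/x-www-form-urlencoded data).
    try:
        from urllib.parse import parse_qs   # Python 3
    except ImportError:
        from urlparse import parse_qs       # Python 2

    def application(environ, start_response):
        params = parse_qs(environ.get('QUERY_STRING', ''))
        # parse_qs returns a dict of lists, e.g. {'name': ['value']}
        name = params.get('name', ['world'])[0]
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('Hello, %s' % name).encode('utf-8')]

With CGI you would use cgi.FieldStorage() instead, and with a framework like Django the parameters arrive already parsed on the request object.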
So I recently became interested in learning how to create a URL shortener without using bit.ly or other services, but I am not very good at using Python to connect with other things. All I know is:
Checking to see if the URL is valid (only whether it starts with http:// and contains no invalid characters, nothing to check whether the domain is taken or not).
All of the other things... I need help with.
By the way, I COMPLETELY do not understand how to do that, so it would be great if you could add comments to show me what is going on.
I suggest you take a look at Flask; it is a framework for building web applications (APIs, web apps, etc.).
DigitalOcean has a nice tutorial on this.
You can either use hashing algorithms for the custom shortened URLs, or even let the user pick more readable names (like bit.ly/my-url). In either case you would store the shortened URL and the long URL in a database.
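To make that concrete, here is a minimal sketch using Flask with an in-memory dict standing in for the database; the route names and the 7-character SHA-1 slug are illustrative choices, not a full design:

    import hashlib
    from flask import Flask, abort, redirect, request

    app = Flask(__name__)
    SHORT_TO_LONG = {}  # in a real app this would be a database table

    @app.route('/shorten', methods=['POST'])
    def shorten():
        long_url = request.form['url']
        # The basic validity check mentioned in the question.
        if not long_url.startswith(('http://', 'https://')):
            abort(400)
        # Hash the long URL and keep the first few characters as the slug.
        slug = hashlib.sha1(long_url.encode('utf-8')).hexdigest()[:7]
        SHORT_TO_LONG[slug] = long_url
        return request.host_url + slug

    @app.route('/<slug>')
    def expand(slug):
        long_url = SHORT_TO_LONG.get(slug)
        if long_url is None:
            abort(404)
        return redirect(long_url)

    if __name__ == '__main__':
        app.run()

You could try it with curl -X POST -d "url=http://example.com" localhost:5000/shorten and then open the returned short link in a browser.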
I am building a REST API to MongoDB using Flask-RESTful. One feature request I have received is to be able to pass arbitrarily complex Mongo queries, e.g. JSON docs, through the API. I am a little concerned about the implementation here. My first thought was to just make a new endpoint (users/query/<json>). I was also looking at other implementations, where the query was passed as a parameter (users/query?q={}); a rough sketch of that option is at the end of this post. Now my questions are:
What is the preferred method?
Is this even a good idea?
Am I missing something?
Thanks!
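For reference, the query-parameter variant I mean could look something like the sketch below, using Flask-RESTful and pymongo; the database/collection names are placeholders, and the caveat in the comment is part of my concern:

    import json
    from flask import Flask, abort, request
    from flask_restful import Api, Resource
    from pymongo import MongoClient

    app = Flask(__name__)
    api = Api(app)
    users = MongoClient()['mydb']['users']  # placeholder database/collection

    class UserQuery(Resource):
        def get(self):
            try:
                query = json.loads(request.args.get('q', '{}'))
            except ValueError:
                abort(400)
            # Passing arbitrary queries straight to Mongo is risky; at the
            # very least operators like $where would need to be rejected here.
            return list(users.find(query, {'_id': 0}))

    api.add_resource(UserQuery, '/users/query')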
This is a bit of a strange question, I know, but bear with me. We've developed a RESTful platform using Python for one of our iPhone apps. The webapp version has been built using Django, which makes use of this API as well. We were thinking it would be a great idea to use Django's built-in control panel capabilities to help manage the data.
This itself isn't the issue. The problem is that everyone has decided it would be best if the admin center were essentially a client that sits on top of the RESTful platform.
So, my question is, is there a way to manipulate the model layer of Django to access our API directly, rather than communicating directly with the database? The model layer would act as the client, passing requests and responses to and from the admin center.
I'm sure this is possible, but I'm not so sure as to where I would start. Any input?
I remember I once thought about doing such a thing. At the time, I created a custom Manager using a custom QuerySet, overrode some methods such as _filter_or_exclude(), count(), exists(), select_related(), ... and added some properties. In less than a week it became a total mess that probably never had a chance of working. So I immediately stopped and found a more suitable solution.
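Just to make that (ultimately abandoned) route concrete, the general shape is something like the toy sketch below, not actual working code: a QuerySet that fills its result cache from the API instead of running SQL. The endpoint URL, field names, and the lookup-to-parameter mapping are all assumptions.

    # Toy sketch only: a QuerySet that fetches rows from a REST API instead
    # of the database. Real Django Admin support needs far more than this.
    import requests
    from django.db import models

    class APIQuerySet(models.QuerySet):
        api_url = "https://api.example.com/users/"  # hypothetical endpoint

        def filter(self, **kwargs):
            # Translate simple field lookups into API query parameters.
            clone = self._clone()
            clone._api_params = dict(getattr(self, "_api_params", {}), **kwargs)
            return clone

        def _fetch_all(self):
            # Fill the result cache from the API instead of running SQL.
            if self._result_cache is None:
                response = requests.get(
                    self.api_url, params=getattr(self, "_api_params", {}))
                response.raise_for_status()
                self._result_cache = [self.model(**row) for row in response.json()]

        def count(self):
            # Emulating COUNT() by fetching everything and using len();
            # see the considerations below on whether that is reasonable.
            self._fetch_all()
            return len(self._result_cache)

    class Person(models.Model):
        # Lives in a normal app's models.py; fields must match the API's JSON.
        name = models.CharField(max_length=100)

        objects = models.Manager.from_queryset(APIQuerySet)()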
If I had to do it once again, I would take a long time to consider alternatives. And if it really sounds like the best thing to do, I'd probably create a custom database backend. This backend would, rather than converting Django ORM queries to SQL queries, convert them to HTTP requests.
To do so, I think the best starting point would be to get familiar with the Django source code concerning database backends.
I also think there are some important things to consider before starting such development:
Is the API able to handle any Django ORM request? Put another way: Will any Django ORM query be translatable to an API request?
If not, can "untranslatable" queries be safely ignored? For instance, an ORDER BY clause might be safe to ignore, while a GROUP BY clause is very unlikely to be safely dismissed.
If some queries can be neither translated nor ignored, can they be reasonably emulated? For instance, if your API does not support a COUNT() operation, you could emulate it by fetching all the data and counting it in Python with len(), but is that reasonable?
If there are still some queries that you won't be able to handle (which is more than likely): are all "common" queries (in this case, all queries potentially used by Django Admin) covered, and will it be possible to upgrade the API if an uncovered case is discovered later or is introduced in a future version of Django?
Depending on the use case, there are probably tons of other considerations to take into account, such as:
the integrity of the data
support of transactions
the latency of a query, which will probably be much higher than just querying a local (or even remote) database.
I'm brand new on this forum and this is my first post!
At work we're starting a project which uses Apache Solr, and I'm in charge of the frontend system (Django-based).
Our Solr database isn't related to any other DB engine nor to any model class, so Haystack isn't a good fit for us (since it's strictly tied to the models).
I was looking at http://code.google.com/p/pysolr/ and http://code.google.com/p/solrpy/
Basically, they're similar. I like solrpy more, since it uses POST requests and we can mask our users' queries, but this makes its paginator harder to use (I guess...).
On the other side, pysolr, thanks to the GET method, performs better (lower query times), but so far I couldn't execute a query without getting a Bad Request error.
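To be concrete, the kind of basic pysolr call I mean is something like this (the Solr URL and query term are just placeholders):

    import pysolr

    solr = pysolr.Solr('http://localhost:8983/solr/')  # placeholder URL
    # A plain keyword search; this is the sort of call I mean.
    results = solr.search('banana')
    for doc in results:
        print(doc)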
Before choosing one, I wanted to ask the community for opinions. Users only need to do searches, our data is handled by a Java process, and no other DB is used (except for storing user information), but we need to use all of Solr's features (faceting, highlighting, stop words, analyzers...).
What would you choose? And why? Any good code examples you can point me at? I was looking through the Haystack source to see how they implemented everything...
Thanks all!
We have used 'solrpy', but encountered some problems with it.
Sunburnt is actually an interesting API:
https://github.com/tow/sunburnt/
Actively developed, and easy to use. Unfortunately it introduces some additional dependencies.
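A rough usage sketch, with a placeholder Solr URL and field names (check sunburnt's docs for the exact interface):

    import sunburnt

    si = sunburnt.SolrInterface("http://localhost:8983/solr/")  # placeholder URL

    # Index a document; the field names must match your Solr schema.
    si.add({"id": "1", "title": "Hello Solr"})
    si.commit()

    # Run a search; facet_by() covers the faceting requirement mentioned above.
    response = si.query(title="hello").facet_by("title").execute()
    print(response)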
I am really new to Python; I've just played around with the Scrapy framework, which is used to crawl websites and extract data.
My question is, how do I pass parameters to a Python script that is hosted somewhere online?
E.g. I make the following request: mysite.net/rest/index.py
Now I want to pass some parameters, similar to PHP's *.php?id=...
Yes, that would work, although you would need to write handlers in index.py for extracting the URL parameters. Try the cgi module in Python for this.
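For example, a minimal index.py handling a request like index.py?id=42 might look like this (the id parameter is just an example):

    #!/usr/bin/env python
    # Minimal CGI script: cgi.FieldStorage() parses both GET and POST parameters.
    import cgi

    form = cgi.FieldStorage()
    param_id = form.getvalue("id", "")  # like $_GET['id'] in PHP

    print("Content-Type: text/plain")
    print()  # blank line ends the HTTP headers
    print("You passed id=%s" % param_id)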
Please note that there are several robust Python-based web frameworks available (e.g. Django, Pylons) which automatically parse your URL and form a dictionary of all its parameters, plus they do much more, like session management and user authentication. I would highly recommend you use them for faster code turnaround and less maintenance hassle.
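To illustrate, in Django the parsed parameters are already available as a dictionary-like object on the request, so the equivalent of the CGI example above is just:

    from django.http import HttpResponse

    def index(request):
        # For /rest/?id=42, request.GET behaves like {'id': '42'}.
        param_id = request.GET.get('id', '')
        return HttpResponse('You passed id=%s' % param_id)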