how to do http get with django - python

This should be one of the simplest things, but I can't find how to do it in the documentation. I have found how to do it with Python's urllib2, but I would like to do it with Django to get back a status code, content, etc.
Something like:
response = httpget('http://my-url.com')
print response.status_code

This functionality is not part of Django (since the framework is designed mainly to serve requests, not to make them), but Django is just Python - you can use Python's built-in urllib2 library, but I would strongly suggest using the excellent Requests library.
Requests is really fun to work with, and you can't say that about urllib2. You could even say Requests is quite Django-ish in its simple beauty, and its creator, Kenneth Reitz, is an active member of the Django community. But anyway - Django-ish or not, it works great and you can use it in any Python code.
With Requests your example code would look like this:
import requests
response = requests.get('http://my-url.com')
print(response.status_code)
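If you need more than the status code, the same Response object also exposes the body and headers; a quick sketch (using the placeholder URL from the question):

import requests

response = requests.get('http://my-url.com')
print(response.status_code)                  # e.g. 200
print(response.headers.get('content-type'))  # headers behave like a dict
print(response.text)                         # body decoded to text
# response.content gives the raw bytes; response.json() parses a JSON body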

Related

How to access proxy_url\auth in urllib3.ProxyManager

I'm currently working on a project (not mine, to clarify) that scrapes some sites using urllib3 to make requests, and some of them are behind CF protection. I found cfscrape (and a list of similarly named libraries), a wrapper around requests.Session that may help with circumventing CF's anti-bot measures, but there is a catch: I need proxies, which are fetched via an API and put into ProxyManager objects. In the dev environment I have no access to those proxies because of policy. Is there an easy way to get the proxy URL and auth back out of a ProxyManager, or do I need to add some square wheels (i.e. save them somewhere else as a second copy) to integrate that library into the project with as little work as possible and without degrading performance too much? I don't really want to rewrite the urllib3 usage to requests.Session.
To close the question - ProxyManager does have easy access, though it's kind of strange that I couldn't find anything in the docs (maybe I overlooked it).
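For anyone landing here later: in the urllib3 versions I've used, a ProxyManager keeps the parsed proxy URL on its proxy attribute and any proxy auth headers on proxy_headers. A minimal sketch (the proxy URL and credentials below are made up):

import urllib3
from urllib3.util import make_headers

pm = urllib3.ProxyManager(
    'http://user:secret@proxy.example.com:8080',
    proxy_headers=make_headers(proxy_basic_auth='user:secret'),
)

print(pm.proxy.url)        # the full proxy URL as urllib3 parsed it
print(pm.proxy.host, pm.proxy.port)
print(pm.proxy.auth)       # 'user:secret' when embedded in the URL, else None
print(pm.proxy_headers)    # e.g. the Proxy-Authorization header, if one was passed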

How to implement simple POST, GET, and DELETE in python

Short version: Can I just use the Requests module for POST, GET, and DELETE?
I'm trying to use the Pinterest REST API. (Pinterest API Explorer)
I'm going the simple route: I got my authentication token manually via OAuth, so basically all I need to know is how to POST, GET, and DELETE to a specific URL, include the parameters, and get JSON back.
I really only need three API functions, list authorized user's followers (GET), follow user (POST), and unfollow user (DELETE). The only param I need for any of those is my access_token that I got manually.
It seems like a simple problem, but there are about 5 Python Pinterest API wrappers, none of them complete and some of them not working at all. I've looked at the pycurl, httplib, and requests modules. They all look like they have a simple enough method for GET, but it gets more complicated with POST and maybe DELETE. It seems like it should be super simple: a function that takes a method (POST/GET/DELETE/etc.), a URL, and a set of parameters, so why is it more complicated than that? If it were that easy, I don't understand why all these API wrappers would be half done, since it should theoretically be as simple as calling a function with those 3 parameters (with an array for the 3rd parameter) for every function in the API.
In the urllib3 package, there's this function under the RequestMethods class:
def request(self, method, url, fields=None, headers=None, **urlopen_kw)
Looks like I understand everything except what the headers are and the **urlopen_kw, but I think it should work without those two arguments, correct?
I'd appreciate it if someone could point me in the right direction.
From the docs:
Here is an example of doing a PUT request using Request:
import urllib.request

DATA = b'some data'
req = urllib.request.Request(url='http://localhost:8080', data=DATA, method='PUT')
with urllib.request.urlopen(req) as f:
    pass
print(f.status)
print(f.reason)
In your case the method would be 'POST', 'DELETE' or whatever you like.
If you want to make more complex requests, have a look at this guide for the httplib2 library - it's worth reading.
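If you would rather use the Requests library, GET, POST and DELETE are one call each, and the access_token can go in params. A rough sketch - the endpoints below are placeholders, not the real Pinterest URLs:

import requests

BASE = 'https://api.pinterest.com/v1'            # hypothetical base URL
params = {'access_token': 'YOUR_ACCESS_TOKEN'}   # placeholder token

# GET: list the authorized user's followers
followers = requests.get(BASE + '/me/followers/', params=params).json()

# POST: follow a user
r = requests.post(BASE + '/me/following/users/', params=params,
                  data={'user': 'some_username'})

# DELETE: unfollow a user
r = requests.delete(BASE + '/me/following/users/some_username/', params=params)
print(r.status_code)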

Creating a Like-Gate (reveal tab) for a Facebook app, using Django/Python

I'm building a Facebook app using Python/Django. I've installed Fandjango and that works great. Just one more thing I need.
I'd like to build a "like-gate" for the app. I'd like the app to detect whether the user has "liked" a Fan page before they can view the bulk of it. I haven't found a good solution for that yet.
I'm wary of using something like PyFacebook. Can someone suggest a good option? Thanks.
Thanks. I got this to work by reading through the documentation for the facepy module I have installed. Here's how you access a user's "like" info for a particular page:
from facepy import SignedRequest

if 'signed_request' in request.REQUEST:
    signed_request = SignedRequest.parse(request.REQUEST.get('signed_request'),
                                         settings.FACEBOOK_APPLICATION_SECRET_KEY)
    if signed_request.page.is_liked:
        test = "yes!"
    else:
        test = "no!"
Fandjango wraps facepy so it's actually easier. Install only Fandjango via pip to avoid conflicts.
In the view with the request object, you can simply check against
request.facebook.signed_request.page.is_liked
and perform different actions. Remember that page will be None if the app is not in a page.
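For completeness, here is a minimal sketch of what a like-gated view might look like, assuming Fandjango's facebook_authorization_required decorator and the attribute above (the template names are made up):

from django.shortcuts import render
from fandjango.decorators import facebook_authorization_required

@facebook_authorization_required
def canvas(request):
    page = request.facebook.signed_request.page
    if page and page.is_liked:
        return render(request, 'content.html')       # the real content
    return render(request, 'please_like.html')       # the gate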
I'm no Facebook expert, and haven't played that much with the Facebook graph, but this should work.
Once you've authenticated the user, you can get their likes off the Facebook Graph:
https://graph.facebook.com/me/likes/{your_contents_graph_id}?access_token={access_token}
In Python I might query this via:
import requests

url = "https://graph.facebook.com/me/likes/{your_contents_graph_id}?access_token={access_token}".format(
    your_contents_graph_id=your_contents_graph_id, access_token=access_token)
r = requests.get(url)
if r.status_code == 200:   # status_code is an int, not a string
    page_liked = True
else:
    page_liked = False
All this said, I wouldn't like your content. It's not appropriate for me or anyone else to like something they haven't reviewed in full. You might want to consider an alternative way to get people to look at your content.

How to use python for a webservice

I am really new to python, just played around with the scrapy framework that is used to crawl websites and extract data.
My question is: how do I pass parameters to a Python script that is hosted somewhere online?
E.g. I make the following request: mysite.net/rest/index.py
Now I want to pass some parameters, similar to PHP's *.php?id=...
Yes, that would work, although you would need to write handlers for extracting the URL parameters in index.py. Try the cgi module for this in Python, as in the sketch below.
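A minimal sketch of what index.py might look like as a plain CGI script, assuming the server executes it as CGI (the parameter name id is just taken from your example):

#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()          # parses ?id=... from the query string
record_id = form.getvalue('id')    # None if the parameter is missing

print("Content-Type: text/plain")
print()
print("id = %s" % record_id)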
Please note that there are several robust Python-based web frameworks available (e.g. Django, Pylons) which automatically parse your URL and build a dictionary of all its parameters, plus they do much more, like session management and user authentication. I would highly recommend using one of them for faster turnaround and fewer maintenance hassles.
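For comparison, a rough Django equivalent, assuming a URL pattern already routes to this view:

from django.http import HttpResponse

def index(request):
    record_id = request.GET.get('id')   # query-string parameters come pre-parsed
    return HttpResponse('id = %s' % record_id)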

Noob Question: Python + Twitter + App Engine - Oauth

I'm sorry, but I'm having some trouble implementing OAuth within my App Engine Python project.
I've been working from http://github.com/tav/tweetapp, but I don't think I have a strong enough grasp of the platform to understand how to implement this class within the main.py where I'm building the rest of my app.
This may be a feeble attempt, but here is what I have so far:
twa = twitter_auth
client = twa.OAuthClient('twitter')
I've created a source folder within my project called "twitter_auth", which contains a file called "twitter_auth.py" holding the library linked above, and a completely empty file called __init__.py.
I really have no idea what to do from here :/
Let me recommend taking a look at the tweepy library and some example tweepy apps. Specifically here: http://github.com/wasauce/tweepy-examples
This shows how to use oauth to authenticate a user: http://github.com/wasauce/tweepy-examples/tree/master/appengine/oauth_example/
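For reference, the classic tweepy OAuth flow looks roughly like this; the keys, callback URL, and verifier below are placeholders you'd get from your own Twitter app and callback handler:

import tweepy

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET',
                           'http://example.com/oauth/callback')
redirect_url = auth.get_authorization_url()   # send the user here to authorize

# ...Twitter redirects back to your callback with an oauth_verifier...
auth.get_access_token('OAUTH_VERIFIER_FROM_CALLBACK')

api = tweepy.API(auth)
print(api.me().screen_name)                   # the authenticated user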
As Hagge said, it sounds like your issue is more with the tweetapp library than with App Engine. However, if you would like to know more about OAuth on App Engine and if I may be allowed to link to myself, my two articles on the topic seem to be reasonably popular.
The tweetapp library was an early prototype for Twitter OAuth on App Engine. Tav did the heavy lifting and I deployed the site http://twitteroauth.appspot.com , using some of the tweetapp library. The actual source of that site is here (I need to update the site to point here): http://github.com/ryanwi/twitteroauth
I am still using it in production, but, it has aged and does not work for all API calls. I'd recommend trying a different, more up to date and maintained library as others have mentioned.
But, take a look at the twitteroauth source if you want to try to get a first attempt working.
These two are on Twitter's list:
http://github.com/brosner/python-oauth2
http://code.google.com/p/oauth-python-twitter2/
I'm not familiar with that library, but after a quick look and seeing the warning that it is not maintained I'd search for something better. I implemented a simple Twitter connection based on Tornado's auth: see an example of how to make Twitter API calls here (and an authentication example here). In case you don't want to use tipfy, I recommend implementing the python-twitter library in your framework of choice.
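If you end up on python-twitter, the calls are straightforward once you have the four OAuth tokens; a small sketch with placeholder credentials:

import twitter

api = twitter.Api(
    consumer_key='CONSUMER_KEY',
    consumer_secret='CONSUMER_SECRET',
    access_token_key='ACCESS_TOKEN',
    access_token_secret='ACCESS_TOKEN_SECRET',
)
print(api.VerifyCredentials().screen_name)   # the authenticated account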
