I'm writing a script in Python that should determine if it has internet access.
import urllib

CHECK_PAGE = "http://64.37.51.146/check.txt"
CHECK_VALUE = "true\n"
PROXY_VALUE = "Privoxy"
OFFLINE_VALUE = ""

page = urllib.urlopen(CHECK_PAGE)
response = page.read()
page.close()

if response.find(PROXY_VALUE) != -1:
    urllib.getproxies = lambda x=None: {}
    page = urllib.urlopen(CHECK_PAGE)
    response = page.read()
    page.close()

if response != CHECK_VALUE:
    print "'" + response + "' != '" + CHECK_VALUE + "'"
else:
    print "You are online!"
I use a proxy on my computer, so correct proxy handling is important. If it can't connect to the internet through the proxy, it should bypass the proxy and see if it's stuck at a login page (as many public hotspots I use do). With that code, if I am not connected to the internet, the first read() returns the proxy's error page. But when I bypass the proxy after that, I get the same page. If I bypass the proxy BEFORE making any requests, I get an error, as I should. I think Python is caching the page from the first time around.
How do I force Python to clear its cache (or is this some other problem)?
Calling urllib.urlcleanup() before each call to urllib.urlopen() will solve the problem. urllib.urlopen() calls the urlretrieve() function, which creates a cache to hold its data, and urlcleanup() removes that cache.
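A minimal sketch of that fix (shown with the Python 3 module layout, where both functions live in urllib.request; in Python 2 they are attributes of the urllib module itself):

```python
from urllib.request import urlcleanup, urlopen


def fetch_fresh(url):
    # Clear the temp-file cache kept by urlretrieve() before fetching,
    # so a stale copy of the page is never returned.
    urlcleanup()
    return urlopen(url).read()
```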
You want
page = urllib.urlopen(CHECK_PAGE, proxies={})
Remove the
urllib.getproxies = lambda x = None: {}
line.
I have a problem that, due to my limited knowledge of Python, I can't solve. I have this code, and when it arrives at this command, the code stops working:
req_ORDER_CURRENT = req_ORDER_CURRENT.json()
I noticed that if I launch the GET in the browser, it returns a blank page, but the response code is 200:
req_ORDER_CURRENT = requests.get("http://" + ip + "/order/current")
statusCodeReq = req_ORDER_CURRENT.status_code
print(req_ORDER_CURRENT)
if statusCodeReq == 200:
    req_ORDER_CURRENT = req_ORDER_CURRENT.json()
    print(req_ORDER_CURRENT)
Is there a system to control this situation?
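One common way to control this situation is to check for an empty body and catch the decode error before trusting .json(). A minimal sketch, assuming a requests-style response object with a .text attribute (the FakeResponse class below is only a stand-in for illustration):

```python
import json


def safe_json(resp):
    # A 200 status does not guarantee a JSON body: the server may
    # return a completely blank page, as described above.
    if not resp.text.strip():
        return None
    try:
        return json.loads(resp.text)
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        return None


class FakeResponse:  # stand-in for a real requests.Response
    def __init__(self, text):
        self.text = text


print(safe_json(FakeResponse("")))           # None: blank page
print(safe_json(FakeResponse('{"id": 7}')))  # {'id': 7}
```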
I'm experimenting with proxy servers. I want to create a bot that connects to my web server every few minutes and scrapes a file (namely index.html) for changes.
I tried to apply things I learned in some multi-hour Python tutorials, and figured that, to make it a bit more fun, I could use random proxies.
So I wrote down this method:
import requests
from bs4 import BeautifulSoup
from random import choice

# here I get the proxy from a proxy list by processing a table embedded
# in the HTML with BeautifulSoup
def get_proxy():
    print("now in get_proxy")
    proxyDomain = 'https://free-proxy-list.net/'
    r = requests.get(proxyDomain)
    print("making the soup now")
    soup = BeautifulSoup(r.content, 'html.parser')
    table = soup.find('table', {'id': 'proxylisttable'})
    # this part works
    #print(table.get_text)
    print("time for the list")
    ipAddresses = []
    for row in table.findAll('tr'):
        columns = row.findAll('td')
        try:
            ipAddresses.append("https://" + str(columns[0].get_text()) + ":" + str(columns[1].get_text()))
            #ipList.append(str(columns[0].get_text()) + ":" + str(columns[1].get_text()))
        except:
            pass
    # here the program returns one random IP address from the list
    return choice(ipAddresses)
    # return 'https://' + choice(iplist)

def proxy_request(request_type, url, **kwargs):
    print("now in proxy_request")
    while 1:
        try:
            proxy = get_proxy()
            print("today we are using {}".format(proxy))
            # so until this line everything seems to work as I want it to
            # now the next line should do the proxied request, and at the
            # end of the loop it should return some HTML text...
            r = requests.request(request_type, url, proxies=proxy, timeout=5, **kwargs)
            break
        except:
            pass
    return r

def launch():
    print("now in launch")
    r = proxy_request('get', 'https://mysliwje.uber.space.')
    ### but this text never arrives here - maybe the request is being carried out the wrong way
    ### does anybody have an idea how to solve this so that it may work?
    print(r.text)

launch()
As I explained in the code comments, the code works nicely up to a point: it picks a random IP from a random IP list and even prints it to the CLI. But the very next step suddenly seems to go wrong, because the tool just loops back and scrapes a new IP address,
and another
and another
and another
and another...
from a list that seems to be updated every few minutes.
So I ask myself what is happening: why don't I see the simple HTML code of my index page?
Anybody got an idea?
Thanks
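For what it's worth, one likely culprit in the loop above is the proxies argument: requests expects a dict mapping each URL scheme to a proxy URL, not a bare "https://ip:port" string, so every request raises and the bare except: silently retries with a fresh proxy forever. A sketch of the shape requests expects (the address below is made up, and the real request is left commented out because it needs a live proxy):

```python
def as_proxies(proxy_url):
    # requests wants a mapping like {'scheme': 'proxy-url'}; use the same
    # proxy for both plain and TLS traffic here.
    return {"http": proxy_url, "https": proxy_url}


proxies = as_proxies("https://203.0.113.10:8080")  # hypothetical proxy address
# r = requests.request("get", url, proxies=proxies, timeout=5)
print(proxies["https"])
```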
I'm making a program that tries brute-forcing a cookie value with Python.
I'm working on an environment that is meant for IT-security students; it's like CTF-as-a-service.
The mission I'm working on is a badly programmed login site that has a weak way of creating a cookie session.
The cookie consists of three values: an integer returned from the server side, the username, and a hash. I've already managed to acquire the username and hash, but I need to brute-force the int value.
I have never done anything with cookies or tried to brute-force them before.
I was thinking I could manually observe the program running and returning the header of the site until the content length changes.
This is the code I have at the moment.
from requests import session
import Cookie

def createCook(salt):
    # known attributes for the cookie
    salt = int('to_be_brute_forced')
    user = str('user')
    hash = str('hash_value')
    # create the cookie
    c = Cookie.SimpleCookie()
    # assign the cookie name (session) and values (s, u, h)
    c['session'] = salt + user + hash
    c['session']['domain'] = '127.0.0.1:7777'
    c['session']['path'] = "/"
    c['session']['expires'] = 1*1*3*60*60
    print c

def Main():
    print "Feed me Seymour: "
    salt = 0
    while (salt < 1000):
        print 'this is the current cookie: ', createCook(salt)
        cc = createCook(salt)
        salt = salt + 1
        try:
            with session() as s:
                s.post('http://127.0.0.1:7777/index.php', data=cc)
                request = s.get('http://127.0.0.1:7777/index.php')
                print request.headers
                print request.text
        except KeyboardInterrupt:
            exit(0)

if __name__ == '__main__':
    Main()
So my questions are:
1. Do I need to save the cookie before posting?
2. How do I always add +1 to the salt and recreate the cookie?
3. How do I post it to the site and know that the correct one is found?
When posting the request, you have to pass the cookies as the cookies argument instead of data:
s.post(url, cookies=<cookiejar>)
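To address questions 2 and 3 as well, here is a sketch of the loop body (the concatenation scheme and the helper names are assumptions based on the description above, not the actual CTF format): rebuild the cookie value fresh for each salt, send it via cookies=, and detect the correct salt by a change in something observable such as Content-Length.

```python
def make_session_cookie(salt, user, hash_value):
    # Hypothetical concatenation; match whatever scheme the target site
    # actually uses to build its session value.
    return {"session": str(salt) + user + hash_value}


def looks_logged_in(headers, baseline_length):
    # Detect a hit by a change in Content-Length, as proposed above.
    return headers.get("Content-Length") != baseline_length


cookie = make_session_cookie(42, "user", "hash_value")
print(cookie)  # {'session': '42userhash_value'}
# with session() as s:
#     r = s.get('http://127.0.0.1:7777/index.php', cookies=cookie)
#     if looks_logged_in(r.headers, baseline_length):
#         print 'found salt'
```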
How do I update FB Status using Python & GraphAPI? This question has been asked before, but many of the solutions have been deprecated and the requirement of GraphAPI seems to have rendered many solutions irrelevant.
I have fiddled around with the fbpy, Facebook, OAuth, and oauth2 packages, and have looked through their examples, but I still cannot figure out how to get them working. I have no trust in any of the code or the packages I have been using and am wondering if anyone has any definitive solutions that they know will work.
The first thing you need to do is understand the login flows. You should understand these if you want to switch easily between the different Facebook libraries, since implementations range from very verbose code to very simple code.
The next thing is that there are different ways to handle OAuth and different ways to present and launch your web app in Python. There is no way to authorize without going through a browser; otherwise you would have to keep copy-pasting the access_token into the code.
Let's say you chose web.py to handle your web app presentation and requests.py to handle the Graph API HTTP calls.
import web, requests
Then set up the URLs we want all requests to go through:
url = (
    '/', 'index'
)
Now get your application id, secret and post-login URL you would like to use
app_id = "YOUR_APP_ID"
app_secret = "APP_SECRET"
post_login_url = "http://0.0.0.0:8080/"
This code will have one class, index, to handle the logic. In this class we want to deal with the authorization code Facebook will return after login:
user_data = web.input(code=None)
code = user_data.code
From here, set up a conditional to check the code:
if not code:
    # we are not authorized
    # send to oauth dialog
else:
    # authorized, get access_token
Within the "not authorized" branch, send the user to the dialog:
dialog_url = ( "http://www.facebook.com/dialog/oauth?" +
               "client_id=" + app_id +
               "&redirect_uri=" + post_login_url +
               "&scope=publish_stream" )
return "<script>top.location.href='" + dialog_url + "'</script>"
Otherwise, we can extract the access_token using the code received:
token_url = ( "https://graph.facebook.com/oauth/access_token?" +
              "client_id=" + app_id +
              "&redirect_uri=" + post_login_url +
              "&client_secret=" + app_secret +
              "&code=" + code )
response = requests.get(token_url).content
params = {}
result = response.split("&", 1)
for p in result:
    (k, v) = p.split("=")
    params[k] = v
access_token = params['access_token']
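The manual split-and-loop can also be done with the standard library's query-string parser. A sketch with a made-up response body (Python 3 import shown; in Python 2 it is from urlparse import parse_qs, as the slimmed-down version below uses):

```python
from urllib.parse import parse_qs

# Example token-endpoint body; not a real token.
response = "access_token=AAAC123abc&expires=5184000"
params = parse_qs(response)
access_token = params["access_token"][0]
print(access_token)  # AAAC123abc
```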
From here you can choose how you want to deal with the call to update the status, for example a form,
graph_url = ( "https://graph.facebook.com/me/feed?" +
              "access_token=" + access_token )
return ( '<html><body>' + '\n' +
         '<form enctype="multipart/form-data" action="' +
         graph_url + '" method="POST">' + '\n' +
         'Say something: ' + '\n' +
         '<input name="message" type="text" value=""><br/><br/>' + '\n' +
         '<input type="submit" value="Send"/><br/>' + '\n' +
         '</form>' + '\n' +
         '</body></html>' )
Or using facepy:
from facepy import GraphAPI

graph = GraphAPI(access_token)
try:
    graph.post(
        path = 'me/feed',
        message = 'Your message here'
    )
except GraphAPI.OAuthError, e:
    print e.message
So in the end you can get a slimmed-down version like:
import web
from facepy import GraphAPI
from urlparse import parse_qs

url = ('/', 'index')

app_id = "YOUR_APP_ID"
app_secret = "APP_SECRET"
post_login_url = "http://0.0.0.0:8080/"

class index:
    def GET(self):
        user_data = web.input(code=None)
        if not user_data.code:
            dialog_url = ( "http://www.facebook.com/dialog/oauth?" +
                           "client_id=" + app_id +
                           "&redirect_uri=" + post_login_url +
                           "&scope=publish_stream" )
            return "<script>top.location.href='" + dialog_url + "'</script>"
        else:
            graph = GraphAPI()
            response = graph.get(
                path='oauth/access_token',
                client_id=app_id,
                client_secret=app_secret,
                redirect_uri=post_login_url,
                code=user_data.code
            )
            data = parse_qs(response)
            graph = GraphAPI(data['access_token'][0])
            graph.post(path='me/feed', message='Your message here')

app = web.application(url, globals())
if __name__ == "__main__":
    app.run()
For more info see
* Facebook API - User Feed: http://developers.facebook.com/docs/reference/api/user/#feed
* Publish a Facebook Photo in Python – The Basic Sauce: http://philippeharewood.com/facebook/publish-a-facebook-photo-in-python-the-basic-sauce/
* Facebook and Python – The Basic Sauce: http://philippeharewood.com/facebook/facebook-and-python-the-basic-sauce/
One possible (tested!) solution using facepy:
Create a new application or use an existing one previously created.
Generate a user access token using the Graph API explorer with the status_update extended permission for the application.
Use the user access token created in the previous step with facepy:
from facepy import GraphAPI
ACCESS_TOKEN = 'access-token-copied-from-graph-api-explorer-on-web-browser'
graph = GraphAPI(ACCESS_TOKEN)
graph.post('me/feed', message='Hello World!')
You can try this blog too. It's using fbconsole app.
The code from the blog:
from urllib import urlretrieve
import imp
urlretrieve('https://raw.github.com/gist/1194123/fbconsole.py', '.fbconsole.py')
fb = imp.load_source('fb', '.fbconsole.py')
fb.AUTH_SCOPE = ['publish_stream']
fb.authenticate()
status = fb.graph_post("/me/feed", {"message":"Your message here"})
This is how I got it to work. You absolutely don't need to create any app for this. I'll describe how to post status updates to your profile and to a facebook page of yours.
First, to post a status update to your profile:
Go to https://developers.facebook.com/tools/explorer.
You'll see a textbox with Access Token written before it. Click the 'Get Access Token' button beside this textbox. It will open a pop-up asking you for various permissions for the access token; basically, these permissions define what you can do through the Graph API using this token. Check the tick boxes beside all the permissions you need, one of which will be updating your status.
Now go ahead and install the facepy module. Best way would be to use pip install.
After this, paste the following code snippet into any .py file:
from facepy import GraphAPI
access_token = 'YOUR_GENERATED_ACCESS_TOKEN'
apiConnection = GraphAPI(access_token)
apiConnection.post(path='me/feed',
                   message='YOUR_DESIRED_STATUS_UPDATE_HERE')
Now execute this .py file the standard python way and check your facebook. You should see YOUR_DESIRED_STATUS_UPDATE_HERE posted to your facebook profile.
Next, to do the same thing with a facebook page of yours:
The procedure is almost exactly the same except for generating your access token.
Now you can't use the same access token to post to your facebook page. You need to generate a new one, which might be a little tricky for someone new to the Graph API. Here's what you need to do:
Go to the same developers.facebook.com/tools/explorer page.
Find a dropdown showing 'Graph API Explorer' and click on it. From the dropdown, select your page you want to post updates from. Generate a new access token for this page. The process is described here: . Do not forget to check the manage_pages permission in the extended permissions tab.
Now use this token in the same code as you used earlier and run it.
Go to your facebook page. You should see YOUR_DESIRED_STATUS_UPDATE posted to your page.
Hope this helps!
I'm trying to implement Facebook Realtime api with my application. I want to pull the feeds from my 'facebook PAGE'.
I've obtained app_access_token...
app_access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
url = 'https://graph.facebook.com/' + FB_CLIENT_ID + '/subscriptions?access_token=' + app_access_token
url_params = {'access_token':app_access_token,'object':'page', 'fields':'feed', 'callback_url':'http://127.0.0.1:8000/fb_notifications/', 'verify_token' : 'I am taking a random string here...'}
urlResponse = call_url(url, url_params)
Every time I call the URL with the URL parameters, I get the error: HTTP Error 400: Bad Request
But if I call the URL without the URL parameters, I get {"data": []}
Please note that in the URL parameters I'm using a random string as verify_token, and callback_url is not the same as the redirect_url parameter of the facebook application. (Just wanted to know: is it necessary to put the same URL here?)
Please tell me what I'm doing wrong.
I'm using python/django to implement this.
Use POST rather than GET, with an empty body, and with object, fields, callback_url and verify_token passed as query parameters in the URL.
See https://developers.facebook.com/docs/reference/api/realtime/.
I've figured this out...
Make a POST request to url :
'https://graph.facebook.com/' + FB_CLIENT_ID + '/subscriptions?access_token=' + app_access_token + '&object=page&fields=name&callback_url=' + YOUR_CALLBACK_URL + '&verify_token=' + ANY_RANDOM_STRING + '&method=post'
Pass {} as the post parameters.
Make sure that your callback_url is reachable. It will not work on localhost (I guess so... I was not able to test it on localhost).
So in Python the code should be :
url = 'https://graph.facebook.com/' + FB_CLIENT_ID + '/subscriptions?access_token=' + app_access_token + '&object=page&fields=name&callback_url=' + YOUR_CALLBACK_URL + '&verify_token=' + ANY_RANDOM_STRING + '&method=post'
url_params = {}
urlResponse = urllib2.urlopen(url, urllib.urlencode(url_params), timeout=socket.getdefaulttimeout()).read()
urlResponse should be null.
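Since the subscription URL above is built by plain string concatenation, the callback URL and verify token really should be URL-encoded. A sketch with placeholder values (Python 3 import shown; Python 2 uses urllib.urlencode):

```python
from urllib.parse import urlencode

params = {
    "access_token": "APP_ACCESS_TOKEN",                      # placeholder
    "object": "page",
    "fields": "name",
    "callback_url": "http://example.com/fb_notifications/",  # placeholder
    "verify_token": "any random string",
    "method": "post",
}
# urlencode escapes the callback URL and spaces in the verify token for us
url = "https://graph.facebook.com/APP_ID/subscriptions?" + urlencode(params)
print(url)
```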
The function attached to callback_url should return the challenge:
def callback_function(request):
    if request.GET:  # (handle this properly!!!)
        return request.GET.get('hub.challenge')  # hub_challenge for PHP developers. :)
Please let me know in case of any doubts!!!
To know how to handle notifications from the FB:
Kindly visit the following URL:
Handling notifications request from Facebook after successful subscription