I wish to access information from this website using Python, but I cannot figure out how to POST my login information using urllib2:
https://www.linksyssmartwifi.com/
Can anyone explain why it isn't working?
Edit - This question has been listed as too broad. I will be more specific.
When I try the following code, I can't seem to POST the username and password to the webpage.
import urllib2

auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm=' ??? ',
                          uri=' ??? ',
                          user='USERNAME',
                          passwd='PASSWORD')
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
urllib2.urlopen('EXAMPLELINK')
I am assuming this is because there is no query string in the URL containing the data, and/or no API on this page that accepts POSTs. I don't understand WHY it is not working; I don't actually get any error. If I print the content received after urlopen, I get the HTML of a page, but not logged in.
But my understanding here may be incorrect.
What I actually want is to find out how many people are connected to my home network, using a remote connection; I'm assuming this is the only way. I would like to automate some things based on who is connected to my local network, and that information is available on this page after I log in.
I would preferably like to use Python or Bash scripting to do this.
This site does not use basic HTTP authentication, so the HTTPBasicAuthHandler you made will never even be called. The site simply POSTs the username and password over SSLv3. I checked it out with Fiddler.
Also, you may need to create an SSL handler. I had problems with urllib2 over HTTPS and had to use the handler below.
import urllib2, ssl

# Force SSLv3, which this site required in my testing.
sslv3_handler = urllib2.HTTPSHandler(context=ssl.SSLContext(ssl.PROTOCOL_SSLv3))
opener = urllib2.build_opener(sslv3_handler)
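Once HTTPS works, logging in to a site like this usually means POSTing the form fields and holding on to the session cookie. A minimal sketch with urllib2, where the form field names and the login path are assumptions (capture the real login request in Fiddler to find both):

import urllib, urllib2, cookielib, ssl

cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(
    urllib2.HTTPSHandler(context=ssl.SSLContext(ssl.PROTOCOL_SSLv3)),
    urllib2.HTTPCookieProcessor(cookie_jar))
# 'user', 'password' and the /login path are guesses, not the site's real names.
login_data = urllib.urlencode({'user': 'USERNAME', 'password': 'PASSWORD'})
response = opener.open('https://www.linksyssmartwifi.com/login', login_data)
print response.read()  # should be the logged-in page if the guesses are right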
Related
So I'm working on a Django project, and one of the objectives is to let a separate Python script make an HTTP request (using the Requests library) to fetch JSON data after being authenticated. This works fine; the problem is that if I go directly to the URL that the requests.get call uses, I can see all of the data without any user authentication being involved. This makes my authentication process pointless, as the data is easily visible by simply going to the URL. So how would I hide the data on the web side from being viewed, but still allow a GET request to pull the data into my script?
On a side note, I already have an authentication system for the web interface portion of my project (which displays the data). I've tried putting the data behind it, but with no success.
import json, requests, _mysql

login_attempt = requests.post('http://127.0.0.1:8000/m_app/data_login/',
                              {'username': 'test', 'password': 'password1234'})
if login_attempt.content.decode('UTF-8') == 'Successful':
    print('Logged in.')
else:
    print('Not logged in.')

cookies = dict(sessionid=login_attempt.cookies.get('sessionid'))
data = requests.get('http://127.0.0.1:8000/m_app/load/data',  # if I type this URL in, I see the data
                    cookies=cookies)
print(data.content)  # prints desired data
By default all your views in Django are public, but it's easy to protect them by checking request.user or using the login_required decorator.
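For instance, a minimal sketch of protecting the data view with login_required (the view name and payload here are placeholders, not the actual project code):

from django.contrib.auth.decorators import login_required
from django.http import JsonResponse

@login_required
def load_data(request):
    # Only requests carrying an authenticated session (e.g. the sessionid
    # cookie obtained from the data_login POST) get this far; anonymous
    # visitors are redirected to the login page instead of seeing the data.
    return JsonResponse({'example': 'data'})

The script in the question already sends the sessionid cookie with its GET, so it keeps working, while typing the URL into a browser without logging in no longer exposes the data.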
First of all, I'm not a Python guru, as you can probably tell... So here we go.
I'm trying to use Asana's API to pull data (projects, tasks, etc.) with Python requests, doing the authentication with OAuth 2.0... I've been trying to find a simple Python script to begin with, but I haven't had any luck and I can't find a decent, simple example!
I already created the app and got my client_id and client_secret. But I don't really know where or how to start... Could anybody help me, please?
import sys, os, requests
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
import asana
import json
from six import print_
import requests_oauthlib
from requests_oauthlib import OAuth2Session

# Asana client ID and secret, set as environment variables to avoid
# hardcoding credentials into the script.
client_id = os.environ['ASANA_CLIENT_ID']
client_secret = os.environ['ASANA_CLIENT_SECRET']
# This special redirect URI will prompt the user to copy/paste the code.
# Useful for command line scripts and other non-web apps.
redirect_uri = 'urn:ietf:wg:oauth:2.0:oob'

if 'ASANA_CLIENT_ID' in os.environ:
    # Creates a client with previously obtained OAuth credentials.
    client = asana.Client.oauth(
        client_id=client_id,
        client_secret=client_secret,
        redirect_uri=redirect_uri
    )
    print_("authorized=", client.session.authorized)

    # Get an authorization URL:
    (url, state) = client.session.authorization_url()
    try:
        # In a web app you'd redirect the user to this URL when they take
        # action to log in with Asana or connect their account to Asana.
        import webbrowser
        webbrowser.open(url)
    except Exception as e:
        print_("Open the following URL in a browser to authorize:")
        print_(url)

    print_("Copy and paste the returned code from the browser and press enter:")
    code = sys.stdin.readline().strip()
    # Exchange the code for a bearer token.
    token = client.session.fetch_token(code=code)
    # print_("token=", json.dumps(token))
    print_("authorized=", client.session.authorized)

    me = client.users.me()
    print_("Hello " + me['name'] + "\n")

    params = {'client_id': client_id, 'redirect_uri': redirect_uri, 'response_type': token}
    print_("*************** Request begins *******************" + "\n")
    print_("r = requests.get('https://app.asana.com/api/1.0/users/me')" + "\n")
    r = requests.get('https://app.asana.com/api/1.0/users/me', params)
    print_(r)
    print_(r.json())
    print_(r.encoding)

    workspace_id = me['workspaces'][0]['id']
    print_("My workspace ID is" + "\n")
    print_(workspace_id)
    print_(client.options)
I'm not sure how to use the requests lib with Asana; their Python docs did not help me. I'm trying to pull the available projects and their colour codes so I can later plot them in a web browser (for a high-level view of the different projects and their respective colours: green, yellow or red).
When I enter the URL (https://app.asana.com/api/1.0/users/me) into a browser, it gives me back a JSON response with the data, but when I try to do the same with the script, I get a 401 (Not Authorized) response.
Does anybody know what I'm missing / doing wrong?
Thank you!!!
I believe the issue is that the Requests library is a lower-level library: it knows nothing about your OAuth session, so you would need to pass all of the authentication parameters along with your requests yourself.
Is there a reason you are not exclusively using the Asana Python client library to make requests? All of the data you are looking to fetch from Asana (projects, tasks, etc.) are accessible using the Asana Python library. You will want to look in the library to find the methods you need. For example, the methods for the tasks resource can be found here. I think this approach will be easier (and less error-prone) than switching between the Asana lib and the Requests lib. The Asana lib is actually built on top of Requests (as seen here).
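That said, if you do want to hit the REST endpoint with Requests directly, the 401 happens because nothing in the plain requests.get call carries your credentials; it works in the browser because you are already logged in to Asana there and the browser sends your session cookies. A minimal sketch, assuming token is the dict returned by fetch_token() in the question's script:

# OAuth 2.0 bearer authentication; without this header the API answers 401.
headers = {'Authorization': 'Bearer ' + token['access_token']}
r = requests.get('https://app.asana.com/api/1.0/users/me', headers=headers)
print_(r.json())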
I am trying to redirect one page to another using mitmproxy and Python. I can run my inline script together with mitmproxy without issues, but I am stuck when it comes to changing the URL to another URL. For example, going to google.com should redirect to stackoverflow.com.
def response(context, flow):
    print("DEBUG")
    if flow.request.url.startswith("http://google.com/"):
        print("It does contain it")
        flow.request.url = "http://stackoverflow.com/"
This should in theory work. I see http://google.com/ in the mitmproxy GUI (as a GET), but print("It does contain it") never fires.
When I instead put flow.request.url = "http://stackoverflow.com" right under print("DEBUG"), it doesn't work either.
What am I doing wrong? I have also tried if "google.com" in flow.request.url to check whether the URL contains google.com, but that doesn't work either.
Thanks
The following mitmproxy script will:
redirect requests from mydomain.com to newsite.mydomain.com
change the request path (supposed to be something like /getjson?...) to a new one (/getxml)
change the destination host scheme
change the destination server port
overwrite the Host request header to pretend to be the original host
import mitmproxy
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(flow):
    if flow.request.pretty_host.endswith("mydomain.com"):
        mitmproxy.ctx.log(flow.request.path)
        method = flow.request.path.split('/')[3].split('?')[0]
        flow.request.host = "newsite.mydomain.com"
        flow.request.port = 8181
        flow.request.scheme = 'http'
        if method == 'getjson':
            flow.request.path = flow.request.path.replace(method, "getxml")
        flow.request.headers["Host"] = "newsite.mydomain.com"
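To try it, save the script under a name of your choice (say redirect.py, a hypothetical filename) and load it with mitmproxy's script flag, e.g. mitmproxy -s redirect.py, or mitmdump -s redirect.py for the non-interactive version.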
You can set the .url attribute, which will update the underlying attributes. Looking at your code, your problem is that you change the URL in the response hook, after the request has already been made. You need to change the URL in the request hook, so that the change is applied before the resource is requested from the upstream server.
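Concretely, a minimal sketch of the script from the question with the logic moved into the request hook:

def request(context, flow):
    # Runs before the request goes upstream, so rewriting the URL here
    # changes which resource is actually fetched.
    if flow.request.url.startswith("http://google.com/"):
        flow.request.url = "http://stackoverflow.com/"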
Setting the url attribute will not help you, as it is merely constructed from underlying data. [EDIT: I was wrong, see Maximilian’s answer. The rest of my answer should still work, though.]
Depending on what exactly you want to accomplish, there are two options.
(1) You can send an actual HTTP redirection response to the client. Assuming that the client understands HTTP redirections, it will submit a new request to the URL you give it.
from mitmproxy.models import HTTPResponse
from netlib.http import Headers

def request(context, flow):
    if flow.request.host == 'google.com':
        flow.reply(HTTPResponse('HTTP/1.1', 302, 'Found',
                                Headers(Location='http://stackoverflow.com/',
                                        Content_Length='0'),
                                b''))
(2) You can silently route the same request to a different host. The client will not see this; it will assume that it's still talking to google.com.
def request(context, flow):
    if flow.request.url == 'http://google.com/accounts/':
        flow.request.host = 'stackoverflow.com'
        flow.request.path = '/users/'
These snippets were adapted from an example found in mitmproxy’s own GitHub repo. There are many more examples there.
For some reason, I can’t seem to make these snippets work for Firefox when used with TLS (https://), but maybe you don’t need that.
I am trying to use Python to get a JSON file from the Web. If I open the URL in my browser (Mozilla or Chromium) I do see the JSON. But when I do the following with Python:
import json, urllib2

response = urllib2.urlopen(url)
data = json.loads(response.read())
I get an error message that tells me the following (translated into English): Errno 10060: the connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because the connected host failed to respond.
ADDED
It looks like many people have faced the described problem. There are also some answers to similar (or the same) questions. For example, here we can see the following solution:
import requests
r = requests.get("http://www.google.com", proxies={"http": "http://61.233.25.166:80"})
print(r.text)
This is already a step forward for me (I think it is very likely that the proxy is the reason for the problem). However, I still have not got it working, since I do not know the URL of my proxy and I will probably need a user name and password. How can I find them? And how is it that my browsers have them but I do not?
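For what it's worth, if credentials do turn out to be needed, requests accepts them embedded in the proxy URL. A minimal sketch where the host, port, user and password are all placeholders:

import requests

# Substitute your actual proxy details; url is the JSON URL from above.
proxies = {'http': 'http://USER:PASSWORD@my_proxy.blabla.com:8080'}
r = requests.get(url, proxies=proxies)
print(r.status_code)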
ADDED 2
I think I am now one step further. I used this site to find out what my proxy is: http://www.whatismyproxy.com/
Then I used the following code:
proxies = {'http':'my_proxy.blabla.com/'}
r = requests.get(url, proxies = proxies)
print r
As a result I get
<Response [404]>
This does not look so good, but at least I think my proxy address is correct, because when I randomly change it I get a different error:
Cannot connect to proxy
So I can connect to the proxy, but something is not found.
I think there might be something wrong with how you're trying to get the JSON from the online source (URL). Just to make things clear, here is a small code snippet:
#!/usr/bin/env python
try:
    # For Python 3+
    from urllib.request import urlopen
except ImportError:
    # For Python 2
    from urllib2 import urlopen
import json

def get_jsonparsed_data(url):
    response = urlopen(url)
    data = response.read().decode('utf-8')
    return json.loads(data)
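Example usage, with a placeholder URL standing in for whatever endpoint serves your JSON:

url = 'http://www.example.com/data.json'  # placeholder
print(get_jsonparsed_data(url))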
If you still get a connection error, you can try a couple of steps:
Try to urlopen() a random site from the interpreter (interactive mode), as in the sketch after this list. If you are able to grab the source code, you're good. If not, check your internet connection or try the requests module. Check here
Check and see if the JSON at the URL is in correct syntax. For sample JSON syntax check here
Try the simplejson module.
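A quick interpreter-style check for the first step (example.com is just a placeholder; any public site will do):

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

print(urlopen('http://www.example.com').read()[:200])  # first bytes of the page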
Edit 1:
If you want to access websites through a system-wide proxy, you will have to use a proxy handler that connects to that proxy via loopback (localhost). Sample code is shown below.
import urllib2

proxy = urllib2.ProxyHandler({
    'http': '127.0.0.1',
    'https': '127.0.0.1'
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# This way you can send both HTTP and HTTPS requests through the proxy.
urllib2.urlopen('http://www.google.com')
urllib2.urlopen('https://www.google.com')
I have not worked a lot with ProxyHandler; I just know the theory and the code. I am sure there are better ways to access websites through proxies, ones that do not involve installing the opener every time you run the program, but hopefully this will point you in the right direction.
I am trying to put together a bash or Python script to play with the Facebook Graph API. Using the API looks simple, but I'm having trouble setting up curl in my bash script to call authorize and access_token. Does anyone have a working example?
Update 2018-08-23
Since this still gets some views and upvotes, I just want to mention that by now there seems to exist a maintained 3rd-party SDK: https://github.com/mobolic/facebook-sdk
Better late than never; maybe others searching for this will find it. I got it working with Python 2.6 on a MacBook.
This requires you to have
the Python facebook module installed: https://github.com/pythonforfacebook/facebook-sdk,
an actual Facebook app set up
and the profile you want to post to must have granted the proper permissions to allow all the different operations, like reading and writing.
You can read about the authentication stuff in the Facebook developer documentation. See https://developers.facebook.com/docs/authentication/ for details.
This blog post might also help with this: http://blog.theunical.com/facebook-integration/5-steps-to-publish-on-a-facebook-wall-using-php/
Here goes:
#!/usr/bin/python
# coding: utf-8

import facebook
import urllib
import urlparse
import subprocess
import warnings

# Hide deprecation warnings. The facebook module isn't that up-to-date (facebook.GraphAPIError).
warnings.filterwarnings('ignore', category=DeprecationWarning)

# Parameters of your app and the id of the profile you want to mess with.
FACEBOOK_APP_ID = 'XXXXXXXXXXXXXXX'
FACEBOOK_APP_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
FACEBOOK_PROFILE_ID = 'XXXXXX'

# Trying to get an access token. Very awkward.
oauth_args = dict(client_id=FACEBOOK_APP_ID,
                  client_secret=FACEBOOK_APP_SECRET,
                  grant_type='client_credentials')
oauth_curl_cmd = ['curl',
                  'https://graph.facebook.com/oauth/access_token?' + urllib.urlencode(oauth_args)]
oauth_response = subprocess.Popen(oauth_curl_cmd,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE).communicate()[0]

try:
    oauth_access_token = urlparse.parse_qs(str(oauth_response))['access_token'][0]
except KeyError:
    print('Unable to grab an access token!')
    exit()

facebook_graph = facebook.GraphAPI(oauth_access_token)

# Try to post something on the wall.
try:
    fb_response = facebook_graph.put_wall_post('Hello from Python',
                                               profile_id=FACEBOOK_PROFILE_ID)
    print fb_response
except facebook.GraphAPIError as e:
    print 'Something went wrong:', e.type, e.message
Error checking on getting the token might be better but you get the idea of what to do.
Here you go, as simple as it can get. It doesn't require any 3rd-party SDK.
Make sure the Python 'requests' module is installed:
import requests

def get_fb_token(app_id, app_secret):
    url = 'https://graph.facebook.com/oauth/access_token'
    payload = {
        'grant_type': 'client_credentials',
        'client_id': app_id,
        'client_secret': app_secret
    }
    response = requests.post(url, params=payload)
    return response.json()['access_token']
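Example usage (the app ID and secret are placeholders):

token = get_fb_token('YOUR_APP_ID', 'YOUR_APP_SECRET')
print(token)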
Easy! Just use facebook-sdk.
import facebook
app_id = 'YOUR_APP_ID'
app_secret = 'YOUR_APP_SECRET'
graph = facebook.GraphAPI()
# exactly what you're after ;-)
access_token = graph.get_app_access_token(app_id, app_secret)
You first need to set up an application. The following will then spit out an access token given your application ID and secret:
> curl -F type=client_cred -F client_id=[...] -F client_secret=[...] https://graph.facebook.com/oauth/access_token
Since a web browser needs to be involved for the actual authorization, there is no such thing as a "standalone script" that does it all. If you're just playing with the API, or writing a script to automate something for yourself, and want an access_token for yourself that does not expire, you can grab one here: http://fbrell.com/auth/offline-access-token
There IS a way to do it. I've found it, but it's a lot of work, requires you to spoof a browser 100%, and you'll likely be breaking their terms of service.
Sorry I can't provide all the details, but the gist of it:
1. Assuming you have a username/password for a Facebook account, curl the oauth/authenticate... page. Extract any cookies returned in the "Set-Cookie" header and then follow any "Location" headers (compiling cookies along the way).
2. Scrape the login form, preserving all fields, and submit it (setting the Referer and Content-Type headers, and inserting your email/pass); the same cookie collection from (1) is required.
3. Same as (2), but now you need to POST the approval form acquired after (2) was submitted, setting the Referer header to the URL where the form was acquired.
4. Follow the redirects until you are sent back to your site, and get the "code" parameter out of that URL.
5. Exchange the code for an access_token at the OAuth endpoint.
The main gotchas are cookie management and redirects. Basically, you MUST mimic a browser 100%. I think it's hackery, but there is a way; it's just really hard!
s29 has the correct answer but leaves some steps unsolved. The following demonstrates a working script for acquiring an access token using the Facebook SDK:
__requires__ = ['facebook-sdk']

import os
import facebook

def get_app_access_token():
    client = facebook.GraphAPI()
    return client.get_app_access_token(
        os.environ['FACEBOOK_APP_ID'],
        os.environ['FACEBOOK_APP_SECRET'],
    )

__name__ == '__main__' and print(get_app_access_token())
This script expects the FACEBOOK_APP_ID and FACEBOOK_APP_SECRET environment variables to be set to the values for your app. Feel free to adapt the technique to load those values from a different source.
You must first install the Facebook SDK and run the script (pip install facebook-sdk; python get-token.py), or use another tool like rwt to invoke the script (rwt -- get-token.py).
Here is the Python code. Try running some of these examples on the command line; they work fine for me. See also: http://www.pythonforfacebook.com/