I wish to make requests with the Python requests module. I have a large database of URLs I wish to download; they are stored in the form page.be/something/something.html.
I get a lot of ConnectionErrors. If I search the URL in my browser, the page exists.
My Code:
if not webpage.url.startswith('http://www.'):
    new_html = requests.get(webpage.url, verify=True, timeout=10).text
An example of a page I'm trying to download is carlier.be/categorie/jobs.html. This gives me a ConnectionError, logged as below:
Connection error, Webpage not available for
"carlier.be/categorie/jobs.html" with webpage_id "229998"
What seems to be the problem here? Why can't requests make the connection, while I can find the page in the browser?
The requests library requires that you supply a scheme for it to connect with (the 'http://' part of the URL). Make sure that every URL has http:// or https:// in front of it. You may want a try/except block that catches requests.exceptions.MissingSchema and retries with "http://" prepended to the URL.
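A minimal sketch of that suggestion (the fetch helper is just for illustration):

import requests

def fetch(url):
    # fetch is a hypothetical helper wrapping the call from the question
    try:
        return requests.get(url, verify=True, timeout=10).text
    except requests.exceptions.MissingSchema:
        # Database rows like "carlier.be/categorie/jobs.html" have no scheme,
        # so prepend one and retry
        return requests.get('http://' + url, verify=True, timeout=10).text

new_html = fetch(webpage.url)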
Basically, I am trying to look at the HTTP redirect data when getting a link with the Selenium webdriver.
With Python requests I would do it like this:
r = requests.get(link, allow_redirects=False)
match = re.search(r'some regex', r.headers['Location'])
But now the site is behind Cloudflare protection, so simple HTTP requests no longer work.
Any idea how I could look into the redirect headers with Selenium in Python?
Another option might be to inject the Selenium cookies into the request, but that does not seem as robust.
More details on the redirects:
- I send a GET request to URL_A
- I receive a redirect response to URL_B (this is the one I want)
- URL_B is another redirect response to URL_C (I do not want that)
Basically I end up on URL_C, but I want to know URL_B, so I have to look into the response headers somehow with Selenium.
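One possible approach, assuming Selenium 4 with Chrome: enable the DevTools performance log and read the redirect responses out of the recorded network events. This is a sketch, not the only way; URL_A below is a placeholder:

import json
from selenium import webdriver

options = webdriver.ChromeOptions()
# Ask Chrome to record network events in the performance log
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://URL_A.example")  # placeholder for the real URL_A

for entry in driver.get_log("performance"):
    message = json.loads(entry["message"])["message"]
    # A redirect shows up as the redirectResponse attached to the *next* request
    if message["method"] == "Network.requestWillBeSent":
        redirect = message["params"].get("redirectResponse")
        if redirect:
            headers = redirect["headers"]
            location = headers.get("Location") or headers.get("location")
            # The first line printed should be URL_A -> URL_B
            print(redirect["url"], "->", location)

driver.quit()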
I'm interested in using Python to retrieve a file that exists at an HTTPS URL.
I have credentials for the site, and when I access it in my browser I'm able to download the file. How do I use those credentials in my Python script to do the same thing?
So far I have:
import urllib.request

response = urllib.request.urlopen('https:// (some url with an XML file)')
html = response.read()
with open('C:/Users/mhurley/Portable_Python/notebooks/XMLOut.xml', 'wb') as f:
    f.write(html)
This code works for non-secured pages, but (understandably) returns 401: Unauthorized for the HTTPS address. I don't understand how urllib handles credentials, and the docs aren't as helpful as I'd like.
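If the 401 comes from HTTP Basic authentication (a guess; check what the browser's password prompt or the WWW-Authenticate header says), urllib.request wants the credentials wired into an opener through an auth handler. A sketch with a placeholder URL and placeholder credentials:

import urllib.request

url = 'https://example.com/data.xml'  # placeholder for the real HTTPS URL

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# None = use these credentials for any realm at this URL
password_mgr.add_password(None, url, 'my_username', 'my_password')
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

with opener.open(url) as response:
    data = response.read()

with open('C:/Users/mhurley/Portable_Python/notebooks/XMLOut.xml', 'wb') as f:
    f.write(data)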
I need to log in to a website to access the HTML of a login-protected page for a project I'm doing.
I'm using this person's answer with the values I need:
from twill.commands import *
go('https://example.com/login')
fv("3", "email", "myemail#example.com")
fv("3", "password", "mypassword")
submit()
Presumably this should log me in, so I then run:
sock = urllib.urlopen("https://www.example.com/activities")
html_source = sock.read()
sock.close()
print html_source
Which I thought would print the HTML of the (now) accessible page, but instead it just gives me the HTML of the login page. I've tried other methods (e.g. with mechanize), but I get the same result.
What am I missing? Do some sites restrict this type of login, or does it not work with HTTPS or something? (The site is FitBit; I couldn't use the URL in the question.)
You're using one library to log in and another to retrieve the subsequent page. twill and urllib do not share session data. (Similar issue to this one.) If you do that, you need to manage the session cookie / authentication yourself: copy the cookies (and any related data) from the twill session and attach them to the post-login request made with the other library.
Otherwise, and more logically, use the same library for both the login and the post-login requests.
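For instance, with the requests library (a third option, but it makes the single-session idea concrete), one Session object carries the login cookies into every later request. A sketch; the form field names and paths are guesses taken from the question's placeholders:

import requests

session = requests.Session()

# Field names and paths are placeholders; check the real login form
session.post('https://example.com/login', data={
    'email': 'myemail@example.com',
    'password': 'mypassword',
})

# The same Session re-sends the cookies it received during login
html_source = session.get('https://www.example.com/activities').text
print(html_source)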
I am trying to automate file downloads from a webserver. I plan on using wget, curl, or Python's urllib / urllib2.
Most solutions use wget, urllib, or urllib2. They all talk of HTTP-based authentication and cookie-based authentication. My problem is I don't know which one is used by the website that stores my data.
Here is the interaction with the site:
Normally I log in to the site http://www.anysite.com/index.cgi?
I get a form with a login and password. I type in both and hit return.
The URL stays as http://www.anysite.com/index.cgi? during the entire interaction, but now I have a list of folders and files.
If I click on a folder or file, the URL changes to http://shamrockstructures.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjanysite%2Fpublic_html%2Fuser_data%2Fuserareas%2Ffile.tar.bz2
and the browser offers me a chance to save the file.
I want to know how to figure out whether the site uses HTTP-based or cookie-based authentication. After that, I am assuming I can use cookielib or urllib2 in Python to connect to it, get the list of files and folders, and recursively download everything while staying connected.
P.S.: I have tried the cookie-cutter ways to connect via wget, and wget --http-user "uname" --http-password "passwd" http://www.anysite.com/index.cgi?, but they only return the web form back to me.
If you log in using a Web page, the site is probably using cookie-based authentication. (It could technically use HTTP basic auth, by embedding your credentials in the URI, but this would be a dumb thing to do in most cases.) If you get a separate, smallish dialog with a user name and password field, it is using HTTP basic authentication.
If you try to log in using HTTP basic auth and get back the login page, as is happening to you, that is a sure sign that the site is not using HTTP basic auth.
Most sites use cookie-based authentication these days. To do this with an HTTP client such as urllib2, you will need to do an HTTP POST of the fields in the login form. (You may need to request the login form first, since some sites set a cookie that you need even to log in, but usually this is not necessary.) This should return a "successfully logged in" page that you can test for. Save the cookies you get back from this request and include them when making the next request. Each request you make may respond with more cookies; save those too and send them with each subsequent request.
The cookielib module provides a "cookie jar" which, hooked into urllib2, will automatically handle the cookies for you as you send requests and receive web pages. That's what you want.
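A minimal sketch of that flow with urllib2 and cookielib (Python 2); the login URL and form field names are placeholders you would take from the site's actual login form:

import urllib
import urllib2
import cookielib

# The cookie jar stores every cookie the server sets and re-sends it automatically
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

# Field names "login" and "password" are guesses; check the form's HTML
login_data = urllib.urlencode({'login': 'uname', 'password': 'passwd'})
response = opener.open('http://www.anysite.com/index.cgi', login_data)
print response.read()  # should contain the "successfully logged in" page

# Later requests through the same opener carry the session cookies along
listing = opener.open('http://www.anysite.com/index.cgi?page=download')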
You can use pycurl like this:
import pycurl
COOKIE_JAR = 'cookiejar' # file to store the cookies
LOGIN_URL = 'http://www.yoursite.com/login.cgi'
USER_FIELD = 'user' # Name of the element in the HTML form
USER = 'joe'
PASSWD_FIELD = 'passwd' # Name of the element in the HTML form
PASSWD = 'MySecretPassword'
def read(html):
    """Read the body of the response, with possible
    future html parsing and re-requesting"""
    print html

com = pycurl.Curl()
com.setopt(pycurl.WRITEFUNCTION, read)
com.setopt(pycurl.COOKIEJAR, COOKIE_JAR)
com.setopt(pycurl.FOLLOWLOCATION, 1)  # follow redirects
com.setopt(pycurl.POST, 1)
com.setopt(pycurl.POSTFIELDS, '%s=%s&%s=%s' % (USER_FIELD, USER,
                                               PASSWD_FIELD, PASSWD))
com.setopt(pycurl.URL, LOGIN_URL)
com.perform()
Plain pycurl may seem very "primitive" (with its limited setopt approach),
but it gets the job done and handles cookies pretty well with the cookie jar option.
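Once the login above has run, the same jar can be fed back into a second handle to fetch a protected file. A sketch, reusing the names from the snippet above; the download URL and file name are placeholders:

download = pycurl.Curl()
download.setopt(pycurl.COOKIEFILE, COOKIE_JAR)  # read the cookies saved at login
download.setopt(pycurl.FOLLOWLOCATION, 1)
download.setopt(pycurl.URL, 'http://www.yoursite.com/index.cgi?page=download&file=...')  # placeholder
out = open('file.tar.bz2', 'wb')
download.setopt(pycurl.WRITEFUNCTION, out.write)  # stream the body straight to disk
download.perform()
out.close()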
AFAIK cookie-based authentication is only used once you have logged in successfully at least once. You can try disabling cookie storage for that domain in your browser settings; if you are still able to download files, then it is probably HTTP-based authentication.
Try doing an equivalent GET request for the (possibly POST) login request that is probably happening at login right now. Use Firebug or Fiddler to see the login request that is sent.
Also check whether some JavaScript code returns different output based on your user-agent string or some other parameter.
See if httplib or mechanize helps.
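If you try mechanize, it drives the login form much like a browser would and keeps the cookies for you. A rough sketch; the form index and field names are guesses you would need to confirm against the real page:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # the site's robots.txt might otherwise block the script
br.open('http://www.anysite.com/index.cgi')

br.select_form(nr=0)         # assume the login form is the first form on the page
br['login'] = 'uname'        # field names are guesses; print br.form to see the real ones
br['password'] = 'passwd'
response = br.submit()
print response.read()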
I'm working on a simple HTML scraper for Hulu in Python 2.6 and am having problems logging in to my account. Here's my code so far:
import urllib
import urllib2
from cookielib import CookieJar

# make cookie and redirect handlers
cookies = CookieJar()
cookie_handler = urllib2.HTTPCookieProcessor(cookies)
redirect_handler = urllib2.HTTPRedirectHandler()
opener = urllib2.build_opener(redirect_handler, cookie_handler)  # make opener w/ handlers

# build the request data
login_info = {'username': USER, 'password': PASS}  # USER and PASS are defined
data = urllib.urlencode(login_info)
req = urllib2.Request("http://www.hulu.com/account/authenticate", data)  # make the request
test = opener.open(req)  # open the page
print test.read()  # print html results
The code compiles and runs, but all that prints is:
Login.onError("Please \074a href=\"/support/login_faq#cant_login\"\076enable cookies\074/a\076 and try again.");
I assume there is some error in how I'm handling cookies, but just can't seem to spot it. I've heard Mechanize is a very useful module for this type of program, but as this seems to be the only speed bump left, I was hoping to find my bug.
What you're seeing is an AJAX response. The site is probably using JavaScript to set the cookie, which is screwing up your attempts to authenticate.
The error message you are getting back could be misleading. For example, the server might be looking at the User-Agent and deciding it's not one of the supported browsers, or looking at HTTP_REFERER and expecting it to come from the Hulu domain. My point is there are too many variables coming in with the request to keep guessing them one by one.
I recommend using an HTTP analyzer tool, e.g. Charles or the one in Firebug, to figure out exactly what (header fields, cookies, parameters) the client sends to the server when you do the Hulu login in a browser. This will give you the exact request that you need to construct in your Python code.
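For example, once the analyzer shows the real browser request, its User-Agent and Referer can be copied onto the urllib2 Request from the question (the header values below are placeholders for whatever the capture shows):

headers = {
    'User-Agent': 'Mozilla/5.0 ...',    # copy the exact string from the captured request
    'Referer': 'http://www.hulu.com/',  # placeholder; use whatever the browser actually sent
}
req = urllib2.Request("http://www.hulu.com/account/authenticate", data, headers)
test = opener.open(req)
print test.read()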