How to log in over HTTPS via urllib? - python

I'm interested in using Python to retrieve a file that exists at an HTTPS url.
I have credentials for the site, and when I access it in my browser I'm able to download the file. How do I use those credentials in my Python script to do the same thing?
So far I have:
import urllib.request
response = urllib.request.urlopen('https:// (some url with an XML file)')
html = response.read()
with open('C:/Users/mhurley/Portable_Python/notebooks/XMLOut.xml', 'wb') as f:
    f.write(html)
This code works for non-secured pages, but (understandably) returns 401:Unauthorized for the https address. I don't understand how urllib handles credentials, and the docs aren't as helpful as I'd like.
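A minimal sketch of the usual urllib approach, assuming the site uses HTTP Basic authentication (if it actually uses a login form instead, you would need to POST the form and keep the session cookie):
import urllib.request

url = 'https://example.com/data.xml'  # placeholder for the real URL
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'my_username', 'my_password')  # assumed credentials
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)

with opener.open(url) as response, open('XMLOut.xml', 'wb') as out:
    out.write(response.read())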

Related

Using requests in Python to download an xls file

On this page you will find a link to download an xls file (under "Adjuntos", i.e. attachments): https://www.banrep.gov.co/es/emisiones-vigentes-el-dcv
The link to download the xls file is: https://www.banrep.gov.co/sites/default/files/paginas/emisiones/EMISIONES.xls
I was using this code to automatically download that file:
import requests
import os
path = os.path.abspath(os.getcwd())  # where the file will be downloaded
path = path.replace("\\", '/') + '/'
url = 'https://www.banrep.gov.co/sites/default/files/paginas/emisiones/EMISIONES.xls'
myfile = requests.get(url, verify=False)
open(path+'EMISIONES.xls', 'wb').write(myfile.content)
This code was working well, but suddenly the downloaded file started being corrupted.
If I run the code, it raises this warning:
InsecureRequestWarning: Unverified HTTPS request is being made to host 'www.banrep.gov.co'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
warnings.warn(
The error is related to how your request is being built. The status_code returned by the request is 403 [Forbidden]. You can see it by typing:
myfile.status_code
I guess the security issue is related to cookies and headers in your GET request. Because of that, I suggest you look at how the webpage builds the headers of its own request before the URL you're using is fetched.
TIP: open your web browser's developer tools and, using the Network tab, try to identify those headers.
To deal with the cookies, look at how to pick them up naturally by first visiting a previous page on www.banrep.gov.co, using requests.Session:
session_ = requests.Session()
Before coding you could try to test your requests using Postman, or other REST API test software.
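For example, a minimal sketch of that approach (the User-Agent and Referer values are assumptions; copy the real headers from your browser's Network tab):
import requests

referer = 'https://www.banrep.gov.co/es/emisiones-vigentes-el-dcv'  # page that links to the file
url = 'https://www.banrep.gov.co/sites/default/files/paginas/emisiones/EMISIONES.xls'

session_ = requests.Session()
session_.headers.update({'User-Agent': 'Mozilla/5.0'})  # assumed; use your browser's value

session_.get(referer)  # visit the linking page first so the session picks up any cookies
response = session_.get(url, headers={'Referer': referer})
response.raise_for_status()

with open('EMISIONES.xls', 'wb') as f:
    f.write(response.content)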

Scraping an internal web page

I have to scrape an internal web page of my organization. If I use Beautiful Soup I get
"Unauthorized access"
I don't want to put my username/password in the source code because it will be shared with colleagues.
If I open the same URL using Firefox, it doesn't ask me to log in; the problem only appears when I make the same request from a Python script.
Is there a way to share the same session used by Firefox with a Python script?
I think my authentication is tied to my PC, because even if I log off and delete all cookies, I am logged in automatically when I re-enter. Do you know why this doesn't happen with my Python script?
When you use the browser to log in to your organization, you provide your credentials and the server returns a cookie tied to your organization's domain. This cookie has an expiration and lets you navigate your organization's site without having to log in again as long as it is valid.
You can read about cookies here:
https://en.wikipedia.org/wiki/HTTP_cookie
Your website scraper does not need to store your credentials. First delete the cookies; then, using your browser's developer tools (look at the Network tab), you can:
Figure out if your organization uses a separate auth end point
If it's not evident, then you might ask the IT department
Use the auth endpoint to get a cookie using credentials passed in
See how this cookie is used by the system (look at the HTTP request/response headers)
Use this cookie to scrape the website (see the sketch after this list)
Share your code freely - if someone needs to scrape the website then they can either pass in their credentials, or use a curl command to get/set a valid cookie header
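A hedged sketch of that flow, assuming a hypothetical login endpoint and form field names (replace them with what you find in the Network tab), with credentials prompted for rather than hard-coded:
import getpass
import requests

AUTH_URL = 'https://intranet.example.org/auth/login'        # hypothetical auth endpoint
TARGET_URL = 'https://intranet.example.org/page-to-scrape'  # hypothetical page to scrape

session = requests.Session()
# Prompt for credentials so nothing sensitive lives in the shared source code
username = input('Username: ')
password = getpass.getpass('Password: ')

# POST the credentials once; the server sets the auth cookie on the session
session.post(AUTH_URL, data={'username': username, 'password': password})

# Later requests reuse the cookie automatically
page = session.get(TARGET_URL)
print(page.status_code)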
1) After authenticating in your Firefox browser, make sure to get the cookie key/value.
2) Use that data in the code below:
from bs4 import BeautifulSoup
import requests
browser_cookies = {'your_cookie_key':'your_cookie_value'}
s = requests.Session()
r = s.get(your_url, cookies=browser_cookies)
bsoup = BeautifulSoup(r.text, 'lxml')
The requests.Session() is for persistence.
One more tip: you could also call your script like this:
python3 /path/to/script/script.py cookies_key cookies_value
Then, get the two values with the sys module. The code will be:
import sys
browser_cookies = {sys.argv[1]:sys.argv[2]}

Python - Requests module HTTP and HTTPS requests

I wish to make requests with the Python requests module. I have a large database of URLs I wish to download. The URLs are stored in the database in the form page.be/something/something.html
I get a lot of ConnectionErrors. If I search for the URL in my browser, the page exists.
My Code:
if not webpage.url.startswith('http://www.'):
    new_html = requests.get(webpage.url, verify=True, timeout=10).text
An example of a page I'm trying to download is carlier.be/categorie/jobs.html. This gives me a ConnectionError, logged as below:
Connection error, Webpage not available for
"carlier.be/categorie/jobs.html" with webpage_id "229998"
What seems to be the problem here? Why can't requests make the connection, while I can find the page in the browser?
The Requests library requires that you supply a scheme for it to connect with (the 'http://' part of the URL). Make sure that every URL has http:// or https:// in front of it. You may want a try/except block where you catch requests.exceptions.MissingSchema and try again with "http://" prepended to the URL.
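A short sketch of that suggestion (the helper name and retry behaviour are my own choices, not from the question):
import requests

def fetch(url, timeout=10):
    # Retry with a scheme prepended when the stored URL lacks one,
    # e.g. 'carlier.be/categorie/jobs.html'
    try:
        return requests.get(url, verify=True, timeout=timeout).text
    except requests.exceptions.MissingSchema:
        return requests.get('http://' + url, verify=True, timeout=timeout).text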

Retrieve the content of a page requiring authentication

I have access to an admin page by using a basic HTTP authentication system.
This page loads its data using JavaScript, by retrieving JSON data from another URL that I can see in the Firefox Web Dev tools (Ctrl+Shift+I, then going to the Network tab and reloading the page).
If I copy and paste this URL in the same instance of my browser, I retrieve the JSON data I need.
So:
Using Firefox, I connect to the admin page and provide the username/passwd.
Using Firefox Webdev toolbox, I retrieve the URL used to retrieve the JSON data I want.
I copy and paste this URL and get the JSON data I need, ready to be parsed.
Now, I would like to do the same automatically using Python 3.
I use Requests to make it easier. However, if I try to directly retrieve the URL found in step 3, I get a 401 authentication error:
import requests
url = "http://xxx/services/users?from=0&to=50"
r = requests.get(url, auth=('user', 'passwd'))
r.status_code
>>> 401
I can do an authenticated request on the admin URL (something like http://xxx/admin-ui/) and I can retrieve the content of the web page, but it doesn't contain anything interesting since everything is loaded in JavaScript from that JSON data coming from the URL in step 3...
Any help would be more than welcome!
I needed to use form-based authentication, not HTTP Basic Auth as I originally thought.
So first I needed to log in at the first URL in order to retrieve an auth cookie:
url = "http://xxx/admin-ui/"
credentials = {'j_username':'my_username','j_password':'my_passwd'}
s = requests.Session()
s.post(url, data=credentials)
s.cookies
>>> <<class 'requests.cookies.RequestsCookieJar'>[Cookie(version=0, name='JSESSIONID', value='...>
Then I could connect to the second URL using this cookie and retrieve the data I needed:
url2 = "http://xxx/services/users?from=0&to=50"
r = requests.get(url2, cookies=s.cookies)
r.content
>>> (a lot of JSON data! \o/)
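Since the session already stores that cookie, the second request could also simply reuse the session itself:
r = s.get(url2)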

how to find out whether website is using cookies or http based authentication

I am trying to automate files download via a webserver. I plan on using wget or curl or python urllib / urllib2.
Most solutions use wget, urllib and urllib2. They all talk of HTTP-based authentication and cookie-based authentication. My problem is that I don't know which one is used by the website that stores my data.
Here is the interaction with the site:
Normally I log in to the site http://www.anysite.com/index.cgi?
I get a form with a login and password. I type in both and hit return.
The URL stays as http://www.anysite.com/index.cgi? during the entire interaction, but now I have a list of folders and files.
If I click on a folder or file the URL changes to http://shamrockstructures.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjanysite%2Fpublic_html%2Fuser_data%2Fuserareas%2Ffile.tar.bz2
And the browser offers me a chance to save the file
I want to know how to figure out whether the site is using HTTP-based or cookie-based authentication. After that, I am assuming I can use cookielib or urllib2 in Python to connect to it, get the list of files and folders, and recursively download everything while staying connected.
P.S.: I have tried the cookie-cutter ways to connect via wget, e.g. wget --http-user "uname" --http-password "passwd" http://www.anysite.com/index.cgi?, but they only return the web form back to me.
If you log in using a Web page, the site is probably using cookie-based authentication. (It could technically use HTTP basic auth, by embedding your credentials in the URI, but this would be a dumb thing to do in most cases.) If you get a separate, smallish dialog with a user name and password field, it is using HTTP basic authentication.
If you try to log in using HTTP basic auth, and get back the login page, as is happening to you, this is a certain indication that the site is not using HTTP basic auth.
Most sites use cookie-based authentication these days. To do this with an HTTP client such as urllib2, you will need to do an HTTP POST of the fields in the login form. (You may need to actually request the login form first, as a site could require a cookie just to log in, but usually this is not necessary.) This should return a "successfully logged in" page that you can test for. Save the cookies you get back from this request. When making the next request, include these cookies. Each request you make may respond with cookies, and you need to save those and send them again with the next request.
urllib2 can use a "cookie jar" (a CookieJar from cookielib, wired in through HTTPCookieProcessor) which will automatically handle the cookies for you as you send requests and receive Web pages. That's what you want.
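A minimal sketch of that approach (Python 2, matching the urllib2 the question mentions; the form field names are assumptions, so check the real login form):
import urllib
import urllib2
import cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# POST the login form; 'login' and 'password' are assumed field names
login_data = urllib.urlencode({'login': 'uname', 'password': 'passwd'})
opener.open('http://www.anysite.com/index.cgi?', login_data)

# The jar now holds the session cookies; later requests reuse them automatically
listing_html = opener.open('http://www.anysite.com/index.cgi?').read()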
You can use pycurl like this:
import pycurl
COOKIE_JAR = 'cookiejar' # file to store the cookies
LOGIN_URL = 'http://www.yoursite.com/login.cgi'
USER_FIELD = 'user' # Name of the element in the HTML form
USER = 'joe'
PASSWD_FIELD = 'passwd' # Name of the element in the HTML form
PASSWD = 'MySecretPassword'
def read(html):
    """Read the body of the response, with possible
    future html parsing and re-requesting"""
    print html

com = pycurl.Curl()
com.setopt(pycurl.WRITEFUNCTION, read)
com.setopt(pycurl.COOKIEJAR, COOKIE_JAR)
com.setopt(pycurl.FOLLOWLOCATION, 1)  # follow redirects
com.setopt(pycurl.POST, 1)
com.setopt(pycurl.POSTFIELDS, '%s=%s&%s=%s' % (USER_FIELD, USER,
                                               PASSWD_FIELD, PASSWD))
com.setopt(pycurl.URL, LOGIN_URL)
com.perform()
Plain pycurl may seem very "primitive" (with the limited setopt approach),
but it gets the job done, and handles the cookies pretty well with the cookie-jar option.
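As a follow-up sketch (the download URL and output filename are assumptions), the saved cookie jar can then be reused to fetch a file:
# Reuse the cookies saved during login to download a file
download = pycurl.Curl()
download.setopt(pycurl.COOKIEFILE, COOKIE_JAR)  # read cookies written above
download.setopt(pycurl.FOLLOWLOCATION, 1)
download.setopt(pycurl.URL, 'http://www.yoursite.com/index.cgi?page=download&file=data.tar.bz2')
out = open('data.tar.bz2', 'wb')
download.setopt(pycurl.WRITEFUNCTION, out.write)
download.perform()
out.close()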
AFAIK cookie-based authentication is only used once you have logged in successfully at least ONCE. You can try disabling cookie storage for that domain in your browser settings; if you are still able to download files, it should be HTTP-based authentication.
Try doing an equivalent GET request for the (possibly POST) login request that is probably happening right now when you log in. Use Firebug or Fiddler to see the login request that is sent.
Also note whether some JavaScript code returns different output based on your user-agent string or some other parameter.
See if httplib or mechanize helps.
