I am having trouble creating and keeping sessions when scraping a page. I am initiating a session within my script using the Requests library and then passing values to a web form. However, it is returning a "Your session has timed out" page.
Here is my source:
import requests
session = requests.Session()
params = {'Rctl00$ContentPlaceHolder1$txtName': 'Andrew'}
r = session.post("https://www.searchiqs.com/NYALB/SearchResultsMP.aspx", data=params)
print(r.text)
The url I want to search from is this https://www.searchiqs.com/NYALB/SearchAdvancedMP.aspx
I am searching for a Party 1 name called "Andrew". I have identified the form element holding this search box as 'Rctl00$ContentPlaceHolder1$txtName'. The action url is SearchResultsMP.aspx.
When I do it from a browser, it gives the first page of results. When I do it in the terminal it gives me the session expired page. Any ideas?
First, I would refer you to the advanced documentation related to use of sessions within the requests Python module.
http://docs.python-requests.org/en/master/user/advanced/
I also notice that navigating to the base URL in your invocation of session.post redirects to:
https://www.searchiqs.com/NYALB/InvalidLogin.aspx?InvLogInCode=OldSession%2007/24/2016%2004:19:37%20AM
I "hacked" the URL to navigate to:
https://www.searchiqs.com/NYALB/
...and notice that if I click on the Show Login Fields link on that page, a form appears with prompts for User ID and Password. Your attempts to programmatically do your searches are likely failing because you have not done any sort of authentication. It likely works in your browser because you have been permitted access, either by some previous authentication you have completed and may have forgotten about, or by some server-side access rule that does not ask for it based upon some criteria.
Running those commands in a local interpreter, I can see that the site owner did not bother to return a status code indicative of failed auth. If you check, r.status_code is 200, but r.text will be the Invalid Login page. I know nada about ASP, but I would expect the HTTP status code to be indicative of what actually happened.
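For instance, a quick check (just a sketch, keying off the timeout text you quoted rather than the status code):
import requests

session = requests.Session()
params = {'Rctl00$ContentPlaceHolder1$txtName': 'Andrew'}
r = session.post("https://www.searchiqs.com/NYALB/SearchResultsMP.aspx", data=params)
print(r.status_code)  # 200 even though the search did not really run
print('session has timed out' in r.text.lower())  # True when you were bounced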
Here is some code that does not really work, but may illustrate how you may want to interact with the site and sessions.
import requests
# Create dicts with our login and search data
login_params = {'btnGuestLogin': 'Log+In+as+GUEST'}
search_params = {'ctl00$ContentPlaceHolder1$txtName': 'Andrew'}
full_params = {'btnGuestLogin': 'Log+In+as+GUEST', 'ctl00$ContentPlaceHolder1$txtName': 'Andrew'}
# Create session and add login params
albany_session = requests.Session()
albany_session.params = login_params
# Login and confirm login via searching for the 'ASP.NET_SessionId' cookie.
# Use the login page, not the search page first.
albany_session.post('https://www.searchiqs.com/NYALB/LogIn.aspx')
print(albany_session.cookies)
# Prepare a your search request
search_req = requests.Request('POST', 'https://www.searchiqs.com/NYALB/SearchAdvancedMP.aspx', data=search_params)
prepped_search_req = albany_session.prepare_request(search_req)
# Probably should work but does not seem to, for "reasons" unknown to me.
search_response = albany_session.send(prepped_search_req)
print(search_response.text)
An alternative you may want to consider is Selenium browser automation with Python bindings.
http://selenium-python.readthedocs.io/
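If you go that route, a rough sketch with the older find_element_by_name API might look like this (the element names are taken from the params above and from your question, so treat them as assumptions):
from selenium import webdriver

driver = webdriver.Firefox()
# Log in as guest first, using the button name from the form data above (assumed).
driver.get('https://www.searchiqs.com/NYALB/LogIn.aspx')
driver.find_element_by_name('btnGuestLogin').click()
# Then fill the Party 1 name box on the advanced search page and submit its form.
driver.get('https://www.searchiqs.com/NYALB/SearchAdvancedMP.aspx')
name_box = driver.find_element_by_name('ctl00$ContentPlaceHolder1$txtName')
name_box.send_keys('Andrew')
name_box.submit()
print(driver.page_source)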
Related
I have to scrape an internal web page of my organization. If I use Beautiful Soup I get
"Unauthorized access"
I don't want to put my username/password in the source code because it will be shared across colleagues.
If I open the same web URL using Firefox it doesn't ask me to log in; the only problem is when I make the same request using a Python script.
Is there a way to share the same session used by Firefox with a Python script?
I think my authentication is tied to my PC, because if I log off and delete all cookies, when I re-enter I become logged in automatically. Do you know why this doesn't happen with my Python script?
When you use the browser to log in to your organization, you provide your credentials and the server returns a cookie tied to your organization's domain. This cookie has an expiration and allows you to navigate your organization's site without having to log in again, as long as the cookie is valid.
You can read about cookies here:
https://en.wikipedia.org/wiki/HTTP_cookie
Your website scraper does not need to store your credentials. First delete the cookies; then, using your browser's developer tools (look at the Network tab), you can do the following (see the sketch after this list):
Figure out if your organization uses a separate auth end point
If it's not evident, then you might ask the IT department
Use the auth endpoint to get a cookie using credentials passed in
See how this cookie is used by the system (look at the HTTP request/response headers)
Use this cookie to scrape the website
Share your code freely - if someone needs to scrape the website then they can either pass in their credentials, or use a curl command to get/set a valid cookie header
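A minimal sketch of steps 3 to 5, assuming a hypothetical /auth endpoint and field names (replace them with whatever the Network tab shows for your organization):
import getpass

import requests
from bs4 import BeautifulSoup

AUTH_URL = 'https://intranet.example.org/auth'      # hypothetical auth endpoint
PAGE_URL = 'https://intranet.example.org/reports'   # hypothetical page to scrape

# Prompt at runtime so credentials never live in the shared source code.
user = input('User: ')
password = getpass.getpass('Password: ')

session = requests.Session()
# Step 3: hit the auth endpoint; the session stores whatever cookie it sets.
session.post(AUTH_URL, data={'username': user, 'password': password})
print(session.cookies)   # step 4: see which cookie came back and how it is named

# Step 5: the same session resends that cookie with every later request.
page = session.get(PAGE_URL)
soup = BeautifulSoup(page.text, 'lxml')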
1) After authenticating in your Firefox browser, make sure to get the cookie key/value.
2) Use that data in the code below:
from bs4 import BeautifulSoup
import requests
browser_cookies = {'your_cookie_key':'your_cookie_value'}
s = requests.Session()
r = s.get(your_url, cookies=browser_cookies)
bsoup = BeautifulSoup(r.text, 'lxml')
The requests.Session() is for persistence.
One more tip: you could also call your script like this:
python3 /path/to/script/script.py cookies_key cookies_value
Then get the two values with the sys module. The code will be:
import sys
browser_cookies = {sys.argv[1]:sys.argv[2]}
I'm trying to retrieve the page content from https://www.awesomebox.io/scan
But before I can do that I need to be logged in. At the moment I still get the login page content; that's because it redirects when I'm not logged in.
Anybody know how to get the scan page content with python-requests?
I tried multiple requests authentication methods.
My code so far:
import requests
session = requests.session()
loginURL = 'http://www.awesomebox.io/login'
payload = {'username': '******','password': '******'}
session.post(loginURL, data=payload)
scanURL = "http://awesomebox.io/scan"
scanpage = session.get(scanURL)
print scanpage.content
I don't have an account with awesomebox, so I don't know exactly. But nowadays a login on websites is more sophisticated and secure than a simple POST of username and password.
To find out, you can do a manual login and trace the web traffic in the developer tools of the browser (e.g. F12 in MSIE or Edge) and store it in a .har file. There you can (hopefully) see how the login procedure is implemented and build the same sequence in your requests session.
Sometimes there is a hidden field in the form (e.g. "lt" for a login ticket) that has been populated via JS by the previous page. Sometimes it's even more complex, if a secret login is run via Ajax in the background. In that case you see nothing in the F12 view and have to dig into the JS scripts.
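If it is just a hidden field, a rough sketch of copying every hidden input into the payload (using the URLs and field names from the question; the scraping part is generic) would be:
import requests
from bs4 import BeautifulSoup

session = requests.Session()
loginURL = 'https://www.awesomebox.io/login'

# Fetch the login form first and copy each hidden input (e.g. a CSRF token)
# into the payload, so it gets posted back together with the credentials.
login_page = session.get(loginURL)
soup = BeautifulSoup(login_page.text, 'lxml')
payload = {'username': '******', 'password': '******'}
for hidden in soup.find_all('input', type='hidden'):
    if hidden.get('name'):
        payload[hidden.get('name')] = hidden.get('value', '')

session.post(loginURL, data=payload)
scanpage = session.get('https://www.awesomebox.io/scan')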
Thank you, I noticed I forgot a hidden parameter.
I added the csrfmiddlewaretoken.
I have a Python script using the mechanize browser which logs into a self-hosted WordPress blog and, after the automatic redirect to the dashboard, navigates to a different page to automate several built-in functions.
This script actually works 100% on most of my blogs but goes into a permanent loop with one of them.
The difference is that the only one which fails has a plugin called Wassup running. This plugin sets a session cookie for all visitors and this is what I think is causing the issue.
When the script goes to the new page the Wordpress code doesn't get the proper cookie set, decides that the browser isn't logged in and redirects to the login page. The script logs in again and attempts the same function and round we go again.
I tried using Twill, which does log in correctly and handles the cookies correctly, but Twill by default outputs everything to the command line. This is not the behaviour I want, as I am doing page manipulation at this point and I need access to the raw HTML.
This is the setup code
# Browser
self.br = mechanize.Browser()
# Cookie Jar
policy = mechanize.DefaultCookiePolicy(rfc2965=True)
cj = mechanize.LWPCookieJar(policy=policy)
self.br.set_cookiejar(cj)
After successful login I call this function
def open(self):
    if 'http://' in str(self.burl):
        site = str(self.burl) + '/wp-admin/plugin-install.php'
        self.burl = self.burl[7:]
    else:
        site = "http://" + str(self.burl) + '/wp-admin/plugin-install.php'
    try:
        r = self.br.open(site, timeout=1000)
        html = r.read()
        return html
    except HTTPError, e:
        return str(e.code)
I'm thinking that I will need to save the cookies to a file and then shuffle the order so the Wordpress session cookie gets returned before the Wassup one.
Any other suggestions?
This turned out to be quite a different problem, and fix, than it seemed, which is why I have decided to put the answer here for anyone who reads this later.
When a WordPress site is set up there is an option for the URL to default to either http://sample.com or http://www.sample.com. This turned out to be a problem for the cookie storage: cookies are stored with the URL as part of their name. My program semi-hardcodes the URL in one or the other of these formats. This meant that every time I made a new URL request it had the wrong format, no cookie with the right name could be found, and the WordPress site rightfully decided I wasn't logged in and sent me back to log in again.
The fix is to grab the URL delivered in the redirect after login and recode the variable (in this case self.burl) to reflect what the .htaccess file expects to see.
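Roughly, that can be a small helper like the one below (a sketch only; the helper name is mine, it takes the mechanize browser used above and returns the base URL to store back into self.burl):
import urlparse  # Python 2, to match the mechanize code above

def rebase_url(br):
    """Return scheme://host of the page mechanize was redirected to after login,
    so later requests match the host the WordPress session cookie was set for."""
    redirect_url = br.geturl()  # e.g. 'http://www.sample.com/wp-admin/'
    parsed = urlparse.urlparse(redirect_url)
    return '%s://%s' % (parsed.scheme, parsed.netloc)

After the login submit you would call self.burl = rebase_url(self.br) before building any more /wp-admin/ URLs.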
This fixed my problem because some of my sites had one format and some the other.
I hope this helps someone out who is using requests, Twill, mechanize, etc.
This script succeeds at getting a 200 response object, getting a cookie, and returning reddit's stock homepage source. However, it is supposed to get the source of the "recent activity" subpage which can only be accessed after logging in. This makes me think it's failing to log in appropriately but the username and password are accurate, I've double checked that.
#!/usr/bin/python
import requests
import urllib2

auth = ('username', 'password')
with requests.session(auth=auth) as s:
    c = s.get('http://www.reddit.com')
    cookies = c.cookies
    for k, v in cookies.items():
        opener = urllib2.build_opener()
        opener.addheaders.append(('cookie', '{}={}'.format(k, v)))

f = opener.open('http://www.reddit.com/account-activity')
print f.read()
It looks like you're using standard "HTTP Basic" authentication, which is not what Reddit uses to log in to its website. (Almost no websites use HTTP Basic, which pops up a modal dialog box requesting authentication; most implement their own username/password form.)
What you'll need to do is get the home page, read the login form fields, fill in the user name and password, POST the response back to the web site, get the resulting cookie, then use the cookie in future requests. There may be quite a number of other details for you to work out too, but you'll have to experiment.
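As a rough sketch with requests alone (the login URL and field names here are assumptions; read them out of Reddit's actual login form first):
import requests

LOGIN_URL = 'https://www.reddit.com/post/login'   # assumed; check the form's action attribute

session = requests.Session()
# Post the fields the login form actually uses (names here are guesses).
session.post(LOGIN_URL, data={'user': 'username', 'passwd': 'password'})

# The session resends the login cookie on later requests.
activity = session.get('http://www.reddit.com/account-activity')
print(activity.text)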
I just think maybe we're having the same problem. I get status code 200 OK, but the script never logged me in. I'm getting some suggestions and help; hopefully you'll let me know what works for you too. It seems Reddit is using the same system.
Check out this page where my problem is being discussed.
Authentication issue using requests on aspx site
I am trying to automate files download via a webserver. I plan on using wget or curl or python urllib / urllib2.
Most solutions use wget, urllib, and urllib2. They all talk of HTTP based authentication and cookie based authentication. My problem is I don't know which one is used on the website that stores my data.
Here is the interaction with the site:
Normally I log in to the site http://www.anysite.com/index.cgi?
I get a form with a login and password. I type in both and hit return.
The URL stays as http://www.anysite.com/index.cgi? during the entire interaction, but now I have a list of folders and files.
If I click on a folder or file the URL changes to http://shamrockstructures.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjanysite%2Fpublic_html%2Fuser_data%2Fuserareas%2Ffile.tar.bz2
And the browser offers me a chance to save the file
I want to know how to figure out whether the site is using HTTP or cookie based authentication. After that I am assuming I can use cookielib or urllib2 in Python to connect to it, get the list of files and folders, and recursively download everything while staying connected.
P.S.: I have tried the cookie-cutter ways to connect via wget, e.g. wget --http-user "uname" --http-password "passwd" http://www.anysite.com/index.cgi?, but they only return the web form back to me.
If you log in using a Web page, the site is probably using cookie-based authentication. (It could technically use HTTP basic auth, by embedding your credentials in the URI, but this would be a dumb thing to do in most cases.) If you get a separate, smallish browser dialog with a user name and password field, it is using HTTP basic authentication.
If you try to log in using HTTP basic auth, and get back the login page, as is happening to you, this is a certain indication that the site is not using HTTP basic auth.
Most sites use cookie-based authentication these days. To do this with an HTTP client such as urllib2, you will need to do an HTTP POST of the fields in the login form. (You may need to actually request the login form first, as a site could include a cookie that you need even to log in, but usually this is not necessary.) This should return a "successfully logged in" page that you can test for. Save the cookies you get back from this request, and when making the next request, include these cookies. Each request you make may respond with cookies, and you need to save those and send them again with the next request.
urllib2, together with cookielib, has a "cookie jar" which will automatically handle the cookies for you as you send requests and receive Web pages. That's what you want.
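A minimal sketch of that (Python 2; the field names and the listing URL are placeholders, read them out of the real login form):
import cookielib
import urllib
import urllib2

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

# POST the login form; any Set-Cookie headers land in the cookie jar.
login_data = urllib.urlencode({'login': 'uname', 'password': 'passwd'})
opener.open('http://www.anysite.com/index.cgi?', login_data)

# Later requests through the same opener send those cookies back automatically.
listing = opener.open('http://www.anysite.com/index.cgi?page=download')
print listing.read()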
You can use pycurl like this:
import pycurl

COOKIE_JAR = 'cookiejar'  # file to store the cookies
LOGIN_URL = 'http://www.yoursite.com/login.cgi'
USER_FIELD = 'user'       # Name of the element in the HTML form
USER = 'joe'
PASSWD_FIELD = 'passwd'   # Name of the element in the HTML form
PASSWD = 'MySecretPassword'

def read(html):
    """Read the body of the response, with possible
    future html parsing and re-requesting"""
    print html

com = pycurl.Curl()
com.setopt(pycurl.WRITEFUNCTION, read)
com.setopt(pycurl.COOKIEJAR, COOKIE_JAR)
com.setopt(pycurl.FOLLOWLOCATION, 1)  # follow redirects
com.setopt(pycurl.POST, 1)
com.setopt(pycurl.POSTFIELDS, '%s=%s&%s=%s' % (USER_FIELD, USER,
                                               PASSWD_FIELD, PASSWD))
com.setopt(pycurl.URL, LOGIN_URL)
com.perform()
Plain pycurl may seem very "primitive" (with the limited setopt approach), but it gets the job done and handles cookies pretty well with the cookie jar option.
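For the download step you can reuse that jar with a second handle, roughly like this (the download URL and output filename are placeholders; COOKIEFILE reads back what COOKIEJAR wrote at login):
import pycurl

COOKIE_JAR = 'cookiejar'   # same file the login handle wrote its cookies to
DOWNLOAD_URL = 'http://www.yoursite.com/index.cgi?page=download&file=FILE_PATH'

with open('file.tar.bz2', 'wb') as out:
    dl = pycurl.Curl()
    dl.setopt(pycurl.COOKIEFILE, COOKIE_JAR)    # send the cookies captured at login
    dl.setopt(pycurl.FOLLOWLOCATION, 1)
    dl.setopt(pycurl.WRITEFUNCTION, out.write)  # stream the body into the file
    dl.setopt(pycurl.URL, DOWNLOAD_URL)
    dl.perform()
    dl.close()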
AFAIK cookie based authentication is only used once you have logged in successfully at least ONCE. You can try disabling the storing of cookies from that domain by changing your browser settings; if you are still able to download files then it should be HTTP based authentication.
Try doing an equivalent GET request for the (possibly POST) login request that is probably happening right now for login. Use Firebug or Fiddler to see the login request that is sent.
Also note if there is some JavaScript code which returns you a different output, based on your user-agent string or some other parameter.
See if httplib or mechanize helps.
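If you try mechanize, a rough sketch of a form login looks like this (the form index and field names are guesses; inspect the real login form first):
import mechanize

br = mechanize.Browser()
br.open('http://www.anysite.com/index.cgi?')
br.select_form(nr=0)        # assume the login form is the first form on the page
br['login'] = 'uname'       # field names are guesses -- check the real form
br['passwd'] = 'passwd'
br.submit()                 # cookies are kept by the Browser object

# Still logged in for follow-up requests through the same Browser.
listing = br.open('http://www.anysite.com/index.cgi?page=download')
print listing.read()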