I am trying to sign into facebook with python requests.
When I run the following code:
import requests

def get_facebook_cookie():
    sign_in_url = "https://www.facebook.com/login.php?login_attempt=1"
    # need to post: email, pass
    payload = {"email": "xxxx#xxxx.com", "pass": "xxxxx"}
    headers = {"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
               "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"}
    s = requests.Session()
    r1 = s.get(sign_in_url, headers=headers, timeout=2)
    r = s.post(sign_in_url, data=payload, headers=headers, timeout=2, cookies=r1.cookies)
    print r.url
    text = r.text.encode("ascii", "ignore")
    fbhtml = open("fb.html", "w")
    fbhtml.write(text)
    fbhtml.close()
    return r.headers

print get_facebook_cookie()
(Note: the URL is supposed to redirect to facebook.com, but it still does not do that.)
Facebook returns an error page saying that cookies are not enabled (the email is actually populated in the email box, so I know it is passing it in at least).
According to the requests Session documentation, it handles all of the cookies, so I do not even think I need to be passing them in. However, I have seen others do it elsewhere in order to seed the second request with the cookies from the first, so I gave it a shot.
The question is: why is Facebook telling me I do not have cookies enabled? Is there some extra request header I need to pass in? Would urllib2 be a better choice for something like this?
It looks like, according to this answer (Login to Facebook using python requests),
there is more data that needs to be sent in order to successfully sign into Facebook.
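A rough sketch of that approach is below: scrape the hidden form fields from the login page and send them along with the credentials. This is only an illustration, not a confirmed working login; the hidden field names are whatever Facebook happens to render, and Facebook may still block automated logins.
import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.headers['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                                 '(KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36')

# fetch the login page first so the session picks up Facebook's initial cookies
login_page = session.get('https://www.facebook.com/login.php')
soup = BeautifulSoup(login_page.text, 'html.parser')

# start with the credentials, then copy every hidden <input> from the login form
payload = {'email': 'xxxx@xxxx.com', 'pass': 'xxxxx'}
for hidden in soup.find_all('input', type='hidden'):
    if hidden.get('name'):
        payload[hidden['name']] = hidden.get('value', '')

response = session.post('https://www.facebook.com/login.php?login_attempt=1', data=payload)
print(response.url)  # should redirect away from login.php if the login worked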
Related
I'm trying to log in to a website using Python requests; however, the webpage has a mandatory data-protection consent pop-up on the first page. I think this is why I cannot log in yet: posting your login credentials to the login URL requires these consent cookies (which are probably dynamic).
After inspecting the login POST request (via the browser's developer tools), it appears to require the cookies set by a CMP, specifically a variable called euconsent-v2 (https://help.consentmanager.net/books/cmp/page/cookies-set-by-the-cmp). So my question is how to get these cookies (and/or other necessary cookies) from the website after accepting the consent pop-up, so I can log in.
Here is my code so far:
import requests
# Website
base_url = 'https://www.wg-gesucht.de'
# Login URL
login_url = 'https://www.wg-gesucht.de/ajax/sessions.php?action=login'
# Post headers (just a sample of all variables)
headers = {...,
'Cookie': 'euconsent-v2=********'}
# Post params
payload = {'display_language': "de",
'login_email_username': "******",
'login_form_auto_login': "1",
'login_password': "******"}
# Setup session and login
sess = requests.Session()
resp_login = sess.post(login_url, data=payload, headers=headers)
UPDATE: I have searched through all of the recorded requests, from starting up the website to logging in, and the only mention of euconsent-v2 is in the response to this:
cookie_url = 'https://cdn.consentmanager.mgr.consensu.org/delivery/cmp_en.min.js'
referer = 'https://www.wg-gesucht.de'
headers = {'Referer': referer,
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'}
sess = requests.Session()
resp_init = sess.get(cookie_url, headers=headers)
But I still cannot get the required cookies.
The best way would be to create a session, then request all of the sites that set the cookies you need. Then, with all of those cookies stored in the session, you request the login page.
https://help.consentmanager.net/books/cmp/page/cookies-set-by-the-cmp
On the right-hand side of that page is the location where each cookie is set.
As an example, a random site/URL might set two cookies in its response headers; the session will save all of them, and once you have all the mandatory ones, you make the POST request to the login page.
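A rough sketch of that flow, reusing the URLs from the question, might look like the following. Whether these particular requests actually set euconsent-v2 still needs to be confirmed in the network tab; this is only an outline of the session-first approach.
import requests

sess = requests.Session()
ua = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
      '(KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36')

# 1. Visit the pages that are expected to set the consent cookies, so the
#    session collects them (URLs taken from the question).
sess.get('https://www.wg-gesucht.de', headers={'User-Agent': ua})
sess.get('https://cdn.consentmanager.mgr.consensu.org/delivery/cmp_en.min.js',
         headers={'User-Agent': ua, 'Referer': 'https://www.wg-gesucht.de'})

# 2. Inspect what the session has collected so far.
print(sess.cookies.get_dict())

# 3. POST the login form; the session sends its stored cookies automatically,
#    so no manual 'Cookie' header is needed.
payload = {'display_language': 'de',
           'login_email_username': '******',
           'login_form_auto_login': '1',
           'login_password': '******'}
resp_login = sess.post('https://www.wg-gesucht.de/ajax/sessions.php?action=login',
                       data=payload)
print(resp_login.status_code)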
I created an API on my site and I'm trying to call it from Python, but I always get a 406 response. However, if I put the URL with the parameters in the browser, I can see the correct answer.
I have already done some tests on pages that let you test your own API; I tested it in the browser and it works fine.
I already followed a guide that explains how to call an API from Python, but I do not get the correct response :(
This is the URL of the API with the params:
https://icassy.com/api/login.php?usuario_email=warles34%40gmail.com&usuario_clave=123
This is the code I use to call the API from Python
import requests
urlLogin = "https://icassy.com/api/login.php"
params = {'usuario_email': 'warles34@gmail.com', 'usuario_clave': '123'}
r = requests.get(url=urlLogin, data=params)
print(r)
print(r.content)
and I get:
<Response [406]>
b'<head><title>Not Acceptable!</title></head><body><h1>Not Acceptable!</h1><p>An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.</p></body></html>'
I should receive the success message and the apikey in JSON format, like this:
{"message":"Successful login.","apikey":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOlwvXC9leGFtcGxlLm9yZyIsImF1ZCI6Imh0dHA6XC9cL2ljYXNzeS5jb20iLCJpYXQiOjEzNTY5OTk1MjQsIm5iZiI6MTM1NzAwMDAwMCwiZGF0YSI6eyJ1c3VhcmlvX2lkIjoiMzQiLCJ1c3VhcmlvX25vbWJyZSI6IkNhcmxvcyIsInVzdWFyaW9fYXBlbGxpZG8iOiJQZXJleiIsInVzdWFyaW9fZW1haWwiOiJ3YXJsZXMzNEBnbWFpbC5jb20ifX0.bOhrC-vXhQEHtbbZGmhLByCxvJY7YxDrLhVOfy9zeFc"}
Looks like there is a validation on the server that checks whether the request is made from a browser. Adding a User-Agent header should do it:
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
# note params= (not data=), so the values are actually sent as a query string on the GET
r = requests.get(url=urlLogin, params=params, headers=headers)
A list of common user agents might come in handy in the future.
It turned out that the service I was making requests to was hosted on Akamai, which has a bot manager. It looks at where requests come from, and if it decides the client is a bot, you get a 406 error.
The solution was to ask for the server IP to be whitelisted, or to send a special header with all server communication.
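If the header route is chosen, a minimal sketch would be to set it once on the session so every request carries it. The header name, value, and URL here are purely hypothetical placeholders; the real ones have to come from whoever manages the Akamai configuration.
import requests

session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0 Safari/537.36',
    'X-Example-Bypass-Token': 'value-agreed-with-the-operator',  # hypothetical header, for illustration only
})
response = session.get('https://example.com/api/endpoint')  # hypothetical URL
print(response.status_code)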
In my case, I had
'Accept': 'text/plain'
and it worked after I replaced it with
'Accept': 'application/json'
I didn't need to set a User-Agent at all.
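Put together with the original request, a sketch of that would look roughly like this (only the Accept header differs from the question's code; add a User-Agent as well if the server still rejects the request):
import requests

urlLogin = "https://icassy.com/api/login.php"
params = {'usuario_email': 'warles34@gmail.com', 'usuario_clave': '123'}
headers = {'Accept': 'application/json'}  # instead of 'text/plain'

# params= sends the values as a query string, matching the URL that works in the browser
r = requests.get(urlLogin, params=params, headers=headers)
print(r.status_code)
print(r.json())  # expected to contain "message" and "apikey"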
I am trying to complete a web scrape of a page that requires a login first. I am fairly certain that my code and input names ('login' and 'password') are correct, yet it still gives me a 'Login Failed' page. Here is my code:
import requests

payload = {'login': 'MY_USERNAME', 'password': 'MY_PASSWORD'}
login_url = "https://www.spatialgroup.com.au/property_daily/"

with requests.Session() as session:
    session.post(login_url, data=payload)
    response = session.get("https://www.spatialgroup.com.au/cgi-bin/login.cgi")
    html = response.text
    print(html)
I've done some snooping around and have figured out that the session doesn't stay logged in when I run my session.get("LOGGEDIN_PAGE"). For example, if I complete the log in process and then enter a URL into the address bar that I know for a fact is a page only accessible once logged in, it returns me to the 'Login Failed' page. How would I get around this if my login session is not maintained?
As others have mentioned, it's hard to help here without knowing the actual site you are attempting to log in to.
I'd point out that you aren't setting any HTTP headers at all, which is a common validation check for logins on web pages. If you're sure that you are POSTing the data in the right format (form-encoded versus JSON-encoded), then I would open up the Chrome inspector and copy the user agent from your browser:
s = requests.Session()
s.headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
'Accept': '*/*'
}
Also, it's good practice to check the response status code of each web request you make using a try/except pattern. This will help you catch errors as you write and test requests, instead of blindly guessing which requests are erroneous.
r = requests.get('http://mypage.com')
try:
    r.raise_for_status()
except requests.exceptions.HTTPError:
    print('oops bad status code {} on request!'.format(r.status_code))
Edit: Now that you've given us the site, inspecting a login attempt reveals that the form data isn't actually being POSTed to that website, but rather it's being sent to a CGI script url.
To find this, open up Chrome Inspector and watch the "Network" tab as you try to login. You'll see that the login is actually being sent to https://www.spatialgroup.com.au/cgi-bin/login.cgi, not the actual login page. When you submit to this login page, it executes a 302 redirect after logging in. We can check the location after performing the request to see if the login was successful.
Knowing this I would send a request like this:
s = requests.Session()

# try to login
r = s.post(
    url='https://www.spatialgroup.com.au/cgi-bin/login.cgi',
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
    },
    data={
        'login': USERNAME,
        'password': PASSWORD
    }
)

# now let's check to make sure we didn't get 4XX or 5XX errors
try:
    r.raise_for_status()
except requests.exceptions.HTTPError:
    print('oops bad status code {} on request!'.format(r.status_code))
else:
    print('our login redirected to: {}'.format(r.url))
    # if the login was successful, you can now make requests to the login-protected pages with this session
It's very difficult to help you without knowing the actual website you are working with. That being said, I would recommend changing this line:
session.post(login_url, data=payload)
to this one:
session.post(login_url, json=payload)
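For illustration, here is a quick sketch of the difference between the two; which one is right depends on what the site's login endpoint actually expects.
import requests

login_url = 'https://www.spatialgroup.com.au/property_daily/'
payload = {'login': 'MY_USERNAME', 'password': 'MY_PASSWORD'}

with requests.Session() as session:
    # data= sends the payload form-encoded (Content-Type: application/x-www-form-urlencoded)
    session.post(login_url, data=payload)

    # json= sends the same payload as a JSON body (Content-Type: application/json)
    session.post(login_url, json=payload)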
hope this helps
I am trying to log in to a website using Python and the requests module.
My problem is that I still see the login page even after I have submitted my username/password and try to access pages behind the login. In other words, I am not getting past the login page, even though the login seems to succeed.
I am learning that the process can be different for each website, so it's not obvious what I need to add to fix the problem.
It was suggested that I download a web traffic snooper like Fiddler and then try to replicate the actions with my Python script.
I have downloaded Fiddler, but I'm a little out of my depth with how to find and replicate the actions that I need.
Any help would be gratefully received.
My original code:
import requests

payload = {
    'login_Email': 'xxxxx@gmail.com',
    'login_Password': 'xxxxx'
}

with requests.Session() as s:
    p = s.post('https://www.auction4cars.com/', data=payload)
    print p.text
If you look at the browser developer tools, you may see that the login POST request needs to be submitted to a different URL:
https://www.auction4cars.com/Home/UserLogin
Note that the payload also needs to be:
payload = {
    'login_Email_or_Username': 'xxxxx@gmail.com',
    'login_Password': 'xxxxx'
}
I'd still visit the login page before doing that and set the headers:
HOME_URL = 'https://www.auction4cars.com/'
LOGIN_URL = "https://www.auction4cars.com/Home/UserLogin"

with requests.Session() as s:
    s.headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
    }
    s.get(HOME_URL)
    p = s.post(LOGIN_URL, data=payload)
    print(p.text)  # or use p.json() as, I think, the response format is JSON
I am trying to log in to a website using the Python requests module.
Website: http://www.way2sms.com/
I use POST to submit the form data. The following is the code that I use:
import requests as r

URL = "http://www.way2sms.com"
data = {'mobileNo': '###', 'password': '#####'}
header = {'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.42 Safari/537.36'}

sess = r.Session()
x = sess.post(URL, data=data, headers=header)
print x.status_code
I don't see a way to validate whether the login was successful or not. Also, the response is always 200, whether I enter the right login details or not.
My whole intention is to log in and then send text messages using this website (I know that I could have used some API), but I am unable to tell whether I have logged in successfully or not.
Also, this website uses some kind of JSESSIONID (I don't know much about that) to maintain the session.
If you watch the network traffic, you can see the site submits an AJAX request to www.way2sms.com/re-login, so it would be better to submit your request directly there and then check the response (returned content).
Something like this would help:
import requests

session = requests.Session()
URL = 'http://www.way2sms.com/re-login'
data = {'mobileNo': '94########', 'password': 'pass'}  # make sure to remove '+' from your number

post = session.post(URL, data=data)

if post.text != 'login-reg':  # this is what was returned when I entered invalid credentials
    print('Login successful')
else:
    print(post.text)
Since I don't have an account there, you may also need to check what a successful response looks like.
Check if the response object contains the cookie you're looking for, namely JSESSIONID.
if x.cookies.get('JSESSIONID'):
    print 'Login successful.'