Submit contact form using POST request (requests & python3)

I'm not sure if such a thing is possible, but I am trying to submit a form such as https://lambdaschool.com/contact using a POST request.
I currently have the following:
import requests
payload = {"name":"MyName","lastname":"MyLast","email":"someemail@gmail.com","message":"My message"}
r = requests.post('http://lambdaschool.com/contact',params=payload)
print(r.text)
But I get the following error:
<title>405 Method Not Allowed</title>
etc.
Is such a thing possible to submit using a POST request?

If it were that simple, you'd see a lot of bots attacking every login form ever.
That URL obviously doesn't accept POST requests, but that doesn't necessarily mean the submit button POSTs to that page (though clicking the button also gives that same error...)
You need to open the Chrome/Firefox dev tools and watch the request to see what happens on form submit, then replicate that data in Python.
Another option would be the mechanize or Selenium WebDriver libraries to simulate a browser and fill out the form.
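A minimal sketch of the Selenium route might look like this (the element names 'name', 'email', and 'message' are assumptions; inspect the page for the real name attributes):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://lambdaschool.com/contact')
# field names below are assumptions, taken from the payload in the question
driver.find_element_by_name('name').send_keys('MyName')
driver.find_element_by_name('email').send_keys('someemail@gmail.com')
driver.find_element_by_name('message').send_keys('My message')
driver.find_element_by_css_selector('button[type="submit"]').click()
driver.quit()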

params is for query parameters. You either want data, for a form encoded body, or json, for a JSON body.
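For example, with the payload from the question (a sketch; whether this endpoint wants a form body or JSON is an assumption, so check what the browser actually sends):
import requests

payload = {"name": "MyName", "lastname": "MyLast",
           "email": "someemail@gmail.com", "message": "My message"}

# form-encoded body (Content-Type: application/x-www-form-urlencoded)
r = requests.post('http://lambdaschool.com/contact', data=payload)

# or a JSON body (Content-Type: application/json)
r = requests.post('http://lambdaschool.com/contact', json=payload)
print(r.status_code, r.text)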

I think the URL should be 'http://lambdaschool.com/contact-form'.

Related

Cannot POST to website by hitting search

I am trying to make a POST request to this website: http://archive.eso.org/wdb/wdb/asm/dimm_paranal/form
So far I have this:
import requests
import bs4

url = 'http://archive.eso.org/wdb/wdb/asm/dimm_paranal/form'
p = {'search': 'Search',
     'start_date': '2019-09-17..2019-09-18'}
post = requests.post(url, data=p)
When I analyse the text of the response I only get the HTML of the form page, not the result of the query. How can I simulate the query?
Additional question: how can I check the checkboxes in the form?
The form has an action; in this case it is /wdb/wdb/asm/dimm_paranal/query. Try sending the request there...
In devtools (Ctrl+Shift+I) there is a "Network" tab. Go there and see what is actually requested; check all the data, response, headers, and so on.
Another tool I would recommend is a program called Postman. You can build requests there, no need to code anything.
Additional answer to your additional question: The checkboxes have no default value. Just set anything. 1, true, whatever. It should work.
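Putting both together, a sketch might look like this (the checkbox name tab_fwhm is only a placeholder; look up the real input names in devtools):
import requests

url = 'http://archive.eso.org/wdb/wdb/asm/dimm_paranal/query'  # the form's action
p = {'search': 'Search',
     'start_date': '2019-09-17..2019-09-18',
     'tab_fwhm': '1'}  # placeholder checkbox name; any truthy value should do
post = requests.post(url, data=p)
print(post.text)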

Figuring out url made in ajax post

At this link, when you hover over any row, there is an icon marked "i" that you can click to get extra data. Then navigate to Lines History. Where is that information coming from? I can't find the URL that it is connected with.
I used dev tools in Chrome and found out that there's an AJAX POST being made:
Request URL:http://www.sbrforum.com/ajax/?a=[SBR.Odds.Modules]OddsEvent_GetLinesHistory
Form Data: UserId=0&Sport=basketball&League=NBA&EventId=259672&View=LH&SportsbookId=238&DefaultBookId=238&ConsensusBookId=19&PeriodTypeId=&StartDate=2014-03-24&MatchupLink=http%3A%2F%2Fwww.sbrforum.com%2Fnba-basketball%2Fmatchups%2F20140324-602%2F&Key=de2f9e1485ba96a69201680d1f7bace4&theme=default
but when I try to visit this URL in the browser I get Invalid Ajax Call -- from host:
Any idea?
Like you say, it's probably an HTTP POST request.
When you navigate to the URL with the browser, the browser issues a GET request, without all the form data.
Try curl, wget, or the JavaScript console in your browser to do a POST.
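In Python you could replay the captured request with requests, for example (a sketch; the Key value came from one browser session and is probably tied to it, so it may need re-capturing):
import requests

url = 'http://www.sbrforum.com/ajax/?a=[SBR.Odds.Modules]OddsEvent_GetLinesHistory'
# decoded from the captured form data above
form_data = {
    'UserId': '0', 'Sport': 'basketball', 'League': 'NBA',
    'EventId': '259672', 'View': 'LH', 'SportsbookId': '238',
    'DefaultBookId': '238', 'ConsensusBookId': '19', 'PeriodTypeId': '',
    'StartDate': '2014-03-24',
    'MatchupLink': 'http://www.sbrforum.com/nba-basketball/matchups/20140324-602/',
    'Key': 'de2f9e1485ba96a69201680d1f7bace4', 'theme': 'default',
}
r = requests.post(url, data=form_data)
print(r.text)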

Python: Connecting to a private page using urllib

I am trying to use urllib to connect to a private page where you have to be logged in to view it. When I try to connect to the page I just get redirected to the login page.
Is there a way to log in with urllib, or to use cookies from my web browser, or something like that?
I have tried to figure out how to do it myself and have failed.
Any help on this would be nice.
If your page uses HTTP authentication, use HTTPBasicAuthHandler.
If your page uses authentication by form, use a POST request to send the login form and store the cookies using cookielib.
Look for Authentication under http://docs.python.org/library/urllib2.html#examples
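A minimal sketch of the form-based case with urllib2 and cookielib (Python 2; the login URL and field names are placeholders you would take from the real form):
import urllib
import urllib2
import cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# POST the login form; URL and field names are placeholders
login_data = urllib.urlencode({'username': 'me', 'password': 'secret'})
opener.open('http://www.example.com/login', login_data)

# the jar now holds the session cookies, so this request stays logged in
page = opener.open('http://www.example.com/private')
print page.read()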

Python script is scraping the wrong page source. I think it's failing to log in properly?

This script succeeds at getting a 200 response object, getting a cookie, and returning reddit's stock homepage source. However, it is supposed to get the source of the "recent activity" subpage, which can only be accessed after logging in. This makes me think it's failing to log in properly, but the username and password are accurate; I've double-checked that.
#!/usr/bin/python
import requests
import urllib2

auth = ('username', 'password')
with requests.session(auth=auth) as s:
    c = s.get('http://www.reddit.com')

cookies = c.cookies
for k, v in cookies.items():
    opener = urllib2.build_opener()
    opener.addheaders.append(('cookie', '{}={}'.format(k, v)))

f = opener.open('http://www.reddit.com/account-activity')
print f.read()
It looks like you're using standard "HTTP Basic" authentication, which is not what Reddit uses to log in to its web site. Almost no web sites use HTTP Basic, which pops up a modal dialog box requesting credentials; most implement their own username/password form.
What you'll need to do is get the home page, read the login form fields, fill in the user name and password, POST the response back to the web site, get the resulting cookie, then use the cookie in future requests. There may be quite a number of other details for you to work out too, but you'll have to experiment.
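A sketch of that flow with a requests session, which keeps cookies between calls (the login URL and field names here are assumptions; take the real ones from reddit's login form):
import requests

with requests.Session() as s:
    # field names and action URL are assumptions -- inspect the real form
    login = {'user': 'username', 'passwd': 'password'}
    s.post('https://www.reddit.com/post/login', data=login)
    # the session now carries whatever cookies the login response set
    r = s.get('http://www.reddit.com/account-activity')
    print(r.text)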
I think maybe we're having the same problem. I get status code 200 OK, but the script never logs me in. I'm getting some suggestions and help; hopefully you'll let me know what works for you too. It seems reddit is using the same system.
Check out this page where my problem is being discussed:
Authentication issue using requests on aspx site

how to find out whether website is using cookies or http based authentication

I am trying to automate file downloads via a webserver. I plan on using wget, curl, or Python's urllib/urllib2.
Most solutions use wget, urllib, or urllib2. They all talk of HTTP-based authentication and cookie-based authentication. My problem is I don't know which one the website that stores my data uses.
Here is the interaction with the site:
Normally I login to site http://www.anysite.com/index.cgi?
I get a form with a login and password. I type in both and hit return.
The URL stays as http://www.anysite.com/index.cgi? during the entire interaction, but now I have a list of folders and files.
If I click on a folder or file the URL changes to http://shamrockstructures.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjanysite%2Fpublic_html%2Fuser_data%2Fuserareas%2Ffile.tar.bz2
And the browser offers me a chance to save the file
I want to know how to figure out whether the site is using HTTP or cookie based authentication. After which I am assuming I can use cookielib or urllib2 in python to connect to it, get the list of files and folders and recursively download everything while staying connected.
P.S.: I have tried the cookie-cutter ways to connect via wget, e.g. wget --http-user "uname" --http-password "passwd" http://www.anysite.com/index.cgi?, but they only return the web form to me.
If you log in using a Web page, the site is probably using cookie-based authentication. (It could technically use HTTP basic auth, by embedding your credentials in the URI, but this would be a dumb thing to do in most cases.) If you get a separate, smallish dialog with a user name and password field (like this one), it is using HTTP basic authentication.
If you try to log in using HTTP basic auth, and get back the login page, as is happening to you, this is a certain indication that the site is not using HTTP basic auth.
Most sites use cookie-based authentication these days. To do this with an HTTP client such as urllib2, you will need to do an HTTP POST of the fields in the login form. (You may need to actually request the login form first, as a site could require a cookie just to log in, but usually this is not necessary.) This should return a "successfully logged in" page that you can test for. Save the cookies you get back from this request. When making the next request, include these cookies. Each request you make may respond with cookies, and you need to save those and send them again with the next request.
urllib2 can be paired with a "cookie jar" (cookielib.CookieJar) which will automatically handle the cookies for you as you send requests and receive web pages. That's what you want.
You can use pycurl like this:
import pycurl

COOKIE_JAR = 'cookiejar'  # file to store the cookies
LOGIN_URL = 'http://www.yoursite.com/login.cgi'
USER_FIELD = 'user'       # name of the element in the HTML form
USER = 'joe'
PASSWD_FIELD = 'passwd'   # name of the element in the HTML form
PASSWD = 'MySecretPassword'

def read(html):
    """Read the body of the response, with possible
    future HTML parsing and re-requesting."""
    print html

com = pycurl.Curl()
com.setopt(pycurl.WRITEFUNCTION, read)
com.setopt(pycurl.COOKIEJAR, COOKIE_JAR)
com.setopt(pycurl.FOLLOWLOCATION, 1)  # follow redirects
com.setopt(pycurl.POST, 1)
# form fields are joined with '&', per form encoding
com.setopt(pycurl.POSTFIELDS, '%s=%s&%s=%s' % (USER_FIELD, USER,
                                               PASSWD_FIELD, PASSWD))
com.setopt(pycurl.URL, LOGIN_URL)
com.perform()
Plain pycurl may seem very "primitive" (with the limited setopt approach),
but it gets the job done and handles cookies well with the cookie-jar option.
AFAIK cookie-based authentication is only used once you have logged in successfully at least ONCE. You can try disabling cookie storage for that domain in your browser settings; if you are still able to download files, then it should be HTTP-based authentication.
Try doing an equivalent GET request for the (possibly POST) login request that is probably happening right now during login. Use Firebug or Fiddler to see the login request that is sent.
Also note whether some JavaScript code returns different output based on your user-agent string or some other parameter.
See if httplib or mechanize helps.
