Using requests in Python to POST to an HTML form?

If I were trying to Google something, how would I send the data I want searched? I know you can add it to the URL, but I do not want to do this.

Use the requests library. You would use the .post method in the same way as the .get method, passing the data as a dictionary to the data parameter of the function.
The quickstart docs describe it here http://requests.readthedocs.org/en/latest/user/quickstart/#more-complicated-post-requests
If using urllib or urllib2, passing the data parameter to the urlopen function will POST the data to the page rather than GET it.
See the docs here: http://docs.python.org/library/urllib.html#urllib.urlopen
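For example, a minimal sketch of the requests approach (the URL and field names below are placeholders for whatever the target form actually expects):
import requests

# Field names are placeholders; use the name attributes from the real form
form_data = {'q': 'search terms', 'lang': 'en'}
response = requests.post('http://example.com/search', data=form_data)
print(response.status_code)
print(response.text)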

Related

How do I format get requests to Blogger API in Python?

I'm new to Python and API's. I'm trying to work on a project and need some help.
I just want to request some information from Blogger to get back blog post details (title, body, etc) and return it.
I'm wanting to use these: https://developers.google.com/resources/api-libraries/documentation/blogger/v3/python/latest/blogger_v3.blogs.html
I can use requests.get(url/key) and I get a server status [200], but every time I try to find a way to use one of the requests from the link I get keyword errors.
For example: "TypeError: request() got an unexpected keyword argument 'blogId'"
My code is requests.get('url/key', blogId='BLOG ID HERE', x__xgafv=None, maxPosts=None, view=None)
I don't feel comfortable posting my exact code, since it has my API key in it.
What am I doing wrong?
The requests.get() method doesn't have a blogId parameter. For more info, see the docs.
I am not sure, but you can use params like this:
page = requests.get('https://google.com', params={'blogId': '12345'})
It's better to look up information in the docs: https://requests.readthedocs.io/en/master/
I found my solution after digging a bit.
I was using get requests on the API to get a response, which is how it should be used.
Yet, I was also trying to use a method of returning data that also used the get keyword, but was meant for use with the Google API Library.
I had to install the Google API Client library and import build and requests so that I could use the API methods instead. This allows me to return results that are far more specific (hence the keyword arguments), and it can be programmed to do a number of things.
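For anyone who lands here later, a minimal sketch of that approach, assuming the google-api-python-client package is installed (the API key and blog ID are placeholders):
from googleapiclient.discovery import build

# Build a Blogger v3 service; the key and blog ID are placeholders
service = build('blogger', 'v3', developerKey='YOUR_API_KEY')
posts = service.posts().list(blogId='YOUR_BLOG_ID', maxResults=5).execute()
for post in posts.get('items', []):
    print(post['title'])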
Thanks everyone!

Open and Receive JSON response from url

I have a JSON document with information intended for my addon. I found some code on this forum and tried to modify it, without success. What I intend is to call this link (https://tugarepo.000webhostapp.com/lib/lib.json) through the function below so that I can see the content.
CODE:
return json.loads(openfile('lib.json',path.join('https://tugarepo.000webhostapp.com/lib/lib.json')))
Python Answer
You can use
import urllib2
urllib2.urlopen('https://tugarepo.000webhostapp.com/lib/lib.json').read()
in Python 2.7 to perform a simple GET request on your file. I think you're confusing openfile, which is for local files only, with an HTTP GET request, which is for hosted content. The result of the read() can be passed to any JSON library available for your project.
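Putting the two together, a minimal sketch (Python 2.7, standard library only):
import json
import urllib2

# Fetch the hosted JSON and parse it into a Python object
response = urllib2.urlopen('https://tugarepo.000webhostapp.com/lib/lib.json')
data = json.loads(response.read())
print(data)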
Original Answer for Javascript tag
In plain JavaScript, you can use a function like the one explained here: HTTP GET request in JavaScript?
If you're using Bootstrap or jQuery, you can use the following: http://api.jquery.com/jquery.getjson/
If you want to see the content on the HTML page (associated with your JavaScript), you'll simply have to grab an element from the page (document.getElementById or document.getElementsByClassName and such). Once you have a DOM element, you can add HTML containing your JSON data into it yourself.
Example code: https://codepen.io/MrKickkiller/pen/prgVLe
The above code is based on having jQuery linked in your HTML. There is, however, an error since your link doesn't send Access-Control headers. Therefore, currently only requests coming from the tugarepo.000webhostapp.com domain have access to the JSON file. Consider adding CORS headers: https://enable-cors.org/
Simply do:
fetch('https://tugarepo.000webhostapp.com/lib/lib.json')
.then(function (response) { return response.json() })
.then(function (body) { console.log(body)});
But this throws an error as your JSON is invalid.

Python and Parse HTML

My input is the URL of a page. I want to get the HTML of the page, then parse it for a specific JSON response and grab a product ID and another URL. As a next step, I would like to append the product ID to the URL that was found.
Any advice on how to achieve this?
As far as retrieving the page, the requests library is a great tool, and much more sanity-friendly than cURL.
I'm not sure based on your question, but if you're getting JSON back, just import the native JSON library (import json) and use json.loads(data) to get a dictionary (or list) provided the response is valid JSON.
If you're parsing HTML, there are several good choices, including BeautifulSoup and lxml. The former is easier to use but doesn't run as quickly or efficiently; the latter can be a bit obtuse but it's blazingly fast. Which is better depends on your app's requirements.
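As a rough sketch of that flow with requests and BeautifulSoup (the URL, the script-tag lookup, and the JSON keys are all placeholders, since the actual page structure isn't shown):
import json
import requests
from bs4 import BeautifulSoup

# Placeholder URL; swap in the real page
html = requests.get('http://example.com/product-page').text
soup = BeautifulSoup(html, 'html.parser')

# Suppose the page embeds its JSON in a <script> tag; the selector and keys are guesses
script = soup.find('script', {'type': 'application/json'})
payload = json.loads(script.string)

product_id = payload['productId']   # placeholder key
found_url = payload['detailsUrl']   # placeholder key

# Append the product ID to the URL that was found
print('{}?productId={}'.format(found_url, product_id))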

Proper way to extract JSON data from the web given an API

I have a URL in the form of
http://site.com/source.json?s=
And I wish to use Python to create a class that will allow me to pass in my "s" query, send it to that site, and extract the JSON results.
I've tried importing json/setting up the class, but nothing ever really works and I'm trying to learn good practices at the same time. Can anyone help me out?
Ideally, you should (especially when starting out), use the requests library. This would enable your code to be:
import requests
r = requests.get('http://site.com/source.json', params={'s': 'somevalue/or other here'})
json_result = r.json()
This automatically escapes the parameters and converts your JSON result into a Python dict.
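Since the question mentions creating a class, one possible sketch of wrapping that up (the class and method names are purely illustrative):
import requests

class JsonSource(object):
    """Thin wrapper around the source.json endpoint."""

    def __init__(self, base_url='http://site.com/source.json'):
        self.base_url = base_url

    def search(self, query):
        # requests URL-encodes the 's' parameter for us
        response = requests.get(self.base_url, params={'s': query})
        response.raise_for_status()
        return response.json()

# Usage:
# results = JsonSource().search('my query')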

Python urllib2 automatic form filling and retrieval of results

I'm looking to query a site for warranty information on the machine that this script would be running on. It should be able to fill out a form if needed (as in the case of, say, HP's service site) and then retrieve the resulting web page.
I already have the bits in place to parse the resulting HTML that is reported back; I'm just having trouble with what needs to be done to POST the data that goes in the form fields and then retrieve the resulting page.
If you absolutely need to use urllib2, the basic gist is this:
import urllib
import urllib2
url = 'http://whatever.foo/form.html'
form_data = {'field1': 'value1', 'field2': 'value2'}
params = urllib.urlencode(form_data)
response = urllib2.urlopen(url, params)
data = response.read()
If you send along POST data (the 2nd argument to urlopen()), the request method is automatically set to POST.
I suggest you do yourself a favor and use mechanize, a full-blown urllib2 replacement that acts exactly like a real browser. A lot of sites use hidden fields, cookies, and redirects, none of which urllib2 handles for you by default, whereas mechanize does.
Check out Emulating a browser in Python with mechanize for a good example.
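For reference, a minimal sketch of the mechanize approach (the URL and field names are placeholders):
import mechanize

br = mechanize.Browser()
br.open('http://whatever.foo/form.html')
br.select_form(nr=0)       # select the first form on the page
br['field1'] = 'value1'    # keys must match the form fields' name attributes
br['field2'] = 'value2'
response = br.submit()     # mechanize carries hidden fields and cookies along
data = response.read()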
Using urllib and urllib2 together:
import urllib
import urllib2
data = urllib.urlencode([('field1', val1), ('field2', val2)])  # list of two-element tuples
content = urllib2.urlopen('post-url', data).read()
content will give you the page source.
I’ve only done a little bit of this, but:
You’ve got the HTML of the form page. Extract the name attribute for each form field you need to fill in.
Create a dictionary mapping the name of each form field to the value you want to submit.
Use urllib.urlencode to turn the dictionary into the body of your post request.
Include this encoded data as the second argument to urllib2.Request(), after the URL that the form should be submitted to.
The server will either return a resulting web page, or return a redirect to a resulting web page. If it does the latter, you’ll need to issue a GET request to the URL specified in the redirect response.
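A rough sketch of those steps (the URL and field names are placeholders):
import urllib
import urllib2

# Dictionary of form field names mapped to the values to submit, urlencoded into the request body
form_fields = {'serial_number': 'ABC123'}
body = urllib.urlencode(form_fields)

# POST the encoded data to the URL the form submits to
request = urllib2.Request('http://example.com/warranty-lookup', body)
response = urllib2.urlopen(request)
result_html = response.read()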
I hope that makes some sort of sense?
