I'm new to Python and APIs. I'm trying to work on a project and need some help.
I just want to request some information from Blogger to get back blog post details (title, body, etc.) and return it.
I want to use the methods documented here: https://developers.google.com/resources/api-libraries/documentation/blogger/v3/python/latest/blogger_v3.blogs.html
I can use requests.get(url/key) and I get a 200 status back, but every time I try to use one of the methods from that link I get keyword errors.
For example: "TypeError: request() got an unexpected keyword argument 'blogId'"
My code is requests.get('url/key', blogId='BLOG ID HERE', x__xgafv=None, maxPosts=None, view=None)
I don't feel comfortable posting my exact code, since it has my API key in it.
What am I doing wrong?
The requests.get() method doesn't have a blogId parameter. See the requests documentation for the arguments it does accept.
I am not sure, but you can pass query parameters like this:
import requests
page = requests.get('https://google.com', params={'blogId': '12345'})
It's better to look up information in the docs: https://requests.readthedocs.io/en/master/
I found my solution after digging a bit.
I was using get requests on the API to get a response, which is how it should be used.
However, I was also trying to use a method for returning data that happened to use a get keyword but was meant for the Google API Client Library, not requests.
I had to install the Google API Client Library and import build so that I could use the API's own methods instead. This allows me to return results that are far more specific (hence the keyword arguments), and it can be programmed to do a number of things.
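For anyone landing here later, here is a minimal sketch of what that looks like with google-api-python-client (API_KEY and BLOG_ID are placeholders; the method names follow the Blogger v3 docs linked above):
# A sketch, assuming google-api-python-client is installed
# (pip install google-api-python-client); API_KEY and BLOG_ID are placeholders.
from googleapiclient.discovery import build

API_KEY = 'YOUR_API_KEY'
BLOG_ID = 'YOUR_BLOG_ID'

# build() constructs a service object for the Blogger v3 API
service = build('blogger', 'v3', developerKey=API_KEY)

# posts().list() is the library method that takes blogId as a keyword argument,
# unlike requests.get()
posts = service.posts().list(blogId=BLOG_ID, maxResults=5).execute()
for post in posts.get('items', []):
    print(post['title'])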
Thanks everyone!
Related
I'm new to web scraping and I want to retrieve all the wins and losses for this NHL season. This URL works fine: https://www.nhl.com/scores ... but the problem arises when I try to go back to previous dates, like https://www.nhl.com/scores/2022-09-24 ... which is the URL that shows up when I interact with the buttons on that first page. You can see for yourself. I know I'm missing something here, but it's not obvious to me. Please enlighten me.
I then tried to see if there was a way to use https://www.nhl.com/scores/ to obtain the information I need, but I am having trouble accessing that data.
I'd recommend not using the URLs to access a specific date's data. Instead, take a look at the Fetch/XHR requests under your browser's dev tools -> Network tab to see what API calls are made whenever you click on a date. Then you can call that API endpoint directly from your Python script and parse the JSON response.
You can use the requests library for this: https://requests.readthedocs.io/en/latest/
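A minimal sketch of that approach, assuming the Network tab shows a JSON endpoint for a given date (the hostname and path below are placeholders; use whatever request URL your dev tools actually show):
import requests

# Hypothetical endpoint copied from the browser's Network tab; replace with the
# actual request URL that appears when you click a date.
url = 'https://example-nhl-endpoint.com/v1/score/2022-09-24'

response = requests.get(url)
response.raise_for_status()
data = response.json()

# Inspect the structure first (e.g. print(data.keys())), then pull the wins and
# losses out of whatever fields the endpoint actually returns.
print(data)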
I am trying to figure out how to do multipart uploads to AWS Glacier and found an example request on this documentation page. How do I implement this example in Python? I think I should use the 'requests' module but don't know exactly how to make it work.
Here is what I have done:
import requests
r = requests.post('/042415267352/vaults/history/multipart-uploads')
And this is the error I have:
MissingSchema: Invalid URL '/042415267352/vaults/history/multipart-uploads': No schema supplied.
Perhaps you meant http:///042415267352/vaults/history/multipart-uploads?
I am having this trouble because I don't really understand this stuff: HTTP requests, RESTful APIs, etc. If someone can suggest some resources for me to learn these, in addition to helping with this specific question, that would be great! I don't want to come here and ask again if I run into a similar situation in the future, but for now I don't even know where to start learning.
Your help is highly appreciated!
You do not need to implement the low-level HTTP requests yourself; this is what the boto3 module is for in Python. You can do all of this through the module, which abstracts the low-level requests for you.
For details, see the Boto3 Glacier docs, which contain plenty of examples.
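A sketch of the multipart flow with boto3, assuming boto3 is installed and AWS credentials are configured; 'history' is the vault name from the question, and archive.bin and the part size are placeholders:
import boto3
from botocore.utils import calculate_tree_hash  # SHA-256 tree hash helper

glacier = boto3.client('glacier')
part_size = 4 * 1024 * 1024  # 4 MiB parts

# 1. Start the multipart upload ('-' means the account of the current credentials)
upload = glacier.initiate_multipart_upload(
    accountId='-', vaultName='history',
    archiveDescription='my archive', partSize=str(part_size))
upload_id = upload['uploadId']

# 2. Upload each part with its byte range
offset = 0
with open('archive.bin', 'rb') as f:
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        glacier.upload_multipart_part(
            accountId='-', vaultName='history', uploadId=upload_id,
            range='bytes {}-{}/*'.format(offset, offset + len(chunk) - 1),
            body=chunk)
        offset += len(chunk)

# 3. Complete: Glacier needs the total size and the SHA-256 tree hash of the archive
with open('archive.bin', 'rb') as f:
    checksum = calculate_tree_hash(f)
glacier.complete_multipart_upload(
    accountId='-', vaultName='history', uploadId=upload_id,
    archiveSize=str(offset), checksum=checksum)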
requests.post expects an absolute URL, not a relative path. Prepend the scheme and host (https:// plus the service hostname) to the relative path.
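For example (a sketch; the regional Glacier hostname below is an assumption, so check the endpoint for your region):
import requests

# Hypothetical regional endpoint; Glacier hostnames follow the pattern
# glacier.<region>.amazonaws.com, but verify the one for your account/region.
url = 'https://glacier.us-east-1.amazonaws.com/042415267352/vaults/history/multipart-uploads'
r = requests.post(url)  # a real call also needs AWS auth headers, which boto3 handles for you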
I am trying to write an application which can archive your Facebook messages and their attachments, something similar to the amazing tool https://github.com/bnvk/social-archiver, but I have a problem with non-image attachments, say an audio file, for example.
According to View attachments in threads, the solution is the messaging.getattachment API call with the proper parameters. It looks like this is working for some people, but when I try it, both through a direct browser call and with the Python code mentioned before, the response is always the same:
{"error_code":3,"error_msg":"Unknown method","request_args"
followed by all my parameters.
What am I doing wrong here? Is there something wrong with this API endpoint at the moment? Am I passing the parameters the wrong way? Maybe someone who got this working can post an example of how they passed their parameters (not the access token, of course); maybe I'm putting the mid parameter in the wrong way.
Any help appreciated.
The messaging.getattachment method is no longer publicly available and is only used by Facebook's native mobile applications. To use it, you would need an access_token from one of those applications, for example Facebook Messenger for iPhone.
I am using Python 2.7 in conjunction with the requests module and the PivotalTracker API to make a GET request to retrieve the stories in a particular project. Here is the definition of the endpoint: http://www.pivotaltracker.com/help/api/rest/v5#projects_project_id_stories_get. So I am able to use this and get all the stories in the project, but I am trying to only get the stories of type = bug. This is what I've tried:
#project_id is my project ID and PTtoken is my API token
url = 'https://www.pivotaltracker.com/services/v5/projects/{}/stories?fields=name,story_type,created_at&with_type=bug'
r = requests.get(url.format(project_id), headers={'X-TrackerToken':PTtoken})
This returns the following error:
{"kind":"error","code":"invalid_parameter","general_problem":"this endpoint cannot accept the parameter: with_type","error":"One or more request parameters was missing or invalid."}
So obviously the with_type=bug is what's producing the error, since it works fine without it. But reading through the documentation, it seems like I should be able to refine my search by type. The documentation says: "Searches can be refined with all of the modifications listed here: How can a search be refined?" Clicking through to that link, it says I can search based on type.
Any help or guidance is greatly appreciated. Thanks!
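For readers following along, the same query string can be passed through requests' params argument, which keeps the URL readable (a sketch using the parameters from the question; it does not by itself resolve the invalid_parameter error):
import requests

# project_id and PTtoken are the project ID and API token from the question
url = 'https://www.pivotaltracker.com/services/v5/projects/{}/stories'.format(project_id)
params = {
    'fields': 'name,story_type,created_at',
    'with_type': 'bug',  # the parameter the endpoint is rejecting
}
r = requests.get(url, params=params, headers={'X-TrackerToken': PTtoken})
print(r.json())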
How to structure GET 'review link' request from Vimeo API?
New to Python, and I assume others might benefit from my ignorance.
I'm simply trying to upload via the new Vimeo API and return a 'review link'.
Are there current examples of the vimeo-api in Python? I've read the documentation and can upload perfectly fine. However, when it comes to the HTTP GET I can't figure it out. I'm using Python 2.7.5 and have tried the requests library. I'm ready to give up and go back to PHP because it's documented so much better.
Any Python programmers out there familiar with this?
EDIT: Since this was written, the vimeo.py library has been rebuilt. It is now as simple as taking the API URI, requesting vc.get('/videos/105113459'), and looking for the review link in the response.
The original:
If you know the API URL you want to retrieve this for, you can convert it into a vimeo.py call by replacing the slashes with dots. The issue is that in Python, attribute names (the things separated by the dots) can't start with a digit, so a numeric segment like a video ID is a syntax error.
With our original rule, if you wanted to see /videos/105113459 in the Python library you would do vc.videos.105113459() (if you had vc = vimeo.VimeoClient(<your token and app data>)).
To resolve this you can instead use python's getattr() built-in function to retrieve this. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459.
I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround.
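Putting the two approaches together (a sketch, assuming a configured VimeoClient; the token/key/secret values are placeholders, and the exact field name for the review link may differ depending on the API version):
import vimeo

# Placeholders for your app credentials
vc = vimeo.VimeoClient(token='YOUR_TOKEN', key='YOUR_KEY', secret='YOUR_SECRET')

# Newer vimeo.py: request the API URI directly and read the link out of the JSON
response = vc.get('/videos/105113459')
video = response.json()
print(video.get('review_link'))  # check the response for the exact field name

# Older library described above: a numeric attribute is invalid syntax,
# so getattr() is used instead of vc.videos.105113459()
# result = getattr(vc.videos, '105113459')()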