How to structure GET 'review link' request from Vimeo API?
I'm new to Python and assume others might benefit from my ignorance.
I'm simply trying to upload via the new Vimeo API and return a 'review link'.
Are there current examples of the Vimeo API in Python? I've read the documentation and can upload perfectly fine. However, when it comes to the HTTP GET I can't seem to figure it out. I'm using Python 2.7.5 and have tried the requests library. I'm ready to give up and just go back to PHP because it's documented so much better.
Any Python programmers out there familiar with this?

EDIT: Since this was written, the vimeo.py library was rebuilt. This is now as simple as taking the API URI, calling vc.get('/videos/105113459'), and looking for the review link in the response.
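With the rebuilt library, a minimal sketch looks like this (assuming the PyVimeo client; the token, key, and secret are placeholders, and the exact name of the review link field may vary by API version):

import vimeo

client = vimeo.VimeoClient(token='YOUR_TOKEN', key='YOUR_KEY', secret='YOUR_SECRET')
response = client.get('/videos/105113459')   # GET /videos/105113459
video = response.json()                      # response body as a dict
print(video.get('review_link'))              # the review link field, if present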
The original:
If you know the API URL you want to retrieve, you can convert it into a vimeo.py call by replacing the slashes with dots. The catch is that Python attribute names (the things separated by the dots) cannot start with a digit, so a numeric path segment is a syntax error.
With our original rule, if you wanted to see /videos/105113459 in the Python library you would do vc.videos.105113459() (if you had vc = vimeo.VimeoClient(<your token and app data>)).
To work around this you can instead use Python's built-in getattr() function. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459.
I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround.
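For reference, a compact sketch of that workaround (the client setup follows the answer above; the video ID is just an example):

import vimeo

vc = vimeo.VimeoClient(token='YOUR_TOKEN')   # plus your app data, per the docs
result = getattr(vc.videos, '105113459')()   # equivalent to GET /videos/105113459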

Related

How do I format GET requests to the Blogger API in Python?

I'm new to Python and APIs. I'm trying to work on a project and need some help.
I just want to request some information from Blogger to get back blog post details (title, body, etc.) and return it.
I want to use these: https://developers.google.com/resources/api-libraries/documentation/blogger/v3/python/latest/blogger_v3.blogs.html
I can use requests.get(url/key) and I get a server status [200], but every time I try to use one of the requests from the link I get keyword errors.
For example: "TypeError: request() got an unexpected keyword argument 'blogId'"
My code is requests.get('url/key', blogId='BLOG ID HERE', x__xgafv=None, maxPosts=None, view=None)
I don't feel comfortable posting my exact code, since it has my API key in it.
What am I doing wrong?
The requests.get() method doesn't have a blogId parameter; see the requests documentation for the arguments it does accept.
I am not sure, but you can pass query parameters like this:
page = requests.get('https://google.com', params={'blogId': '12345'})
It's better to look up information in the docs: https://requests.readthedocs.io/en/master/
I found my solution after digging a bit.
I was using get requests on the API to get a response, which is how it should be used.
Yet, I was also trying to use a method of returning data that also used the get keyword but was meant for use with the Google API Client Library.
I had to install the Google API Client Library and import build (along with requests) so that I could use the API methods instead. This lets me return results that are far more specific (hence the keyword arguments) and can be programmed to do a number of things.
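For anyone landing here, a minimal sketch of that approach (the API key and blog ID are placeholders):

from googleapiclient.discovery import build

service = build('blogger', 'v3', developerKey='YOUR_API_KEY')
posts = service.posts().list(blogId='YOUR_BLOG_ID', maxResults=5).execute()
for post in posts.get('items', []):
    print(post['title'])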
Thanks everyone!

How to automate pulling data (KMZ? JSON?) from My Google Maps

Seeking a bit of guidance on a general approach as to how one would automate the retrieval of data from a My Google Map. While I could easily export any given layer to KML/KMZ, I'm looking for a way to do this within a larger script, that will automate the process. Preferably, where I wouldn't even have to log in to the map itself to complete the data pull.
So, what do you think the best approach is? Two possible options I'm considering are 1) using Selenium/Beautiful Soup to simulate page-clicks on Google Maps and export the KMZ, or 2) making use of the Python Google Maps API. Though, I'm not sure whether that API makes it possible to download a Google Maps layer via a script.
To be clear, the data is already in the map - I'm just looking for a way to export it. It could either be a KMZ export, or better yet, GeoJSON.
Any thoughts or advice welcome! Thank you in advance.
I used my browser’s inspection feature to figure out what was going on under the hood with the website I was interested in grabbing data from, which led me to this solution.
I use Selenium to log in to and navigate the website, then transfer my cookies to Python's Requests package. I have Requests send a specific query to the server, whose response is in the form of JSON. I was able to figure out what query to send, and what form the response would take, through the inspection feature mentioned above. Once I have the response in JSON I use Python's json package to convert it into a Python dict to use however I need.
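Roughly, the hand-off looks like this (a sketch only; the URLs and the query endpoint are placeholders you would discover with the inspector):

import requests
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com/login')
# ... drive the login flow with Selenium here ...

# copy Selenium's cookies into a requests session
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])

# replay the query the inspector showed; the server answers with JSON
response = session.get('https://example.com/data/endpoint')
data = response.json()  # now a plain Python dict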
Sounds like you might not necessarily need Selenium, but it does sound like the Requests package would be useful for your use case. I think your first step is figuring out what form the server response takes when you interact with the website naturally to get what you want.
Hopefully this helps to some degree!

How do you use a web.py application in Wordpress?

I have written an application in Python to collect data from a JavaScript form and return the processed text. It is based entirely on the code here (but a lot more complex, so I have to use Python for this).
https://kooneiform.wordpress.com/2010/02/28/python-and-ajax-for-beginners-with-webpy-and-jquery/
(note to people who like to edit...please leave this link in place since it shows all the relevant code sections in Python and JavaScript).
I need to use this in WordPress (since that's what runs my site) and I honestly have no idea how to pull this off. web.py can run with Apache CGI, but the documentation (http://webpy.org/cookbook/cgi-apache) is only clear if one wants to navigate directly to the Python app as its own page.
I'm hoping someone here has expertise in how to embed this all within a Wordpress page/post?
Thanks!!
As far as I know, there is no native way to run Python code inside a WordPress site the way it runs PHP. In fact, if you are not doing anything unique to Python, I would suggest using PHP, which supports regular expressions and can be used in WordPress by installing the "Insert PHP" plugin.
If you really want to use Python, then you need an API endpoint that connects your function to your website. You could look into Azure Functions or AWS Lambda, where you write a function app to serve as a backend. Then whenever someone uses your website, the site makes an HTTP request to that API.
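As a sketch of the backend half, a hypothetical AWS Lambda handler (Python) might look like this; the processing step is a placeholder for your real logic:

import json

def lambda_handler(event, context):
    body = json.loads(event.get('body') or '{}')
    text = body.get('text', '')
    processed = text.upper()  # stand-in for the real text processing
    return {
        'statusCode': 200,
        # let the WordPress page's JavaScript call this cross-origin
        'headers': {'Access-Control-Allow-Origin': '*'},
        'body': json.dumps({'result': processed}),
    }

Your WordPress page would then POST the form text to the function's URL from JavaScript and display the result.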
Can you explain what exactly you want to do on your website?

Facebook Graph getattachment method returning "unknown method" error

I am trying to write an application which can archive your Facebook messages and their attachments, something similar to the amazing tool https://github.com/bnvk/social-archiver, but I have a problem with non-image attachments, say an audio file for example.
According to View attachments in threads, the solution is the messaging.getattachment API call with the proper parameters. It looks like this works for some people, but when I try it, through a direct browser call as well as with the Python code mentioned before, the response is always the same:
{"error_code":3,"error_msg":"Unknown method","request_args"
followed by all my parameters.
What am I doing wrong here? Is there something wrong with this API endpoint at the moment? Am I passing the parameters the wrong way? Maybe someone who got this working can post an example of how they passed their parameters (not the access token, of course); maybe I'm passing the mid parameter the wrong way.
Any help appreciated.
The messaging.getattachment method is no longer available and is only in use with Facebook's native mobile applications. To use it you would need an access_token from one of those applications, for example Facebook Messenger for iPhone.

Extracting and parsing HTML from a secure website with Python?

Let's dive into this, shall we?
OK, I need to write a script (I don't care what language; I'd prefer something like Python or JavaScript, but whatever works I will take the time to learn). The script will access multiple URLs, extract text from each site, and store it in a folder on my PC. (From there I am manipulating the data with Python, which I know how to do.)
EDIT:
Currently I am using python's NLTK module. Here is a simple version of my code:
import nltk
from urllib2 import urlopen  # Python 2; in Python 3 use urllib.request

url = "<URL HERE>"
html = urlopen(url).read()       # fetch the raw HTML
raw = nltk.clean_html(html)      # strip tags (removed in NLTK 3.x; use BeautifulSoup there)
print(raw)
This code works fine for both http and https, but not for instances where authentication is required.
Is there a Python module which deals with secure authentication?
Thanks in advance for the help! And to the mods who will view this as a bad question, please just give me ways to make it better. I need ideas from people, not Google.
Mechanize is one option; another is plain urllib2 with an HTTP authentication handler.
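For HTTP Basic authentication, a minimal urllib2 sketch (Python 2; the URL and credentials are placeholders):

import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'https://example.com/', 'username', 'password')
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

html = opener.open('https://example.com/protected/page').read()
print(html)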
