I need a way to pin YouTube comments automatically. I have checked the YouTube Data API v3 documentation, but it does not offer this feature. Does anyone have an idea?
To set up the automatic mechanism, first open the Network tab of your browser's Web Developer Tools, then pin an ad hoc comment; you should notice an XHR request to the perform_comment_action endpoint. Right-click this request and copy it as cURL. Note the last field, actions, in the JSON-encoded --data-raw argument. Decode this base64-encoded field, change the first plaintext argument, Ug...Ag, to the ID of the comment you want to pin, re-encode the field in base64, and execute the cURL request. That's it! (A sketch of the re-encoding step is shown after the note below.)
Note that there is no need to modify any other parameter, even when pinning a comment on a different video than the one the ad hoc comment was posted on.
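A minimal sketch of the decode-patch-re-encode step in Python, assuming the decoded actions field is plain text that contains the comment ID verbatim (in practice it may be URL-safe base64 of a binary payload, so adjust accordingly); COPIED_ACTIONS and both IDs are placeholders you must take from your own copied cURL request:

import base64

COPIED_ACTIONS = "..."            # base64 "actions" value from the --data-raw payload
ORIGINAL_COMMENT_ID = "Ug...Ag"   # comment ID visible in the decoded field
TARGET_COMMENT_ID = "Ug...Ag"     # ID of the comment you actually want to pin

# Decode, swap the comment ID, and re-encode; urlsafe_b64decode handles the
# URL-safe alphabet YouTube tends to use (padding is restored if missing).
padded = COPIED_ACTIONS + "=" * (-len(COPIED_ACTIONS) % 4)
decoded = base64.urlsafe_b64decode(padded).decode("utf-8", errors="replace")
patched = decoded.replace(ORIGINAL_COMMENT_ID, TARGET_COMMENT_ID)
NEW_ACTIONS = base64.urlsafe_b64encode(patched.encode("utf-8")).decode("ascii")

print(NEW_ACTIONS)  # substitute this back into the copied cURL request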
I am trying to use the requests library in Python to fetch data from the traffic API.
This is the link for the website with the API that should include the traffic data:
https://api.tomtom.com/traffic/services/4/flowSegmentData/relative0/10/json?point=52.41072%2C4.84239&openLr=true&jsonp=jsonp&key=3EeqxQCR2DNsYzRCT0RPIxUhlzAM3hQc
But it returns "Developer Inactive" on the website. How do I solve that and use the API?
I also want to ask if this will work with Kivy.
The API request you provided has an API key that no longer exists. I tried it with the API key you provided in the comment and it worked. But note that you copied it incorrectly: there is an extra character at the beginning.
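For reference, a minimal sketch of the same call with requests, assuming YOUR_API_KEY is a valid key from your own TomTom developer account (requests works the same inside a Kivy app):

import requests

API_KEY = "YOUR_API_KEY"  # replace with your own key; the one in the question is inactive
url = "https://api.tomtom.com/traffic/services/4/flowSegmentData/relative0/10/json"
params = {"point": "52.41072,4.84239", "openLr": "true", "key": API_KEY}

resp = requests.get(url, params=params)
resp.raise_for_status()  # a "Developer Inactive" body indicates an invalid or inactive key
print(resp.json())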
I am building a Python script that streams a screen capture to my Facebook Page and retrieves all the comments from the Facebook Live stream in real time, so that I can do some processing in the middle of the stream.
The Facebook App was set up (in development mode), but when I tried to retrieve the comments from my live stream, I was only able to retrieve the name and ID ("from") of comments made as the Facebook Page admin, not comments made by other users. I need the user's ID, the user's name, and their comments.
I understand that I need to switch the Facebook App to live mode in order to retrieve all the comments with their details. When I tried, it told me that I need to get the permissions approved. I filled in most of the fields and tried to request the two permissions (manage_pages for the comments and the Live Video API for the streaming), but I could not, because I had left the platform field empty.
Below is the message I got:
You do not have any platforms eligible for review. Please configure a platform on your Settings page.
The problem is that when I tried to choose a platform from the list shown, "Python script" is not one of the listed platforms.
Does anyone know a solution, or a different way to retrieve what I need?
Have you tried using PyLivestream?
It can be used to stream to Facebook Live using FFmpeg (actually to multiple services simultaneously, like Periscope, YouTube, etc.).
It adheres to the RTMPS requirement and should be an option for you if I interpret your needs correctly.
python -m pip install PyLivestream
Facebook Live:
- Facebook Live requires FFmpeg >= 4.2 due to mandatory RTMPS.
- Configure your Facebook Live stream.
- Put the stream ID from https://www.facebook.com/live/create into the file facebook.key.
- Run the Python script for Facebook with your chosen input.
Check out the PyLivestream page on PyPI for details.
"To be able to retrieve all the comments from the Facebook Live stream"
I'm not sure whether this is possible using PyLivestream alone, but the Polls API can be used to create VideoPoll objects in the Graph API: polls on live video broadcasts that collect real-time responses from your viewers. A poll is created with the
POST /{live-video-id}/polls
endpoint on a LiveVideo object.
Upon creation, the API will return a VideoPoll object ID, which you can use to manipulate the poll and query for viewer interactions.
I guess you'll have to do a bit of digging to figure out the details, but I believe this would be the right way to approach this task.
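A hedged sketch of creating such a poll with plain requests, assuming the POST /{live-video-id}/polls endpoint accepts question and options parameters (check the current Graph API reference; the IDs, token, and API version below are placeholders):

import requests

LIVE_VIDEO_ID = "YOUR_LIVE_VIDEO_ID"    # placeholder
PAGE_ACCESS_TOKEN = "YOUR_PAGE_TOKEN"   # placeholder Page Access Token

resp = requests.post(
    f"https://graph.facebook.com/v12.0/{LIVE_VIDEO_ID}/polls",
    data={
        "question": "Which topic should we cover next?",
        "options": '["Topic A", "Topic B"]',  # assumed JSON-encoded list, per Graph API conventions
        "access_token": PAGE_ACCESS_TOKEN,
    },
)
print(resp.json())  # on success this should include the new VideoPoll object ID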
In order to get the "from" field when retrieving the comments, you need the manage_pages permission for the Facebook App that is linked to your Facebook Page. You will need to submit an App Review for your Facebook App, which usually takes 1-3 days to process. If you are lucky, it may take only about 6-8 hours.
Once it is approved, you can request the permission and get your application to go live.
Also, use the Page Access Token in the "access_token" field when invoking the API so that it will return the "from" field, which contains the ID and name of the user.
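A minimal sketch of polling the live video's comments with requests once the review is approved, assuming LIVE_VIDEO_ID and PAGE_ACCESS_TOKEN are your own values (the Graph API version is illustrative):

import requests

LIVE_VIDEO_ID = "YOUR_LIVE_VIDEO_ID"   # placeholder
PAGE_ACCESS_TOKEN = "YOUR_PAGE_TOKEN"  # Page Access Token, not a user token

resp = requests.get(
    f"https://graph.facebook.com/v12.0/{LIVE_VIDEO_ID}/comments",
    params={
        "access_token": PAGE_ACCESS_TOKEN,
        "fields": "from{id,name},message,created_time",
    },
)
resp.raise_for_status()
for comment in resp.json().get("data", []):
    sender = comment.get("from", {})  # only populated once the permission is granted
    print(sender.get("id"), sender.get("name"), comment.get("message"))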
I need to download a file from a private GitLab.
I already saw this post:
Download from a GitLab private repository
But I cannot use the API, since I don't have the IDs needed to download the files.
In fact, I need to download them by their raw HTTP URLs, like:
http://gitlab.private.com/group/repo_name/raw/master/diagrams/test.plantuml
Since I turned on authentication, every time I try to access something programmatically, I am redirected to the login page.
I wrote a Python script to mimic the login process and obtain the authenticity_token and the _gitlab_session cookie, but it still does not work.
If I grab a session cookie from my Chrome browser after a successful login, everything works like a charm (from the file-download perspective) in Python and even curl.
So any help in obtaining this cookie, or a different approach, is appreciated. To use the API, I would first have to trawl through all the repos doing string matching to find the proper IDs; that is my last resort.
Thanks, Marco
First generate a Personal Access Token from your settings page: /profile/personal_access_tokens. Make sure it has the read_repository scope.
Then you can use one of two methods, replacing PRIVATETOKEN with the token you acquired:
Pass a Private-Token header in your request. E.g.
curl --header "Private-Token: PRIVATETOKEN" http://gitlab.private.com/group/repo_name/raw/master/diagrams/test.plantuml
Add a private_token query string to your request. E.g.
curl 'http://gitlab.private.com/group/repo_name/raw/master/diagrams/test.plantuml?private_token=PRIVATETOKEN'
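Since you are working in Python anyway, the same request with requests might look like this (a sketch using the token-header variant; PRIVATETOKEN is your own token):

import requests

url = "http://gitlab.private.com/group/repo_name/raw/master/diagrams/test.plantuml"
resp = requests.get(url, headers={"Private-Token": "PRIVATETOKEN"})
resp.raise_for_status()

with open("test.plantuml", "wb") as f:
    f.write(resp.content)  # save the raw file locally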
I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and reopening the URL in a while(1) loop to get the updated HTML, or is there a better way to do this? Thanks!
You'll have to re-download the website periodically. Don't do it constantly, because that would be too hard on the server.
This is because HTTP is, by nature, not a streaming protocol. Once you connect to the server, it expects you to send an HTTP request, and it will send back an HTTP response containing the page. If your initial request uses keep-alive (the default as of HTTP/1.1), you can send the same request again over the same connection to get the page up to date.
What would I recommend? Depending on your needs, fetch the page every n seconds and extract the data you need. If the site provides an API, you may be able to capitalize on that. Also, if it's your own site, you could implement comet-style Ajax over HTTP and get a true stream.
Also note that if it's someone else's page, the site may use Ajax via JavaScript to keep itself up to date; this means other requests are causing the updates, and you may need to dissect the website to figure out which requests you need to make to get the data.
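A minimal polling sketch in Python 3 (the question uses the Python 2 urllib, but the idea is the same), assuming a hypothetical URL and a hypothetical regular expression for the artist markup:

import re
import time
import urllib.request

URL = "http://example.com/now-playing"            # placeholder for the radio site
PATTERN = re.compile(r'class="artist">([^<]+)<')  # hypothetical markup; adjust to the real page

last_artist = None
while True:
    html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")
    match = PATTERN.search(html)
    if match and match.group(1) != last_artist:
        last_artist = match.group(1)
        print("Now playing:", last_artist)
    time.sleep(30)  # be gentle on the server: poll every 30 seconds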
If you use urllib2 you can read the headers when you make the request. If you send a conditional request (an If-Modified-Since or If-None-Match header) and the server replies with "304 Not Modified", the content hasn't changed.
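A sketch of such a conditional request (shown with the Python 3 urllib.request equivalent of urllib2), assuming the server honors If-Modified-Since; the URL and date are placeholders:

import urllib.error
import urllib.request

url = "http://example.com/now-playing"  # placeholder
req = urllib.request.Request(url)
req.add_header("If-Modified-Since", "Sat, 01 Jan 2022 00:00:00 GMT")

try:
    resp = urllib.request.urlopen(req)
    print("Changed; new content length:", len(resp.read()))
except urllib.error.HTTPError as e:
    if e.code == 304:
        print("Not modified since the last check")
    else:
        raise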
Yes, this is the correct approach. To see changes on the web, you have to send a new query each time; live AJAX sites do exactly the same internally.
Some sites provide an additional API, including long polling. Look for documentation on the site, or ask its developers whether one exists.
I'm trying to make an http request using httplib2:
import httplib2, time, re, urllib
conn = httplib2.Http(".cache")
page = conn.request(u"http://www.mydomain.com/search?q=cars#p=100", "GET")
The response is OK, but the "#p=100" part does not get passed along. Does anyone know how to pass this with httplib2?
thanks
The fragment in the URL is not passed to the server.
+1 to Ignacio because he answered correctly first.
The relevant documentation, from https://www.rfc-editor.org/rfc/rfc2396#section-4.1:
When a URI reference is used to perform a retrieval action on the identified resource, the optional fragment identifier, separated from the URI by a crosshatch ("#") character, consists of additional reference information to be interpreted by the user agent after the retrieval action has been successfully completed. As such, it is not part of a URI, but is often used in conjunction with a URI.
In the case of the link above, the browser uses the information after the crosshatch as a bookmark for a particular spot in the HTML.
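A small demonstration of this split using Python's urllib.parse: the fragment is separated from the parts that are actually sent to the server.

from urllib.parse import urlparse

parts = urlparse("http://www.mydomain.com/search?q=cars#p=100")
print(parts.path)      # /search  (sent to the server)
print(parts.query)     # q=cars   (sent to the server)
print(parts.fragment)  # p=100    (kept by the client, never sent)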
If anyone else stumbles onto this question and wants an answer, I found an answer from another Stack Overflow question:
The fragment of the url after the hash (#) symbol is for client-side handling and isn't actually sent to the webserver. My guess is there is some javascript on the page that requests the correct data from the server using AJAX, and you need to figure out what URL is used for that.
If you use Chrome, you can watch the Network tab of the developer tools and see which URLs are requested when you click the link to go to page two in your browser.
To open the developer tools in Chrome, press F12 (Windows) or Cmd+Option+I (Mac). If you click the options gear in the bottom-right corner, make sure "Preserve log upon navigation" is checked.