Ok so I'm trying to get my likes in Python in the same format as what I get with get('e1/me/stream.json'). My understanding is that I can only use v1 with get in Python. Sniffing the requests, I tried this URL:
https://api.soundcloud.com/e1/me/track_likes/ids?app_version=c53f4bf&client_id=02gUJC0hH2ct1EGOcYXQIzRFU91c72Ea&cursor=1426363878000000&limit=5000&linked_partitioning=1&page_number=0&page_size=200
I get 401 Unauthorized every time, but what's weird is that if I go to another URL and then hit back, all the IDs are displayed.
It used to be as simple as get('e1/me/likes.json'), but that doesn't work anymore.
Thank you so much and happy holidays to you guys!
Alex
Ok so the answer is
me/favorites?oauth_token=your_auth_token
Weird that it's perfectly fine to get your stream (https://api.soundcloud.com/e1/me/stream.json) but you need authentication for your public likes.
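For reference, here's a minimal sketch of that call with requests; the oauth_token value is obviously a placeholder, and the "id"/"title" fields I print are what the old v1 track objects looked like, so treat them as assumptions:

import requests

# Placeholder token; use the OAuth token from your own authenticated session.
OAUTH_TOKEN = "your_auth_token"

resp = requests.get(
    "https://api.soundcloud.com/me/favorites",
    params={"oauth_token": OAUTH_TOKEN, "limit": 200},
)
resp.raise_for_status()
for track in resp.json():
    # "id" and "title" are the usual fields on a v1 track object.
    print(track["id"], track["title"])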
I'm using YouTube Data API V3 to extract info about my YouTube channel.
I'd like to identify Shorts so I can analyze them separately.
I found a solution in another discussion, which is to do a HEAD request to "https://www.youtube.com/shorts/videoId": the URL should redirect if the video is not a Short, and should not redirect if it is one.
Unfortunately, regardless of whether I'm passing a Short or not, I get <Response [302]>.
I suspect this is because I'm in the EU and if I try to access the URL without being logged-in I'm redirected to the cookie consent page: https://consent.youtube.com/m?continue=https%3A%2F%2Fwww.youtube.com%2Fshorts%2F-2mHZGXtXSo%3Fcbrd%3D1&gl=DE&m=0&pc=yt&uxe=eomty&hl=en&src=1
Is that the case?
If so, is there any workaround? (aside from a VPN)
Thanks in advance,
I would have gladly commented on the other discussion instead of creating another topic, but I'm a simple lurker with no reputation, so I can't comment.
Here is the original conversation: how do i get youtube shorts from youtube api data v3
Ran into this as well trying to identify shorts. Turns out, sending a cookie value of CONSENT=YES+ will bypass the consent screen. In Python, this might look like:
requests.head(shorts_link, cookies={"CONSENT": "YES+"})
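Putting it together, here's a rough sketch of the heuristic from the other discussion (the 200-vs-redirect check is an assumption on my part; YouTube can change this behaviour at any time):

import requests

def is_short(video_id):
    # HEAD the /shorts/ URL; a Short answers 200, a normal video gets
    # redirected (30x) to /watch. The CONSENT cookie skips the EU consent page.
    resp = requests.head(
        "https://www.youtube.com/shorts/" + video_id,
        cookies={"CONSENT": "YES+"},
        allow_redirects=False,
    )
    return resp.status_code == 200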
I am not experienced in web development, and I'm trying to use requests.get to fetch some authenticated data. So far the internet appears to tell me to just do it; I think I am formatting it wrong, but I'm unsure how. After some trial and error, I was able to grab my cookie for the website. The following is a made-up version of what I grabbed, with similar formatting:
cookie = "s:abcDEfGHIJ12k34LMNopqRst5UvW-6xy.ZAbCd/eFGhi7j8KlmnoPqrstUvWXYZ90a1BCDE2fGH3"
Then, in Python, I am trying to send a request. The following is a bit more pseudocode for what I am doing:
r = requests.get('https://www.website.com/api/getData', cookies={"connect.sid": cookie})
After all this, the site keeps sending me a 400 error. I'm wondering if you guys have any idea whether I am putting in the wrong cookie (or the wrong part of the cookie), or whether everything looks right and it is probably the site at fault.
I grabbed a Wireshark capture and found that other cookie fields were being sent that I had not filled out:
_ga
_gid
___gads
Filled those out with the relevant values, and it works.
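In other words, something like this (the _ga/_gid/___gads values below are made-up placeholders; copy the real ones from your own browser session, just like the connect.sid cookie):

import requests

cookie = "s:your-session-cookie-value"  # the connect.sid value you grabbed from the browser

cookies = {
    "connect.sid": cookie,
    "_ga": "GA1.2.1111111111.2222222222",       # placeholder
    "_gid": "GA1.2.3333333333.4444444444",      # placeholder
    "___gads": "ID=abcdef:T=1234567890:S=XYZ",  # placeholder
}

r = requests.get("https://www.website.com/api/getData", cookies=cookies)
print(r.status_code)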
I have been trying to download past broadcasts for a streamer on Twitch using Python. I found this Python code online:
https://gist.github.com/baderj/8340312
However, when I try to call the functions I am getting errors giving me a status 400 message.
I'm unsure if this is the right code for downloading the video (as an MP4), or how to use it properly.
And by video I mean something like this, as an example: www(dot)twitch.tv/imaqtpie/v/108909385 // note: can't put more than 3 links since I don't have 10 reputation
Any tips on how I should go about doing this?
Here's an example of running it in cmd:
python twitch_past_broadcast_downloader.py 108909385
After running it, it gave me this:
Exception API returned 400
This is where I got the information on running it:
https://www.johannesbader.ch/2014/01/find-video-url-of-twitch-tv-live-streams-or-past-broadcasts/
Huh, it's not as easy as it seems... The code you found in this gist is quite old and Twitch has completely changed its API. You now need a Client ID to download videos, so Twitch can limit how much video you're downloading.
If you want to fix this gist, here are the simple steps you can take:
Register an application: everything is explained here! Register your app and keep your client ID handy.
Change the API route: it is no longer '{base}/api/videos/a{id_}' but '{base}/kraken/videos/{id_}' (not sure about the last one). You will need to change it inside the Python code. The doc is here.
Add the client ID to the request: as said in the doc, you need to send a header with the request you make, so add a Client-ID: <client_id> header (see the sketch after these steps).
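Here's a quick sketch of steps 2 and 3 with requests; the Accept header pinning the v5 (Kraken) API is my addition, and the client ID is the placeholder you get from registering your app:

import requests

CLIENT_ID = "your_client_id"  # from the application you registered
video_id = 108909385

resp = requests.get(
    "https://api.twitch.tv/kraken/videos/{}".format(video_id),
    headers={
        "Client-ID": CLIENT_ID,
        "Accept": "application/vnd.twitchtv.v5+json",
    },
)
resp.raise_for_status()
print(resp.json().get("title"))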
And now I think you will need to start debugging a bit, because it is old code :/
I will try to do it myself and will edit this answer when I'm finished, but give it a try yourself :)
See ya!
EDIT: Mhhh... It doesn't seem to be possible to download a video with the API anyway :/ I thought only the API links had changed, but the chunks section of the response from the video URL has disappeared, and Twitch no longer gives access to raw videos :/
Really sorry I told you to do that; even with the API, I think it's no longer possible :/
You can download past Twitch broadcasts with the Python library streamlink.
You will need an OAuth token, which you can generate with the command:
streamlink --twitch-oauth-authenticate
Download VODs with:
streamlink --twitch-oauth-token <your-oauth-token> https://www.twitch.tv/videos/<VideoID> best -o <your-output-folder>
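If you'd rather stay in Python instead of shelling out to the CLI, streamlink also has a library API. This is only a rough equivalent of the command above: it skips the OAuth-token handling, and "best" plus the output filename are just illustrative choices.

import streamlink

url = "https://www.twitch.tv/videos/108909385"
streams = streamlink.streams(url)  # dict of quality name -> Stream
stream = streams["best"]

# Read the stream in chunks and write it to a local file. The raw HLS data
# is MPEG-TS, so remux it to MP4 with ffmpeg if you need that container.
with stream.open() as src, open("vod.ts", "wb") as dst:
    while True:
        chunk = src.read(1024 * 1024)
        if not chunk:
            break
        dst.write(chunk)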
I am trying to write an application that can archive your Facebook messages and their attachments, something similar to the amazing tool https://github.com/bnvk/social-archiver, but I have a problem with non-image attachments, say an audio file for example.
According to this View attachments in threads, the solution is the messaging.getattachment API call with the proper parameters. It looks like this is working for some people, but when I do it, through a direct browser call as well as with the Python code mentioned before, the response is always the same:
{"error_code":3,"error_msg":"Unknown method","request_args"
followed by all my parameters.
What am I doing wrong here? Is there something wrong with this API endpoint at the moment? Am I passing the parameters the wrong way? Maybe someone who got this working can show an example of how they passed their parameters (not the access token of course :(); maybe I'm passing the mid parameter the wrong way.
Any help appreciated.
The messaging.getattachment method is no longer available and is only used by Facebook's native mobile applications. You will need an access_token from one of those applications, for example Facebook Messenger for iPhone, to use it.
I asked a question about this a month ago; it's here: "post" method to communicate directly with a server.
I still haven't figured out why I sometimes get a 404 error and sometimes everything works fine. I've tried the same code against several different WordPress blogs. Using Firefox or IE you can post the comment without any problem on whatever WordPress blog it is, but using Python and the "post" method to communicate directly with the server, I get a 404 on several blogs. I've tried spoofing the headers and adding cookies in the code, but the result remains the same. It's been bugging me for quite a while... Does anybody know the reason? Or what code should I add to make the program work just like a browser such as Firefox or IE? Hopefully you guys can help me out!
You should use something like mechanize.
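For example, a rough mechanize sketch (the blog URL is made up, and the author/email/comment field names assume a stock WordPress comment form, so adjust them to the actual form you're posting to):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                      # ignore robots.txt
br.addheaders = [("User-Agent", "Mozilla/5.0")]  # look like a normal browser

# Load the post page first, so cookies and hidden form fields are picked up.
br.open("http://example-blog.com/2011/01/some-post/")
br.select_form(nr=0)  # pick the comment form; adjust the index if needed

br["author"] = "Alex"
br["email"] = "alex@example.com"
br["comment"] = "Nice post!"
response = br.submit()
print(response.geturl())  # where we ended up after the redirect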
The blog may have some spam protection against this kind of posting. (A programmatic post that never accesses/reads the page can easily be detected using JavaScript-based protection.)
But if that's the case, I'm surprised you receive a 404...
Anyway, if you want to simulate a real browser, the best way is to use a real browser remote-controlled from Python.
Check out WebDriver (http://seleniumhq.org/docs/09_webdriver.html). It has a Python implementation and can drive HtmlUnit, Chrome, IE, and Firefox browsers.
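A small sketch of that approach, using the old Selenium API from those docs (the blog URL and the author/email/comment/submit element IDs are assumptions based on a stock WordPress theme):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example-blog.com/2011/01/some-post/")

# Fill the standard WordPress comment form and submit it, exactly like a
# human visitor would, so any JavaScript-based checks run in the real browser.
driver.find_element_by_id("author").send_keys("Alex")
driver.find_element_by_id("email").send_keys("alex@example.com")
driver.find_element_by_id("comment").send_keys("Nice post!")
driver.find_element_by_id("submit").click()

driver.quit()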