How do you get the url from a Submission object in PRAW?

I'm using PRAW to create a Reddit bot that submits something once a day. After submitting I want to save the url of the submission and write it to a text file.
url = r.submit(subreddit, submission_title, text=submission_text)
The above returns a Submission object, but I want the actual url. Is there a way to get the url from a Submission object, or do I need to do something else to get the url?

submission.shortlink (previously .short_link) is what you're looking for, if submission.permalink wasn't good enough.
reddit = praw.Reddit("Amos")
submission = reddit.get_submission(submission_id="XYZ")
print submission.permalink
>>> www.reddit.com/r/subreddit/comments/XYZ
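For the original goal of writing the link to a text file, a minimal sketch in the same old-PRAW style as above (attribute names differ slightly between PRAW versions, so treat this as an outline rather than exact API):
submission = r.submit(subreddit, submission_title, text=submission_text)
with open('submission_urls.txt', 'a') as f:
    f.write(submission.permalink + '\n')  # or submission.shortlink (short_link in older PRAW) / submission.url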

I see that @TankorSmash has answered your question already, though I thought I might add some fundamental knowledge for future reference:
If you call dir(object), you'll see both the attributes and the methods available on the object the Reddit API gives you, which you can use to explore every property that affects it. You can most likely ignore everything that starts with an underscore.
An example would be:
submissionURL = submission.url
Or you can go straight to the source where PRAW gets its data: the attribute names are not set by PRAW, they come directly from the JSON that the Reddit API returns.

Related

How to remove my own instagram followers with Python?

I want to remove my own instagram followers without blocking them, using python.
I have seen many, many, many, many instagram python libraries online that allow you to stop or start following a person, but that is not what I'm looking for; I don't want to remove who I am following or start following someone, I want to remove people who are following me.
I looked into the official documentation of Instagram's HTTP API trying to make my own solution, but I couldn't find documentation for this action under any endpoint (I assume it should be under /friends/).
I vaguely remember some library that used to do this, but I cannot find it. Does anyone know of a good way to achieve this, preferably via passing an inclusion/exclusion list for the followers I want to have as a result?
I found a solution in an old library that does something similar. You can't directly remove followers through most tools, but if you block and then unblock a user, the effect you want is achieved. Example code:
# https://instagram-private-api.readthedocs.io/en/latest/_modules/instagram_private_api/endpoints/friendships.html
import instagram_private_api
# ...
# Use (or subclass) a Client that includes the friendships endpoints mixin;
# username/password are your own account credentials
api = instagram_private_api.Client(username, password)
api.friendships_block(uid)
api.friendships_unblock(uid)
Here is the API endpoint for removing a follower: https://www.instagram.com/web/friendships/{user_id}/remove_follower/
You can do a POST request on this URL with the appropriate headers, and that will do the job.
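If you go the raw-endpoint route, a rough sketch with requests might look like this; the header and cookie values (csrftoken, sessionid) are assumptions you would have to copy from a logged-in session, not documented constants:
import requests

user_id = '1234567890'         # numeric ID of the follower to remove (placeholder)
session_id = 'YOUR_SESSIONID'  # pulled from your own logged-in browser session
csrf_token = 'YOUR_CSRFTOKEN'

resp = requests.post(
    'https://www.instagram.com/web/friendships/{}/remove_follower/'.format(user_id),
    headers={
        'User-Agent': 'Mozilla/5.0',
        'X-CSRFToken': csrf_token,
        'Referer': 'https://www.instagram.com/',
    },
    cookies={'sessionid': session_id, 'csrftoken': csrf_token},
)
print(resp.status_code, resp.text)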

In python what is a response object returned from a website?

I'm trying to use the etsy API and I was finally able to get it running from the source. I gave it my key, and it returned the following when printed out.
<etsy._v2.EtsyV2 object at 0xb7284ccc>
However, I have no idea what to do with it. The GitHub repo doesn't have much documentation, and the command that is supposed to follow doesn't work. I read the Etsy API docs and didn't find the getFrontFeaturedListings command that the GitHub repo mentions.
I've had this issue before with an HTTP response object, and I was told to use response.content to get more info on the object. It didn't work for this object, so I'm wondering if there is a simple way to inspect any generic object, or at least see what this object contains?
When in doubt you can always use the dir() built-in function on an arbitrary Python object. This will show you the methods and fields attached to the object. https://docs.python.org/3/library/functions.html#dir
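For example, with the Etsy object you printed (called api here), something like this shows what it exposes without digging into the source:
# `api` stands in for whatever client object the library returned to you
print([name for name in dir(api) if not name.startswith('_')])  # public methods/attributes
print(vars(api))  # instance attributes and their current values, where the object has a __dict__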
Anyway, sorry to hear about the poor documentation of the library. Last time I used Etsy's API I just created a little class that used requests. It wasn't much work since Etsy lays out all of the URIs + documentation nicely on their developer site. https://www.etsy.com/developers/documentation/reference/favoritelisting
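A bare-bones version of that approach might look like the following; the endpoint and parameters are illustrative, so check Etsy's reference pages for the exact paths:
import requests

API_KEY = 'YOUR_ETSY_API_KEY'  # placeholder
BASE = 'https://openapi.etsy.com/v2'

def get_active_listings(limit=10):
    # Illustrative endpoint; consult Etsy's developer reference for the real paths/params
    resp = requests.get(BASE + '/listings/active', params={'api_key': API_KEY, 'limit': limit})
    resp.raise_for_status()
    return resp.json()

print(get_active_listings())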

How to retrieve videos from Instagram Python client?

I'm using the Python Instagram client to retrieve data from Instagram. I have created an Instagram account for testing purposes with three media items: two images and one video. After making a request with the Python Instagram client from the Python console, I get the following response (Django shell):
>>> recent_media, next = api.user_recent_media()
>>> recent_media
>>> [Media: 673901579909298365_1166496117, Media: 673880146437045009_1166496117, Media: 673827880594143995_1166496117]
I have inspected all the media objects and there is no video information in them, even though the last one is a video. All three objects expose an attribute called images; the last object, despite being a video, also has an images attribute containing a video snapshot in different resolutions. After reading the Instagram REST API docs, my understanding is that the last Media object should have an attribute called videos, a dict holding the video information (basically I'm interested in retrieving the videos' URLs).
My question is: is the Python Instagram client outdated, so that it returns no video information at all and I have to use the REST API to get it? Or am I doing something wrong in my requests?
Thanks in advance
You are not doing anything wrong. The Python API for Instagram is full of missing features and bugs. I've fixed them in my own local version, but I haven't pushed anything to the official GitHub and I am not sure they would accept the changes.
What is happening, in general, is that their API client strips out data when it converts the response back into a model. Why they didn't just use something that converts dictionaries to dot-notation models, I am unsure; it's completely manual and full of mistakes/bad Python IMO. Anyway, the gist is that the data is all there, but they ignore it when converting the dictionaries into their proprietary API models.
Here is what I found to be problematic for what you are trying to do:
No "type" information is returned in the API media model. In the raw JSON there is a "type" property on every media-related response that tells you whether it is an image or a video. You can add it to the model yourself as I did, or you can simply assume that anything with a populated "videos" section is a video.
No "videos" information is returned with the API media model either. I also added this myself. There are two URLs you can use, which you can see if you look at the JSON: one for standard resolution and one for low resolution. These properties aren't always present when you process the response, so your code should make checks with get/getattr/etc. accordingly.
The paging information in the API is also broken IMO. You are supposed to get back an object with a few different pieces of information, part of which they claim is deprecated (why they are inflating the response at the same version endpoint with this info, I have no idea). The only piece of information you actually get back is the next URL for paging, which is practically useless in the Python API client: there is no reason to get back a REST URL you would have to call and parse manually outside the client, when the whole reason for using the client is to avoid that. So you will need to either patch the API client to hand back proper models for this, or simply parse the cursor out of the URL; I originally chose the latter because I hoped not to patch the client itself. You'll run into an additional problem because some endpoints, such as tags, actually change the querystring parameters in the paging URL you get back, so you'll have to check conditionally what they give you. Again, the design is inconsistent, and that's not a good thing IMO.
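If you go the route of parsing the next URL yourself rather than patching the client, a rough Python 2 sketch (the function name and keys here are illustrative) might be:
from urlparse import urlparse, parse_qs  # Python 2 module, matching the client's era

def cursor_from_next_url(next_url):
    # Pull the paging cursor out of the querystring of the "next" URL
    params = parse_qs(urlparse(next_url).query)
    # user endpoints use max_id while tag endpoints use max_tag_id, so check both
    for key in ('max_id', 'max_tag_id'):
        if key in params:
            return params[key][0]
    return None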
I can post code for all of this if you like, but if you want to try to find a more elegant way to patch all this, you'll want to look in models.py (I believe) in the API. I'm not in front of the code right now, but here's what I did from memory:
Create a new video model that inherits from the media model, as they did for the image model.
Where they read the response dictionary, parse out the videos and add them to the response dictionary as they did for the images. Remember to add a check for the case where the videos key is missing, as I mentioned earlier.
Parse the type property and add it to the response model.
Add a model for the paging data and parse it out into the model. Alternatively, just wrap this via some querystring parsing in your own code if you prefer.
If you do all of the above, you should be able to simply read a "videos" property and get the two video URLs. That's it. The information always comes back in the response; they are just dropping it in the code. I'm happy to provide code/more info if you like.
Edit: Here's some code - put it in object_from_dictionary in models.py in the API:
# add the videos
if "videos" in entry:
    new_media.videos = {}
    for version, version_info in entry['videos'].iteritems():  # Python 2, like the client itself
        new_media.videos[version] = Video.object_from_dictionary(version_info)

# add the type
new_media.type = entry.get('type')

# Add this class as well for the videos....
class Video(ApiModel):

    def __init__(self, url, width, height):
        self.url = url
        self.height = height
        self.width = width

    def __unicode__(self):
        return "Video: %s" % self.url

How can you distinguish an original post from a reblog in Tumblr using Python?

I have started using pytumblr to get posts from Tumblr. My goal is to see which are reblogs and which are original posts. I tried looking at the data that the Tumblr API provides for each post, but I can't find a difference between reblogs and original posts, and there is no parameter indicating anything like that.
I use the following call, but neither reblog_info nor notes_info gives me any more information.
blog_posts = client.posts(example_blog, notes_info=True, reblog_info=True)
Any insights? Thanks.
I found the problem: instead of reblog_info=True, it needs to be reblog_info='true'.
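So the working call, plus one way to tell reblogs apart once the reblog info comes back (the reblogged_from_* fields are what the Tumblr API adds for reblogs, as far as I can tell), looks roughly like this:
blog_posts = client.posts(example_blog, notes_info='true', reblog_info='true')
for post in blog_posts['posts']:
    # reblogs carry fields such as 'reblogged_from_id'; original posts don't
    if 'reblogged_from_id' in post:
        print('reblog:', post['post_url'])
    else:
        print('original:', post['post_url'])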

Twitter Stream API, Parameter follow has unparseable items

Recently I've been doing a small project with Twitter, and I want to get tweets from some specific users.
So I'm using the Streaming API, pycurl and Python.
The API Reference says the follow parameter is:
A comma separated list of user IDs, indicating the users to return
statuses for in the stream. See the follow parameter documentation for
more information.
And I tried this
c.setopt(c.POSTFIELDS, 'follow=slaindev')
but the return message is not the tweets that slaindev posted, but an error:
Parameter follow has unparseable items slaindev
So do I misunderstand the meaning of user ID? I thought it was the handle we use to mention someone (I mean, I use @slaindev to mention this guy).
When I try the track parameter, it works fine.
Your assumption regarding user_id is incorrect. See this, for example. You are talking about screen_name.
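In other words, follow wants the numeric account ID, not the @-handle, so you have to resolve the screen name first. A rough sketch (the OAuth credentials are placeholders, and c is the pycurl handle from the question):
import requests
from requests_oauthlib import OAuth1

auth = OAuth1('CONSUMER_KEY', 'CONSUMER_SECRET', 'ACCESS_TOKEN', 'ACCESS_SECRET')  # placeholders

# Resolve the screen name to its numeric ID via the REST API (v1.1 users/show)
resp = requests.get('https://api.twitter.com/1.1/users/show.json',
                    params={'screen_name': 'slaindev'}, auth=auth)
user_id = resp.json()['id_str']

# The streaming follow parameter expects this numeric ID, not the screen name
c.setopt(c.POSTFIELDS, 'follow=' + user_id)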
