import tweepy
ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXX'
ACCESS_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
CONSUMER_KEY = 'XXXXXXXXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXX'
api = tweepy.Client(bearer_token='XXXXXXXXXXXXXXXXXXX',
access_token=ACCESS_KEY,
access_token_secret=ACCESS_SECRET,
consumer_key=CONSUMER_KEY,
consumer_secret=CONSUMER_SECRET)
api.create_tweet(text='I want to Post 3 Photos and description')
I'm using Tweepy v2, but I don't know how to upload photos together with a description.
Can anyone help me? I want to tweet images with text, and I have 3 images.
Had to do a little digging, as I don't have experience with tweepy, but I think I found an answer.
When you send the tweet, you can attach media using media IDs, somewhat like this:
api.create_tweet(text='Images can be fun too!', media_ids=["1455952740635586573", "1234567890"])
The media_ids list can contain multiple media IDs. However, you first need to upload the images to Twitter to get those IDs.
Tweepy provides a file upload function for that. Note that it lives on the v1.1 tweepy.API client rather than on tweepy.Client, and it returns a Media object whose media_id attribute is the value you need:
media = api.media_upload(filename)
media_id = media.media_id
Simply upload your files, collect their IDs into a list, and send your tweet!
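Schematically, the collection step looks like this (using a stand-in Media class here, since the real media_upload call needs live credentials and network access):

```python
# Stand-in for the Media object that api.media_upload() returns;
# the real one also carries a media_id attribute.
class Media:
    def __init__(self, media_id):
        self.media_id = media_id

# Pretend these came back from three media_upload() calls.
uploads = [Media(101), Media(102), Media(103)]

# Collect the IDs into the list that create_tweet(media_ids=...) expects.
media_ids = [m.media_id for m in uploads]
print(media_ids)  # [101, 102, 103]
```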
I made a little example that you can add to the end of your program.
import tweepy

ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXX'
ACCESS_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
CONSUMER_KEY = 'XXXXXXXXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXX'

client = tweepy.Client(bearer_token='XXXXXXXXXXXXXXXXXXX',
                       access_token=ACCESS_KEY,
                       access_token_secret=ACCESS_SECRET,
                       consumer_key=CONSUMER_KEY,
                       consumer_secret=CONSUMER_SECRET)

# media_upload is a v1.1 endpoint, so it needs a tweepy.API client
auth = tweepy.OAuth1UserHandler(CONSUMER_KEY, CONSUMER_SECRET,
                                ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

media1 = api.media_upload("image1.png")
media2 = api.media_upload("image2.png")
media3 = api.media_upload("image3.png")

client.create_tweet(text='I want to Post 3 Photos and description',
                    media_ids=[media1.media_id,
                               media2.media_id,
                               media3.media_id])
Please refer to the Twitter documentation if you need a more accurate depiction of the Twitter API:
https://developer.twitter.com/en/docs/twitter-api/tweets/manage-tweets/api-reference/post-tweets
https://developer.twitter.com/en/docs/twitter-api/v1/media/upload-media/api-reference/post-media-upload
I'm still a little rusty at Python and this is my first time using Stackoverflow but this should work.
I am able to reply to a specific tweet by getting tweet IDs, but I cannot get my configuration to do what I want, which is to reply to every tweet from a specific user. I have that user's username and ID. Currently it appears to pull only one tweet, which I suspect has something to do with tweet.id on line 23. What I'm looking for is a way to ensure that my bot replies every single time this user tweets. Here is my current code (sensitive info redacted):
from ast import For
import tweepy
api_key = "###############################################"
api_secret = "###############################################"
bearer_token = r"###############################################"
access_token = "###############################################"
access_token_secret = "###############################################"
client = tweepy.Client(bearer_token, api_key, api_secret, access_token, access_token_secret)
auth = tweepy.OAuth1UserHandler(api_key, api_secret, access_token, access_token_secret)
api = tweepy.API(auth)
toReply = "TwitterUsernameHere"
api = tweepy.API(auth)
tweets = api.user_timeline(screen_name = toReply, count=1)
for tweet in tweets:
    api.update_status("#" + toReply + " Why? ", in_reply_to_status_id=tweet.id)
Assuming that you are following the Twitter automation rules (i.e. that you're only replying to Tweets that the user has opted-in for your app to reply to - otherwise your user account or app will be restricted)...
... your code currently checks the user's Timeline, and then replies to the most recent single Tweet (count=1 on the user_timeline call). You would need this to check for new Tweets in order to reply to different ones. You could store tweet.id somewhere and only reply to it when it changes.
Note that there are a few other things to tidy up:
from ast import For is not required
client = tweepy.Client targets the Twitter API v2 but the rest of the code uses Twitter API v1.1 (via tweepy.API)
bearer_token is unused in this code (and in v1.1 of the API it would only work for read operations anyway), so you could remove it.
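A minimal sketch of the "store the last replied ID and only reply when it changes" idea (the reply call itself is left out, since it needs live credentials; the file name and the helper names are made up for illustration):

```python
LAST_ID_FILE = "last_replied_id.txt"  # made-up file name for persistence

def load_last_id():
    """Return the last tweet ID we replied to, or 0 if none is stored yet."""
    try:
        with open(LAST_ID_FILE) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return 0

def save_last_id(tweet_id):
    """Persist the newest tweet ID we have replied to."""
    with open(LAST_ID_FILE, "w") as f:
        f.write(str(tweet_id))

def ids_to_reply(timeline_ids, last_id):
    """Given tweet IDs newest-first, return those newer than last_id, oldest-first."""
    return sorted(i for i in timeline_ids if i > last_id)

# Simulated timeline (newest first), as user_timeline would return it:
timeline = [105, 104, 101]
last = 101  # pretend we already replied to tweet 101
new = ids_to_reply(timeline, last)
print(new)  # [104, 105] -- reply to each, then save_last_id(new[-1])
```

Run this on a schedule (cron, or a sleep loop); each pass replies only to tweets it has not seen before.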
I'm trying to write a simple Python programme that uses the tweepy API for Twitter and wget to retrieve the image link from a Twitter post ID (example: twitter.com/ExampleUsername/12345678), then download the image from the link. The programme mostly works, but there is a problem: while the loop runs once for every ID in the list (if there are 2 IDs, it runs 2 times), it doesn't use every ID. The script ends up looking at only the last ID in the list, then downloads the image from that same ID as many times as there are IDs. Does anyone know how to make the script work for every ID?
tl;dr I want the programme to look at the first ID, grab its image link, download it, then do the same thing with the next ID until it has done all of the IDs.
#!/usr/bin/env python
# encoding: utf-8
import tweepy #https://github.com/tweepy/tweepy
import wget
#Twitter API credentials
consumer_key = "nice try :)"
consumer_secret = "nice try :)"
access_key = "nice try :)"
access_secret = "my, this joke is getting really redundant"
def get_all_tweets():
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    id_list = [1234567890, 987654321]
    # Hey StackOverflow, these are example IDs. They won't work as they're not real
    # Twitter IDs, so if you're gonna run this yourself, you'll want to find some
    # Twitter IDs on your own
    # tweets = api.statuses_lookup(id_list)
    for i in id_list:
        tweets = []
        tweets.extend(api.statuses_lookup(id_=id_list, include_entities=True))
        for tweet in tweets:
            spacefiller = (1+1)
            # this is here so the loop runs, if it doesn't the app breaks
        a = len(tweets)
        print(tweet.entities['media'][0]['media_url'])
        url = tweet.entities['media'][0]['media_url']
        wget.download(url)

get_all_tweets()
Thanks,
~CS
I figured it out!
I knew that loop was being used for something...
I moved everything from a = len(tweets) to wget.download(url) into the for tweet in tweets: loop, and removed the for i in id_list: loop.
Thanks to tdelany this programme works now! Thanks everyone!
Here's the new code if anyone wants it:
#!/usr/bin/env python
# encoding: utf-8
import tweepy #https://github.com/tweepy/tweepy
import wget
#Twitter API credentials
consumer_key = "nice try :)"
consumer_secret = "nice try :)"
access_key = "nice try :)"
access_secret = "my, this joke is getting really redundant"
def get_all_tweets():
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    id_list = [1234567890, 987654321]
    # Hey StackOverflow, these are example IDs. They won't work as they're not real
    # Twitter IDs, so if you're gonna run this yourself, you'll want to find some
    # Twitter IDs on your own
    tweets = []
    tweets.extend(api.statuses_lookup(id_=id_list, include_entities=True))
    for tweet in tweets:
        a = len(tweets)
        print(tweet.entities['media'][0]['media_url'])
        url = tweet.entities['media'][0]['media_url']
        wget.download(url)

get_all_tweets()
One strange thing I see is that the variable i declared in the outer loop is never used afterwards. Shouldn't your code be
tweets.extend(api.statuses_lookup(id_=i, include_entities=True))
and not id_=id_list as you wrote?
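To see why that matters, here is a pure-Python simulation of the two versions of the loop (no Twitter calls; fake_statuses_lookup is a stand-in that echoes back one "tweet" per requested ID):

```python
def fake_statuses_lookup(ids):
    # Stand-in for api.statuses_lookup: one "tweet" per requested ID.
    return ["image_for_%d" % i for i in ids]

id_list = [111, 222]

# Buggy version: the whole list is looked up on every pass, and the
# download happens after the inner loop finishes, so only the *last*
# tweet of each full batch is used -- the same image, once per ID.
buggy_downloads = []
for i in id_list:
    tweets = fake_statuses_lookup(id_list)   # note: id_list, not i
    for tweet in tweets:
        pass
    buggy_downloads.append(tweet)            # last tweet of the batch

# Fixed version: look the list up once and download inside the loop.
fixed_downloads = []
for tweet in fake_statuses_lookup(id_list):
    fixed_downloads.append(tweet)

print(buggy_downloads)  # ['image_for_222', 'image_for_222']
print(fixed_downloads)  # ['image_for_111', 'image_for_222']
```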
1) Followed the steps for key creation on the LinkedIn developer site.
2) The following works well to get my own information using Python and the oauth2 library:
import oauth2 as oauth
import time
url = "http://api.linkedin.com/v1/people/~"
consumer_key = 'my_app_key'
consumer_secret = 'my_app_secret_key'
oath_key = 'oath_key'
oath_secret = 'oath_secret_key'
consumer = oauth.Consumer(
    key=consumer_key,
    secret=consumer_secret)
token = oauth.Token(
    key=oath_key,
    secret=oath_secret)
client = oauth.Client(consumer, token)
resp, content = client.request(url)
print resp
print content
But I want to get other people's information too, e.g. look someone up by first_name, last_name, and company.
There seems to be good information at https://developer.linkedin.com/documents/profile-api
but I cannot get through it.
What exactly is the "id" value?
You can't do that. You can get somebody's public profile data if you know their public profile URL or their unique member ID, but you can't query on anything else.
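For illustration, under the v1 Profile API conventions the two lookups mentioned above are addressed like this (the member ID and public profile URL below are made-up examples, and the URL formats should be checked against the docs linked in the question):

```python
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2, as in the question's code

member_id = "abcd1234"                                    # made-up member ID
profile_url = "http://www.linkedin.com/in/exampleperson"  # made-up profile URL

# Lookup by member ID vs. by (percent-encoded) public profile URL.
by_id = "http://api.linkedin.com/v1/people/id=%s" % member_id
by_url = "http://api.linkedin.com/v1/people/url=%s" % quote(profile_url, safe="")

print(by_id)  # http://api.linkedin.com/v1/people/id=abcd1234
print(by_url)
```

Either string can then be used as the request URL in place of the "~" (own profile) form shown in the question.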
So I have these two scripts:
redditScraper.py
# libraries
import urllib2
import json
# get remote string
url = 'http://www.reddit.com/new.json?sort=new'
response=urllib2.urlopen(url)
# interpret as json
data = json.load(response)
#print(data)
response.close()
print data['data']['children'][3]['data']['title']
print data['data']['children'][3]['data']['permalink']
print data['data']['children'][3]['data']['subreddit']
and minerTweets.py
#!/usr/bin/env python
import sys
from twython import Twython
CONSUMER_KEY = 'XXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXX'
ACCESS_KEY = 'XXXXXXXXXXXXXXXX'
ACCESS_SECRET = 'XXXXXXXXXXXXXXXX'
api = Twython(CONSUMER_KEY,CONSUMER_SECRET,ACCESS_KEY,ACCESS_SECRET)
api.update_status(status=sys.argv[1])
This is for a Raspberry Pi that will update a Twitter account (it's for academic purposes). Being new to Python, I took the script I'm trying to write one part at a time. I have one script that successfully scrapes the title, link, and subreddit of the reddit "new" page and prints them, and another that successfully hits the Twython API to update a status, currently taking sys.argv for testing. What I want the finished script to do is take the printed data from redditScraper.py and update a Twitter account's status with my minerTweets.py script. I've looked all over the place, and since I'm just learning Python, my knowledge of the best way to accomplish this is limited.
I appreciate any advice in advance. Thank you!
You can store the results of redditScrapper.py to a file, and let minerTweets.py take the data from there:
with open('test.txt', 'w') as fp:
    data = json.load(response)
    json.dump(data, fp)
With test.txt stored in the same directory. Now, the only thing left is to read it:
with open('test.txt') as fp:
    api = Twython(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_KEY, ACCESS_SECRET)
    data = json.load(fp)
    api.update_status(status=data['data']['children'][3]['data']['title'])
Edit: If you want to merge the script, it's not so hard.
import urllib2
import json
from twython import Twython
CONSUMER_KEY = 'XXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXX'
ACCESS_KEY = 'XXXXXXXXXXXXXXXX'
ACCESS_SECRET = 'XXXXXXXXXXXXXXXX'
# get remote string
url = 'http://www.reddit.com/new.json?sort=new'
response=urllib2.urlopen(url)
# interpret as json
data = json.load(response)
#print(data)
response.close()
api = Twython(CONSUMER_KEY,CONSUMER_SECRET,ACCESS_KEY,ACCESS_SECRET)
d = data['data']['children'][3]['data']
title = d['title']
permalink = d['permalink']
subreddit = d['subreddit']
api.update_status(status=title + permalink + subreddit)
# or post these as separate statuses, depending on how you'd like to format the tweet
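One practical wrinkle: a title plus a permalink can exceed Twitter's character limit, so it's worth trimming the title before posting. A small sketch (compose_status is a made-up helper, and the 140-character limit of that era is assumed):

```python
def compose_status(title, link, limit=140):
    """Join title and link, trimming the title so the whole status fits."""
    room = limit - len(link) - 1          # one char for the separating space
    if len(title) > room:
        title = title[:room - 3] + "..."  # leave room for an ellipsis
    return title + " " + link

short = compose_status("A short title", "http://redd.it/abc123")
print(short)               # A short title http://redd.it/abc123
print(len(short) <= 140)   # True

long_status = compose_status("x" * 200, "http://redd.it/abc123")
print(len(long_status) <= 140)  # True
```

The trimmed string can then go straight into api.update_status(status=...).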
I have the following code:
import urlparse
import oauth2 as oauth
PROXY_DOMAIN = "twitter1-ewizardii.apigee.com"
consumer_key = '...'
consumer_secret = '...'
consumer = oauth.Consumer(consumer_key, consumer_secret)
oauth_token = '...'
oauth_token_secret = '...'
token = oauth.Token(oauth_token, oauth_token_secret)
client = oauth.Client(consumer, token)
request_token_url = "https://twitter1-ewizardii.apigee.com/1/account/rate_limit_status.json"
resp, content = client.request(request_token_url, "GET", PROXY_DOMAIN)
print resp
print content
However, I continue to get the error "error":"Incorrect signature". This was working earlier, and I have tried the solutions people suggest online, generated new credentials, etc., but it doesn't work anymore after working like this for a week.
Thanks,
Although I have switched to tweepy, for anyone who finds this question, this may be of use to you:
http://dev.twitter.com/pages/libraries
It could have been a glitch on the day I was testing, as I didn't go back to the oauth-python module once tweepy started working for me. But that link lists all the available libraries and is a valuable resource if such a problem arises again.