I'm building a bot framework in Python, and I'm having trouble creating a Twitter command. I'm not sure how much sense this will make to anyone else, but I've done my best to explain it, so let me know if I can provide any more information or code.
My Twitter command looks like this:
@Command(name="tweets", aliases="tweet")
def tweets(chat, message, args, sender):
    if len(args) == 0:
        chat.SendMessage("Provide a user")
        return

    def get_tweets():
        consumer_key = JakeBot.conf.get_value("twitter_api-key")
        consumer_secret = JakeBot.conf.get_value("twitter_api-secret")
        access_token_key = JakeBot.conf.get_value("twitter_access")
        access_token_secret = JakeBot.conf.get_value("twitter_access-secret")
        api = twitter.Api(consumer_key, consumer_secret, access_token_key, access_token_secret)
        public_tweets = api.GetUserTimeline(api.GetUser(args[0]).id)
        return public_tweets

    print get_tweets()[0].text
This is my standard command format, which works for every other command.
I dispatch a command by looking up its function by name and calling it in a separate thread, like so:
func = commands[command]
thread.start_new_thread(func, (message.Chat, message.Body, get_args(args), message.Sender))
This works for all my other commands, like !echo.
For reference, I am using the following lib for twitter: https://code.google.com/p/python-twitter/
Now, the problem is that the application 'hangs' on:
api = twitter.Api(consumer_key, consumer_secret, access_token_key, access_token_secret)
I load my command modules through a file reader from my commands package, which I don't think is the issue here. If I drop the same code into a quick test module, it works perfectly fine and prints my latest tweet. But when I call it as described above, it hangs while creating the instance of the Twitter API, and even after hours of debugging I don't know why.
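In case it helps with debugging, one way to see exactly where the worker thread is stuck is to dump every thread's stack from the main thread; a minimal sketch using only the standard library (dump_threads is just an illustrative name):

import sys
import traceback

def dump_threads():
    # Print the current stack of every live thread; call this from the
    # main thread while the command thread appears to hang.
    for thread_id, frame in sys._current_frames().items():
        print 'Thread %s:' % thread_id
        traceback.print_stack(frame)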
I recently started learning Python; I had zero knowledge of it, and over the last few weeks I've been studying Python and the Twitter API.
I decided to work on a simple Twitter bot that automatically posts whatever people send to my direct messages, and I managed to do so.
Here's my code:
import tweepy
import json
import sys
import re
import time

print("accessing api...")

consumer_key = 'consumer_key'
consumer_secret = 'consumer_secret'
key = 'key'
secret = 'key_secret'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(key, secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
print('api accessed')

print('collecting dms')
direct_message = api.list_direct_messages()
text = direct_message[0].message_create["message_data"]["text"]

def send_tweet():
    print('sending tweet...')
    api.update_status('/bot: ' + text)

while True:
    send_tweet()
    time.sleep(30)
The code works, but not 100%: I'm able to extract the text from the DM and use it in api.update_status("/bot: " + text), but when it loops I get the following error:
tweepy.error.TweepError: [{'code': 187, 'message': 'Status is a duplicate.'}]
I've looked it up and tried many things, for example:
while True:
    try:
        send_tweet()
        time.sleep(30)
    except tweepy.TweepError as error:
        if error.api_code == 187:
            print('duplicate message')
        else:
            print('waiting for new message')
        time.sleep(30)
All I want is a way to keep the code running, ignore messages and tweets that were already sent, and wait for new DMs.
Another thing that kept happening: when I tried testing the extraction of the text, I kept getting the same text from the same DM instead of a newer one.
Direct Messages stay in the inbox even after you read or reply to them. It is best to delete a DM after you process it (in your case, tweet it) so you don't need to worry about it the next time you fetch messages.
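A rough sketch of that delete-after-processing approach with tweepy 3.x, where the credentials are placeholders and the 30-second poll interval is just an example:

import time
import tweepy

auth = tweepy.OAuthHandler('consumer_key', 'consumer_secret')
auth.set_access_token('key', 'secret')
api = tweepy.API(auth, wait_on_rate_limit=True)

def process_new_dms():
    # Tweet each pending DM, then delete it so it is not picked
    # up again on the next poll.
    for dm in api.list_direct_messages():
        text = dm.message_create["message_data"]["text"]
        try:
            api.update_status('/bot: ' + text)
        except tweepy.TweepError as error:
            if error.api_code == 187:
                print('duplicate message, skipping')
            else:
                raise
        api.destroy_direct_message(dm.id)

while True:
    process_new_dms()
    time.sleep(30)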
I installed the Twitter library for Python with pip install twitter and I am trying to replicate an example from here. This is my code:
from twitter import Twitter, OAuth

config = {}
execfile("config.py", config)
twitter = Twitter(auth=OAuth(config["access_key"], config["access_secret"], config["consumer_key"], config["consumer_secret"]))
query = twitter.search.tweets(q="lazy dog")
print query
The config.py file contains the keys, where XxXxX are my own keys from dev Twitter:
consumer_key = "XxXxXxxXXXxxxxXXXxXX"
consumer_secret = "xXXXXXXXXxxxxXxXXxxXxxXXxXxXxxxxXxXXxxxXXx"
access_key = "XXXXXXXX-xxXXxXXxxXxxxXxXXxXxXxXxxxXxxxxXxXXxXxxXX"
access_secret = "XxXXXXXXXXxxxXXXxXXxXxXxxXXXXXxXxxXXXXx"
However, I get this error:
TypeError: 'TwitterDictResponse' object is not callable
I couldn't find this error anywhere on Google. Any ideas?
I copied and pasted your code on my laptop and it works perfectly for me with both Python 2 and Python 3 (with slight modifications in the latter case).
I am using the twitter-1.17.1 library.
Try this. You should not give a variable the same name as the library, your authorization call appears wrong, and it's better to build the call up in components:
import twitter

config = {}
execfile("config.py", config)

auth = twitter.oauth.OAuth(config["access_key"], config["access_secret"], config["consumer_key"], config["consumer_secret"])
twitter_api = twitter.Twitter(auth=auth)
query = twitter_api.search.tweets(q="lazy dog", count=100)
That should work. It's an interesting approach to put the parameters in a dictionary.
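To sanity-check the result, note that the response from search.tweets behaves like the raw JSON payload, a dict with a "statuses" list, so you can iterate it (a minimal sketch, assuming the standard search/tweets response format):

# query is the dict-like response returned by twitter_api.search.tweets(...)
for status in query["statuses"]:
    print(status["text"])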
While running this program to retrieve Twitter data using Python 2.7.8:

#imports
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener

#setting up the keys
consumer_key = '…………...'
consumer_secret = '………...'
access_token = '…………...'
access_secret = '……………..'

class TweetListener(StreamListener):
    # A listener handles tweets that are received from the stream.
    # This is a basic listener that just prints received tweets to standard output.
    def on_data(self, data):
        print(data)
        return True

    def on_error(self, status):
        print(status)

#printing all the tweets to the standard output
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
stream = Stream(auth, TweetListener())
t = u"سوريا"
stream.filter(track=[t])
After running this program for 5 hours, I got this error message:
Traceback (most recent call last):
  File "/Users/Mona/Desktop/twitter.py", line 32, in <module>
    stream.filter(track=[t])
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tweepy/streaming.py", line 316, in filter
    self._start(async)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tweepy/streaming.py", line 237, in _start
    self._run()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tweepy/streaming.py", line 173, in _run
    self._read_loop(resp)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tweepy/streaming.py", line 225, in _read_loop
    next_status_obj = resp.read( int(delimited_string) )
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 543, in read
    return self._read_chunked(amt)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 612, in _read_chunked
    value.append(self._safe_read(chunk_left))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 660, in _safe_read
    raise IncompleteRead(''.join(s), amt)
IncompleteRead: IncompleteRead(0 bytes read, 976 more expected)
I don't know what to do about this problem.
You should check to see if you're failing to process tweets quickly enough using the stall_warnings parameter.
stream.filter(track=[t], stall_warnings=True)
These messages are handled by Tweepy (check out the implementation here) and will inform you if you're falling behind. Falling behind means that you're unable to process tweets as quickly as the Twitter API is sending them to you. From the Twitter docs:
Setting this parameter to the string true will cause periodic messages to be delivered if the client is in danger of being disconnected. These messages are only sent when the client is falling behind, and will occur at a maximum rate of about once every 5 minutes.
In theory, you should receive a disconnect message from the API in this situation. However, that is not always the case:
The streaming API will attempt to deliver a message indicating why a stream was closed. Note that if the disconnect was due to network issues or a client reading too slowly, it is possible that this message will not be received.
The IncompleteRead could also be due to a temporary network issue and may never happen again. If it happens reproducibly after about 5 hours though, falling behind is a pretty good bet.
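If you want to react to those warnings yourself, tweepy's StreamListener exposes an on_warning callback you can override; a minimal sketch, assuming tweepy 3.x:

from tweepy.streaming import StreamListener

class StallAwareListener(StreamListener):
    def on_warning(self, notice):
        # Called by tweepy when the API sends a stall warning, i.e.
        # you are reading too slowly and risk being disconnected.
        print('stall warning:', notice)

    def on_data(self, data):
        print(data)
        return True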
I've just had this problem. The other answer is factually correct, in that the cause is almost certainly that your program isn't keeping up with the stream; you get a stall warning if that's the case.
In my case, I was reading the tweets into postgres for later analysis, across a fairly dense geographic area as well as keywords (London, in fact, and about 100 keywords). It's quite possible that, even though you're just printing the tweets, your local machine is doing a bunch of other things, and system processes get priority, so the tweets back up until Twitter disconnects you. (This typically manifests as an apparent memory leak: the program grows until it gets killed, or Twitter disconnects it, whichever comes first.)
The thing that made sense here was to push the processing off to a queue. So I used a redis and django-rq solution; it took about 3 hours to implement on dev and then on my production server, including researching, installing, rejigging existing code, being stupid about my installation, testing, and misspelling things as I went.
Install redis on your machine
Start the redis server
Install Django-RQ (or just Install RQ if you're working solely in python)
Now, in your django directory (where appropriate - ymmv for straight python applications) run:
python manage.py rqworker &
You now have a queue! You can add jobs to it by changing your handler like this:
(At top of file)
import django_rq
Then in your handler section:
def on_data(self, data):
    django_rq.enqueue(print, data)
    return True
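In practice you would enqueue your own processing function rather than print; for example (save_tweet here is a hypothetical stand-in for whatever your worker does, such as writing to postgres):

import json
import django_rq

def save_tweet(data):
    # Hypothetical worker: parse the raw tweet JSON and persist it.
    tweet = json.loads(data)
    print(tweet.get('text'))

# in your listener:
def on_data(self, data):
    django_rq.enqueue(save_tweet, data)
    return True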
As an aside - if you're interested in stuff emanating from Syria, rather than just mentioning Syria, then you could add to the filter like this:
stream.filter(track=[t], locations=[35.6626, 32.7930, 42.4302, 37.2182])
That's a very rough geobox centred on Syria, but which will pick up bits of Iraq/Turkey around the edges. Since this is an optional extra, it's worth pointing this out:
Bounding boxes do not act as filters for other filter parameters. For example track=twitter&locations=-122.75,36.8,-121.75,37.8 would match any tweets containing the term Twitter (even non-geo tweets) OR coming from the San Francisco area.
From this answer, which helped me, and the twitter docs.
Edit: I see from your subsequent posts that you're still going down the road of using the Twitter API, so hopefully you got this sorted anyway; either way, hopefully this will be useful for someone else! :)
This worked for me.
from urllib3.exceptions import ProtocolError  # raised when the stream connection drops

l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)

while True:
    try:
        stream.filter(track=['python', 'java'], stall_warnings=True)
    except (ProtocolError, AttributeError):
        continue
A solution is restarting the stream immediately after catching the exception.

# imports
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener

# setting up the keys
consumer_key = "XXXXX"
consumer_secret = "XXXXX"
access_token = "XXXXXX"
access_secret = "XXXXX"

# printing all the tweets to the standard output
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)

class TweetListener(StreamListener):
    # A listener handles tweets that are received from the stream.
    # This is a basic listener that just prints received tweets to standard output.
    def on_data(self, data):
        print(data)
        return True

    def on_exception(self, exception):
        print('exception', exception)
        start_stream()

    def on_error(self, status):
        print(status)

def start_stream():
    stream = Stream(auth, TweetListener())
    t = u"سوريا"
    stream.filter(track=[t])

start_stream()
For me, the back-end application that the URL points to was returning the string directly.
I changed it to:
return Response(response=original_message, status=200, content_type='application/text')
At the start I had just returned the text, like:
return original_message
I think this answer works only for my case.
I am extremely new to Python, scripting, and APIs; I am just learning. I came across a very cool piece of code which uses the Facebook API to reply to birthday wishes.
I will add my questions and number them so that it will be easier for someone else later too. I hope this question clears up a lot of newbie doubts.
1) Talking about APIs, what format are they usually in? Is it a library file which we need to download and later import? For instance, for the Twitter API, do we need to import twitter?
Here is the code :
import requests
import json

AFTER = 1353233754
TOKEN = ' <insert token here> '

def get_posts():
    """Returns dictionary of id, first names of people who posted on my wall
    between start and end time"""
    query = ("SELECT post_id, actor_id, message FROM stream WHERE "
             "filter_key = 'others' AND source_id = me() AND "
             "created_time > 1353233754 LIMIT 200")
    payload = {'q': query, 'access_token': TOKEN}
    r = requests.get('https://graph.facebook.com/fql', params=payload)
    result = json.loads(r.text)
    return result['data']

def commentall(wallposts):
    """Comments thank you on all posts"""
    #TODO convert to batch request later
    for wallpost in wallposts:
        r = requests.get('https://graph.facebook.com/%s' %
                         wallpost['actor_id'])
        url = 'https://graph.facebook.com/%s/comments' % wallpost['post_id']
        user = json.loads(r.text)
        message = 'Thanks %s :)' % user['first_name']
        payload = {'access_token': TOKEN, 'message': message}
        s = requests.post(url, data=payload)
        print "Wall post %s done" % wallpost['post_id']

if __name__ == '__main__':
    commentall(get_posts())
Questions:
2) Why is json imported here? To give a structured reply?
3) What are 'AFTER' and the empty variable 'TOKEN' here?
4) What are the variables 'query' and 'payload' inside the get_posts() function?
5) Please explain roughly what each method and function does.
I know I am extremely naive, but this could be a good start. With a little hint, I can carry on.
If you're not going to explain the code, which is pretty boring, I understand; in that case, please tell me how to link to an API after the code is written, meaning how a written script communicates with the desired API.
This is not my code; I copied it from a source.
json is needed to interpret the data that the web service sends back over HTTP.
The 'AFTER' variable is meant to be used so that all posts after that timestamp are assumed to be birthday wishes.
To make the program work, you need a token, which you can obtain from the Graph API Explorer with the appropriate permissions.
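As an aside, the script defines AFTER but then hardcodes the same timestamp inside the query; presumably the constant was meant to be interpolated, something like:

query = ("SELECT post_id, actor_id, message FROM stream WHERE "
         "filter_key = 'others' AND source_id = me() AND "
         "created_time > %d LIMIT 200") % AFTER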
I am using OAuth2WebServerFlow to get an OAuth access token and then retrieve a list of a user's contacts. I'm using web2py as the web framework.
flow = oauth2client.client.OAuth2WebServerFlow(client_id=CLIENT_ID,
                                               client_secret=CLIENT_SECRET,
                                               scope='https://www.google.com/m8/feeds',
                                               user_agent=USER_AGENT)
callback = 'http://127.0.0.1:8000/Test/searcher/oauth2callback'
authorise_url = flow.step1_get_authorize_url(callback)
session.flow = pickle.dumps(flow)
redirect(authorise_url)
The redirect is then handled as follows:
flow = pickle.loads(session.flow)
credentials = flow.step2_exchange(request.vars)
My question is how to turn the OAuth2Credentials object returned above into an OAuth2AccessToken object that I can then use to authorise a request to the contacts library with something like:
gc = gdata.contacts.client.ContactsClient(source="")
token.authorize(gc)
gc.GetContacts()
I've tried various methods with no success, normally getting an OAuth2AccessTokenError of "invalid grant". I'm thinking something like this may work, but I also think there must be a simpler way!
token = gdata.gauth.OAuth2Token(client_id=CLIENT_ID, client_secret=CLIENT_SECRET, scope='https://www.google.com/m8/feeds', user_agent=USER_AGENT)
token.redirect_uri = 'http://127.0.0.1:8000/Test/searcher/oauth2callback'
token.get_access_token(<<code to pass the access_token out of the Credentials object??>>)
Can anyone help with this?
I managed to get this working. It was pretty straightforward actually; I just stopped using the OAuth2WebServerFlow, which didn't seem to be adding much value anyway. The new code looks like this:
token = gdata.gauth.OAuth2Token(client_id, client_secret, scope, ua)
session.token = pickle.dumps(token)
redirect(token.generate_authorize_url(redirect_uri='http://127.0.0.1:8000/Test/default/oauth2callback'))
Followed by
def oauth2callback():
    token = pickle.loads(session.token)
    token.redirect_uri = 'http://127.0.0.1:8000/Test/default/oauth2callback'
    token.get_access_token(request.vars.code)
    gc = gdata.contacts.client.ContactsClient(source='')
    gc = token.authorize(gc)
    feed = gc.GetContacts()
Hope this is helpful to someone!
Assuming you have your code for the newer OAuth2.0 APIs set up correctly, you can get this working by creating a Token class that modifies the request headers, converting the Credentials object into a Token.
OAUTH_LABEL = 'OAuth '

# Transforms OAuth2 credentials into an OAuth2 token.
class OAuthCred2Token(object):
    def __init__(self, token_string):
        self.token_string = token_string

    def modify_request(self, http_request):
        http_request.headers['Authorization'] = '%s%s' % (OAUTH_LABEL,
                                                          self.token_string)

    ModifyRequest = modify_request
You can test it as follows:
gc = gdata.contacts.client.ContactsClient(source='')
token = OAuthCred2Token(creds.access_token)
gc.auth_token = token
print gc.GetContacts()
Note that this code will not handle token refreshes, which the credentials-based code handles for you.
In my own application, it is acceptable to make a simple service call to refresh the credentials before making the call to get contacts.
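A minimal sketch of that refresh step with oauth2client, assuming creds is the OAuth2Credentials object you got back from the OAuth flow:

import httplib2

# Refresh the credentials if the access token has expired, then
# rebuild the gdata token from the fresh access token.
if creds.access_token_expired:
    creds.refresh(httplib2.Http())

gc = gdata.contacts.client.ContactsClient(source='')
gc.auth_token = OAuthCred2Token(creds.access_token)
print gc.GetContacts()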