I'm making a program which scrapes bus information from a server and sends it to a user via Facebook Messenger. It works fine, but I'm trying to add functionality which splits really long timetables into separate messages. To do this, I made an if statement that detects really long timetables, splits them and calls the send_message function from my main file, app.py.
Here is the part of the function in app.py with the variable I need to extract:
for messaging_event in entry["messaging"]:
    if messaging_event.get("message"):  # someone sent us a message
        sender_id = messaging_event["sender"]["id"]  # the facebook ID of the person sending you the message
        recipient_id = messaging_event["recipient"]["id"]  # the recipient's ID, which should be your page's facebook ID
        message_text = messaging_event["message"]["text"]  # the message's text
        tobesent = messaging_event["message"]["text"]
        send_message(sender_id, fetch.fetchtime(tobesent))
and here is the if statement in fetch which detects long messages, splits them and calls the send_message function from the other file, app.py:
if len(info["results"]) > 5:
    for i, chunk in enumerate(chunks(info, 5), 1):
        app.send_message((USER ID SHOULD BE HERE, 'Listing part: {}\n \n{}'.format(i, chunk)))
I'm trying to call the send_message function from app.py, but it requires two arguments, sender_id and the message text. How can I go about getting the sender_id variable from this function and using it in fetch? I've tried returning it and calling the function, but it doesn't work for me.
EDIT: here is the error I get:
Traceback (most recent call last):
File "bus.py", line 7, in <module>
print fetch.fetchtime(stopnum)
File "/home/ryan/fb-messenger-bot-master/fetch.py", line 17, in fetchtime
send_message((webhook(),'Listing part: {}\n \n{}'.format(i, chunk)))
File "/home/ryan/fb-messenger-bot-master/app.py", line 30, in webhook
data = request.get_json()
File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/usr/local/lib/python2.7/dist-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
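One way around this (not from the thread, just a sketch): rather than calling webhook() from fetch, pass sender_id into fetchtime as an extra argument, so fetch already knows who to reply to and never touches Flask's request context. Here chunks is the helper already used above, while lookup_timetable and format_timetable are hypothetical stand-ins for the existing scraping code:

# app.py (sketch): hand the sender's id to fetch along with the stop number
send_message(sender_id, fetch.fetchtime(tobesent, sender_id))

# fetch.py (sketch): fetchtime now receives the id it should reply to
def fetchtime(stopnum, sender_id):
    info = lookup_timetable(stopnum)  # hypothetical helper wrapping the scrape
    if len(info["results"]) > 5:
        for i, chunk in enumerate(chunks(info, 5), 1):
            app.send_message(sender_id, 'Listing part: {}\n \n{}'.format(i, chunk))
        return 'End of listing'
    return format_timetable(info)  # hypothetical single-message formatting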
Since my liked songs aren't public, I want spotipy to get a list of all my saved songs and add them to my playlist. But when I try to do that with a loop, it says that the URI is incorrect. I don't know if I should use another method.
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret)
scope = 'user-library-read playlist-modify-public'
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager, auth_manager=SpotifyOAuth(scope=scope))

def show_tracks(results):
    for item in results['items']:
        track = item['track']
        # print("%32.32s %s" % (track['artists'][0]['name'], track['name']))
        sp.playlist_add_items(playlist_id, track['uri'])

results = sp.current_user_saved_tracks()
show_tracks(results)
while results['next']:
    results = sp.next(results)
    show_tracks(results)
The error is
HTTP Error for POST to https://api.spotify.com/v1/playlists/5ZzsovDgANZfiXgRrwq5fw/tracks returned 400 due to Invalid track uri: spotify:track:s
Traceback (most recent call last):
File "C:\Users\ferch\AppData\Local\Programs\Python\Python37\lib\site-packages\spotipy\client.py", line 245, in _internal_call
response.raise_for_status()
File "C:\Users\ferch\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.spotify.com/v1/playlists/5ZzsovDgANZfiXgRrwq5fw/tracks
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "make_playlists.py", line 23, in <module>
show_tracks(results)
File "make_playlists.py", line 20, in show_tracks
sp.playlist_add_items(playlist_id, track['uri'])
File "C:\Users\ferch\AppData\Local\Programs\Python\Python37\lib\site-packages\spotipy\client.py", line 1025, in playlist_add_items
position=position,
File "C:\Users\ferch\AppData\Local\Programs\Python\Python37\lib\site-packages\spotipy\client.py", line 296, in _post
return self._internal_call("POST", url, payload, kwargs)
File "C:\Users\ferch\AppData\Local\Programs\Python\Python37\lib\site-packages\spotipy\client.py", line 266, in _internal_call
headers=response.headers,
spotipy.exceptions.SpotifyException: http status: 400, code:-1 - https://api.spotify.com/v1/playlists/5ZzsovDgANZfiXgRrwq5fw/tracks:
Invalid track uri: spotify:track:s, reason: None
I think this problem is because of the type of the track['uri'] variable.
playlist_add_items is expecting a list of URIs, URLs, or IDs to add to the playlist, but right now you're passing a single URI, which is a string like this: spotify:track:2t7rS8BHF5TmnBR5PmnnSU. The code for the spotipy library is likely doing a loop for item in items..., so when you pass it a string, it considers each character in the string as a different item. So it encounters the first character, s and tries to make a URI out of it resulting in spotify:track:s. This isn't a valid URI, so the request fails.
You can try wrapping the uri in a list like so:
for item in results['items']:
    track = item['track']
    # Note brackets around track['uri']
    sp.playlist_add_items(playlist_id, [track['uri']])
This will handle the issue you're getting now, but making one request per track may cause problems down the line: you could run into rate limiting, so I recommend building a list of up to 100 URIs at a time, which is the max that can be sent in one request.
Keeping this in mind, we could try something like this:
def show_tracks(results):
    for idx in range(0, len(results['items']), 100):
        uris = [item['track']['uri'] for item in results['items'][idx:idx+100]]
        sp.playlist_add_items(playlist_id, uris)
Another way to do this would be to create a list with all the URIs/IDs of the tracks you want to add, and then pass that list into the sp.playlist_add_items() function. This could be useful if you need the list of URIs again further down the line. Like so:
uris = []
for item in results['items']:
    track = item['track']
    uris.append(track['uri'])

sp.playlist_add_items(playlist_id, uris)
Bear in mind, sp.playlist_add_items only lets you add up to 100 tracks at a time. I created this loop to handle adding a list of tracks no matter the size (where songIDS is a list of song IDs/URIs):
i = 0
increment = 99  # stay under the 100-track-per-request limit
while i < len(songIDS):
    try:
        sp.playlist_add_items(playlistID, songIDS[i:i + increment])
    except spotipy.exceptions.SpotifyException:
        pass  # skip any batch the API rejects instead of aborting the whole run
    i += increment
Hope this helps; I've only been using spotipy for a week myself.
I'm checking a list of around 3000 Telegram chats to retrieve the number of chat members in each chat using the get_chat_members_count method.
At some point I'm hitting a flood limit and getting temporarily banned by Telegram BOT.
Traceback (most recent call last):
File "C:\Users\alexa\Desktop\ico_icobench_2.py", line 194, in <module>
ico_tel_memb = bot.get_chat_members_count('#' + ico_tel_trim, timeout=60)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 60, in decorator
result = func(self, *args, **kwargs)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 2006, in get_chat_members_count
result = self._request.post(url, data, timeout=timeout)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 278, in post
**urlopen_kwargs)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 208, in _request_wrapper
message = self._parse(resp.data)
File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 168, in _parse
raise RetryAfter(retry_after)
telegram.error.RetryAfter: Flood control exceeded. Retry in 85988 seconds
The python-telegram-bot wiki gives a detailed explanation and example on how to avoid flood limits here.
However, I'm struggling to implement their solution and I hope someone here has more knowledge of this than myself.
I have literally copied and pasted their example and can't get it to work, no doubt because I'm new to Python. I'm guessing I'm missing some definitions, but I'm not sure which. Here is the code, and after that the first error I'm receiving. Obviously the TOKEN needs to be replaced with your token.
import telegram.bot
from telegram.ext import messagequeue as mq

class MQBot(telegram.bot.Bot):
    '''A subclass of Bot which delegates send method handling to MQ'''
    def __init__(self, *args, is_queued_def=True, mqueue=None, **kwargs):
        super(MQBot, self).__init__(*args, **kwargs)
        # below 2 attributes should be provided for decorator usage
        self._is_messages_queued_default = is_queued_def
        self._msg_queue = mqueue or mq.MessageQueue()

    def __del__(self):
        try:
            self._msg_queue.stop()
        except:
            pass
        super(MQBot, self).__del__()

    @mq.queuedmessage
    def send_message(self, *args, **kwargs):
        '''Wrapped method would accept new `queued` and `isgroup`
        OPTIONAL arguments'''
        return super(MQBot, self).send_message(*args, **kwargs)

if __name__ == '__main__':
    from telegram.ext import MessageHandler, Filters
    import os
    token = os.environ.get('TOKEN')
    # for test purposes limit global throughput to 3 messages per 3 seconds
    q = mq.MessageQueue(all_burst_limit=3, all_time_limit_ms=3000)
    testbot = MQBot(token, mqueue=q)
    upd = telegram.ext.updater.Updater(bot=testbot)

    def reply(bot, update):
        # tries to echo 10 msgs at once
        chatid = update.message.chat_id
        msgt = update.message.text
        print(msgt, chatid)
        for ix in range(10):
            bot.send_message(chat_id=chatid, text='%s) %s' % (ix + 1, msgt))

    hdl = MessageHandler(Filters.text, reply)
    upd.dispatcher.add_handler(hdl)
    upd.start_polling()
The first error I get is:
Traceback (most recent call last):
File "C:\Users\alexa\Desktop\z test.py", line 34, in <module>
testbot = MQBot(token, mqueue=q)
File "C:\Users\alexa\Desktop\z test.py", line 9, in __init__
super(MQBot, self).__init__(*args, **kwargs)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 108, in __init__
self.token = self._validate_token(token)
File "C:\Python36\lib\site-packages\telegram\bot.py", line 129, in _validate_token
if any(x.isspace() for x in token):
TypeError: 'NoneType' object is not iterable
The second issue I have is how to use wrappers and decorators with get_chat_members_count.
The code I have added to the example is:
@mq.queuedmessage
def get_chat_members_count(self, *args, **kwargs):
    return super(MQBot, self).get_chat_members_count(*args, **kwargs)
But nothing happens and I don't get my count of chat members. I'm also not saying which chat I need to count, so it's not surprising I'm getting nothing back, but where am I supposed to put the Telegram chat id?
You are getting this error because MQBot receives an empty token. For some reason, it does not raise a descriptive exception but instead crashes unexpectedly.
So why is token empty? It seems that you are using os.environ.get incorrectly. os.environ is a dictionary, and its get method allows you to access the dict's contents safely. According to the docs:
get(key[, default])
Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
According to your question, in the part token = os.environ.get('TOKEN') you pass the token itself as the key. Instead, you should pass the name of the environment variable which contains your token.
You can fix this either by assigning the token directly, e.g. token = '<your actual token>', or by setting the environment variable correctly and reading it via os.environ.get with the correct name.
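For illustration (not part of the original answer), a minimal sketch of both options, with a placeholder token value and the environment variable name TOKEN assumed:

import os

# Option 1: assign the bot token directly (placeholder value, replace it)
token = '123456789:replace-with-your-real-bot-token'

# Option 2: export TOKEN='123456789:...' in the shell first, then read it
# back by the variable's name, not by the token value itself
token = os.environ.get('TOKEN')
if token is None:
    raise RuntimeError('TOKEN environment variable is not set')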
I'm trying to upload a file to GCS from App Engine Endpoints, using Python. When the file finishes uploading, it shows the error "AttributeError: 'str' object has no attribute 'ToMessage'".
If I go to the GCS file browser, I can see the recently uploaded filename, but its size is 0K.
This is my model:
class File(EndpointsModel):
    _message_fields_schema = ('blob', 'url')
    blob = ndb.BlobKeyProperty()  # stored in GCS
    url = ndb.StringProperty()
    enable = ndb.BooleanProperty(default=True)

def create_file(filename):
    file_info = blobstore.FileInfo(filename)
    filename = '/gs' + str(file_info.filename.blob)
    gcs.open(secrets.BUCKET_NAME + '/' + filename, 'w').close()
    return blobstore.create_gs_key(filename)
So, what do I need to do to correctly upload a file to GCS from App Engine Endpoints?
Traceback:
ERROR 2014-11-25 20:35:22,654 service.py:191] Encountered unexpected error from ProtoRPC method implementation: AttributeError ('str' object has no attribute 'ToMessage')
Traceback (most recent call last):
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/wsgi/service.py", line 181, in protorpc_service_app
response = method(instance, request)
File "/home/alpocr/workspace/google_appengine/lib/endpoints-1.0/endpoints/api_config.py", line 1332, in invoke_remote
return remote_method(service_instance, request)
File "/home/alpocr/workspace/google_appengine/lib/protorpc-1.0/protorpc/remote.py", line 412, in invoke_remote_method
response = method(service_instance, request)
File "/home/alpocr/workspace/mall4g-backend/libs/endpoints_proto_datastore/ndb/model.py", line 1429, in EntityToRequestMethod
response = response.ToMessage(fields=response_fields)
AttributeError: 'str' object has no attribute 'ToMessage'
It sounds like you have defined the return type correctly for your endpoints method, and it's expecting to turn the result into a Message object, but the endpoints method code is actually returning a string. Can you post the endpoints method that is called when this error occurs?
Either that, or the endpoints proto model is acting up when you (somewhere in your code) assign a string value to one of its properties. When it tries to convert the model to a Message (and thus recursively turn its properties into Messages), it finds the string and bugs out. It's hard to tell without seeing the affected endpoint method's code.
UPDATE: Also, checking the source of endpoints_proto_datastore, we see the following comment above the line that fails:
# If developers using a custom request message class with
# response_fields to create a response message class for them, it is
# up to them to return an instance of the current EndpointsModel
# class. If not, their API users will receive a 503 from an uncaught
# exception.
Could this apply to you?
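For illustration only (a sketch, not your code): with endpoints_proto_datastore, the method decorated with File.method is expected to return the File instance itself, not a plain string such as the GCS key from create_file. The API class and method names below are made up:

import endpoints
from protorpc import remote

@endpoints.api(name='files', version='v1')
class FileApi(remote.Service):

    # Hypothetical insert method; the important part is the return value:
    # an EndpointsModel instance (which has ToMessage), not a str.
    @File.method(path='file', http_method='POST', name='file.insert')
    def insert_file(self, my_file):
        my_file.put()
        return my_file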
I found this Python code on the forums as an answer to something that relates to my problem. I don't really understand Python, so can somebody tell me why this isn't working?
(Some background information: I have a web form that gets automatically emailed to OpenERP, which then automatically creates a lead. However, when a lead is created, info like phone and name does not get read from the email and sorted into the corresponding fields in the lead's form.)
# You can use the following variables:
# - self: ORM model of the record on which the action is triggered
# - object: browse_record of the record on which the action is triggered if there is one, otherwise None
# - pool: ORM model pool (i.e. self.pool)
# - time: Python time module
# - cr: database cursor
# - uid: current user id
# - context: current context
# If you plan to return an action, assign: action = {...}
def parse_description(description):
    '''
    there is parse function
    It is example for parsing messages like this:
    Name: John
    Phone: +100500
    '''
    fields=['Name','Phone']
    _dict={}
    description=description.lower()
    for line in description.split('\n'):
        for field in fields:
            if field in line:
                split_line=line.split(':')
                if len(split_line)>1:
                    pre_dict[field]=line.split(':')[1]
    return dict

lead=self.browse(cr,uid,context['active_id'],context=context)
description=lead['description']
_dict=parse_description(description)
self.write(cr,uid,context['active_id'],{
    'partner_name':_dict.get('name'),
    'contact_name':_dict.get('name'),
    'phone':_dict.get(u'phone'),
    'mobile':_dict.get(u'phone')})
Update:
I got this traceback while fetching mail:
2014-07-01 13:39:40,188 4992 INFO v8_demo openerp.addons.mail.mail_thread: Routing
mail from Atul Jain <jain.atul43#gmail.com> to jain.atul10#hotmail.com with
Message-Id <CAG=2G76_SRthL3ybGGyx2Lai5H=RMNxUOjRRR=+5-ODrcgtEZw#mail.gmail.com>:
fallback to model:crm.lead, thread_id:False, custom_values:None, uid:1
2014-07-01 13:39:40,445 4992 ERROR v8_demo openerp.addons.fetchmail.fetchmail:
Failed to fetch mail from imap server Gmail.
Traceback (most recent call last):
File "/home/atul/openerp-8/openerp/addons/fetchmail/fetchmail.py", line 206, in
fetch_mail
action_pool.run(cr, uid, [server.action_id.id], {'active_id': res_id, 'active_ids'
:[res_id], 'active_model': context.get("thread_model", server.object_id.model)})
File "/home/atul/openerp-8/openerp/addons/base/ir/ir_actions.py", line 967, in run
res = func(cr, uid, action, eval_context=eval_context, context=run_context)
File "/home/atul/openerp-8/openerp/addons/base/ir/ir_actions.py", line 805,
in run_action_code_multi
eval(action.code.strip(), eval_context, mode="exec", nocopy=True) # nocopy allows
to return 'action'
File "/home/atul/openerp-8/openerp/tools/safe_eval.py", line 254, in safe_eval
return eval(c, globals_dict, locals_dict)
File "", line 14, in <module>
File "", line 4, in parse_description
ValueError: "'bool' object has no attribute 'lower'" while evaluating
u"def parse_description(description):
fields=['name','phone']
_dict={}
description=description.lower()
for line in description.split('\\n'):
for field in fields:
if field in line:
split_line=line.split(':')
if len(split_line)>1:
_dict[field]=split_line[1]
return _dict
lead=self.browse(cr,uid,context['active_id'],context=context)\ndescription=lead['description']
_dict=parse_description(description)
self.write(cr,uid,context['active_id'],{ 'partner_name':_dict.get('name'), 'contact_name':_dict.get('name'),
'phone':_dict.get(u'phone'),
'mobile':_dict.get(u'phone')})"
Please help me in understanding the problem.
I've fixed the parse_description function:
def parse_description(description):
    '''
    there is parse function
    It is example for parsing messages like this:
    Name: John
    Phone: +100500
    '''
    fields=['name','phone']
    _dict={}
    description=description.lower()
    for line in description.split('\n'):
        for field in fields:
            if field in line:
                split_line=line.split(':')
                if len(split_line)>1:
                    _dict[field]=split_line[1]
    return _dict
I changed the field names to lower case because all operations on the description are done on description.lower().
On the line pre_dict[field]=line.split(':')[1], you are splitting line to get your result. This has already been done in split_line=line.split(':'), so you can just use split_line[1].
On that same line you are using a variable, pre_dict, which hasn't been defined anywhere before. I think you mean to use _dict, so the line should be _dict[field]=split_line[1].
The function returns dict, which is a type, not a variable. You probably want it to return the dictionary which contains the field data, so it should return _dict instead; otherwise you'll always get the result <type 'dict'>.
As for the remaining code, there's not enough context for me to understand what's happening or what's wrong. At least the parse_description function should be working now.
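For reference, a quick check of the fixed function on the sample message from its docstring; note that the values keep the space after the colon, so you may want to add .strip():

description = "Name: John\nPhone: +100500"
result = parse_description(description)
# result == {'name': ' john', 'phone': ' +100500'}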
Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:
client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
logging.info(client.GetAuthSubToken())
what gets logged is "None", so that doesn't seem right :-(
if I use this:
temp = client.upgrade_to_session_token(sessionToken)
logging.info(dump(temp))
I get this:
{'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}
so I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work.
If I try to use AuthSubTokenInfo I get this:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "controllers/indexController.py", line 47, in get
logging.info(client.AuthSubTokenInfo())
File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
token = self.token_store.find_token(scopes[0])
TypeError: 'NoneType' object is unsubscriptable
So it looks like my token_store is not getting filled in correctly. Is that something I should be doing myself?
Also, I am using gdata 2.0.9.
Thanks
Matt
To answer my own question:
When you get the token, just call:
client.token_store.add_token(sessionToken)
and App Engine will store it in a new entity type for you. Then, when making calls to the calendar service, just don't set the authsubtoken, as it will take care of that for you as well.
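Putting the snippet from the question together with that fix, the redirect handler would look roughly like this; a sketch only, reusing the gdata 2.0.9 calls already shown above (run inside the webapp handler, hence self.request.uri):

client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)

# Pull the single-use AuthSub token out of the redirect URL,
# upgrade it to a session token, then store it for later requests.
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
client.token_store.add_token(sessionToken)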