I want to get all messages of a Facebook conversation. This conversation has around 60,000 messages and I want to get all of them. Is it possible to get all of them in one go, or do I have to make multiple requests with pagination?
The Thread documentation is hardly any help. I registered an app and am using the access token in the Graph API Explorer. Even if I change the limit to 25000, I still get only a handful of messages.
t_id.<thread_id>/messages?limit=25000
So what's the best way to get all messages?
I will be using this in my tornado app. The call would be something like this:
self.facebook_request("/53......13", access_token=self.get_secure_cookie('access_token'), callback=self.async_callback(self._after_messages))
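For a thread this size, the Graph API will cap any single response well below 60,000 messages, so paging is unavoidable. A minimal sketch of the cursor-following loop is below; `fetch_page` is a hypothetical stand-in for whatever HTTP call you make (in a Tornado app you would adapt this to the async callback style), and the `paging.next` shape follows the Graph API's standard paged responses.

```python
# Hypothetical sketch: page through a Graph API edge by following
# paging.next cursors instead of requesting one huge limit=.
# fetch_page is a stand-in for your real HTTP client; here it just
# needs to return the decoded JSON dict for a given URL.

def get_all_messages(fetch_page, first_url):
    """Collect every message by walking paging.next links."""
    messages = []
    url = first_url
    while url:
        page = fetch_page(url)                    # decoded JSON dict
        messages.extend(page.get("data", []))
        url = page.get("paging", {}).get("next")  # None when exhausted
    return messages
```

The loop terminates when a page has no `paging.next`, which is how the Graph API signals the last page.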
This platform is really helpful. My issue is: is it possible to send 500k promotional messages per day? I have the script ready using the API hash and API ID with Telethon. But whenever I try to send messages, the accounts get banned. Maybe they got banned because I was on the same IP address while scraping the users and sending messages. What I really need to know is whether it is even possible to send 500k messages in a day, and if yes:
1. How many accounts are needed (and where can I get that many accounts)?
2. Do I have to manually create accounts and receive OTPs for each, or is there a ready-made script for that as well that I can use?
3. What should the duration between messages be, and how many messages can come from a single account?
4. How do I shuffle between accounts while sending messages? Manually, or is there a ready-made script for that too?
Please, I really need help here. If not 500k, how many can I send per day at most?
You can send up to 20 messages per second in the same chat on Telegram.
I'm using a Python script that monitors a website and sends me messages on Facebook if there are any specific updates.
I have tried a module called 'fbchat', which is simple and easy to use, but the problem is that I'm using real Facebook accounts, and somehow Facebook detected that it was a bot and banned the profile, even though I added random pauses in my code.
I know that I could send those notifications through email, but for me Facebook messages are better... Any ideas on how I can make this work (maybe through bots)?
Thank you!
First take a look at which parameters (headers and payload) the POST method takes (using the network tools in Google Chrome, for example), and try again with as many parameters as possible, while also using a session so cookies are enabled as well.
Different websites use different methods of detecting bots, and you'll just have to test and see what works.
P.S.: take a look at this answer for more info.
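As a minimal sketch of the advice above, here is how the browser's POST can be replicated with the standard library, with a cookie jar so the session persists across requests. The header values and form field names are placeholders; copy the real ones from the Network tab of your browser's dev tools.

```python
import http.cookiejar
import urllib.parse
import urllib.request

# A cookie jar makes the opener behave like a browser session:
# cookies set by earlier responses are sent on later requests.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Placeholder headers; replace with the ones your browser actually sends.
opener.addheaders = [
    ("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"),
    ("Accept", "text/html,application/xhtml+xml"),
]

# Placeholder form fields; copy the real names from the dev-tools payload.
payload = urllib.parse.urlencode({"field1": "value1", "field2": "value2"}).encode()

# Uncomment with the real endpoint once the parameters match the browser's:
# response = opener.open("https://example.com/endpoint", payload)
```

The same pattern works with third-party clients such as `requests.Session`, which bundles the cookie handling and header defaults for you.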
Multiple accounts are not allowed on Facebook, and there is no (allowed) way to send messages between users. You can only send messages from Pages to users, and only if the user started the conversation. You can find more information about that in the docs: https://developers.facebook.com/docs/messenger-platform/
I am making a Slack bot. I have been using the python-slackclient library to develop it, and it's working great with one team. I am using the Flask web framework.
As many people add the app to slack via "Add to Slack" button, I get their bot_access_token.
Now, how should I run the code with so many Slack tokens? Should I store them in a list and then traverse all the tokens with a for loop? That doesn't seem good, as I may not be able to handle the simultaneous messages or events I receive. Or is it actually a fine approach?
Is there any other way if it's not?
If you're using the real-time API, you'll need one WebSocket open per team. Yes, you would typically use a loop to establish these connections. Depending on the way slackclient works, you might need to start each in a separate thread or process.
EDIT: As mentioned in the comments below, threading is to be preferred over multiple processes. Even better would be to use something lighter weight than threads, but at this point in your learning, I wouldn't bother over-optimizing here.
SECOND EDIT: It looks like python-slackclient has non-blocking reads, so you don't even need to use threads. E.g. the following will not block:
for team in teams:
    for event in team.client.rtm_read():
        # process the event for that team
(This assumes some sort of "team" object that contains an instance of SlackClient.)
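To make the shape of that loop concrete, here is a runnable sketch with a stub standing in for the real client; `StubClient` and `Team` are illustrative names, not part of python-slackclient. In the real bot, `team.client` would be a `SlackClient` whose `rtm_read()` returns any queued events without blocking.

```python
from collections import deque

class StubClient:
    """Stand-in for SlackClient: rtm_read() drains queued events."""
    def __init__(self, events):
        self._events = deque(events)

    def rtm_read(self):
        # Return whatever arrived since the last call, without blocking.
        events, self._events = list(self._events), deque()
        return events

class Team:
    """One installed workspace: its token plus its own RTM connection."""
    def __init__(self, token, events):
        self.token = token
        self.client = StubClient(events)

def poll_once(teams):
    """One round-robin pass over every team's connection."""
    handled = []
    for team in teams:
        for event in team.client.rtm_read():
            handled.append((team.token, event))
    return handled
```

Because each read is non-blocking, a single loop can service every team's connection in turn without threads; a quiet team simply yields an empty list.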
You do indeed need to:
Store each team's token. Please remember to encrypt it.
When a team installs your app, create a new RTM connection. When your app/server restarts, loop over all your teams and open an RTM connection for each of them.
Each connection will receive events from that team, and that team only. You will not receive all notifications on the same connection.
(Maybe you are coming from a Facebook Messenger bot background, where all notifications arrive at the same webhook? That's not the case with Slack.)
I'm creating an app using python/tweepy
I'm able to use the StreamListener to get real time "timeline" when indicating a "topic" or #, $, etc.
Is there a way to have a real-time, non-stop timeline function for a user ID, similar to TweetDeck or an embedded widget on a website?
When using api.user_timeline I receive only the 20 most recent tweets.
Any thoughts?
Tweepy is a Python library, so there's really no way to use it directly on a website. You could send the data from the stream to a browser using WebSockets or AJAX long polling, however.
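As a sketch of that fan-out, here is a minimal hub that pushes each incoming status to every connected browser client. All names here are illustrative, and the queues are stand-ins for real WebSocket connections; in a live app each `put` would become a socket send.

```python
from queue import Queue

class TweetFanout:
    """Relay each streamed tweet to every connected client."""
    def __init__(self):
        self._clients = []

    def connect(self):
        # One queue per browser connection; a real app would hold a
        # WebSocket handle here instead.
        q = Queue()
        self._clients.append(q)
        return q

    def on_tweet(self, tweet):
        # Call this from the stream listener for every new status.
        for q in self._clients:
            q.put(tweet)
```

The stream listener stays a single consumer of the Twitter connection, and the hub handles distribution, so adding viewers never adds load on the Twitter side.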
There's no way to make the user stream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why TweetDeck would receive them faster, but there's nothing that can be done except to contact Twitter if you're seeing slow delivery.
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store.
Is there any way to schedule jobs like this on Google App. Engine? Or am I going about this in entirely the wrong way?
I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
To get around the write/update limit of entity groups (note that entities without a parent are their own entity group), you could create a new entity for every chat message and keep a property in each that references the chat it belongs to.
You'd then find all chat messages that belong to a chat via a query. But this would be very inefficient, as you'd then need to do a query for every user for every new message.
So go with the above advice, but additionally do:
Look into backends. These are always-on instances where you could aggregate chat messages in memory (and immediately/periodically flush them to the datastore). When a user requests the latest chat messages, you already have them in memory and can serve them instantly (saving on time and cost compared to using the Datastore). Note that backends are not 100% reliable; they might go down from time to time - adjust your chat-message flushing to the datastore accordingly.
Check out the Channels API. This will allow you to notify users when there is a new chat message. This way you'd avoid polling for new chat messages and keep the number of requests down.
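A minimal sketch of the in-memory aggregation idea from point 1, with plain Python standing in for the backend instance; `persist_batch` is a placeholder for the real datastore write, and the one-second flush interval is an assumption you would tune.

```python
import time

class ChatBuffer:
    """Buffer chat messages in memory; flush to the datastore in batches."""
    def __init__(self, persist_batch, flush_every=1.0, clock=time.monotonic):
        self._persist = persist_batch      # placeholder for the datastore write
        self._interval = flush_every
        self._clock = clock
        self._pending = []
        self._last_flush = clock()

    def add(self, message):
        self._pending.append(message)
        if self._clock() - self._last_flush >= self._interval:
            self.flush()

    def flush(self):
        if self._pending:
            self._persist(self._pending)   # one datastore write per batch
            self._pending = []
        self._last_flush = self._clock()

    def latest(self, n):
        # Serve recent messages straight from memory, per the backend idea.
        return self._pending[-n:]
```

Since a backend can go down, anything still in `_pending` at that moment is lost, which is why the flush interval should stay short.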
Sounds like the wrong way since you are risking losing data on memcache.
You can write to one entity group once per second.
You can write separate entity groups very rapidly. So it really depends how you structure your data. For example, if you kept an entire chat in one entity, you can only write that chat once per second. And you'd be limited to 1MB.
If you write a separate entity per message in the chat, you can write very, very quickly, but you need to devise a way to pull all the messages back together, in order, for the log.
Edit: I agree with Peter Knego that the cost of using one entity per message will get way too high. His backend suggestion is pretty good too, although if your app is popular, backends don't scale that well.
I was trying to avoid sharding, but I think it will be necessary. If you're not familiar with sharding, read up on this: https://developers.google.com/appengine/articles/sharding_counters
Sharding would be an intermediate between writing one entity for all messages in a conversation and one entity per message. You would randomly split the messages between a number of entities. For example, if you save the messages across 3 entities, you can write 3x/sec (I doubt most human conversations would go any faster than that).
On fetching, you would need to grab the 3 entities, and merge the messages in chronological order. This would save you a lot on cost. But you would need to write the code to do the merging.
One other benefit is that your conversation limit would now be 3MB instead of 1MB.
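The sharded layout can be sketched like this; the shard count of 3 follows the example above, and the `(timestamp, text)` message shape is an assumption for illustration.

```python
import heapq
import random

NUM_SHARDS = 3  # matches the example above; tune to your write rate

def shard_for_write(rng=random):
    # Pick a shard at random so writes spread across entity groups,
    # each of which has its own ~1 write/sec budget.
    return rng.randrange(NUM_SHARDS)

def merge_shards(shards):
    """Merge per-shard (timestamp, text) lists back into chronological order."""
    # Each shard's list is already in arrival order, so a k-way merge works.
    return list(heapq.merge(*shards))
```

On fetch you read all three shard entities and run the merge; that is the extra code this approach costs you in exchange for the higher write rate and the 3MB combined size limit.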
Why not use a pull queue? I highly recommend this Google video if you are not familiar enough with task queues. The first 15 minutes cover pull-queue info that may apply to your situation. Anything involving per-message updates may get quite expensive in datastore ops, and this will be greatly exacerbated if you have any indexes involved. Video link:
https://www.youtube.com/watch?v=AM0ZPO7-lcE&feature=player_embedded
I would simply set up the chat entity when users initiate the chat in the online handler, passing the entity id back to the chat parties. Send the id+message to your pull queue, and serialize the messages into the chat entity's TextProperty. You likely won't schedule the pull-queue cron more often than once per second, so that avoids your entity-update limitation. Most importantly, your datastore ops will be greatly reduced.
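To illustrate the batching step, here is a sketch of what one cron pass over the leased tasks would do, with plain Python objects standing in for App Engine's pull queue and the chat entity's TextProperty; all names are illustrative.

```python
import json

def drain_batch(tasks, chat_logs):
    """Group leased (chat_id, message) tasks by chat; one serialized
    append per chat, i.e. one datastore write per chat per cron tick."""
    by_chat = {}
    for chat_id, message in tasks:
        by_chat.setdefault(chat_id, []).append(message)
    for chat_id, messages in by_chat.items():
        # chat_logs stands in for the chat entity's TextProperty:
        # a JSON-serialized list of messages.
        log = json.loads(chat_logs.get(chat_id, "[]"))
        log.extend(messages)
        chat_logs[chat_id] = json.dumps(log)
```

However many messages arrive within a tick, each chat entity is written once, which is what keeps you under the entity-group write limit and keeps datastore ops low.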
I think you could create tasks which will persist the data. This has the advantage that, unlike memcache, tasks are persistent, so no chats would be lost.
When a new chat message comes in, create a task to save the chat data, and do the persist in the task handler. You could either configure the task queue to pull at one per second (or slightly slower) and save each bit of chat data held in the task, or persist the incoming chats in a temporary table (in different entity groups) and have the tasks periodically pull all unsaved chats from the temporary table, persist them to the chat entity, and then remove them from the temporary table.
I think you would be fine using the chat session as the entity group and saving the chat messages in it.
This once-per-second limit is not the reality; you can update/save at a higher rate. I do it all the time and don't have any problems with it.
Memcache is volatile and is the wrong choice for what you want to do. If you start encountering issues with the write rate, you can start setting up tasks to save the data.