I have built a chatbot in Django (Python) that listens for HTTP requests. Certain chat channels, such as Slack, require an immediate 200 OK HTTP response from the server. Hence I register a Celery task (into a queue) so that I can return HTTP 200 OK instantly and let the reply be processed in the background.
In production (with an SQS broker) it takes 3-4 seconds for the bot's reply to reach the end user. Through logs I have found that the delay is in the task reaching the Celery worker.
I want my chatbot's replies to arrive really fast when a user types a message, and I am looking for a faster alternative to Celery for this specific use case. Thank you!
Note that I do not want to use Slack's RTM API because I do not intend to make my bot Slack-specific.
I solved it by using multithreading, as explained in this answer, although I am not yet sure how well this solution scales.
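For reference, a minimal sketch of that threading approach: acknowledge the channel immediately and hand the slow work to a background thread. The process_and_reply helper and the view name are hypothetical.

```python
# Minimal sketch: return 200 OK at once, generate and post the reply in a thread.
# process_and_reply() is a hypothetical helper that computes and sends the reply.
import threading

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def slack_events(request):
    payload = request.POST.dict()
    # Daemon thread so the HTTP response is not held up by reply generation.
    threading.Thread(target=process_and_reply, args=(payload,), daemon=True).start()
    return HttpResponse(status=200)
```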
I am using the official WhatsApp API to send messages to customers. I want to send them a reminder message if they have not replied within a certain period of time (a couple of hours, more or less). In other words, I will send a message and wait a certain period of time for their response. If they respond, no reminder is needed; if they don't, I will send them a reminder message. The WhatsApp API sends webhook notifications when they reply.
How do I implement that in a Django view? I am thinking of creating an async process/thread each time I want to wait, and handling the timeout condition in that process somehow. Is this the right approach? I think it would be costly in terms of server time, but I am not sure. If this is the right approach, please show some pseudo-code for how it can be done.
I will be deploying my app on Heroku.
Your answer is much appreciated.
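For what it's worth, a rough sketch of the thread-per-wait idea described in the question could look like the following. The send_whatsapp_message helper and the id handling are hypothetical; also note that in-process timers do not survive Heroku dyno restarts, so a scheduled task (e.g. Celery's apply_async(countdown=...)) is usually more robust.

```python
# Rough sketch of the thread-per-wait idea; not production-hardened.
import threading

# message_id -> Timer that will fire the reminder unless cancelled
pending_reminders = {}

def send_with_reminder(phone_number, text, message_id, delay_seconds=2 * 60 * 60):
    send_whatsapp_message(phone_number, text)  # hypothetical WhatsApp API helper
    timer = threading.Timer(delay_seconds, send_reminder, args=(phone_number, message_id))
    pending_reminders[message_id] = timer
    timer.start()

def send_reminder(phone_number, message_id):
    pending_reminders.pop(message_id, None)
    send_whatsapp_message(phone_number, "Just a gentle reminder about my last message.")

def on_reply_webhook(message_id):
    # Call this from the Django view that receives the WhatsApp reply webhook.
    timer = pending_reminders.pop(message_id, None)
    if timer is not None:
        timer.cancel()
```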
Suppose I have sent a POST request from React to a Django REST API, and that request takes a long time to process. How can I find out what percentage of it has been processed and send that to the frontend, without sending the real response yet?
There are two broad ways to approach this.
1. Break the request up (this is what I would recommend to start with). The initial request doesn't start the work; it sends a message to an async task queue (such as Celery) to do the work. The response to the initial request is the ID of the Celery task that was spawned. The frontend can then use that task ID to poll the backend periodically, check whether the task is finished, and grab the results when they are ready. (See the sketch after this list.)
2. WebSockets, wherein the connection to the backend is kept open across many requests, and either side can initiate sending data. I wouldn't recommend this to start with, since it's not really how Django is built, but with a higher level of investment it will give an even smoother experience.
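For approach 1, a minimal sketch could look like this. The task body, the do_chunk_of_work function, and the view names are hypothetical; progress is reported through Celery's custom task states.

```python
# Minimal sketch of "dispatch a task, then poll its progress".
from celery import shared_task
from django.http import JsonResponse

@shared_task(bind=True)
def long_running_job(self, payload):
    total = 100
    for i in range(total):
        do_chunk_of_work(payload, i)  # hypothetical unit of work
        self.update_state(state="PROGRESS",
                          meta={"percent": int(100 * (i + 1) / total)})
    return {"percent": 100}

def start_job(request):
    # Kick off the task and return its id immediately.
    result = long_running_job.delay(request.POST.dict())
    return JsonResponse({"task_id": result.id})

def job_status(request, task_id):
    # The frontend polls this endpoint with the task id it received.
    result = long_running_job.AsyncResult(task_id)
    meta = result.info if isinstance(result.info, dict) else {}
    return JsonResponse({"state": result.state,
                         "percent": meta.get("percent", 0)})
```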
I'm developing a Python Telegram bot. I have a script that is always running (to receive new commands from Telegram), and I want the bot to send messages when the user performs an action on a website.
Example: the user starts the bot, the bot sends a link to perform an action on the website (such as logging in to the user's account and connecting the Telegram ID with the user ID), and then the bot sends a confirmation message on Telegram that all is good.
My problem is: how can I tell the Python script that the action in the browser is done? For now I am constantly querying a database, but that solution is pretty dumb, because if the user never performs the action the querying can go on forever.
Any suggestions on how to do this correctly?
Thanks <3
I see two viable solutions here:
1. Send the message directly from the website. While only one process is allowed to fetch updates at a time, you can make other requests from as many servers as you like. Depending on how your website works, you can make a plain HTTP request to the Bot API or use an API wrapper such as python-telegram-bot (or a wrapper in a different language) to make the request; e.g. if you're running a PHP-based website, you could use a PHP API wrapper.
2. If for some reason 1. is not an option for you, you can try to inform your running process about the user login. The PTB FAQ has an entry that should help you get started. If your website and bot are running on the same server, it might be possible to make the update_queue directly available to the website process. If not, you can try to set up a webhook for your bot and post an update to the webhook that you then enqueue into the update_queue.
Approach 1 has the downside that you don't have all the bot logic in one place, but it should be far easier to implement than approach 2.
Disclaimer: I'm currently the maintainer of python-telegram-bot
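For illustration, option 1 can be as small as a single HTTP call from the website backend once the account-linking step succeeds. The token handling and function name here are hypothetical.

```python
import requests

BOT_TOKEN = "123456:ABC-DEF"  # hypothetical; load from configuration, not source

def confirm_link_on_telegram(chat_id):
    """Called by the website right after the user finishes the login/link flow."""
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": chat_id, "text": "Your account was linked successfully."},
        timeout=10,
    )
```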
I am tasked with building a Slack slash command app in Python which will respond to incoming slash commands. However, for security reasons, I am not allowed to open the firewall for incoming webhooks from Slack. Is there instead a way to check a queue of sent slash commands?
For example, a user types "/myslashapp" in a specific channel. My app will need to do something like call an endpoint every 30 seconds and check if the "/myslashapp" command was sent. If it was, my app should trigger a Lambda function in AWS.
Based on reading the Slack API docs, I haven't found any way to do this other than perhaps the RTM API, though it seems like overkill and still requires an open socket.
No. The Slack API has no built-in support that would allow you to pull requests after the fact from a queue instead of receiving them from Slack when they happen.
The RTM API might look like an option, because the connection to Slack is initiated from your side, so, provided your firewall allows it, it would also work from within an intranet. However, you cannot do slash commands with the RTM API, nor use any of the other interesting interactive Slack features like buttons; only simple messages and events.
You could implement your own bridging solution and poll from it, but I don't think a polling solution would work well, because it adds a lot of latency to your app. Users expect an immediate response to their slash command, not a delay of 30 seconds or more.
So in summary I think you only have two valid options:
Host your app internally and use a secure tunneling service like ngrok to expose a public URL to your app.
Run your app on the Internet and give it a secure connection to your intranet for accessing internal data (similar to how e.g. a shopping website works: it has a public app on the Internet, but can also transmit orders to the business applications on the company's intranet).
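With either option in place, the publicly reachable endpoint itself can stay small: acknowledge the slash command immediately and hand the work to Lambda asynchronously. A sketch, with hypothetical view and Lambda names:

```python
import json

import boto3
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

lambda_client = boto3.client("lambda")

@csrf_exempt
def myslashapp(request):
    # Slack sends slash commands as form-encoded POST data.
    payload = request.POST.dict()
    lambda_client.invoke(
        FunctionName="my-slash-handler",   # hypothetical Lambda name
        InvocationType="Event",            # async: don't wait for the result
        Payload=json.dumps(payload).encode(),
    )
    # Immediate acknowledgement; the Lambda can post the real reply later
    # via the response_url included in the slash command payload.
    return JsonResponse({"response_type": "ephemeral", "text": "Working on it..."})
```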
Our website has around 50,000 users and daily active traffic is pretty good. We are designing a new notifications feature for our user base.
Our requirement is as follows:
Users are part of different Groups.
A user can be part of multiple Groups.
When a user uploads an image in a group, all the members of that particular group should get a notification saying "new image uploaded", regardless of whether they are online or offline.
We thought of creating a RabbitMQ exchange for each group and a queue for each user, but got confused about the right way to design this going forward.
Say a user should receive the notifications even if he logs in to the site days after they were generated. We ended up storing the messages in the DB, which is not a good approach at all for offline users.
Can someone suggest a proper design pattern, with an explanation, for this use case? We are using Celery + RabbitMQ + Tornado. Should Tornado talk directly to the Celery consumer? Where do the messages get stored when the user is offline?
I have a similar project. Here is how it works:
Put messages onto your RabbitMQ queue from your event sources (Django, Celery, anywhere).
Use pika with the Tornado IOLoop, so that incoming messages are delivered to you via the pika loop while Tornado keeps handling HTTP requests and WebSocket connections.
Keep a collection of the WebSockets currently open in your Tornado application and use it to send messages to users.
You can check out a very similar project, which does logging via Tornado + RabbitMQ, on my GitHub: https://github.com/rmuslimov/RapidLog
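A rough sketch of that pika + Tornado wiring, assuming pika 1.x and hypothetical queue, message, and URL names:

```python
import json

import pika
from pika.adapters.tornado_connection import TornadoConnection
import tornado.ioloop
import tornado.web
import tornado.websocket

# user_id -> set of WebSockets currently open for that user
open_sockets = {}

class NotificationSocket(tornado.websocket.WebSocketHandler):
    def open(self, user_id):
        self.user_id = user_id
        open_sockets.setdefault(user_id, set()).add(self)

    def on_close(self):
        open_sockets.get(self.user_id, set()).discard(self)

def on_notification(channel, method, properties, body):
    # A producer (Django, Celery, ...) is assumed to publish e.g.
    # {"user_ids": ["42", ...], "text": "new image uploaded"}.
    payload = json.loads(body)
    for user_id in payload.get("user_ids", []):
        for socket in open_sockets.get(str(user_id), set()):
            socket.write_message(payload["text"])

def on_channel_open(channel):
    channel.queue_declare(queue="notifications")
    channel.basic_consume(queue="notifications",
                          on_message_callback=on_notification,
                          auto_ack=True)

def on_connection_open(connection):
    connection.channel(on_open_callback=on_channel_open)

if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws/(\w+)", NotificationSocket)])
    app.listen(8888)
    TornadoConnection(pika.ConnectionParameters("localhost"),
                      on_open_callback=on_connection_open)
    tornado.ioloop.IOLoop.current().start()
```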