I am tasked with building a Slack slash command app in Python which will respond to incoming slash commands. However, for security reasons, I am not allowed to open the firewall for incoming webhooks from Slack. Is there instead a way to check a queue of sent slash commands?
For example, a user types "/myslashapp" in a specific channel. My app will need to do something like call an endpoint every 30 seconds and check if the "/myslashapp" command was sent. If it was, my app should trigger a Lambda function in AWS.
Based on reading the Slack API docs, I haven't found any way to do this other than perhaps the RTM API, though it seems like overkill and still requires an open socket.
No. The Slack API has no built-in support for pulling slash command requests from a queue after the fact; they are only pushed to you by Slack when they happen.
The RTM API might work for you, because the connection to Slack is initiated from your side, so - provided your firewall allows outbound connections - it also works from within an intranet. However, you cannot receive slash commands over the RTM API, nor use any of the other interactive Slack features like buttons; only plain messages and events.
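For illustration, here is roughly what consuming messages over RTM looks like; a minimal sketch assuming the slack_sdk package and a bot token in the SLACK_BOT_TOKEN environment variable (note again that slash command invocations never arrive on this connection):

```python
import os

from slack_sdk.rtm_v2 import RTMClient

# Outbound WebSocket connection initiated from your side, so no inbound
# firewall rule is needed.
rtm = RTMClient(token=os.environ["SLACK_BOT_TOKEN"])

@rtm.on("message")
def handle(client: RTMClient, event: dict):
    # Plain channel messages arrive here; "/myslashapp" typed as a slash
    # command does not, only ordinary text that happens to contain it.
    if "myslashapp" in event.get("text", ""):
        client.web_client.chat_postMessage(
            channel=event["channel"],
            text="Saw your message - triggering the Lambda would happen here.",
        )

rtm.start()
```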
You could implement your own bridging solution and poll it, but I don't think a polling approach would work well, because it adds a lot of latency for your app: users expect an immediate response to their slash command, not a delay of 30 seconds or more.
So in summary I think you only have two valid options:
Host your app internally and use a secure tunnel like ngrok to expose a public URL for your app.
Run your app on the Internet and give it a secure connection to your intranet for accessing internal data (similar to how a shopping website works: a public app on the Internet that can also transmit orders to the business applications on the company's intranet).
How would one implement a comprehensive chat system using websockets within FastAPI? Specifically, keeping the following in mind:
Multiple chat rooms, with many-to-many relationships between users and rooms
Storing messages in a SQL or NoSQL database for persistence
Security: authentication and possibly encryption
I've looked at some libraries, but useful, real-world implementations are regrettably few and far between.
Any advice or pointers to places with more information would be a great help!
For chat rooms you could use FastAPI's built-in WebSocket support and combine it with Redis pub/sub or PostgreSQL's pg_notify to fan messages out to all participants in a room.
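As a starting point, here is a minimal sketch of one WebSocket endpoint per room fanned out through Redis pub/sub; it assumes the redis-py package with asyncio support, and the room_id path parameter and channel naming are illustrative:

```python
import asyncio

import redis.asyncio as redis
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
r = redis.Redis()

@app.websocket("/ws/{room_id}")
async def chat(websocket: WebSocket, room_id: str):
    await websocket.accept()
    pubsub = r.pubsub()
    await pubsub.subscribe(f"room:{room_id}")

    async def reader():
        # Forward everything published to this room to the connected client.
        async for msg in pubsub.listen():
            if msg["type"] == "message":
                await websocket.send_text(msg["data"].decode())

    task = asyncio.create_task(reader())
    try:
        while True:
            text = await websocket.receive_text()
            # Publish so every app instance subscribed to the room sees it.
            await r.publish(f"room:{room_id}", text)
    except WebSocketDisconnect:
        task.cancel()
        await pubsub.unsubscribe(f"room:{room_id}")
```

Because the fan-out goes through Redis rather than in-process state, this keeps working when you run several app instances behind a load balancer.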
Storing messages in PostgreSQL is a solid choice because of its long history and stability.
Authentication can be handled with FastAPI's OAuth2 support. Authorization can be handled with OAuth2 scopes, which are covered in the Advanced Security section of the FastAPI documentation. Encryption in transit is provided by HTTPS on the reverse proxy that you put in front of your app.
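A minimal sketch of scope-based authorization with FastAPI's OAuth2 helpers; token validation is stubbed out (in a real app you would decode a JWT or look the token up in your user store), and the scope names are made up for the example:

```python
from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import OAuth2PasswordBearer, SecurityScopes

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(
    tokenUrl="token",
    scopes={"chat:read": "Read messages", "chat:write": "Post messages"},
)

def current_user(security_scopes: SecurityScopes, token: str = Depends(oauth2_scheme)):
    # Stub: pretend every valid token belongs to a user holding only "chat:read".
    granted = {"chat:read"}
    for scope in security_scopes.scopes:
        if scope not in granted:
            raise HTTPException(status_code=403, detail="Not enough permissions")
    return {"username": "demo", "scopes": granted}

@app.post("/rooms/{room_id}/messages")
def post_message(room_id: str, user: dict = Security(current_user, scopes=["chat:write"])):
    # Only reachable with a token that carries the chat:write scope.
    return {"room": room_id, "by": user["username"]}
```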
There aren't any fully ready-made libraries that provide everything out of the box, but breaking the problem down into smaller pieces and then working on those will get you pretty far.
Write down what fields/data you want to store about your users, chat rooms, messages.
Implement those basic models in FastAPI probably using SQLAlchemy.
Wire those models up to API endpoints so that you can exercise them in Swagger (list chat rooms, get and post messages into chat rooms); a sketch of these first steps follows after this list.
Implement a WebSocket endpoint in FastAPI that echoes back everything sent to it. That should let you wire up some client-side JavaScript for sending and receiving messages over the websocket.
Modify your existing message-storing endpoint to also publish the message to a Redis channel, and change your WebSocket endpoint to subscribe to that channel.
Add authentication to your endpoints: at first basic username/password, later more advanced configurations.
Add a reverse proxy with HTTPS in front, and voilà.
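To make the first three steps concrete, here is a rough sketch of minimal SQLAlchemy (1.4+) models plus one endpoint that lists a room's messages; the field names and the SQLite URL are assumptions, not a prescription:

```python
from datetime import datetime

from fastapi import FastAPI
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String, Text,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()
engine = create_engine("sqlite:///chat.db")

class ChatRoom(Base):
    __tablename__ = "chat_rooms"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), unique=True, nullable=False)

class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    room_id = Column(Integer, ForeignKey("chat_rooms.id"), nullable=False)
    author = Column(String(100), nullable=False)
    body = Column(Text, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)

Base.metadata.create_all(engine)
app = FastAPI()

@app.get("/rooms/{room_id}/messages")
def list_messages(room_id: int):
    # Shows up in Swagger automatically; add POST endpoints the same way.
    with Session(engine) as session:
        rows = session.query(Message).filter_by(room_id=room_id).all()
        return [{"author": m.author, "body": m.body} for m in rows]
```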
Suppose I develop a REST service hosted in Apache with a Python plugin that services GET, PUT, DELETE and PATCH, and this service is consumed by an Angular client (or another browser technology that talks REST). How do I make it scalable with RabbitMQ (AMQP)?
Potential Solution #1
Multiple Apache instances still face the browsers' HTTP calls directly.
Each Apache instance uses an AMQP plugin and posts the request as a message to a queue
Python microservices monitor the queue, pull a message, service it, and return a response (see the worker sketch after this list)
The response is passed back to the Apache plugin, which in turn generates the HTTP response
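For reference, here is roughly what the worker side of that pattern looks like with pika (RabbitMQ's Python client), following the standard RPC-style request/reply setup; the queue name and payload fields are assumptions, and the Apache-side plugin would be the one publishing with a reply_to queue and correlation_id:

```python
import json

import pika

def handle_request(payload: dict) -> dict:
    # Whatever the microservice actually does for GET/PUT/DELETE/PATCH.
    return {"status": "ok", "echo": payload}

def on_request(ch, method, props, body):
    response = handle_request(json.loads(body))
    # Send the result back to the reply queue named by the caller.
    ch.basic_publish(
        exchange="",
        routing_key=props.reply_to,
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=json.dumps(response),
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="rest_requests")
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="rest_requests", on_message_callback=on_request)
channel.start_consuming()
```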
Does this mean the Python microservice no longer has any HTTP server code at all? That would change that component a lot. Perhaps it's best to decide upfront whether you want to use this pattern, since it seems it would be quite a task to rip out the HTTP server code later.
Are there other potential solutions? I am genuinely puzzled as to how we're supposed to take a classic REST server component and make it scalable with RabbitMQ/AMQP with minimal disruption.
I would recommend switching from WSGI to ASGI (nginx can help here). I'm not sure why you think RabbitMQ is the solution to your problem, as nothing you described seems like it would be solved by that approach.
ASGI is not supported by Apache as far as I know, but it allows the server to go do work and, while that work is in progress, keep servicing new requests that come in (a gross oversimplification).
If for whatever reason you really want to use job workers (RabbitMQ, etc.), then I would suggest returning a "token" to the user (really just the job_id); they can then call back with that token, and the service reports either the current job status or the result.
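A minimal sketch of that job-token pattern, with an in-memory job store and FastAPI background tasks standing in for a real queue and worker; the endpoint paths, field names, and store are all illustrative:

```python
import uuid

from fastapi import BackgroundTasks, FastAPI, HTTPException

app = FastAPI()
jobs = {}  # job_id -> {"status": ..., "result": ...}; in-memory stand-in for a real store

def run_job(job_id: str, payload: dict):
    # Simulate the slow work a RabbitMQ worker would do.
    jobs[job_id] = {"status": "done", "result": {"echo": payload}}

@app.post("/jobs")
def submit_job(payload: dict, background_tasks: BackgroundTasks):
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    background_tasks.add_task(run_job, job_id, payload)
    # The client keeps this token and polls /jobs/{job_id} for the outcome.
    return {"job_id": job_id}

@app.get("/jobs/{job_id}")
def job_status(job_id: str):
    if job_id not in jobs:
        raise HTTPException(status_code=404, detail="Unknown job")
    return jobs[job_id]
```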
I am using Twilio to send and receive SMS messages from a Python application. The issue is that their tutorials use ngrok to get through the firewall, but I don't want to have to run ngrok every time I run my app, and since the URL changes every time ngrok starts, I have to update the webhook URL on Twilio each time. Is there a better way around this? Is this something that requires a server?
There are two options that you have.
The paid option of ngrok allows you to set a persistent URL so that you don't have to change the webhook URL on Twilio each time.
If you have a server, then you would also be able to set a persistent url to your server.
Unfortunately, the free version of ngrok does not allow you to set a persistent URL.
You can look at going Serverless with Twilio Functions (Node.js/JavaScript).
Building Apps with Twilio Functions
https://support.twilio.com/hc/en-us/articles/115007737928-Building-apps-with-Twilio-Functions
I'm trying to create a Slack App (see here), but I'm having incredible difficulty with how to create a Redirect URI.
Slack states the following:
You must specify at least one redirect URL for OAuth to work. If you pass a URL in an OAuth request, it must (at least partially) match one of the URLs you enter here.
I have a rudimentary understanding of a Redirect URI conceptually, but I have no idea how to go about actually getting this Redirect URI that Slack requires.
I've successfully used all of Slack's integrations with Python, including Real Time Messaging, but setting up a Redirect URI seems to require a special server or a website.
As already mentioned in the comments, you will need a publicly reachable webserver to host the script that installs your Slack app. The redirect URL is then simply the URL of your installation script.
Basically any webserver or script-hosting service that runs your favorite script flavor (e.g. PHP or Python) will work. See also this answer on how the OAuth process can be implemented.
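To give an idea of what that installation script does, here is a minimal sketch of the handler behind the redirect URL: Slack calls it with a temporary code parameter, which is exchanged for an access token. It assumes Flask and the slack_sdk package, and the route and environment variable names are made up:

```python
import os

from flask import Flask, request
from slack_sdk import WebClient

app = Flask(__name__)

@app.route("/slack/oauth/callback")
def oauth_callback():
    # Slack redirects the installing user here with ?code=...
    code = request.args["code"]
    client = WebClient()  # no token needed for the exchange itself
    response = client.oauth_v2_access(
        client_id=os.environ["SLACK_CLIENT_ID"],
        client_secret=os.environ["SLACK_CLIENT_SECRET"],
        code=code,
    )
    # Persist response["access_token"] and team info for the installing workspace.
    return "App installed!"
```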
The redirect URL works without SSL, but for security reasons SSL is strongly recommended. Many other Slack features (e.g. interactive buttons) also require SSL on your webserver.
Another option is to run a webserver on your local machine (e.g. WAMP for Windows) and open it to the Internet through a secure tunnel (e.g. ngrok). For developing and testing this is actually the better alternative, since you can test and fix your Slack app locally without having to deploy every change to a public server.
However, for running a public Slack app (e.g. one that is listed in the Slack App Directory) I would strongly recommend putting the production version of your app on a public webserver.
If you're just trying to get it up so that you can authorize another workspace, you can always use 'http://localhost'. After authorizing, Slack will try to redirect you there and you won't see anything useful, but the authorization should still have taken place, I believe.
Of course, if you're looking for the authorization code, you will have to pull it directly from the browser URL. ... it's very manual.
I am somewhat new to RESTful APIs.
I'm trying to implement a Python system that will control various tasks across multiple computers, with one computer acting as the controller.
I would like all these tasks to be divided amongst multiple users (e.g. task foo runs as user foo, and task bar runs as user bar) while handling all requests with a central system. The central system should also act as a simple web server and be able to serve basic pages for status purposes.
Is it possible to have each user register a "page" with a central server for the API and have the server pass all requests on to the programs (probably written in Python)?
Sure, you just need the clients to POST their notification URL to the server, so that the server can then POST the requests back to them. Some people call these webhooks.
Also see RESTful Webhooks.
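A minimal sketch of that registration-and-callback flow, assuming Flask on the central server and the requests library for the outgoing calls; the endpoint paths and payload fields are made up for the example:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
registered_urls = []  # in-memory registry; persist this in a real system

@app.route("/register", methods=["POST"])
def register():
    # A client announces where it wants to be notified,
    # e.g. {"url": "http://worker-host:9000/tasks"}
    registered_urls.append(request.json["url"])
    return jsonify(registered=request.json["url"])

@app.route("/dispatch", methods=["POST"])
def dispatch():
    # The controller POSTs the task back to every registered client.
    task = request.json
    results = [
        {"url": url, "status": requests.post(url, json=task, timeout=5).status_code}
        for url in registered_urls
    ]
    return jsonify(results)
```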
Yes. Keep in mind that being RESTful is merely a way to organize your web application's URLs in a standard way. You can build your web application to do whatever you want.