I have a quick question regarding sending and receiving STOMP messages over an AMQ broker.
I have a Python script that sends data as STOMP messages to an AMQ instance, and another script that listens on that message's topic and grabs it. Everything is working as expected so far, but I'm curious about the security of the system. Would someone on the network be able to use a packet sniffer or similar tool to read the messages being sent/received by the broker? Or are they unable to see the data without the AMQ server login? My gut tells me it's the latter, but I wanted to confirm.
For context, my sender sends out the data using stomp.py:
conn = stomp.Connection(host_and_ports=[(ip, port)])
conn.connect(wait=True)
conn.send(body=clean_msg, destination=f"/topic/{topic}")
Is that conn.send call encrypting or protecting my data in any way? If it isn't, how do I go about doing so? All my research into ActiveMQ and STOMP encryption leads me to encrypting the login or using SSL to log in to the AMQ server, which leads me to believe that as long as the login is secure, I should be fine.
Thanks in advance!
STOMP is a text-oriented protocol, so unless you're using SSL/TLS, anybody with access to the network can capture the packets and fairly easily read the message data being sent from your producer(s) to the broker and from the broker to the consumer(s).
From what I can tell, your Python STOMP client is not using SSL/TLS, so your transmissions would not be protected.
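For example, stomp.py can wrap the connection in TLS before connecting. Here's a minimal sketch; the TLS port (61614), the CA bundle path, and the assumption that your broker exposes an SSL/TLS-enabled STOMP transport and that your stomp.py version provides Connection.set_ssl() are mine, not taken from the question:

import ssl
import stomp

conn = stomp.Connection(host_and_ports=[(ip, 61614)])
# configure TLS before connect(); verify the broker against a known CA
conn.set_ssl(for_hosts=[(ip, 61614)],
             ca_certs="/path/to/broker_ca.pem",
             ssl_version=ssl.PROTOCOL_TLS_CLIENT)
conn.connect(wait=True)
conn.send(body=clean_msg, destination=f"/topic/{topic}")

The broker side also needs a TLS-enabled transport connector (in ActiveMQ, a stomp+ssl entry in activemq.xml) for this to work.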
Furthermore, once the data is stored on the broker, anybody with file-system access would be able to read it, as it is not encrypted at rest. You can, of course, mitigate this risk by enforcing standard file-system access controls.
I am a beginner with Tornado (a Python-based web server). I need to create an application with public chat rooms and private messaging between two users. I have been looking for a good Tornado tutorial to implement this, but all I have found shows how to create WebSockets: once connected to a socket you can send messages to the server, and you can open multiple browser tabs to simulate multiple users. All users can send messages to the server, and every other user can see all of those messages. But I need to create a private chat between two users, like WhatsApp. Can I do the same with Tornado? Please help me out. Any help would be appreciated.
If you can form sockets from the client to the server, then yes!
Sockets are just data streams. You will have to add chat room request data and authentication to the sockets so the server can direct each client to the appropriate chat 'room' (or drop the connection if authentication fails).
After that, it's the same as what you have implemented already.
For secure chat, you'll need some form of encryption on top of all this, at least so that clients know they are talking to the correct server. From there, it's a matter of adding encryption so that clients know they are talking to the right clients.
The final step would be to implement peer to peer capabilities after authenticating at the server.
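To make the routing idea concrete, here is a rough Tornado sketch; the 'user' query argument standing in for real authentication and the 'recipient:text' wire format are placeholders of mine:

import tornado.ioloop
import tornado.web
import tornado.websocket

clients = {}  # map authenticated username -> open WebSocket

class PrivateChatHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # stand-in for real authentication: ws://host:8888/chat?user=alice
        self.user = self.get_query_argument("user")
        clients[self.user] = self

    def on_message(self, message):
        # placeholder wire format: "recipient:text"
        recipient, _, text = message.partition(":")
        target = clients.get(recipient)
        if target:
            target.write_message(f"{self.user}: {text}")  # deliver privately

    def on_close(self):
        clients.pop(self.user, None)

app = tornado.web.Application([(r"/chat", PrivateChatHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()

Because the server only writes to the one socket registered for the recipient, other connected users never see the message, which is the essence of a private chat.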
Most IMAP email clients have the ability to receive emails automatically in real time, without refreshing on a fixed interval. This is because the email client keeps a connection open to the server, and if there isn't any email data exchanged to keep the connection alive, the email client sends a NOOP command on a fixed interval (just as other TCP protocols have their own keepalive packets). In Thunderbird, this behavior can be controlled by the 'Allow immediate server notifications when new messages arrive' option.
I'm writing a Python program that needs to know instantly when emails come in. The standard procedure for receiving emails is (after connection and login) select('inbox'), search(None, 'ALL') (or some specific search term), and fetch(). I can't figure out how to listen for new emails in real time. Every guide I've read suggests running the search() and fetch() functions in a loop. I've tried running the read() function while sending myself emails, but I have never seen read() output any data. Perhaps the server needs to know to push emails to the client? I haven't been able to confirm or refute this theory.
How can I instantly receive new emails with imaplib, or a similar Python library?
While I did not know it at the time of originally posting this question, I was looking for a solution that implements the IMAP IDLE command, as defined in RFC 2177.
Since you want to fetch emails asynchronously, you should use the aioimaplib library: https://github.com/bamthomas/aioimaplib
I used aioimaplib in my code, but imaplib2 also supposedly supports IDLE; see: https://web.archive.org/web/20090814230913/http://blog.hokkertjes.nl/2009/03/11/python-imap-idle-with-imaplib2/
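Here is a sketch of an IDLE loop adapted from aioimaplib's documented usage; the host and credentials are placeholders:

import asyncio
from aioimaplib import aioimaplib

async def idle_loop(host, user, password):
    client = aioimaplib.IMAP4_SSL(host=host)
    await client.wait_hello_from_server()
    await client.login(user, password)
    await client.select("INBOX")
    while True:
        idle = await client.idle_start(timeout=60)  # enter IDLE (RFC 2177)
        push = await client.wait_server_push()      # returns on e.g. '* 1 EXISTS'
        print("server push:", push)
        client.idle_done()                          # leave IDLE before other commands
        await asyncio.wait_for(idle, 30)

asyncio.run(idle_loop("imap.example.com", "me", "secret"))

When the server announces a new message (an EXISTS push), you would call idle_done(), fetch the new mail, and then re-enter IDLE.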
I am creating a collaborative note-making app in Python.
Here, one person running the app on a computer can create the server; subsequently, the changes on the screen ([color, pixel], where pixel = [x, y]) will be transmitted to everyone else connected to the server.
I am using kivy for creating the app. My question is with respect to transmitting the data over the server.
I can create the server using this:
import socket
import subprocess

ip_address = socket.gethostbyname(socket.gethostname())
# execfile() runs a Python source file; to launch the dev server, spawn a process
subprocess.run(["python", "manage.py", "runserver", ip_address + ":8000"])
Now, how do others connect to the server and request the data (assuming the above code is correct)? Also, how do I send the data in Django?
Well, Django is a framework for creating a site or API that is reachable through the HTTP protocol. This has several consequences for you:
The server cannot send a message to a client unless the client asks. HTTP is a "request-response" protocol: the client sends a request (for example, http://server.com/getUpdates?id=100500) and gets a response from the server.
Making clients ask the server for updates all the time (polling) is bad practice and will likely lead to a self-inflicted DoS on the server.
Although you could use WebSockets, using Django for such a task is really overkill.
Summarizing, you need a reliable duplex channel for sending data in both directions. I'd start with a plain TCP server rather than HTTP. Fortunately, the Python stdlib has a module you can start with: socketserver.
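A minimal sketch of that suggestion; the port and the one-update-per-line wire format ("color x y") are made up for illustration:

import socketserver

class UpdateHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # each connected client sends one drawing update per line
        for line in self.rfile:
            update = line.decode().strip()  # e.g. "ff0000 120 45"
            print(f"{self.client_address[0]} sent: {update}")
            self.wfile.write(b"OK\n")       # acknowledge receipt

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 8000), UpdateHandler) as srv:
        srv.serve_forever()

A real collaborative editor would also keep a list of connected clients and rebroadcast each update to all of them, but the handler above shows the basic shape.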
Additional reading
TCP
UDP (you will probably want this for broadcasting)
Berkeley sockets (a socket standard underlying socketserver module)
TCP vs. UDP
When deciding which protocol to use, the following aspects should be considered:
TCP is reliable. Messages never disappear silently: if there was a network error, the message will be resent, and if there's no connection, an explicit error will be raised. TCP uses several algorithms to fit into the network channel. It is an intelligent protocol.
UDP is unreliable. It has none of the guarantees TCP provides: packets can disappear or arrive reordered. But UDP messages are lightweight, and in experienced hands it powers systems such as networked action games and streaming video (where lost and reordered messages aren't crucial and TCP becomes too slow).
So I'd recommend starting with TCP. It's much easier to get working quickly and correctly than UDP. Switch to UDP only once you have some experience with TCP and there are a lot of people using your app who want the lowest latency possible.
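If you do later experiment with UDP for broadcasting, the sending side is only a few lines; the port and payload format here are made up:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# best-effort delivery: a lost or reordered datagram is simply gone
sock.sendto(b"ff0000 120 45", ("255.255.255.255", 8001))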
I'm building an application that needs to send and receive emails.
However I do not want to have a separate email server (or use IMAP and POP3), since I need to create/delete/manage inboxes on the fly, with no email inbox passwords, etc.
I have an email storage database in place, and I can receive emails by using a custom smtpd server, replacing Postfix. However, that way I'm not able to send emails via Postfix (using smtplib, connecting to Postfix on port 25 and sending emails).
Any solution to this problem? How to send emails with a custom smtp server? Can I configure postfix to relay all incoming emails to a custom smtp server running in another port, and still use postfix on port 25 to send emails?
Thanks for your time
By using a custom SMTP server, you run the risk of inadvertently creating security holes or violating the SMTP protocol in some way. With so many great SMTP servers out there (Postfix, exim, sendmail...), that doesn't sound like a good option to me.
The easiest way I can think of to solve that issue is to use Postfix to relay inbound and outbound e-mail.
Inbound e-mail can be configured to be piped to an application, and outbound e-mail can be configured to be delivered by Postfix, either directly or relayed through a different server.
This way, instead of a custom SMTP server, you can use an application that is able to parse RFC822-compliant message files. You get exactly the same result, but without the overhead of having to implement the SMTP protocol.
This approach probably won't scale well should you need to receive a high volume of messages: every message will fork+exec a new process. If that becomes a requirement, a good approach would be to keep a custom SMTP server to do that job, but let Postfix relay the messages to it; you will then benefit from Postfix's architecture sitting in front of your parser.
Assuming you follow the approach of piping the messages to an application, all you need to do in Postfix is:
Configure Postfix's alias_maps parameter to look for such a map:
alias_maps = hash:/etc/aliases, hash:/etc/postfix/app-aliases
Then, configure the map to pipe messages sent to each address into an application:
test: "|/usr/local/bin/your-app"
As usual, don't forget to run postalias /etc/postfix/app-aliases.
This will make a message sent to test@yourdomain be piped into /usr/local/bin/your-app, which acts as an e-mail gateway to your application.
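The piped program itself can be very simple, since Postfix hands it the complete RFC822 message on stdin. A sketch of what the hypothetical /usr/local/bin/your-app might look like, using only the stdlib:

#!/usr/bin/env python3
import sys
from email import policy
from email.parser import BytesParser

# Postfix pipes the full message to stdin; parse it with the stdlib
msg = BytesParser(policy=policy.default).parse(sys.stdin.buffer)

# hand off to your application however you like; logging headers is a stand-in
with open("/tmp/app-mail.log", "a") as log:
    log.write(f"{msg['From']} -> {msg['To']}: {msg['Subject']}\n")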
I'm looking for a way to take gads of inbound SMTP messages and drop them onto an AMQP broker for further routing and processing. The messages won't actually end up in a mailbox, but instead SMTP is used as a message gateway.
I've written a Postfix After-Queue Content Filter in Python that drops the inbound SMTP message onto a RabbitMQ broker. That works well - I get the raw message over a queue and it gets picked up nicely by a consumer. The issue is that the AMQP connection is created and torn down with each message... the Content Filter script gets re-executed from scratch each time. I imagine that will end up being a performance issue.
If I could leverage something re-entrant I could reuse the connection. Or maybe I'm just approaching the whole thing incorrectly...
Making an AMQP connection over plain TCP is pretty quick. If you're using SSL then perhaps it's another story, but are you sure that enqueueing the raw message onto the AMQP exchange is going to be the bottleneck? My guess is that actually delivering the message via SMTP will be much slower, so how fast you can queue things up isn't going to affect the throughput of the system.
If this piece does turn out to be a bottleneck, I rather like creating little web servers using Sinatra or Rack, but it sounds like you might prefer a Python-based solution: have the Postfix content filter perform an HTTP POST using curl to a web server, which maintains a persistent connection to the AMQP server.
Of course now you have an extra moving part for which you need to think about monitoring, error handling and security.
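A sketch of that middle tier in Python, using the stdlib HTTP server and the pika client; the queue name and port are made up. Because HTTPServer handles one request at a time, sharing a single BlockingConnection across requests is safe here:

from http.server import BaseHTTPRequestHandler, HTTPServer
import pika

# one persistent AMQP connection, reused across every request
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="inbound_mail")

class EnqueueHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers["Content-Length"]))
        channel.basic_publish(exchange="", routing_key="inbound_mail", body=raw)
        self.send_response(204)  # accepted, nothing to return
        self.end_headers()

HTTPServer(("127.0.0.1", 8080), EnqueueHandler).serve_forever()

The content filter then becomes a one-liner along the lines of curl -s --data-binary @- http://127.0.0.1:8080/ instead of opening its own AMQP connection each time.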
Use SwiftMQ. It has a JavaMail bridge which receives your emails from an IMAP or POP3 account and converts them into JMS messages, which can then be consumed by an AMQP 0.9.1, AMQP 1.0, or of course JMS client.
You can make Postfix deliver all or any of your emails to an external program, where you can throw them anywhere you want. Some examples can be found here.