I would like to make a multi-threaded UDP server in Python.
The purpose is to be able to connect several clients to the server (not socket connections, but accounts with a username and password), interact with each of them, and perform some actions on the server, all at the same time.
I am a little confused by all the different types of threading and I don't know what to use.
To be clearer, this is exactly what I want to do at the same time:
Wait for clients to send data for the first time and register their IP in a database
Interact with "connected" clients by waiting for them to send datagrams and responding to them
Be able to act on the server itself, for example to change a client's password in my database
I would have a look at a framework that is good at handling asynchronous I/O. The idea is not to have one thread per socket that blocks until it receives data, but instead to let a single thread handle many sockets at once. This scales well if you want your server to handle many clients.
For example:
Gevent - "a coroutine-based Python networking library", example
Twisted - "an event-driven networking engine", example
Eventlet - "a concurrent networking library", example (TCP, but it uses a patched socket so you can also refer to the Python wiki page about UDP Communication)
I have an application, foo, which takes in data, does stuff to it, and then publishes the treated data over AMQ for another downstream application to grab. Until now, foo has always gotten its data by connecting to another AMQ server to which another script publishes packetized data (a lot of handwaving here, but the specifics don't really matter).
Recently a change was made, and foo needs to be able to grab its data from a UDP socket. Can AMQ connect to this socket and receive/listen to the data being transmitted over it? From my understanding, AMQ uses TCP to establish a connection to the client, and some initial research points me to this UDP Transport documentation from Apache, but not much else.
Alternatively, I could write a rough UDP socket listener in Python and then publish those messages to AMQ for foo to grab, but it would be optimal to have it all included in foo itself.
Not necessarily looking for an exhaustive solution here; quick and dirty would be enough to get me started.
Thanks!
ActiveMQ itself is a broker and therefore doesn't connect to sockets and listen for messages. It is the job of a client to connect to the broker and send and/or receive messages.
The UDP transport documentation is just theoretical as far as I know. It is technically possible to use UDP as the base of a traditional messaging protocol, but I've never actually seen it done, since UDP is unreliable. The documentation even says, "Note that by default UDP is not reliable; datagrams can be lost so you should add a reliability layer to ensure the JMS contract can be implemented on a non-reliable transport." Adding a "reliability layer" is impractical when TCP can simply be used instead. All of the protocols which ActiveMQ supports (i.e., AMQP, STOMP, MQTT, OpenWire) fundamentally require a reliable network transport.
I definitely think you'll need some kind of intermediary process to read the data from the UDP socket and push it to the broker.
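As a rough sketch of such an intermediary, assuming the third-party stomp.py client and ActiveMQ's default STOMP port (61613); the queue name and credentials are placeholders:

import socket
import stomp

UDP_HOST, UDP_PORT = '0.0.0.0', 5005   # where the datagrams arrive (placeholder)
QUEUE = '/queue/foo.input'             # hypothetical destination for foo to consume

conn = stomp.Connection([('localhost', 61613)])
conn.connect('admin', 'admin', wait=True)   # placeholder credentials

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_HOST, UDP_PORT))

while True:
    data, addr = sock.recvfrom(65535)   # one datagram at a time
    conn.send(destination=QUEUE, body=data.decode('utf-8', 'replace'))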
I am creating a collaborative note-making app in Python.
Here, one person running the app can create the server; subsequently, the changes on the screen ([color, pixel], where pixel=[x,y]) will be transmitted to the others connected to the server.
I am using Kivy to create the app. My question is about transmitting the data over the server.
I can create server using this:
import socket
ip_address=socket.gethostbyname(socket.gethostname())
execfile( "manage.py runserver "+ip_address+":8000" )
Now, how do the others connect to the server and request the data (assuming the above code is correct)? Also, how do I send the data in Django?
Well, Django is a framework that allows creating a site or API that is reachable through the HTTP protocol. This has several consequences for you:
The server cannot send a message to a client unless the client asks. HTTP is a "request-response" protocol: the client sends a request (for example, http://server.com/getUpdates?id=100500) and gets a response from the server.
Creating clients that ask the server for updates all the time is bad practice, and will probably lead to a DoS of your own server.
Although you could use WebSockets, using Django for such a task is really overkill.
Summarizing, you need a reliable duplex channel for sending data in both directions. I'd start with a TCP server rather than HTTP. Fortunately, the Python stdlib has a module you can start with: socketserver.
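A minimal sketch of what that can look like, using a thread-per-connection echo server built on socketserver (host and port are placeholders):

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile/wfile are file-like wrappers around the client socket
        for line in self.rfile:
            self.wfile.write(line)   # echo each line back to the client

if __name__ == '__main__':
    # one thread per connection; replace the echo logic with your own protocol
    with socketserver.ThreadingTCPServer(('0.0.0.0', 8000), EchoHandler) as server:
        server.serve_forever()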
Additional reading
TCP
UDP (you will probably want this for broadcasting)
Berkeley sockets (the socket standard underlying the socketserver module)
TCP vs. UDP
When deciding which protocol to use, the following aspects should be considered:
TCP is reliable. Messages never disappear implicitly. If there is a network error, the message will be resent; if there is no connection, an explicit error is raised. TCP uses several algorithms to fit into the network channel. It is an intelligent protocol.
UDP is unreliable. It has none of the features TCP has: packets can disappear or arrive reordered. But UDP messages are lightweight, and in experienced hands they power systems such as networked action games and streaming video (lost and reordered messages aren't crucial there, and TCP becomes too slow).
So I'd recommend starting with TCP. It's much easier to get working quickly and correctly than UDP. Switch to UDP once you have some experience with TCP and there are a lot of people using your app who want the lowest latency possible.
I have a Python program that reads values from an ADC, writes them to a file, and also sends them via TCP when a connection is available. I can send the data fine; however, as data is constantly being read, I would like to keep the connection open. How do I get the client to check that the server has more data to send, and thus keep the connection open?
This scenario seems very similar to one of our applications. We use ZeroMQ for this.
Using ZeroMQ with PUB/SUB
The PyZMQ documentation has an example of using PUB/SUB.
The data provider creates a PUB socket and sends messages to it.
The data consumer sets up a SUB socket and reads messages from it.
Typically, the PUB socket is the fixed part of the infrastructure, so it binds to some port and the SUB connects. But if you like, you can switch it around and it works too.
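A minimal sketch of both sides, assuming pyzmq is installed; the port is a placeholder, and both sockets live in one process only to keep the example runnable:

import time
import zmq

ctx = zmq.Context()

# provider side: bind a PUB socket; sends never block and are
# silently dropped while no subscriber is connected
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://*:5556')

# consumer side (normally a separate process): connect a SUB socket
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://localhost:5556')
sub.setsockopt_string(zmq.SUBSCRIBE, '')   # subscribe to everything

time.sleep(0.5)              # give the SUB time to connect (the "slow joiner" issue)
pub.send_string('adc 42.0')
print(sub.recv_string())     # -> adc 42.0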
Advantages are:
the provider sends messages to the PUB socket and does not block on it
reconnects are handled automatically
if there is no consumer or connection, the PUB socket silently drops the messages
the code to implement this is very short
Other messaging patterns with ZeroMQ
PUB/SUB is just one option; there are other combinations like PUSH/PULL, REQ/REP, etc.
I have had some services running for years with PUSH/PULL and they have held up quite well (typically about six weeks pass before a restart is needed, and that is usually due to hardware problems rather than the ZeroMQ library).
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects to the server, the server sends the client a list of the other connected clients (user name, IP address), and then a person can connect to one or more people at a time while the server sets up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which from what I understand lets the server send the same data to multiple sockets (connections) while I just regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be:
Server side (accept process):
Create the socket, bind it to a well-known port (and on the appropriate interface), and listen
While (app_running)
Client_socket = accept (using serverSocket)
Spawn a new thread and pass this socket to the thread. That thread handles the client that just connected.
Continue, so that the server can keep accepting more connections.
Server-side client mgmt thread:
while app_running:
read the incoming message and store it in a queue or something
continue
Server side (group chat processing):
For all connected clients:
check their queues. If any message is present, send it to ALL the connected clients (including the client that sent it -- this serves as a sort of ACK)
Client side:
create a socket
connect to server via IP-address, and port
do send/receive.
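Putting the server pieces together, here is a minimal thread-per-client sketch of the broadcast approach described above (port and buffer size are placeholders):

import socket
import threading

clients = []                    # sockets of currently connected clients
clients_lock = threading.Lock()

def handle_client(conn):
    # per-client thread: read messages and broadcast them to everyone
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break           # client disconnected
            with clients_lock:
                for c in clients:
                    c.sendall(data)   # includes the sender, acting as an ACK
    with clients_lock:
        clients.remove(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5000))  # well-known port (placeholder)
server.listen()

while True:
    conn, addr = server.accept()
    with clients_lock:
        clients.append(conn)
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()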
There can be lots of improvement on the above. For example, the server could poll the sockets or use the "select" operation on a group of sockets. That would be more efficient, since having a separate thread for each connected client is overkill when there are many (think ~1 MB of stack per thread).
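A minimal sketch of that select-style alternative, using the stdlib selectors module so a single thread watches all sockets (port is a placeholder; it echoes rather than broadcasts, for brevity):

import selectors
import socket

sel = selectors.DefaultSelector()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, events in sel.select():
        if key.fileobj is server:
            conn, addr = server.accept()          # new client connected
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)         # echo back
            else:
                sel.unregister(key.fileobj)       # client disconnected
                key.fileobj.close()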
PS: I haven't really used the asyncore module, but I am guessing you would notice a performance improvement with it when you have lots of connected clients and very little processing.
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is that the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). Please help me out: how can I do this with a Python socket program (i.e., multiple-client to single-server connectivity, with clients sending data at the time gap specified by the server)? I will be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
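A rough sketch of the client side of that arrangement, assuming pyzmq; the addresses, ports, and message format are all placeholders:

import time
import zmq

ctx = zmq.Context()

# control channel: receive the sending interval from the server
control = ctx.socket(zmq.SUB)
control.connect('tcp://server-host:5557')     # hypothetical address/port
control.setsockopt_string(zmq.SUBSCRIBE, '')

# data channel: publish this client's readings for the listener(s)
data = ctx.socket(zmq.PUB)
data.connect('tcp://server-host:5558')

interval = 5.0                                # default until the server says otherwise
while True:
    try:
        # non-blocking check for a new interval published by the server
        interval = float(control.recv_string(flags=zmq.NOBLOCK))
    except zmq.Again:
        pass                                  # no new config; keep the current interval
    data.send_string('client reading')        # placeholder payload
    time.sleep(interval)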
We're using ZeroMQ and loving the simplicity it gives; there's no connection errors because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher and if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python; it requires a C compiler, but it is frighteningly fast, and the pub/sub example is a cut-and-paste, "golly, it works!" experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single client and single server
Multiple clients and single server
in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is a process of executing multiple threads simultaneously in a single process.
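As a bare-bones illustration of that definition (the worker function is hypothetical):

import threading

def worker(n):
    print('thread %d running' % n)

# several threads executing concurrently within a single process
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all threads to finish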
To understand it well, you can visit this link: https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.