Proper way to implement a Direct Connect client in Twisted?

I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.
Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.
What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better to have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client?
If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.
Thanks!

Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor), and this way you will avoid the overhead associated with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface with a library whose functions could block on you.
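For illustration, here is a minimal sketch of that single-reactor layout: one outgoing connection to the hub plus a listening port for incoming peer connections, all driven by the same reactor. The hostname, ports, and line-based protocol logic are placeholders, not real Direct Connect framing.

```python
from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory, Factory
from twisted.protocols.basic import LineReceiver


class HubConnection(LineReceiver):
    """Persistent connection to the central server."""
    def lineReceived(self, line):
        print("hub:", line)


class PeerConnection(LineReceiver):
    """Direct connection from another client, e.g. for an upload."""
    def lineReceived(self, line):
        print("peer:", line)


class HubFactory(ClientFactory):
    protocol = HubConnection


class PeerFactory(Factory):
    protocol = PeerConnection


# One reactor drives both the outgoing hub connection and the
# listening socket for peer-to-peer transfers.
reactor.connectTCP("hub.example.com", 411, HubFactory())
reactor.listenTCP(412, PeerFactory())
reactor.run()
```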

Related

Concurrency test of UDP server

I need to write a script to stress test a UDP server. It needs to simulate about 5000 online users and about 400 concurrent users. I couldn't find a similar tool on Google, so I wrote a UDP client myself. But I had a problem simulating multiple clients. The solution I came up with:
One socket per client
How do I distinguish online users from concurrent users when using multithreading and multiple sockets to simulate clients?
I encapsulated the client in a class; in the class's __init__ method, a variable is incremented to record the number of online users. With this approach, concurrent operations cannot be performed successfully.
Is it feasible to create 5000 sockets with threads? Is this a best practice? Will performance be good?
Other approaches?
Is there another approach I haven't thought of? Am I on the wrong track?
Is there a mature testing framework that can be used for reference?
Finally, English is not my mother tongue; please forgive my typos and grammar. Thank you for reading, and I look forward to your reply.
There is the Apache JMeter tool, which is free, open source, and modular.
There is a UDP Request sampler plugin which adds support for the UDP protocol to JMeter.
The "5000 online users and 400 concurrent users" requirement may be interpreted in the following manner: real users don't hammer the system under test non-stop; they need some time to "think" between operations, i.e. read text, type a response, fill in forms, take a phone call, etc. So you need to introduce realistic think times using JMeter Timers so you can come up with a configuration where:
5000 users are "online" (connected to the server)
4600 are not doing anything, just "sleeping"
400 are actively sending requests
As long as your machine is capable of doing this without running out of CPU, RAM, network bandwidth, etc., it should be fine. Personally, I would use something like greenlet (see the sketch below).
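A minimal sketch of the greenlet approach using gevent (assumed installed); the server address, payload, and think-time range are placeholders:

```python
from gevent import monkey
monkey.patch_all()  # make blocking socket calls cooperative

import random
import socket

import gevent
from gevent.lock import BoundedSemaphore

SERVER_ADDR = ("127.0.0.1", 9999)   # hypothetical server address
ONLINE_USERS = 5000
CONCURRENT_USERS = 400

active = BoundedSemaphore(CONCURRENT_USERS)

def client(user_id):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    while True:
        with active:  # at most 400 clients are sending at any moment
            sock.sendto(b"ping from %d" % user_id, SERVER_ADDR)
            try:
                sock.recvfrom(1024)
            except socket.timeout:
                pass
        gevent.sleep(random.uniform(1, 10))  # "think time" while staying online

# 5000 greenlets stay online; the semaphore caps concurrent senders.
gevent.joinall([gevent.spawn(client, i) for i in range(ONLINE_USERS)])
```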

RPC Protocols comparison

I have to select a protocol/technology for communication in a client-server architecture, with support for both Python and C. The main requirements are:
Symmetrical communication between ends: clients establish a connection, and servers can send data back to clients through the same connection.
Avoid excessive overhead from HTTP or a big stack (if possible, direct TCP communication).
TLS/SSL support for secure communications.
Ease of implementation.
For that, I evaluated the following protocols/communication technologies. I would appreciate it if you could take a look at the table and tell me what you think, since most of the time it was quite hard to find the information I required for this analysis. In addition, I would also appreciate it if any of you could add more protocols/technologies to the table below.
(*1) TLS support for RPyC is based on a no-longer-supported Python library.
I use xmlrpc, but I think ZMQ is the best choice (see the sketch below).
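As a hedged sketch of why ZMQ fits the "server can send data back over the same connection" requirement, here is a DEALER/ROUTER pair using pyzmq (assumed installed); both ends are shown in one script, and the endpoint is a placeholder:

```python
import zmq

ctx = zmq.Context()

# --- server side ---
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://127.0.0.1:5555")   # hypothetical endpoint

# --- client side ---
dealer = ctx.socket(zmq.DEALER)
dealer.connect("tcp://127.0.0.1:5555")

dealer.send(b"hello")                        # client -> server
identity, msg = router.recv_multipart()      # ROUTER sees sender identity
router.send_multipart([identity, b"world"])  # server -> that same client
print(dealer.recv())                         # b'world'
```

ZeroMQ also has C bindings (libzmq), which covers the Python-and-C requirement.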

Python tcp socket client

I need a TCP socket client connected to a server to send and receive data.
But this socket must always be on, and I cannot open another socket.
I always have some data to send over time, and then later I process the answer to the data sent previously.
If I could open many sockets, I think it would be easier. But in my case I have to send everything over the same socket, asynchronously.
So the question is: what do you recommend using within the Python ecosystem (Twisted, Tornado, etc.)?
Should I consider node.js or another option?
I highly recommend Twisted for this:
It comes with out-of-the-box support for many TCP protocols.
It is easy to maintain a single connection: there is a ReconnectingClientFactory that deals with disconnections using exponential backoff, and LoopingCall makes it easy to implement a heartbeat (see the sketch below).
Stateful protocols are also easy to implement and intermingle with complex business logic.
It's fun.
I have a service that is exactly like the one you mention (single login, stays on all the time, processes data). It's been on for months working like a champ.
Twisted is possibly hard to get your head around, but the tutorials here are a great start. Knowing Twisted will get you far in the long run!
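Here is a minimal sketch of the pieces mentioned above: a persistent connection maintained by ReconnectingClientFactory, with a LoopingCall heartbeat. The host, port, and line format are placeholders.

```python
from twisted.internet import reactor, task
from twisted.internet.protocol import ReconnectingClientFactory
from twisted.protocols.basic import LineReceiver


class MyClient(LineReceiver):
    def connectionMade(self):
        self.factory.resetDelay()  # reset the backoff after a successful connect
        self.heartbeat = task.LoopingCall(self.sendLine, b"PING")
        self.heartbeat.start(30.0, now=False)  # heartbeat every 30 seconds

    def lineReceived(self, line):
        print("server said:", line)

    def connectionLost(self, reason):
        if self.heartbeat.running:
            self.heartbeat.stop()


class MyFactory(ReconnectingClientFactory):
    protocol = MyClient
    maxDelay = 60  # cap the exponential backoff at one minute


reactor.connectTCP("example.com", 1234, MyFactory())
reactor.run()
```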
"i have to send everything on the same socket asynchronously"
Add your data to a queue, and have a separate thread take items off the queue and send them via socket.send().
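A minimal sketch of that queue-plus-sender-thread approach (host and port are placeholders):

```python
import queue
import socket
import threading

HOST, PORT = "example.com", 1234
outbox = queue.Queue()

def sender():
    sock = socket.create_connection((HOST, PORT))
    while True:
        data = outbox.get()   # blocks until something is queued
        sock.sendall(data)

threading.Thread(target=sender, daemon=True).start()

# Any part of the program can now queue data without blocking:
outbox.put(b"some data\n")
```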

Interprocess messaging between two Python programs

We have two Python programs running on two Linux servers, and we want to send messages between them. The best idea so far is to create a TCP/IP server and client architecture, but this seems like a very complicated way to do it. Is this really best practice for such a thing?
I like zeromq for simple messaging; it's really lightweight, fast, and very flexible as well. Using AMQP messaging isn't a bad idea either, depending on the specifics of your situation; I've found kombu to be a very nice Pythonic library for that. You could also use xmlrpclib, or set up a simple REST API with bottle or flask. Every option has its place, so I'd investigate all your options.
This really depends on the kind of messaging you want and on the roles of the two processes. If it's proper "client/server", I would probably create a SimpleHTTPServer and then use HTTP to communicate between the two. You can also use xmlrpclib and its client to talk between them (see the sketch below). Manually creating a TCP server with your own custom protocol sounds like a bad idea to me. You might also consider using a message queue system to communicate between them.
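As a hedged sketch of the XML-RPC option (using the Python 3 module names; in Python 2 these were SimpleXMLRPCServer and xmlrpclib), with a placeholder port and method:

```python
from xmlrpc.server import SimpleXMLRPCServer

def echo(msg):
    return "got: " + msg

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(echo)  # exposes echo() to remote callers
server.serve_forever()

# --- on the other server ---
# import xmlrpc.client
# proxy = xmlrpc.client.ServerProxy("http://server-host:8000/")
# print(proxy.echo("hello"))
```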
You could use multiprocessing.managers. As the docs say: "A manager object controls a server process which manages shared objects. Other processes can access the shared objects by using proxies."
In your case, you could create a master process that controls your other processes; each of those processes would call the master to grab the data (see the sketch below).
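A minimal sketch of that manager approach, following the pattern from the multiprocessing docs; the port, authkey, and shared queue are placeholders:

```python
# --- master.py, on the first server ---
from multiprocessing.managers import BaseManager
import queue

shared = queue.Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register("get_queue", callable=lambda: shared)
mgr = QueueManager(address=("", 50000), authkey=b"secret")
mgr.get_server().serve_forever()  # serve the shared queue over the network

# --- client.py, on the second server ---
# from multiprocessing.managers import BaseManager
#
# class QueueManager(BaseManager):
#     pass
#
# QueueManager.register("get_queue")
# mgr = QueueManager(address=("master-host", 50000), authkey=b"secret")
# mgr.connect()
# mgr.get_queue().put("hello from the other server")
```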

Creating connection between two computers in python

The question: How do I create a python application that can connect and send packets over the internet to another computer running the same application? Is there any existing code/library I could use?
The background: I am pretty new to programming (HS senior). I've created a lot of simple things in python but I've recently decided to start on a bigger project. I'm considering creating a Magic: the Gathering booster draft simulator, but I'm not sure if it is feasible given my skill set so I'm asking around before I get started. The application would need to send data between computers about which cards are being picked/passed.
Thanks!
Twisted is a Python event-driven networking engine licensed under MIT. It means that a single machine can communicate with one or more other machines, while doing other things between data being received and sent, all asynchronously and running in a single thread/process.
It supports many protocols out of the box, so you can just as well use an existing one. That's better, because you get support for the protocol from third-party software (i.e. using HTTP for communication means middleware that uses HTTP, such as proxies, will be compatible).
It also makes it easy to create your own communication protocol, if that's what you want.
The documentation is filled with examples.
The standard library includes SocketServer (documented here), which might do what you want.
However, I wonder if a better solution might be to use a message queue. Lots of good implementations already exist, including Python interfaces. I've used RabbitMQ before. The idea is that both computers subscribe to the queue, and each can either post or listen for events (see the sketch below).
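As a hedged sketch of that message-queue idea using RabbitMQ via the pika library (1.x API, assumed installed; broker on localhost, and the queue name and payload are placeholders):

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="picks")

# One side publishes events:
channel.basic_publish(exchange="", routing_key="picks",
                      body=b"player1 picked card 42")

# The other side listens for them:
def on_message(ch, method, properties, body):
    print("received:", body)

channel.basic_consume(queue="picks", on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()  # blocks, dispatching messages to on_message
```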
A great place to start looking is the Python Standard Library. In particular there are a few sections that will be relevant:
18. Interprocess Communication and Networking
19. Internet Data Handling
21. Internet Protocols and Support
Since you mentioned that you have no experience with this, I'd suggest starting with an HTTP-based implementation. The HTTP protocol is fairly simple and easy to work with. In addition, there are nice frameworks to support this operation, such as webpy for the server and httplib from the standard library for the client.
If you really want to dive into networking, then a socket based implementation might be educational. This will teach you the underlying concepts that are used in lots of networking code while resulting in an interface that is similar to a file stream.
Also, check out Pyro (Python Remote Objects). See this answer for more details.
Pyro basically allows you to publish Python object instances as services that can be called remotely. Pyro is probably the easiest way to implement Python-to-Python process communication (see the sketch below).
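A minimal sketch using Pyro4 (assumed installed); the class and method are placeholders:

```python
import Pyro4

@Pyro4.expose
class Greeter(object):
    def hello(self, name):
        return "Hello, %s" % name

daemon = Pyro4.Daemon()           # local Pyro daemon
uri = daemon.register(Greeter)    # publish the class as a remote service
print("server URI:", uri)         # hand this URI to the client
daemon.requestLoop()

# --- on the client, using the printed URI ---
# import Pyro4
# proxy = Pyro4.Proxy("PYRO:obj_...@server-host:port")  # URI from above
# print(proxy.hello("world"))
```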
It's also worth looking at Kamaelia for this sort of thing -- its original use case was network systems, and it makes building such things relatively intuitive.
Some links: Overview of basic TCP system, Simple Chat server, Building a layered protocol, walk-through of how to evolve new components. Other extreme: P2P radio system: source, peer.
If it makes any difference, we've tested whether the system is accessible to novices through involvement in Google Summer of Code three years in a row, actively taking on both experienced and inexperienced developers. All of them managed to build useful systems.
Essentially, if you've ever played with unix pipelines the ideas should be familiar.
Caveat: I wrote major chunks of Kamaelia :-)
If you want to learn how to do these things though, playing with a few different approaches makes sense, and you should definitely check out Twisted (the standard answer to this question), Pyro & the standard library tools. Each has a different approach, and learning them will definitely benefit you!
However, like nosklo, I would recommend against using the socket library directly and suggest using a library instead -- simply because it is much, much harder to get socket programming correct than people tend to realise.
Communication will take place with sockets, one way or another. Just a question of whether you use existing higher-level libraries, or roll your own.
If you're doing this as a learning experience, you probably want to start as low-level as you can, to see the real nuts and bolts. That means you probably want to start with a SocketServer, using a TCP connection (TCP basically guarantees delivery of data; UDP does not).
Google for some simple example code. Setting one up is very easy. But you will have to define all the details of your communications protocol: which end sends what and when, which end listens and when, what exactly the listener will expect, whether it replies to confirm receipt, and so on. A sketch follows below.
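A minimal sketch of such a SocketServer-based starting point, using the Python 3 module name socketserver and a placeholder port; the "protocol" here just echoes lines back, which is where you would define your own rules:

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:    # read lines until the client closes
            self.wfile.write(line) # echo each one back

with socketserver.TCPServer(("0.0.0.0", 9000), EchoHandler) as server:
    server.serve_forever()
```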
