I need to write a script to stress test a UDP server. It needs to simulate about 5000 online users and about 400 concurrent users. I couldn't find an existing tool for this on Google, so I wrote a UDP client myself, but I ran into a problem simulating multiple clients. The solution I came up with:
One socket per client
How do I track online users versus concurrent users when using multithreading and multiple sockets to simulate clients?
I encapsulate the client in a class, and in the class's __init__ method I increment a counter variable to record the number of online users. Done this way, the counting breaks under concurrency.
Is it feasible to create 5000 sockets with threads? Is this a good practice, and will it perform well?
Other approaches?
Is there another approach I haven't thought of? Am I on the wrong track?
Is there a mature testing framework that can be used for reference?
Finally, English is not my mother tongue, so please forgive any typos or grammar mistakes. Thank you for reading, and I look forward to your reply.
There is the Apache JMeter tool, which is free, open source, and modular.
There is the UDP Request sampler plugin, which adds UDP protocol support to JMeter.
The "5000 online users and 400 concurrent users" requirement may be interpreted in the following manner: real users don't hammer the system under test non-stop, they need some time to "think" between operations, i.e. read text, type response, fill forms, take a phone call, etc. So you need to introduce realistic think times using JMeter Timers so you could come up with the configuration when:
5000 users are "online" (connected to the server)
4600 are not doing anything, just "sleeping"
400 are actively sending requests
As long as your machine can do this without running out of CPU, RAM, network bandwidth, etc., it should be fine. Personally, I would use something like greenlet; a rough sketch follows.
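To make the greenlet idea concrete, here is a minimal sketch (not taken from the question) using gevent, which is built on greenlet. The server address, payload, and think-time range are made-up placeholders; the point is that 5000 sockets stay "online" while a semaphore caps the number of active senders at 400.

    import random
    import gevent
    from gevent import socket
    from gevent.lock import BoundedSemaphore

    SERVER = ("127.0.0.1", 9999)   # placeholder target server
    ONLINE_USERS = 5000            # sockets kept open ("online")
    ACTIVE_USERS = 400             # greenlets allowed to send at once

    active_slots = BoundedSemaphore(ACTIVE_USERS)

    def user(user_id):
        # One UDP socket per simulated client marks it as "online".
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            # "Think time": most users just sleep between operations.
            gevent.sleep(random.uniform(5, 30))
            # At most ACTIVE_USERS greenlets hold a slot at any moment,
            # so at most 400 clients are sending concurrently.
            with active_slots:
                sock.sendto(b"ping from %d" % user_id, SERVER)

    greenlets = [gevent.spawn(user, i) for i in range(ONLINE_USERS)]
    gevent.joinall(greenlets)

Greenlets are cooperatively scheduled, so 5000 of them are far cheaper than 5000 OS threads.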
I'm struggling to design an efficient way to exchange information between processes in my LAN.
Until now, I've been working with a single RPi, with a bunch of Python scripts running as services. The services communicated over sockets (multiprocessing.connection Client and Listener), and it worked reasonably well.
I recently installed another RPi with some further services, and I realized that as the number of services grows, this approach scales pretty badly. In general, I don't need every service to talk to every other one, but I'm looking for an elegant solution that lets me scale quickly if I need to add more services.
So essentially I thought I first need a map of where each service lives, like
Service 1 -> RPi 1
Service 2 -> RPi 2
...
The first approach I came up with was the following:
I thought I could add an additional "gateway" service so that any application running on RPx would send its data/request to the gateway, and the gateway would then forward it to the proper service or to the gateway running on the other device.
Later I also realized that I could just give the map to each service and let every service manage its own connections (sketched below). This would mean opening many listeners on external addresses, though, and I'm not sure it's the best option.
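For illustration, a rough sketch of what the per-service map could look like, reusing the multiprocessing.connection primitives already mentioned; the service names, hosts, ports, and authkey below are invented placeholders, and each target service would run a matching Listener on its address.

    from multiprocessing.connection import Client

    # Shared map: every service gets a copy (e.g. loaded from a config file).
    SERVICE_MAP = {
        "sensor_reader": ("rpi1.local", 6001),
        "alert_mailer":  ("rpi2.local", 6002),
        "telegram_bot":  ("rpi2.local", 6003),
    }

    def send_to(service_name, payload, authkey=b"change-me"):
        # Look the service up in the map and push one message to it.
        address = SERVICE_MAP[service_name]
        with Client(address, authkey=authkey) as conn:
            conn.send(payload)

    # Any service on any RPi reaches the mailer the same way:
    # send_to("alert_mailer", {"subject": "temp high", "value": 42.1})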
Do you have any suggestions? I'm also interested in exploring different options for implementing the actual connection, in case the Client / Listener approach is not efficient.
Thank you for your help. I'm learning so much with this project!
Key points:
I need to send roughly 100 float values every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine is listening for them and passing them to an HTTP server (nginx), a Telegram bot, and another program that sends email alerts.
How would you do this and why?
Please be precise. It's the first time I'm working with sockets and with Python, but I'm confident I can do this. Just give me the crucial details and enlighten me!
A small portion (a few lines) of the code would be appreciated if you think a part is delicate, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and choose a protocol, i.e. whether you will keep a persistent connection to your server or connect each time new data is ready.
Then, will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate serving process?
This would be the most secure way if other people will also be connecting to nginx to view sites, etc.
Write or run another server on a separate port, for example another nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then, on the client side, bundle all the data into a packet every x seconds (pickle.dumps(), json, or similar), connect to your port with your credentials, and pass the packet along.
A Python script can wait for it there; a sketch of the client side is below.
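A minimal sketch of that client loop, assuming a hypothetical endpoint and credentials and using the third-party requests library for brevity; read_sensors() is a stand-in for whatever actually reads the hardware.

    import time
    import requests

    URL = "https://collector.example.com:8443/ingest"   # hypothetical endpoint
    AUTH = ("sensor-box", "s3cret")                      # basic-auth credentials

    def read_sensors():
        # Placeholder: return the ~100 floats read from the attached sensors.
        return [0.0] * 100

    while True:
        packet = {"ts": time.time(), "values": read_sensors()}
        # One short-lived HTTPS request per packet; nginx terminates the SSL.
        resp = requests.post(URL, json=packet, auth=AUTH, timeout=10)
        resp.raise_for_status()
        time.sleep(5)   # "every x seconds"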
Or you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it's much easier to maintain a persistent connection if you want or need to. I don't think that's necessary, though, and coding recovery from broken connections can become bulky.
Just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets for this so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security and then pass only the data (see the sketch below).
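A bare-bones sketch of that option, just to show its shape: a TCP listener wrapped in SSL, with a tiny "identify yourself" check. The certificate, key, port, and greeting string are all placeholders.

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")

    with socket.create_server(("0.0.0.0", 9009)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            while True:
                try:
                    conn, addr = tls_listener.accept()
                except (ssl.SSLError, OSError):
                    continue   # failed TLS handshake, ignore this client
                with conn:
                    # The client must identify itself first, or we drop it.
                    hello = conn.recv(64)
                    if hello != b"SENSOR-CLIENT v1\n":
                        continue
                    data = conn.recv(65536)   # the actual packet (json/pickle)
                    print("got %d bytes from %s" % (len(data), addr))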
Do not worry about speed. Sockets are handled by the OS, and on a Unix-like system you can connect as many times as you want, as frequently as you need. Nothing short of a DoS attack will impact it much.
If you're on Windows, it's better to use a ready-made server, because Windows sometimes doesn't release a socket in time, so you'll be forced to wait or do some hackery to work around this unfortunate behaviour (non-blocking sockets, address reuse, and some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's just me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
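As a sketch of that approach (the host, port, and sample values are placeholders):

    import socket
    import struct

    values = [1.5, 2.25, 3.75] * 33 + [42.0]                # ~100 floats to send
    payload = struct.pack("!%dd" % len(values), *values)    # network-order doubles

    # Sender
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, ("192.168.1.20", 5005))            # hypothetical receiver

    # Receiver (runs on the other machine)
    # recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # recv_sock.bind(("0.0.0.0", 5005))
    # data, addr = recv_sock.recvfrom(2048)
    # floats = struct.unpack("!%dd" % (len(data) // 8), data)

100 doubles is only 800 bytes of payload, so it fits comfortably in a single datagram on a typical LAN.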
I have an application built with Python / Twisted Matrix which uses methods from a SOAP client to send some messages. The problem is that sometimes I want to send a lot of messages, and when that happens I would like to do it in multiple threads. For example, if I have to send 100 messages, I would like them broken into groups of 20 and to create 5 threads that send the messages in parallel.
What should I look for? Any ideas? I would also like the threads to be able to report back with the data they gathered.
P.S. Given that working with SOAP clients is probably mostly a matter of waiting around ... do you think threading is not the best approach here? Can the callbacks of the SOAP client be used to create some sort of "pool" of senders, and have the senders somehow ask for new stuff to send as soon as they are free? Ideas?
The best approach is probably determined by the distribution of the SOAP services and methods you're accessing.
My first suggestion would be not to use OS threading, but micro-threaded generator coroutines with inlineCallbacks and DeferredSemaphore.
But you might want to tune things to reuse connections for the same server and/or retain server cookies.
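A sketch of what that can look like, with at most five SOAP calls in flight at once; soap_client.sendMessage is a stand-in for whatever asynchronous call the application actually exposes.

    from twisted.internet import defer

    MAX_PARALLEL = 5
    semaphore = defer.DeferredSemaphore(MAX_PARALLEL)

    @defer.inlineCallbacks
    def send_one(soap_client, message):
        # run() acquires the semaphore, fires the call, and releases the
        # semaphore again when the returned Deferred fires.
        result = yield semaphore.run(soap_client.sendMessage, message)
        defer.returnValue(result)

    def send_all(soap_client, messages):
        # Gather the per-message results so the caller gets everything back,
        # which covers the "report back with the gathered data" requirement.
        return defer.gatherResults([send_one(soap_client, m) for m in messages])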
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is that the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). Please help me out: how can I do this with a Python socket program? (I.e. I want multiple-client to single-server connectivity, with each client sending data at the interval specified by the server.) I would be grateful if anyone can help. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher: if there's nothing to listen to, it just loops around and waits until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python; it does require a C compiler, but it is frighteningly fast, and the pub/sub example is a cut-and-paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
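For a feel of how little code PUB-SUB takes, here is a minimal pyzmq sketch of the layout described above; the port and topic prefix are placeholders.

    import zmq

    # Publisher side (each client):
    # ctx = zmq.Context()
    # pub = ctx.socket(zmq.PUB)
    # pub.connect("tcp://collector-host:5556")
    # pub.send_string("sensors 21.5 48.2 0.97")

    # Subscriber side (the listening process):
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.bind("tcp://*:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "sensors")  # filter on a topic prefix
    while True:
        print(sub.recv_string())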
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both of these setups:
Single Client and Single Server
Multiclient and Single Server
Both are in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading means executing multiple threads concurrently within a single process.
To understand it better, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
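Not taken from that repo, but here is a rough sketch of the one-thread-per-client idea applied to the question above: the server pushes the interval first, then each handler thread reads whatever the client sends back. The port and interval are placeholders.

    import socket
    import threading

    SEND_INTERVAL = b"10\n"   # seconds, sent to every client on connect

    def handle_client(conn, addr):
        with conn:
            conn.sendall(SEND_INTERVAL)        # tell the client how often to send
            while True:
                data = conn.recv(4096)
                if not data:
                    break                      # client disconnected
                print("from %s: %r" % (addr, data))

    server = socket.create_server(("0.0.0.0", 7000))
    while True:
        conn, addr = server.accept()
        # One thread per client keeps the accept loop free for new connections.
        threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()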
I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.
Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.
What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better to have multiple reactors, one per client, each running in a different thread? Or would it be better to have a completely separate Python script that handles the connection to the client?
If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.
Thanks!
Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.
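For illustration, a skeleton of that single-reactor layout: the same reactor drives the outgoing hub connection and a listener for peers that connect back. The protocol classes, host, and ports are placeholders, not the real Direct Connect protocol.

    from twisted.internet import reactor, protocol

    class HubProtocol(protocol.Protocol):
        def dataReceived(self, data):
            # Handle search requests / "connect to me" messages from the hub here.
            print("hub says: %r" % data)

    class PeerProtocol(protocol.Protocol):
        def dataReceived(self, data):
            # Handle direct client-to-client transfer traffic here.
            print("peer says: %r" % data)

    # Outgoing connection to the central hub...
    hub_factory = protocol.ClientFactory()
    hub_factory.protocol = HubProtocol
    reactor.connectTCP("hub.example.net", 411, hub_factory)

    # ...and a listener for peers that connect back to us, on the same reactor.
    peer_factory = protocol.ServerFactory()
    peer_factory.protocol = PeerProtocol
    reactor.listenTCP(41100, peer_factory)

    reactor.run()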